Simultaneous control of error rates in fMRI data analysis.

Kang H, Blume J, Ombao H, Badre D
Neuroimage. 2015;123:102-113

PMID: 26272730 · PMCID: PMC4626324 · DOI:10.1016/j.neuroimage.2015.08.009

The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
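The abstract's central claim — that with likelihood ratios as the evidential measure, both the per-voxel false-positive and false-negative rates shrink toward zero as the sample size grows — can be illustrated with a small toy simulation. This is only a sketch of the general likelihood-paradigm idea under an assumed Gaussian model, not the authors' implementation: the effect size `delta`, noise level `sigma`, and evidence benchmark `k` are all illustrative assumptions.

```python
import numpy as np

def voxel_lr(samples, delta=1.0, sigma=1.0):
    """Likelihood ratio L(mu=delta) / L(mu=0) for i.i.d. Gaussian
    data with known sigma; values >= k count as strong evidence
    for activation, values <= 1/k as strong evidence against."""
    n = len(samples)
    xbar = samples.mean()
    # log LR for N(delta, sigma^2) vs N(0, sigma^2)
    log_lr = (n / sigma**2) * (xbar * delta - delta**2 / 2.0)
    return np.exp(log_lr)

rng = np.random.default_rng(0)
k = 8.0  # assumed evidence benchmark
for n in (10, 50, 200):
    # 1000 simulated "inactive" voxels (mu = 0) and "active" voxels (mu = delta)
    null_vox = rng.normal(0.0, 1.0, size=(1000, n))
    act_vox = rng.normal(1.0, 1.0, size=(1000, n))
    lr_null = np.array([voxel_lr(v) for v in null_vox])
    lr_act = np.array([voxel_lr(v) for v in act_vox])
    # rates of misleading evidence: the likelihood analogues of
    # Type I and Type II error rates; both fall as n grows
    m1 = (lr_null >= k).mean()
    m2 = (lr_act <= 1.0 / k).mean()
    print(f"n={n:4d}  misleading-for-activation={m1:.3f}  "
          f"misleading-for-null={m2:.3f}")
```

Note that, unlike a fixed-alpha test, the probability of misleading evidence here is bounded above by 1/k regardless of n and vanishes as n increases, which is the mechanism the abstract describes for keeping the accumulated (family-wise) error small without inflating per-voxel Type II errors.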

Copyright © 2015 Elsevier Inc. All rights reserved.

MeSH Terms (8)

Brain Mapping · Computer Simulation · Data Interpretation, Statistical · Frontal Lobe · Humans · Likelihood Functions · Magnetic Resonance Imaging · Research Design
