Monday, May 28, 2012

The trouble with brain scans

A breath of fresh air to see informed skepticism about neuroimaging in the press:

Vaughan Bell: the trouble with brain scans

The points in the article are elaborated and the original sources included here.

Readers of this blog recently reviewed articles in 'genoeconomics' (listed here, with key questions here) which clearly state the statistical power required to identify the genetic factors (mainly variation in single-nucleotide polymorphisms) that contribute to behaviour. A recurring mantra in these articles is that genetic variation which is common in the population has very (very) small effects on behavioural traits. There are likely to be a few exceptions, but by and large this appears to be the rule. The vast majority of studies reporting large effects of single-nucleotide polymorphisms on complex traits such as intelligence are likely to be false positives: findings that fail to replicate, as demonstrated here.
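To see why common variants with very small effects demand such large samples, here is a back-of-the-envelope sketch (my own illustration, not from the articles above): the approximate sample size needed to detect a variant explaining a given fraction of trait variance, using the standard normal approximation for a correlation test at a genome-wide significance threshold. The specific numbers (variance fractions, alpha = 5e-8, 80% power) are assumptions chosen for illustration.

```python
import math
from statistics import NormalDist

nd = NormalDist()

def required_n(r2, alpha=5e-8, power=0.8):
    """Approximate sample size to detect a variant explaining a
    fraction r2 of trait variance (two-sided test, normal
    approximation via Fisher's z transform of the correlation)."""
    r = math.sqrt(r2)
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = nd.inv_cdf(power)            # quantile for desired power
    # n ~ ((z_alpha + z_beta) / atanh(r))^2 + 3
    return int(((z_alpha + z_beta) / math.atanh(r)) ** 2) + 3

for r2 in (0.01, 0.001, 0.0001):
    print(f"variance explained {r2:>6}: n \u2248 {required_n(r2):,}")
```

A variant explaining 1% of variance needs a few thousand subjects; one explaining 0.01% needs hundreds of thousands, which is why candidate-gene studies with a few hundred subjects so often fail to replicate.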

A similar theme is emerging from the neuroimaging literature. One implication is that the statement "neuroscientists identify the brain area for X" (fill in your own favourite psychological characteristic: happiness, jealousy, helping, intelligence, the neural basis of social and physical pain (r = .88), etc.) is likely to be as false as the similarly tantalizing headline "neuroscientists find the gene responsible for X". Tal Yarkoni, who has written carefully about the shortcomings and power limitations of many neuroimaging studies (and the associated incentives for small samples and lack of statistical transparency), summarizes this well:

"...we expect complicated psychological states or processes–e.g., decoding speech, experiencing love, or maintaining multiple pieces of information in mind–to depend on neural circuitry widely distributed throughout the brain, most of which are probably going to play a relatively minor role. The problem is that when we conduct fMRI studies with small samples at very stringent statistical thresholds, we’re strongly biased to detect only a small fraction of the ‘true’ effects, and because of the bias, the effects we do detect will seem much stronger than they actually are in the real world. The result is that fMRI studies will paradoxically tend to produce *less* interesting results as the sample size gets bigger. Which means your odds of getting a paper into a journal like Science or Nature are, in many cases, much higher if you only collect data from 20 subjects than if you collect data from 200. The net result is that we have hundreds of very small studies in the literature that report very exciting results but are unlikely to ever be directly replicated, because researchers don’t have much of an incentive to collect the large samples needed to get a really good picture of what’s going on."
