Monday, November 25, 2013

The 'Many Labs' Replication Project

The need for more replication studies in psychology has been in the air for a few years now, particularly since Kahneman's priming letter. The publishing incentives in academia encourage novelty and finding new effects - driving forward at all times rather than squinting back at old papers and asking, say, whether doing a crossword can really make you walk more slowly down a hallway. As such, not many individual researchers are willing to sink weeks or months into replicating a major paper - after all, either they will replicate the effect, which won't particularly bolster their reputation, or they won't find the effect and will then receive whatever plaudits are going for a null result (spoiler: there aren't any). There's no individual incentive to run replications*, but it's tremendously important for science in general to kick the tires of notable research papers and establish how sturdy they really are. In that sense it's a classic tragedy of the commons, with the incentives of the individual and the group pulling in different directions.

So on that note I'm happy to link to this absolutely outstanding work by the Open Science Framework, project leads Rick Klein, Kate Ratliff and Michelangelo Vianello and all co-authors for their new paper The “Many Labs” Replication Project. Labs from the U.S., Brazil, the Czech Republic, Malaysia, Turkey, the U.K. and more cooperated to replicate 13 major effects from the psychology literature across 36 samples totalling 6,344 participants. Figure 1 below is a tremendous step towards putting some of the recent psychology literature on firmer footing. The x-axis shows the standardized mean difference between the treatment and control groups, so the further rightward a dot sits, the stronger the effect. The first two anchoring studies have proven extremely robust, with an average effect size above 2 - that is, the anchored groups' means sat more than two standard deviations from the controls' - quite a bit stronger than in the original studies, interestingly. The priming studies, by contrast, found no evidence of an effect.
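For anyone unsure what that x-axis metric means in practice, here is a minimal sketch, in Python, of how a standardized mean difference (Cohen's d) and a 99% confidence interval around it are typically computed. The function name and the simulated data are mine for illustration - this is the standard textbook formula, not the paper's exact aggregation procedure.

    import numpy as np
    from scipy import stats

    def standardized_mean_difference(treatment, control, ci_level=0.99):
        # Cohen's d: the difference in group means divided by the pooled SD.
        t = np.asarray(treatment, dtype=float)
        c = np.asarray(control, dtype=float)
        nt, nc = len(t), len(c)
        pooled_sd = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                            / (nt + nc - 2))
        d = (t.mean() - c.mean()) / pooled_sd
        # Approximate standard error of d (Hedges & Olkin).
        se = np.sqrt((nt + nc) / (nt * nc) + d ** 2 / (2 * (nt + nc)))
        z = stats.norm.ppf(0.5 + ci_level / 2)  # ~2.576 for a 99% interval
        return d, (d - z * se, d + z * se)

    # Simulated data with a ~2 SD shift, like the anchoring items above.
    rng = np.random.default_rng(0)
    control = rng.normal(0.0, 1.0, size=200)
    treatment = rng.normal(2.0, 1.0, size=200)
    d, (lower, upper) = standardized_mean_difference(treatment, control)
    print(f"d = {d:.2f}, 99% CI [{lower:.2f}, {upper:.2f}]")

A dot at 2 on the figure's x-axis corresponds to a d of about 2 from a calculation like this, pooled over a site's participants.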

Fig. 1 The blue X's represent the effect size in the original paper. The large circles represent the aggregate effect size obtained across all participants. The error bars represent 99% confidence intervals. Small circles represent the effect size obtained within each study site.

This paper should be mandatory reading for any social psychology course. You can download the pdf here. It has succinct summaries of each of the studies being replicated and details the procedures used to test them.

*By 'no individual incentive' I mean that if you're interested in getting published, replications are not the way to go about it. That is obviously a major issue: the point of science is not to confirm hypotheses, it's to investigate them, and disconfirming a well-reasoned hypothesis is just as valuable as confirming one.

1 comment:

  1. Anonymous, 11:25 pm

    Wow, it doesn't look good for social priming research. I wonder if they have anything where d > 0. If so, it's time to put up or shut up and close down.