Social scientists, and certainly economists, tend to look up to medical research, partly because it's where the money is, and also because one of its key methods, the randomized controlled trial, is seen by many as providing a "gold standard" for measuring treatment effects, though Deaton, Heckman and others have questioned whether RCTs in economics should enjoy this privileged status.
The public generally hold medical research in even higher respect. Medical researchers are good people, passionately yet objectively pushing back the frontiers of knowledge to help make us better.
So how worrying would it be if much medical research were actually wrong? This conclusion has been emerging from the work of the Greek medical researcher John Ioannidis and his team. The causes of the problem, including publication bias, are various and well known, but its scale probably is not. This article may make you distinctly uneasy.
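To see the logic, here is a back-of-envelope version of Ioannidis's central calculation, sketched in Python. The power, significance level and prior odds below are illustrative values I have chosen, not figures from his paper:

```python
# Simplified version of the argument in "Why Most Published Research
# Findings Are False" (Ioannidis, 2005). R is the prior odds that a
# tested relationship is true; power and alpha are conventional values.

def ppv(R, power=0.8, alpha=0.05):
    """Post-study probability that a 'significant' finding is true."""
    return (power * R) / (power * R + alpha)

for R in (1.0, 0.25, 0.05):
    print(f"prior odds R = {R:4.2f} -> PPV = {ppv(R):.2f}")

# With R = 0.05 (only a few of the tested hypotheses are true), PPV
# falls below 0.5: most published positive findings would be false,
# even before allowing for bias and multiple testing, which push the
# figure lower still.
```

The point is that in fields where many speculative hypotheses are tested, a statistically significant result is weak evidence on its own.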
Part of the problem, that key studies have not been replicated and may be wrong, is not peculiar to medicine. It may well plague the social and behavioural sciences too. This article on the subject won't make you feel any better.
Ioannidis was one of the keynote speakers at a conference I was at last week, and he discussed a whole series of papers he has written on this issue, for example Why Most Published Research Findings Are False. See here for an interesting discussion. Most of the issues he raises clearly apply to economists, e.g. multiple comparisons, effect size as opposed to statistical significance (see Deirdre McCloskey's work on this), the importance of replication, etc. However, from my experience at the rest of the conference, the source of bias I would be most worried about in epidemiology is omitted variable bias. I take some comfort from the emphasis placed on this problem in economics (at least compared to certain other disciplines). There was also a preponderance of odds ratios and not a marginal effect in sight.
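To make the omitted variable bias point concrete, here is a minimal simulation sketch; the coefficients and the degree of confounding are made up purely for illustration:

```python
# Minimal illustration of omitted variable bias: x and the omitted
# confounder z are correlated, and z also affects y, so dropping z
# from the regression biases the estimated coefficient on x upwards.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000
z = rng.normal(size=n)                        # confounder
x = 0.7 * z + rng.normal(size=n)              # regressor correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)    # true effect of x is 1.0

short = sm.OLS(y, sm.add_constant(x)).fit()   # omits z
long_ = sm.OLS(y, sm.add_constant(np.column_stack((x, z)))).fit()

print("omitting z :", short.params[1])   # roughly 1 + 2*cov(x,z)/var(x), near 1.9
print("including z:", long_.params[1])   # close to the true 1.0
```

No amount of extra data fixes this: the short regression converges to the wrong number, which is why economists fuss about it so much.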
On the dangers of bad research, you only have to think back to the damage that the "research" supposedly linking the MMR vaccine to autism did in terms of the fall-off in vaccination rates. The paper (published twelve years ago in one of the top medical journals, the Lancet) was formally retracted at the beginning of this year.
Thanks Kevin and Mark for pointing to much that should be scrutinised in the conduct of research, and indeed plenty that can steer young researchers towards developing good habits in how they go about their work. The importance of replication is something that I think should also be strongly emphasised. If there were more emphasis on replication, it is fair to expect that retractions would become less likely. (In fact, this might even be something to test!) As things stand, there is even some concern that sometimes retractions don't go far enough. Liu, in a paper in 'Logical Biology', discusses the continued "citation glories" of retracted papers.
While there is a difference between freak errors, deliberate errors and gross negligence, the best thing that a researcher can do is to ensure that their work can be replicated. Indeed, many journals now ask for data and code (e.g. do-files) to be submitted alongside a research paper.
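As a rough illustration of what a replication-friendly submission might look like, here is a hypothetical one-file script; the file names and the specification are invented placeholders, not any journal's actual requirement:

```python
# Sketch of a replication-friendly analysis: one script that goes from
# raw data to reported estimates, with the seed and library versions
# logged alongside the results. File names are hypothetical.
import sys
import numpy as np
import pandas as pd
import statsmodels.api as sm

SEED = 20100615
np.random.seed(SEED)

df = pd.read_csv("raw_data.csv")           # never edit the raw file by hand
df = df.dropna(subset=["y", "x"])          # every cleaning step lives in code

model = sm.OLS(df["y"], sm.add_constant(df["x"])).fit()

with open("results_log.txt", "w") as f:
    f.write(f"python {sys.version}\n")
    f.write(f"numpy {np.__version__}, pandas {pd.__version__}\n")
    f.write(f"seed {SEED}, n = {int(model.nobs)}\n")
    f.write(model.summary().as_text())
```

The design choice that matters is that nothing happens outside the script: anyone with the raw file can rerun it and get the same table.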
The EconJeff Blog provides a good discussion about the importance of replication:
"We all know that empirical papers in economics are complicated things, carried out over long periods of time, sometimes in temporally separated frantic bursts of activity to meet deadlines, and often with assistance from multiple graduate student research assistants who are, rather by definition, still learning how to do research. So occasional errors should not come as much of a surprise." Furthermore, a study in the Journal of Money, Credit and Banking from 1986 found that "inadvertent errors in published empirical articles are a commonplace, rather than a rare, occurence." In fact, it has even been documented, in The Scientist magazine , that a Nobel prize-winner has retracted publications from Science and PNAS, though not for work related to her prize.
The first thing that comes to mind in this regard is Scott Long's recent book on "The Workflow of Data Analysis". This is an excellent starting point for anyone looking to adopt best practice in their research; Liam blogged about it here: Workflow of Data Analysis. Also, Daniel Hamermesh has a paper on replication in economics that is arguably a must-read for graduate students beginning a programme in empirical economics. An IZA WP version of the Hamermesh article is available here: Replication in Economics
Hamermesh draws a useful distinction between pure replication and scientific replication: "This examination of the role and potential for replication in economics points out the paucity of both pure replication – checking on others' published papers using their data – and scientific replication – using data representing different populations in one's own work or in a Comment. Several controversies in empirical economics illustrate how and how not to behave when replicating others' work."
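Schematically, the distinction might be sketched like this; the dataset names and the wage specification are hypothetical, chosen only to make the contrast visible:

```python
# Pure vs. scientific replication, schematically. The estimation code
# is held fixed; only the data changes. Dataset names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def estimate(df):
    """The published specification, written once as a reusable function."""
    return smf.ols("wage ~ schooling + experience", data=df).fit()

# Pure replication: the authors' own data, their specification.
original = estimate(pd.read_csv("authors_data.csv"))

# Scientific replication: the same specification on a new population.
new_sample = estimate(pd.read_csv("different_survey.csv"))

print(original.params["schooling"], new_sample.params["schooling"])
```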
Hamermesh also discusses the incentives for replication facing editors, authors and potential replicators. In relation to Kevin's discussion of publication bias, it is also worth considering the incentives facing researchers to publish results that they may perceive to be "not all that interesting". A good motto here would be that every result is important and every result must be published. Of course, researchers may shun this motto due to their concerns about building up citations and downloads, or even due to a concern to publish in a certain tier of journals. Many suggestions could be made about how to improve this situation; without getting into too much detail, a couple are as follows:
(i) Award bonus points for replication studies that would give researchers an advantage in citation indices
(ii) Set aside quotas in all journals (including the top ones) for replication studies, and perhaps studies where the researcher "fails to reject" the null hypothesis
Mark, marginal effects are very much an economist's thing. I've never understood the obsession with odds ratios. I think economists use the probit and its generalizations much more, so the issue doesn't arise. I published a paper in a psych journal with marginal effects, but I put in an appendix to explain them.
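For anyone unfamiliar with the distinction, here is a small sketch of the same logit reported both ways, on simulated data (the coefficients are purely illustrative):

```python
# The same logit reported two ways: an odds ratio (the exponentiated
# coefficient, multiplicative on the odds) versus the average marginal
# effect (additive, on the probability that y = 1).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))   # true logit model
y = rng.binomial(1, p)

res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print("odds ratio     :", np.exp(res.params[1]))
print("marginal effect:", res.get_margeff().margeff[0])
```

The odds ratio is a fixed multiple of the odds, while the marginal effect answers the question most readers actually have: by how many percentage points does the probability change?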