Tuesday, August 18, 2009

Evidence Based Policy: Andrew Leigh

Andrew Leigh examines the extent to which randomised controlled trials are the way forward for policy in Australia

"In Australian policy debates, the term „evidence-based policymaking‟ has now become so meaningless that it should probably be jettisoned altogether. The problem in many domains is not that decision makers do not read the available literature, it is that they do not set up policies in such a way that we can learn clear lessons from them. In employment policy, the
early 1990s recession saw Australia spend vast amounts on active labour market programs, without producing a skerrick of gold-standard evidence on what works and what does not. In Indigenous policy, there are as many theories as advocates, but precious few randomised experiments that provide hard evidence about what really works."

We have already debated extensively the papers by Deaton, Heckman, Imbens and others outlining the pros and cons of RCTs as a policy tool. Across health, education, social welfare, crime, aid and other policy areas in Ireland, it is certainly long past time to ask exactly what we are trying to do with many of our policies and whether they could be improved with these approaches. Institutional innovation in an environment where budgets are tightening must partly take the form of using academic knowledge directly in policy.

3 comments:

Unknown said...

Read the Leigh paper with great interest. I know I'm coming late to this, but would love to get a link to the papers of Deaton and others you mention.

Within medicine there has been an evolution from the early days of evidence-based practice, which very much championed the kind of "pyramid" approach that Leigh describes, to a more nuanced one. It's recognised that medical RCTs, for instance, tend to be undertaken in populations so sharply defined (within certain age brackets, without comorbid illness, not using substances, often all male) that they are not generalisable. Effectiveness studies and pragmatic studies have been introduced.

Secondly, there was an initial tendency for the advocates of evidence-based medicine to dismiss utterly the lower-level evidence. This has changed, partly because doing experimental studies for rarer diseases and outcomes is difficult if not impossible, and case studies, case series et al. may be the best we can get.

Finally, I'm curious to know if there is much discussion of the problem of blinding in the social science literature on RCTs? Certainly it is known that if the patient, doctor, or independent assessor in an RCT is aware of the allocation, the positive effects of the intervention can be overestimated by as much as 40%. How can one get around this in the social sciences?

Kevin Denny said...

Blinding presents real problems for social experiments. You can't really give a placebo, can you? I don't know if there has been much discussion; there is a resident expert in Geary who might know, though.
As I understand it, in a lot of social experiments (including the one Geary is involved in) the control group gets X and the treatment group gets X+Y. But whether they know which group they are in I am not sure. Of course, in a community setting there's a problem of contamination: people talk, so they may figure out which group they are in.
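The mechanism being discussed here can be made concrete with a toy simulation: if an unblinded assessor rates treated participants a little more favourably, that rating drift passes straight through to the estimated treatment effect. This is only a minimal sketch with made-up numbers; the true effect size, the bias magnitude, and the sample size below are illustrative assumptions, not figures from any study mentioned in the post.

```python
import random

random.seed(42)

# Illustrative assumptions (not from the post or any cited paper):
TRUE_EFFECT = 1.0    # real benefit of the intervention
ASSESSOR_BIAS = 0.4  # hypothetical upward nudge when allocation is known
N = 10_000           # participants per arm

def trial(blinded: bool) -> float:
    """Simulate one trial and return the estimated treatment effect."""
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    if not blinded:
        # An unblinded assessor rates treated participants more favourably,
        # so the measured outcomes drift upward by the bias.
        treated = [y + ASSESSOR_BIAS for y in treated]
    return sum(treated) / N - sum(control) / N

blinded_est = trial(blinded=True)
open_est = trial(blinded=False)
print(f"blinded estimate:   {blinded_est:.2f}")
print(f"unblinded estimate: {open_est:.2f}")  # inflated by roughly ASSESSOR_BIAS
```

The point of the sketch is that the inflation is indistinguishable from a genuine effect within the trial itself, which is why blinding (or, where a placebo is impossible, blinded outcome assessment) matters even in an X versus X+Y design.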

Kevin Denny said...

For the Deaton paper see:
http://gearybehaviourcenter.blogspot.com/2009/04/deaton-limits-of-experiments.html

Imbens' defence of IV is at:
http://gearybehaviourcenter.blogspot.com/2009/04/imbens-on-iv.html