The use of randomised controlled trials (RCTs) in policy is a key aspect of the emerging literature in behavioural science and policy. Key researchers in development economics such as Esther Duflo (papers here) successfully advocated their use in evaluating development projects. Economics has a long tradition of using RCTs, including the famous negative income tax experiments and the legendary RAND Health Insurance Experiment. What is new is that, for the first time, they are becoming widespread across many fields. What follows will not be new to anyone working in this area; it is intended to get students thinking about RCTs more broadly.
It is often argued that RCTs avoid the problems of methods such as linear econometric modelling and instrumental variable analysis: because the mechanism assigning participants to treatment is completely observed by the researchers, causal inference is possible. Students should nonetheless be careful in evaluating claims about RCTs. It is simply not true that, as a rule, they involve fewer assumptions than techniques such as fixed effects panel models or instrumental variable modelling. The assumptions required depend upon the type of problem being addressed and the feasibility of different designs. Many RCTs involve experimenting with a small geographical subset of the population, and there are various stages leading from targeting that subset to them actually participating, along with other issues such as differential attrition. Going from the results of such experiments back up to the population parameter of interest requires many assumptions, just as interpreting a well-constructed IV parameter does. (See Angus Deaton for an infinitely more eloquent elucidation of these issues eg here).
The phrase local average treatment effect (LATE) is now commonly used in economics to get across the idea that most experiments, whether randomised or natural, give you an estimate of the effect of the variable of interest that is, to a degree, specific to the type of treatment and to the group that receives it (see the very good Mostly Harmless Econometrics for a gentle, albeit deceptively so, introduction to this). Estimating local treatment effects is a more modest and achievable goal than estimating fixed average population effects. Many RCT designs estimate local treatment effects well but are often presented as giving a causal effect that will apply to different groups in different situations. The usual debates about ecological validity, replicability, representativeness and so on all apply to RCTs in policy and are interesting to think about in the local treatment effect framework.
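The gap between a local treatment effect and the population average effect can be made concrete with a small simulation. Everything in the sketch below is hypothetical (the group sizes, the effect sizes, and the job-centre framing are invented for illustration): the true treatment effect differs between two groups, and an RCT run on only one of them recovers that group's effect, not the population average.

```python
import random

random.seed(42)

# Hypothetical setup: the treatment works much better for people
# who live near a job centre (effect 2.0) than for everyone else
# (effect 0.5). All numbers are invented for illustration.
def true_effect(near_centre: bool) -> float:
    return 2.0 if near_centre else 0.5

N = 50_000
population = [random.random() < 0.2 for _ in range(N)]  # 20% near the centre

# An RCT restricted to people near the job centre: assignment to
# treatment is randomised, so within this group the comparison of
# means is unbiased -- but only for this group.
trial = [p for p in population if p]
treated, control = [], []
for person in trial:
    baseline = random.gauss(0, 1)      # idiosyncratic noise
    if random.random() < 0.5:          # randomised assignment
        treated.append(baseline + true_effect(person))
    else:
        control.append(baseline)

late_estimate = sum(treated) / len(treated) - sum(control) / len(control)

# The quantity a policy-maker scaling up nationally may care about:
population_average = sum(true_effect(p) for p in population) / N

print(f"trial estimate:     {late_estimate:.2f}")      # close to 2.0
print(f"population average: {population_average:.2f}")  # close to 0.8
```

The trial's internal validity is impeccable, yet its estimate (about 2.0) is roughly two and a half times the population average effect (about 0.8). Whether that matters depends entirely on whether the trial population is the population the policy will target.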
Understanding the conditions under which a randomised controlled trial reveals something useful about policy is an important goal of behavioural science. For example, many RCTs carried out on large online groups may meet exactly the conditions to be informative, as they are conducted on very large samples of people who will themselves be the target for future changes. Similarly, policy experiments in small regions on self-selected individuals may be informative if those regions and individuals are precisely the group being targeted by the policies: a job-centre RCT may tell us nothing about the effect on people who don't go to the job centre, but this is not such a problem if they are not of interest to the policy question. Arguably, many of the trials being conducted by the Behavioural Insights Team would score highly on these criteria, as they often trial directly with programme participants with a view to changing those specific programmes. But in general, there are a lot of questions to address before going from RCT results back up to the population of interest.
This debate between Deaton and Banerjee is very informative regarding the issues at hand. It is obviously still ongoing and at times vociferous. What is emerging is a much more nuanced language for describing the results of experiments and their relation to policy.