An interesting event hosted by NESTA, the National Institute of Economic and Social Research, and the Institute for Government in the UK: Making Policy Better: The Randomisation Revolution – How far can experiments lead to better policy? It follows on from some of the findings of the IfG’s Making Policy Better report, which called for:
- A public statement by each department (secretary of state and permanent secretary) on how they will meet a set of new “policy fundamentals” – the building blocks of good policy. The minister and the civil service can then be held to account by, for instance, a departmental select committee, on how far they have met that commitment
- A new responsibility for the permanent secretary to ensure that ‘good policy process’ has been followed – along the lines of their existing responsibility for value for money; Policy Directors in departments would be personally accountable to departmental select committees for the quality of ‘policy assessments’ published alongside new policies
- A new Head of Policy Effectiveness in the Cabinet Office – a very senior official responsible for ensuring the quality of policy making in government, overseeing evaluations to make sure they are both independent and used and able to commission lessons learned exercises when things go wrong
- New emphasis on both ministers and civil servants recognising the value each brings to the policy making process.
This may of course not be particularly relevant to many developing countries, but the fact is that most countries, regardless of their level of development, have systems that are heavily influenced by these ideal ones. So it is relevant to know what is going on in the developed world.
This event’s core subject matter is well known to many onthinktanks readers, as it is all about randomised controlled trials. The main speaker is Dr Rachel Glennerster, Executive Director of the Abdul Latif Jameel Poverty Action Lab (J-PAL) at MIT:
With Esther Duflo and Michael Kremer, she is one of the prime movers behind the “randomisation revolution” that has transformed both the theory and practice of development economics over the last decade.
‘There has been an explosion in the use of randomised evaluations’, says Dr Glennerster. The main things they have learned are specific lessons about the evaluations themselves, but there may be some more general lessons. What are these?
- Rigour matters: it helps to test our gut feelings.
- We can look at a range of questions: this new wave of RCTs is looking at new kinds of questions, such as adolescent empowerment, corruption, etc. For these things we need to find very objective measures.
- It is a flexible tool: we can use elements of randomness that are consistent with ethical and logistical constraints.
- There are technical lessons related to doing randomised evaluations (on a budget).
Her presentation is very interesting, particularly in relation to how to incorporate randomisation into a programme or project. One of the main disadvantages, she accepts, is that while it is good at providing a very specific answer for a very specific group, it is hard to assess the impact of an intervention on society in general. But there are ways to attempt to address this, at least around the edges.
It comes as no surprise, I think, that I consider this evidence-based policy discourse to have become a bit of a hype. I appreciate the power of rigorous research (including RCTs). And I appreciate their value in policymaking. I do not doubt this. But I continue to think that there is a danger that we end up believing that the only appropriate policies are those based on RCTs.
Policies and politics are inseparable. The very policies being tested are driven by politics (personal, community, local, national, or international). The trick is to find ways to bring different policy drivers together (no checklist here) to promote the most appropriate policy outcomes.
This is what think tanks (independent, free-thinking, well staffed, networked, close to but not entirely controlled by politics or funders) can do better than any other type of organisation.
I worry that those proposing RCTs think (do they?) that all policy debates can be solved by introducing rigorous evidence. Dr Glennerster seems to suggest that policy debates in Africa and Latin America related to fertilisers can be solved by an RCT. RCTs can help solve a policy problem and can improve the quality of the policy debate, for sure, but we should be careful of claiming that they are the only input necessary.
Developing countries need systems to manage the RCTs, policymaking bodies to demand and use evidence, tertiary education systems to maintain the production of evidence, etc. Unless these system-wide issues are addressed, the J-PAL and 3ie databases of what works will have limited impact on policymaking, interesting as they may be.