In the first part of this long post I discussed the concepts of the ‘gap’ and ‘bridge’ between research and policy. I argued that the idea of a gap limits the way we deal with the research-policy interface. It leads to an often unintended oversimplification of the challenges faced and a reliance on impersonal approaches to communication.
Rather than a gap, we should think of a space populated by people and organisations with relations between them. The perception of a gap reflects our lack of knowledge or understanding of these relations, and sometimes, as I will discuss in greater detail later on, the fact that there are very few players. Therefore, rather than searching for bridges we should be looking for maps.
Focus on research not on researchers
The literature and debate on ‘bridging research and policy’ often confuse the researcher with research (and the policymaker with policy). Mainly because the sector is guided by a consultancy business model and a research communications narrative driven by giving the audience what it wants, we have become focused on (if not obsessed with) the influence of researchers or a particular piece of research on policymakers or on a particular policy (the audience –a term borrowed straight from marketing, by the way). As a consequence, we have not always tackled some basic questions that would have helped us to identify and map the more complex system suggested above. For instance, although our guiding objective is to make policy more informed by research, we have never been able to say what should count as ‘more’ or ‘enough’ –or at least what an appropriate contribution of research to policy, vis-à-vis other factors, would be in any given system. I accept that we do not ask this as often as we should, partly because we do not know how to calculate ‘enough’ and partly because, driven by a competitive research-policy market, funders do not want just any research to inform policy; they want it to be their research, and their researchers, that make the difference.
Some researchers and development practitioners I have worked with often complain that the policies in their sectors, regions or countries are not based on research. As a consequence of this view, and of donor demands for impact (and measures of impact), they are spending an increasing amount of resources and effort developing and implementing policy influencing and communication strategies to change this. However, when I ask who their main audiences are, it is never difficult to notice that the policymakers they are targeting already base their policy decisions on some research; just not their research. Or, if it is theirs, then the policymakers are probably interpreting the evidence in a different way, influenced by a different development narrative, analytical framework or set of values, and so either dismiss it or implement what the researchers consider to be ill-advised recommendations.
As I have argued above, policymakers use their own networks to access the inputs they need to make decisions. Because networks are largely based on trust, the relationships between policymakers and their advisors or sources are bound to reflect the complex historical relations that exist between the research and policy communities, whether formal or informal, personal or institutional. It seems sensible, then, to expect that only a small number of researchers will have access, through these networks, to decision makers; and that most researchers will have to be satisfied with contributing to general public narratives, to the literature, or, if they are lucky, with informing those advisors.
Therefore, I would suggest that when it comes to the roles of research in policy we should not worry too much about whose research it is; only that research plays a value-adding role. Our attention should perhaps move beyond the skills and competencies of individual researchers and centres towards a better understanding of how and why research, in relation to other factors, influences policy and policymakers. Assistance, then, should be directed at the knowledge sector as a whole.
In terms of the type of research we should be doing, this means that the unit of analysis may no longer be the researcher or the research centre but their political context and their audiences –the other actors in the system. With respect to advisory work, support for developing strategies will benefit from this better understanding of the context in which we are working.
Sometimes it is good to hold back
Again, partly because we are driven by demand (from donors mostly) and by the impression that we must communicate with communities far beyond our reach (on the other side of the imaginary gap), we often fail to consider that some researchers, some of whom may be perfectly well connected, choose not to engage with the policy process directly; and that, in fact, it may be counterproductive for them to do so. There seems to be an assumption that because, in a progressive society, ‘evidence must inform policy’, researchers themselves must become pro-active agents in this process; regardless of who these researchers are, the type of research they conduct, the policy process they are related to, and the political context in which they work. And little or no mention is ever made of their political or ideological affiliations.
This is in stark contrast with the now widely accepted lesson that the roles research-based knowledge plays in the policy process depend on the political context, on the sector or policy issue being addressed, and on the organisations where the research is being undertaken –even though it is impossible to offer a blanket statement about what these roles are for a particular sector, type of context or type of organisation.
The failure of the scientific community to deal with climate change sceptics, for instance, has been put down, by some practitioners in the research communications field, to its lack of engagement with the politics of climate change. Some even call for scientists to be more pro-active and politically savvy in their engagement. I disagree, and would argue that these scientists’ research, probably more so than any other, is already extremely relevant to policy, and that engaging with the politics of the debate would only make it harder for their research to influence it. In a politically charged issue such as this, the impression of independence is the one thing that allows evidence-based debate to take place and well-informed arguments to develop. If scientists were to (even unintentionally) pledge allegiance to any side of the discussion they would lose credibility (the only reason they maintain a legitimate seat at the table) and rule themselves out of any further discussion; and we would all be worse off for it.
Other types of organisations in the system may be better placed to fulfil this role. Remember that the space is not empty: there are scientific journals, popular science authors and magazines, government scientists, scientific NGOs, think tanks, universities, schools, etc.
This is also true of national-level politics in many developing countries. When the Nitlapán Institute of the Central American University in Nicaragua sought to inform the debate on trade and complementary policies, its most powerful asset was that all parties perceived the University as an independent actor. Even when dealing with an issue like trade, in which sides are clearly defined by their political and ideological allegiances, Nitlapán was able to convene all parties and facilitate a research-informed debate. This required a very careful negotiation of the fragile balance between objective academic research and political engagement with the various policy stakeholders. But there was no bridging: Nitlapán was a connector and offered a space where they all met. Nitlapán played the dinner party host.
Obviously, in less contentious situations, where there is consensus around the problem, actively engaging with the policy process and the political debate that surrounds it might be possible and desirable: the immediate responses to the effects of the financial crisis, for example, called for politically as well as technically competent experts. But this has more to do with the public’s perception of what needs technical expertise and what needs common sense than with anything intrinsic to the policy issue itself –and politicians would be fools (and they are) to go against this.
This assumption about the blanket need for more engagement appears to be in collusion with another equally questionable one: that an investment in the communication of research will lead to a proportional increase in influence based on its findings. Andrew Rich’s study of the role of experts in U.S. policies shows that think tanks’ communications capacities cannot explain their experts’ substantive influence. The most important factors determining the substantive influence of experts have to do with the policy context: the length of the policy process, who drives it, and the involvement of interest groups. Communication strategies may be able to influence the visibility of an expert and increase their chances of being called to, for example, give evidence to Congress or serve as a source for the media, but there is no empirical evidence that investing more in communications leads to more influence –certainly not to substantive influence.
James McGann’s go-to-think-tanks index is based on the idea that more visibility is synonymous with influence. As a result, his top think tanks are the most popular –the ones that more of the people who respond to his survey know about; but not necessarily the most influential. As we have seen above, in the real system of complex relations between research and policy, influence is an outcome of the co-evolution of personal, formal and informal relations. Because the survey is based on the partial views of a small group of respondents, the internal think tanks that hardly anyone knows about, the academic departments with poorly designed websites that house foremost experts, and the consultancies and privately funded interest groups that are probably the most connected players in the system do not feature in the index.
This drive to invest (or spend) in communications is partly explained by donors’ own pressures to show, to their taxpayers or supporters, the return on their money. [Although recently, this very same argument has led to a freeze of all communication spending by DFID.] Research bodies in Britain and the European Union, as well as other development donors, have indicated their interest in making research more relevant and promoting researchers’ active engagement with policy processes. DFID, for example, required all research programmes to spend around 30% of their budget on communications (AusAID expects something between 5-10%). Ironically, this policy is not itself based on evidence. There is, after all, no conclusive evidence that more investment in communications leads to more influence; it is just common sense that your chances may be higher.
This proportional approach is, in my view, at least partially questionable. Yes, it has promoted a more serious discussion about research communications; yes, it has encouraged research programmes to think more systematically about what to do with their research findings; and, yes, it has led to research-based knowledge being more easily accessible. But it does not reflect the very basic fact that the size of a research project has nothing to do with (or may even be inversely related to) the difficulty of influencing policy. This approach assumes that it will be 10 times easier to bring about policy change for a £100k research project than for a £1 million one, and that nothing else matters when dealing with research-based policy influence: political contexts, policy processes, sectoral dynamics, interest groups, etc. do not seem to make a difference.
We know, however, that they do.
I have argued above that current and long-term relations between the research and policy communities matter; that the relative positions and roles of individual actors in the system are critical to understanding their influence; and that the type of research and the sector or policy process itself greatly affect the need for, or appropriateness of, an investment in communications.
In sum, in the context of a policy research system defined by differences rather than empty spaces, I suggest that instead of encouraging all researchers to engage directly with policy processes we should encourage them to do whatever works best to make research-informed arguments as available and useful for policy as possible. This could very well be to keep quiet and run a few more tests, or to mix evidence with appeals to values, justice or arbitrary targets and goals. At the very least, policies to encourage the uptake of research should avoid blanket measures and demand bespoke strategies –with budgets that appropriately reflect the challenge.
Next week … 3 of 3