Prospect Magazine has published a fantastic article by Gregory F. Treverton (director of the RAND Corporation’s Center for Global Risk and Security) on the role of the intelligence services that is extremely relevant for think tanks and some of the challenges they face. (Just swap the word ‘intelligence’ for ‘policy’.)
I won’t go over the whole article but will instead list and comment on a few quotes from it that I think have a lot to say about the roles of think tanks and policy advisors in general. For think tanks in developing countries –and development policy think tanks in particular– which increasingly work under project contracts, this will be especially relevant:
On evidence versus argument
Intelligence is about creating and adjusting stories –this view has crystallised during my career as a producer and consumer of intelligence.
A senior official at the Bank of Zambia recently told me that their job is to develop a story for the data. This is something that I come across a lot. Researchers and donors often want to communicate their research findings –and nothing more. But facts (or findings) need to be put into a story –and this sometimes means joining someone else’s tale (or theory). Facts alone will not move the world.
Interestingly, those who could not imagine the story didn’t believe it could be true.
The power of stories should not be underestimated. We do not think in facts; we think in stories –metaphors or narratives that help us make sense of our complex environment. Our stories give us comfort and certainty, and so when new stories come along they can be difficult to accept.
On jigsaws, puzzles and mysteries
Often we talk about simple, complicated and complex to describe the challenges faced by think tanks and policymakers ‘fighting poverty’. But maybe an even better way of looking at this is to think of jigsaws (there is a known answer), puzzles (there is an uncertain answer –we may not know, right away, that we got it right), and mysteries (there is no answer –it always depends).
When the Soviet Union would collapse was a mystery, not a puzzle. No one could know the answer: it depended.
Puzzles are a very different kind of intelligence problem. They have an answer, but we do not know it. Were there Soviet missiles in Cuba? How many warheads did the Soviet SS-18 missile carry?
Puzzles are not necessarily easier than mysteries –consider the decade it took to solve the puzzle of Osama bin Laden’s whereabouts.
Intelligence puzzles are not like jigsaws, in that we may not be sure we have the right answer.
On how policy affects evidence (the way it is produced and what it can accomplish)
[The puzzle of] whether Saddam Hussein’s Iraq had weapons of mass destruction in 2002, drives home the point that because the intelligence industry is a service industry, what policy officials expect from it shapes its work.
The interaction of intelligence and policy shaped the results [of the assessments of the existence of WMDs] in several other ways. Policy officials, when presented with a range of assessments by different agencies, cherry-picked their favourites (and sometimes grew their own cherries by giving credibility to information sources the intelligence services discredited).
As elsewhere in life, how the question was asked went a long way towards determining the answer.
[American intelligence] stuck to its analytical guns –the link was tenuous at best– but the repeated questions served both to elevate the debate over the issue and to contribute to intelligence’s relative lack of attention to other questions.
Policy affects evidence in more than one way, and many think tanks are aware of this. A year ago, I wrote that ODI and other development think tanks were dumbing down their audiences by constantly offering increasingly digested and sanitised advice. I mentioned, briefly, that in the process we were losing skills and dumbing down ourselves:
Questions not asked or stories not imagined by policy are not likely to be answered or developed by intelligence.
And so, if there is no interest in quantitative (or qualitative) analysis, then what is the point of learning about it? Or if there is no interest in the negative effects of microfinance, why bother studying it? For contract think tanks this is particularly difficult: they are, after all, only paid to research what the client (let’s not keep fooling ourselves by calling them donors) wants.
What policy officials expect from intelligence also shapes how intelligence is organised and what kind of people it hires… What is expected of intelligence shapes what capabilities it builds –and hires.
On the American side, the crown jewel of intelligence products is the President’s Daily Brief, often caricatured as “CNN plus secrets”. On the British side, there is less of a flood of current intelligence, and the assessments of the Joint Intelligence Committee are, in my experience, often thoughtful. But on both sides, the tyranny of the immediate is apparent.
The focus on the immediate, combined with the way intelligence agencies are organised, may have played some role in the failure to understand the contagion effects in the recent Arab Spring.
This immediacy (always working on the latest campaign –often led by new political masters or the latest fad) and focus (experts working only in very specific fields –large enough to have their own jargon, communities and global conference seasons) mean that development ‘experts’ know nothing of the whole and cannot offer measured (not measurable) advice that is of any use to policymakers who must weigh up choices and interests.
When asked, officials say they would like [longer-term assessments]: how could they answer otherwise? But in practice too often the response when presented with a longer-term assessment is: “That looks interesting. I’ll read it when there is time.” And there is never time.
We’ve all been there. I cannot remember how many times I have been told by ‘Advisors’ in DFID and iNGOs that a 15-page document was too long, or to cut sections out because they were too complex. This paper by Frances Cleaver goes right to the point.
Lacking demand, it is not clear that intelligence agencies hire or train people who could do good strategic analysis –that is, analysis that locates choices in a wider context and perhaps in a longer timeframe. Most analysts are trained to look for measurable evidence and struggle with alternative possibilities, and are not always willing to venture beyond the facts and the level of policy description.
As one American analyst put it to me: “We used to do analysis; now we do reporting.”
On what is the real product of intelligence
When we asked DFID staff how they accessed information, they told us that they called “someone who knew about it.” The fancy intranet and new research portal are certainly not the first port of call. Google is probably second on the list of sources. People matter a great deal more than they are given credit for.
But the problem of course is that with incentives for reporting rather than analysis, immediacy rather than long-term visioning, and simple rather than complex, policy bodies (and their contracted think tanks) have eroded the capacity of their human resources.
At the National Intelligence Council, I came to think that, for all the technology, strategic analysis was best done in person. I came to think that our real products weren’t those papers, the NIEs (National Intelligence Estimates). Rather, they were the NIOs, the National Intelligence Officers –the experts, not the papers.
And this is a long quote because it serves to make an excellent point regarding research communication:
We all think we can absorb information more efficiently by reading, but my advice to my policy colleagues was to give intelligence officers some face time. If policymakers ask for a paper, what they get will inevitably be 60 degrees off the target. In 20 minutes, though, the intelligence officers can sharpen the question, and the policy official can calibrate the expertise of the analyst. In that conversation, the intelligence analysts can offer advice; they don’t need to be as tightly restricted as they are on papers by the “thou shalt not traffic in policy” edict. Expectations can be calibrated on both sides of the conversation. And the result might even be better policy.
Who cares about the Briefing Paper? –I want to talk to whoever wrote it.