Last Wednesday evening was the Oscars of the UK think tank world — the Prospect Think Tank of the Year Award.
It was a great evening full of surprises. I managed to livetweet from the event and to scrape together a Storify (see below) that gives an overview of the ceremony itself.
Prospect Magazine has published this year’s list of winners, the reasons for their selection, and the shortlist for each of the categories.
But that hasn’t stopped criticism. The DC-based Think Tank Watch wasn’t convinced, going so far as to suggest that the magazine’s awards might ‘be rigged’. To grossly oversimplify the TTW argument: Brookings didn’t win, so it must have been rigged — or, more likely, the judges don’t know enough about the US think tank world to choose the most appropriate winner.
To this I would make two points.
First of all, it is worth noting that the Prospect Magazine Think Tank Awards are judged on individual submissions from the think tanks themselves. Given that Prospect is mainly a UK-based magazine, it is possible that a wide array of very credible competitors from the US never made a submission and therefore simply did not qualify.
But let’s assume for a moment that wasn’t the problem — it’s incredibly difficult to judge the success of think tanks, wherever they may be. James McGann’s Global Go To Think Tank Index has been widely criticised, including in these pages, as nothing more than a popularity contest based on a biased sample.
There have been other noble attempts to try a different methodology (see for example CGD’s work on measuring the public profile of think tanks). Indeed, one of the fun surprises of the night came from Wonkcomms by way of a ‘top trumps’ card deck of various local think tanks.
The shortlist was published so shortly before the ceremony that the deck couldn’t include all of the evening’s winners. I’ve therefore gone through and added the winners to the list (and I’ve also taken the liberty of adding my own workplace, the Institute of Development Studies, as another point of reference):
There is an obvious public profile and communications slant to these stats, but I enjoyed having them on the night. My main complaint was that they were a little difficult to read across and compare. I therefore took it upon myself to put together a few visualisations, starting with the Wonkcomms data and adding on from there.
Based on this, do I think the Institute of Development Studies should have won the award because it has the most pages indexed on Google? Not really. And that’s the point — there’s never going to be a perfect quantitative way to rank think tanks (especially since a think tank’s effectiveness is not causally linked to its public profile). So I’d rather have a set of expert judges discussing and deliberating who has had influence this year, in the hope that it gives insight into the zeitgeist. Of course it’s biased, but at least it’s vaguely more interesting than discovering that Brookings is still the biggest think tank of them all (though I should note that Brookings only has 225,000 pages indexed on Google — to IDS’s 1.2 million and ODI’s 750,000).
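For the curious, the kind of quick comparison I mean is trivial to knock together. The sketch below (not my actual visualisation code — just an illustration) plots the three Google-indexed page counts quoted above as a simple text bar chart, scaled to the largest value:

```python
# Google-indexed page counts quoted in this post (figures as of the awards).
indexed_pages = {
    "IDS": 1_200_000,
    "ODI": 750_000,
    "Brookings": 225_000,
}

def ascii_bar_chart(data, width=40):
    """Render a text bar chart, with bars scaled to the largest value."""
    top = max(data.values())
    lines = []
    for name, value in sorted(data.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(width * value / top)
        lines.append(f"{name:<10} {bar} {value:,}")
    return "\n".join(lines)

print(ascii_bar_chart(indexed_pages))
```

Swap the dict for any of the Wonkcomms ‘top trumps’ stats (Twitter followers, staff numbers, and so on) and the same three lines of comparison fall out — which is exactly why these raw counts are so seductive, and so misleading, as a ranking.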
Personally, I send my congratulations to Prospect and to the winners of the evening. It was a strong field and a tough decision, but one that I think is appropriate. But what do you think? Are awards the way forward for think tanks? What goes into a good ranking? Or is it all futile and we should just get on with it? Share your comments below…