The Super Forecasters on the Good Judgment Project

May 10th, 2012

A note from Philip Tetlock about his cool Good Judgment Project:

The Year 1 results are in, and they contained more than a few surprises. Most surprising was how well our forecasters performed. They collectively blew the lid off the performance expectations that IARPA had for the first year. IARPA's original hope was that in Year 1 the best forecasting submissions might outperform the unweighted average forecasts of the control group by 20%. When we created weighted-averaging algorithms that gave more weight to our most insightful and engaged forecasters, these algorithms beat that baseline by roughly 60% (exceeding IARPA's expectations for Year 4).

Our forecasters did so well that some thoughtful observers now doubt it is possible to do much better, which is why we have taken the unusual step of skimming the best forecasters from our Year 1 experimental conditions to create teams of “super forecasters.” These teams will be functioning more as research collaborators than as research participants (they will have access to our algorithms but the discretion to override the algorithms with their own judgment). In my view, these “super forecasters” are distinguished by three characteristics: (1) an intense curiosity about the workings of the political-economic world; (2) an intense curiosity about the workings of the human mind; (3) cognitive crunching power (“fluid intelligence” and a capacity for “timely self-correction”).

Of course, the decision to skim off our best forecasters into elite teams, coupled with the inevitable attrition rate in a time-consuming exercise of this sort, means that we are in the market for new forecasters, ideally potential future “super forecasters.” So we are launching a new recruiting drive.
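
For readers curious what a “weighted-averaging algorithm” of this sort might look like, here is a minimal sketch. The note does not say how the Good Judgment Project actually scored “insightful and engaged” forecasters, so the weighting rule below (inverse of each forecaster's past Brier score) and all names and numbers are illustrative assumptions, not the project's method.

```python
# Minimal sketch of skill-weighted forecast aggregation.
# ASSUMPTION: weights come from past Brier scores; the GJP's actual
# weighting of "insightful and engaged" forecasters is not described here.

def brier_score(probs, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def skill_weights(history):
    """Give each forecaster a weight that shrinks as past Brier score grows."""
    raw = {name: 1.0 / (1e-6 + brier_score(ps, os))
           for name, (ps, os) in history.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

def weighted_forecast(current_probs, weights):
    """Skill-weighted average of forecasters' probabilities on a new question."""
    return sum(weights[name] * p for name, p in current_probs.items())

# Hypothetical track records: (past probability forecasts, resolved outcomes).
history = {
    "alice": ([0.9, 0.2], [1, 0]),  # well calibrated on past questions
    "bob":   ([0.5, 0.5], [1, 0]),  # hedges everything at 50%
}
weights = skill_weights(history)
print(weighted_forecast({"alice": 0.8, "bob": 0.5}, weights))  # ~0.77
```

Even in this toy example the point is visible: the aggregate tilts toward the forecaster with the better track record, which is exactly the behavior that let the weighted average beat the unweighted control-group average.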

Although not nearly as prodigious an effort, FantasySCOTUS has likewise identified some of its best players; we call them “Power Predictors.”