FantasySCOTUS v. SCOTUSBlog

June 27th, 2011
In a previous column, we compared our predictions for the final 14 cases with predictions made by Tom Goldstein at SCOTUSBlog (Tom did not make predictions for two of the 14 cases). While the sample size is rather small (14 of the 81 cases decided this term, about 17%), this experiment allows for an informal comparison between the wisdom of the crowd and the accuracy of an expert. At the end of the term, the final score is FantasySCOTUS: 11, SCOTUSBlog: 9 (79% to 64%).

While we in no way doubt Tom’s knowledge and expertise about the Supreme Court’s docket, it is not too surprising that 10,000 members of FantasySCOTUS, in the aggregate, generated more accurate results than a single expert. What our members lack in credentials they make up for in a wide range of experience (many top-ranked players aren’t even attorneys) and knowledge across a breadth of topics (many players focus on statistics, political science, and even psychology). In the aggregate, this allows them to produce better, more informed predictions than an individual expert.
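To illustrate the intuition (this is not the FantasySCOTUS methodology, just a toy model), the Python sketch below aggregates many hypothetical members by majority vote and compares the crowd’s call to a single expert’s. The member and expert accuracy figures, the case outcome, and the vote counts are all assumptions chosen loosely to match the numbers discussed above.

```python
# Illustrative sketch (not the FantasySCOTUS methodology): aggregating many
# noisy member predictions by majority vote versus relying on a single expert.
import random

random.seed(0)

TRUE_OUTCOME = "affirm"     # hypothetical outcome of a single case
MEMBER_ACCURACY = 0.60      # assumed accuracy of an individual member
EXPERT_ACCURACY = 0.64      # roughly the expert's rate over the 14-case sample
N_MEMBERS = 10_000

def predict(accuracy: float) -> str:
    """Return the correct outcome with probability `accuracy`."""
    return TRUE_OUTCOME if random.random() < accuracy else "reverse"

# Crowd prediction: whichever outcome the majority of members picked.
votes = [predict(MEMBER_ACCURACY) for _ in range(N_MEMBERS)]
crowd_call = max(set(votes), key=votes.count)

expert_call = predict(EXPERT_ACCURACY)

print("crowd:", crowd_call, "| expert:", expert_call)
# With 10,000 independent 60%-accurate voters, the majority call is right far
# more often than any single 60-65% forecaster on the same case.
```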

These results echo the 2002 Supreme Court Forecasting Project, in which a cadre of Supreme Court “experts” (SCOTUS litigators, former clerks, and professors) accurately predicted about 60% of the cases (Tom got about 64% correct). In contrast, this Term the members of FantasySCOTUS predicted nearly 70% of all cases correctly (the top players approached 80%), and for the 14 cases in this comparison, FantasySCOTUS predicted about 79% correctly.

FantasySCOTUS has an additional benefit over the expert-prediction approach: timing. FantasySCOTUS yielded ex ante predictions for these 14 cases months ago. Experts like Tom only attempt (publicly, at least) to make these predictions at the end of the term, with few cases remaining. Because the Justices usually write an equal number of opinions for each sitting, Tom could determine potential authorship through process of elimination, making his predictions with the added benefit of knowing who had not yet authored opinions for the various sittings. When authorship changes, a Justice loses a majority, or an opinion flips, this calendar approach becomes imprecise. FantasySCOTUS predictions, by contrast, were made months ago, well before authorship of any opinion could be determined.

Additionally, the ex ante reliability of Tom’s predictions is unclear. While he can couch his prognostications in language like “my relatively uninformed read” (for Stern v. Marshall, which he got wrong) or “it would seem sensible” (for Goodyear, for which he did not provide a prediction), these hedged predictions are still somewhat nebulous and unreliable. FantasySCOTUS predictions, in contrast, come with an attendant confidence level (90%, 95%, 99%). In other words, we know in advance when our predictions are likely to be inaccurate.
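As a rough illustration of how a confidence level could be attached to a crowd prediction (the actual FantasySCOTUS calculation may differ), the sketch below uses a normal approximation to the binomial to ask whether a hypothetical vote share is statistically distinguishable from a coin flip at the 90%, 95%, or 99% level. The vote counts in the example are invented for illustration.

```python
# Hedged sketch of how a crowd prediction might carry a confidence level
# (90%, 95%, 99%); this is an assumption, not the FantasySCOTUS method.
# Idea: test whether the observed share of members predicting "reverse"
# differs from 50% under a normal approximation to the binomial.
import math

def crowd_confidence(votes_reverse: int, votes_total: int) -> str:
    """Return the highest standard confidence level at which the majority
    prediction is distinguishable from a coin flip."""
    p_hat = votes_reverse / votes_total
    # Standard error of a proportion under the null hypothesis p = 0.5.
    se = math.sqrt(0.5 * 0.5 / votes_total)
    z = abs(p_hat - 0.5) / se
    # Two-sided critical values for 99%, 95%, and 90% confidence.
    for level, z_crit in (("99%", 2.576), ("95%", 1.960), ("90%", 1.645)):
        if z >= z_crit:
            return level
    return "below 90% (treat the prediction as unreliable)"

# Hypothetical example: 620 of 1,000 members predict "reverse".
print(crowd_confidence(620, 1000))   # -> "99%"
```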

This post was co-authored by Josh Blackman and Corey Carpenter.