In our article on FantasySCOTUS, we divided our predictors into two groups: the “experts” and the “crowd.”
Additionally, we split the predictors into two groups: the “experts” (those who made predictions for more than 75% of the cases; approximately 30 members), and the “crowd” (the remainder, who made predictions for fewer than 75% of the cases). …
The FantasySCOTUS experts predicted 64.7% of the cases correctly, surpassing the Forecasting Project’s experts, though the difference was not statistically significant. The Gold, Silver, and Bronze medalists in FantasySCOTUS scored accuracy rates of 80%, 75%, and 72%, respectively (an average of 75.7%).
Who are these experts? And how do they decide cases?
To find out, I reached out to the 30 members of the “expert” group. About half replied, and most wished to remain anonymous. The composition of this cadre is quite varied. Only one has any experience arguing before the Supreme Court. A few others have written amicus briefs for the Court. A number have had appellate or district court clerkships. Some have no appellate experience at all, and work in small, general-practice law firms. Others did not attend law school, and have political science backgrounds. One user in particular never attended law school, has no formal training in the law, and taught himself constitutional law.
In our article, we noted that our use of the term “experts” is somewhat imprecise.
It is important to stress that the FantasySCOTUS “experts” are experts in nomenclature only. Unlike the “experts” selected in the Forecasting Project, who were chosen based on credentials and work experience, the FantasySCOTUS experts selected themselves by predicting more than 75% of the cases. When comparing the FantasySCOTUS experts with the Forecasting Project’s experts, we are not comparing two similar groups. The former is effectively a crowd, while the latter is a group of specialized experts. Once again, the wisdom of the crowd trumps expertise.
The “experts” were essentially a wise crowd.
Hail to the Chief
I have profiled Justin Donoho, the Chief Justice of FantasySCOTUS who achieved an impressive 80% accuracy rate.
Justin recently finished a clerkship with the Honorable William J. Bauer on the United States Court of Appeals for the Seventh Circuit. Before clerking, he attended law school at the University of Chicago, where he served as a research assistant for Judge Richard A. Posner. Justin is now an associate at Grippo & Elden in Chicago. Justin started with the presumption that the Court would vote 9-0 to reverse, and worked from there. In many cases involving controversial topics, he would look at how the Justices had voted on similar issues. In other cases, he “just went with his gut.” Interestingly, his predictions generally didn’t change following oral arguments, though he still read the transcripts closely. Justin said he initially predicted that Citizens United v. FEC would be a 9-0 reversal. That “was pretty silly,” he commented. When I asked him how he predicted the votes of Justice Kennedy, Justin chuckled and said that if he had the answer to that question, he “could license that.”
Users with Some Supreme Court Experience
A few of our “experts” had some Supreme Court experience, ranging from arguing (and winning) a case to working on amicus briefs.
David Mills graduated from the University of Michigan Law School. David clerked in a District and Circuit Court, and worked in litigation at a large firm. He is a solo practitioner at The Mills Law Office LLC in Cleveland, Ohio, where he focuses on federal appellate work. David argued and won Ortiz v. Jordan No. 09-737 (OT-2010), writing all certiorari stage and merit briefs. David has also written several certiorari petitions. To make predictions, David reads the summary of the cases on SCOTUSBlog, and reads a summary of oral argument transcripts, or the transcript itself.
A user (who wished to remain anonymous) graduated from Harvard Law School in 2007. He has clerked on a District Court and a Circuit Court, worked for a big law firm, and is currently a candidate for the degree of Doctor of Philosophy in Law at University of Oxford. He wrote the first draft of an amicus brief to the Supreme Court in NAMUDNO v. Holder while in practice. To make predictions, he reads both the briefs and transcripts, and talks to people–including the Judge he clerked for–he thought might have a better sense of the legal issues than he did.
Kedar Bhatia is a 1L at Emory University School of Law. Kedar works at Emory’s Supreme Court clinic, and blogs at SCOTUSBlog. He says he makes a “gut call” on about 40% of the cases based on the question presented. For the remaining cases, he makes a prediction based on the oral argument transcripts. He also “times” his predictions according to the “flow of the term.” Through January, he has a strong bias towards guessing 9-0 (usually reverse), but as an opinion is delayed beyond a certain number of days, he adjusts his prediction to account for dissenters. Currently Kedar is in 57th place on the FantasySCOTUS scoreboard for the October 2010 term.
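Kedar’s term-timing heuristic can be caricatured as a tiny function. Everything here is an invented illustration, including the 60-day threshold standing in for the unstated cutoff he described:

```python
def timed_prediction(early_term: bool, days_pending: int,
                     delay_threshold: int = 60) -> str:
    """Sketch of a term-timing heuristic; the threshold is hypothetical."""
    # Early in the term, bias heavily toward unanimous reversal.
    if early_term or days_pending <= delay_threshold:
        return "9-0 reverse"
    # A long-pending opinion suggests dissents are being drafted.
    return "divided reverse"
```

The interesting design choice is that the input is not the case at all, but the calendar: the longer an opinion takes, the less likely it is to be unanimous.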
Another user (who wished to remain anonymous) graduated from the University of Virginia Law School, where he participated in the Supreme Court Litigation Clinic, and worked on a number of briefs. He currently works in litigation at a large firm. Last term he was familiar with many of the cases from reading certiorari petitions as part of the SCOTUS clinic at UVA. He would supplement his predictions by reading the summaries on SCOTUSBlog and trying to make his “best guess.”
A few of the experts were professors, though none were “household” names, so to speak–Jack Balkin wasn’t facing off against Larry Tribe on FantasySCOTUS (anecdotally, notable Professors lacked the time to prepare and make predictions for all cases).
One member of the expert group graduated from Columbia Law School in 2006 and is currently a teaching fellow, teaching criminal procedure and litigation. He previously worked for one year in litigation, and clerked in a District and Circuit Court. He makes predictions simply by reading the question presented in the certiorari petition, and occasionally reading the Circuit Court opinion.
User AbbaMouse did not attend law school, but received a PhD in political science from Rice University, specializing in international relations. He is currently a political science professor, and teaches an undergraduate constitutional law class from time to time. He never reads briefs, and finds precedent “usually unimportant unless it was set by one or more current members of the Court.” AbbaMouse has a three-step process for predicting cases: (1) look for voting blocs in similar past cases; (2) use left-right ideology if blocs are unknown or nonexistent; (3) try to find the split, given the ordering of the Justices by bloc or ideology, from oral argument recaps. He notes that he can predict cases with reasonable accuracy simply by assuming that the Justices vote their ideologies (granting that those ideologies may be more complex than left-right). He also relies on the findings of a handful of quantitative studies of Supreme Court voting. Currently AbbaMouse is in 7th place on the FantasySCOTUS scoreboard for the October 2010 term.
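AbbaMouse’s three-step fallback could be sketched roughly as follows. The data structures, labels, and function name are all hypothetical illustrations of the ordering he described, not code he actually uses:

```python
def predict_votes(issue_area, past_blocs, ideology_scores, recap_leans):
    """Rough sketch of a bloc-then-ideology-then-recap heuristic."""
    # Step 1: reuse voting blocs from similar past cases when available.
    if issue_area in past_blocs:
        votes = dict(past_blocs[issue_area])
    else:
        # Step 2: fall back on left-right ideology scores
        # (the sign convention here is arbitrary).
        votes = {j: ("reverse" if s < 0 else "affirm")
                 for j, s in ideology_scores.items()}
    # Step 3: refine the split with signals gleaned from oral-argument recaps.
    votes.update({j: lean for j, lean in recap_leans.items() if lean})
    return votes
```

The structure mirrors his priority ordering: historical blocs dominate, ideology fills the gaps, and oral-argument recaps override both.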
Law Students/New Attorneys
Many of our experts (and most FantasySCOTUS players) are law students, or recent law school graduates with little-to-no practice experience.
One user, set to graduate from the University of Tulsa School of Law in December 2011, makes predictions based on his knowledge of the Justices’ prior decisions on similar issues.
User JonathanIngram is a 2L at Southern Illinois School of Law. For most cases, he reads the party briefs and the transcripts. If a case piques his interest, he will read the amicus briefs as well. For the remainder, he only reads the certiorari petition and the lower court opinion.
User Banana873 graduated from the University of Virginia School of Law in 2010 and is currently practicing at a two-attorney firm in Northern Virginia. To make predictions, Banana873 reads the transcripts and combines that knowledge with what he knows about the Justices. He finds Justice Kennedy the hardest to predict (“for obvious reasons,” he says). Justices Sotomayor and Kagan, due to their relative newness, are difficult as well.
User metsgreg99 graduated from Vermont Law School in 2010. He currently works for a legislator in the New York State Assembly, focusing on legal policy. To make predictions, he scans through the briefs and listens to a few minutes of oral arguments. Most of his predictions rely on the “politics of the case” and “which circuit the case came up from.” He notes that the Court tends to affirm the decisions of the “more conservative circuits and appellate courts.” He believes that politics have never played a more important role in determining which side a Justice will take. If he discerns any “political overtone” to a case, his chances of guessing correctly are greatly improved. He laments that cases can be predicted based on politics, and feels the Court would be a more respected institution if “ideologies were left at the front steps.”
Experienced Attorneys without Appellate Experience
One user graduated from the University of Alabama in 1979. He has practiced law with his wife for 26 years, focusing on general practice, criminal law, bankruptcy, domestic relations, and civil litigation. To prepare, he reads the synopsis of the case on FantasySCOTUS and on SCOTUSBlog. He attempts to pick as many Justices as he can on the winning side.
No Legal Training
Perhaps the most intriguing “expert” was Melech. Melech never attended law school; he holds a BA and is currently seeking employment as an actuary. Melech taught himself constitutional law in high school by reading a casebook cover to cover. He notes that his “Supreme Court experience consists entirely of reading Supreme Court cases” and that he has never “had the privilege to make a pitch directly to any of the Justices.” His prediction process is meticulous.
I first read through the preview on SCOTUSblog or the PREVIEW they link to. Any prior SCOTUS case they cite as a possibly relevant precedent I look at, and particularly take note of any votes a current member of the court cast in that case. I also look up any other case I can remember that might be relevant. After getting an idea of the case from the preview, I usually read the oral argument transcript. (Sometimes by the time I get around to it the case has already been reported in the media with sufficient detail that I’m fairly confident that the decision will be unanimous. If I’m pressed for time, I’ll rely on that and ditch the transcript). Often if I’m in a rush and/or the case is boring, I’ll skip over counsels’ arguments and only read the Justices’ questions and comments. Predicting a vote involves knowing the Justice’s views on that area of the law in general, prior votes on precedents, and gauging which comments shed light on the Justice’s thoughts on the case and which are just devil’s advocate. Often the position on one side of the case is so ludicrous that you can tell early on, or even before the argument, that even Justices who might be sympathetic to that side as a matter of policy will not vote for that side.
Currently Melech is in 2nd place on the FantasySCOTUS scoreboard for the October 2010 term.
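Melech’s process amounts to a weighted tally of precedent votes and oral-argument signals, discounting devil’s-advocate questioning. A loose caricature in Python, where the weights, field names, and data shapes are entirely invented for illustration:

```python
def score_justice(precedent_votes, comments):
    """Tally one Justice's leanings; all weights here are hypothetical.

    precedent_votes: list of "petitioner"/"respondent" votes cast by this
        Justice in the relevant precedents.
    comments: list of dicts with a 'leans' field and a 'devils_advocate' flag,
        summarizing the Justice's oral-argument questions.
    """
    score = sum(1 if v == "petitioner" else -1 for v in precedent_votes)
    for c in comments:
        # Discount questions that are merely playing devil's advocate.
        if not c.get("devils_advocate"):
            score += 1 if c["leans"] == "petitioner" else -1
    return "petitioner" if score >= 0 else "respondent"
```

The key feature of the sketch is the filter in the loop: as Melech put it, the hard part is gauging which comments shed light on a Justice’s actual thinking and which are just devil’s advocate.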
This sampling of our experts reveals only a smattering of players with actual Supreme Court experience. As we noted in our article:
The FantasySCOTUS top three experts not only outperformed the Forecasting Project’s top three experts—the crème de la crème of appellate lawyers and academics—but they also slightly outperformed the decision-tree algorithm. Justin Donoho, the Chief Justice of FantasySCOTUS, achieved a staggering accuracy rate of 80%, while the silver and bronze medalists scored 75% and 72% respectively. These results suggest that the Forecasting Project’s selection process did not acquire the superb expertise they sought—their top three experts averaged only a 59.1% accuracy rate. Do credentials and pedigree—such as scholarship, appellate practice, and Supreme Court clerkships, the metrics the Forecasting Project selected—sufficiently signal a prognosticator’s jurisprudential prescience? Not necessarily—credentialed does not equate with skilled.
Unlike the Supreme Court Forecasting Project’s panel, our ranks are not stacked with Supreme Court clerks, appellate litigators, and constitutional law professors. This makes the accuracy rate of our experts all the more noteworthy.