Supreme Court Prediction Model Featured on Vox

August 4th, 2014

Dylan Matthews of Vox interviewed me and my co-author Mike Bommarito about our Supreme Court prediction model, which we developed with Dan Katz.

We haven’t gotten nearly that far in predicting court cases. But three scholars — South Texas College of Law’s Josh Blackman, Michigan State’s Daniel Martin Katz, and Bommarito Consulting’s Michael Bommarito — have built a model that comes close. As Blackman noted in a blog post announcing the model, it “correctly identifies 69.7% of the Supreme Court’s overall affirm and reverse decisions and correctly forecasts 70.9% of the votes of individual justices across 7,700 cases and more than 68,000 justice votes.”
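For readers who want to see what those two figures actually measure, here is a minimal sketch of the bookkeeping: justice-level accuracy counts individual votes, while case-level accuracy compares the predicted majority in each case to the actual outcome. The column names and the tiny toy data set below are my own illustration, not the schema or data behind the model itself.

```python
import pandas as pd

# Illustrative justice-level predictions; column names and values are
# hypothetical, not the actual schema from the model's replication data.
votes = pd.DataFrame({
    "case_id":        [1, 1, 1, 2, 2, 2],
    "predicted_vote": ["reverse", "reverse", "affirm", "affirm", "affirm", "affirm"],
    "actual_vote":    ["reverse", "reverse", "reverse", "affirm", "affirm", "reverse"],
})

# Justice-level accuracy: fraction of individual votes predicted correctly.
justice_accuracy = (votes["predicted_vote"] == votes["actual_vote"]).mean()

# Case-level accuracy: compare the majority of predicted votes in each case
# to the majority of actual votes in that case.
def majority(series):
    return series.value_counts().idxmax()

by_case = votes.groupby("case_id").agg(
    predicted_outcome=("predicted_vote", majority),
    actual_outcome=("actual_vote", majority),
)
case_accuracy = (by_case["predicted_outcome"] == by_case["actual_outcome"]).mean()

print(f"Justice-level accuracy: {justice_accuracy:.1%}")
print(f"Case-level accuracy:    {case_accuracy:.1%}")
```

The published numbers come from running exactly this kind of tally over roughly 7,700 cases and more than 68,000 justice votes, not over a toy table like this one.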

Dylan also highlights the nuanced manner in which we approach ideology.

The process seems extremely complicated, but Bommarito and Blackman note that you can still draw conclusions about the way the court behaves from it. For one thing, Bommarito notes that ideological variables seem to make a major difference, which seems to refute the naive view that the Court is somehow above politics. “If there were an argument ongoing between political scientists and lawyers as to what mattered, as to whether judges are really independent judicial reasoning machines on high, or whether they’re just political animals like anyone else, then in terms of the features that the model uses to successfully predict, it appears they’re just political animals,” he concludes.

Blackman caveats that a bit. For one thing, a lot of the Court’s decisions are uncontroversial 9-0 reversals of lower courts: a lower court got it wrong, every justice agrees about it, and they act together. The model gets those right very often, but struggles with the one in three cases where the court ultimately affirms the lower court’s ruling:

The model can only do that well if it brings non-ideological variables into play. “The set of ‘case information variables’ — which includes the lower court where the case originated, the issue, who the petitioner and respondent are, etc. — contributed 23% of predictive power,” Blackman explains. “These were among the most predictive factors, and are factors that most people in the press don’t think about.”

All the same, if anyone still labors under the misimpression that the Court’s political views don’t matter, the model should give them reason to reconsider. Bommarito puts it in statistical terms: “The null hypothesis for legal academia is that ideology doesn’t matter; we’ve rejected that hypothesis.”
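The 23% figure for case information variables is the kind of number you get by summing a tree ensemble’s per-feature importances within each category of predictor. The sketch below is only illustrative: the feature names, the category mapping, and the random data are invented, and scikit-learn’s ExtraTreesClassifier stands in for whatever configuration the actual model uses.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)

# Hypothetical predictors about a case or a justice; the names, the
# category mapping, and the random data are purely illustrative.
feature_names = [
    "lower_court", "issue_area", "petitioner_type", "respondent_type",  # case information
    "justice_ideology_score", "justice_prior_agreement",                # ideological/behavioral
]
feature_category = {
    "lower_court": "case_info", "issue_area": "case_info",
    "petitioner_type": "case_info", "respondent_type": "case_info",
    "justice_ideology_score": "ideology", "justice_prior_agreement": "ideology",
}

X = rng.normal(size=(1000, len(feature_names)))
y = rng.integers(0, 2, size=1000)  # 0 = affirm, 1 = reverse (toy labels)

model = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)

# Per-feature importances sum to 1.0, so each category total can be read
# directly as that category's share of predictive power.
totals = {}
for name, importance in zip(feature_names, model.feature_importances_):
    totals[feature_category[name]] = totals.get(feature_category[name], 0.0) + importance

for category, share in totals.items():
    print(f"{category}: {share:.1%}")
```

With real features, the case_info total would correspond to the 23% contribution Blackman describes; with random labels the split printed here is meaningless and only demonstrates the bookkeeping.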

The most important element is the conclusion. Dylan really drives home where our research is heading.

From there, it’s on to lower courts. Supreme Court cases, while high-impact, are pretty few in number and are already widely predicted. The authors plan on using the model as private consultants, and the real growth market there is in predicting outcomes in district and appellate court cases. There isn’t as strong a database in that area yet, but Blackman and Bommarito are optimistic. “When you look at the average law firm, they’re swimming in data that’s not very well collected and not very well structured,” Bommarito says. Collating that data into practical models could make a real difference for lawyers plotting their court and negotiation strategies. It could also potentially help legislators get a sense of how vulnerable laws they pass could be to a legal challenge. And of course, academics studying lower courts could find an effective model valuable too.

Want to learn more about the model? Check out David Kravets’ post at Ars Technica on it, Blackman’s blog post, the article the authors wrote describing it, or the model’s GitHub page. And don’t forget to click the toggle above to read my full interview with Blackman and Bommarito.

And in case you were wondering (I was), the reporters at Vox type their own transcripts. No court reporters involved. You can read the entire interview here.