After 4 Cases, {Marshall}+ Beats FantasySCOTUS (Slightly)

December 15th, 2014

This term, the Supreme Court has issued four decisions out of 40 argued cases. Overall, the FantasySCOTUS crowd correctly predicted 2 of the 4 cases, for a 50% accuracy rate, while {Marshall}+ correctly predicted the outcome in 3 of the 4 decisions. However, the crowd has a higher overall Justice-level accuracy rate, 63.89%, while {Marshall}+ is below 50% at predicting individual Justices. In other words, when the crowd got the outcome right, it was also more likely to get the split right: {Marshall}+ predicted the overall outcomes more accurately, but was less accurate on the splits. It is very early, and it will be interesting to see whether these trends continue as more decisions are rendered.
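To make the distinction between the two accuracy measures concrete, here is a minimal Python sketch, not the actual FantasySCOTUS or {Marshall}+ scoring code. The outcome lists reflect the four cases discussed below; the Justice-level helper is included only to show how individual votes would be pooled across cases.

```python
# Minimal sketch (not the actual scoring code) of case-level vs. Justice-level accuracy.

def case_accuracy(predicted, actual):
    """Fraction of cases where the predicted outcome (affirm/reverse) was correct."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def justice_accuracy(predicted_votes, actual_votes):
    """Fraction of individual Justice votes predicted correctly, pooled over all cases.

    Each element is a dict mapping a Justice's name to "affirm" or "reverse".
    """
    correct = total = 0
    for case_pred, case_actual in zip(predicted_votes, actual_votes):
        for justice, vote in case_actual.items():
            correct += case_pred.get(justice) == vote
            total += 1
    return correct / total

# Outcomes for the four cases discussed below: Warger, Busk, Dart Cherokee, Heien.
actual   = ["affirm", "reverse", "reverse", "affirm"]
crowd    = ["affirm", "affirm",  "affirm",  "affirm"]   # FantasySCOTUS
marshall = ["reverse", "reverse", "reverse", "affirm"]  # {Marshall}+

print(case_accuracy(crowd, actual))     # 0.5  -> 2 of 4
print(case_accuracy(marshall, actual))  # 0.75 -> 3 of 4
```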

[Chart: case predictions and results, FantasySCOTUS vs. {Marshall}+]

[Chart: overall accuracy comparison, FantasySCOTUS vs. {Marshall}+]

Warger v. Shauers

In Warger v. Shauers, Justice Sotomayor affirmed for a unanimous Court. FantasySCOTUS nailed the decision. The crowd forecasted that all of the Justices would affirm, and at very high levels, all between 80% and 90%. Further, the crowd was able to forecast one of the rarest outcomes, a 9-0 affirm. This is a split that {Marshall}+ handles very poorly. Here, the algorithm forecasted a 9-0 reverse, all with confidence scores between 60% and 70%.

[Chart: Warger v. Shauers vote predictions]

Integrity Staffing Solutions, Inc. v. Busk

In Integrity Staffing Solutions, Inc. v. Busk, Justice Thomas reversed for a unanimous Court. Here, the algorithm beat the crowd: {Marshall}+ predicted a 6-3 reversal, while the crowd forecasted a 5-4 affirm. Of the Justices {Marshall}+ missed, the confidence scores were very low: Justices Ginsburg (55%), Sotomayor (53%), and Kagan (52%). In other words, the algorithm was very close to forecasting a 9-0 reversal, which would have been exactly right. We should also stress that Justice Sotomayor, joined by Justice Kagan, wrote a concurring opinion explaining that they agreed with a narrow conception of the Court's holding. It is fascinating that our algorithm was able to sense their distance from the majority. The crowd, by contrast, badly misjudged the votes of the liberal Justices: FantasySCOTUS pegged Justices Ginsburg (81%), Breyer (76%), Sotomayor (83%), and Kagan (80%) to affirm. Here the algorithm caught something the crowd did not.
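To illustrate how per-Justice confidence scores translate into a predicted split, here is a simplified Python sketch. The 50% threshold rule and the six reverse-side confidence values are assumptions for illustration only, not {Marshall}+'s actual model; only the three affirm confidences for Justices Ginsburg, Sotomayor, and Kagan come from the discussion above.

```python
# Simplified sketch: read per-Justice affirm confidences as votes by thresholding
# at 50%. Only the Ginsburg/Sotomayor/Kagan values are from the post; the other
# six confidences are hypothetical placeholders chosen to yield the 6-3 forecast.

affirm_confidence = {
    "Ginsburg": 0.55, "Sotomayor": 0.53, "Kagan": 0.52,  # reported above
    "Roberts": 0.35, "Scalia": 0.30, "Kennedy": 0.33,    # hypothetical
    "Thomas": 0.28, "Breyer": 0.45, "Alito": 0.31,       # hypothetical
}

affirm_votes = [j for j, p in affirm_confidence.items() if p > 0.5]
reverse_votes = [j for j, p in affirm_confidence.items() if p <= 0.5]
print(f"Predicted split: {len(reverse_votes)}-{len(affirm_votes)} reverse")  # 6-3 reverse

# Nudging the three borderline scores just below 50% would flip the forecast
# to the 9-0 reversal the Court actually delivered.
```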

[Chart: Integrity Staffing Solutions, Inc. v. Busk vote predictions]

Dart Cherokee Basin Operating Co. v. Owens

In Dart Cherokee Basin Operating Co. v. Owens, the Court reversed in an odd 5-4 alignment. Justice Ginsburg, joined by the Chief Justice and Justices Breyer, Alito, and Sotomayor, voted to reverse; Justice Scalia, joined by Justices Kennedy, Thomas, and Kagan, voted to affirm. The crowd expected the Court to unanimously agree with the 10th Circuit, and all of its votes were north of 70%, so it was fairly confident. The algorithm forecasted a 7-2 reverse, which was not correct, but was much closer. Specifically, the algorithm accurately predicted the votes of Chief Justice Roberts, as well as Justices Ginsburg, Breyer, Alito, and Kagan. But it missed the votes of Justices Scalia, Kennedy, and Thomas, all of whom it predicted to reverse with confidence above 70%. Justice Sotomayor was predicted to affirm at only 53%, a low value awfully close to a reversal prediction, which would have been correct.

[Chart: Dart Cherokee Basin Operating Co. v. Owens vote predictions]

Heien v. North Carolina

In Heien v. North Carolina, Chief Justice Roberts wrote for 8 Justices to affirm; Justice Sotomayor penned a lone dissent to reverse. Both the crowd and the algorithm predicted a 5-4 decision to affirm along the usual ideological lines. Instead, the Chief wrote a fairly narrow opinion for 8 Justices, with only Justice Sotomayor dissenting. Both the crowd and the algorithm predicted that Justices Ginsburg, Sotomayor, and Kagan would reverse, at over 70%; in the end, only Justice Sotomayor splintered off and voted to reverse.

[Chart: Heien v. North Carolina vote predictions]