Need a SCOTUS diversion from the Kagan-Palooza? Look no further (although we note that FantasySCOTUS.net correctly predicted Kagan would get the nod from the very beginning).
The Supreme Court decided several important cases during the month of April, and in this post we consider five of them, and see how accurate the league was in predicting each: United States v. Stevens, Perdue, Merck, Stolt-Nielsen, and Salazar v. Buono. While our members did not predict that vandals would reverse the Supreme Court’s opinion in Salazar by stealing the memorial cross, these diverse cases help explain how users perceive these issues, and in what circumstances predictions are less precise.
The table below lists these five cases, their outcomes (with confidence intervals), the number of users who correctly guessed the split, and, for each Justice, the standardized majority ratio (SMR), which tests whether users perceive the Court as dominated by conservative ideology.
| | U.S. v. Stevens | Salazar | Perdue | Merck | Stolt-Nielsen |
|---|---|---|---|---|---|
| Outcome | Affirm 83% | Affirm 44% | Reverse 58% | Affirm 69% | Reverse 31% |
| Outcome CI | +/- 3.82% (99%) | +/- 4.98% (95%) | +/- 6.71% (95%) | +/- 11.58% (99%) | +/- 13.85% (99%) |
| Roberts | 0.89 +/- .08 | 1.2 +/- .15 | 1.26 +/- .2 | 1.09 +/- .24 | 1 +/- .27 |
| Stevens | 4.41 +/- .41 | 1.13 +/- .16 | 1.22 +/- .23 | 2.13 +/- .51 | 2.35 +/- .63 |
| Scalia | 0.91 +/- .08 | 1.06 +/- .14 | 1.23 +/- .2 | 0.97 +/- .23 | 1 +/- .27 |
| Thomas | 0.87 +/- .08 | 1.07 +/- .14 | 1.20 +/- .2 | 0.93 +/- .23 | 1.06 +/- .28 |
| Ginsburg | 4.13 +/- .4 | 1.02 +/- .15 | 0.89 +/- .2 | 2.16 +/- .52 | 2.35 +/- .63 |
| Breyer | 4.19 +/- .4 | 1.16 +/- .16 | 1.55 +/- .26 | 2.29 +/- .53 | 2.17 +/- .6 |
| Alito | 0.69 +/- .07 | 1.07 +/- .14 | 1.24 +/- .2 | 0.96 +/- .23 | 0.98 +/- .27 |
| Sotomayor | 4.33 +/- .41 | 1.13 +/- .16 | 0.95 +/- .2 | 1.97 +/- .49 | 2.22 +/- .61 |
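As a rough check on the outcome rows, the reported intervals are consistent with a standard normal-approximation confidence interval for a proportion. A minimal sketch in Python, assuming roughly 640 predictions for Stevens (the post says only "over 600," so that count is our back-of-the-envelope guess):

```python
import math

def outcome_ci(p, n, z):
    """Half-width of a normal-approximation confidence interval
    for a prediction proportion p out of n total predictions."""
    return z * math.sqrt(p * (1 - p) / n)

# Stevens: 83% of the predictions chose "affirm"; z = 2.576 for a
# 99% confidence level. n = 640 is an assumed, illustrative count.
half_width = outcome_ci(0.83, 640, 2.576)
print(round(half_width * 100, 2))  # prints 3.82, matching the table
```

With wide proportions and smaller per-case prediction counts (as in Merck and Stolt-Nielsen), the same formula produces the much wider intervals shown in the table.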
United States v. Stevens considered the constitutionality of a statute banning depictions of animal cruelty. Out of over 600 predictions, over 83% of the members correctly predicted that the Supreme Court would affirm the Third Circuit’s opinion. Specifically, 98 users, representing 15% of the total predictions, correctly guessed that the split would be 8-1. With only Justice Alito dissenting, this was an unusual split.
While the predictions for affirmance were clear, users by no means indicated that the decision would be unanimous. Looking at the SMRs, the real question for users was whether the conservative Justices would take an expansive view of First Amendment speech over a law-and-order viewpoint. The liberal Justices were considered highly likely to join the majority, with SMRs all above 4, while the conservative Justices were all significantly below 1. However, users predicted that the holdout tendency would be weak for Roberts, Scalia, and Thomas, but strong for Alito. In this sense, our data forecasted that Alito would be the sole dissenter, and in light of the 8-1 split, our data was accurate.
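The "significantly below 1" judgments can be read straight off the table: an SMR differs significantly from 1 when its reported interval excludes 1. A small sketch using the Stevens-case values from the table above:

```python
def differs_from_one(smr, half_width):
    """True if the SMR's reported interval excludes 1."""
    return not (smr - half_width <= 1 <= smr + half_width)

# Stevens-case SMRs from the table: (value, +/- half-width)
stevens_smrs = {
    "Stevens": (4.41, 0.41),  # far above 1: near-certain majority member
    "Alito":   (0.69, 0.07),  # well below 1: strong holdout tendency
    "Roberts": (0.89, 0.08),  # below 1, but only weakly so
    "Scalia":  (0.91, 0.08),  # below 1, but only weakly so
}
for justice, (smr, hw) in stevens_smrs.items():
    print(justice, differs_from_one(smr, hw))
```

By contrast, a value like Scalia's 1.06 +/- .14 in the Salazar column spans 1, which is what the later case discussions mean by "not significantly different from 1."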
For more results, read on at JoshBlackman.com.
Salazar v. Buono considered whether the placement of a memorial cross in the Mojave Desert violates the Establishment Clause. Now that the cross has been stolen, the relevance of this narrow opinion remains questionable. Yet our members still got it right. The majority of users (56%) predicted that the Court would reverse the Ninth Circuit’s decision, at a 95% confidence level. However, only 97 users, comprising 25% of the predictions for the case, correctly guessed that the Court would split 5-4. One issue that might explain this gap between predicting the outcome and predicting the split is that the cert grant presented two questions: whether the plaintiff had standing, and whether the transfer of the land violated the Establishment Clause. Two questions create multiple dimensions for any decision, and therefore complicate predicting the actual outcome of the case. Users did, however, predict that the case would come out along ideological lines, as each Justice had an SMR not significantly different from 1. This shows that while a decision can be made by a conservative or liberal majority, it is sometimes difficult to discern what exactly the “conservative” or “liberal” answer to the problem will be.
Perdue discusses fee awards under fee-shifting statutes. Although this case delves into technical aspects of statutory law, 58% of users correctly guessed the overall outcome of the case at a 95% confidence level. Only 12 users (5% of total predictions) guessed the correct split, showing that it is much more difficult to predict the Justices’ specific voting behavior in technical cases. The SMRs were mostly not significantly different from 1, indicating that users expected the decision to come out along ideological lines, although Roberts and Alito showed a weak possibility of joining the liberal Justices. The predictions also indicated that Breyer had a moderately strong chance of joining a “conservative” decision, but he did not, leaving ideological lines uncrossed on the issue of fee awards.
Merck is another complex case, dealing with the intersection of securities fraud, inquiry notice, and the statute of limitations. Once again, a majority of predictions (69%) identified the correct outcome, at a 99% confidence level. In this case, 26 users, comprising 26% of the predictions, correctly predicted that the decision would be unanimous. The SMRs further indicate that the case was ripe for a unanimous decision: the “conservative majority” of the Court had SMRs not significantly different from 1, while the “liberal” Justices had SMRs significantly above 1. These numbers show that users thought the Justices were particularly likely to form a large majority coalition.
Uncharacteristically, members failed to accurately predict Stolt-Nielsen, a technical case dealing with class arbitration. The majority of users predicted the wrong outcome, and only three users predicted the correct split. It is easier to see where the predictions went astray once the SMRs are in the picture. The conservative Justices had SMRs not significantly different from 1, which is simply standard majority status. The liberal Justices, however, all had SMRs above 2 and significantly above 1, showing that users expected a strong opportunity for a large coalition. Instead, the 5-3 decision fell along ideological lines, so users were overly optimistic about coalition-building. An additional wrinkle is that Sotomayor recused herself because the case came from the Second Circuit. This was not reflected in the SMRs, and consequently not in the predictions. Here, the predictions may have overlooked what was taking place in this technical case.
Overall, the April decisions may have had some twists, but the predictions were not far off base. Although the majority of users predicted Stolt-Nielsen incorrectly, and most failed to predict the 5-4 split in Salazar, the other three cases show that the fundamentals of predicting judicial behavior are strong and can be very helpful in determining the outcome of most cases. The interesting question about wrong predictions is not that they are wrong, but how and where they went wrong. One clear example is Sotomayor’s recusal. This is not the first case in which she has recused herself, and as in those earlier cases, the predictions failed to take her recusal into account. While the Court follows no mandatory “recusal rule,” her previous recusals show a definite tendency to step aside in Second Circuit cases. Recusals notwithstanding, the majority of users correctly predicted the majority of cases.
This post was co-authored by Josh Blackman and Corey Carpenter.