On Tom Goldstein’s Predictive Prowess

October 25th, 2012

The Atlantic has a lengthy profile of Tom Goldstein, founder of SCOTUSBlog and renowned SCOTUS litigator. In addition to telling a great story about Tom’s skills at fast cars, online poker, and finding clients, the article explores Tom’s prediction abilities.

One of the Court’s most quoted observers, he has also developed a reputation as a Supreme Court seer, someone with a claim on understanding one of Washington’s most opaque institutions. This role is turning into a vocation in its own right: his predictive skills are coveted by TV producers, while hedge-fund managers pay him to evaluate cases they’re betting on.

All of which raises a question: What makes Goldstein so good at calling cases in the first place? When I asked him about his biggest triumph this year—his very public prediction that the Affordable Care Act would be upheld, with Roberts writing the decision—he described it as his “big read,” as if he had guessed someone’s hand during a card game. And in fact, Goldstein approaches the Court much the way he would an opponent in Texas Hold ’Em, developing a close profile of each of the justices. He knows, for example, that the chief justice likes to create some “drama” in his decisions—a quirk apparently not well understood by the Fox News and CNN reporters who misreported the Court’s verdict, having failed to flip to the last page of the opinion and read Roberts’s dramatic conclusion. Goldstein is similarly attuned to the other justices’ moods. Two weeks after arguments in the ACA case, and before the decision came out, Goldstein argued the very next case on the docket. Standing just feet from the bench, he decided that the liberal justices looked too happy to be losers. Goldstein lost the case he was arguing that day, but he nailed the ACA prediction.

Likewise, earlier this year UPI ran a piece calling Tom “the most prescient high court analyst.”

I am generally very skeptical of self-professed experts who claim special skill at predicting how things will turn out–for the very reasons Tom himself acknowledges.

He knows his gamble could have gone the other way. “One of the great things about predictions,” he told me, grinning impishly, “is that if you make them and make them high-enough-profile, well, if you’re right, people think you’re a supergenius. And if you’re wrong, people tend to forget them.”

Experts who happen to get something right are lauded, and experts who mess up are forgotten–or more precisely, their blown predictions are forgotten, while the experts themselves are, amazingly, called on to make more predictions. Dan Gardner’s Future Babble is a great book documenting these follies.

Tom certainly nailed the ACA case with Roberts’s vote, though a number of the top FantasySCOTUS predictors, including Chief Justice Berlove, called it months earlier (I spoke with Berlove quite a bit, and his rationales were not too different from Tom’s).

Tom also correctly predicted, on 2/23/10, that President Obama would nominate Elena Kagan to the Supreme Court. FantasySCOTUS predictors came to the same result two weeks later. In February, Tom wrote a post listing over 30 people who might be appointed to the Supreme Court in a second Obama term, virtually guaranteeing that whoever is picked will be on that list. If he’s right, he’ll look like a “supergenius.” If he’s wrong (nearly impossible with such a broad list), no one will remember.

Tom’s track record on predicting case outcomes isn’t as well-developed, as he rarely makes specific predictions on individual cases, though we do have one small sample where we can compare him with FantasySCOTUS. During the October 2010 term, FantasySCOTUS went head-to-head with Tom Goldstein on the last 14 cases of the term. FantasySCOTUS won.

In a previous column, we compared our predictions for the final 14 cases with predictions made by Tom Goldstein at SCOTUSBlog (Tom did not make predictions for two of the 14 cases). While the sample size is rather small (14 cases out of the total 81 cases, about 17% of the cases decided this term) this small experiment allows for an informal comparison between the wisdom of the crowds and the accuracy of experts. At the end of the term, the final score is FantasySCOTUS: 11, SCOTUSBlog: 9 (79% to 64%).
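For those checking the arithmetic, the percentages in that tally follow directly from the case counts reported above; a quick sketch confirms the math:

```python
# Head-to-head accuracy over the final 14 cases of the October 2010 term,
# using the counts reported above (Tom skipped two of the 14 cases, which
# count against his total here).
def accuracy(correct: int, total: int) -> int:
    """Return accuracy as a whole-number percentage."""
    return round(100 * correct / total)

fantasy_scotus = accuracy(11, 14)  # FantasySCOTUS: 11 of 14 correct
scotusblog = accuracy(9, 14)       # SCOTUSBlog: 9 of 14 correct

print(f"FantasySCOTUS: {fantasy_scotus}%, SCOTUSBlog: {scotusblog}%")
# prints "FantasySCOTUS: 79%, SCOTUSBlog: 64%"
```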

I try not to draw too many conclusions from such a small sample, though in some respects these are the toughest cases to call, because the biggest cases are saved for the end of the term. Also, we offered a justice-by-justice vote for each case, while Tom only predicted “Affirm” or “Reverse.”

I’d love to do a year-long competition, though I am doubtful anyone would agree, as our research has shown that our crowdsourcing approach is more accurate than most expert predictions.

One other area of Tom’s prescience is his “petitions to watch.” Assessing the accuracy of these picks is difficult for a few reasons. The toughest is that Tom’s blog is so widely read. It is quite possible–and rumored to be fact–that SCOTUS clerks scan the blog for petitions worth recommending. If so (and this would be impossible to prove), Tom’s listing a petition as one to watch itself makes it more likely to be granted. There is no way to isolate that effect.

That being said, it is still possible to look at all the petitions submitted to the Court, pick out the ones Tom flags, see which are granted, and run the numbers. I’m working on that as part of a larger project to analyze the Supreme Court’s procedures algorithmically. Along with my colleagues Nick Wagoner (of Circuit Splits fame) and Dru Stevenson, I am researching the factors that go into granting cert petitions in a systematic, empirical manner. Using some fairly cutting-edge technology (not manually coding petitions in a spreadsheet or with Stata), I am in the process of scanning, parsing, and crunching a very large number of cert petitions to analyze their elements along several dimensions. This research is part of my broader effort to analyze court data to help attorneys understand how courts operate and how they decide cases–what I call “Assisted Decision Making.”
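The core comparison in that exercise can be sketched in a few lines. Everything below is hypothetical–the field names and sample records are invented placeholders, not real docket data–but it shows the shape of the calculation: the grant rate among petitions Tom flagged versus the grant rate among those he didn’t.

```python
# Hypothetical sketch: compare cert grant rates for petitions flagged as
# "petitions to watch" against the rest of the pool. The records below
# are invented placeholders, not real docket data.
petitions = [
    {"docket": "10-0001", "watched": True,  "granted": True},
    {"docket": "10-0002", "watched": True,  "granted": False},
    {"docket": "10-0003", "watched": False, "granted": False},
    {"docket": "10-0004", "watched": False, "granted": True},
    {"docket": "10-0005", "watched": False, "granted": False},
]

def grant_rate(pool):
    """Fraction of petitions in the pool that were granted cert."""
    return sum(p["granted"] for p in pool) / len(pool) if pool else 0.0

watched = [p for p in petitions if p["watched"]]
unwatched = [p for p in petitions if not p["watched"]]

print(f"watched: {grant_rate(watched):.0%}, unwatched: {grant_rate(unwatched):.0%}")
```

Of course, a raw comparison like this cannot separate prediction from influence: if clerks really do scan the blog, the watched pool’s grant rate is inflated by the listing itself, which is exactly the confounder described above.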

I’ll be giving a presentation about Assisted Decision Making on a panel about Big Data and the Law at a symposium at Georgetown University Law Center, titled “Big Data and Big Challenges for Law and Legal Information.”

Much more about that in the new year once I finish my book about the ACA case.