It’s time for a bit of a victory lap. Since the end of November, Judge Neil Gorsuch has been firmly perched atop the FantasySCOTUS prediction market. We called it before anyone else. At the LexPredict blog, my colleague Mike Bommarito wrote a post explaining our methodology. I reproduce it here.
On November 14th, 2016, just days after the election, the New York Times published an article covering likely Supreme Court nominees from the Trump administration. A day later, CNN published its own piece on the topic. And within the next few weeks, the Wall Street Journal, Washington Post, USA Today, LA Times, and many others followed up with their own predictions and profiles. By early December, PredictIt, the real-money prediction market, had entered the fray, listing contracts for 25 candidate Justices on its market.
Collectively, pundits and market participants spent thousands of hours researching, interviewing, and writing on the topic. Old contacts were dug up, hunches were whispered over Beltway lunches, and the tea leaves of Trump’s tweets were read. But despite this small mountain of effort, none of these early prognostications were right. In fact, Gorsuch was barely mentioned in this early coverage, and even then, often as an “also-ran.”
Well, almost none. FantasyJustice predicted the Gorsuch appointment on November 20th. Except for a few brief hours on November 23rd, Gorsuch never fell from the lead, and his margin continued to grow right up to the announcement at 8PM last night. Our not-for-money crowd prediction results are shown in the figure below, beginning on November 14th and running through the night of January 31st.
THREE WAYS TO PREDICT
We start from a simple idea – there are three ways we predict things: experts, crowds, and algorithms. Experts are best exemplified by pundits, doctors, and lawyers, and for much of recent human history, we have delegated decision-making to solitary specialists like these – the so-called “cult of the expert.” Experts typically rely on tacit knowledge and implicit models, which is a technical way of saying “experienced gut instinct.”
(If you’ve made it this far, do yourself a favor and purchase a copy of Professor Tetlock’s Superforecasting: The Art and Science of Prediction. Professor Tetlock’s career has been dedicated to exploring human judgement, good or otherwise, and his book is an excellent tour through modern research.)
Crowds, on the other hand, are defined by their multiplicity. Books like James Surowiecki’s The Wisdom of Crowds have popularized the idea that a crowd can be wise even when its individual members are not, and crowds can take many forms. For example, a panel of experts can form a crowd, just as the market of PredictIt users may form another.
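The statistical intuition behind the “wisdom of the crowd” is easy to demonstrate with a toy simulation. The sketch below is not our aggregation method – it simply assumes a crowd of independent, noisy guessers and shows that averaging their guesses beats the typical individual:

```python
import random

random.seed(42)

TRUE_VALUE = 100.0  # the quantity the crowd is trying to estimate

def individual_guess():
    # Each crowd member is noisy: centered near the truth, wide spread.
    return TRUE_VALUE + random.gauss(0, 25)

crowd = [individual_guess() for _ in range(1000)]

# The crowd's collective estimate: a simple average of all guesses.
crowd_estimate = sum(crowd) / len(crowd)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# The typical individual's error, for comparison.
mean_individual_error = sum(abs(g - TRUE_VALUE) for g in crowd) / len(crowd)

print(f"crowd error:          {crowd_error:.2f}")
print(f"avg individual error: {mean_individual_error:.2f}")
```

Under these assumptions the independent errors largely cancel, so the crowd average lands far closer to the truth than almost any single guesser – the effect works even though no individual in the crowd is especially “wise.”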
Lastly, algorithms are best demonstrated by the progress of “Artificial Intelligence” or “Machine Learning” technologies. Can I safely turn right at this intersection in four seconds? Is this borrower likely to repay their mortgage over the next 30 years? Algorithms are systematic approaches based on explicit, data-driven models. While humans can technically execute algorithms without the aid of computers, our general distaste for arithmetic has left this task to the machines.
While recovering from a brunch in Chicago on Saturday, November 5th, Josh, Tyler, and I applied this framework to the upcoming nomination process. Experts – well, the papers were already full of their guesses. Algorithms – not much data to use here, so we couldn’t train a model. And so, through a process of elimination, crowds it was.
In reality, we spend much of our time helping clients deal with issues just like these. In addition to running FantasyJustice, we’ve run FantasySCOTUS, a Supreme Court prediction tournament, for the last 6 years; we also offer a legal technology product called LexSemble used by corporate legal departments and law firms. Will the FTC approve our merger? Should we settle this commercial litigation? How much in damages will the EPA seek? And, most importantly, which experts, attorneys, and law firms have been right about questions like these in the past?
Armed with this experience, the rest was easy. We had a site up and running within days. Referrals through Twitter and our FantasySCOTUS community brought in the first few hundred predictions. Josh, iPad in hand, walked the floor of the Federalist Society Conference, collecting votes from dozens of experts (as some of the potential nominees watched!). We even had Russian and Brazilian botnets weigh in with their opinions.
In the end, our process got the right answer, and did so quickly. Our crowd of interested parties, many of whom would be deemed experts, provided nearly 4000 opinions without any offer of reward or compensation. The results of the poll were public and transparent from day one, and we’ll be publishing detailed vote logs (including IPs) in the near future.
The moral of the story? The golden era of “the cult of the expert” is over. Armed with science on the judgement of groups of people and technology to help, we can do much better than relying on one person’s gut instinct.
I am very proud of my team, and the wisdom of the crowds, for getting this prediction exactly right.