This paper looks really cool. Here is the abstract:
In Judges in Jeopardy!: Could IBM’s Watson Beat Courts at Their Own Game?, Betsy Cooper examines IBM’s Watson computer and how it might affect the process by which new textualists interpret statutes. Cooper describes new textualism as being founded on the ‘ordinary meaning’ of language. She writes: “New textualists believe in reducing the discretion of judges in analyzing statutes. Thus, they advocate for relatively formulaic and systematic interpretative rules. How better to limit the risk of normative judgments creeping into statutory interpretation than by allowing a computer to do the work?”
Cooper’s essay considers how Watson – the IBM computer which won a resounding victory against prized human contestants on Jeopardy – might fare as a new textualist. She concludes that Watson has many advantages over humans. For example, a computer can pinpoint the frequency with which a phrase is used in a particular statutory context, and can “estimate the frequency with which each connotation arises, to determine which is most ‘ordinary.’” And Watson avoids bias: “when he makes mistakes, these mistakes are not due to any biases in his evaluation scheme” because the computer has “no normative ideology of his own.”
Nevertheless, Cooper ultimately concludes that Watson has a fatal flaw: it lacks the normative ideology that is essential for ethical judging. Watson can provide judges “a baseline against which to evaluate their own interpretations of ‘ordinary meaning,’” but cannot replace the job of judging itself.
From the article:
Could Watson perform better than judges at the tasks of statutory interpretation? Each of the three elements of new textualist interpretation—premise, process, and reasoning—points toward the possibility of Watson outperforming new textualist judges at their own game.
First, computers support new textualists’ premise by offering a mechanical way of determining the “ordinary meaning” of a statute. According to Merriam-Webster’s Collegiate Dictionary, “ordinary” means “of a kind to be expected in the normal order of events; routine; usual.” The common factor in each part of the definition is frequency; given a set of circumstances, the ordinary outcome is the outcome that occurs more often than other possible outcomes. Humans are flawed textualists because they have only one frame of reference: their own “ordinary” experience. A computer is better equipped to identify the frequency with which a particular phrase occurs in common parlance.
Take a famous example: in Muscarello v. United States, the Supreme Court debated the meaning of the phrase “carries a firearm.” The majority argued that the ordinary meaning of carrying a gun included transporting it in a vehicle. The dissent disagreed, arguing that “carry” required holding a gun on one’s person. The two sides marshaled a vast array of evidence from the public domain to demonstrate that their interpretation was the most ordinary, including dictionaries, news articles, and even the Bible. Watson could have saved the Court’s law clerks a great deal of trouble. The computer would have been able to calculate how frequently the terms “carry” and “vehicle” (or their synonyms) appear together versus “carry” and “person” (or their synonyms). Thus, in at least one sense Watson is better at textualist interpretation than humans—he can not only identify ordinary meanings but can tell us just how ordinary a particular meaning is!
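A quick aside: Cooper’s frequency test is concrete enough to sketch in code. The toy Python below illustrates the kind of co-occurrence counting she describes; it is emphatically not Watson’s actual pipeline, and the mini-corpus, signal-word sets, and context window are all invented for illustration.

```python
import re

# Invented mini-corpus; a real test would run over millions of documents
# (news archives, briefs, books), as the Muscarello opinions did by hand.
corpus = [
    "He would carry the pistol in the glove compartment of his truck.",
    "She prefers to carry the gun in a holster on her hip.",
    "They transport firearms in the trunk of the car.",
]

# Assumed signal words for the two competing readings of "carry".
CARRY_WORDS = {"carry", "carries", "carrying", "transport"}
VEHICLE_WORDS = {"vehicle", "car", "truck", "trunk", "glove"}
PERSON_WORDS = {"holster", "pocket", "hip", "hand", "belt"}

def cooccurrences(words, signal_words, window=6):
    """Count 'carry' terms appearing within `window` tokens of a word
    signalling the given reading."""
    hits = 0
    for i, w in enumerate(words):
        if w in CARRY_WORDS:
            context = words[max(0, i - window):i + window + 1]
            if any(c in signal_words for c in context):
                hits += 1
    return hits

tokenized = [re.findall(r"[a-z]+", s.lower()) for s in corpus]
vehicle = sum(cooccurrences(t, VEHICLE_WORDS) for t in tokenized)
person = sum(cooccurrences(t, PERSON_WORDS) for t in tokenized)
print(f"'carry' near vehicle words: {vehicle}; near person words: {person}")
```

On a real corpus, the larger count would be Cooper’s measure of which reading is more “ordinary.”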
Watson’s superior recall is particularly important given the historical nature of statutes, whose meanings can change over time. Justice Scalia, for example, has suggested that absolute immunity for prosecutors did not exist at common law. A well-informed Watson could report back in a matter of minutes as to the likelihood that this was true. Watson may even be able to help decipher antiquated meanings on which there is no modern expertise—such as common law phrases no longer used today—by looking at the context in which such phrases were used.
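The historical point reduces to an equally simple corpus query: count a phrase’s occurrences across date-stamped documents and watch for when the usage appears or disappears. The sketch below assumes a hypothetical (year, text) corpus and an arbitrary fifty-year bucket; real sources would be digitized reporters, treatises, and period newspapers.

```python
from collections import defaultdict

# Invented, date-stamped corpus entries: (year, text).
corpus = [
    (1791, "no absolute immunity shielded the prosecutor at common law"),
    (1896, "the court extended absolute immunity to the prosecuting attorney"),
    (1976, "prosecutors enjoy absolute immunity from civil suit"),
]

def frequency_by_era(corpus, phrase, era_years=50):
    """Tally occurrences of `phrase` per era-sized bucket of years, a
    crude way to see when a usage emerges, shifts, or dies out."""
    counts = defaultdict(int)
    for year, text in corpus:
        bucket = (year // era_years) * era_years
        counts[bucket] += text.lower().count(phrase.lower())
    return dict(sorted(counts.items()))

print(frequency_by_era(corpus, "absolute immunity"))
# -> {1750: 1, 1850: 1, 1950: 1}
```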
This raises a second Watsonian virtue: his process of interpretation. Most computers merely isolate instances where identical words appear closest to one another. Watson’s algorithms go a step further by distinguishing which connotation of a particular word is intended based on the particular context. Watson might not only look for words elsewhere in the statute, but could also draw from other words not in the statute to provide additional interpretative context. In the Muscarello example, there was at least one contextually appropriate usage of “carry” that was not uncovered by either party in the litigation: whether state “carry” gun laws (for example, “open carry” and “concealed carry” gun laws) apply to vehicles. Watson could have estimated the frequency with which each connotation arises—including the state law use of “carry” not considered by the actual parties—to determine whether “carry” ordinarily encompasses transportation in vehicles.
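That sense-distinguishing step is, in essence, word-sense disambiguation. Here is a toy, Lesk-style version of the idea: each connotation of “carry” gets a hand-built “signature” of context words, each occurrence is assigned to the best-overlapping sense, and the tallies say which connotation is most ordinary. The sense labels and signatures are my assumptions, not Watson’s.

```python
from collections import Counter

# Assumed signature context words for three connotations of "carry",
# including the state gun-law usage the Muscarello parties overlooked.
SENSES = {
    "transport in a vehicle": {"car", "truck", "vehicle", "trunk", "drive"},
    "bear on the person": {"holster", "pocket", "hip", "hand", "wear"},
    "state carry laws": {"concealed", "open", "permit", "licensed"},
}

def classify(context):
    """Assign one occurrence of 'carry' to whichever sense's signature
    overlaps its surrounding words most (None if nothing overlaps)."""
    scores = {sense: len(sig & context) for sense, sig in SENSES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Each set holds the context words around one occurrence of "carry".
occurrences = [
    {"concealed", "permit", "statute"},
    {"gun", "trunk", "truck"},
    {"holster", "hip", "loaded"},
    {"drive", "car", "glove"},
]
tally = Counter(classify(ctx) for ctx in occurrences)
print(tally.most_common())  # the top sense is the most "ordinary" one
```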
Finally, and perhaps most importantly, Watson’s reasoning is more systematic than humans’ reasoning. Inasmuch as he makes errors, these errors are randomly distributed. His mistakes are not skewed due to political preferences, personal relationships, or other sources of human prejudice. Watson by design avoids the ideological bias of judges—which textualists so deeply fear—because, of course, he does not have any ideology of his own. These arguments are summarized in Figure 1.
At Computational Legal Studies, Katz writes:
It is worth noting that although high-end offerings such as Watson represent a looming threat to a variety of professional services, one need not look to something as lofty as Watson to realize the future is likely to be turbulent. Law’s Information Revolution is already underway, and it is a revolution in data and a revolution in software. Software is eating the world, and the market for legal services has already been impacted. This is only the beginning. We are at the very cusp of a data-driven revolution that will usher in new fields such as Quantitative Legal Prediction (which I have discussed here).
Pressure on Big Law will continue. Simply consider the extent to which large institutional clients are growing in sophistication. These clients are developing the data-streams necessary to effectively challenge their legal bills. Whether this challenge comes from corporate procurement departments, from corporate law departments, or with the aid of third parties – the times they are indeed a-changin’.
And from Larry Ribstein on Watson:
Of course computers won’t replace humans anytime soon. Watson’s creator conceded that “A computer doesn’t know what it means to be human.”
Yes, but do lawyers know that?