Blog

Between 2009 and 2020, Josh published more than 10,000 blog posts. Here, you can access his blog archives.

2020
2019
2018
2017
2016
2015
2014
2013
2012
2011
2010
2009

“But if computers can write journalism, why shouldn’t they be able to write briefs?”

September 11th, 2011

Larry Ribstein comments on a Times article about computers that can write articles:

The company’s (Narrative Science) software takes data, like that from sports statistics, company financial reports and housing starts and sales, and turns it into articles. * * *

The Big Ten Network, a joint venture of the Big Ten Conference and Fox Networks, began using the technology in the spring of 2010 for short recaps of baseball and softball games. * * *

The Narrative Science software can make inferences based on the historical data it collects and the sequence and outcomes of past games. To generate story “angles,” explains Mr. Hammond of Narrative Science, the software learns concepts for articles like “individual effort,” “team effort,” “come from behind,” “back and forth,” “season high,” “player’s streak” and “rankings for team.” Then the software decides what element is most important for that game, and it becomes the lead of the article, he said. The data also determines vocabulary selection. A lopsided score may well be termed a “rout” rather than a “win.” * * *
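To make the mechanism concrete, here is a rough sketch (in Python) of the kind of rule-based pipeline the Times describes: score a few candidate “angles,” let the most important one become the lead, and let the margin drive the vocabulary. The angle names, thresholds, and scoring below are my own illustrative guesses, not Narrative Science’s actual software.

```python
# A minimal, hypothetical sketch of rule-based data-to-text generation:
# score candidate "angles," pick the most newsworthy one as the lead,
# and let the score margin drive word choice.

def pick_verb(margin):
    """Vocabulary selection driven by the data (a lopsided score becomes a 'rout')."""
    if margin >= 10:
        return "routed"
    if margin >= 4:
        return "beat"
    return "edged"

def pick_angle(game):
    """Score candidate angles and return the most important one as the lead."""
    angles = {
        "come_from_behind": game["largest_deficit_overcome"],
        "individual_effort": game["top_player_rbis"],
        "season_high": game["runs"] - game["season_avg_runs"],
    }
    return max(angles, key=angles.get)

def write_recap(game):
    margin = game["runs"] - game["opp_runs"]
    verb = pick_verb(margin)
    lead = pick_angle(game)
    sentence = f'{game["team"]} {verb} {game["opponent"]} {game["runs"]}-{game["opp_runs"]}.'
    if lead == "come_from_behind":
        sentence += f' They erased a {game["largest_deficit_overcome"]}-run deficit.'
    elif lead == "individual_effort":
        sentence += f' {game["top_player"]} drove in {game["top_player_rbis"]} runs.'
    return sentence

print(write_recap({
    "team": "Wildcats", "opponent": "Hoosiers",
    "runs": 12, "opp_runs": 2, "season_avg_runs": 6,
    "largest_deficit_overcome": 0,
    "top_player": "Smith", "top_player_rbis": 5,
}))
```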

But will computers replace humans?

The innovative work at Narrative Science raises the broader issue of whether such applications of artificial intelligence will mainly assist human workers or replace them. Technology is already undermining the economics of traditional journalism. Online advertising, while on the rise, has not offset the decline in print advertising. But will “robot journalists” replace flesh-and-blood journalists in newsrooms?


Larry opines:

Yesterday I wrote about computers predicting future court decisions.  I concluded, however, that lawyers will still have to create the law that is being predicted “by making arguments and human judgments.”

But if computers can write journalism, why shouldn’t they be able to write briefs?  Both types of writing have the sort of predictability that enables production by even primitive artificial intelligence.   You could even say this type of predictability is what makes for a “profession” that can be taught and learned in schools and through apprenticeship.  (Which suggests that computers won’t replace bloggers.)

So now defending lawyers from computers requires retreating further uphill to someplace computers can’t climb.  Computers can write briefs, but they can’t decide what issues need to be briefed or legal strategy.

Even so, as I concluded yesterday, “future lawyers will have to learn to work alongside computers.”  I speculate in my article, Practicing Theory, on the implications of this new world for legal education.  It certainly won’t involve training for the sort of “real life law practice” that present-day lawyers think is so important but that computers will soon render obsolete.

Larry notes that the types of partnerships that enabled this to happen in the journalism field likely won’t happen in the legal field.

The NYT story noted that Narrative Science resulted from a collaboration between the journalism and computer science schools at Northwestern. It would be nice if law schools explored similar collaborations.  Unfortunately, as I discuss in my article, they are saddled by regulation that doesn’t inhibit experimentation in other professions.

So while we may yet see productive partnerships between journalists and computer scientists, I wonder whether lawyers, stubbornly resisting the future, will simply find themselves on the cutting room floor (to borrow from another industry’s old technology).

Ahem. Get me a faculty position, and I’ll take it from there. At the moment, I have 15 interviews scheduled for the AALS.

“Judges in Jeopardy!: Could IBM’s Watson Beat Courts at Their Own Game?”

September 9th, 2011

This paper looks really cool. Here is the abstract:

In Judges in Jeopardy!: Could IBM’s Watson Beat Courts at Their Own Game?, Betsy Cooper examines IBM’s Watson computer and how it might affect the process by which new textualists interpret statutes. Cooper describes new textualism as being founded on the ‘ordinary meaning’ of language. She writes: “New textualists believe in reducing the discretion of judges in analyzing statutes. Thus, they advocate for relatively formulaic and systematic interpretative rules. How better to limit the risk of normative judgments creeping into statutory interpretation than by allowing a computer to do the work?”

Cooper’s essay considers how Watson – the IBM computer which won a resounding victory against prized human contestants on Jeopardy – might fare as a new textualist. She concludes that Watson has many advantages over humans. For example, a computer can pinpoint the frequency with which a phrase is used in a particular statutory context, and can “estimate the frequency within which each connotation arises, to determine which is most ‘ordinary.’” And Watson avoids bias: “when he makes mistakes, these mistakes are not due to any biases in his evaluation scheme” because the computer has “no normative ideology of his own.”

Nevertheless, Cooper ultimately concludes that Watson has a fatal flaw: it lacks a normative ideology that is essential for ethical judging. Watson can provide to judges “a baseline against which to evaluate their own interpretations of ‘ordinary meaning,’” but cannot replace the job of judging itself.

From the article:

Could Watson perform better than judges at the tasks of statutory interpretation? Each of the three elements of new textual interpretation—premise, process, and reasoning—points toward the possibility of Watson outperforming new textualist judges at their own game.

First, computers support new textualists’ premise by offering a mechanical way of determining the “ordinary meaning” of a statute. According to Merriam-Webster’s Collegiate Dictionary, “ordinary” means “of a kind to be expected in the normal order of events; routine; usual.” The common factor in each part of the definition is frequency; given a set of circumstances, the ordinary outcome is the outcome that occurs more often than other possible outcomes. Humans are flawed textualists because they have only one frame of reference: their own “ordinary” experience. Any computer is better equipped to identify the frequency with which a particular phrase occurs in common parlance.

Take a famous example: in Muscarello v. United States, the Supreme Court debated the meaning of the phrase “carries a firearm.” The majority argued that the ordinary meaning of carrying a gun included transporting it in a vehicle. The dissent disagreed, arguing that “carry” required holding a gun on one’s person. The two sides marshaled a vast array of evidence from the public domain to demonstrate that their interpretation was the most ordinary, including dictionaries, news articles, and even the Bible. Watson could have saved the Court’s law clerks a great deal of trouble. The computer would have been able to calculate how frequently the terms “carry” and “vehicle” (or their synonyms) appear together versus “carry” and “person” (or their synonyms). Thus, in at least one sense Watson is better at textualist interpretation than humans—he can not only identify ordinary meanings but can tell us just how ordinary a particular meaning is!
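Here is a rough sketch of the kind of frequency comparison Cooper describes, applied to Muscarello. The toy corpus, window size, and synonym lists are hypothetical stand-ins of my own; Watson’s actual pipeline is obviously far more sophisticated.

```python
# A hedged sketch: count how often "carry" words appear near vehicle-like
# words versus person-like words in a (tiny, made-up) corpus.
import re
from itertools import product

CORPUS = [
    "He was arrested for carrying a pistol in his truck.",
    "She carried the gun in her coat pocket.",
    "The suspect carried a firearm on his person during the robbery.",
    "Officers found him transporting a weapon in the car.",
]

CARRY = {"carry", "carried", "carrying", "transporting"}
VEHICLE = {"vehicle", "truck", "car"}
PERSON = {"person", "pocket", "coat", "hand"}

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def cooccurrences(carry_words, context_words, window=8):
    """Count sentences where a 'carry' word appears within `window`
    tokens of a context word (vehicle-like or person-like)."""
    count = 0
    for sentence in CORPUS:
        toks = tokens(sentence)
        carry_pos = [i for i, t in enumerate(toks) if t in carry_words]
        context_pos = [i for i, t in enumerate(toks) if t in context_words]
        if any(abs(i - j) <= window for i, j in product(carry_pos, context_pos)):
            count += 1
    return count

print("carry + vehicle:", cooccurrences(CARRY, VEHICLE))
print("carry + person: ", cooccurrences(CARRY, PERSON))
```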

Watson’s superior recall is particularly important given the historical nature of statutes, meanings of which can change over time. Justice Scalia, for example, has suggested that absolute immunity for prosecutors did not exist at common law. A well-informed Watson could report back in a matter of minutes as to the likelihood that this was true. Watson even may be able to help decipher antiquated meanings on which there is no modern expertise—such as common law phrases no longer used today—by looking at the context in which such phrases were used.

This raises a second Watsonian virtue: his process of interpretation. Most computers merely isolate instances where identical words appear most closely to one another. Watson’s algorithms go a step further by distinguishing which connotation of a particular word is intended based on the particular context. Watson might not only look for words elsewhere in the statute, but could also draw from other words not in the statute to provide additional interpretative context. In the Muscarello example, there was at least one contextually-appropriate usage of “carry” that was not uncovered by either party in the litigation: whether state “carry” gun laws (for example, “open carry” and “concealed carry” gun laws) apply to vehicles. Watson could have estimated the frequency with which each connotation arises—including the state law use of “carry” not considered by the actual parties—to determine whether “carry” ordinarily encompasses transportation in vehicles.
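And a companion sketch of the connotation-counting step: tag each occurrence of “carry” with a connotation based on nearby cue words, then report how often each connotation arises. The cue lists, including the “open carry”/“concealed carry” licensing sense mentioned above, are illustrative guesses rather than Watson’s method.

```python
# A hedged sketch of connotation disambiguation by context, followed by a
# frequency estimate of how often each connotation arises.
from collections import Counter

CONNOTATION_CUES = {
    "transport_in_vehicle": {"car", "truck", "vehicle", "trunk"},
    "bear_on_person": {"pocket", "holster", "waistband", "hand"},
    "licensing_sense": {"open", "concealed", "permit", "license"},
}

def classify(sentence):
    """Assign a sentence containing 'carry' to a connotation via cue words."""
    words = set(sentence.lower().split())
    for connotation, cues in CONNOTATION_CUES.items():
        if words & cues:
            return connotation
    return "unclassified"

sentences = [
    "he carried the gun in the truck while he drove",
    "she carried a pistol in her pocket",
    "the state issues concealed carry permits",
    "he carried the rifle in his hand",
]

counts = Counter(classify(s) for s in sentences)
total = sum(counts.values())
for connotation, n in counts.most_common():
    print(f"{connotation}: {n/total:.0%}")
```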

Finally, and perhaps most importantly, Watson’s reasoning is more systematic than humans’ reasoning. Inasmuch as he makes errors, these errors are randomly distributed. His mistakes are not skewed due to political preferences, personal relationships, or other sources of human prejudice. Watson by design avoids the ideological bias of judges—which textualists so deeply fear—because, of course, he does not have any ideology of his own. These arguments are summarized in Figure 1.

At Computational Legal Studies, Katz writes:

It is worth noting that although high-end offerings such as Watson represent a looming threat to a variety of professional services — one need not look to something as lofty as Watson to realize the future is likely to be turbulent. Law’s Information Revolution is already underway and it is a revolution in data and a revolution in software.  Software is eating the world and the market for legal services has already been impacted.  This is only the beginning.  We are at the very cusp of a data driven revolution that will usher in new fields such as Quantitative Legal Prediction (which I have discussed here).

Pressure on Big Law will continue.  Simply consider the extent to which large institutional clients are growing in their sophistication.  These clients are developing the data-streams necessary to effectively challenge their legal bills.  Whether this challenge is coming from corporate procurement departments, corporate law departments or with the aid of third parties – the times they are indeed a-changin’.

And from Larry Ribstein on Watson:

Of course computers won’t replace humans anytime soon.  Watson’s creator conceded that “A computer doesn’t know what it means to be human.”

Yes, but do lawyers know that?

“A year after Congress passed the broadest financial overhaul since the Great Depression, the law has spawned a host of new businesses to help Wall Street comply with — and capitalize on — the hundreds of new regulations”

September 9th, 2011

Yet more unintended consequences of Dodd-Frank. Larry Ribstein writes about a Times piece discussing how some firms are charging insane rates for access to the developments of this boondoggle.

Some law firms have even become small-scale publishing houses. Davis Polk & Wardwell, for example, is offering a $7,500-a-month subscription to a Web site that tracks the progress of every Dodd-Frank requirement. So far, more than 30 large financial companies have signed up.

As Congress started drafting the legislation in the spring of 2010, Davis Polk & Wardwell began compiling a spreadsheet to keep its lawyers updated on hundreds of regulations. Then, Gabriel D. Rosenberg, a young associate, proposed turning the firm’s database of legal summaries and rule-making deadlines into an interactive site — and spent a weekend building a prototype.

By late July, clients started logging on to the “regulatory tracker” — and have steered more business to the firm as a result, said Randall D. Guynn, the head of Davis Polk’s financial institutions group. “There were a lot of new relationships because people want this,” he said.

Ribstein sees this as an example of a product of Law’s Information Revolution.

Oh, how I love admin law:

 “It is a full-employment act,” said Gregory J. Lyons, a partner at Debevoise, where a team of a half-dozen lawyers has drafted 30-plus comment letters in the last six months.

“The law is passed, but we are still reasonably early in the process,” Mr. Lyons said. “There is still a lot to be written.”

Plea Bargaining and Overcriminalization

August 30th, 2011

Interesting article in JLEP (at George Mason!)

In discussing imperfections in the adversarial system, Professor Ribstein notes in his article entitled Agents Prosecuting Agents, that “prosecutors can avoid the need to test their theories at trial by using significant leverage to virtually force even innocent, or at least questionably guilty, defendants to plead guilty.” If this is true, then there is an enormous problem with plea bargaining, particularly given that over 95% of defendants in the federal criminal justice system succumb to the power of bargained justice. As such, this piece provides a detailed analysis of modern-day plea bargaining and its role in spurring the rise of overcriminalization. In fact, this article argues that a symbiotic relationship exists between plea bargaining and overcriminalization because these legal phenomena do not merely occupy the same space in our justice system, but also rely on each other for their very existence.

From the article:

As these hypothetical considerations demonstrate, plea bargaining and overcriminalization perpetuate each other, as plea bargaining shields over-criminalization from scrutiny and overcriminalization creates the incentives that make plea bargaining so pervasive.

Makes sense.

No Krugman Alert

August 28th, 2011

Before Irene, I announced a Krugman Alert:

Let’s see if the Keynesian Nobel Laureate at the Times writes some column about how the (potential, hopefully negligible) damage caused by Hurricane Irene to New York City is a positive event because it will create spending and benefit the economy.

During the storm, I commented on broken windows.

It seems the storm petered out, so I will quote Larry Ribstein (on Facebook):

Irene was less bad than expected. I imagine the market will go down tomorrow. Not enough broken windows for Krugman.

Update: An estimate of the damage:

Irene may cost insurers as much as $3 billion to cover U.S. damage, with overall economic losses of $7 billion, according to Kinetic Analysis Corp., which predicts disaster impact. The U.S. suffered $35 billion in losses in nine separate events so far in 2011, according to the National Oceanic and Atmospheric Administration, tying a record for disasters causing more than $1 billion damage in a single year.

Update: From Freakonomics, on the media’s “overkill”

But here’s where I blame the media. Rather than admitting on Sunday that the storm had simply not been so bad, the New York City media was way too eager to join in on the fray, don its rain jacket, and get its disaster ya-yas out. While there is clearly a danger in under-estimating the risk of events, there are also negative consequences in trumping up the damages of an event that, ultimately, wasn’t all that damaging. To me, Sunday’s all-day reporting blitz was classic overkill, and ultimately undermines the local TV media’s credibility to be able to tell me when something matters, and when it doesn’t.