I, For One, Welcome Our New Computer Overlords, But Wonder About Their Liability

January 25th, 2012

Google has developed cars that can drive themselves. IBM has developed a program that can win at Jeopardy and diagnose diseases. I am working on a system that can do many of the things a lawyer does. In short, our new computer overlords are coming. One key issue I have been mulling over is liability.

The Times writes about a conference held at Santa Clara Law School on the legal implications of self-driving cars. The discussion of liability is particularly pertinent:

As Google has demonstrated, computerized systems that replace human drivers are now largely workable and could greatly limit human error, which causes most of the 33,000 deaths and 1.2 million injuries that now occur each year on the nation’s roads.

Such vehicles also hold the potential for greater fuel efficiency and lower emissions — and, more broadly, for restoring the United States’ primacy in the global automobile industry.

But questions of legal liability, privacy and insurance regulation have yet to be addressed, and an array of speakers suggested that such challenges might pose far more problems than the technological ones.

Today major automobile makers have already deployed advanced sensor-based safety systems that both assist and in some cases correct driver actions. But Google’s project goes much further, transforming human drivers into passengers and coexisting with conventional vehicles driven by people.

Last month, Sebastian Thrun, director of Google’s autonomous vehicle research program, wrote that the project had achieved 200,000 miles of driving without an accident while cars were under computer control.

Over the last two years, Google and automobile makers have been lobbying for legislative changes to permit autonomous vehicles on the nation’s roads.

Nevada became the first state to legalize driverless vehicles last year, and similar laws have now been introduced before legislatures in Florida and Hawaii. Several participants at the Santa Clara event said a similar bill would soon be introduced in California.

Yet simple questions, like whether the police should have the right to pull over autonomous vehicles, have yet to be answered, said Frank Douma, a research fellow at the Center for Transportation Studies at the University of Minnesota.

“It’s a 21st-century Fourth Amendment seizure issue,” he said.

The federal government does not have enough information to determine how to regulate driverless technologies, said O. Kevin Vincent, chief counsel of the National Highway Traffic Safety Administration. But he added:

“We think it’s a scary concept for the public. If you have two tons of steel going down the highway at 60 miles an hour a few feet away from two tons of steel going in the exact opposite direction at 60 miles an hour, the public is fully aware of what happens when those two hunks of metal collide and they’re inside one of those hunks of metal. They ought to be petrified of that concept.”

How would liability work in such a case? What if the driver took a nap while the Googlemobile was cruising down the highway, and the car malfunctioned and caused a multi-car accident with (gasp!) fatalities? Would car insurance cover that? Would the driver be liable for vehicular manslaughter for dozing off behind the wheel (computer-driving-systems be damned)? Could someone without a driver’s license, or who cannot drive due to a disability, rely on an autonomous vehicle as an accommodation?

Now, imagine an assisted decision-making engine for the law. Assume a computer system exists that can answer many basic legal questions automatically, without the need for a lawyer. What happens if the advice is bad? If a lawyer gives bad advice, there are disciplinary proceedings with the Bar, as well as malpractice suits. Computers wouldn’t be bound by ethics codes (or would they?). Who would be subject to malpractice suits? The developer of the software? Would it be treated like a products liability case for defective technology?

This also raises the question of whether legal advice is fungible, a commodity that can easily be substituted.

Now, go back to my earlier parenthetical about bar associations and ethics codes. Going forward, I think the largest barrier will not be technological, but legal. Bar associations would label such programs as engaged in the unauthorized practice of law (similar to the suits against LegalZoom.com and the like) and try to shut them down. Entrenched interests have little incentive to enable such change. Lawyers are a key group that would oppose this. Doctors will likely oppose computers that can perform diagnoses on similar grounds. What happens if a computer makes a bad diagnosis?

So, to bring this argument back to the broader issue of liability for our new computer overlords: the law needs to evolve a bit, and I’m guessing it will begin in the legislature, not the courts. More and more, computers will be responsible for making decisions without human input. Simply shutting them down on the theory that only human judgment can meet certain criteria would halt these technologies. New systems of law will have to emerge to impose regulations that are congruent with the nature of artificial intelligence.

Legal advice, unlike autonomous cars, has the unique element of involving speech! I’ve been thinking through a constitutional challenge to such laws based on commercial speech doctrine and limitations on expression. No matter how I look at it, this runs into the same problem as an economic liberty challenge: the state would certainly have a rational basis to limit the dispensing of legal advice to duly barred lawyers.

There are lots of issues to consider. I am considering them. I’ll be in court, sooner or later.