I really enjoyed my time at the We Robot conference this past weekend. It clarified my thinking a bit, and now I am resolved to write two separate pieces.
The first will be titled “Regulating Robots.” To say that the legal landscape for robotics is uncharted is a bit of an understatement. The entire weekend, the attendees were struggling to even define what a robot is (is it a tool just like a hammer, or is it something more?), decide what the best metaphor in the law would be (product liability, agency law, etc.), and worry about what happens if a robot harms someone else (who would be liable, the creator of the robot, the operators of the robot, the robot itself!?), etc.
My contribution will confront what I think will be the immediate legal concern for these robots: regulations. Most of the robots-in-the-works that are controversial are those that provide a service traditionally provided by another person. Primarily, these robots deal with health care in various fashions, helping the elderly or the disabled accomplish tasks that they could not do by themselves. Ultimately, these robots likely will be able to replace nurses and other health care workers. More sophisticated robots are being programmed to diagnose diseases, prescribe treatments, and even perform surgeries. Ultimately, these robots will be able to replace much of what a doctor does. Ian Kerr and Jason Millar call this Dr. Watson. I can also add to that mix a robot that can understand the legal problems people have, and advise them on how best to proceed in the courts. Call it Watson, Esq.
All of these professions directly touch on the health, safety, and welfare of society as a whole, and, as we all know, are regulated by occupational licensing regimes. Nurses, doctors, lawyers, architects, professional engineers, and a host of other professionals have been deemed important enough that the state has (through various forms of regulation) allowed these groups to set minimum qualifications, training requirements, liability and insurance requirements, and in some cases fixed prices, and to police those who engage in the unauthorized practice of the profession. Some random schmo off the street, without the requisite qualifications, cannot engage in these professions.
Enter Dr. Watson or Watson, Esq. Assume (and please assume) that these systems are sophisticated enough to perform the job at an accuracy rate comparable to that of the median human licensed in the field. I think the first meaningful legal challenge for these technologies will not necessarily come through some overarching analysis of theories of liability. Rather, if public choice theory has taught me anything, it is that entrenched interests who fear this type of competition will take various steps to shut these programs down. Sure, at first, they are novelties. Watson playing Jeopardy or diagnosing diseases is cute. But once the medical profession or the legal profession figures out what these robots can do, they will coalesce and fight it, ensuring that these tools remain novelties and don't displace their work.
“Regulating Robots” would provide a roadmap of how that fight would be waged, and how proponents of artificial intelligence can set the proper legal framework to (perhaps) strike back against the entrenched interests.
The second work, closely related, will focus more specifically on what it would mean for a robot to function as a lawyer. There are a host of issues beyond the regulatory ones.
First, there is the possibility of ossification of the law. Assume that, given a set of facts, a robot can create the best legal argument for a specific jurisdiction. You could imagine that after a number of iterations, where many robots are giving the same answer to the same question over and over again, especially if that is a winning argument, a stagnation in the law results. A robot given a specific set of inputs will always deliver the same output. Granted, in the law there are seldom identical inputs (that is, facts of a case), but it is not inconceivable that the way the law as we know it develops could change. The adversarial process would have to take on a different nature.
Second, from a human perspective, some studies show that humans anthropomorphize robots (especially those in humanoid form) in the same way we anthropomorphize animals (my dog has feelings!). Further, in some cases, humans begin to empathize with robots, and even trust them more (especially if the robot is designed in a way to feed off human emotions and adapt to personalities in ways humans usually suck at). What happens if humans prefer robot lawyers to real (bloodsucking, ambulance-chasing, you fill in the adjectives) lawyers?
Third, from an expert perspective, what happens if robots and lawyers work together in some fashion? When the robot and the human lawyer disagree, whose opinion trumps? What happens if the human is right and the robot is wrong, but the pair follows the robot? Or the opposite: the human is wrong and the robot is right, but the pair follows the human? And really, how can you assess whether a legal argument is right or wrong? Courts frequently botch cases and reject good arguments.
Fourth, from a public choice perspective and a societal perspective, how will people react to robots taking jobs once reserved for humans? We see this in some sense with opposition to immigrants "taking American jobs" (rubbish), or opposition to outsourcing (also rubbish). But now, instead of a living, breathing person taking the job, it is a robot, which can work 24/7, more efficiently, for less pay. It would be interesting to consider whether there will be a Luddite blowback to such changes.
Fifth, what about liability? How would issues of legal malpractice play out here? And relatedly, assuming rules of professional responsibility apply to robots in some fashion, what about rules of ethics, confidentiality, fiduciary duties, etc.? Will people be able to trust robots? And what happens when that trust is violated?
So those are my thoughts for now. Who knows when I'll get to these two works. But I will do them.