Imagine the world of the future. Artificial intelligence has advanced to the point where robots, in various forms, can perform not only rote tasks (such as vacuuming or driving), but also tasks that we would think are reserved for the human mind.
Say hi to the Lawyer-Robot. In much the same way that you would explain to a lawyer what your problem is (contract dispute, family issue, personal injury, etc.), the Lawyer-Robot will ask you questions (the same questions one would be asked during an attorney-client interview) to understand the nature of your situation. The Lawyer-Robot will scan any relevant documents and research the positions of any potential opposing parties. The Lawyer-Robot will know the law in various jurisdictions. Eventually, the Lawyer-Robot will recommend a course of action.
Meet the Doctor-Robot. By measuring your vital signs, taking tests, instantly analyzing the results, and asking you questions, the Doctor-Robot can provide highly accurate diagnoses almost immediately at a very low cost.
Assume that both Lawyer-Robot and Doctor-Robot can perform at a level at, or above, what a human could do.
The actual doing (in the case of the Lawyer-Robot, litigation or transactional work; in the case of the Doctor-Robot, surgery or other forms of treatment) would still be done by humans. Rather, it is the initial screening work of both professionals that would be automated, and that screening would aid and facilitate the human expert's job.
Handing over to machines tasks which we reserve for humans is a big jump. I suppose ethicists can consider whether this is a desirable end and whether it would actually improve our lives. I'll assume (for better or worse) that the answer to that question is yes. My assumption, frankly, is based on how technology has progressed, and how it will progress. I'm not sure that, even assuming this technology is a bad thing, it can be stopped. Or rather, it can be stopped, but only by other concerns.
Putting aside the technological issues, let’s consider the implications of such robots.
Let’s start with the Lawyer-Robot.
First, regulatory issues. Today, the practice of law is regulated by state bar associations. In order to engage in the practice of law (a term that is quite vague, perhaps even void for vagueness; under any definition, what I describe above would constitute practicing law), one would have to be licensed (presumably by going to law school and passing the bar). Bar associations are already going after firms like LegalZoom, which effectively provide templates that laymen can fill out. I'd imagine bar associations would have a field day with Lawyer-Robot. How would occupational-licensing regimes work for artificial intelligence?
Second, ethical issues. How would the rules of professional conduct apply to Lawyer-Robot? Would an attorney-client relationship be possible? What about rules of confidentiality? What about conflicts of interest? What about (getting meta here) doing the right thing? Would Lawyer-Robot have an obligation to report unethical conduct by a client? Would it withdraw under circumstances where a real lawyer would withdraw?
Third, liability issues. What happens if Lawyer-Robot gives bad legal advice? Would a malpractice suit lie? If so, against whom? The developer of the software? Would Lawyer-Robot have malpractice insurance? Who would insure that?
Fourth, marketability issues. I suspect that laymen would love this product: easy-to-access, cheap legal advice. This would do wonders for access to justice. But what about corporations? Would general counsel, who are wedded to the old ways, turn away law firms and adopt such technology? Law firms that bill by the hour sure as hell would have no interest in using a system that can render them obsolete. Unless this technology can disrupt existing interests, it would go nowhere fast.
These four issues, and not the technology, seem to be among the most pressing to consider before we can reach a state where artificial intelligence can transform our society. If the regulatory burdens, ethical limitations, and liability rules are not reconsidered, the technology will be unable to progress.