Bryant Walker Smith questions whether this is the proper focus when considering the safety of self-driving cars:
“[Our car] always does the right thing.”
This quote, which has been attributed to people close to Google’s self-driving car project, suggests that Google’s cars (and hence their designers) are technically and morally infallible. This suggestion, though probably unintentional, concerns me.
My technical critique is straightforward: Systems fail. Engineers know this. So do lawyers. How can we credibly say, at this point or perhaps at any point, that “[n]othing is going to catch this car by surprise”?
Rather than asserting that computers are perfect drivers, we might ask whether they actually drive better than humans.
So are they safer?
In 2009, according to very rough estimates, motor vehicles in the United States traveled three trillion miles (yes, trillion) and were involved in 11 million crashes. These figures suggest a rate of one crash for every 275,000 vehicle miles traveled; the injury- and fatal-crash rates are orders of magnitude lower. In comparison, Google’s cars have reportedly traveled some 200,000 miles with “occasional” intervention by safety drivers and 1,000 miles without any human intervention. I have not asked about the circumstances under which these human drivers intervene. I also don’t know the extent to which this travel is representative of the American driving experience; for example, it may include a disproportionately large or small share of urban environments, extreme weather, and unusual events.
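For readers who want to see where that per-mile rate comes from, here is the back-of-the-envelope arithmetic, taking the two rough 2009 estimates above at face value (a sketch only; the variable names are mine):

```python
# Rough 2009 U.S. figures cited above; both are coarse estimates, nothing more.
vehicle_miles = 3e12   # ~3 trillion vehicle miles traveled
crashes = 11e6         # ~11 million crashes

print(f"about one crash per {vehicle_miles / crashes:,.0f} vehicle miles")
# prints: about one crash per 272,727 vehicle miles, i.e. roughly 275,000
```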
For the sake of a cursory analysis, let’s assume that any mile driven (whether by a human or a computer) is identical to any other mile driven. Let’s also assume that our crash and mileage data are accurate and precise. Under these assumptions (and an exponential distribution), Google’s cars would need to drive themselves over 1.25 million miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars. If we look only at fatal crashes, this minimum skyrockets to nearly 440 million miles. To my knowledge, Google has yet to reach these milestones.
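To make the arithmetic behind those mileage thresholds explicit, here is a minimal sketch of the calculation under the same assumptions (every mile is identical, the data are accurate, and crashes arrive as a Poisson process, so miles between crashes are exponentially distributed). The function name miles_needed is mine, introduced only for illustration:

```python
import math

# If self-driving cars actually crashed at least as often as the human
# baseline (one crash per miles_per_crash miles), the probability of seeing
# ZERO crashes over n miles would be exp(-n / miles_per_crash).
# Driving n miles without incident therefore rules out that baseline with
# 99 percent confidence once exp(-n / miles_per_crash) <= 0.01,
# i.e. once n >= miles_per_crash * ln(100).

def miles_needed(miles_per_crash: float, confidence: float = 0.99) -> float:
    """Incident-free miles needed to beat the human baseline at the given confidence."""
    return miles_per_crash * math.log(1.0 / (1.0 - confidence))

print(f"{miles_needed(275_000):,.0f}")
# prints 1,266,422, i.e. the "over 1.25 million miles" above.
# The fatal-crash threshold follows from the same formula: plug in the far
# larger miles-per-fatal-crash baseline and the requirement climbs into the
# hundreds of millions of miles.
```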
(By the way, can we draw meaningful conclusions about safety from total crash data rather than from injury- and fatal-crash data alone? This issue merits discussion for at least another million miles.)
I expect similar questions to be posed about other technologies replacing humans.