Yesterday morning I was interviewed on the Today show about the implications of an Uber driverless car killing a pedestrian in Phoenix, Arizona. The segment is below.
The big picture
It is truly tragic that 1.3 million people are killed every year in automobile accidents (heavily weighted toward developing countries, which puts into context the already devastating death tolls of, for example, 30,000 in the USA and 1,200 in Australia). By one analysis, 94% of these deaths are caused by human error.
The promise of autonomous cars is that this number can be reduced, potentially dramatically.
It is clear that the best autonomous cars today (which very likely are NOT those of Uber) are safer drivers than the average human.
That is of course not a very high bar to reach, given drinking, distraction, texting, tiredness and many other factors that impede the driving performance of many humans.
Will we tolerate machines killing humans?
However it is certainly not true that machines need only be safer than humans. We accept death through human error far more readily than we do through machine error.
It is worth remembering that it was in 1896 that the first person was killed by a car. In the US, traffic deaths peaked at around 55,000 per year in the late 1960s. All along that trajectory we implicitly decided as a society that the value of having cars was greater than all of those deaths.
Our attitudes have changed since then, and we place a greater value on human life. We also dislike the sense of powerlessness when machines we have created kill us.
What happened in Phoenix
At the time of the interview there was very little information available on what had actually happened, save a statement from the local chief of police that “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven).”
However the release of the video taken by the car of the accident suggests this is not true.
Judging from the video, the car should at least have begun to brake on its visual input alone, yet it did not appear to slow at all at the sight of the woman.
More importantly, the LiDAR that is at the heart of almost all autonomous cars, scanning continuously in all directions to a range of at least 100m, should have detected well ahead of time that she was on the road, even while she was not visible to the cameras, and allowed a safe stop before reaching her.
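A rough back-of-the-envelope calculation shows why a 100m detection range should comfortably permit a safe stop. The figures below are illustrative assumptions, not facts from the accident report: a vehicle speed of roughly 40 mph, an emergency deceleration of 7 m/s² (achievable on dry asphalt), and a half-second detection-to-brake delay.

```python
# Back-of-the-envelope stopping distance. All figures are assumptions
# chosen for illustration, not values from the accident investigation.
speed_mph = 40                       # assumed vehicle speed
speed_ms = speed_mph * 0.44704       # mph to m/s (~17.9 m/s)
deceleration = 7.0                   # m/s^2, assumed emergency braking
reaction_time = 0.5                  # s, assumed detection-to-brake delay

reaction_distance = speed_ms * reaction_time       # distance before braking
braking_distance = speed_ms ** 2 / (2 * deceleration)  # v^2 / 2a
total = reaction_distance + braking_distance

print(f"stopping distance ≈ {total:.0f} m")  # far less than 100 m
```

Even with generous margins on these assumed numbers, the total stopping distance is on the order of 30–40m, a fraction of the range at which a LiDAR should have registered the pedestrian.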
In his excellent analysis of the accident Brad Templeton suggests it is possible the LiDAR had been turned off, which if true would be criminal.
It is also clear the human backup driver simply wasn’t paying attention.
Even beyond the fact we need to know far more about what happened in Phoenix, there are massive uncertainties on what will happen in terms of the response from society, regulators and the self-driving car industry.
However most people understand that the capabilities of self-driving cars will inevitably improve from here, potentially dramatically. We shouldn’t throw out the technology based on its historic capabilities.
The potential benefit of driverless cars is not just safety, with the potential of cutting the road toll by 90%, but also of assisting the elderly, disabled and poor to be mobile, with a massive impact on quality of life.
Probably the immediate next step is to tighten regulation, so that autonomous cars need to be demonstrably safe enough to drive on the streets. My guess is that some of the autonomous cars currently on the road are sufficiently safe, but others may not be.
We also need ways of ensuring that human supervisors pay constant attention to the road, as did not happen in Phoenix.
Despite the extreme uncertainty, I think it most likely that, after some regulatory and industry repositioning, development will continue within some months at a pace similar to that before the accident. I believe we will come to accept occasional deaths from self-driving cars, as long as they are sufficiently occasional. We will see.