Unraveling culpability in autonomous vehicle accidents

Accidents will happen, but agreed-upon standards could go a long way toward building regulator and consumer confidence in robocars.

Robocars will not be accident-free.

For regulators who harbor hopes of fostering a future of autonomous vehicles (AVs), this is a political reality that’s likely to haunt them. For the public, it’s a psychologically untenable prospect, especially if a robocar happens to flatten a loved one.

From a technological standpoint, though, this inevitability is the starting point for engineers who want to develop safer AVs.

“The safest human driver in the world is the one who never drives,” said Jack Weast, Intel’s senior principal engineer and Mobileye’s vice president for autonomous vehicle standards. He delivered the quip in a tutorial video explaining what the company’s Responsibility-Sensitive Safety (RSS) entails.

“[Weast] is right,” Ian Riches, vice president for the global automotive practice at Strategy Analytics, told EE Times. “The only truly safe vehicle is stationary.”

So, if we hope to see commercially available AVs that will actually run on public roads, what must happen?

Before the AV ecosystem can answer that question, it must open a long-overdue dialog on another: Who’s to blame when a robocar kills a human? 2020 will be the year the industry at last confronts that demon.

On one hand, AV technology suppliers love to cite such figures as the “1.35 million annual road traffic deaths” reported by the World Health Organization (WHO) when they pitch their highly automated technologies as the ultimate solution to road safety. Eager to paint a rosy picture of a zero-collision future, they use the stats to explain why society needs AVs.


More than 1.3 million traffic deaths occur on the world’s roadways each year. (Source: WHO Global Status Report on Road Safety 2018)

On the other hand, most in the technical community labor to avoid answering the question of whom to blame when robocars fail. Daunted by legal questions beyond their power to answer or control, they prefer to defer the blame game to regulators and lawyers.

Against this backdrop, Mobileye, an Intel company, stands out. As Weast told EE Times, “Intel/Mobileye is not afraid of asking a tough question.” In developing RSS, Intel/Mobileye engineers spent a lot of time pondering how safe is safe enough — “the most uncomfortable topic for everyone,” as Weast described it in a recent interview with EE Times.

“We all want to say that autonomous vehicles will mitigate traffic accidents, but there are limits to that statement, given the non-zero chance of AV accidents,” he said. “The truth is that there will be an accident. Of course, our goal is to make the chances for accidents as low as possible. But you can’t start AV development from the zero-accident position or by thinking one accident is one too many.”

A predetermined set of rules

Intel/Mobileye led the AV industry by developing RSS, “a predetermined set of rules to rapidly and conclusively evaluate and determine responsibility when AVs are involved in collisions with human-driven cars.”

When a collision occurs, Mobileye wrote, “There will be an investigation, which could take months. Even if the human-driven vehicle was responsible, this may not be immediately clear. Public attention will be high, as an AV was involved.”

Given the inevitability of such events, Mobileye pursued a solution that would “set clear rules for fault in advance, based on a mathematical model,” according to the company. “If the rules are predetermined, then the investigation can be very short and based on facts, and responsibility can be determined conclusively.

“This will bolster public confidence in AVs when such incidents inevitably occur and clarify liability risks for consumers and the automotive and insurance industries.”
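To make the idea concrete, a predetermined fault rule for a rear-end collision can be reduced to a single factual check: was the following vehicle maintaining the required safe gap when the lead vehicle began braking? The sketch below is only an illustration of that concept, not Mobileye’s actual rule set; the function and field names are assumptions, and the safe-gap input would come from the RSS distance formula discussed later in this article.

```python
# Illustrative sketch of a "predetermined rule" for a rear-end collision.
# NOT Mobileye's implementation -- names and inputs are assumptions.

from dataclasses import dataclass

@dataclass
class LongitudinalSnapshot:
    """Facts recorded at the moment the lead vehicle began braking."""
    actual_gap_m: float    # measured distance between the two vehicles
    required_gap_m: float  # safe gap demanded by the predetermined rule

def rear_vehicle_at_fault(snapshot: LongitudinalSnapshot) -> bool:
    """The fault question becomes one factual comparison: the rear vehicle
    is responsible if it was not keeping the required safe gap."""
    return snapshot.actual_gap_m < snapshot.required_gap_m

# Example: the rear car left an 18 m gap where the rule required 25 m.
print(rear_vehicle_at_fault(LongitudinalSnapshot(actual_gap_m=18.0,
                                                 required_gap_m=25.0)))  # True
```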

Most scientists agree that the zero-collisions goal is impossible even for AVs, which neither drink and drive nor text at the wheel.

“Of course we’d prefer to have zero collisions, but in an unpredictable, real world that is unlikely,” Phil Koopman, CTO of Edge Case Research and a professor at Carnegie Mellon University, told EE Times. “What is important is that we avoid preventable mishaps. Setting an expectation of ‘dramatically better than human drivers’ is reasonable. A goal of perfection is asking too much.”

Even so, the notion of assigning blame for an accident makes everyone uncomfortable.

“The biggest problem that I saw in the original RSS paper is its emphasis on blame,” said Mike Demler, senior analyst at The Linley Group. “It includes mathematical models for defining an AV’s movements and actions, which I see as its strength. But the weakness is that it states, ‘The model guarantees that from a Planning perspective there will be no accidents which are caused by the autonomous vehicle.’”

Put yourself in the consumer’s shoes. If you’re a passenger in an AV, or in a vehicle that might be involved in an accident with one, who — or what — is at fault is the least of your concerns. That’s for the lawyers and insurance companies to determine. You just don’t want to be killed or injured.

If you design autonomous vehicles, though, you can’t afford to disregard culpability.

Demler cited The Safety Force Field paper, which Nvidia published in response to RSS. The paper focuses almost entirely on the mathematical models. “The issues that are more difficult to formalize are what constitutes ‘safe’ driving,” he said.

RSS, for example, creates what Weast has called a “safety bubble” around an AV. “How far a vehicle should be from one in front of it seems rather obvious,” said Demler. “But modeling every possible driving scenario and describing a ‘safe’ course of action is impossible.”

Demler noted that, “as powerful as the AI is getting, computers still can’t reason.” AI-driven AVs, therefore, “can only follow rules.”

But human drivers understand that rules are made to be broken. “In some cases, avoiding an accident may require an evasive maneuver that would otherwise be considered unsafe, such as rapidly accelerating around an obstacle even though you’re cutting into another lane at an ‘unsafe’ distance from other vehicles,” said Demler.


The formula shown calculates the safe longitudinal distance between the rear vehicle and the front vehicle. (Source: Intel/Mobileye)
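The figure itself is not reproduced here, but the original RSS paper (“On a Formal Model of Safe and Scalable Self-driving Cars,” Shalev-Shwartz et al.) defines the minimum safe longitudinal distance from the rear vehicle’s speed, the front vehicle’s speed, the rear vehicle’s response time, its maximum acceleration during that response time, its minimum braking deceleration afterward, and the front vehicle’s maximum braking deceleration. Below is a minimal Python sketch of that calculation; the default parameter values are illustrative assumptions, not Mobileye’s calibrated figures.

```python
# Sketch of the RSS minimum safe longitudinal distance (per the published paper).
# Default parameter values are illustrative assumptions, not Mobileye's settings.

def rss_safe_longitudinal_distance(
    v_rear: float,              # rear (following) vehicle speed, m/s
    v_front: float,             # front (lead) vehicle speed, m/s
    rho: float = 1.0,           # rear vehicle's response time, s
    a_accel_max: float = 2.0,   # rear vehicle's max acceleration during rho, m/s^2
    a_brake_min: float = 4.0,   # rear vehicle's min braking deceleration, m/s^2
    a_brake_max: float = 8.0,   # front vehicle's max braking deceleration, m/s^2
) -> float:
    """Worst case: the rear car accelerates for rho seconds before braking
    gently, while the front car brakes as hard as physically possible."""
    v_rear_after_rho = v_rear + rho * a_accel_max
    d = (
        v_rear * rho
        + 0.5 * a_accel_max * rho ** 2
        + v_rear_after_rho ** 2 / (2 * a_brake_min)
        - v_front ** 2 / (2 * a_brake_max)
    )
    return max(d, 0.0)  # the required distance is never negative

# Example: both cars at 20 m/s (72 km/h) -> about 56.5 m of required gap
# under these assumed parameters.
print(round(rss_safe_longitudinal_distance(20.0, 20.0), 1))
```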

Demler wonders whether we are truly capable of anticipating all the potential scenarios and teaching robocars a safe course of action for each case. It’s likely a rhetorical question.

>> Continue reading the next section on page two of this article originally published on our sister site, EE Times.
