Was Tesla responsible for the 2016 Autopilot crash?

Written by cfmccormick | Published 2017/10/09
Tech Story Tags: self-driving-cars | tesla | autonomous-cars | safety | automation

NTSB says carmakers have to prevent misuse of their autonomous systems, but it doesn't say how.

There's no question that autonomous vehicles will make our roads safer and reduce accidents and fatalities. But they won't be perfect, and some AVs will crash. There was a tragic reminder of that in May 2016, when a Tesla Model S driving in Autopilot mode crashed into a semitrailer near Williston, Florida, killing the driver, Joshua Brown.

After the crash, NHTSA analyzed what happened. It cleared Tesla of fault, saying that it couldn’t find any “defects in design or performance of the AEB or Autopilot systems of the subject vehicles nor any incidents in which the systems did not perform as designed”. In fact, NHTSA went on to analyze airbag deployment data from all MY 2014 to 2016 Tesla Model S vehicles, and found that crashes actually decreased by 40% — a huge number — in vehicles after Autosteer (a component of Autopilot) was installed. That conclusion was a very strong vote of confidence in Autopilot, and by extension in the ability of autonomous vehicles to improve road safety.

However, things got more complicated last month. The National Transportation Safety Board (NTSB) also studied the crash, and reached some significantly different conclusions. The NTSB report agrees with the overall facts as identified by NHTSA: the Autopilot system didn’t identify the semitrailer as a threat and thus didn’t warn the driver or start braking, and the driver didn’t take any action to brake or steer clear before the crash.

But NTSB extends its analysis into the question of the “operational design domain” (ODD) of the Autopilot system — basically, the conditions under which the system is supposed to be used. It found two major problems with Autopilot: first, that “the driver could use the Autopilot system on roads for which it was not intended to be used”; and second, that “although the operational design of the Tesla Autopilot requires an attentive driver as an integral system element, the Autopilot on the Williston crash vehicle allowed the driver to operate in the automated control mode for almost 6 minutes, during which the system did not detect the driver’s hands on the steering wheel”.

NTSB is essentially saying that Tesla should have made it harder, or impossible, for the driver/passenger of the vehicle to use Autopilot mode outside of the ODD, both in terms of where it was driving and in terms of paying attention. In one sense NTSB is absolutely correct about this: Tesla's Autopilot was (and still is) only an SAE Level 2 ("Partial Automation") system, so the human driver is still explicitly responsible for monitoring the environment and being prepared to take back control of the vehicle if necessary. But the heart of the report's finding is the proposition that Tesla, as the manufacturer, is at least partly responsible for the vehicle owner/driver acting in a way that was inconsistent with the intended ODD: most notably, not paying attention.

NTSB is not a regulatory agency, and the findings in its reports are generally restricted from being used in legal proceedings. But the findings in this report will still have an important impact on how AV safety policy and practices are developed. On the policy side, none of these considerations appear in the current version of the legislation being considered by Congress, but some aspects may be added in later legislative proceedings.

In terms of AV safety practice, it’s very likely that autonomous vehicles will shift legal responsibility for road accidents away from a driver negligence paradigm and towards a product liability paradigm. That means manufacturers of AVs will want to find ways to ensure that their systems are only used within the ODD, even if drivers/passengers don’t want to obey those limits, and even if there isn’t an explicit regulatory requirement to do so. How can they do this?

The most obvious aspect of ODD is location: AVs should only operate in limited areas. In principle this can be imposed on a vehicle using geo-fencing — i.e. only enabling the autonomous capabilities of a vehicle in certain physical areas. But it’s not really clear how this would work in practice. If an AV driver/passenger tries to direct the vehicle onto a road or area that’s outside the ODD, would the vehicle simply refuse to go there, unless the human resumed control? Would that be safe, especially if the driver/passenger directed the vehicle to do this without much lead time? There’s no clear procedure for how this would happen, particularly if the ODD is complicated, such as including certain categories of roads (e.g. divided highways) but excluding others that are very close by (e.g. an undivided road off an exit ramp). Would an AV in autonomous mode simply whizz by an off-ramp unless the human grabbed the wheel in time?
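
To make the problem concrete, here is a minimal sketch, in Python, of what a location-based ODD check might look like: the vehicle compares the road class of upcoming route segments against a whitelist and asks for a takeover with some lead time before it reaches an ODD boundary. The road classes, the 30-second lead time, and the function names are all illustrative assumptions, not any manufacturer's actual logic.

```python
# Hypothetical sketch of a location-based ODD check. Road classes, thresholds,
# and response names are invented for illustration only.
from dataclasses import dataclass
from enum import Enum, auto

class RoadClass(Enum):
    DIVIDED_HIGHWAY = auto()
    UNDIVIDED_ROAD = auto()
    EXIT_RAMP = auto()

# Road classes this hypothetical ODD allows autonomous operation on.
ODD_ALLOWED_CLASSES = {RoadClass.DIVIDED_HIGHWAY}

@dataclass
class RouteSegment:
    road_class: RoadClass
    seconds_ahead: float  # estimated time until the vehicle reaches this segment

TAKEOVER_LEAD_TIME_S = 30.0  # assumed minimum warning time before an ODD exit

def check_route_against_odd(upcoming: list[RouteSegment]) -> str:
    """Decide whether to keep driving autonomously, warn, or hand control back."""
    for segment in upcoming:
        if segment.road_class not in ODD_ALLOWED_CLASSES:
            if segment.seconds_ahead <= TAKEOVER_LEAD_TIME_S:
                # Not enough lead time left: control must go back to the human
                # (or the vehicle must pull over) before the boundary is reached.
                return "request_takeover_now"
            return "warn_driver_of_upcoming_odd_exit"
    return "continue_autonomous"

# Example: an exit ramp 20 seconds ahead is inside the 30-second window,
# so the system requests a takeover immediately.
route = [RouteSegment(RoadClass.DIVIDED_HIGHWAY, 5.0),
         RouteSegment(RoadClass.EXIT_RAMP, 20.0)]
print(check_route_against_odd(route))  # -> request_takeover_now
```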

The next most obvious aspect of ODD is driver awareness. For all AVs below SAE Level 4, humans are responsible for some amount of situational awareness and availability to perform driving tasks. This is problematic, because humans are extremely bad at monitoring automated systems that are usually in normal (i.e. non-fault/non-emergency) mode. The NTSB lead investigator explicitly noted this, pointing out that research going back to nuclear power plant operators has shown humans don’t do “attention tasks” very well. So AV manufacturers will need to figure out ways to monitor whether humans are paying attention to the road, and (gracefully) turn off autonomy if they’re not.
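
For illustration, here is one possible shape for that "graceful" shutdown, sketched in Python with invented timing thresholds: the longer the driver appears inattentive, the more insistent the system's response becomes, ending in a minimal-risk maneuver rather than an abrupt disengagement. Nothing here reflects any shipping system's actual behavior.

```python
# Hypothetical attention-escalation logic. Thresholds and response names are
# illustrative assumptions; real systems would tune these very carefully.
WARN_AFTER_S = 10.0       # first visual/audible nag
ESCALATE_AFTER_S = 20.0   # louder alerts, seat vibration, etc.
DISENGAGE_AFTER_S = 30.0  # start a controlled handover or minimal-risk stop

def attention_response(seconds_inattentive: float) -> str:
    """Map how long the driver has appeared inattentive to a system response."""
    if seconds_inattentive >= DISENGAGE_AFTER_S:
        # Don't just switch off: slow down, signal, and look for a safe stop.
        return "begin_minimal_risk_maneuver"
    if seconds_inattentive >= ESCALATE_AFTER_S:
        return "escalated_warning"
    if seconds_inattentive >= WARN_AFTER_S:
        return "gentle_warning"
    return "normal_operation"

assert attention_response(12.0) == "gentle_warning"
assert attention_response(45.0) == "begin_minimal_risk_maneuver"
```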


Nobody really has this problem solved yet. Autopilot tries to gauge driver attention by sensing torque on the steering wheel as a proxy for hands-on-wheel, but this is clearly faulty: it's easy to keep one hand on the wheel while actually watching YouTube on the phone in your lap. Some automakers are developing interior driver-monitoring camera systems that track the driver's face and eyes to determine whether their attention is wandering. An example is GM's Super Cruise system in the 2018 Cadillac CT6; unfortunately, it can really annoy drivers, and it isn't necessarily accurate. The system's current bail-out method, if the driver ignores multiple warnings to pay attention, is to slow to a stop right in the middle of the lane, which is not ideal under most circumstances.
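
A plausible (and purely illustrative) improvement is to fuse the two signals described above: the steering-wheel check Autopilot relied on and the camera-based gaze estimate Super Cruise uses, trusting the camera when it is confident and falling back to the wheel signal when it is not. The field names and the 0.7 confidence cutoff are assumptions made for this sketch, not real product parameters.

```python
# Hypothetical fusion of two driver-attention signals. Field names and the
# confidence cutoff are illustrative assumptions, not any vendor's parameters.
from dataclasses import dataclass

@dataclass
class AttentionSignals:
    wheel_input_detected: bool  # torque/touch on the steering wheel
    gaze_on_road: bool          # driver-facing camera says eyes are forward
    gaze_confidence: float      # 0.0-1.0; drops with glare, sunglasses, occlusion

def driver_is_attentive(s: AttentionSignals) -> bool:
    """Prefer the gaze estimate when it's trustworthy; wheel input alone is a weak proxy."""
    if s.gaze_confidence >= 0.7:
        # A hand on the wheel doesn't count if the eyes are on a phone.
        return s.gaze_on_road
    # Camera unreliable: fall back to the weaker steering-wheel check.
    return s.wheel_input_detected

# One hand on the wheel, eyes on YouTube: the fused check is not fooled.
assert not driver_is_attentive(AttentionSignals(True, False, 0.9))
```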

Getting driver awareness monitoring right is going to be a major issue, and a vital part of developing AVs. One of the most interesting companies in this space right now is Nauto, which is developing more sophisticated methods of monitoring driver attention and has certainly attracted a lot of investor attention (more on them in a future post). This is such a tough problem that some companies think it's hopeless and have given up on Level 2 and 3 (and in some cases Level 4) AVs. One example is Zoox, which eschews human-usable controls entirely, skipping directly to Level 5; they appear to be aggressively recruiting engineering talent to make that vision a reality.

Beyond location and driver awareness, there are other dimensions to the ODD, most notably weather (fog, rain, snow and other non-ideal conditions can severely hamper AV performance) and in some cases time of day (glare from sunset might also be a limitation). As with the other dimensions, there still isn't a good understanding of how complying with these ODD limitations would actually work in practice. Would an AV pull over to the side of the road when snow begins falling? Who would decide how much snow impedes the vehicle's ability to function: manufacturers, regulators, or some other body? And there is still the issue of how these restrictions would be explained (and justified) to impatient vehicle owners who don't want to have to stop watching Game of Thrones and take over driving for a dusting of snow, and who might choose a different, less nanny-ish brand of car next time.
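
As a thought experiment only, a weather-dimension ODD check might look like the sketch below. The visibility and precipitation thresholds are made up for illustration; as the paragraph above asks, deciding on the real numbers, and who gets to set them, is exactly the open question.

```python
# Hypothetical weather-dimension ODD check. All thresholds are invented for
# illustration; who would actually set them is an open policy question.
from dataclasses import dataclass

@dataclass
class WeatherConditions:
    visibility_m: float        # estimated visibility in meters
    precipitation_mm_h: float  # rain/snow intensity
    sun_glare: bool            # low sun directly in the forward cameras

MIN_VISIBILITY_M = 150.0       # assumed limit, not a real standard
MAX_PRECIPITATION_MM_H = 8.0   # assumed limit, not a real standard

def weather_within_odd(w: WeatherConditions) -> bool:
    return (w.visibility_m >= MIN_VISIBILITY_M
            and w.precipitation_mm_h <= MAX_PRECIPITATION_MM_H
            and not w.sun_glare)

def respond_to_weather(w: WeatherConditions, driver_attentive: bool) -> str:
    if weather_within_odd(w):
        return "continue_autonomous"
    # Outside the weather ODD: hand over if the driver is ready; otherwise
    # degrade gracefully instead of disengaging abruptly.
    return "request_takeover" if driver_attentive else "pull_over_safely"

# A dusting of snow with good visibility stays inside this hypothetical ODD.
print(respond_to_weather(WeatherConditions(400.0, 1.0, False), True))  # continue_autonomous
```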

Overall, the NTSB report raises some critical questions about how AVs will actually operate in the real world, given the realities of human nature and technology. That makes thinking about AV policy a lot more complicated, but in the long run, that’s a very good thing.

