Making (Autonomous) Trucking Safe

Written by stefan_66577 | Published 2018/06/12
Tech Story Tags: self-driving-cars | autonomous-trucking-safe | autonomous-trucking | trucking | autonomous-driving

There are many reasons why I believe trucks are the sensible first step for autonomous vehicles. The trucking industry accounts for nearly 4% of the US economy, with a quarter of that going towards labor costs. There’s currently a shortage of that labor (which raises the cost of every physical thing bought or sold), but the biggest argument for automation is that trucks are just disproportionately dangerous.

Truck driving is one of the country’s deadliest occupations, and fatal accidents are common. One in four drivers of these 80,000 lb vehicles reports having fallen asleep behind the wheel in the previous month, and many get by on five hours of sleep a night. It’s hardly a surprise, then, that of the more than 32,000 fatal accidents in 2015, nearly 3,900 (more than one in ten) involved a large truck or bus.

This isn’t because drivers are daredevils, but because they work in a system where they’re only paid per freight mile hauled. This can force them to choose between driving safely and paying rent.

My team at Starsky Robotics is working day and night to make unmanned regular service a reality soon, which means that, unlike many in the space, we can’t think about safety “later.” Safety needs to happen now.

Automotive Safety (or, how I learned to stop worrying and love ISO 26262)

While relatively unknown in Silicon Valley, Safety Engineering has been one of the core disciplines of automotive engineering for over 100 years. And with good reason: when you build things that can hurt people it’s important to develop processes that allow your team to raise concerns, understand the risk and design for safety.

That first point is important. At Mapbox’s Locate Conference the other week I was asked whether autonomous engineers should swear an oath akin to the Hippocratic. The question has some basis: as an engineer building a self-driving truck, you can be paralyzed by worry that a bad line of your code could hurt someone. It’s incredibly important that we give our team the opportunity to voice any concerns that they might have. If we choose to move forward anyway, Starsky’s leadership does so while taking on the responsibility, lifting it from those who voiced the concerns (and from those who developed the system).

While perhaps over-referenced, ISO 26262 (the automotive safety bible) remains as relevant as ever when it comes to designing safe automotive products. ISO 26262 defines the risk of a (sub)system as the product of three values: Severity (of a failure), Exposure (to the failure), and Controllability (in the case of failure).

Risk Score = severity * exposure * controllability

To spare us the lengthy (and seemingly inevitable) pontifications around the different scenarios of an automotive accident, we can judge the severity of a system-wide failure to be a constant. If our system fully fails, there will be significant, irreparable harm.

Exposure is easy to understand. How much of your drive would that failure affect? You always need your brakes, but how often do you need your left turn signal? And what is the likelihood that a particular (sub)system, whether your front-right tire or your perception system, fails at all?

Controllability is more nuanced. Essentially, controllability is how skilled of a driver you need to be to safely deal with a failure. Almost anyone can safely manage getting a flat tire at speed on the freeway.

Putting this all together, we see that the risk score of a tire is low enough that occasionally getting a flat is acceptable. The uncontrollability of an outright perception failure, on the other hand, is why almost every autonomous team requires a lot of perception redundancy.
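To make the arithmetic concrete, here’s a minimal sketch of that scoring in Python. The 1-to-4 scales and the ratings for the two example failures are illustrative assumptions of mine, not values from the standard (ISO 26262 actually classifies S, E, and C into discrete levels and maps them to ASILs with a lookup table rather than a pure product), but the comparison captures the argument above.

```python
# Toy version of the risk model described above:
# risk = severity * exposure * controllability.
# The 1 (low) to 4 (high) scales and the example ratings are
# illustrative assumptions, not ISO 26262 classifications.

def risk_score(severity: int, exposure: int, controllability: int) -> int:
    """Higher factors mean more severe, more frequent, or harder to control."""
    return severity * exposure * controllability

# A flat front-right tire: severe, but rarely encountered, and almost
# any driver can bring the vehicle to a safe stop.
flat_tire = risk_score(severity=4, exposure=2, controllability=1)

# An outright perception failure: equally severe, present for the
# entire drive, and nearly impossible to recover from without a driver.
perception_failure = risk_score(severity=4, exposure=4, controllability=4)

print(flat_tire)           # 8  -> low enough to accept the occasional flat
print(perception_failure)  # 64 -> why teams build in perception redundancy
```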

The easiest way to “cheat” controllability for an autonomous vehicle is to always have a trained driver behind the wheel…which is what most of the autonomous industry is doing. That’s why it’s such a big deal that we’ve done a fully unmanned test.

In February 2018, Starsky Robotics completed a 7-mile fully driverless trip in Florida without a single human in the truck.

Safety: an AV’s Most Important Feature

To recap: safety is highly important for self-driving trucks. At Starsky we want to quickly get unmanned trucks on the road. It’s really hard to design a system that’s safe without a physically-present human as backup.

Which is why it quickly became apparent that our first senior hire wasn’t going to be a controls vet or a machine learning pro, but a Safety Lead.

We’ve built a robot that can drive a truck. We’ve built teleoperation capable of parking a 53’ trailer in between two others with a foot of clearance on each side. We’ve built highway autopilot capable of keeping a 45,000 lb trailer in a lane through high winds and heavy rain.

But making sure that system is safe to regularly go out amongst the motoring public without a physically-present human is a real challenge.

A Weak Bench

When we started meeting with safety engineers in and around the autonomous space we noticed something: while everyone and their brother wants to do machine learning for autonomous vehicles, almost no one is working on safety. And many of those tasked with safety roles are looking in the wrong direction.

At one point, we even had a ‘big deal’ safety guy ask us why we even needed to design a system that was safe without a physically-present person in it, because “what’s the point of a self-driving car with no one in it?”

We met people who were willing to audit the work of others but not set safety policies (and vice versa). Folks who had good thoughts about hardware but stumbled when it came to software (and vice versa). And many who didn’t know how to start in an industry where, for some time, we will have to be our own strictest critics.

And then we met Walter.

Walter Stockwell: Starsky Robotics’ Director of Safety Policy

The clear exception to all of the above was Walter Stockwell, who I’m incredibly pleased to announce has joined Starsky as our Director of Safety Policy.

From our first conversations with him, Walter has helped shift our view of “safety” from the naive-yet-common notion of safety as an absolute to thinking about safety as a process, and as a series of qualified statements.

A system is not definitively safe after it completes ’n’ tests, whether n is one test drive or RAND’s 11 billion miles. A system becomes safer as it is designed to be safe and rigorously validated. A system will never be absolutely safe against all conceivable threats, but it needs to operate free of unacceptable risk within a specified operational design domain.

Walter brings not only this level of engineering maturity, but also years of hardware and software experience, a background in safety-system engineering, practice building an organization where people can raise safety concerns, and leadership experience setting national safety policy in his last role at DJI.

“Starsky Robotics has become the frontrunner in the autonomous vehicle space. It has managed to solve some of the most complex challenges for driverless trucks. It’s evident that everyone at Starsky has been focused on safety from the beginning and they are years ahead of the competition. I’m excited to join this immensely innovative and forward-thinking team,” says Walter.

With Walter on our team, I’m incredibly excited to see what’s just down the road.

Keepin’ on Truckin’

-Stefan

(P.S. We are still looking for a slew of team members to help us get our unmanned trucks on the road, not least in controls engineering, machine learning, and many other roles. You can apply at starsky.io.)

I’d also be remiss not to thank Julia Ilina, Kartik Tiwari, and Walter Stockwell for reading (and heavily editing) earlier versions of this article.

