Who's To Blame If One Gets Killed In An (Uber/Tesla/Waymo) Self-Driving Car?

Written by seyi_fab | Published 2017/06/05
Tech Story Tags: self-driving-cars | technology | business | innovation | business-strategy

The most recent episode of Invisibilia (a fantastic podcast; I encourage you to subscribe) centers on a collision between a truck driver and a family of four: a mother, a father and two girls. A sudden flash of rain, a loss of control, and one person is killed (I won’t spoil the story), which sends the Invisibilia team on a deep dive into emotions. Listening to the episode, I remembered sitting in an MBA ethics class discussing the ‘Trolley Problem’. The question: if you are driving a trolley with failed brakes, do you stay the course and hit the five unsuspecting workers directly in your path, or do you divert the trolley onto a side track and hit one unsuspecting worker instead? The dilemma, that you must actively choose to kill one person in order to save five, is a moral gray area with no right or wrong answer. When we had that conversation in class, I didn’t think too deeply about it; it was an abstract discussion of a situation I never expected to find myself in, more an intellectual exercise than a real one.

But for some reason, listening to that Invisibilia episode while stuck in traffic behind a Tesla on the way to pick up my son, the question became real to me. Because we are moving into a world where, even if we never have to make those Trolley Problem decisions ourselves, our technologies might…

Self-Driving Car Technology

Dreams of self-driving cars have been with us almost since the first cars were made. At the 1939 World’s Fair, General Motors introduced the concept of the driverless car. Far from the models we now see driving the streets of California and Austin, those early visions relied on far more rudimentary technology, such as guidance systems embedded in the highway itself.

1956 advertisement by America’s Independent Electric Light And Power Companies

Like most people, I believe we are still a ways off from machine learning technology robust enough for full autonomy, even as pundits suggest that driverless electric vehicles will be the death of big oil. But what if all this is much closer than we think? What if I am totally wrong and we’ll have full autonomy in 2018? With the work that the likes of GM, Waymo, Uber and Tesla are doing, this might not be so far in the future after all. So where does that leave us on the innovation path of feasibility -> viability -> desirability (from Creative Confidence)?

  • Self-driving car technology is close to technical feasibility, as the cars are already on our streets (I’ve seen some in Austin).
  • We are getting close to viability for some use cases, especially logistics-based ones.
  • Where we are failing is desirability, because we are not having the required conversations at scale. Instead of debating which jobs AI will replace in the future, we should spend more time talking about the ethics and decision-making models of the AI being deployed today. Said another way: do we, the folks who will be sitting in these autonomous vehicles, trust these companies enough to believe that their technology will make the right choices for us even as we hand over control?

The Ethics Questions that Autonomous Vehicles/Robots Raise

For companies like Uber, developing autonomous vehicles is core and, frankly, existential. The very business model that sustains Uber now depends on Uber replacing the drivers behind the wheel. As I laid out in a previous post, the company has to shift to driverless cars to reduce the cost of doing business. It’s a critical business decision. Do we really trust that Uber, with all its ethical and cultural problems, will build autonomous vehicles that make the customer-centric decision when they face the ‘Trolley Problem’? Because you know it will happen, don’t you? When the company deploys millions of autonomous vehicles on the roads, there will be accidents and moral decisions to make. No technology system is 100% perfect; with more vehicles on the road there are more possibilities for error, and there will be errors.

For companies like Google and GM, are we comfortable that their machines will have our best interests at heart when it comes to non-binary, life-or-death decisions? Will a Waymo car be able to decide between swerving into a car carrying four cute puppies and risking the lives of your family riding inside it? How is this decision model being programmed into the self-driving cars? We know that the defaults embedded in our machines are not always as clear-cut and unbiased as we think.
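
To make that question concrete, here is a deliberately naive sketch, in Python, of what a utility-based collision decision could look like. Every name and number in it is invented for illustration; no automaker has published its actual decision logic, and real systems are far more complex. The point is the constants: someone has to choose them, and those choices are ethics.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its predicted consequences."""
    maneuver: str
    humans_at_risk: int        # occupants and pedestrians endangered
    animals_at_risk: int
    property_damage_usd: float

# Hypothetical weights: these constants ARE the moral model.
# Who gets to set them: the programmer, the company, a regulator, the owner?
W_HUMAN = 1_000_000.0
W_ANIMAL = 1_000.0
W_PROPERTY = 1.0

def expected_harm(o: Outcome) -> float:
    """Collapse an outcome into a single comparable cost."""
    return (W_HUMAN * o.humans_at_risk
            + W_ANIMAL * o.animals_at_risk
            + W_PROPERTY * o.property_damage_usd)

def choose_maneuver(options: list[Outcome]) -> Outcome:
    """Pick whichever maneuver the weights score as least harmful."""
    return min(options, key=expected_harm)

decision = choose_maneuver([
    Outcome("stay course", humans_at_risk=2, animals_at_risk=0,
            property_damage_usd=40_000),
    Outcome("swerve left", humans_at_risk=0, animals_at_risk=4,
            property_damage_usd=25_000),
])
print(decision.maneuver)  # -> "swerve left", under THESE weights
```

Change W_ANIMAL, or add a separate weight for the car’s own occupants versus bystanders, and the “right” maneuver flips. Those buried defaults are exactly what we should be asking companies to disclose.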

And these questions do not just relate to autonomous vehicles. Robina (below) is a robot designed to assist elderly residents in their homes. Robina has the machine intelligence to learn from the performance and behavior of other Robinas, retrieving real-time information from centralized cloud databases. But who is to blame if something goes wrong with Robina and she hurts or maims my parents while caring for them? What are the default mental models being embedded in Robina, Humanoid and ASIMO (all robots intended to serve elderly home care residents) to ensure they make the best decisions for us?
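
That cloud-learning loop is part of what makes the blame question so hard. Here is a minimal sketch of the idea, with entirely invented names and an invented update rule (no real Toyota system is implied): the behavior a robot exhibits in my parents’ home may have been learned from some other household’s data.

```python
# A purely hypothetical fleet-learning loop; the names and the update
# rule are invented for illustration, not taken from any real system.

FLEET_DB: dict[str, float] = {}   # shared store: action -> learned risk score

def report_outcome(robot_id: str, action: str, harmed_resident: bool) -> None:
    """Every robot pushes its experience into the shared cloud pool."""
    score = FLEET_DB.get(action, 0.0)
    # Crude update: one robot's harm raises the action's risk for ALL robots.
    FLEET_DB[action] = score + (1.0 if harmed_resident else -0.1)

def choose_action(robot_id: str, candidates: list[str]) -> str:
    """Every robot picks the action the whole fleet currently rates safest."""
    return min(candidates, key=lambda a: FLEET_DB.get(a, 0.0))

# Robot A's bad day in one house changes Robot B's behavior in another:
report_outcome("robina-A", "lift_resident_quickly", harmed_resident=True)
print(choose_action("robina-B", ["lift_resident_quickly", "call_for_help"]))
# -> "call_for_help": B never hurt anyone, yet A's data drove the choice.
```

If a shared update like that is wrong and someone gets hurt, the fault could sit with the robot in the room, the robot that generated the data, the aggregation rule, or the engineer who wrote it, which is precisely why the blame question has no clean answer.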

Technology Always Outpaces Regulation

Technology advancement always beats policy and regulation. Always. So the defaults embedded in these technologies will have to come from the moral codes of the programmers and technologists who write their decision-making software. We are on the cusp of, and in some cases already experiencing, advancements in technology that would have seemed like magic just a few short years ago, and these technologies will improve our lives immensely. We now have to, as informed consumers, ask these questions of our leading tech companies and demand answers. Our lives might depend on it.

I’ll leave you with this quote:

‘Speed is irrelevant if you are traveling in the wrong direction.’ M. Gandhi

Are we moving too fast?

Please share, like and tweet. Write your own blog posts using our WYOP tool (it gets you into writing flow) and sign up for the Polymathic Monthly Newsletter here; you’ll love it.

