Self-Driving: Who decides who will die?

Written by matthewbiggins | Published 2017/02/04
Tech Story Tags: self-driving-cars | tech | autonomous-cars | autonomous-vehicles | tesla


The year is 2025. A woman takes one last bite of toast and swig of OJ before heading out the door. She slides into her autonomous car and tells it to head for the office. This businesswoman settles in, glancing at her tablet for the morning news as the car pulls into the street. A green light — the car proceeds through the intersection. Just then, a ball bounces onto the road and a child darts into traffic to retrieve it — too little too late. The car’s onboard computer quickly calculates a crash is imminent. There are two outcomes. The car can try to slow, but will hit and kill the child. Or the car can swerve to avoid the child, but will ram into the median and kill its passenger. What should the car do? Save the woman who purchased the car or the innocent child?

What would you do?

Moral Machine, a platform for public participation in and discussion of the human perspective on machine-made moral decisions: moralmachine.mit.edu

Welcome to the ethics of driverless cars.

But the car won’t really be making a choice; it will do as its algorithms dictate. The real choice will be made years earlier when the algorithm is designed. That real choice is now. We are the ones to decide who holds the power to determine who will live and who will die.
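To make that concrete, here is a minimal, hypothetical sketch of what such a design-time choice could look like in code. Everything in it, the names, the risk numbers, and the policy flag, is an illustrative assumption rather than any manufacturer's actual software; real systems would be vastly more complex. The point is only that a single constant, set years before the crash, can flip which life the car protects.

```python
# Hypothetical sketch: an ethical policy hardcoded into a
# collision-response routine. All names and values are illustrative.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    BRAKE = "brake in lane"      # likely kills the pedestrian
    SWERVE = "swerve to median"  # likely kills the passenger


@dataclass
class Outcome:
    action: Action
    passenger_risk: float   # estimated probability the passenger dies
    pedestrian_risk: float  # estimated probability the pedestrian dies


# The "real choice": a policy constant fixed at design time.
PROTECT_PASSENGER_FIRST = True


def choose_action(outcomes: list[Outcome]) -> Action:
    """Pick the maneuver the designers' policy prefers."""
    if PROTECT_PASSENGER_FIRST:
        # Prioritize the passenger's survival.
        best = min(outcomes, key=lambda o: o.passenger_risk)
    else:
        # A utilitarian alternative: minimize total expected deaths.
        best = min(outcomes, key=lambda o: o.passenger_risk + o.pedestrian_risk)
    return best.action


if __name__ == "__main__":
    scenario = [
        Outcome(Action.BRAKE, passenger_risk=0.05, pedestrian_risk=0.9),
        Outcome(Action.SWERVE, passenger_risk=0.9, pedestrian_risk=0.05),
    ]
    print(choose_action(scenario))  # Action.BRAKE under the default policy
```

Notice that the entire moral decision lives in `PROTECT_PASSENGER_FIRST`, a value someone would choose in a conference room long before any crash, which is exactly why it matters who gets to set it.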

If you need proof that this is a choice for right now: Ford recently announced it will have fully autonomous vehicles by 2021, and Tesla has been selling cars with the hardware needed for full autonomy since last year.

But with fully autonomous vehicles still years away for most consumers, today's news stories focus on the positives of a driverless future: fewer accidents, reshaped cities, and lower insurance costs.

While these are only a few examples of the massive impact self-driving technology will have, news coverage tends to leave out the potential negative externalities. Consumers, media, and business are all to blame: images of a wonderful future simply sell better than wrestling with necessary practicalities. As is life. The problem is that one of those practicalities, an ethical algorithm, may very well decide to kill you.

The companies developing self-driving technology are the ones determining that ‘practicality’ without consumer input. Let’s emphasize that:

Privately held companies are currently deciding whom self-driving cars will allow to live and whom they will choose to kill.

In fact, it’s already happening with AI more generally. Over 2,000 preeminent academic and business leaders in AI research have signed the Asilomar AI Principles, which even include a section on Ethics and Values. But who determines which ethical system to follow? This literally affects our ability to live, and by Ford’s count we have fewer than five years. Now is the time to decide who will create driverless-car ethics. Will it be business with a profit incentive, government with a political incentive, or the larger society with our actual lives at stake? Ultimately government and business will implement it, but we can demand a transparent and honest conversation in which our voices are heard.

This is an inflection point in public morality.

From a macro perspective, this is our opportunity to select the moral code we want for our society, written down for all to see. Individuals, businesses, and governments have always been guided by moral ideas, to be sure, but those ideas have not been transparent. When I cut in front of you in line, you do not know what moral code I used to justify it. But when a car chooses to kill its passenger, we will all know exactly what moral code it followed. Once written, software will determine how cars react in unavoidable accidents with deadly consequences. Those lines of code will follow an ethical principle, a principle we will have agreed to, whether through intentional dialogue or by remaining silent.

This generation is uniquely situated in time to have a society-wide discourse on what ethical principles we want to live by. For the first time, these principles will be hardcoded into our products and services. They will affect how cars determine which life to save, but more importantly they will set society’s moral compass for how other technology is developed. Ethical debate needs to venture beyond the walls of academia and into everyday conversation.

Now we can either choose to participate in this moment of change or let others — governments and businesses — do it for us. What will we choose?

Follow Matthew Biggins here or on LinkedIn for more.
