Our Cars Will Soon Make Life And Death Decisions. Will We Agree?

Written by peterbrack | Published 2017/09/07
Tech Story Tags: artificial-intelligence | business | ethics | tech | venture-capital


We recently bought a new car, loaded with all of the latest safety features. It's not a Tesla and doesn't drive itself, but when the safety features activate, the car pretty much takes over, braking quickly or even steering itself back into its lane if the driver dozes off and swerves. Exciting stuff, and it's not hard to imagine a future in which we can just sit back and let our cars drive us from Point A to Point B.

Not long ago I ran across this TED Talk by Iyad Rahwan, a computational scientist at the MIT Media Lab, and have been thinking about it lately, both in light of our new purchase and because I spend time with many founders building companies powered by AI.

In his talk, Rahwan lays out two moral frameworks that programmers of AI-powered autonomous cars will need to consider, because without a doubt these cars will have to follow some set of predetermined ethics. The scenarios he describes are inspired by two famous philosophers: Jeremy Bentham and Immanuel Kant.

Bentham's philosophy suggests that in a life-and-death scenario, an autonomous car should follow utilitarian ethics and minimize total harm, even if that means killing a pedestrian, or even the passenger, depending on how many people are on each side. Kant's philosophy suggests the car should follow duty-bound principles such as "thou shalt not kill": it should never take a deliberate action that kills, in this case sacrificing its passenger, even if that means harming more people.
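
To make the contrast concrete, here's a toy sketch of how a programmer might encode the two rules and watch them disagree on the same scenario. This is purely my own illustration with made-up names and numbers, not anything proposed in the talk:

```python
# Purely illustrative: two toy decision rules applied to the same scenario.
# The Action class, its fields, and the harm counts are all hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    passengers_harmed: int      # occupants of the car harmed by this choice
    pedestrians_harmed: int     # bystanders harmed by this choice
    deliberate_sacrifice: bool  # True if the car actively sacrifices someone

def total_harm(action: Action) -> int:
    return action.passengers_harmed + action.pedestrians_harmed

def choose_bentham(actions):
    """Utilitarian rule: minimize total harm, whoever ends up bearing it."""
    return min(actions, key=total_harm)

def choose_kant(actions):
    """Duty-bound rule: rule out any deliberate sacrifice first, then pick the
    least harmful of what remains (the tie-break is my own assumption)."""
    permitted = [a for a in actions if not a.deliberate_sacrifice] or list(actions)
    return min(permitted, key=total_harm)

scenario = [
    Action("stay in lane", passengers_harmed=0, pedestrians_harmed=3,
           deliberate_sacrifice=False),
    Action("swerve off the road", passengers_harmed=1, pedestrians_harmed=0,
           deliberate_sacrifice=True),
]

print(choose_bentham(scenario).name)  # -> swerve off the road (1 harmed instead of 3)
print(choose_kant(scenario).name)     # -> stay in lane (no deliberate sacrifice)
```

Same situation, two defensible rules, two different outcomes. That gap is exactly what someone will have to decide on before these cars ship.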

The talk is fascinating and offers a glimpse of the myriad moral dilemmas that innovation in artificial intelligence will soon present. I highly recommend watching it. And I'd be curious to know your preferred set of ethics: Kant or Bentham?

