Why AI is a Fear-Driven Discipline

Written by FrederikBussler | Published 2019/09/02

TLDR: People fear what they don't understand, so it makes sense that highly complex systems like AI inspire fear. AI was created as a tool to better understand the world, to make models that reveal insights into how we interact with our environments. Yet a robot has never so much as decided to lift a finger on its own, and most industry experts think we'll never get to the point of building consciousness, free will, and intelligence into machines.

People are scared of AI. According to Genpact research:
"71% of consumers fear AI will infringe on their privacy."
When asked about the impact of AI, Americans surveyed by Oxford responded as follows:
"34 percent of respondents thought it would be negative, with 12 percent going for the option 'very bad, possibly human extinction.'"
Another 18% were uncertain of the impact, which means that over half of respondents (52%) hold an uncertain or negative view of AI.
Besides the general fear, uncertainty, and negativity surrounding AI, there are a number of specific concerns, as reported in a CNBC article:
  1. Expected mass unemployment.
  2. AI in military applications could give rise to a nuclear war by 2040.
  3. Data-driven algorithms that automate medical applications could have ethical implications for the privacy of patients.
  4. AI could be used for mass surveillance.
  5. Machine learning threatens to bake in racial, sexual, and other biases.
Humans fear what they don't understand, so it makes sense that highly complex systems like AI, which also affect billions of people, inspire fear.
Ironically, AI was created as a tool to better understand the world, to make models that find patterns and reveal insights into how we interact with our environments.
However, this greater understanding through AI is highly asymmetrical. The people who better understand cancer diagnosis, self-driving cars, recommendation systems, and so on are the tiny minority of people working in the field, while everyone else is trapped in fear.
This highly prevalent fear is bound to rub off on even the most logical, objective industry practitioners and regulators.

So what can we do about it?

Well, the biggest misconception about artificial intelligence is that it's intelligent. You probably see the problem with the name alone. When a layperson hears "Artificial Intelligence," they don't think of what it really is: a series of input/output functions. A "neural network," for instance, simply connects a bunch of I/O blocks into one fairly complex algorithm.
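To make that concrete, here's a minimal sketch in Python (with made-up weights rather than learned ones, purely for illustration) of what a "neural network" boils down to: composed input/output functions, i.e., plain arithmetic.

```python
import numpy as np

def layer(x, weights, bias):
    # One "layer": a weighted sum of the inputs, squashed by a nonlinearity.
    return np.tanh(weights @ x + bias)

# Hypothetical, hand-picked weights for a toy 3-input network.
# In a real system these numbers would be tuned to minimize error on data.
W1 = np.array([[0.2, -0.5, 0.1],
               [0.7,  0.3, -0.2]])
b1 = np.array([0.1, -0.1])
W2 = np.array([[0.6, -0.4]])
b2 = np.array([0.05])

x = np.array([1.0, 2.0, 3.0])              # some input
output = layer(layer(x, W1, b1), W2, b2)   # the "network" is just function composition
print(output)
```

No decisions, no understanding: numbers go in, numbers come out.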
Most people don't think of the future of AI as a flowchart of functions like that. They think of a menacing science-fiction robot.
And that's because an intelligent machine can be a pretty scary idea. Something that makes its own decisions? That possesses true intelligence? That acts on its own? That learns from data to do so and always gets better?
Well, guess what: we're not even close to being there.
Even our most advanced robotics and our most cutting-edge AI systems require intense human input and tuning. A robot has never so much as decided to lift a finger on its own, and most industry experts think we'll never get to that point: the point of building consciousness, free will, and intelligence into machines.
At the end of the day, let's stay away from calling it "artificial intelligence." Here are some alternatives (the sketch after the list shows what "error minimization" looks like in practice):
  • Computational statistics.
  • Statistical optimization.
  • Error minimization.
  • Machine learning.
  • Statistics.
  • Math.
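To see why "error minimization" is a fair label, here's a toy sketch, again in Python (the data is synthetic and the whole setup is assumed for illustration): "learning" a line y = w*x + b means nothing more than nudging two numbers downhill on a squared-error curve.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 50)   # synthetic data: true w=3, b=2, plus noise

w, b = 0.0, 0.0                            # initial guesses
lr = 0.01                                  # learning rate (step size)
for _ in range(2000):
    err = (w * x + b) - y                  # prediction error on every point
    w -= lr * 2 * np.mean(err * x)         # gradient of mean squared error w.r.t. w
    b -= lr * 2 * np.mean(err)             # gradient of mean squared error w.r.t. b

print(w, b)                                # ends up close to the true 3.0 and 2.0
```

That loop is the entire "training" process here: statistics and a little calculus, not a mind.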
