Why Elon Musk is Wrong about AI

Written by dan.jeffries | Published 2017/08/14

You know the story. AI will rise up and kill us all.

Didn’t Facebook have to shut down their latest monstrous experiment because it went rogue and developed its own secret language?

It’s only a matter of time. For all we know, Skynet’s factories are cranking out an army of Terminators already! We better move fast!

The only problem is, it’s all nonsense.

Elon Musk ain’t helping.

AI is an “existential threat worse than North Korea,” he warns. Last I checked, North Korea has nukes and a madman in power, while super-AI is still confined to the pages of cyberpunk novels, so I’m not buying it. Look, the guy is a lot smarter than me, and I think his batteries, cars and solar roof tiles will change the world, but he’s spent a little too much time watching 2001: A Space Odyssey.

The pop press isn’t helping either. How else do we end up with three months of coverage of a bogus story about Facebook shutting down its AI because it got too smart? Guess what? It’s not true.

They shut it down because it was a crappy, failed program that didn’t do its job. Simple as that.

And yet somehow every day a new version of that story pops up on my social media feeds: AI did something diabolical and had to be stopped.

Don’t get me wrong. I’m a sci-fi writer. I love this stuff. Terminators? HAL? Aliens? Star Trek? Some of the greatest stories ever written.

But that’s just what they are: stories.

And they distract us from dealing with the real problems we have right now in Artificial Intelligence.

Even if we wanted to stop super-intelligent machines from slaughtering us all, we can’t. Why?

Because they don’t exist, and _you can’t create a solution for a problem that doesn’t exist_.

We literally cannot solve this problem right now! Take something like Asimov’s famous “Three Laws of Robotics.” They’re nothing but a literary construct. They can’t protect us. You know why? Because that’s not how we program AI! We don’t give it a bunch of explicit rules. It figures out the rules for itself.

Asimov imagined a solution to an imaginary problem and it won’t work because that’s not how AI actually works. All of our imaginary solutions will just look hopelessly stupid when mega-brilliant machines come calling.
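To make that concrete, here’s a minimal sketch, in Python with scikit-learn and made-up toy data, of how today’s systems actually acquire their “rules.” There’s no line where you could bolt on Asimov’s Laws; the rules end up as learned weights, not as if/else statements anyone could edit:

```python
# We never write the rule "spam contains the word FREE."
# We hand the model labeled examples and it infers its own boundary.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, made-up data purely for illustration.
texts = ["win FREE money now", "meeting at noon",
         "FREE prize inside", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The "rules" now live in learned weights -- there's nowhere
# to insert an explicit law the model must obey.
print(model.predict(["claim your FREE gift"]))
```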

The truth is we really don’t have a freaking clue how to build true intelligence. Listen to DARPA: today we have “spreadsheets on steroids,” not general-purpose AI. There is no consciousness behind any of it.

AGI (Artificial General Intelligence) is not even in a lab somewhere. We don’t know what kind of processors we’ll need. We don’t know the right algorithms. We don’t even really know where to start!

For sixty years researchers thought we were just around the corner from machines that thought and acted like us.

We’re still waiting.

Researchers figured that if they gave a machine a few basic rules, it would magically become Einstein in a box. Turns out, we don’t know how we do what we do, because it happens automatically and unconsciously.

We’re a black box.

If you want to tell a computer how to recognize a cat, it seems simple, because you do it every day. But that’s only because the complexity is hidden from you. Stop and think about it: what you do in a fraction of a second involves a massive number of steps. It’s deceptively simple on the surface and incredibly complex underneath.

Now at least our Deep Learning systems can recognize sounds and pick cats out of a picture by figuring out those rules for themselves. That’s something. But it’s not consciousness.
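For the curious, here’s a rough sketch in PyTorch of the kind of network that pulls off the cat trick. It’s a toy (the TinyCatNet name and layer sizes are mine, not from any production system): nobody writes a “pointy ears” rule; the convolutional filters are learned from labeled images during training:

```python
import torch
import torch.nn as nn

class TinyCatNet(nn.Module):
    """A toy convolutional classifier -- hypothetical, for illustration."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # cat / not-cat

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

net = TinyCatNet()
fake_batch = torch.randn(4, 3, 64, 64)  # stand-in for real images
print(net(fake_batch).shape)  # torch.Size([4, 2])
```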

We still have a long way to go before we get to C-3PO and R2-D2.

What We Talk About When We Talk About AI

The real problem with talking about super-intelligent robots and existential threats is that today’s problems are much more insidious and under the radar.

Let’s take a look at a few to understand why.

Here are the big ones:

  • AI security
  • Bias built into models
  • Initial job disruption
  • Backlash from mistakes

Security

This one is a major challenge with no easy answers.

What do I mean by security? Today it’s super easy to corrupt Deep Learning systems by altering the data they’re fed. Hide a little Snow Crash-style distortion in an image and convolutional neural nets go from smart to real stupid, real quick. None of these tricks would fool the dumbest person on the planet, but they fool our best machines.

Image by Ian Goodfellow, Jonathon Shlens, and Christian Szegedy
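For the technically inclined, that image comes from the paper that introduced the fast gradient sign method (FGSM). Here’s a rough PyTorch sketch of the idea, simplified but true to the trick: nudge every pixel a tiny step in whatever direction increases the model’s loss:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (toy sketch)."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A per-pixel nudge too small for a human to notice, but often
    # enough to push the classifier across a decision boundary.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```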

What we have today is the exact opposite of super intelligence.

Call it dumb-smart AI or “narrow” AI.

Machine Learning and Deep Learning systems have zero higher reasoning and no moral compass. They’re just boxes of applied statistics. There is no desire or will behind them, except our own.

Today these tricks are just in a lab. But as these systems come to dominate fraud detection and supply chain logistics, people can and will learn to hack them.

International gangs will do everything they can to warp that data to hide illicit deals, grand theft, and everything in between. Even worse, you could kill someone with these tricks. If you manage to fool a self-driving car by doctoring a street sign, you could send someone hurtling into a wall and a fiery death.

Want to cover up money laundering? Corruption? Hacking AI will make it easier. People can and will learn to hack fraud detection classification systems, sentencing software, and more.

This will be the target of choice for nation states, espionage masters and black-ops squads. We now know it wasn’t some blonde CIA agent who found Bin Laden but an analytic model built by Palantir. If a foreign government wanted to attack a Bin Laden detector, they might hit the database storing the satellite images or the NSA-captured phone records.

If they manage to poison those databases the AI won’t know the difference. Remember, it has no higher reasoning of its own. It’ll happily gobble up the wrong data and start looking in the wrong places for terrorists.
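A toy illustration of that point, in Python with synthetic data: flip a slice of the training labels and the model, lacking any higher reasoning, learns the corrupted pattern just as eagerly as the true one:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)            # the true signal

y_poisoned = y.copy()
y_poisoned[:300] = 1 - y_poisoned[:300]  # attacker flips 30% of labels

clean = LogisticRegression().fit(X, y)
dirty = LogisticRegression().fit(X, y_poisoned)
# The poisoned model happily fits the corrupted data and scores
# far worse against the truth -- and it has no way to notice.
print(clean.score(X, y), dirty.score(X, y))
```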

There is also a real worry about military AI that goes well beyond Terminators. In my novel The Jasmine Wars, a young hacker creates an AI called Swarm that coordinates attacks in thousands of places at once to disguise one real attack, an approach that quickly overwhelms conventional armies.

After the war, the hacker destroys the system not because he’s worried about it growing conscious but precisely because it has no consciousness at all.

In other words, it obeys anyone with the proper keys.

AI has no morals. Skynet and the machines in The Matrix have a good point about humans: we’re kind of jerks. We don’t actually need any help killing each other; we’ve been doing it just fine since the first caveman picked up a stick to bash someone’s head in.

Military systems that simply follow orders will follow whatever morals their creators have, even if they have none.

If authoritarian regimes with automated killing machines don’t scare you more than super AI, they should.

In fact, super intelligent machines might just be a step up from our own idiocy. I welcome our robot overlords. Maybe they’ll do a better job than the current morons running the show.

That brings us to bias.

Bias

The sad fact is that most people can’t see objective reality all that clearly. They see a movie in their head about it and project that reality onto the world. So how can people define what is truly “good” for a model and what’s bad?

You and I might generally agree on what makes a good self-driving car. That’s pretty easy.

  • It shouldn’t hit anyone.
  • It shouldn’t veer off the road.
  • It should get to where you want to go.

But many other tasks are subject to the eye of the beholder and their moral compass, or lack thereof.

Take the sentencing software used by judges. You probably don’t realize it, but courts have been using algorithmic sentencing software for years.

But how do you define a criminal?

What we define as crime changes with time and with the people in power.

Chinese propaganda poster from the Mao era.

How China defines a criminal is very different from how you or I might. Criticize the government? Crime. Win too many cases against the government? Crime. The authorities routinely beat and jail lawyers who stand up for individual rights and the little people. They hunt down dissidents, even in other countries.

Should we teach AIs that too?

Actually that’s exactly what we’ll do. Bet on it. AI will help authoritarians scale their operations.

If powerful machines making decisions about your life and hunting down dissidents doesn’t terrify you then I don’t know what will. Again we don’t need diabolical super-intelligent machines to have terrible morals, we’re already awesome at having none ourselves.

Even worse are the sentencing decisions we make from criminal histories. If we let an AI chew on all the arrest records in the US for the past fifty years, what will it find?

A bunch of poor people. A bunch of African Americans.

Maybe you don’t think that’s really a bias at all. That’s just the way it is; some things will never change. But John Ehrlichman, Nixon’s domestic policy chief and the architect of the drug war, disagreed, offering a perfect definition of how to abuse the law for dark purposes. Here’s what he had to say:

“We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities,” Ehrlichman said. “We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”

So how will a sentencing algorithm make decisions after studying that history? You guessed it. When the judge is trying to figure out whether someone is a likely flight risk, who’s going to jail?

The same people we always put there.

Even worse, it will now have the illusion of authority and impartiality because the “computer said so.” People don’t question computers.

But they really better start.
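Here’s a toy sketch of how that happens, with synthetic data standing in for real records. If one group was policed more heavily, the arrest labels encode enforcement rather than behavior, and any model trained on them learns exactly that:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
group = rng.integers(0, 2, size=n)   # e.g. heavily vs. lightly policed
behavior = rng.normal(size=n)        # identical distribution in both groups

# Historical "arrest" labels: same behavior, but group 1 gets
# arrested far more often. The label records policing, not crime.
arrested = (behavior + 1.5 * group + rng.normal(size=n)) > 1.0

X = np.column_stack([group, behavior])
model = LogisticRegression().fit(X, arrested)
# A big learned weight on `group`: the bias is now "in the computer."
print(model.coef_)
```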

Job Disruption

The next major challenge: what do we do with all the folks who get automated out of a job?

Now before I go further, it’s crucial to note that job loss is the most overblown story in AI next to existential threats. Ironically, automation can create more jobs in the long term. And of course, it’s much easier to see the jobs we’ll lose than the ones AI will create. It’s hard to see what’s coming around the corner. You couldn’t explain a web programmer’s job to an 18th-century farmer because he’d have no context for it: the Internet didn’t exist, and without the Internet there are no web programmers. We can’t yet see the other inventions that will help mitigate the threat.

As stupid as we are at times, we’re also incredibly creative and resourceful. When problems arise we find solutions, somehow, someway. Necessity is the mother of invention. We will invent solutions as these things come to pass. We’ll have no choice. But the question is what kind of chaos do we have to live through before we come up with a real answer?

Make no mistake though: Automation is a real threat.

Lots of people losing their jobs at once is a recipe for disaster.

I have a story that I wrote fifteen years ago called In the Cracks of the Machine, where the AI revolution starts in fast food. One greasy spoon chain goes fully automated and the others quickly follow to stay competitive. That causes a domino effect in society and we swiftly suffer mass unemployment, which leads to rage.

“Fear leads to anger. Anger leads to hate. Hate leads to suffering,” said Yoda.

Money made worthless by German hyperinflation in the early 1920s; children played with it in stacks. From Getty Images.

Mass unemployment is a witch’s cauldron of unrest and violence. The Chinese have a proverb: “When the price of rice is high, Heaven decrees new rulers.” Germany went crazy in the 1920s and ’30s for this very reason: hyperinflation followed by massive job losses and economic stagnation. When you have a lot of angry young men on the bread lines with nothing better to do than fight, bad things happen.

Universal Basic Income is a partial answer but do you see governments that can barely agree on anything passing it any time soon? I don’t. In fact, I see them doing it as a desperate reaction, which is the exact opposite of what we need.

Don’t get me wrong. Long term I’m bullish on AI. It can and will change the world for the better. It will help us find cures to cancer and other terrifying diseases.

The Star Trek-inspired Tricorder XPRIZE for building home diagnostic machines.

It will save lives as it automates treatment recommendations and helps hospital staff triage patients. It will diagnose disease at home and that means more people will get the right care at the right moment, instead of when it’s too late.

A massive German retailer already uses AI to predict what its customers will want a week in advance. It can look at massive amounts of data no human could and spot patterns we miss. It now does 90% of the ordering without human help. The company’s factories crank at top speed, its warehouses are never full of stuff it can’t sell, and people get what they want before they even know they want it.

And to top it all off, the retailer hired more people now that it’s freed its staff from drudgery. That’s how AI can and will go in the long term. In the end, AI assistants and automation will likely lead to a boom in creativity and productivity.
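As a flavor of how such a system works, here’s a minimal, hypothetical sketch in Python: fit a regressor on past weekly sales and let it propose next week’s order. The real retailer’s system is of course far more sophisticated, and none of these variable names come from it:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
weeks = np.arange(104)  # two years of weekly history (synthetic)
# Made-up sales: trend + yearly seasonality + noise.
sales = (100 + 0.5 * weeks
         + 20 * np.sin(2 * np.pi * weeks / 52)
         + rng.normal(0, 5, size=104))

# Features: the week index and last week's sales (a simple lag).
X = np.column_stack([weeks[1:], sales[:-1]])
y = sales[1:]

model = GradientBoostingRegressor().fit(X, y)
next_order = model.predict([[104, sales[-1]]])[0]
print(f"Suggested order for next week: {next_order:.0f} units")
```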

But in the short term we might not deal with the disruption very well. And when we don’t deal with problems, nature has a way of dealing with them for us.

If you don’t patch the dam, eventually it breaks and the river drowns everyone living below it.

Backlash

The simplest problem to foresee is backlash from AI mistakes.

The fact is, humans are awful at seeing real threats. The US has spent some $5 trillion on anti-terrorism wars since 9/11, yet the chance of the average person dying from terrorism is absurdly small. On the flip side, heart disease and cancer kill 1 in 4 men in the US, and we spend only about $10 billion a year on cancer and heart disease research combined.

We’re wired to see big, flashy threats, not the tiny ones that play out over time. That’s why we’ll likely do something stupid the very first time an AI screws up and costs lives or money, and that overreaction will cripple much-needed research.

The first five-car pileup caused by a self-driving car could easily push Congress into overreach and terrible legislation. That would set the industry back years and put us behind other countries real fast. China could leapfrog the US almost overnight if we go crazy with regulation.

Can We Talk About Real Problems Now?

We have lots of serious challenges with AI. And yet we seem utterly incapable of talking about real issues. That needs to change fast because there’s lots more, such as:

  • How do we audit the decisions an AI makes?
  • Can we even “fix” an AI’s mistakes when it decides to crash into a wall? There are no explicit rules to change, so how do we make sure it doesn’t do the same thing again next time?
  • When a car crashes, who pays? Who’s responsible?
  • If you don’t get a loan from an AI can a human intervene and change its decision?

The list goes on and on. It will only grow with each passing day.

So let’s focus on issues that really matter today instead of ones that won’t matter for 50 or 100 years. Or else we won’t need Terminators to wipe us out.

Our own stupidity will do just fine.

############################################

If you enjoyed this article, I’d love it if you could hit the little heart to recommend it to others. After that, please feel free to email the article to a friend! Thanks much.

###########################################

If you love the crypto space as much as I do, come on over and join DecStack, the Virtual Co-Working Spot for CryptoCurrency and Decentralized App Projects, where you can rub elbows with multiple projects in the space. It’s totally free forever. Just come on in and socialize, work together, share code and ideas. Make your ideas better through feedback. Find new friends. Meet your new family.

###########################################


A bit about me: I’m an author, engineer and serial entrepreneur. During the last two decades, I’ve covered a broad range of tech from Linux to virtualization and containers.

You can check out my latest novel, an epic Chinese sci-fi civil war saga where China throws off the chains of communism and becomes the world’s first direct democracy, running a highly advanced, artificially intelligent decentralized app platform with no leaders.

You can get a FREE copy of my first novel, The Scorpion Game, when you join my Readers Group. Readers have called it “the first serious competition to Neuromancer” and “Detective noir meets Johnny Mnemonic.”

You can also check out the Cicada open source project based on ideas from the book that outlines how to make that tech a reality right now and you can get in on the alpha.

Lastly, you can join my private Facebook group, the Nanopunk Posthuman Assassins, where we discuss all things tech, sci-fi, fantasy and more.

