What if Intelligent Machines Tell Us Something We Don’t Want To Know?

Written by robmay | Published 2016/11/13
Tech Story Tags: politics | machine-learning | artificial-intelligence


The article below is republished, with minor modifications, from my email newsletter. If you like A.I., you should go sign up.

I want to warn you in advance that what I am about to write may cause some uncomfortable emotions. If you are easily offended, you may want to skip this section and come back next week. Today I want to push the limits with a demonstration of a coming problem in A.I.

Let us start with Donald Trump. There was an article in TechRepublic this week reporting that an A.I. predicted a Trump win, a win that all the pollsters failed to catch. Not only that, but this A.I. has now accurately predicted the Presidential election 4 times in a row. It’s an impressive feat if it’s true, but it may not be true. It could be a version of the Perfect Prediction Fallacy.

I assume, based on the election results, that at least 50% of you reading this are upset that Trump won. And since most of you are in tech, looking at those numbers it’s probably closer to 65% of you who are upset that he won. Now let me ask you a question… what happens when A.I.s start telling us things we don’t want to hear? What happens when they make predictions we don’t like, particularly if there is nothing we can do to intervene and change the outcome? More importantly…

What happens when A.I. does something that shatters the foundations of one of our core beliefs?

I know we all like to believe we are open-minded, but are we really? In previous eras, when science questioned humanity’s core beliefs (Copernicus, Galileo, Darwin), humanity didn’t deal with it very easily.

Coming back to our Trump example, what would you do if an A.I. told you in advance that Trump was going to win, yet every other data point you had said otherwise? Or what if an A.I. projected that Trump would actually be a better President, and encouraged you to vote for him when you hadn’t planned to? Would you suddenly move from feeling that A.I. is smart and useful to “these machines don’t know what they are saying”?

I spent some time this week digging through research papers, looking for machine learning models that might have predicted something very offensive to us. I found several, and chose two to present here. These are findings that have not been written about in the media because people don’t like the results and are afraid to touch them publicly. I’ll link to the papers at the end rather than now, because I want you focused on the findings first.

The first paper used machine learning to classify I.Q. test questions as biased or unbiased along racial lines. After adjusting for bias, the machine reported that Caucasians actually have a lower average I.Q. than Asians (by 11 points) or African Americans (by 8 points).

The second paper looked at genetic information associated with violent behavior and built a model that the press ignored, because the model showed that African Americans have a genetic makeup that predisposes them to three times the likelihood of violent behavior compared to other races.

Did you feel differently as you read each conclusion? What assumptions did you instantly make? Now, what if I told you that I am actually A/B testing this, using comments as a measure of open-mindedness, and that this article was published twice, with two different titles, but with the racial descriptions in the papers above reversed in each case?

Do you feel better or worse knowing that, if you were offended, it was just part of an experiment? Just pause and think for a minute, because this is important. There is a book I read recently called But What If We’re Wrong? The theme of the book is that, if you ask people whether there are deep core beliefs we hold about the world that are probably wrong, they will say “of course, every generation has them.” But if you drill into any particular belief, in every single case they will explain why THAT BELIEF can’t be the one that is wrong. Climate change? No way. Democracy? Nope. Equality? Hell no. So while we generally believe that we are probably wrong about some big things, we won’t nominate any specific belief for that category.

Now, if we develop machines that are smarter than us, they will inevitably prove us wrong about some things. What if the things we are wrong about are things we don’t want to give up believing? What if a neural net could predict who makes good, smart, thoughtful decisions and who doesn’t, and suggested we limit voting to only the thoughtful camp? And what if it could show that the country would end up better off for everybody, across all races, genders, and socioeconomic classes, if we did that? Would we ban a large chunk of the country from voting?

What if machines told us things about race, gender, equality, democracy, etc., that we didn’t want to know? What if machines proved that humans aren’t even special, and that some form of statistical determinism is true?

I can’t predict what machines will learn or understand about society that we don’t, but I feel comfortable predicting that they will probably find some truths that make us uncomfortable.

So about those research papers… they don’t exist. There was no A/B test, and the conclusions I referenced above were entirely made up. My point (which I hope worked for at least some of you) was to set up a believable argument with a conclusion you didn’t believe, or wouldn’t want to believe. My point was to generate the kind of emotion that is inevitably coming as machines drive more of our world and we understand their decisions less. My point is that these machines could end up putting stress on some of the core ideas that define enlightened humanity today, which raises the potential for confusion, abuse, or something even worse. Those of us working in this field should tread carefully.

Something to think about.
