A computer will (probably) eradicate humanity

Written by george3d6 | Published 2018/08/02
Tech Story Tags: artificial-intelligence | technology | nuclear | military | safety


But it won’t be artificially intelligent

You can read this article on my own blog, if you prefer that to medium.

There are thousands of computers that could destroy human civilization as we know it, if not eradicate the human race altogether.

Those computers aren’t hosted in gargantuan data centers, filled with custom-made Nvidia Titan Vs and running the C-LTSM-GAN described by “Trendy ML whitepaper Q3 2018”.

They are old IBM computers, from the 60s and 70s, that reside in old dusty rooms and aboard nuclear submarines and bombers. These old computers run programs written in Fortran 77 or Ada 83 (if we are lucky) or some god-forsaken assembly language.

They control 14,905 nuclear missiles.

Focusing on a real issue

An algorithm powerful enough to eradicate humanity, courtesy of some army intern from the 70s

Quite frankly, I don’t think many people have the expertise required to make these kinds of systems safer.

Determining how this software and hardware should be refactored, rebuilt and rewritten is the definition of a hard job.

Maybe the primitive machines with punch-card era software hosted on giant floppy disks are more than enough?

However, I feel there should at least be some discussion around these computers that, the very moment you are reading this, could put an end to the last 200,000 years of human existence.

These computers can decide if we, as a species, live or die. Our civilization, and possibly our species, exists due to their cold, unthinking mercy.

What we decide to do (or not do) about these computers will affect that outcome.

However, I haven’t seen a single newspaper article or panel of global experts gathered to discuss the subject.

It’s never been casually mentioned over a beer with my more “tech-savvy” friends.

It hasn’t found its way into pop culture or memes. To put it succinctly, these killer machines are not part of the zeitgeist.

But what about the DANGERS OF AI!!!?

I’ve seen many safety discussions centered on AI, all the bloody time. I can’t open a video without some guy blurting out some bullshit about the trolley problem and driverless cars, or the paper clip maximizer (a kind of millennial Grey goo), or robots extinguishing human life to prevent unhappiness.

I can’t fathom the leap from the childish curve-fitting algorithms we have right now to the Übermensch robot.
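For readers wondering what “curve fitting” means here: at bottom, most of today’s machine learning amounts to adjusting parameters until a function matches data. A minimal illustrative sketch (mine, not the article’s) of the simplest case, an ordinary least-squares line fit, shows how unmagical the idea is:

```python
# "Curve fitting": find the line y = a*x + b that best matches noisy data.
# Today's ML is this same idea with vastly more parameters, not a mind.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least squares: slope = cov(x, y) / var(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]  # roughly y = 2x + 1, with noise
a, b = fit_line(xs, ys)         # a ≈ 1.96, b ≈ 1.10
```

Scaling this recipe up is impressive engineering, but there is no obvious path from “minimize an error term” to an agent with goals of its own.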

This superhuman AI is the latest in a group of theories proposed by: populists, speech-givers, fear-mongers, clueless philosophers, professors who want to appear on TV… and other creatures trying really hard to be a living argument against the concepts of tenure and free speech.

Maybe I’m short-sighted, maybe I haven’t drunk enough of Sutskever’s snake oil, maybe the doomsayers are right.

Wouldn’t that be even more of an argument for securing, updating and publicly auditing the software that controls our weapons of mass destruction?

But then we’d be dealing with a real problem, one that has solutions and requires work, one that requires engineers instead of bikeshed designers.

So I think it’s unlikely that this discussion will ever happen; people like to ignore the literal armies of killer robots that already exist, focusing instead on imaginary solutions for the imaginary dangers of the future.

Published by HackerNoon on 2018/08/02