Is Artificial General Intelligence on the Horizon or is it Already Here?

Written by mrdbourke | Published 2018/03/27

And how can we spot other technological elephants in the room?

I’ve been a big believer in the idea that we’re not going to accidentally invent artificial general intelligence (AGI).

What’s AGI? When you walk into a room, you don’t necessarily have to be told what to do in there; you can figure it out from your environment. Reproducing this ability with technology is what many consider to be AGI.

An event that monumental will take the combined and sustained effort of many. But it may have already happened. We just don’t know it yet.

Hidden in plain sight

“There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will.”

— Albert Einstein, 1932.

Later that year, Cockcroft and Walton, working under Rutherford at the Cavendish Laboratory, did exactly that.

“I think there is a world market for maybe five computers.”

— Thomas Watson, president of IBM, 1943.

No comment.

History is littered with hundreds of examples of unseen technological capability like this. Even more thought-provoking, these predictions were made by masters at the top of their fields.

Let’s step even further into the past.

Close to six thousand years ago, the Mesopotamians invented the wheel (or so we assume). Why, then, did it take so long for someone to think of using it to make moving luggage through an airport a little more bearable?

It amazes me to think that, for decades, people lugged their heavy suitcases across the world by hand. Luckily, I was born after that era.

But even the Mesopotamians themselves were blind to the practical applications of their invention. The wheel began its journey on children’s toys. Instead of putting wheels under their giant slabs of stone, the Mesopotamians moved them with manpower and sweat. All the while, their children were happily rolling their toys along the ground.¹

What else are we missing out on?

Slowly but surely

We’ve established that even some of the brightest minds make blunders from time to time, but how should we use this knowledge?

There’s no doubt about the impact a general artificial intelligence could have on society. We dream of it in science fiction movies and use scenes from those films as hooks to lure readers into our articles.

What’s not fiction is what’s happening now. More and more tasks are being automated. This we know.

What will be automated next? This we don’t.

If the past is an indicator of the future — and history does repeat itself — anyone who claims to know what’s next is still guessing.

As Steve Jobs so famously said:

“You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something — your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.”

We can use the knowledge of today to fuel our foresight. But as we’ve seen, our foresight is cloudy. Only in hindsight does our vision clear up.

We can expect change to occur, we can expect to be wrong about some things, we can expect to miss obvious solutions sitting right in front of us. We can use these expectations as preparation for what’s next.

In the case of technology, things move fast, but there’s a difference between speed and velocity. Speed is simply distance over time. Velocity is speed with a directional component. When we think about speed, the direction is often forgotten.
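To make the distinction concrete, here’s a rough sketch in Python (the numbers are made up purely for illustration): speed is a single number, while velocity also remembers which way you’re headed.

    import math

    # Displacement over one hour: 3 km east and 4 km north (made-up numbers).
    dx_km, dy_km = 3.0, 4.0
    hours = 1.0

    # Speed: how much ground was covered, direction ignored (a scalar).
    speed_kmh = math.hypot(dx_km, dy_km) / hours   # 5.0 km/h

    # Velocity: the same motion, but with the direction kept (a vector).
    velocity_kmh = (dx_km / hours, dy_km / hours)  # (3.0, 4.0) km/h

    print(speed_kmh, velocity_kmh)

Two people can share the same speed and end up in completely different places. It’s the direction that decides where you finish.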

“We’ve got to move fast, things are changing!”

But where?

Use your current workplace or role as an example.

What changes have happened recently?

How fast were you able to adapt?

If the changes arrived in a hurry and without adequate explanation, adapting would’ve been hard. You’d end up moving fast but without direction.

Flip this scenario and apply the models we’ve discussed above.

“Okay these changes are coming in, but I’ve been expecting them.”

“I can’t believe I didn’t see that coming. It doesn’t make any sense now but I’ll be able to connect the dots in the future.”

All of a sudden, you have direction. You knew they were coming. No longer are you working with just speed; you’ve got velocity.

What is? versus What if?

Okay, so where does AGI come into all of this?

The truth is, this kind of thinking can be applied to any kind of technology, not just what’s splashed across the headlines.

Artificial intelligence is not new. The abacus first appeared around 5,000 years ago, and I’m sure people back then wondered what else they could do with their magical counting machine. Only now have the technological means become available for our science fiction stories to become reality.

What if AGI is already here?

What if we’re missing something?

What if our children are playing with the AGI equivalent of the wheel in their toys?

Why don’t we try to ask Siri or Google to answer these questions?

We can try, but they don’t provide very good answers. The reason? Much like us, even the smartest intelligent agents don’t deal well with what if questions.

Hey Siri, what is the population of France?

Okay Google, what is the square root of 64?

Our current technologies are very capable of answering what is questions, in some cases better than we can.

But what is questions don’t lead to anything new. They’re a way of regurgitating existing knowledge. What if questions spark our imagination. They let us build stories of the future and think about how stories of the past could’ve been different.
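To make that concrete, here’s a toy sketch in Python (my own illustration, and nothing to do with how Siri or Google actually work): a what is answerer built from a lookup table can only hand back facts it already holds, and a what if question leaves it stumped.

    # A toy "what is" answerer: it can only regurgitate facts it already has.
    # The facts and the matching rule are deliberately simplistic.
    KNOWN_FACTS = {
        "what is the population of france": "roughly 67 million people",
        "what is the square root of 64": "8",
    }

    def answer(question: str) -> str:
        key = question.lower().rstrip("?").strip()
        return KNOWN_FACTS.get(key, "Sorry, I don't have an answer for that.")

    print(answer("What is the square root of 64?"))  # "8"
    print(answer("What if AGI is already here?"))    # "Sorry, I don't have an answer for that."

Real assistants are far more sophisticated than a dictionary lookup, of course, but the shape of the limitation is similar: retrieving an existing answer is a very different task from imagining a new one.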

So where’s the elephant?

I’d argue that most inventions happen by accident. Think about the last time you came up with an idea. Where did it come from?

Sure, there’s the background information, and you could probably trace the idea back a few steps, but go back far enough and the likely answer is “I have no idea.”

After running the same trail for 10 years, you’ll eventually trip over a log. The log will seem to have come out of nowhere, but you wouldn’t have tripped unless you were on the path. It’s the same with ideas: after working on a problem for long enough, solutions often appear out of nowhere.

Any wise person will tell you that what they don’t know far outweighs what they do, and that what they do know is about as stable as a termite-infested wooden house. Even the most fundamental truths and beliefs are open to being challenged.

This idea excites me. There is far more we don’t know than we do. Life becomes a whole lot more fascinating when you think about it this way.

As for AGI, it may be right under our noses. A couple of monks tinkering around with some computers may bring it to fruition (think Mendel and his peas).

There are many more secrets we are yet to discover. AGI is only one fish in a vast ocean of technological progress.

By accepting that we don’t really know what’s next, and by expecting to miss things sitting right in front of us, our foresight becomes a little clearer.

What if the elephant is in the room but we’ve got our eyes closed?

Photo by Harshil Gudka on Unsplash

¹ Excerpt from Antifragile by Nassim Nicholas Taleb.

Watch me: YouTube

Updates of my work: www.mrdbourke.com/newsletter

Connect: LinkedIn

