If AI Becomes a Threat, Can We Just Pull the Plug?

Written by allan-grain
Tech Story Tags: artificial-intelligence | ai-technology | llms | future-of-ai | ai | ai-trends | technology | technology-trends

TL;DR: The question of Artificial Intelligence (AI) becoming a major threat to humanity is serious and should not be discounted. But many doomsayers forget that, unlike humans, computers require some form of power and connectivity to be effective or threatening. Geoffrey Hinton, an AI expert, has warned of the technology's dangers.

The question of Artificial Intelligence (AI) becoming a major threat to humanity is serious and should not be discounted. It is not difficult to imagine how such a scenario could unfold. On the other hand, many doomsayers forget that, unlike humans, computers require some form of power and connectivity to be effective or threatening. Pull the plug and the problem is solved.

For the sake of argument, let us construct a scenario. Imagine Russian President Vladimir Putin collaborating with Iran to build a suicide drone swarm 10,000 strong. Each drone is driven by AI and indiscriminately targets infrastructure as well as people. The algorithms driving each drone dictate that it observes, evaluates, and acts on its own decisions, independently of the other drones and of its programmers. Sent as a swarm into a populated area such as a town or city, the drones could wipe out entire families and communities. This frightening scenario is not far-fetched and could result in thousands of deaths. If this is the future of warfare, the argument that AI is always stoppable falls apart: an opposing military would have no practical way to halt such a swarm before it devastated a city's population.
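To make the "pulling the plug" problem concrete, here is a minimal, purely illustrative sketch of the observe-evaluate-act loop such an autonomous agent is described as running. Everything in it is hypothetical: the AutonomousAgent class, its method names, and the toy "actions" are invented for illustration, not drawn from any real drone software. The only point it demonstrates is that once the decision loop runs entirely on the device, there is no central server or connection left to sever.

```python
import random


class AutonomousAgent:
    """Toy model of an observe-evaluate-act loop.

    Purely hypothetical; all names and behaviors are illustrative.
    """

    def __init__(self, agent_id: int):
        self.agent_id = agent_id
        self.active = True

    def observe(self) -> dict:
        # Stand-in for onboard sensors; returns a fake local reading.
        return {"obstacle_ahead": random.random() < 0.3}

    def evaluate(self, observation: dict) -> str:
        # Decision logic lives entirely on the agent itself --
        # no network call, no query to an operator.
        return "turn" if observation["obstacle_ahead"] else "advance"

    def act(self, decision: str) -> None:
        print(f"agent {self.agent_id}: {decision}")

    def run_step(self) -> None:
        # The full loop: each step is self-contained, so shutting down
        # a central server does nothing to stop an agent mid-mission.
        if self.active:
            self.act(self.evaluate(self.observe()))


# A "swarm" is just many independent loops; there is no single plug to pull.
swarm = [AutonomousAgent(i) for i in range(5)]
for _ in range(3):  # three simulated time steps
    for agent in swarm:
        agent.run_step()
```

Because each agent's loop is self-contained, disabling any one machine, or any central controller, leaves the rest of the swarm running; that is the design property that makes the scenario so hard to stop.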

Geoffrey Hinton, an AI expert, has warned of the technology's dangers.

“The 75-year-old professor is revered as one of the ‘godfathers’ of AI because of his formative work on deep learning – a field of AI that has driven the huge advances taking place in the sector.” – Irish Times

Hinton comes from a prestigious family in the world of computing. He is the great-great-grandson of the British mathematicians Mary and George Boole, the latter of whom invented Boolean logic, the theory that underlies modern computing.

Hinton resigned from Google so that he could speak freely about AI and its potential dangers. His resignation followed a series of groundbreaking AI launches over the preceding six months, starting with Microsoft-backed OpenAI’s ChatGPT in November 2022 and followed soon after by Google’s own chatbot, Bard.

Hinton, along with others, voiced concerns that the race between Microsoft and Google would push forward the development of AI without appropriate guardrails and regulations in place. Several governments and activist groups have expressed the same concern.

Billionaire Elon Musk, the owner of Twitter (now X Corp), Tesla, and SpaceX, has also warned of the dangers of AI, though he is set to launch TruthGPT, another AI tool. Musk has said in the past that AI is a “danger to the public.”

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production,” Musk is quoted as saying, “in the sense that it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction.”

A recent New York Times article explored the dangers of AI, framing the risks around neural networks and large language models, or L.L.M.s. The short-term risk involves disinformation, while the medium-term risk involves the loss of jobs to AI.

According to the article, “A paper written by OpenAI researchers estimated that 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by L.L.M.s and that 19 percent of workers might see at least 50 percent of their tasks impacted.” The long-term risk, according to the article’s authors, relates to loss of control: some people believe that AI “could slip outside our control or destroy humanity.”

Yet other experts quoted in the same article say that scenario is wildly overblown. The fictitious wartime event described above is thus perhaps not yet achievable and therefore does not yet pose a real threat. AI technology is moving fast, however, and countries will need laws and regulations to control its development and deployment. Italy has already temporarily banned ChatGPT while it examines the technology's risks, and China has been widely reported to use AI for surveillance. As AI proliferates across industries, it will become harder to control and regulate. Pulling the plug, in fact, will not be so easy.

What do you think? Let me know in the comments!



Written by allan-grain | Avid reader of all things interesting to mankind. Futurist, artist, pianist, realist.
Published by HackerNoon