The Questions We’re Going to Be Asking When Artificial Intelligence Takes Over

Written by ava411 | Published 2018/02/10
Tech Story Tags: artificial-intelligence | machine-learning | life | technology | philosophy


An overview of the economic and social quandaries likely to dominate our collective attention in the years ahead.

Machine Learning and Why It Matters

Artificial Intelligence (AI) is the broader concept of machines carrying out tasks in ways that we humans would consider ‘smart’. Machine Learning (ML), a subset of AI, refers to machines using data to make smart decisions — that is, decisions they were not explicitly programmed to make.
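To make the distinction concrete, here is a minimal sketch in Python (the language and the scikit-learn library are my choices for illustration; the loan scenario and all numbers are invented). The first function is classic hand-written logic; the classifier derives its own decision rule from examples:

```python
# Hand-coded logic: every decision rule is written explicitly by a human.
def approve_loan_rule_based(income, debt):
    return income > 50_000 and debt < 10_000

# Machine learning: the decision logic is inferred from labeled data.
from sklearn.tree import DecisionTreeClassifier

X = [[60_000, 5_000], [20_000, 15_000], [80_000, 2_000], [30_000, 12_000]]
y = [1, 0, 1, 0]  # 1 = approved, 0 = rejected (toy labels)

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[55_000, 4_000]]))  # a decision nobody hard-coded
```

The learned model will happily produce answers for inputs its creators never anticipated — which is exactly the property the rest of this article worries about.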

Today, AI is growing faster than ever. A simple Google search lists the various industries AI is threatening to disrupt — finance, education, science, law… AI systems have beaten a Grandmaster at chess and, more recently, the world’s best players at Go. Watson, an AI system designed by IBM, beat two Jeopardy! champions.

Achievements and milestones of these super-complex technologies make the headlines of TechCrunch and Quartz articles, scoured by scholars, technology aficionados and VCs looking for ‘the next big thing’. But what happens when years of gradual improvements, development, testing, and innovative cost-reduction strategies push these technologies into the realm of mainstream consumer and enterprise affordability? What if companies could reduce their variable labor costs by making a one-time investment in an AI system to be their accountant, webmaster, content creator, designer and salesperson?

What will be the impact on the economy? Sociological frameworks and systems as we know them? The world? This article explores the economic and social implications of AI systems, and what the future issues of legal and economic institutions could look like.

Short-term Implications: The Economy

Historically, tech has been used as an instrument to enable entrepreneurs to disrupt specific industries. The further we climb the exponential growth curve, the more powerful the technology we create. And the more powerful the technology, the faster the rate of economic disruption within and across industries. As time goes by, and technology gets more powerful, this incessant, industry-wide economic metamorphosis we label ‘disruption’ could perhaps, one day, lead to the large-scale implosion of entire economic systems.

Consider a concrete example: the increasing pervasiveness of devices and digital services in lieu of objects and physical places. A GPS is a map; your smartphone, a portable camera, entertainment system and planner. Kindles and e-readers are bookstores and books, and Google is one giant, global library. Granted, technology has been replacing objects for a while now. But soon, it will start replacing people.

Software is eating the world, and soon, it will start eating us too.

In fact, it already has. We’ve moved past industrial assembly lines that assemble cars and package boxes — machines today are driving the cars and designing the boxes. They’ve begun learning how to code and create their own applications.

And businesses are rapidly transforming their models to accommodate these changes. Panera Bread has been replacing cashiers with self-order kiosks across its stores; Best Buy’s Chloe is a robotic arm that responds to customers’ movie requests by finding and delivering items from the appropriate shelves.

A recent study by economists Daron Acemoglu and Pascual Restrepo found that each additional robot in the US economy reduces employment by roughly 5.6 workers. Indeed, if Kurzweil’s Law of Accelerating Returns holds and technology keeps growing exponentially, specialized intelligence systems could come to dominate entire industries within a decade, leaving much of the world’s workforce unemployed.

Will the business cycle remain a sound model if households no longer provide labor and other factors of production to firms? The circular flow of income ceases to function if there is simply no income flowing into households because masses of people don’t have jobs. Keynesianism was developed in the 1930s, and monetarism in the decades that followed — both within an entirely different technological landscape. These frameworks, however widely adopted, are not equipped to handle the colossal upheaval that could result from unprecedented, and imminent, technological progress.
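The worry can be made concrete with a toy model. The sketch below is not a real macroeconomic model — the numbers, the consumption rule, and the residual-demand term are all illustrative assumptions — but it shows how the circular flow starves once automation keeps eating into the labor share:

```python
# Toy circular flow of income: firms pay wages, households spend them,
# and that spending becomes firm revenue in the next period.
# 'automation' is the fraction of paid labor displaced each period.

def simulate(periods=8, output=100.0, labor_share=0.6, automation=0.15):
    for t in range(periods):
        wages = output * labor_share         # income flowing to households
        consumption = wages * 0.9            # households spend most of it
        output = consumption + output * 0.1  # next period's revenue (toy)
        labor_share *= 1 - automation        # robots displace paid labor
        print(f"t={t}: wages={wages:6.1f}  output={output:6.1f}")

simulate()
```

Run it and both wages and output spiral downward: with less wage income there is less spending, and with less spending there is less revenue to pay wages from. The point is not the numbers but the feedback loop.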

If we don’t update our systems in time, they will crash.

Unless we devise an alternative system before intelligence systems integrate into mainstream society, governments will struggle to support the masses of unemployed workers, and will have to divert valuable resources into training programs designed to make workers employable in the ‘updated’ version of the world.

Long-term Implications: Society, Ethics & Integration

This section explores the concerns associated with the development of Artificial General Intelligence (AGI). AGI systems are not specialized for a narrow set of tasks — they possess world-knowledge, or common sense, and the capacity to carry out higher-order mental functions, like humans. What does this mean for the world?

Consider a variation on the Trolley Problem. A self-driving car has to decide whether to crash into two kids crossing the street and save the owner in the car, or to save the kids and crash into a tree, killing the owner. From a utilitarian perspective, saving two children instead of one owner would be optimal. But would you buy a car that is designed to kill you in certain extreme circumstances, in the name of social virtue?
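Part of what makes this unsettling is how trivially the utilitarian calculus can be written down. Everything in the sketch below — the maneuver names, the fatality estimates — is hypothetical, invented purely for illustration:

```python
# A naive utilitarian collision policy: choose the maneuver with the
# fewest expected fatalities. All names and numbers are hypothetical.

def choose_maneuver(options):
    # options maps each maneuver to its expected number of deaths
    return min(options, key=options.get)

scenario = {
    "continue": 2,  # hit the two children crossing the street
    "swerve":   1,  # hit the tree, killing the owner
}

print(choose_maneuver(scenario))  # -> 'swerve': the car sacrifices its owner
```

A few lines of logic are enough to encode a policy that sacrifices the buyer. The hard part was never the code; it is deciding whose values the numbers encode.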

What, then, if the self-driving car has been programmed to save its owner at all costs, but uses its self-learnt logical frameworks to override that programming, choosing to crash the car in order to save the children? Would you, as a consumer, pay for a data-driven AGI that uses a combination of self-devised logical frameworks and its programmers’ implicit biases as a metric for how and when to kill you?

Consider the criminal justice system. If an AGI commits a crime, who will be held liable — the programmer that created it, or the company that sanctioned its creation? Or the government of the country that permitted the industrial development of such AGIs in the first place? Or should the AGI itself be punished, since it is a thinking, rational, reasoning entity? Indeed, if it is a ‘thinking’ creature, should it not be ‘punishable by law’?

What, then, if we do hold the AGI liable? Is the solution to imprison it? The purpose of correctional facilities for humans is to correct social deviance and ensure it doesn’t recur. What if an AGI’s ability to commit a future crime of a similar nature could simply be prevented by reprogramming it, avoiding the criminal justice system altogether? Logically, this is sound: if a robot makes a mistake, you fix the error and release the new version with bug fixes. Perhaps, then, it should not be incarcerated for murder.

But there is an alternate perspective to be considered: if it is endowed with the same social liberties as a human, should it not be subject to the same social contract, and thus, surrender an equal degree of control to the State for the purposes of social regulation? By this line of reasoning, indeed, it should be incarcerated.

However, an AGI is (likely to be) cognitively different from a human; could certain aspects of its social functioning be attributed to these cognitive differences? If so, it would be unjust and inequitable to generalise human-developed social frameworks onto AGI populations. When conducting behavioral research in humans, cross-cultural differences are imperative to consider, because a difference in culture indicates a difference in social norms, and consequently acceptable behavioral variation between culturally-distinct populations. Similarly, different ‘cognitive norms’ could have implications for behavior, and must therefore be considered when appraising it.

How will we ever be able to quantify whether robots are operating at the same intellectual level as humans? This is especially tricky to appraise if we differ from them in biological structure and cognitive functionality; our evaluation as a species is likely to be subject to human chauvinism.

Consider the psychological whiplash humans will face as they realize they are not alone — as general intelligence becomes increasingly prevalent in their daily lives. Will we become weaker as we become increasingly dependent on the machines we create?

What Can We Do About It?

Today, only political authorities have the power to impose regulatory constraints on the development of new technologies — and they will act only if we stir up enough public outrage to prompt changes in regulation. That takes time, and a series of wake-up calls.

But until then, it is the responsibility of the innovators, the creators, to take a step back and think. To stop writing code, to stop testing algorithms, to stop tuning the weights of neural networks, until they can answer the ethical and philosophical questions raised by the new entity they are helping bring to life.

Email me at arm4pk@virginia.edu if you’d like to continue the conversation!

