Three AI Ethics Questions That Soon Will Need Real Answers

Written by techlooter | Published 2019/11/23

TLDR: In 1950, Alan Turing first proposed a means to determine whether a machine had developed the ability to think independently. Almost immediately afterward, researchers, journalists, and politicians began to ponder the implications of artificial intelligence (AI). Today, the idea of ethics in AI has roared back into the public consciousness, and it's clear that there are difficult questions to answer about how we will mutually agree to both use – and not use – AI technology. To help get a specific and productive conversation started, here are three AI ethics questions we should settle as quickly as possible.

In 1950, Alan Turing first proposed a means to determine if a machine had developed the ability to think independently, giving rise to the concept that we now recognize as artificial intelligence (AI). Almost immediately afterward, researchers, journalists, and politicians began to ponder the implications of such a technology, wondering what sort of ethical constructs would be necessary to regulate it.
For decades, those discussions remained mostly theoretical. After all, nobody had yet come anywhere close to a real, functional AI system. Today, however, with developers closing in on that goal, the idea of ethics in AI has roared back into the public consciousness. Technology CEOs are voicing concerns over what unchecked AI might do to society, ethics is a frequent topic at today's AI conferences, and even the Pope has weighed in on the subject.
It's clear that there are some difficult questions that need to be answered about how we, as a society, will mutually agree to both use – and not use – AI technology. To help get a specific and productive conversation started, here are three AI ethics questions we should settle as quickly as possible.

How to Handle AI-Driven Crime

In an age where digitization has spread to almost every industry imaginable, and where threats to both data and infrastructure are everywhere, AI poses some unique ethical questions. Already, researchers (and criminals, too) are exploring the use of AI as a means of creating a new generation of malware threats. That raises an obvious question: will AI developers be held responsible if an AI system they develop is used for some nefarious purpose and causes harm?
This is a difficult ethical question because it's easy to imagine a developer creating an AI system for a completely harmless or even beneficial purpose, only to see it used to do harm. It's a situation that's already happening, as AI-powered deepfake photos and videos are being used to target both companies and individuals for harassment or worse.
Right now, there's no liability for developers who create the systems that do this – nor is there a requirement that they bake in ways to spot the fakes or, in the case of malware, disarm the threat. This is probably the most pressing AI ethics issue today, as it's already becoming a problem in real time.

Preventing Bias in AI-Powered Systems

Another thorny ethical issue connected to AI is an extension of a problem we humans suffer from, too: how we're going to handle the very real likelihood that AI systems will develop biases and discriminate against both groups and individuals. This is another problem unfolding in front of our eyes.
There are already documented instances of AI systems learning human biases from the data that developers use to train them. There's also evidence that today's AI-powered facial recognition technology is biased against women and people of color – and such systems are in wide use by law enforcement agencies anyway. All of this means we need new methodologies and testing requirements to prevent bias from creeping into AI systems, along with oversight mechanisms that keep such flaws from affecting people in real-world scenarios.

Managing Labor Displacement Due to AI

The last AI ethics issue is one that gets plenty of attention from the media and from people all over the world: how AI is going to lead to job displacement as it comes to replace human workers across a wide range of occupations. From an ethics standpoint, this raises a twofold question.
The first part is whether developers of AI automation systems have some ethical obligation to minimize the impact of their work. The second is whether society as a whole bears some responsibility for making sure that AI-driven displacement doesn't exacerbate economic inequality.
From the developer's point of view, the answer to the responsibility question is murky at best. After all, it's impossible to predict the impact that any individual AI platform would have on the labor market. As for society's role, however, the answer is a bit clearer.
That responsibility is one of the big reasons there are so many experiments with universal basic income (UBI) schemes going on around the world, and why so many technology industry heavyweights believe such systems are inevitable. Wherever one falls on the issue, however, it's a question that's going to need an answer – and sooner than many believe.

The Ethical Implications Are Vast

The bottom line is that the ethical implications of the continued development of AI are enormous, and the three issues discussed here are just the tip of the iceberg. Going forward, scientists, legislators, and society as a whole are going to have to confront these and many other unforeseen ethical quandaries related to AI.
Failure to do so could upend much of how the current social contract works and create the real possibility of the kind of dystopian AI-driven future that so many science fiction authors have imagined.
So, for all involved, now's the time to get the discussions started – lest the genie get out of the bottle and become a monster we cannot control.
