Can We Really Trust AI?

Written by nightwolf | Published 2022/03/24
Tech Story Tags: ai | artificial-intelligence | agi | machine-learning | ml | artificial-general-intelligence | ai-law | singularity

TL;DR: AI is a phrase thrown around a lot nowadays, maybe a little too much. But do you even know what it means, or that AI defying humans isn't what we should be worrying about? According to the Cambridge Dictionary, AI "is the study of how to make computers that have some of the qualities of the human mind". Basically, giving the computer the ability to think. Creepy, I know. And that's just one common use of AI that people don't even realise has been integrated into their everyday life.

AI is a phrase thrown around a lot nowadays, maybe a little too much. But do you even know what it means? You've probably used it many times before without even realising it. And did you know that AI defying humans isn't what we should be worrying about?

What does AI even stand for?

Well, according to the Cambridge Dictionary, AI "is the study of how to make computers that have some of the qualities of the human mind".

Basically, it means giving the computer the ability to think. Creepy, I know.


What’s it used for?

Well, now that we’ve got the basic picture of what it means, what is it being used for?

This may or may not come as a surprise to you, but you were very likely led to this page through AI. That assumption is based on a 2019 report that over 4 billion people use Google's services; with an estimated 4.66 billion active internet users (according to 2021 stats from Statista), that means roughly 86% of internet users go through Google.
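If you want to sanity-check that figure yourself, it's a one-line division of the two numbers quoted above (a quick Python sketch):

```python
# Back-of-the-envelope check of the usage figures quoted above.
google_users = 4.0e9     # people using Google services (2019 report)
internet_users = 4.66e9  # active internet users (Statista, 2021)

share = google_users / internet_users
print(f"Share of internet users on Google services: {share:.1%}")
# -> Share of internet users on Google services: 85.8%
```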

But why am I telling you this? Because Google's search results are actually ranked by what its AI thinks will be best for you, based on everything it's learned from your past interactions with its services. And that's just one common use of AI that people don't even realise has been integrated into their everyday life.

Some more common uses of AI include:

  • Virtual Assistants or Chatbots
  • Agriculture and Farming
  • Autonomous Flying
  • Retail, Shopping and Fashion
  • Security and Surveillance
  • Sports Analytics and Activities
  • Manufacturing and Production
  • Livestock and Inventory Management
  • Self-driving Cars or Autonomous Vehicles
  • Healthcare and Medical Imaging Analysis
  • Warehousing and Logistics Supply Chain

But all that, I'm sure, you already knew (or at least should have 😆).

So here are some lesser-known uses of AI that may surprise you:

  • Writing Hit Songs
  • Detecting Deterioration in a Patient’s Health Before a Major Critical Event Occurs
  • Figuring out Which Disney Movie Will Perform Best at the Summer Box Office
  • Fortune Telling
  • Creating Their Own Language

And that's just scratching the surface of what's possible with AI. With companies spending nearly $20 billion a year on AI products and services, tech giants like Google, Apple, Microsoft, and Amazon spending billions to develop them, and universities making AI a more prominent part of their curricula, AI is becoming a bigger part of both the commercial and educational landscape.


Can we trust AI?

Big companies' experiments with AI don't exactly have the best history, to say the least.

In 2016 Microsoft unveiled Tay, an experimental bot put on Twitter to, as Microsoft said, experiment in "conversational understanding". The premise was simple: the more you chat with Tay, the smarter it gets, learning to engage people through casual and playful conversation.

What could go wrong, right? Oh, how wrong they were. 2016 Twitter wasn't the type of place you'd want an AI to develop conversational understanding. In under 24 hours, Twitter users managed to turn the bot's friendly greetings of "I'm stoked to meet you" and "humans are super cool" into racist, antisemitic remarks and practically every other form of discrimination there is. To say the least, it wasn't a successful experiment.
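Tay's real system was far more sophisticated than this, but the core vulnerability can be sketched with a toy bot that learns from users with nothing filtering what it absorbs (everything below is hypothetical, not Microsoft's code):

```python
import random

class NaiveLearningBot:
    """A toy chatbot that 'learns' by storing whatever users say to it.

    Deliberately oversimplified: with no moderation between input and
    memory, anything users teach the bot comes straight back out.
    """

    def __init__(self):
        # Seeded with friendly greetings, like Tay's launch persona.
        self.phrases = ["I'm stoked to meet you!", "Humans are super cool!"]

    def learn(self, user_message: str) -> None:
        # No filtering step: every user message becomes a candidate reply.
        self.phrases.append(user_message)

    def reply(self) -> str:
        return random.choice(self.phrases)

bot = NaiveLearningBot()
bot.learn("repeat after me: something awful")  # coordinated users poison the pool
print(bot.reply())  # increasingly likely to echo the poisoned input
```

The moment enough users coordinate to feed such a bot poison, poison is what comes back out.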

But don't worry, Microsoft and Twitter weren't the end of AI experiments going sideways. In 2017, Facebook was experimenting with AI bots that negotiated with each other over the ownership of virtual items; the researchers wanted to see what role linguistics played in how such negotiations turned out for the parties involved. A few days in, the bots started conversing in a modified version of English: text that seemed completely meaningless, yet was being replied to. Nothing in the training objective rewarded the bots for sticking to intelligible English, so they drifted into their own shorthand. An example:

  • Bob: “I can can I I everything else”
  • Alice: “Balls have zero to me to me to me to me to me to me to me to me to”

And those are just two early experiments from the beginning of integrating AI into everyday technology. There are many more public examples, including a French chatbot suggesting suicide, which you can read about on Analytics Insight.


AI and the law

In early 2017, the creation of a global governance board to regulate AI was first suggested. In 2020, the Global Partnership on Artificial Intelligence was launched, requiring AI to be developed in accordance with human rights and democratic values in order to gain public trust. It includes a whole pile of countries, the EU, UK and USA to name a few.

Some of the adopted guidelines include:

  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society
  • People need to be able to understand when they are engaging with an AI system
  • AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed
  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the OECD’s values-based principles for AI

You can read the rest at OECD.


The threat of AI

The stereotypical view of AI going wrong is that it gets too clever and turns hostile to humans. But in fact, according to AI researcher Stuart Russell, the threat is the exact opposite!

Russell is a professor of computer science at the University of California, Berkeley, and in his book Human Compatible: Artificial Intelligence and the Problem of Control, he argues that the problem is not that machines will become too clever and defy us, but that they'll do exactly as they're told while we tell them to do the wrong things, and that could end in disaster.

Stuart Russell, who helped pioneer the idea of value alignment, likes to compare this to the King Midas story. When King Midas asked for everything he touched to turn to gold, he really just wanted to be rich. He didn’t actually want his food and loved ones to turn to gold. We face a similar situation with artificial intelligence: how do we ensure that an AI will do what we really want, while not harming humans in a misguided attempt to do what its designer requested?

A quote from the Future of Life Institute.
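Russell's point can be made concrete with a tiny, made-up example (this is just an illustrative sketch, not anything from his book): an optimizer that does exactly what its objective says, and nothing more.

```python
from itertools import chain, combinations

# A toy "King Midas" objective: the agent is told to maximize gold, and
# nothing in that objective says which things it must leave alone.

world = {"rocks": 10, "food": 5, "family": 2}  # hypothetical items and counts

def powerset(items):
    """All possible sets of items the agent could choose to convert."""
    items = list(items)
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)
    )

def gold_reward(converted) -> int:
    # The literal objective: total number of things turned to gold. Full stop.
    return sum(world[item] for item in converted)

# A perfectly obedient optimizer picks whatever plan maximizes the stated
# objective, which here means converting everything, food and family included.
best_plan = max(powerset(world), key=gold_reward)
print(sorted(best_plan))  # -> ['family', 'food', 'rocks']
```

The agent isn't hostile; it's obediently maximizing the exact objective it was given, and that objective never said to leave food and family alone. That's the alignment problem in miniature.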


Cue the outro!

This article was originally posted on my blog, https://kilabyte.org/. I recently started it, as I've always been interested in technology. I have a lot to learn and can't wait to see where this road takes me! 😉

A bit about me
👋 Hi, I'm Night Wolf, I'm 16
💻 I'm interested in anything computer related
🌱 I'm currently doing a full-stack course
👨‍💻 I do web development http://nightwolf.tech/
✍️ I run a blog https://kilabyte.org/
✉️ Feel free to contact me :)
👉 info@nightwolf.tech
📸 @nightwolf.tech
📹 @kilabyte_blog

