AI for Everyone: Things to Consider when Experimenting

Written by sh_reya | Published 2017/09/14
Tech Story Tags: artificial-intelligence | ai | aiforeveryone | experimenting | software-development

I’ve recently started co-teaching CS53SI, a series of discussions centered on technology and social good. There’s a lot of hype around this intersection on campus.

We’re at an exciting time in our lives, when we have the hardware and software to accelerate AI development and see research manifest in everyday tech products. With large, growing repositories of data on the Internet, data freaks like me can obsess over applying machine learning techniques to extract insights.

Unfortunately, unchecked exploration could lead to some nasty consequences. A couple of weeks ago, I was shocked to find that a professor at my own university came up with an algorithm to detect whether a person is gay from a picture of their face. He claims the algorithm can be extended to detect political views and IQ.

Why is this scary? Here are 4 things to consider when training models on data about people.

1. What would happen if these algorithms were used by employers to screen prospective employees? Or if this technology sat in people’s back pockets, letting anyone “predict” whether someone is gay by scanning their face with a phone? This reminds me of the X-Men movies, where mutants wage wars to protect fellow mutants’ identities. The reality is that we don’t live in a perfect world; sadly, gay people and other minorities face discrimination.

2. Machine learning primarily identifies existing biases and separators in datasets and exaggerates these differences to make future predictions. If the input data are not representative of the population, the output predictions will be inaccurate. In this case, the input data comprise faces with self-reported sexualities; the only training points represent people who have come out or are open about their sexuality. Plenty of other people have not publicly stated their sexual orientation because they may face adverse consequences, which biases the inputs (a toy sketch of this effect follows the list).

3. The media blows AI research findings out of proportion or puts an entirely new spin on them. The Stanford professor who conducted the study does discuss the negative social repercussions of his research, but the larger population doesn’t understand the machinery of AI, and news agencies capitalize on the public’s fear of artificial intelligence. We have to design for this world without threatening people or putting their identities at stake.

4. Drawing conclusions about people based on how they look, or on other factors they cannot control, is harmful. As it is, we’re still fighting racism, sexism, homophobia, and other forms of discrimination in the 21st century. If AI perpetuates differences in treatment based on factors beyond individual control, we will never achieve the equality so many of us are fighting for.
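
To make the sampling bias in point 2 concrete, here’s a minimal Python sketch using numpy and scikit-learn on entirely synthetic data, with no relation to the actual study’s dataset or methods. A classifier is trained only on the individuals who “self-report,” and it looks noticeably more accurate on that biased sample than it is on the population as a whole:

```python
# Minimal sketch of selection bias on hypothetical data (no relation to the
# actual study). A classifier trained only on self-selected examples looks
# accurate on that subsample but performs worse on the whole population.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulate a population: a binary private attribute and one noisy feature
# whose distributions for the two groups overlap heavily.
n = 10_000
labels = rng.integers(0, 2, size=n)                 # true, private attribute
features = rng.normal(loc=labels, scale=1.0, size=n).reshape(-1, 1)

# Only a biased subset "self-reports": here, people with extreme feature
# values, standing in for people who are openly out versus those who are not.
visible = np.abs(features[:, 0] - 0.5) > 1.0
X_train, y_train = features[visible], labels[visible]

model = LogisticRegression().fit(X_train, y_train)

print(f"accuracy on the self-selected sample: {model.score(X_train, y_train):.2f}")
print(f"accuracy on the whole population:     {model.score(features, labels):.2f}")
```

The model never sees the ambiguous middle of the population, so its apparent accuracy is inflated; the same mechanism applies whenever the training data come only from people who chose to disclose something.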

The possibilities and applications of AI are endless, and they have so much potential to do good for the world. But as developers, scientists, engineers, and geeks, we have to anticipate and assess the consequences of our work on society. As we design the future, we can’t leave parts of the population behind; we should aim to create an inclusive world for everyone.

