My algorithm lost sight of the forest for the trees

Written by ethicaldata | Published 2017/04/19
Tech Story Tags: artificial-intelligence | machine-learning | ethical-data | algorithms | data-ethics

Deep learning models are pretty amazing. They can label images, categorise data, figure out text sentiment, and create interesting new designs and pictures (a list of examples here). But they are simple beings, really. They operate on specific data, structured in a specific way, to achieve a specific outcome.

Nothing wrong with that, I suppose. And the range of outcomes will only get more interesting and valuable. But these algorithms don’t exist in a vacuum. They are fed data that humans collected, with some goal in mind. The particular features the model uses to draw its inferences are also the result of human intervention. What else? Humans code the model, test it, and then tweak it. Finally, humans make choices about how to act on the results.
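To make that concrete, here is a minimal sketch (Python with scikit-learn; the data and labels are invented for illustration, not from any real system). Every comment marks a step where a human, not the algorithm, made the call.

```python
# A toy sentiment model. Every comment marks a human decision point.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Humans collected this data, and a human decided what counts as 'positive'.
texts = ["great service", "lost my luggage", "friendly crew", "awful delay"]
labels = [1, 0, 1, 0]

# A human chose the features: word frequencies, nothing about context.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# Humans chose the model family and its knobs, then tested and tweaked it.
model = LogisticRegression()
model.fit(X, labels)

# And humans decide what happens because of this number.
print(model.predict(vectorizer.transform(["the flight was fine"])))
```

Nothing in that pipeline chose itself; the algorithm only fills in the coefficients between the human decisions.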

So for all the talk of the magic and the mystique of AI, it is just another computational tool. It is a better one than we have had to date, and it can achieve some pretty cool results. But at every step of the way there is a human.

I can’t think of a nice analogy, but these algorithms are essentially a force (an irrepressible essence, like the Blob!) that is reshaping society. Some processes become more efficient, some decisions become more accurate, and some people’s skills become redundant. Nothing about this change is new, but its scope and consequences may be. We have discovered a new tool which will permeate all parts of our lives. It happened with factory machines, with electricity, and with the internet.

Perhaps the critical difference between this disruption and every one before it is how it will shape our decision-making. The sheer volume and range of data that now exists creates an almost infinite space for experimentation. If someone has a goal, they can find a way to ‘torture the data until it confesses’. But are we ready for the implications?
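As a toy illustration of how easy the torture is (a sketch in plain NumPy; everything here is fabricated noise), test enough candidate ‘explanations’ against an outcome and one of them will look predictive by chance alone:

```python
import numpy as np

rng = np.random.default_rng(42)
outcome = rng.normal(size=200)           # an outcome with no real signal
features = rng.normal(size=(1000, 200))  # 1,000 candidate 'explanations'

# Correlate every candidate with the outcome and keep the best-looking one.
correlations = [abs(np.corrcoef(f, outcome)[0, 1]) for f in features]
best = int(np.argmax(correlations))
print(f"feature {best} 'predicts' the outcome, |r| = {correlations[best]:.2f}")
# Pure noise, yet with enough attempts something always 'confesses'.
```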

And this is where we enter ethical territory. We are a step beyond computer science: deeper, more conceptual. It isn’t a question of how efficiently the hardware or software crunches the numbers. No … this is where we need to ask questions about society, about culture, and about life or death.

What makes this change so consequential is that we have a tool that makes it very tempting for people to delegate responsibility to something they do not understand. The recent United Airlines incident is instructive. Amongst other things, it happened because the process relied on a predictive algorithm that none of the staff understood; they abdicated their responsibility, in effect saying ‘I was just following orders’.

And this is a problem. It is a problem for those who construct these processes as well as those who are impacted by them. At every step along the way — from the data collection to the action taken on a model’s output — we need to think about what will change as a result.
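What that could look like in practice (a hypothetical sketch; the names, fields, and threshold are mine, not any airline’s system): never let a score trigger an action on its own, and make the human hand-off explicit and auditable.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    score: float      # what the model said
    action: str       # what the process actually did
    reviewed_by: str  # the human who owns the outcome
    reason: str       # an explanation someone affected could challenge

def act_on_score(score: float, reviewer: str, reason: str) -> Decision:
    """The score alone never triggers the action; a named human signs off."""
    if not reviewer or not reason:
        raise ValueError("no action without a named reviewer and a recorded reason")
    action = "flag_for_review" if score > 0.8 else "no_action"
    return Decision(score, action, reviewer, reason)

# 'The algorithm said so' stops being a complete answer.
print(act_on_score(0.93, reviewer="gate_agent_17", reason="overbooking model flagged seat"))
```

The threshold is still a human choice, which is exactly the point: the choice should be visible in the code and in the record, not buried inside the model.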

These are not easy questions; they are messy questions. But if we fail to appreciate what the alternative, blind obedience to a data-driven process, really entails, then we may come to regret the efficiencies we thought we had gained. The need for these conversations about ‘explainable AI’ and #ethicaldata grows by the day.
