2019 Predictions: How We Learned to Stop Worrying and Love AI

Written by nickcaldwell | Published 2018/12/10
Tech Story Tags: artificial-intelligence | machine-learning | software-development | ai-predictions | ai-2019

I recently got the chance to share a couple of predictions about what 2019 will bring to the world of Artificial Intelligence via Forbes. Thought I’d expand on them here.

As someone who has been fascinated with AI and machine learning since the earliest days of my career, I find the rapid pace of progress in the field astounding. Almost overnight we went from an era of struggling with simple image recognition tasks (think: “hot dog, not hot dog”) to AIs powered by deep neural networks capable of understanding and describing complex scenes with more accuracy and speed than humans.

The pace of innovation is only accelerating. The tools are getting more powerful, cheaper, and more accessible. I spent the first five years of my career working on machine learning projects, and I often joke that those five years could today be replaced with five lines of Python package imports.
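To make the joke a little more concrete, here’s a minimal sketch of what I mean, using scikit-learn’s bundled digits dataset as a stand-in (the dataset, model choice, and hyperparameters here are purely illustrative, not anything from my old projects):

```python
# Train a small neural-network image classifier in a handful of lines.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# 8x8 handwritten-digit images, split into train and test sets.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 units; scikit-learn handles the rest.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```

A trained neural network, and most of the lines really are imports. That’s the point.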

Which is why my first prediction is that the job title “machine learning engineer” will start to disappear.

You don’t need a fancy degree or specialization to harness AI nowadays; these tools are becoming part of the standard developer toolbox. In the 90s, an engineer who wanted to experiment with neural nets would often need to start from the simplest unit (say, a perceptron) and work their way up, understanding the mathematics and principles at each layer. Nowadays even a novice can use tools like Google Cloud AutoML to automate nearly every aspect of creating AI models and produce impactful results. The complexity is increasingly abstracted away. And that’s OK: when’s the last time you bumped into a coder who wanted to learn assembly (or even C++)? Abstraction is power. Modern developers may not understand why their AI models work, yet the models work all the same.
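For contrast, here’s roughly what “starting from the simplest unit” meant: a lone perceptron trained by hand with the classic update rule. This is a from-scratch sketch of the general exercise, not any particular 90s codebase:

```python
# A single perceptron learning the AND function, built from scratch.
import random

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # logical AND

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # random initial weights
b = 0.0   # bias
lr = 0.1  # learning rate

for epoch in range(50):
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0  # step activation
        # Classic perceptron update: nudge weights toward the target.
        w[0] += lr * (t - y) * x1
        w[1] += lr * (t - y) * x2
        b += lr * (t - y)

for x1, x2 in inputs:
    print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

Every one of those lines used to be your problem, from the activation function to the update rule. Today it’s all hidden behind a fit() call.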

Which brings me to my second prediction: interpretability (the ability to understand how an AI system arrives at its decisions) will become a nice-to-have.

Think about it. When you visit the doctor’s office to get a diagnosis, you’ve never once asked them to provide all the reference materials, case studies, and comparative patient records to prove their point. At some level you accept that the doctor is an expert and trust them. How long will we hold AI to a higher standard of interpretability than we hold other humans?

In the past, mistrust and unfamiliarity have been the biggest anchors on accepting AI. But with AI-powered interfaces becoming omnipresent, we’re rocketing out of that Uncanny Valley. The reality is that over the past few years AI has begun to exceed human capabilities on specific tasks, and 2019 is the year we’ll get comfortable with that. We humans don’t need to fully understand why AIs make decisions, and maybe the systems will get better faster when we decide to get out of the way.

Which finally brings me to my last prediction for 2019 and beyond: reputation, certification, oversight, and regulation will become necessary as AIs begin to take on decisions that impact human lives.

I’m not afraid of Skynet being born, but one unfortunate consequence of letting uninterpretable AIs learn for themselves is that they turn out to be as susceptible to bias as human beings are. If we lack understanding of why AIs make decisions yet keep transferring more and more responsibility to them, then we must have mechanisms to ensure trust and prevent abuse. There have already been many calls from thought leaders on this subject, and companies like Microsoft are already banging the drum to attract regulatory attention. If the last decade’s misadventures with social networks taught us anything, it is that we aren’t great at predicting how new technology will be misused. But unlike social networks, unregulated AI is a problem we can see coming a mile off.

Hope you liked the read! Please give it a Clap, Tweet, or Share.

rock on, Nick

