The Unknown Path of AI Innovation — When Will We Really Be Ready For Machines to Take Control?

Written by azeem | Published 2017/05/15
Tech Story Tags: self-driving-cars | artificial-intelligence | technology | autonomous-cars | innovation


Two takes on innovation in different AI domains caught my eye this week.

The first is a review of AI chatbots and the importance of human trainers in pushing these services across the ‘uncanny valley’. It is certainly true that many startups ‘fake it until they make it’. Or, to put it another way, they do things that don’t scale until they learn how to scale them. Training an AI system falls into that category.

The real question is whether this current crop of startups, like the one behind Amy, is chasing a tractable problem. And can they solve it in the time available (before the cash runs out) through the heavy use of human trainers to train their AI systems?


This fascinating Bloomberg analysis takes us into the detail of just how much human assistance these AI systems need for the rather complex task (for a computer) of scheduling a meeting over email. It’s also a reminder of Moravec’s paradox: scheduling meetings is a quotidian human activity, yet tough for AI systems.

Introduction of self-driving cars is ‘many years away’

The second piece is this incredibly smart take on self-driving cars from Sam Lessin at The Information:

New technologies usually take longer than initially expected to be introduced because rapid improvement eventually hits a point where the next stage of work becomes very expensive. That’s what will happen to self-driving cars, which is why their introduction is many years away.

The point with self-driving cars is:

they need to be very, very close to perfect before they are valuable at all. There is no 50% credit. A self-driving car that works 90% of the time, or even 99% of the time, might be a nice safety addition, but it doesn’t deliver the true dream of not needing a driver at the wheel.

The line between machine and human responsibility in autonomous driving is blurry: the common expectation that humans will retain a supervisory role could, as research shows, be disastrous.

Alphabet’s self-driving cars pass three million autonomous miles

Separately, Alphabet’s self-driving cars have driven more than three million miles autonomously (and billions more in virtual environments). The last million of those miles took less than seven months, compared with over a year for the previous million.

At the same time, the rate of driver engagements (when the human needs to take over) has declined. Interestingly, long-term data shows that it typically takes three decades for new car safety features to become ubiquitous on vehicles on the road.

What connects these distinct analyses of the AI-bot and autonomous-vehicle markets is ultimately the observation that the path to successful innovation is not a known one. How far are we really from Amy taking over all our scheduling, or from autonomous vehicles achieving level-five autonomy on the streets of Paris, Abuja or Manila?

And to what extent are we as consumers going to be part of the training data? And as humans become the source of learning for machines, is “being the training data” even more disarming than “being the product”?

Sign up HERE to receive my weekly newsletter Exponential View every Sunday.


Published by HackerNoon on 2017/05/15