Why AI is now on the menu at dinner, even with my non-tech friends

Written by benb | Published 2017/08/29
Tech Story Tags: artificial-intelligence | machine-learning | ai | media | change


A couple of weeks ago I had some friends over for dinner, and it struck me afterwards that we had spent much of the evening talking about Artificial Intelligence. As a venture capital investor focussed on emerging technologies, I am used to my work day being filled with conversations about technology trends and advances. However, over the last 12 months I have found myself having more of these conversations outside the office too, and they are almost always about AI.

Looking at mentions of key technologies in a set of general news sources, you can see that AI has caught the public interest to a far greater extent than other innovations. Since early 2016, when AI overtook the Internet of Things, interest in the topic has grown rapidly; news volume now exceeds that of IoT by 50% and that of the other key technologies by a factor of six.

Source: Factiva; Methodology: Searches for key terms in Factiva-classified “Top News Sources” in the English language. The ‘Artificial Intelligence’ search also includes the OR’d terms ‘AI’ and ‘Machine Learning’; ‘Internet of Things’ includes ‘IoT’; ‘Autonomous Vehicles’ includes ‘Driverless Car’ and ‘Self Driving Car’; ‘Industry 4.0’ includes ‘3D Printing’ and ‘Advanced Manufacturing’; ‘Blockchain’ includes ‘Bitcoin’.
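
For the curious, the term grouping above translates naturally into search queries. Here is a minimal sketch of how those OR’d groups might be assembled; the term lists mirror the methodology note, but the quoted-OR syntax is a generic illustration, not Factiva’s actual query language:

```python
# Term groups from the methodology note above; each topic's mention
# count aggregates hits on any of its listed phrases.
TOPIC_TERMS = {
    "Artificial Intelligence": ["Artificial Intelligence", "AI", "Machine Learning"],
    "Internet of Things": ["Internet of Things", "IoT"],
    "Autonomous Vehicles": ["Autonomous Vehicles", "Driverless Car", "Self Driving Car"],
    "Industry 4.0": ["Industry 4.0", "3D Printing", "Advanced Manufacturing"],
    "Blockchain": ["Blockchain", "Bitcoin"],
}

def build_query(terms):
    """Join a topic's terms into a single OR'd, quoted search expression."""
    return " OR ".join(f'"{t}"' for t in terms)

for topic, terms in TOPIC_TERMS.items():
    print(f"{topic}: {build_query(terms)}")
```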

What is it about AI that means it has eclipsed the column inches of its peers?

Technologies such as Autonomous Vehicles and Industry 4.0 are seen as well defined, with clear perceived benefits for a specific set of problems. AI, though, has captured the public imagination on another level, thanks to a sense of its broader significance and potential impact, both positive and negative. I see this as being driven by four significant factors:

  • Popular Culture: AI, intelligent robots and similar concepts have been a key theme of science fiction for many years, and until recently this portrayal was, for most people, the limit of their exposure. For the most part, today’s AI advances look nothing like HAL 9000, Skynet or a Replicant. However, concepts such as ‘AI as the cause of dystopian futures’ and ‘AI doing things its creators didn’t anticipate’ still inform people’s baseline impressions, colouring their relationship with modern forms of AI. In Mona Lalwani’s interview with anthropologist and Intel Senior Fellow Genevieve Bell, Bell shares an interesting perspective on how lessons from popular culture could be used to improve AI in the real world.

We frequently dismiss the fears without acknowledging that they are based in a little bit of truth. Humans have built technical systems for a long time, and they’ve often had unintended consequences … What would happen if we took the fears seriously enough not to just dismiss them, going, “You’ve watched too many Terminator movies”? But actually took them seriously and said, “Here are the guardrails we’re implementing; here are the things we’re going to do differently this time around and here are the open questions we still have.”

  • Hype and Fear: While all of the technologies mentioned have been the subject of public debate and discussion, few have been painted in the same sensationalist light as AI. Whether it is the mass media regularly describing AI as a threat to human existence, or the authoritative voices of high-profile figures such as Vladimir Putin, quoted as saying “Whoever becomes the leader in [Artificial Intelligence] will become the ruler of the world”, and Elon Musk then suggesting that an AI arms race would likely cause World War 3, much of the narrative surrounding AI comes with a message of something to fear.

While the impressions of well-informed and involved individuals like Musk should be heard and debated in the right forums, I agree with Google AI researcher François Chollet, who recently tweeted about the negative impact of sensationalising the narrative in the mainstream media.

  • AI as a Black Box: In my previous post, “We can build these models, but we don’t know how they work”, I wrote about black-box perception as a hurdle to the adoption of AI systems, driven by a limited understanding of how a machine learning system works. Here, this has both positive and negative implications. Positively, the complexity drives part of the fascination: the idea that a computer can ‘think’ and perform actions beyond what it has been specifically programmed to do really captures the public imagination. More negatively, a lack of understanding can lead to a lack of trust, which then feeds the fear discussed above.
  • Impact on Everyday Life: One of the most significant and important angles of the AI debate is the potential for job losses as more activities are automated using AI. In its January 2017 report ‘A Future that Works’, McKinsey estimated that 60% of US jobs have at least 30% of their activities automatable with current technologies. One line of thinking says these workers will simply gain back 30% of their time to do a better job of their remaining tasks; the equally logical counter-argument says the same output will require 30% fewer people (the short sketch after the chart below works through both readings), creating a highly relatable fear of a significant and negative impact on everyday life. Recently the debate has moved from the hypothetical to real examples, such as the well-covered case of Fukoku Mutual Life Insurance replacing 34 employees with IBM’s Watson Explorer AI, strengthening the evidence that the impact here is imminent.

McKinsey, January 2017
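
To make the two readings of that estimate concrete, here is a toy calculation; the 100-person team and the clean 30% split are illustrative assumptions of mine, not figures from the McKinsey report:

```python
# Toy numbers contrasting the two readings of "30% of activities automatable".
team_size = 100          # hypothetical workforce (illustrative, not from the report)
automatable_share = 0.30 # share of each role's activities that could be automated

# Reading 1: keep everyone; automation frees up working time that can be
# redirected to the remaining, non-automatable tasks.
freed_time = team_size * automatable_share  # 30 person-equivalents of capacity

# Reading 2: hold output constant; automation removes the need for that
# labour, shrinking the required workforce instead.
required_headcount = team_size * (1 - automatable_share)  # 70 people

print(f"Reading 1: {freed_time:.0f} FTEs of time freed for other tasks")
print(f"Reading 2: {required_headcount:.0f} of {team_size} people still required")
```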

In a fascinating survey from the Royal Society’s UK report ‘Public views of Machine Learning’, you can see that the cumulative effect of the factors discussed is to leave opinion divided, with a roughly 30/35/30 split between people who feel the risks of machine learning are greater than, equal to, or less than its benefits.

Royal Society, April 2017

Interestingly, though, looking deeper you see significant variation in opinion between potential uses. While 61% of respondents were positive about the benefits of using computer vision on CCTV to catch criminals (a use case with seemingly limited downside), 45% were negative about driverless vehicles and 48% about autonomous military robots, both of which carry implicit safety concerns alongside the potential for job losses.

Royal Society, April 2017

For those working in and around AI today, the challenge is to harness this public engagement in a positive way. We need to educate and win over those who are undecided, to enable faster adoption of the technology, while dispelling the myths and misconceptions that could slow it down. Most importantly, though, we need to work with the public to understand and mitigate the real concerns around potential negative impacts. And while some of this will need to be done in government and the media, some of it can probably be done round the dinner table too.

