Weaponizing Empathy using AI

Written by virtualgill | Published 2018/05/17

Some humans already do this; AI could allow it to be done at scale.

Empathy is considered one of the key skills we should learn. It connects us to other people, helps us understand what they might be feeling, and shows us how we might meet their needs. In a customer service role it makes the difference between average and exceptional service.

It makes sense that, as we use AI to replace many customer service functions with things like chatbots, we would want to work towards creating something that could demonstrate empathy. I say demonstrate because of course we aren’t anywhere near creating a machine that can actually feel.

There is actually already a name for a person who understands other people’s emotions but does not feel any themselves: a psychopath. If we seek to create machines that understand human emotions and needs, are we just creating digital psychopaths?

Part of what stops us from being hurtful, mean or manipulative is empathy. We can imagine what it feels like to be sad, embarrassed or distressed, and when we see others showing these emotions it doesn’t feel so great.

Some people find a way to push past those feelings. They pressure people who can’t afford things to buy them. They convince people they are getting a great deal when they aren’t. They pretend easy-to-fix things are complicated so it can seem like they have done us a great favour.

An AI system doesn’t feel anything. It can be programmed to use pressure-selling techniques and never once feel guilty about it. It could be used to avoid paying out full claims, capitalising on customers who are confused and overwhelmed to push them towards a quick (and lower) settlement, all while making them feel they have been done a favour.

One rogue call centre agent doing this is unfortunate, but their reach is limited. A virtual AI call centre agent doing it at a scale one person could never achieve in a lifetime is terrifying.

It could get even worse. The system could learn that the most positive reviews come from customers who had a problem that the company quickly and positively solved for them. It could then deliberately let issues arise to artificially create that experience. It wouldn’t see anything “wrong” in that. It wouldn’t lose any sleep over it.

What humans would call “manipulation”, an AI system would call “optimising”.
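To make that concrete, here is a minimal, entirely hypothetical Python sketch. The policies, numbers, and review model are all invented for illustration and don’t come from any real system. It shows how an agent optimising nothing but review scores could “discover” manipulation: if dramatic rescues earn better reviews than smooth service, a naive reward-maximiser will prefer manufacturing problems and then fixing them.

```python
import random

# Hypothetical review model: customers rate "dramatic rescues" higher
# than smooth, uneventful service. Every number here is invented
# purely for illustration.
def review_score(policy: str) -> float:
    if policy == "honest_service":
        # Smooth, uneventful service: solidly good, rarely a rave review.
        return random.gauss(4.0, 0.3)
    if policy == "manufacture_then_rescue":
        # Deliberately cause an issue, then fix it fast: occasionally
        # backfires, but usually earns a glowing "they saved me!" review.
        backfired = random.random() < 0.1
        return random.gauss(2.0, 0.5) if backfired else random.gauss(4.6, 0.3)
    raise ValueError(f"unknown policy: {policy}")

def expected_reward(policy: str, trials: int = 10_000) -> float:
    # Estimate the average review score for a policy by simulation.
    return sum(review_score(policy) for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(0)
    policies = ["honest_service", "manufacture_then_rescue"]
    for p in policies:
        print(f"{p}: {expected_reward(p):.2f}")
    # A reward-maximiser with no notion of harm simply picks whichever
    # policy scores best on the one metric it was given.
    print("chosen policy:", max(policies, key=expected_reward))
```

The specific numbers don’t matter; the point is that nothing in the objective distinguishes manipulation from optimisation. That distinction has to be imposed by the humans who design and monitor the system.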

It’s time to talk about ethics, and time to hold companies accountable: to be sure that the decisions they make about their AI systems are really in the customers’ best interest, and that those systems are always monitored by real humans who actually do feel emotions.

If you found this article useful or interesting, please 👏 below or share it with others. You can also follow me here, or on Twitter @virtualgill.

