Product Design for Terrible Humans

Written by jessekorzan | Published 2016/11/07
Tech Story Tags: social-media | machine-learning | api | product-design | ux


Last October 13th wasn’t especially noteworthy. A typical Thursday at the office. Yet, our software service prevented approximately one million incidents of abuse online.

Our text classification system identifies and labels content people create on the ‘Net. It understands patterns of behaviour, like bullying or sexting. It’s an API service that integrates with chat, comments, gaming and other social applications. Each time a topic is detected, it’s assigned a risk level. Messages are decorated with this metadata (topics and risk levels) and sent back. Our client’s platform then decides, based on its own rules, what to do next: auto-moderate, hash out profane words, mark as NSFW, and so on. The service allows for a range of expression that can fit any community.
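To make that flow concrete, here’s a minimal sketch of what an integration might look like. The endpoint, payload shape and risk scale are hypothetical stand-ins for illustration, not our actual API.

```python
import requests  # any HTTP client would do

# Hypothetical endpoint and payload shape -- stand-ins, not the real API.
CLASSIFY_URL = "https://api.example.com/v1/classify"

def moderate(message: str) -> str:
    """Send a message for classification, then apply our own community rules."""
    resp = requests.post(CLASSIFY_URL, json={"text": message})
    result = resp.json()  # e.g. {"topics": [{"name": "bullying", "risk": 6}]}

    # The platform, not the classifier, decides what happens next.
    worst = max((t["risk"] for t in result["topics"]), default=0)
    if worst >= 6:
        return "blocked"   # auto-moderate the worst of the worst
    if worst >= 4:
        return "hashed"    # hash out profane words, mark as NSFW, etc.
    return "allowed"       # within this community's range of expression
```

The point of the split is that the classifier only describes the message; each community keeps control over how strict to be.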

Blacklisting doesn’t work.

Learning how people talk online comes from working with language experts and data science as much as with our clients. How people try to break chat filters is a never-ending game, and one that blacklisting or whitelisting can’t win. If you strive for a decent social experience for your fans and audiences, you know what I mean. Even for casual streamers on Twitch, it’s the same battleground facing Twitter and others.
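A toy example shows why. A naive blacklist matches exact words, so trivial obfuscation walks right past it (the filter and word list below are illustrative, not anything we ship):

```python
BLACKLIST = {"idiot"}  # illustrative word list, not a real one

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked."""
    return any(word in BLACKLIST for word in message.lower().split())

print(naive_filter("you idiot"))      # True  -- caught
print(naive_filter("you 1d1ot"))      # False -- leetspeak slips through
print(naive_filter("you i d i o t"))  # False -- spacing slips through
```

Every new spelling needs a new list entry, and the users inventing the spellings move faster than the list ever will.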

User reputation, context, images, foreign languages and dozens of other aspects are exciting. Yet, right now, I am focused on shipping filtering solutions. Our text classification brings a lot of firepower to the fight, letting us build a simple yet powerful product for a broad range of applications.

On Oct. 13, those one million messages triggered our most severe risk levels. The worst of the worst stuff people can say. Terrible stuff. Talk of self-harm, suicide, sadistic bullying, hurtful racist material, grooming. Stuff that most of us will agree takes the fun out of our games and social apps, even ruining them to the point where players quit and users leave.

Designing tools that do work.

Human interaction is too complex and nuanced to just “solve” this problem. My job is to help figure out what tools to provide developers and community managers. The right tools that allow their users to share without fear of harassment.

Until the machine gets smart enough, the tools won’t be invisible. They need to be easy to set up, easy to understand while they’re working, and easy to adjust when they aren’t effective.
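In practice that points toward plain, inspectable configuration rather than a black box. Here’s a hedged sketch of what “easy to adjust” could look like; the topic names, thresholds and actions are invented for illustration:

```python
# Illustrative community rules -- invented topics, thresholds and actions.
RULES = {
    "bullying":  {"threshold": 5, "action": "block"},
    "profanity": {"threshold": 3, "action": "hash"},
    "nsfw":      {"threshold": 4, "action": "flag"},
}

def apply_rules(topics: list[dict]) -> list[str]:
    """Return the actions to take for a classified message.

    Each topic looks like {"name": "bullying", "risk": 6}. Because the
    rules are just data, a community manager can read them at a glance
    and tune a single threshold when a rule misfires.
    """
    actions = []
    for topic in topics:
        rule = RULES.get(topic["name"])
        if rule and topic["risk"] >= rule["threshold"]:
            actions.append(rule["action"])
    return actions

# e.g. apply_rules([{"name": "profanity", "risk": 4}]) -> ["hash"]
```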

Enjoy the web.

Oct. 13th was a tiny drop in the ocean. It’s a big challenge and we’re excited to get more folks using our service. We offer the power of our text classifier to anyone who wants to keep online conversation going for The Nice People. Part of this challenge is convincing people not to give in to trolling and harassment. Keep those comments on, re-engage with your social channels. Enjoy the web.

If you’re interested in what we’re doing, please sign up for our newsletter: https://siftninja.us3.list-manage.com/subscribe/?u=d0e3f38c07d25d0cc4ca2d61e&id=adad3e0d19
