Social media platforms on the nature of fake news

Written by asandre | Published 2018/07/17
Tech Story Tags: fake-news | politics | capitol-hill | social-media-fake-news | fake-news-platform


Today’s hearing on Capitol Hill was not just about regulation, but also about the definition of truth.

(Credits: photo illustration by Slate)

During the latest hearing of the House Judiciary Committee on Capitol Hill, an interesting comment by Rep. Pramila Jayapal caught my attention.

“The challenge here is that it is difficult to determine exactly what may qualify as false news. But the bigger problem to me is that somehow we get to a standard that truth is relative,” she said, regarding the current debate on fake news and misinformation on social media.

Truth is not relative. An apple is an apple. It can’t be a tomato tomorrow and a pear yesterday. It is an apple.

In response, Monika Bickert, Head of Global Policy Management at Facebook and one of the three witnesses at the hearing alongside representatives from Google and Twitter, explained that her platform does a couple of different things to address the issue.

Facebook, in fact, acknowledges that “the majority of false news that we see on social media tends to come from spammers and financially motivated actors.”

“That violates our policies,” said Bickert. “We have technical means of trying to detect those accounts and remove them. We have made a lot of progress over the past few years.”

She added: “Then there’s content that people might disagree about, or it may be widely alleged to be false. We definitely heard feedback that people don’t want to have private companies in the business of determining what is true and what is false. But what we know we can do is counter virality, if we think that there are signals — like third-party fact checkers — that the content is false, by demoting a post and by providing additional information to people so that they can see whether or not the article is consistent with what other mainstream sources around the Internet are also saying.”

In answering a previous question on the nature of fake news by Rep. Ted Poe, Bickert stressed that “we don’t have a policy of removing fake news.”

She added: “What we do is, if people flag content as being false, or if our technology, comments, and other signals detect that content might be false, then we send it to these fact-checking organizations.”

The fact-checking organizations with which Facebook works were mentioned earlier in the hearing. They include Associated Press (AP), PolitiFact, The Weekly Standard, FactCheck.org, and Snopes.

“If they rate the content as false — and none of them rate as true — then we would reduce the distribution of the content and add the related articles,” Bickert said.
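Read as a decision rule, Bickert’s description is simple: a post gets demoted only when at least one partner fact-checker rates it false and none rates it true. Here is a minimal sketch of that logic in Python; the function and type names are my own illustrative assumptions, not Facebook’s actual code.

```python
from enum import Enum

class Rating(Enum):
    """Possible verdicts from a partner fact-checking organization."""
    TRUE = "true"
    FALSE = "false"
    UNRATED = "unrated"

def should_demote(ratings: list[Rating]) -> bool:
    """Demote a post only if at least one fact-checker rated it false
    AND no fact-checker rated it true, per Bickert's testimony."""
    rated_false = any(r is Rating.FALSE for r in ratings)
    rated_true = any(r is Rating.TRUE for r in ratings)
    return rated_false and not rated_true

# One checker says false, another hasn't rated it yet -> demote.
print(should_demote([Rating.FALSE, Rating.UNRATED]))  # True
# The checkers disagree (one false, one true) -> no demotion.
print(should_demote([Rating.FALSE, Rating.TRUE]))     # False
```

Note that under this rule, demotion reduces distribution and triggers the “related articles” context; nothing in the testimony suggests the content is removed.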

As the Congressman kept pressing, she stressed again: “Sharing information that is false does not violate our policies.”

The definition, as well as the nature, of fake news continues to fuel the debate around social media. Identifying fake news remains a key focus for platforms like Facebook, Google, and Twitter, and their efforts to counter the phenomenon were highlighted in their testimony today (see their written testimony below) and in their answers to the House Judiciary Committee.

But the question of the nature of fake news came up several times during the hearing.

Google’s Juniper Downs, Global Head of Public Policy and Government Relations at YouTube, described a spectrum in response to Rep. Mike Johnson.

“Fake news is a term used to define a spectrum of content,” she answered. “On one end of the spectrum, we have malicious, deceptive content that is often spread by troll farms. That content would violate our policies and we would act quickly to remove the content and/or the accounts that are spreading it. In the middle, you have misinformation that may be low quality. This is where our algorithm kicks in to promote more authoritative content and demote lower quality content. And then, of course, you’ve also heard the term applied to mainstream media, in which case we do nothing. We don’t embrace the term in that context.”
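Downs’s spectrum maps onto a three-way triage: remove malicious deception, demote low-quality misinformation, and take no action when “fake news” is merely a label aimed at mainstream media. The sketch below is purely illustrative; the flag names are assumptions of mine and do not reflect YouTube’s actual systems.

```python
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()   # malicious, deceptive content, e.g. from troll farms
    DEMOTE = auto()   # low-quality misinformation, outranked by authoritative sources
    NOTHING = auto()  # "fake news" used as a label for mainstream media

def triage(is_malicious_deception: bool, is_low_quality: bool) -> Action:
    """Illustrative triage following the spectrum Downs described."""
    if is_malicious_deception:
        return Action.REMOVE   # policy violation: remove content and/or accounts
    if is_low_quality:
        return Action.DEMOTE   # ranking promotes more authoritative content instead
    return Action.NOTHING
```

The design point worth noticing is that only the first tier is a policy matter; the middle tier is handled by ranking, not by removal.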

Both Facebook and Google noted that, when content is removed, users are notified and can appeal the decision.

Facebook’s testimony by Monika Bickert, Head of Global Policy Management:

Google’s testimony by Juniper Downs, Global Head of Public Policy and Government Relations at YouTube:

Twitter’s testimony by Nick Pickles, Senior Strategist at Twitter Public Policy:


Written by asandre | Comms + policy. Author of #digitaldiplomacy (2015), Twitter for Diplomats (2013). My views here.
Published by HackerNoon on 2018/07/17