The Media Bias Problem: How AI Is Here to Help

Written by nickkyriakides | Published 2022/10/05
Tech Story Tags: ai | ai-applications | ai-trends | ai-top-story | ai-technology | ai-bias | unconscious-bias | unconscious-ai-bias


Bias has always been a factor in the way we interpret the world. However, we also spend more time consuming media than ever before, inhibiting trust in our personal experiences and clouding our perception of the truth. Given how difficult it has become to discern what is genuine, it is crucial for us to have tools that can detect bias in the information we regularly consume.

On top of the inherent bias rife throughout various media sources, public trust in the veracity of news organizations is dwindling. Today, 58% of individuals in the US claim to have at least some faith in the information obtained from news organizations. While that is still a majority, it is the lowest share recorded in Pew Research's studies on the media over the past five years.

Let’s look at how various AI techniques can help weed out media bias and provide a healthier perspective on how we keep up with the news.

What We Know So Far

Historically, detecting political bias in media has been regarded as a job for natural language processing (NLP), essentially the analysis and processing of text. NLP is a subfield of artificial intelligence (AI), the broader effort to create algorithms that imitate human intellect. Machine learning (ML), which involves training computers to learn from data and experience, is the technique most commonly used to build such systems. In practice, a bias detector is typically an NLP model trained with ML to predict the political leaning of a piece of text.

To build a successful machine learning model, you need a pre-labeled dataset pairing the required inputs with the desired outputs. For media bias detection, that means article text paired with bias labels. MIT recently conducted a study involving 3,078,624 articles from 100 news sources.
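To make that workflow concrete, here is a minimal sketch of a supervised bias classifier. It assumes a hypothetical articles.csv with a "text" column and a "bias" label column; this is illustrative only, not the setup MIT used.

```python
# Minimal supervised bias classifier, assuming a hypothetical labeled
# dataset articles.csv with "text" and "bias" (e.g., left/center/right) columns.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("articles.csv")  # hypothetical labeled dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["bias"], test_size=0.2, random_state=42
)

# TF-IDF turns raw article text into weighted word counts; logistic
# regression then learns which terms correlate with each bias label.
model = make_pipeline(
    TfidfVectorizer(max_features=50_000),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A bag-of-words pipeline like this is cheap to train, but as the next section shows, it is exactly the kind of model that struggles with context.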

Interestingly, when looking into the MIT study, it became clear that NLP analysis posed several issues. One of the biggest problems came from picking up on phrases that could be taken out of context. A reader of a left-leaning or right-leaning publication would be expected to take certain phrases positively or negatively depending on that context. The clearest example MIT cited was the phrase ‘defund the police’, which could easily feature on either end of the political spectrum and needs surrounding context to serve as an indicator of political stance.

The general conclusion from various studies is that news is biased towards bad news. The classic adage that the front page of the newspaper features the worst of the day's events from around the world is largely true. Brutal murders, pandemics, natural disasters, and corruption are all deemed the most newsworthy items. Any left-right political bias pales in comparison to this general negativity bias.

New Developments Paving the Way for Better Analysis

There is still room for improvement, but many companies have been making great strides in identifying media bias. The Bipartisan Press recently tested various NLP models and found Facebook’s RoBERTa to be one of the most effective currently available. RoBERTa was born out of Google’s Bidirectional Encoder Representations from Transformers (BERT); both are pretrained by having the system predict purposefully hidden words inside samples of unannotated text.
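The masked-word pretraining objective is easy to see in action. Here is a small sketch using Hugging Face's transformers library and the publicly available roberta-base checkpoint; an actual bias classifier would still need fine-tuning on labeled data on top of this.

```python
# Demonstrating the masked-language-model objective RoBERTa is pretrained
# with: the model predicts the deliberately hidden token from context.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

# RoBERTa uses <mask> as its mask token.
for prediction in fill_mask("The senator's speech was widely seen as <mask>."):
    print(f"{prediction['token_str'].strip()!r}  score={prediction['score']:.3f}")
```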

The Bipartisan Press found, astonishingly, that both of these models could classify bias by website domain. For instance, they correctly suggested that Fox News and the Washington Examiner lean to the right, whereas CNN and the New York Times lean to the left. However, as with MIT’s analysis, there were issues: neither model was able to recognize ideologically charged phrases such as “we need more gun control”, and some bias values were attributed purely to the mention of names such as Clinton or Trump.

There are now apps, such as NOOZ.AI and Ground.news, that integrate opinion analysis (the personal feelings, views, beliefs, or judgments in a journalist’s writing), sentiment analysis (a journalist’s positivity or negativity toward the general news content or the specific topic they write about), and revision analysis (investigation of how a news story, and its manipulation of opinion and sentiment, evolves over time). Together, these have the potential to give a more comprehensive overview of media bias.
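As a taste of what the sentiment-analysis piece involves, here is a minimal sketch using NLTK's off-the-shelf VADER analyzer. Apps like NOOZ.AI almost certainly use more sophisticated proprietary models, but the basic idea, scoring the positivity or negativity of text, is the same.

```python
# Minimal sentiment-analysis sketch with NLTK's VADER lexicon.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

headlines = [
    "Economy surges as unemployment hits record low",
    "Brutal storm leaves thousands without power",
]
for headline in headlines:
    # 'compound' ranges from -1 (most negative) to +1 (most positive)
    scores = analyzer.polarity_scores(headline)
    print(f"{scores['compound']:+.2f}  {headline}")
```

Tracking how such scores shift across successive revisions of the same story is, in essence, what revision analysis adds on top.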

Organizations looking to promote clear and transparent media should seek to expand their datasets. Exposing an ML model to a much broader range of topics should, in theory, increase accuracy. The analysis could also be improved by filtering out irrelevant content from newspapers and websites, such as adverts and other material that may not be related to the news itself.
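That filtering step can be sketched with the open-source trafilatura library, which extracts the main article body from a web page and drops navigation, adverts, and other boilerplate. The URL below is a hypothetical placeholder, and heavier pipelines exist, but the principle is the same: feed the model article text, not page chrome.

```python
# Strip ads and navigation before analysis, keeping only the article body.
import trafilatura

url = "https://example.com/some-news-article"  # hypothetical article URL
downloaded = trafilatura.fetch_url(url)
if downloaded:
    # extract() returns the main article text, discarding boilerplate
    article_text = trafilatura.extract(downloaded)
    print(article_text[:500] if article_text else "No article body found")
```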

Thanks to our unlimited access to social media and news websites, we are constantly aware of what is happening worldwide and can respond instantly to any event. However, not all of this information is impartial or even pertinent to our needs. The speed with which events are reported leaves us little time to scrutinize sources, exercise critical thought, or conduct thoughtful evaluation. This is why using AI to identify bias is so crucial in society today. Hopefully, we can continue holding journalists accountable and, in the process, be better informed about the truth.


Written by nickkyriakides | Nick is the CoFounder and COO of netTALK CONNECT and MARITIME. He focuses on Marketing and International Trade, among other areas.
Published by HackerNoon on 2022/10/05