COVIDFakeExplainer: An Explainable Machine Learning based Web Application: Related Works

Written by escholar | Published 2024/02/15
Tech Story Tags: machine-learning | fake-news | machine-learning-fake-news | covid-19-machine-learning | deep-learning | fake-news-ml-algorithms | research-paper-on-fake-news | explainability

TLDR: Leveraging machine learning, including deep learning techniques, offers promise in combating fake news.

This paper is available on arXiv under a CC 4.0 license.

Authors:

(1) Dylan Warman, School of Computing, Charles Sturt University;

(2) Muhammad Ashad Kabir, School of Computing, Mathematics and Engineering, Charles Sturt University.


II. RELATED WORK

The current landscape of fake news detection tools reveals several limitations and gaps in addressing the critical need for accessible and explainable solutions. CoVerifi [12] is a functional application that provides accurate classifications and human-generation scores for COVID-19-related news articles, but it lacks genuine explainability techniques, leaving users without a clear understanding of the reasoning behind its classifications. A more fundamental issue, highlighted by dEFEND [13], is the scarcity of fake news detection tools that are accessible to end-users.

A notable example of such limitations is FakerFact [14], a Chrome extension that analyzes and verifies news articles by URL. Although it offers classification percentages across several categories, it falls short of providing direct explainability. Similarly, SEMiNExt [15] analyzes user search terms for potential fake news content without explaining or classifying the news articles themselves. While there are tools [11] such as xFake [16] that demonstrate the potential of sentiment and linguistic analysis with explainable outputs, they often carry their own restrictions, such as working with only specific websites; xFake, for instance, is compatible with PolitiFact [17] alone.
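To make "explainable outputs" concrete, the sketch below illustrates one common approach: surfacing word-level contributions for a single prediction with LIME. It is a minimal, illustrative example, not the method used by any of the tools above or by this paper; the toy corpus, labels, and the scikit-learn/LIME pairing are all assumptions chosen for brevity.

```python
# A minimal sketch (illustrative only, not from the paper) of explainable
# fake-news classification: a bag-of-words classifier whose per-word
# contributions are surfaced with LIME. The training data is a toy stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy corpus standing in for a labelled fake/real COVID-19 news dataset.
texts = [
    "miracle cure stops covid overnight, doctors shocked",
    "garlic water kills the virus instantly, share now",
    "health authority reports vaccine trial results in peer-reviewed study",
    "officials publish updated case statistics and hospital data",
]
labels = [1, 1, 0, 0]  # 1 = fake, 0 = real

# TF-IDF features feeding a logistic regression classifier.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model, producing
# per-word weights that explain this one prediction.
explainer = LimeTextExplainer(class_names=["real", "fake"])
explanation = explainer.explain_instance(
    "shocking miracle cure for covid revealed",
    pipeline.predict_proba,
    num_features=5,
)
for word, weight in explanation.as_list():
    print(f"{word}: {weight:+.3f}")  # positive weight pushes toward "fake"
```

Because LIME is model-agnostic, the same per-word attribution could in principle be attached to any of the classifiers discussed in this section, which is what distinguishes explainable tools like xFake from black-box ones like CoVerifi.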

Despite these limitations, tools like Bunyip [18] provide promising visual explainability outputs and classifications for human-generated text, showing that similar applications can be built. Although these tools do not directly address traditional fake news detection, they serve as a proof of concept for the feasibility of user-friendly, explainable applications.

In summary, existing tools fall short of providing comprehensive and user-friendly solutions for detecting fake news, particularly COVID-19-related fake news, with explainability. There is a significant gap between state-of-the-art studies on machine learning and explainability techniques for fake news and the ideal end-user tools that offer both accurate classifications and clear, interpretable explanations. While the tools discussed above demonstrate progress, they underscore the need for a holistic approach that couples a user-friendly experience with meaningful explainability, empowering users to identify and understand fake news.

