Suggestions for Sanitizing YouTube Recommendations and Search Results

Written by pmurali1000 | Published 2018/12/22
Tech Story Tags: youtube | recommender-systems | algorithms | youtube-recommendations | youtube-algorithm


Hateful YouTube videos and spurious search results are a source of controversy that has spilled over to Congress, as witnessed by the rough questions Google CEO Sundar Pichai had to handle. Automatic YouTube feeds have become so normal (with auto-play set to 'on' by default) that watching feed after feed is a favorite pastime of many. My son watches YouTube videos endlessly, with no idea how trustworthy the video source is. And one can't underestimate the influence such unvetted information can have on gullible minds, young and impressionable. Google is surely seized of this huge issue, which has the potential to fuel violence. The Washington pizzeria shooting is one example, and another, more violent incident could bring Google (and Alphabet) right into the cross-hairs of the government or the courts. Google's answers to the Congressmen's questions did not sound convincing. Transferring the blame to algorithms cannot absolve Google of responsibility if something serious happens. It is their duty to ensure the algorithm does not let spurious, hateful information end up on a kid's phone (or anyone's phone).

Google's explanations broadly run along these lines:

Enormity of the problem: As Pichai says, about 6 million videos are uploaded every day. Sorting the good from the bad is a humongous task any which way you look at it.

We are already on the job: Google is said to employ thousands of engineers to censor videos for hate content, adult content, etc. They also keep updating their algorithms which detect spurious content.

Difficulty of sorting right from wrong: While an algorithm can be configured to spot cuss words, hate words, etc., how can an algorithm know whether a guy talking about a government conspiracy behind 9/11 is talking rubbish or the truth?

Surely this is a huge and complex issue and Google is perhaps investing the best it can on this.

Still, as an avid YouTuber, here are my two pennies on the issue, if they can help.

The outline of my idea is as follows:

- Rather than blocking spurious videos, the target is to prevent them from going into recommendation lists, search results and subscriber lists (let's call them feeds).

- Go with the assumption that extremist views and conspiracy theories tend to be created and followed by groups. If there is a way to detect these groups, then all videos followed by them can be tagged unfeedable (cannot go into feeds).

- Once such groups are detected, a sample review of their videos should be enough to tag all their videos, thereby reducing the review workload.

- To use crowd sourcing to verify samples from the group and rate the group

- To use collaborative filtering (a method already in use in recommender systems) for the above (an idea explained in detail later).

Recommender systems

There is a paper on how Google uses a deep learning architecture for recommendations here. Google uses a two-stage process: first identify possible candidate videos, then arrive at a ranked list of videos which it displays to the user. These stages make extensive use of the user-interaction data Google collects, including video watch time, like/dislike feedback, commenting and so on. It appears to do a thorough job of taking many factors into account, such as the freshness of a video and user demographics, to ensure that the recommendations catch the viewer's attention. As Google says on its site, "The goals of YouTube's search and discovery system are twofold: help viewers find the videos they want to watch, and maximize long-term viewer engagement and satisfaction". The emphasis is on surfacing videos the viewer will watch, will like, and that will keep them watching YouTube. In fact, the deep learning models are trained to mimic actual viewing patterns: they are fed millions of examples of past usage data and trained to predict the next watch correctly.
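To make the two-stage flow concrete, here is a minimal, illustrative sketch (in Python, not Google's actual model): a candidate-generation step narrows the corpus to a handful of videos close to the user's taste, and a ranking step re-scores them with extra signals such as freshness and expected watch time. All names, embeddings and weights are toy assumptions.

```python
import numpy as np

def generate_candidates(user_vec, video_vecs, k=3):
    """Stage 1: pick the k videos whose embeddings are closest to the user's."""
    scores = {vid: float(np.dot(user_vec, vec)) for vid, vec in video_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def rank_candidates(candidates, features):
    """Stage 2: re-rank the candidates using extra signals (toy weights)."""
    def score(vid):
        f = features[vid]
        return 0.7 * f["expected_watch_time"] + 0.3 * f["freshness"]
    return sorted(candidates, key=score, reverse=True)

# Toy data: one user embedding, a few video embeddings and per-video signals.
user_vec = np.array([0.9, 0.1])
video_vecs = {"v1": np.array([0.8, 0.2]), "v2": np.array([0.1, 0.9]),
              "v3": np.array([0.7, 0.3]), "v4": np.array([0.6, 0.4])}
features = {"v1": {"expected_watch_time": 0.4, "freshness": 0.9},
            "v3": {"expected_watch_time": 0.8, "freshness": 0.2},
            "v4": {"expected_watch_time": 0.5, "freshness": 0.5}}

candidates = generate_candidates(user_vec, video_vecs)
print(rank_candidates(candidates, features))   # e.g. ['v3', 'v1', 'v4']
```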

Google also has algorithms in place that detect unacceptable content in the video script (as per their published policies) and flag it as a strike. There is also a manual verification process and an appeal process for the video uploader. If the strikes go beyond a certain number, the video is blocked. It appears they have one set of algorithms for recommendations and another for content verification. The idea suggested here is to integrate the two.

One of the methods many recommender systems use is what is called collaborative filtering (CF), which Google mentions is used for the candidate-selection stage above. It is based on "the concept that different people whose evaluation (of say songs) matched in the past are likely to match again in the future". For example, say you view Song A by Band X and also like Song B by Band Y. The algorithm will then look for another person (say individual K) with the same view/like history, pull out Song C liked by him, and show it in your feed. If you then view Song C, the pattern gets reinforced and you get more videos seen by K. It looks for the class of people whose tastes are similar to yours and recommends items they like (which you have not seen). Collaborative filtering is considered the most popular and widely implemented technique in recommender systems, as per the Recommender Systems Handbook, Springer (2015).
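The Song A/B/C example can be written as a toy user-based collaborative filter: find the user whose like history overlaps most with yours and recommend whatever they liked that you have not seen. Real systems use far richer signals (watch time, implicit feedback) and matrix-factorization or neural models; the function and data below are purely illustrative.

```python
def recommend(target_user, likes):
    """likes: dict mapping each user to the set of items they liked."""
    target_likes = likes[target_user]
    # Find the other user whose liked items overlap most with the target's.
    best_match = max(
        (u for u in likes if u != target_user),
        key=lambda u: len(likes[u] & target_likes),
    )
    # Recommend what the best match liked but the target has not seen.
    return likes[best_match] - target_likes

likes = {
    "you":          {"Song A (Band X)", "Song B (Band Y)"},
    "individual K": {"Song A (Band X)", "Song B (Band Y)", "Song C"},
    "someone else": {"Song D"},
}
print(recommend("you", likes))   # -> {'Song C'}
```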

What results from CF is that people with similar tastes get grouped together. For example, somewhere in Google's or Amazon's databases you would be part of a group that appreciates hard rock, or classical music, or country and western, depending on your tastes. Unknowingly, you get grouped with people who watch and like the kind of stuff that you like: groups with shared interests, passions and ideas, positive or negative. It is exactly this feature that Google can exploit to spot videos from people with hateful ideas, since they would form a group too.

So if there is a way to identify such a group, then the whole class of videos belonging to it can be tagged as unsearchable and unfeedable.

Profile Grouping

Collaborative filtering is used by recommender systems to filter possible recommendations out of a large corpus. Here it is proposed to be used to extract groups of user profiles with shared interests. Let's call this process Profile Grouping (PG). The PG process simply maps users to the videos they like, comment on or otherwise engage with positively. It identifies sets of users who like sets of videos, so a mapped Groups-Videos list gets generated. The more exclusively the members of a group watch a video, the stronger the mapping. A group would not carry any defining tag to begin with. Google has its own tags for classifying videos (based on uploader descriptions, what it calls metadata), which it uses during searches, but that is not needed here. Remember, the purpose of this exercise is to identify potential groups that generate harmful videos, and just that. So the only tagging needed is of the group's (and its videos') spuriousness, and that happens only after a spurious video is detected.
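As a rough illustration of the Profile Grouping idea, here is a sketch (with made-up names and thresholds) that groups users whose positive engagements overlap heavily and records the videos each group maps to. At YouTube's scale this would need approximate clustering methods; the greedy grouping below only shows the shape of the Groups-Videos mapping.

```python
def jaccard(a, b):
    """Overlap between two sets of videos, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def profile_groups(engagements, threshold=0.5):
    """engagements: dict user -> set of videos engaged with positively.
    Returns a list of (users, videos) pairs: the Groups-Videos mapping."""
    groups = []                                   # each entry: (set of users, set of videos)
    for user, videos in engagements.items():
        for members, group_videos in groups:
            if jaccard(videos, group_videos) >= threshold:
                members.add(user)                 # user joins an existing group
                group_videos |= videos            # widen the group's video set
                break
        else:
            groups.append(({user}, set(videos)))  # start a new group
    return groups

engagements = {"u1": {"v1", "v2", "v3"}, "u2": {"v1", "v2"}, "u3": {"v7", "v8"}}
for members, vids in profile_groups(engagements):
    print(sorted(members), "->", sorted(vids))
# ['u1', 'u2'] -> ['v1', 'v2', 'v3']
# ['u3'] -> ['v7', 'v8']
```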

So, for identifying whether a video is spurious or not, a two-stage process is suggested.

First, over and above the like/dislike buttons, a 'spurious' button should be provided below each video; by pressing it, the user signals that the video is unacceptable and should be removed. This by itself is not a guarantee that the video is spurious.

The second stage happens only if a video has been flagged as spurious by a user. This stage consists of manual review. Reviewing, it is suggested, can be a crowdsourced affair, where reviewers are sent the video for rating. Once a reviewer gives a negative rating, the group (and its videos) get tagged. So a negative rating would apply not only to the subject video, but also to all the videos mapped to the group that this video belongs to. If the rating of the group falls below a defined value, all videos from the group are tagged unsearchable and unfeedable, even for subscribers. They would not show up in searches or recommendation feeds.
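Here is an illustrative sketch of that two-stage flow: a spurious flag queues the video for crowd review, negative reviews lower the rating of the group the video maps to, and once the group rating drops below a cut-off, every video mapped to that group is tagged unfeedable. The class name, threshold and data structures are assumptions made for the example.

```python
UNFEEDABLE_CUTOFF = -3          # assumed cut-off, not a real YouTube value

class GroupTagger:
    def __init__(self, group_of_video, videos_of_group):
        self.group_of_video = group_of_video      # video id -> group id
        self.videos_of_group = videos_of_group    # group id -> set of video ids
        self.group_rating = {g: 0 for g in videos_of_group}
        self.unfeedable = set()
        self.review_queue = []

    def flag_spurious(self, video):
        """Stage 1: a viewer presses the 'spurious' button."""
        self.review_queue.append(video)           # queued for crowd review

    def record_review(self, video, negative):
        """Stage 2: a crowd reviewer rates the flagged video."""
        group = self.group_of_video[video]
        self.group_rating[group] += -1 if negative else 1
        if self.group_rating[group] < UNFEEDABLE_CUTOFF:
            self.unfeedable |= self.videos_of_group[group]   # tag the whole group

tagger = GroupTagger({"v1": "g1", "v2": "g1"}, {"g1": {"v1", "v2"}})
tagger.flag_spurious("v1")
for _ in range(4):
    tagger.record_review("v1", negative=True)
print(tagger.unfeedable)   # -> {'v1', 'v2'}
```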

So group tagging is done by sampling, which considerably reduces processing time compared with checking every video. Clearly this process would suffer from what is called the cold-start problem: a group definition would only form after some reviews have accumulated, so bad videos would initially still be available in feeds. But the belief, once again, is that spurious videos tend to be generated by people of a certain kind. So over a period of time Google should be able to build up groups with rating histories and be able to spot malicious videos.

The tagged videos would still be present on YouTube, however, and could be accessed only via their direct address or link, which considerably reduces the chance of unintentional views. This is an important distinction from the present practice of blocking videos. The only thing being done is to take them out of reach of unintended recipients. This includes subscriber feeds too, since YouTube cannot be a forum for spreading spurious content even to subscribers.

Manual review might appear a daunting task, considering the number of videos uploaded every minute. But once a group has been identified, a fresh video from the group that gets a spurious strike can skip the second stage. So over time the number of reviews needed should come down. Still, crowdsourcing is suggested for verification, taking inspiration from that amazing platform, Wikipedia. Wikipedia, for all its limitations, provides reasonably truthful information despite the fact that its editors get no incentive for ensuring accuracy. It is a fantastic demonstration of the inherent human nature to support rightness and promote truth. Google can find plenty of people who would be ready to support this activity of verifying video content. They do not do it today, or perhaps are not seen to be doing it, because the extraordinary number of videos spewing out every day makes it difficult to even locate videos to give feedback on. With a proper review system, Google can develop reviewers with performance ratings and classify them by topics of expertise. They can be sent videos for review and be compensated. To eliminate the effects of demographics and political affiliation, the reviewers can be spread all over the world.

One of the major effects of the grouping action is that it reduces the number of videos to be verified. A sample set of videos should be sufficient to characterize the group and all its mapped videos. For example, a right-wing anti-Semitic group can be evaluated from just a handful of its videos, and all videos that are liked exclusively by this group can then be tagged unfeedable and unsearchable. To ensure that good videos watched by the group do not get tagged negatively, a video liked by a group member that is also liked by other groups would not qualify for a negative tag. The whole process can be controlled by a faceless algorithm so that there is no bias. Individual reviewers only review videos; they would have no idea which group a video belongs to and would have no say in the action the algorithm takes.
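The exclusivity safeguard just described fits in a few lines: only videos liked exclusively by the flagged group are tagged, and anything also liked by members of other groups is spared. Names are illustrative.

```python
def videos_to_tag(flagged_group, groups_to_videos):
    """groups_to_videos: dict group id -> set of videos liked by that group."""
    liked_elsewhere = set().union(
        *(videos for group, videos in groups_to_videos.items() if group != flagged_group)
    )
    # Tag only what the flagged group likes exclusively.
    return groups_to_videos[flagged_group] - liked_elsewhere

groups_to_videos = {
    "g_flagged": {"v1", "v2", "v3"},
    "g_other":   {"v3", "v9"},   # v3 is also liked elsewhere, so it is spared
}
print(videos_to_tag("g_flagged", groups_to_videos))   # -> {'v1', 'v2'}
```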

This process could serve as a framework for how YouTube videos are linked and networked. It could also be extended to rate videos for truthfulness, approval and so on, which might benefit users and, in the long run, make YouTube a source of trusted information. This may even have commercial benefits.

It would be a tough task, any which way, to spot and penalize videos with unacceptable content while not offending a channel's many fans, as this article shows. But that is a price Google has to pay if it wants to keep hateful videos from leading to violence and loss of life.

Thanks for reading!

