AI Spotlight: YouTube’s AI Tools Show Promise for Extremist Content Removal


Terrorist organizations and other hate groups, such as al-Qaeda, ISIS, white supremacists, and neo-Nazis, use social media and video streaming platforms to publish and spread hateful and offensive content for radicalization, propaganda, and organizational purposes. After the recent tragic events in Charlottesville, the tech community has been working out how to respond. Platforms have increased the rate at which they take down white supremacist content or make it harder to find. But many companies and platforms have been flagging and removing such harmful material for a long time, particularly terrorist content.


Like many companies, YouTube has grappled with how best to remove terrorist and supremacist content, which harms the platform in a variety of ways. In fact, YouTube was flagging and taking down content on its own long before the protests in Charlottesville. In doing so, it exercises judgment about how such content is used: a video posted for educational purposes rather than extremist ones, for example, may stay up. In addition to the flag-review-removal process commonly used on social media and streaming sites, YouTube is also using artificial intelligence (AI) to expedite that process, as sketched below.
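
To make the workflow concrete, here is a minimal, purely illustrative Python sketch of a flag-review-removal pipeline with an AI pre-screening step. The function names, thresholds, and the educational-use exemption are assumptions made for this example, not a description of YouTube's actual systems.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    flagged_by_user: bool          # came in through the human flagging queue
    ai_extremism_score: float      # hypothetical model score in [0, 1]
    educational_context: bool      # e.g. news reporting or documentary use

def triage(video: Video, ai_threshold: float = 0.9) -> str:
    """Route a video to 'remove', 'human_review', or 'keep'.

    Purely illustrative: real moderation pipelines involve many more
    signals, appeals processes, and policy nuances than shown here.
    """
    # Content used for educational or documentary purposes may be exempted.
    if video.educational_context:
        return "human_review" if video.flagged_by_user else "keep"

    # High-confidence AI detections can be fast-tracked for action.
    if video.ai_extremism_score >= ai_threshold:
        return "remove"

    # Anything flagged by users, or scored as borderline, goes to reviewers.
    if video.flagged_by_user or video.ai_extremism_score >= 0.5:
        return "human_review"

    return "keep"

if __name__ == "__main__":
    sample = Video("abc123", flagged_by_user=True,
                   ai_extremism_score=0.72, educational_context=False)
    print(triage(sample))  # -> human_review
```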


AI and machine learning can detect and predict which content is extremist or terrorism-related and remove it.  The results are demonstrably better than relying on humans alone to flag, review, and take down content.  In fact, YouTube saw its ability to flag this type of content improve six-fold within just a few weeks of implementing the AI tools.  According to Kent Walker, General Counsel at Google, YouTube is also using AI and machine learning models to find and assess more than fifty percent of the terrorism-related content it has removed over the past six months.  Walker also said that YouTube would have to create “content classifiers” to train its algorithms to determine what constitutes “extremist content.”  A minimal sketch of how such a classifier might be trained appears below.
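
As a rough illustration of the “content classifier” idea, here is a minimal, self-contained sketch of training a text classifier on labeled examples, assuming scikit-learn and an invented toy dataset. YouTube's production systems are far more sophisticated and operate on many more signals than the short text strings used here.

```python
# Illustrative only: a tiny text classifier standing in for the kind of
# "content classifier" described above. The toy data and labels are
# invented for the example; they are not real moderation data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = policy-violating, 0 = benign.
texts = [
    "join our cause and spread the propaganda video",
    "recruitment message calling for violence",
    "documentary analysis of extremist recruitment tactics",
    "cooking tutorial for weeknight dinners",
    "news report on the aftermath of the attack",
    "violent call to action against civilians",
]
labels = [1, 1, 0, 0, 0, 1]

# TF-IDF features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Scores near 1.0 suggest likely violations; borderline scores would be
# routed to human reviewers rather than removed automatically.
score = classifier.predict_proba(["new propaganda video calling for violence"])[0][1]
print(f"violation probability: {score:.2f}")
```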


As with many AI technologies, a common fear is that AI will displace human workers.  Yet YouTube maintains a strong focus on the human component of its AI flagging tools.  It still needs data scientists to develop and train the algorithms.  Humans are also necessary to review what the AI flags and to confirm that it is indeed extremist content being removed.  Finally, YouTube still needs independent human experts in its “Flagger” program, through which it will give grants to 50 NGOs to assist with content classification.  In these ways, while AI is certainly going to have a significant autonomous impact on content removal, it also supplements human work and creates new opportunities for people, in keeping with SIIA’s views.


YouTube, like many online platforms, has faced pressure from advertisers, consumers, and governments to “do more” to take down terrorist and extremist material.  Employing AI tools like those used by YouTube shows that social media companies have not only been doing a great deal to thwart extremist propaganda and other content, but are also innovating to do so, creating new technologies, applications, and opportunities in the process.  While these AI tools cannot fully mitigate the effects of extremist content (YouTube taking down a video will not stop an ISIS attack), they can certainly stifle extremist groups’ ability to get their messages across.  By using AI technology to remove content, YouTube hopes to prevent extremist propaganda from influencing impressionable people, which can lead to catastrophic and tragic events like the many that have occurred around the world, most recently in Charlottesville.

Diane Pinto is the Public Policy Coordinator at SIIA. Follow the Policy team on Twitter @SIIAPolicy.