Extremist content continues to plague YouTube, and now the video hosting platform is taking the fight further by unleashing Google artificial intelligence…
In June, YouTube introduced a ban on hateful videos. The company, owned by Google, had experienced a publisher revolt over the placement of extremist content next to advertisers' brands. Now, YouTube is taking another step to combat extremist content by putting Google AI to work.
Google Artificial Intelligence to Identify YouTube Extremist Content
The video host states it has already begun rolling out tools to identify extremist content and says it is pleased with the results thus far. YouTube discloses that more than 75 percent of the violent extremist content it successfully found and removed was taken down before being flagged by a human. The system's accuracy has also improved with the help of machine learning.
“We’ve always used a mix of technology and human review to address the ever-changing challenges around controversial content on YouTube. We recently began developing and implementing cutting-edge machine learning technology designed to help us identify and remove violent extremism and terrorism-related content in a scalable way,” the announcement explains.
YouTube recognizes it cannot rely solely on technology to find extremist content and will continue to use humans to find and flag such content. “Of course, our systems are only as good as the data they’re based on. Over the past weeks, we have begun working with more than 15 additional expert NGOs and institutions through our Trusted Flagger program, including the Anti-Defamation League, the No Hate Speech Movement, and the Institute for Strategic Dialogue.”
The company will also apply tougher treatment to content that is not illegal and does not technically violate its guidelines but has nevertheless been flagged.
Thanks for taking a few moments of your time to read this; if you like this information, please share it with your friends!