Google Expanding YouTube Moderator Team to Improve User Experience
Google says it will expand its team of human YouTube moderators to as many as 10,000 employees to monitor video uploads, thereby improving the user experience.
In recent months, the video hosting platform has put AI to work hunting down extremist content, a move prompted by uploads from terrorist organizations. Before that, the company introduced a ban on hateful videos. Now it is targeting disturbing channels that feature videos masquerading as family-friendly content, training its algorithms to detect hate speech and improve child safety.
However, properly training those algorithms requires many more human hands on deck. The platform therefore intends to hire as many as 10,000 people as content reviewers, whose job is to flag content that violates the company’s policies.
YouTube CEO Susan Wojcicki says the company’s machine-learning algorithms helped take down 70 percent of violent extremist content within eight hours of upload. The video host will now apply that same technology to other types of content, including videos that target children. But these aren’t the only measures YouTube plans to take.
In addition, the company will create and enforce stricter guidelines governing which channels are eligible for ad monetization. Currently, creators need a minimum of 10,000 views to earn money. The platform will also expand its human review team to “ensure ads are only running where they should.” Finally, beginning next year, the company will start publishing reports detailing the flags it receives and its removal actions against videos and comments that violate its policies.