Google Announces it Will No Longer Support Website Robots.txt Noindex Commands

Google has officially announced its GoogleBot web crawler won’t support website Robots.txt noindex directives after September 1st…

Search giant Google has officially announced that its web crawler, Googlebot, will no longer honor the robots.txt noindex rule as of September 1st.

Google Robots.txt Noindex Support Ending September 1st, 2019

Google explains it’s changing its policy because the robots.txt noindex rule was never an officially supported directive:

“In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex indexing directive in the robots.txt file, which controls crawling, there are a number of alternative options.”

Google recommends replacing the rule with noindex in robots meta tags, using 404 and 410 HTTP status codes, using password protection, using Disallow in robots.txt, or using the Search Console Remove URLs tool.
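To illustrate, here is a minimal sketch of the two most common replacements. The first keeps a page crawlable but out of the index via a robots meta tag; the second blocks crawling entirely via robots.txt. (The example paths are hypothetical.)

```html
<!-- Option 1: page-level noindex via a robots meta tag in the page's <head>.
     Googlebot can still crawl the page but will drop it from search results. -->
<meta name="robots" content="noindex">
```

```
# Option 2: robots.txt Disallow — blocks crawling of the path entirely.
# Note this is not the same as noindex: a disallowed URL can still be
# indexed if other sites link to it.
User-agent: *
Disallow: /private-section/
```

For non-HTML resources such as PDFs, the same page-level effect can be achieved with the `X-Robots-Tag: noindex` HTTP response header.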

Bill Boyles

Bill is a freelance writer who covers a wide range of topics, including apps, social media, and search.