Search giant Google has officially announced that its web crawler, Googlebot, will no longer honor the noindex directive in robots.txt files as of September 1st, 2019.
Google Robots.txt Noindex Support Ending September 1st 2019
Google explains it is changing its policy because the robots.txt noindex rule was never an officially documented directive:
“In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex indexing directive in the robots.txt file, which controls crawling, there are a number of alternative options.”
Google recommends replacing the rule with one of these alternatives: noindex in robots meta tags, 404 and 410 HTTP status codes, password protection, Disallow in robots.txt, or the Search Console Remove URL tool.
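To illustrate the first alternative, a page-level robots meta tag (the option Google documents as the supported way to block indexing) would look like this sketch; the tag goes in the page's HTML rather than in robots.txt:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Tells compliant crawlers not to include this page in search results -->
    <meta name="robots" content="noindex">
    <title>Example page kept out of the index</title>
  </head>
  <body>…</body>
</html>
```

Note the practical difference: a robots.txt Disallow rule blocks crawling but a URL can still appear in results if it is linked from elsewhere, while the meta tag (or the equivalent `X-Robots-Tag` HTTP header for non-HTML files) requires the page to be crawlable so the directive can be seen.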
Today we’re saying goodbye to undocumented and unsupported rules in robots.txt 👋
If you were relying on these rules, learn about your options in our blog post. https://t.co/Go39kmFPLT
— Google Webmasters (@googlewmc) July 2, 2019