Google has updated its help documentation for Google crawlers, adding the Google-Safety crawler to the list of special-case crawlers. The crawler is not new, but Google decided to document it because it "received many questions" about it over the past year.
The Google Safety crawler operates under the full user agent string "Google-Safety."
The Google-Safety user agent handles abuse-specific crawling, such as malware discovery for publicly posted links on Google properties.
The Google-Safety user agent ignores robots.txt rules.
Why does it ignore robots.txt? Presumably because it has to check pages and directories that might not be safe in order to protect searchers and users.
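Because the crawler ignores robots.txt, site owners who want to see its activity have to look at server logs rather than crawl directives. A minimal sketch in Python, assuming a simplified access-log format (the helper name and sample log lines are illustrative; only the "Google-Safety" user agent token comes from Google's documentation):

```python
# Spot Google-Safety requests in a server access log.
# Assumes each log line ends with the quoted User-Agent string.

def is_google_safety(user_agent: str) -> bool:
    """Return True if the User-Agent belongs to the Google-Safety crawler."""
    return "Google-Safety" in user_agent

sample_log = [
    '203.0.113.4 - - "GET /page HTTP/1.1" 200 "Google-Safety"',
    '198.51.100.7 - - "GET /page HTTP/1.1" 200 "Mozilla/5.0"',
]

# Count hits from the Google-Safety crawler.
hits = [line for line in sample_log if is_google_safety(line)]
print(len(hits))
```

Note that since robots.txt has no effect on this crawler, log analysis (or server-side rules) is the only practical way to observe or restrict it.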
Forum discussion at X.
Image credit to Lizzi.