
Robots.txt

Robots.txt is a text file at the domain root that instructs search engine crawlers which parts of a website they can or cannot access.


The robots.txt file is a plain text file located at the root of a website (e.g., https://www.example.com/robots.txt). It's part of the Robots Exclusion Protocol and tells web robots (such as search engine crawlers) which areas of the site they are allowed or disallowed to crawl.

This file is primarily used to manage crawler traffic, keep crawlers out of private or irrelevant sections (like admin pages, staging environments, or duplicate content), and conserve crawl budget. Two caveats are important. First, robots.txt controls crawling, not indexing: a disallowed URL can still appear in search results if other pages link to it. Second, it's a directive, not a security measure; compliant crawlers honor it, but malicious bots can ignore it and access disallowed content directly.

A common robots.txt configuration might look like this:

User-agent: *
Disallow: /admin/
Disallow: /private/

User-agent: Googlebot
Allow: /public/
Disallow: /temp/

This example blocks all user-agents from /admin/ and /private/, and gives Googlebot its own group that allows /public/ and disallows /temp/. Note that a crawler obeys only the most specific group that matches it, so Googlebot follows only its own rules here and ignores the wildcard group: it could still crawl /admin/ and /private/ unless those Disallow lines are repeated in the Googlebot group.
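As a sketch, you can test how these directives are interpreted with Python's standard-library urllib.robotparser, which implements the same group-selection rule (the example.com URLs below are placeholders):

```python
# Sketch: checking the example robots.txt rules with Python's stdlib parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /admin/
Disallow: /private/

User-agent: Googlebot
Allow: /public/
Disallow: /temp/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A generic crawler matches only the wildcard group, so /admin/ is blocked.
print(parser.can_fetch("*", "https://www.example.com/admin/"))          # False

# Googlebot follows only its own group: /temp/ is blocked...
print(parser.can_fetch("Googlebot", "https://www.example.com/temp/x"))  # False

# ...but /admin/ is allowed, because the wildcard group no longer applies.
print(parser.can_fetch("Googlebot", "https://www.example.com/admin/"))  # True
```

This makes the group-selection behavior concrete: to keep Googlebot out of /admin/, the Disallow must appear inside the Googlebot group itself.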
