
Add a robots.txt that allows search + AI crawlers

Without a robots.txt, defaults apply — usually fine for Google, but you lose the chance to explicitly invite the bots behind ChatGPT, Claude, and Perplexity.


Fixing this in Cursor

Cursor is an agentic AI code editor built on VS Code.

For Next.js, prefer `app/robots.ts` exporting `MetadataRoute.Robots` — it gets built dynamically and respects the site URL.
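A minimal sketch of that file, assuming Next.js 13.3+ with the App Router (`yoursite.com` is a placeholder):

```ts
// app/robots.ts: a minimal sketch; swap in your real site URL
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      { userAgent: 'GPTBot', allow: '/' },
      { userAgent: 'ClaudeBot', allow: '/' },
      { userAgent: 'PerplexityBot', allow: '/' },
      { userAgent: 'Google-Extended', allow: '/' },
      { userAgent: '*', allow: '/' }, // default rule for all other bots
    ],
    sitemap: 'https://yoursite.com/sitemap.xml', // placeholder domain
  }
}
```

Next.js serves this at `/robots.txt` at request time, so the output stays in sync with your deployed site.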

Files to touch in Cursor:

`app/robots.ts` OR `public/robots.txt` (not both)


The prompt for Cursor

Copy and paste this into your Cursor chat exactly as-is.

Apply these changes to my codebase. Edit the files directly and keep existing formatting:

Add a robots.txt

1. Create /public/robots.txt.
2. Allow GPTBot, ClaudeBot, PerplexityBot and Google-Extended (one User-agent block each, all with `Allow: /`).
3. Add a `Sitemap:` line pointing to your sitemap.xml.
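For reference, applied as-is the prompt should leave `public/robots.txt` looking roughly like this (the sitemap URL is a placeholder):

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://yoursite.com/sitemap.xml
```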

Why this matters

robots.txt is the first file any crawler requests. It tells bots what they can and cannot fetch, and at what rate. No file = default-allow for most good bots, which is fine — but you lose the ability to explicitly invite AI crawlers or block abusive scrapers.
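For example, one file can explicitly invite a bot, throttle another, and shut out a third. A sketch, with illustrative bot names (note that `Crawl-delay` is non-standard and ignored by Google, though some bots honor it):

```
# Explicitly invite an AI crawler
User-agent: GPTBot
Allow: /

# Ask a well-behaved bot to slow down (non-standard directive)
User-agent: Bingbot
Crawl-delay: 10

# Block a scraper you don't want
User-agent: MJ12bot
Disallow: /
```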

Between 2023 and 2025, OpenAI, Anthropic, Perplexity, and Google all introduced AI crawlers that respect `robots.txt` and skip sites that disallow them. Many developers blocked them by accident through a restrictive template config; many others never created the file at all and never signaled that these bots are welcome.

If you want to appear in ChatGPT search, Claude responses, Perplexity citations, or Google AI Overviews, you must explicitly allow the respective bot. This is the single most important GEO (Generative Engine Optimization) step and takes 30 seconds.

How to use this prompt in Cursor

  1. Open your Cursor project.
  2. Copy the prompt above with the copy button.
  3. Paste it into the Cursor chat and send.
  4. Review the diff, accept the changes, and redeploy.
  5. Verify the fix using the checklist below.

Common mistakes to avoid

  • Shipping `User-agent: *\nDisallow: /` from a template — it blocks everything for every bot.
  • Blocking GPTBot, ClaudeBot, and PerplexityBot because a blog post said "block AI crawlers" — that forfeits all AI-search visibility.
  • Forgetting the `Sitemap: https://yoursite.com/sitemap.xml` line at the end.
  • Combining `Disallow: /api` (usually sensible) with `Disallow: /` (which blocks the whole site).
  • Keeping both `public/robots.txt` and `app/robots.ts` — in Next.js the two conflict, so keep exactly one.

How to verify the fix worked

  • Visit `https://yoursite.com/robots.txt` — it should return the file content as plain text.
  • In Google Search Console → Settings → Crawling → robots.txt report, confirm there are no errors or warnings.
  • Run `curl -s https://yoursite.com/robots.txt | grep -i gptbot` — the `-i` matters, since the file spells it `GPTBot`.
  • If you disallow any paths, spot-check one with the URL Inspection tool in Search Console.
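If you prefer to script that spot check, here is a small sketch (placeholder domain; runnable with `npx tsx check-robots.ts`, for example):

```ts
// check-robots.ts: fetch the live robots.txt and confirm each AI bot is mentioned
const bots = ['GPTBot', 'ClaudeBot', 'PerplexityBot', 'Google-Extended']

const res = await fetch('https://yoursite.com/robots.txt') // placeholder domain
if (!res.ok) throw new Error(`robots.txt returned HTTP ${res.status}`)
const body = await res.text()

for (const bot of bots) {
  console.log(`${bot}: ${body.includes(bot) ? 'present' : 'MISSING'}`)
}
```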

Frequently asked questions

Does disallowing a URL in robots.txt remove it from Google?
No. robots.txt blocks crawling, not indexing. To remove a URL from the index, use `<meta name="robots" content="noindex">` or a `noindex` HTTP header.

Which AI crawlers should I allow?
GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot (Perplexity), Google-Extended (Google AI), CCBot (Common Crawl), Applebot-Extended (Apple AI).

Does allowing AI crawlers mean they use my content for training?
GPTBot and Google-Extended are specifically for training. ClaudeBot and PerplexityBot are primarily for search. If you only want AI search citations without training, allow ClaudeBot and PerplexityBot only.
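In robots.txt form, that search-only policy is a sketch like this (bot names change over time, so check each vendor's documentation):

```
# Allow AI search crawlers
User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Refuse training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```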

Want all 34 prompts tailored to your Cursor site?

Pantra scans your site in 10 seconds, detects the stack, and generates the exact prompts that apply — only the ones you actually need.

Scan my site
