AI Search / GEO · Cursor

Allow GPTBot, ClaudeBot, and PerplexityBot

AI answer engines only cite sites their bots can fetch. If you do not allow GPTBot, ClaudeBot, and PerplexityBot in robots.txt, you are invisible to ChatGPT and Perplexity no matter how good your content is.


Fixing this in Cursor

An agentic AI code editor built on VS Code

In Cursor you edit real files, so this fix lands via direct diffs instead of a regenerated project. Paste the prompt below into your Cursor chat and the fix rolls out across the project in one pass.

Using a different tool? Pick your stack:

The prompt for Cursor

Copy and paste this into your Cursor chat exactly as-is.

Apply these changes to my codebase. Edit the files directly and keep existing formatting:

Allow AI crawlers

1. Open /public/robots.txt (create it if missing).
2. Add a block for each AI crawler, following the pattern `User-agent: GPTBot` / `Allow: /`: GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, and CCBot (six blocks in total).
3. Do NOT add a global `User-agent: * / Disallow: /` — that overrides the bot-specific allows on some parsers.
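As a sketch, a complete /public/robots.txt allowing the crawlers discussed in this guide would look like this (existing rules for other bots can sit below these blocks):

```txt
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: CCBot
Allow: /
```

Each crawler matches its own named block, so these rules win over any generic `User-agent: *` section for those bots.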

Why this matters

In 2024-2025 AI search (ChatGPT, Claude, Perplexity, Google AI Overviews) started sending real referral traffic and, more importantly, becoming the first place users ask questions. The old model — rank in Google, get clicks — is being replaced by "get cited in an AI answer, get traffic".

Every major AI company publishes a crawler that respects robots.txt. OpenAI has GPTBot (for training) and ChatGPT-User (for live web search). Anthropic has ClaudeBot. Perplexity has PerplexityBot. Google has Google-Extended. Common Crawl (CCBot) powers many open-source models.

If you do not explicitly allow these bots, they fall back to whatever robots.txt says for `User-agent: *`. Many sites ship with restrictive defaults and accidentally block everything. The fix is a short allow list, the single highest-leverage GEO move you can make.
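The restrictive-default failure mode is easy to reproduce with Python's standard `urllib.robotparser`. The rule strings below are illustrative, not your actual file:

```python
from urllib import robotparser

# A restrictive default: everything blocked for every crawler.
RESTRICTIVE = """\
User-agent: *
Disallow: /
""".splitlines()

# Same default, but with explicit per-bot allow blocks added.
WITH_AI_ALLOWS = """\
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Disallow: /
""".splitlines()

def can_fetch(rules, agent, url="https://example.com/"):
    """Return True if `agent` may fetch `url` under the given rules."""
    rp = robotparser.RobotFileParser()
    rp.parse(rules)
    return rp.can_fetch(agent, url)

# With only the blanket Disallow, AI crawlers are shut out...
print(can_fetch(RESTRICTIVE, "GPTBot"))       # False
# ...while an explicit per-bot block restores access,
# without opening the door to unnamed bots.
print(can_fetch(WITH_AI_ALLOWS, "GPTBot"))    # True
print(can_fetch(WITH_AI_ALLOWS, "RandomBot")) # False
```

The parser matches a crawler to its own named group first and only falls back to `User-agent: *` when no specific group exists, which is exactly why each AI bot needs its own block.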

How to use this prompt in Cursor

  1. Open your Cursor project.
  2. Copy the prompt above with the copy button.
  3. Paste it into the Cursor chat and send.
  4. Review the diff, accept the changes, redeploy.
  5. Verify the fix using the checklist below.

Common mistakes to avoid

  • Blocking all bots with `User-agent: *\nDisallow: /` and forgetting to add allow rules for AI crawlers.
  • Allowing only GPTBot and forgetting the others: ChatGPT search and ChatGPT training use different crawlers.
  • Blocking AI because of a blog post that said "block AI crawlers": that advice costs you AI-search visibility.
  • Relying on rule order: putting `User-agent: *` / `Disallow: /` first and the specific allows after it confuses some parsers, so give each AI bot its own explicit block rather than counting on ordering.

How to verify the fix worked

  • Visit `/robots.txt` and confirm the six User-agent blocks exist, each with `Allow: /`.
  • Test with https://www.bing.com/webmasters/help/robots-txt-analyzer or a similar robots.txt tester.
  • Check server logs for GPTBot, ClaudeBot, and PerplexityBot user-agents after 1-2 weeks; they should appear.
  • In ChatGPT with web browsing, ask a query where you would expect to be cited and see if you appear.
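For the server-log check, a grep along these lines works. The sample log written here is a hypothetical stand-in; point the grep at your real access log instead (the path varies by server, e.g. an nginx or Apache log):

```shell
# Hypothetical sample access log for illustration.
cat > /tmp/access_sample.log <<'EOF'
203.0.113.7 - - [10/May/2025:12:00:01] "GET /robots.txt HTTP/1.1" 200 "-" "GPTBot/1.2"
198.51.100.3 - - [10/May/2025:12:03:44] "GET / HTTP/1.1" 200 "-" "ClaudeBot/1.0"
192.0.2.9 - - [10/May/2025:12:05:10] "GET /pricing HTTP/1.1" 200 "-" "PerplexityBot/1.0"
192.0.2.44 - - [10/May/2025:12:06:02] "GET / HTTP/1.1" 200 "-" "Mozilla/5.0"
EOF

# Tally hits per AI crawler user-agent.
grep -oiE 'GPTBot|ClaudeBot|PerplexityBot|Google-Extended|CCBot' \
  /tmp/access_sample.log | sort | uniq -c
```

If the crawlers can reach you, each allowed bot shows up with a nonzero count within a week or two of deploying the new robots.txt.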

Frequently asked questions

Will allowing GPTBot let OpenAI train on my content?
Yes: GPTBot is specifically the training crawler. If you want to appear in ChatGPT answers without contributing training data, allow ChatGPT-User (live search) and block GPTBot (training). The same split applies to Google-Extended.

Does allowing AI crawlers hurt my SEO?
No. Google's AI crawlers are separate from Googlebot, and blocking Google-Extended does not affect your regular Google search ranking.

Is there a downside to allowing them?
The training question is a policy call. Operationally, you gain traffic from AI citations, and server load is not a practical concern: the major AI crawlers are generally polite.

Want all 34 prompts tailored to your Cursor site?

Pantra scans your site in 10 seconds, detects the stack, and generates the exact prompts that apply — only the ones you actually need.

Scan my site

Related Cursor prompts