Add a robots.txt that allows search + AI crawlers — Lovable
Without a robots.txt, crawler defaults apply — usually fine for Google, but it leaves you no way to explicitly invite (or rate-limit) the AI crawlers behind ChatGPT, Claude, and Perplexity.
Fixing this in Lovable
AI full-stack app builder (React + Vite + Supabase)
Drop a static `public/robots.txt`. Lovable serves everything in `public/` at the root.
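A minimal file matching the prompt below might look like this (the sitemap URL is a placeholder — substitute your deployed domain):

```txt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /

Sitemap: https://yoursite.com/sitemap.xml
```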
The prompt for Lovable
Copy and paste this into your Lovable chat exactly as-is.
Fix my Lovable app — please make these exact changes in the Lovable editor:

Add a robots.txt
1. Create `/public/robots.txt`.
2. Allow GPTBot, ClaudeBot, PerplexityBot, and Google-Extended (one `User-agent` block each, all with `Allow: /`).
3. Add a `Sitemap:` line pointing to your sitemap.xml.
Why this matters
robots.txt is the first file any crawler requests. It tells bots what they can and cannot fetch, and at what rate. No file = default-allow for most good bots, which is fine — but you lose the ability to explicitly invite AI crawlers or block abusive scrapers.
Between 2023 and 2025, OpenAI, Anthropic, Perplexity, and Google all introduced AI crawlers that respect `robots.txt`. Many developers blocked them by accident through a restrictive template config; many others never created the file at all and missed the chance to opt in explicitly.
If you want to appear in ChatGPT search, Claude responses, Perplexity citations, or Google AI Overviews, you must explicitly allow the respective bot. This is the single most important GEO (Generative Engine Optimization) step and takes 30 seconds.
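The default-allow behavior described above can be checked with Python's standard-library parser; this is an illustrative sketch, not part of the Lovable setup:

```python
from urllib import robotparser

# An empty robots.txt (or no file at all) means well-behaved
# crawlers treat every URL as fetchable by default.
rp = robotparser.RobotFileParser()
rp.parse([])  # no rules
print(rp.can_fetch("GPTBot", "https://example.com/page"))  # True
```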
How to use this prompt in Lovable
1. Open your Lovable project.
2. Copy the prompt above.
3. Paste it into the Lovable chat and send.
4. Review the diff, accept the changes, and redeploy.
5. Verify the fix using the checklist below.
Common mistakes to avoid
- Shipping `User-agent: *\nDisallow: /` from a template — it blocks everything.
- Blocking GPTBot, ClaudeBot, and PerplexityBot because a blog post said to "block AI crawlers" — it costs you all AI-search visibility.
- Forgetting to add `Sitemap: https://yoursite.com/sitemap.xml` at the end.
- Combining `Disallow: /api` (usually correct) with `Disallow: /` (which blocks the whole site).
- Keeping both `public/robots.txt` and `app/robots.ts` in a Next.js project — only one of them can serve `/robots.txt`.
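The first and fourth mistakes above can be seen directly with Python's stdlib `urllib.robotparser`; a quick sketch (the domain is a placeholder):

```python
from urllib import robotparser

# A template's blanket "Disallow: /" blocks every crawler from every URL.
blocked = robotparser.RobotFileParser()
blocked.parse(["User-agent: *", "Disallow: /"])
print(blocked.can_fetch("GPTBot", "https://example.com/pricing"))  # False

# "Disallow: /api" alone is fine: the API stays private, pages stay crawlable.
scoped = robotparser.RobotFileParser()
scoped.parse(["User-agent: *", "Disallow: /api"])
print(scoped.can_fetch("GPTBot", "https://example.com/api/users"))  # False
print(scoped.can_fetch("GPTBot", "https://example.com/pricing"))    # True
```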
How to verify the fix worked
- Visit `https://yoursite.com/robots.txt` — it should return the file content as plain text.
- In Google Search Console → Settings → Crawling → robots.txt report, confirm there are no errors or warnings.
- Run `curl -s https://yoursite.com/robots.txt | grep -i "gptbot"` to confirm the GPTBot line exists (`-i` because the file spells it `GPTBot`).
- Test a blocked URL with Google's robots.txt tester in Search Console.
Frequently asked questions
Does disallowing a URL in robots.txt remove it from Google?
Which AI crawlers should I allow?
Does allowing AI crawlers mean they use my content for training?
Want all 34 prompts tailored to your Lovable site?
Pantra scans your site in 10 seconds, detects the stack, and generates the exact prompts that apply — only the ones you actually need.
Related Lovable prompts
Allow GPTBot, ClaudeBot, and PerplexityBot — Lovable
Prompt to whitelist AI crawlers so ChatGPT, Claude, and Perplexity can cite your pages. Works in any AI-coded stack.
SEO: Generate a sitemap.xml covering every route — Lovable
Stack-specific prompt to create /public/sitemap.xml listing all public routes, for Lovable, Cursor, Bolt, v0, Replit, Windsurf, Claude Code, and Base44.
AI Search / GEO: Add an llms.txt file — Lovable
Stack-specific prompt to publish llms.txt — a curated guide telling LLMs what your site is about.
SEO: Add a unique <title> tag to every page — Lovable
Copy-paste prompt to add a unique, keyword-rich <title> tag to every page in Lovable, Cursor, Bolt, v0, Replit, Windsurf, Claude Code, or Base44.