Does llms.txt actually work? The adoption reality in 2026
Only 10.13% of sites have llms.txt, and the two largest 2026 studies found no measurable citation lift. What the file does, what it doesn't, and the time budget we'd spend instead.
Every other week, a founder forwards us a LinkedIn post that says "publish an llms.txt and you'll start getting cited in ChatGPT." The file takes 15 minutes to add. It sounds clean, technical, and like a free win. That is why it spreads. The data tells a different story: as of 2026, 10.13% of sites have llms.txt, zero AI bots fetch it in server logs, and the two largest citation studies found no measurable lift. The file is not harmful. It is simply not the mechanism moving AI citations. This piece is the honest 2026 picture - what llms.txt is, what the studies actually measured, what Google and Anthropic have said, and where we'd spend the same 15 minutes instead.
What is llms.txt and what was it meant to do?
llms.txt is a proposed markdown file for the root of a website, published September 3, 2024 by Jeremy Howard of Answer.AI. It is intended to give LLMs a curated, pre-digested map of the site: an H1 site name, a blockquote summary, and sectioned markdown links to priority pages. A sibling variant, llms-full.txt, ships the whole documentation corpus in one file. Howard's framing: "site authors know best, and can provide a list of content that an LLM should use."
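Per the spec at llmstxt.org, a minimal file looks something like this. The site name, summary, and links are placeholders, not a real deployment:

```markdown
# Example Docs

> Example Docs is the reference documentation for the Example API.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): Get a key and make your first request
- [API Reference](https://example.com/docs/api.md): Endpoints, parameters, and error codes

## Optional

- [Changelog](https://example.com/changelog.md): Release history
```

The llms-full.txt variant drops the link structure and inlines the full text of every page instead.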
The spec is hosted at llmstxt.org. The most visible early adopters were developer-documentation platforms: Mintlify rolled out automatic generation in November 2024, and through that pipeline, Anthropic's developer docs and Cursor's docs began shipping the file overnight. The proposal is legitimate. The question is whether AI retrieval pipelines in 2026 actually read it, and whether the brands that ship it earn more citations than the ones that don't.
Does llms.txt actually work for AI citations?
No - not in any of the 2026 studies we trust. Two independent analyses published between November 2025 and early 2026 tested the question directly and both came back with the same answer: no measurable citation lift. The file is not hostile to AI visibility. It is simply inert.
SE Ranking crawled ~300,000 domains and modeled citation frequency across ChatGPT, Perplexity, Gemini, and other major LLMs. Their XGBoost regression found that removing llms.txt as a feature actually improved the model's predictive accuracy - the file was adding noise, not signal. ALLMO's parallel 2026 audit examined 94,614 cited URLs across 11,867 AI responses and found exactly 1 llms.txt URL in the entire citation set - 0.00105693%. For the operator, that is not "needs more time to work." That is "no statistical relationship with the outcome the file claims to improve."
Who has adopted llms.txt, and who has not?
Adoption is flat around 10% and skews toward smaller dev-docs sites, not the domains AI engines actually cite. SE Ranking's 300,000-domain crawl pegged overall adoption at 10.13%. What matters more than the aggregate is the shape of the adoption curve.
Traffic tier by traffic tier, SE Ranking's data shows almost no variation: sites with 0–100 monthly visits adopt at 9.88%, sites with 1,001–5,000 visits at 10.54%, and sites with 100,001+ visits at 8.27%. High-authority sites adopt less than small sites. The ALLMO 2026 audit drove the point home at the top of the distribution: only 1 of the 50 most-cited domains globally (Target.com) runs an llms.txt, and 0 of the top 20 media and publishing domains ship one. Walmart quietly removed its file between November 2025 and January 2026. The domains LLMs actually quote from - Wikipedia, Reddit, G2, the major news outlets - are overwhelmingly absent. If it were true that the sites winning AI citations use llms.txt, the data would skew the other direction. It does not.
What do AI bots actually fetch in server logs?
Zero, in the largest public audit. wislr.com's 48-day server-log study (February 1–March 20, 2026) logged 12,099 AI bot requests across 71,603 total hits and found exactly zero requests to /llms.txt or /llm.txt from any AI crawler. The only entity that fetched it was Dataprovider.com, a web-analytics service - 3 requests, non-AI.
The bots doing real work in those 48 days were Meta-WebIndexer (1,833 hits), ChatGPT-User (923), Claude-SearchBot (549), PerplexityBot (456), OAI-SearchBot (330), ClaudeBot (206), and GPTBot (187). They crawled HTML pages. They occasionally checked robots.txt - OAI-SearchBot 3–6 times a day, ClaudeBot ~4 times. None of them asked for llms.txt. This is the single cleanest empirical fact in the whole debate: if a file is never requested, it cannot be influencing retrieval. A second, independent log study might surface a single edge case. The direction of the evidence is not ambiguous.
What have Google, OpenAI, and Anthropic said?
Google has twice publicly dismissed llms.txt. In June 2025, John Mueller posted on Bluesky that "no AI system currently uses llms.txt." On July 23, 2025 at Search Central Deep Dive, Gary Illyes told attendees: "Google doesn't support LLMs.txt and isn't planning to." Mueller separately compared the file to the legacy keywords meta tag - functionally dead on arrival.
The caveat worth naming: in December 2025 Google quietly added an llms.txt to its own Search Central developer docs, which is a Mintlify-style dev-docs gesture, not a retrieval-policy reversal. Mueller's public response to the discovery was "hmmn :-/". Neither OpenAI nor Anthropic has published a formal statement endorsing or consuming llms.txt. Signals operates an editorial placement network across 20,000+ sites, and in every client audit we run, the brands earning ChatGPT, Perplexity, and AI Overviews citations are doing so through mechanisms the platforms have publicly confirmed - indexable HTML content, structured data, and third-party editorial mentions - not through llms.txt.
When is llms.txt still worth the 15 minutes?
Ship it if you run a developer-documentation site and your users copy-paste context into LLMs. The single legitimate use case in 2026 is the Mintlify pattern: developers select your docs, paste them into Claude or ChatGPT, and an llms-full.txt flat-file saves them the scrape-and-clean step. That is a user-experience win for a technical audience, not a retrieval-side citation play.
Three scenarios where we'd still add it: (1) You publish API or SDK docs that developers routinely load into agents, and an llms-full.txt shortens their prompt. (2) Your CMS auto-generates it (Mintlify, some Webflow exports) and shipping costs zero incremental work. (3) You want to hedge against eventual standards adoption - future-proofing with a 15-minute file is defensible if you already have zero other open GEO work. Skip it if you are a marketing site, an ecommerce brand, a local service business, or a media publisher. The file has not earned a measurable citation in any of those contexts in the data we have seen, and the time is better spent on the levers that do move citations.
llms.txt vs what actually moves AI citations
The table below is the replacement prioritization we give operators when they ask "should I add llms.txt." We are not saying llms.txt is wrong. We are saying the return on operator hours is somewhere else.
| Tactic | Time to ship | Measured AI citation lift | Data source |
|---|---|---|---|
| Publish llms.txt | 15 minutes | 0% (300k-domain XGBoost; 1 of 94,614 cited URLs) | SE Ranking 2026; ALLMO 2026 |
| Add FAQPage schema to core pages | 1–2 hours | +2.7× (41% citation rate vs 15%) | Ahrefs 2026 schema study |
| Refactor H2 sections to 120–180 words | 4–8 hours per post | +70% citations on refactored sections | Ahrefs 2026 content-structure study |
| Earn G2 / Capterra / Trustpilot profiles | 2–4 weeks | +3× citation probability (SaaS) | Ahrefs 2026 brand-profile correlation |
| Earn unlinked editorial brand mentions | 4–12 weeks | 0.664 correlation with AI citations (vs 0.218 for backlinks) | Profound + cross-platform 2026 studies |
The file is a 15-minute gesture. Every other row on that table is a defensible time investment that maps to a measured lift. For the full version of the mentions-vs-backlinks causal story, see our pillar how to get mentioned by ChatGPT, and for the domain-level view of where citations actually come from, the 50 domains that drive 80 percent of AI citations is the next read.
How to measure whether llms.txt is doing anything for you
Run a 30-day log audit and a citation panel before and after. The only honest way to know is to measure, because aggregate studies will never perfectly predict your site's retrieval behavior.
Step one: check your access logs for any request to /llms.txt from a known AI user-agent (GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, ChatGPT-User, Claude-SearchBot, Meta-WebIndexer, Applebot, Amazonbot, Bytespider). If you do not see one in 30 days of logs, the file is not being consumed, full stop. Step two: run a 30–50 prompt panel across ChatGPT, Perplexity, Claude, and Google AI Overviews, capture your citation rate, ship or remove llms.txt, and re-run the panel 30 days later. For the DIY version of the prompt panel, see our walkthrough on how to track brand mentions in ChatGPT for free. If your citation rate moves, we want the data - publish it. We will update this piece when new evidence lands.
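Step one can be scripted. This is a minimal sketch of the log check, assuming combined-format access logs; the bot list comes from the wislr.com audit, and the sample log lines are illustrative, not real traffic:

```python
# Known AI crawler user-agent substrings (per the wislr.com audit; extend as needed)
AI_BOTS = [
    "GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot",
    "Claude-SearchBot", "PerplexityBot", "Meta-WebIndexer",
    "Applebot", "Amazonbot", "Bytespider",
]

def ai_hits_on_path(log_lines, path="/llms.txt"):
    """Return log lines requesting `path` whose user-agent matches a known AI bot."""
    return [
        line for line in log_lines
        if f"GET {path}" in line and any(bot in line for bot in AI_BOTS)
    ]

# Illustrative combined-format log lines (placeholder IPs and dates)
sample = [
    '1.2.3.4 - - [01/Feb/2026] "GET /index.html HTTP/1.1" 200 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '1.2.3.4 - - [01/Feb/2026] "GET /robots.txt HTTP/1.1" 200 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0)"',
    '5.6.7.8 - - [02/Feb/2026] "GET /llms.txt HTTP/1.1" 200 "-" "Mozilla/5.0 (SomeAnalyticsBot/2.1)"',
]

print(len(ai_hits_on_path(sample)))                  # AI-bot fetches of llms.txt
print(len(ai_hits_on_path(sample, "/robots.txt")))   # AI-bot fetches of robots.txt
```

Point it at 30 days of real logs instead of `sample`; if the first count is zero, the file is not being consumed.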
Frequently asked questions
Does ChatGPT read llms.txt?
No, based on every public test we have seen. OpenAI's ChatGPT-User agent made 923 requests in the 48-day wislr.com server-log audit and did not fetch llms.txt once. OpenAI has not published any documentation stating that GPTBot, ChatGPT-User, or OAI-SearchBot consumes the file. ALLMO's citation audit of 94,614 URLs across ChatGPT, Claude, Perplexity, Gemini, and Grok found 1 llms.txt citation total across all five engines combined - statistically indistinguishable from zero. If you want to show up in ChatGPT in 2026, the working levers are Wikipedia presence, Reddit visibility, G2/Capterra profiles for SaaS, and editorial brand mentions.
Does Google use llms.txt for AI Overviews or AI Mode?
No. Google has said so publicly, twice. John Mueller said in June 2025 that "no AI system currently uses llms.txt." Gary Illyes said at Search Central Deep Dive (July 23, 2025) that "Google doesn't support LLMs.txt and isn't planning to" and that normal SEO practices are sufficient for AI Overviews. Google did add an llms.txt to its own Search Central docs site in December 2025, but that is consistent with the Mintlify dev-docs pattern and is not a reversal of retrieval policy. For AI Overview ranking, the evidence-backed levers are schema markup, indexable HTML, and third-party brand entity signals - not llms.txt.
Is llms.txt the same as robots.txt?
No. robots.txt is a crawl-permission file that actual AI bots check - OAI-SearchBot pings it 3–6 times a day per wislr.com's logs, and ClaudeBot ~4 times. It is an enforcement surface. llms.txt is a content-curation proposal with no equivalent enforcement or consumption across major LLM platforms. The two files serve different purposes, live at different paths (/robots.txt vs /llms.txt), and are crawled at radically different rates. Do not conflate them: updating robots.txt has real consequences for whether AI bots index your site; updating llms.txt, based on current evidence, has none.
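For contrast, this is the kind of robots.txt stanza those bots actually read on each check. The crawler names are the ones observed in the wislr.com logs; the allow/disallow choices are an illustrative policy, not a recommendation:

```
# Allow AI search crawlers, block a training-only crawler (illustrative policy)
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: GPTBot
Disallow: /
```

Directives like these have observable consequences in your logs within days. No equivalent feedback loop exists for llms.txt.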
Should I add llms-full.txt if I run developer documentation?
Yes, if your users copy-paste your docs into LLMs. This is the one operator case where the file is doing real work. Mintlify auto-generates llms-full.txt for hosted docs sites, and Anthropic and Cursor ship versions through that pipeline. The win is not "the LLM retrieves it during inference" - it is "a developer pastes it into their prompt." That is a UX improvement for a specific technical audience. If you are not in that audience, the file is not earning its keep on a citation dashboard.
Will llms.txt work later once AI engines adopt it?
Maybe, but we would not plan content strategy around it. The 2026 retrieval pipelines at OpenAI, Anthropic, and Google were built without llms.txt in the loop. For that to change, at least one of those engines would have to announce consumption, publish documentation, and start showing the file in their user-agent fetch patterns. None of those signals have appeared. Content standards like hCard and the keywords meta tag also waited for adoption that never came. If llms.txt turns into the standard its advocates expect, ship it then - the file is not expensive to add retroactively.
What should I do with the 15 minutes I was going to spend on llms.txt?
Add FAQPage schema to your top three pages. The Ahrefs 2026 data showed FAQPage-marked pages earn a 41% citation rate vs 15% for pages without - a 2.7× lift from markup you can add in a single sitting. If schema is already handled, the next-highest-leverage 15-minute task is adding a named-expert quote or an original statistic to your highest-traffic piece: expert quotes correlate with a +37% citation lift and original statistics with +22%. Both are measured, repeatable, and sourced in our research bank - unlike llms.txt, which is not.
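If you spend the 15 minutes on schema instead, the markup is a single JSON-LD block in the page head. A minimal sketch of the schema.org FAQPage type, with the question and answer text as placeholders for your own content:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Does llms.txt improve AI citations?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Not in the 2026 data: the largest studies found no measurable citation lift."
    }
  }]
}
</script>
```

Add one `Question` object per on-page FAQ entry, and validate the block with Google's Rich Results Test before shipping.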