
A recent audit indicates that major AI crawlers are not reading llms.txt files as widely as anticipated, raising questions about the format's current effectiveness and adoption in AI and SEO practice.

The idea of an llms.txt file has attracted far more attention than evidence. An audit by Flavio Longato, who works in LLM optimisation and SEO at Adobe, found no visits at all from GPTBot, ClaudeBot or PerplexityBot across 1,000 domains over a 30-day period. Instead, the bulk of requests came from Google’s desktop crawler, with a small number from Bingbot and OpenAI’s search bot, while SEO tools also made up a noticeable share of the traffic.

That matters because llms.txt is still only a proposed standard, not an agreed one. The file is intended to sit in a site’s root directory as a Markdown guide to a website’s key pages, giving machines a cleaner map of important content than a normal HTML page. But the central promise of the format is that AI systems would use it, and the available evidence does not show that happening.
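For illustration, a minimal llms.txt for a hypothetical site might look like the sketch below. The domain and pages are invented; the layout follows the proposed format of an H1 title, a blockquote summary and H2 sections of annotated links:

```markdown
# Example Widgets

> Example Widgets sells modular widgets and publishes setup guides
> and an API reference for developers.

## Docs

- [Quick start](https://example.com/docs/quick-start.md): installing and configuring a widget
- [API reference](https://example.com/docs/api.md): endpoints, parameters and error codes

## Optional

- [Company history](https://example.com/about.md): background on the team
```

The file is plain Markdown served from the site root, which is what makes it cheap to generate automatically.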

Longato’s audit is not the only signal pointing in that direction. A live experiment reported by Complete SEO, based on an opt-in WordPress plugin, also found that major AI crawlers were not reading llms.txt files. Semrush has similarly noted that the format remains unadopted by the major AI companies, and that while conventional search crawlers may fetch the file, the dedicated AI bots have shown little interest.

The distinction is important. Googlebot and Bingbot are built to crawl broadly, so they will often request any discoverable file on a site, llms.txt included. That does not mean the file is being treated as a meaningful signal. In practice, the traffic seen in logs appears to reflect ordinary crawler behaviour, not support for a new AI-SEO standard.

For site owners, that leaves llms.txt in an awkward position. It is unlikely to cause harm, and it can be generated automatically with little effort, but there is no strong evidence that it improves visibility in AI answers, speeds indexing or replaces structured data. If a team wants to publish one as a low-cost experiment, that is one thing. Building a strategy around it is another.

The more reliable work is much less glamorous. Clean semantic HTML remains fundamental, because both search engines and AI systems still have to parse the page they are given. Proper use of elements such as article, section, nav and main makes content easier to interpret than a site built from generic div containers and styling hooks.
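A minimal sketch of that structure, with invented page content, shows the difference between landmark elements and a pile of divs:

```html
<!DOCTYPE html>
<html lang="en">
<head><title>Choosing a Widget</title></head>
<body>
  <!-- Landmark elements tell parsers what each region is for -->
  <nav aria-label="Primary">
    <a href="/">Home</a> <a href="/guides/">Guides</a>
  </nav>
  <main>
    <article>
      <h1>Choosing a Widget</h1>
      <section>
        <h2>Sizing</h2>
        <p>Pick a size based on expected load.</p>
      </section>
    </article>
  </main>
  <footer>© Example Widgets</footer>
</body>
</html>
```

A crawler or language model can identify the primary content (`main > article`) without heuristics, which is exactly what generic `div` soup prevents.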

Structured data is another area where the case is stronger. Schema.org markup in JSON-LD format is already supported by major search engines and is far more established than llms.txt. For articles, guides, FAQs and product pages, it gives machines information in a form they can already use. Metadata such as title tags, descriptions and canonical URLs also remains essential.
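As a sketch, an article page might embed Schema.org markup like this (the headline, author and URL are invented for illustration):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Choosing a Widget",
  "datePublished": "2026-04-25",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "mainEntityOfPage": "https://example.com/guides/choosing-a-widget"
}
</script>
```

Unlike llms.txt, this format is already consumed by major search engines, so the effort pays off regardless of what AI crawlers eventually adopt.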

Visual content is often the weakest part of the experience for both crawlers and language models. Images without meaningful alt text, and video without transcripts, create gaps that no root-level text file can fix. If AI systems are to quote, summarise or recommend content accurately, they need the surrounding context that semantically rich pages and transcripts provide.
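Concretely, that means descriptive alt text on images and captions or a transcript link for video. A small hypothetical example:

```html
<img src="/img/crawler-requests.png"
     alt="Bar chart of crawler requests: Googlebot highest, SEO tools lower, AI bots near zero">

<video controls>
  <source src="/video/setup.mp4" type="video/mp4">
  <track kind="captions" src="/video/setup.en.vtt" srclang="en" label="English">
</video>
<p><a href="/video/setup-transcript">Full transcript of the setup video</a></p>
```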

The same applies to discoverability. A well-maintained sitemap.xml still does a better job of listing a site’s important URLs than llms.txt, and robots.txt remains the real mechanism for controlling crawler access. OpenAI, Anthropic and Perplexity all document their bots, and site owners can already decide what those crawlers may or may not reach.
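The user-agent tokens GPTBot, ClaudeBot and PerplexityBot are documented by their respective vendors, so access control works today with ordinary robots.txt rules. The policy below is only an example, not a recommendation:

```text
# Example policy: block AI bots, allow everything else
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

The `Sitemap` directive also points crawlers at the canonical list of URLs, covering the discovery role llms.txt was meant to play.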

Seen in that light, llms.txt looks less like a breakthrough and more like a useful test case for the gap between hype and adoption. It is a neat idea, but the available logs suggest that the real users of the file today are SEO tools and ordinary crawlers, not the AI systems it was meant to guide. For now, the practical answer is simple: if you have time to spare, generate one automatically; if not, invest the effort where the evidence is stronger.


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score:
3

Notes:
The article references an August 2025 audit by Flavio Longato, which is over seven months old, and the most recent source cited dates from March 2026. Although the article was published on April 25, 2026, the information it presents is not current, which raises concerns about freshness.

Quotes check

Score:
2

Notes:
The article includes direct quotes from Flavio Longato’s audit, but these quotes cannot be independently verified through other sources, which diminishes their reliability.

Source reliability

Score:
4

Notes:
The primary source is an article from Habr, a Russian tech community platform. While Habr is known for its technical content, it is not a mainstream news outlet, which may affect the perceived reliability of the information. Additionally, the article relies heavily on a single audit by Flavio Longato, which may not be representative of broader trends.

Plausibility check

Score:
5

Notes:
The claims about the lack of AI crawler activity on llms.txt files are plausible, given that major AI companies have not officially adopted the llms.txt standard. However, the article does not provide sufficient evidence to fully support these claims, and the reliance on a single audit raises questions about the generalizability of the findings.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The article presents outdated information, relies on unverifiable quotes, and lacks independent verification from reputable sources. Its content type is more opinion-based than factual reporting, further diminishing its suitability for publication. Given these issues, the article does not meet the necessary standards for factual reporting.


© 2026 AlphaRaaS. All Rights Reserved.