// infrastructure / ai-readable

Why we serve Markdown to AI bots.

Instead of publishing a symbolic llms.txt, we detect AI user agents at the edge and serve them a clean Markdown representation of every page on this site. It's a live running system, not a pledge. You can inspect it with one curl command.

What this is.

A Cloudflare Pages Function runs on every request to winstondigitalmarketing.com. If the User-Agent header matches a known AI crawler (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, and the rest), the middleware rewrites the response to serve a parallel .md file stored under /content/. Humans and standard search crawlers (the non-AI Googlebot, Bingbot) continue to receive the normal HTML.
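As a sketch, the detection step can be as small as this. This is a minimal illustration, not the repo's actual functions/_middleware.js; the pattern list, helper names, and path convention here are assumptions:

```javascript
// Illustrative sketch of the edge middleware's detection logic.
// The pattern list and helper names are assumptions, not the repo's code.
const AI_AGENT_PATTERNS = [
  /GPTBot/i,
  /ChatGPT-User/i,
  /ClaudeBot/i,
  /PerplexityBot/i,
  /Google-Extended/i,
];

// True when the User-Agent string matches any known AI crawler pattern.
function isAiAgent(userAgent) {
  return AI_AGENT_PATTERNS.some((p) => p.test(userAgent || ""));
}

// Map a page path to its hypothetical parallel Markdown file under /content/.
function mdPathFor(pathname) {
  const page = pathname.endsWith("/") ? pathname + "index" : pathname;
  return "/content" + page.replace(/\.html$/, "") + ".md";
}

// In functions/_middleware.js this logic would live in an exported
// onRequest(context) handler: fetch the .md asset when isAiAgent() is true,
// otherwise fall through to the normal HTML via context.next().
```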

Why not llms.txt.

llms.txt is a pledge. There's no version history, no enforcement, no observability, and no mechanism to verify that any given bot honored it on any given request. In practice, most llms.txt files exist purely as an SEO signal: a claim that says "we care about AI readability" without a feature behind it.

Edge middleware is different. It's a running system. We can log which agents hit which pages, tune the response per user-agent pattern, version the Markdown outputs alongside HTML, and verify behavior in a single terminal command. It's the difference between a privacy policy and the actual encryption you deploy.
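Concretely, the per-agent tuning and logging can be a few lines at the edge. This is a hedged sketch, not the production code: the function names are hypothetical, and the charset parameter is an assumption, though the header values mirror what the live responses return:

```javascript
// Sketch of response tuning and observability for AI-agent requests.
// Function names are hypothetical; the Content-Type and X-Served-To values
// mirror the live behavior, while the charset parameter is an assumption.
function aiResponseHeaders() {
  return {
    "Content-Type": "text/markdown; charset=utf-8",
    "X-Served-To": "ai-agent",
  };
}

// Emit one structured log line per AI hit; on Cloudflare Pages, console.log
// output appears in the deployment's real-time logs.
function logAgentHit(userAgent, pathname) {
  const entry = { ts: new Date().toISOString(), ua: userAgent, path: pathname };
  console.log(JSON.stringify(entry));
  return entry;
}
```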

Inspect it yourself.

Run these two commands and diff the responses:

# What a human browser gets:
curl -s https://winstondigitalmarketing.com/ | head -20

# What an AI agent gets:
curl -sH "User-Agent: ClaudeBot/1.0" https://winstondigitalmarketing.com/ | head -20

The second request returns Markdown with Content-Type: text/markdown and an X-Served-To: ai-agent response header. You can swap in GPTBot/1.0, PerplexityBot, Google-Extended, or any of the other recognized UA patterns and see the same behavior.

Which agents are recognized.

The full list is in functions/_middleware.js in this site's repo. At launch the recognized set includes GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, and Google-Extended, among others.

Missing one that matters? Tell us and we'll add it.

The receipt.

We documented the full implementation, including the commit history, performance numbers, and the exact middleware code, in the agent-ready infrastructure receipt. You can also see this capability as a productized service on the Agentic Web Transformation page.

Want this running on your site?

The middleware is simple, cheap, and portable. We'll implement it on your infrastructure and ship a sanitized version of the code.