YOUR WEBSITE IS INVISIBLE TO AI AGENTS
AI crawlers scrape for training data. AI agents complete tasks for users. Your site needs to be ready for both. We measure exactly what they see.
THE SHIFT
ChatGPT, Perplexity, Google AI Overviews, and autonomous AI agents are reshaping how people discover the web. The websites winning in AI search aren't the ones with the best design. They're the ones that are machine-readable.
GPTBot, ClaudeBot, Amazonbot, Google-Extended, and 9 more
Across 5 weighted categories
Most websites land in the orange zone
Almost nobody implements agent protocols yet
WHAT WE MEASURE
Five categories covering everything an AI agent needs to find, understand, and interact with your website. Each weighted by how much it affects what AI agents actually do with your content.
AI CONTENT DISCOVERY
If AI systems can't find and access your content, nothing else matters. We check whether your robots.txt welcomes the 13 major AI bots, whether you provide a sitemap for crawlers to follow, and whether you have llms.txt — a curated entry point that helps AI agents like Claude Code and Cursor find your most important pages.
- robots.txt
- AI Crawler Directives
- XML Sitemap
- llms.txt
- Meta Robots
- HTTP Bot Access
- Content Freshness
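As a sketch, a robots.txt that welcomes the major AI crawlers could look like this. The user-agent tokens are the ones these bots publish; the allow-all policy and the example.com sitemap URL are placeholder assumptions to adapt to your own content strategy.

```txt
# Placeholder robots.txt sketch: explicitly welcome named AI bots.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Point crawlers at your sitemap (placeholder URL).
Sitemap: https://example.com/sitemap.xml
```

An explicit `Allow` per bot also documents your intent, which matters if you later disallow specific crawlers while keeping assistants welcome.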
AI SEARCH SIGNALS
AI-powered search engines like ChatGPT, Perplexity, and Google AI Overviews rely on structured data to understand your content. JSON-LD with Schema.org vocabulary tells AI exactly what your page represents: product, article, organization, or service. We validate 17 high-value schema types and check for proper entity linking.
- JSON-LD
- Schema.org Types
- Entity Linking
- BreadcrumbList
- Schema Validation
- Organization Schema
- FAQPage Schema
- Author Attribution
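For illustration, an Organization schema embedded as JSON-LD might look like the snippet below. Every name and URL is a placeholder; map the fields to your own entity and validate the result before shipping.

```html
<!-- Placeholder JSON-LD sketch: swap in your organization's real details. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-co"
  ]
}
</script>
```

The `sameAs` links are what power entity linking: they tie your page to profiles AI systems already know about.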
CONTENT & SEMANTICS
AI agents navigate the accessibility tree, not your visual design, and AI crawlers parse raw HTML without executing JavaScript. We validate heading hierarchy and semantic markup, and flag JavaScript-only pages that are invisible to both.
- SSR Detection
- Heading Hierarchy
- Semantic HTML
- ARIA Landmarks
- Alt Text
- Language
- Link Text
- Question Headings
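A hedged sketch of the page skeleton these checks reward, with ARIA landmarks, a single h1, descending headings, and descriptive alt text (element contents are placeholders):

```html
<!-- Placeholder skeleton: parses cleanly without any JavaScript. -->
<body>
  <header>
    <nav aria-label="Main">…</nav>
  </header>
  <main>
    <h1>One h1 per page</h1>
    <article>
      <h2>Headings descend without skipping levels</h2>
      <img src="chart.png" alt="Describe what the image actually shows">
    </article>
  </main>
  <footer>…</footer>
</body>
```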
AGENT PROTOCOLS
WebMCP, A2A Agent Cards, and MCP Discovery let AI agents interact directly with your site's functionality. Almost no website implements these yet.
- WebMCP
- A2A Agent Card
- MCP Discovery
- OpenAPI
- agents.json
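For illustration, an A2A Agent Card is a small JSON document served from a well-known path (commonly `/.well-known/agent.json`). The fields below follow the general shape of the published schema, but they are a sketch; verify names and required fields against the current A2A spec before shipping.

```json
{
  "name": "Example Store Agent",
  "description": "Lets agents search the catalog and check order status",
  "url": "https://example.com/a2a",
  "version": "1.0.0",
  "skills": [
    { "id": "search-catalog", "name": "Search catalog" }
  ]
}
```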
SECURITY & TRUST
Every agent protocol requires HTTPS as a baseline. Security headers like HSTS, CSP, and a strict Referrer-Policy build the trust layer agents need before interacting with your site.
- HTTPS
- HSTS
- CSP
- X-Content-Type-Options
- X-Frame-Options
- CORS
- Referrer-Policy
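A baseline set of response headers might look like the following. The values are common defaults, not prescriptions; a real Content-Security-Policy in particular usually needs per-site tuning.

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Referrer-Policy: strict-origin-when-cross-origin
```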
WE TEST WHAT AI ACTUALLY SEES
Our scanner fetches raw HTML without executing JavaScript, exactly how GPTBot, ClaudeBot, and PerplexityBot experience your site.
RAW HTML PERSPECTIVE
We don't render JavaScript. Neither do AI crawlers or agents. Tools that render JavaScript show you what Google sees, not what AI systems see. Our scanner reads exactly what GPTBot, ClaudeBot, and coding agents like Claude Code read: raw HTML, no Chromium.
EVERY AI BOT, INDIVIDUALLY
We parse your robots.txt for all 13 major AI user agents — both crawlers (GPTBot, Google-Extended, Bytespider) and search/assistant bots (ChatGPT-User, ClaudeBot, PerplexityBot). Each has different crawling behavior and purpose.
EMERGING PROTOCOLS
We check protocols most scanners don't know exist: WebMCP declarative APIs, Google's A2A Agent Cards, MCP server discovery, and agents.json. These are the interfaces AI agents will use to interact with your site.
ACTIONABLE CODE SNIPPETS
Every failed checkpoint comes with a prioritized recommendation and copy-paste code to fix it. Your report includes per-checkpoint pass/fail results, specific fixes, and a radar chart of your five category scores.
Scan and fix from your editor: install the MCP server to scan, and the AI skills to fix.
claude mcp add isagentready-mcp -- npx -y isagentready-mcp
/plugin marketplace add bartwaardenburg/isagentready-skills
HOW WE SCORE
Multiple checkpoints across 5 categories, each earning points toward your overall 0-100 score. Transparent, weighted, and research-backed.
TYPICAL SITE PROFILE
Average scores across scanned websites
LETTER GRADES
CATEGORY WEIGHTS
How each category contributes to the overall score
BUILT BY A PRACTITIONER
IsAgentReady exists because the standards are moving fast and most websites are being left behind.
Bart Waardenburg
AI Agent Readiness Expert - The Hague, NL
The web is going through one of its biggest shifts since mobile. AI agents are starting to browse on behalf of people, and new standards like MCP, WebMCP, and llms.txt are rewriting how that works. I find it genuinely fascinating to dig into these protocols and figure out what they mean for the websites we build. IsAgentReady started from that curiosity: making this shift concrete and measurable, so businesses can see where they stand and what to improve.
DIVE DEEPER
Learn how ChatGPT, Google, and Claude decide which websites to cite in their AI-generated answers.
Content Negotiation for AI Agents: Why Sentry Serves Markdown Over HTML
Sentry co-founder David Cramer shows how content negotiation — a 25-year-old HTTP standard — saves AI agents 80% of tokens. We break down the implementation: Accept headers, markdown delivery, authenticated page redirects, and what this means for every website preparing for agent traffic.
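The mechanism is plain HTTP: the client lists preferred formats in the Accept header, and the server picks a representation. A minimal server-side sketch of that decision, reduced to a substring check for clarity (a production implementation would honor q-values and more media types):

```shell
# Sketch of content negotiation: prefer markdown when the client's
# Accept header lists it, otherwise fall back to HTML.
negotiate() {
  case "$1" in
    *text/markdown*) echo "text/markdown" ;;
    *)               echo "text/html" ;;
  esac
}

negotiate "text/markdown, text/html;q=0.8"   # → text/markdown
negotiate "text/html"                        # → text/html
```

Serving markdown to agents that ask for it is where the token savings come from: the same content without markup, scripts, and chrome.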
Cloudflare /crawl Endpoint: One API Call to Crawl Any Website
Cloudflare launched a /crawl endpoint that crawls entire websites with one API call — returning HTML, Markdown, or AI-extracted JSON. We break down what this means for AI agent readiness: why your robots.txt, sitemap, semantic HTML, and server-side rendering now matter more than ever.
AI Crawlers Ignore llms.txt — But AI Agents Don't
Dries Buytaert's data shows zero AI crawlers use llms.txt. But he measured the wrong thing. Crawlers scrape for training data — agents complete tasks. We break down why the crawler vs agent distinction matters, which coding agents already use llms.txt and content negotiation, and what you should implement today.
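For reference, llms.txt is just a markdown file at your site root: an H1 title, a blockquote summary, and H2 sections of annotated links. The sketch below follows the proposed format; all names and URLs are placeholders.

```markdown
# Example Co

> One-paragraph summary of what the site offers, written for machines.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first run
- [API reference](https://example.com/docs/api.md): endpoints and auth

## Optional

- [Changelog](https://example.com/changelog.md): release history
```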
FREQUENTLY ASKED QUESTIONS
Common questions about AI agent readiness, how we measure it, and what you can do to improve your score.
EXPLORE MORE
Most websites score under 45. Find out where you stand.