
YOUR WEBSITE IS INVISIBLE TO AI AGENTS

AI crawlers scrape for training data. AI agents complete tasks for users. Your site needs to be ready for both. We measure exactly what they see.

THE SHIFT

ChatGPT, Perplexity, Google AI Overviews, and autonomous AI agents are reshaping how people discover the web. The websites winning in AI search aren't the ones with the best design. They're the ones that are machine-readable.

AI BOT DIRECTIVES
13

GPTBot, ClaudeBot, Amazonbot, Google-Extended, and 9 more

CHECKPOINTS

Across 5 weighted categories

TYPICAL SCORE

Most websites land in the orange zone

PROTOCOL READINESS

Almost nobody implements agent protocols yet

WHAT WE MEASURE

Five categories covering everything an AI agent needs to find, understand, and interact with your website. Each category is weighted by how much it affects what AI agents actually do with your content.

30%

AI CONTENT DISCOVERY

If AI systems can't find and access your content, nothing else matters. We check whether your robots.txt welcomes the 13 major AI bots, whether you provide a sitemap for crawlers to follow, and whether you have llms.txt — a curated entry point that helps AI agents like Claude Code and Cursor find your most important pages.

  • robots.txt
  • AI Crawler Directives
  • XML Sitemap
  • llms.txt
  • Meta Robots
  • HTTP Bot Access
  • Content Freshness
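As an illustration, a robots.txt that explicitly welcomes AI crawlers might look like the sketch below. The bot names come from the list above; the sitemap URL is a placeholder.

```
# Illustrative subset of the 13 AI bots we check
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://example.com/sitemap.xml
```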
20%

AI SEARCH SIGNALS

AI-powered search engines like ChatGPT, Perplexity, and Google AI Overviews rely on structured data to understand your content. JSON-LD with Schema.org vocabulary tells AI exactly what your page represents: product, article, organization, or service. We validate 17 high-value schema types and check for proper entity linking.

  • JSON-LD
  • Schema.org Types
  • Entity Linking
  • BreadcrumbList
  • Schema Validation
  • Organization Schema
  • FAQPage Schema
  • Author Attribution
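For example, a minimal Organization block in JSON-LD looks like the following sketch. All names and URLs are placeholders; real pages should point at their own entities.

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": ["https://www.linkedin.com/company/example-co"]
}
</script>
```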
20%

CONTENT & SEMANTICS

AI agents navigate the accessibility tree, not your visual design. AI crawlers parse raw HTML without JavaScript. We validate heading hierarchy and semantic markup, and we detect JavaScript-only pages that are invisible to both.

  • SSR Detection
  • Heading Hierarchy
  • Semantic HTML
  • ARIA Landmarks
  • Alt Text
  • Language
  • Link Text
  • Question Headings
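A minimal sketch of the structure these checks look for — declared language, ARIA landmarks via semantic elements, one h1, ordered subheadings, and alt text:

```
<html lang="en">
  <body>
    <header><nav aria-label="Main">…</nav></header>
    <main>
      <h1>Page title (one per page)</h1>
      <article>
        <h2>What does this section cover?</h2>
        <p>Server-rendered text that crawlers can read without JavaScript.</p>
        <img src="score.png" alt="Radar chart of five category scores">
      </article>
    </main>
    <footer>…</footer>
  </body>
</html>
```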
15%

AGENT PROTOCOLS

WebMCP, A2A Agent Cards, and MCP Discovery let AI agents interact directly with your site's functionality. Almost no website implements these yet.

  • WebMCP
  • A2A Agent Card
  • MCP Discovery
  • OpenAPI
  • agents.json
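As a rough sketch (not our scanner's actual implementation), checking for these protocols amounts to probing a handful of well-known URLs. The paths below are assumptions — each spec is young and locations vary by version, so verify against the current drafts:

```python
from urllib.parse import urljoin

# Candidate discovery endpoints (assumed locations; verify against each spec)
DISCOVERY_PATHS = [
    "/llms.txt",                # curated entry point for AI agents
    "/.well-known/agent.json",  # A2A Agent Card (commonly proposed path)
    "/openapi.json",            # OpenAPI description of site APIs
    "/agents.json",             # agents.json manifest
]

def discovery_urls(base: str) -> list[str]:
    """Build the absolute URLs an agent-readiness probe would fetch."""
    return [urljoin(base, path) for path in DISCOVERY_PATHS]

for url in discovery_urls("https://example.com"):
    print(url)
```

A real check would fetch each URL and validate the payload, not just test for a 200 response.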
15%

SECURITY & TRUST

Every agent protocol requires HTTPS as baseline. Security headers like HSTS, CSP, and a strict Referrer-Policy build the trust layer that agents need before interacting with your site.

  • HTTPS
  • HSTS
  • CSP
  • X-Content-Type-Options
  • X-Frame-Options
  • CORS
  • Referrer-Policy
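By way of example, a response carrying the headers above might look like this. The values are common starting points, not one-size-fits-all recommendations — CSP in particular needs tuning per site:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Referrer-Policy: strict-origin-when-cross-origin
```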

WE TEST WHAT AI ACTUALLY SEES

Our scanner fetches raw HTML without executing JavaScript, exactly how GPTBot, ClaudeBot, and PerplexityBot experience your site.

RAW HTML PERSPECTIVE

We don't render JavaScript. Neither do AI crawlers or agents. Tools that render JavaScript show you what Google sees, not what AI systems see. Our scanner reads exactly what GPTBot, ClaudeBot, and coding agents like Claude Code read: raw HTML, no Chromium.
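To make the idea concrete, here is a toy heuristic — not our production check — for spotting pages whose raw HTML carries almost no readable text, the way a non-rendering crawler would see them:

```python
import re

def looks_js_only(html: str, min_text_chars: int = 200) -> bool:
    """Crude heuristic: strip scripts, styles, and tags, then see whether
    enough visible text remains for a non-rendering crawler to read."""
    no_code = re.sub(r"(?is)<(script|style)[^>]*>.*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", no_code)
    text = re.sub(r"\s+", " ", text).strip()
    return len(text) < min_text_chars

# A JavaScript-only shell vs. a server-rendered page
spa = '<html><body><div id="root"></div><script src="/app.js"></script></body></html>'
ssr = "<html><body><main><h1>Docs</h1><p>" + "Readable content. " * 30 + "</p></main></body></html>"
print(looks_js_only(spa))  # True: nothing to read without JavaScript
print(looks_js_only(ssr))  # False: real text survives in the raw HTML
```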

EVERY AI BOT, INDIVIDUALLY

We parse your robots.txt for all 13 major AI user agents — both crawlers (GPTBot, Google-Extended, Bytespider) and search/assistant bots (ChatGPT-User, ClaudeBot, PerplexityBot). Each has different crawling behavior and purpose.
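Python's standard library can replicate a per-bot check. This sketch uses a made-up robots.txt that blocks GPTBot but allows ClaudeBot:

```python
from urllib import robotparser

# Hypothetical robots.txt: one directive group per AI user agent
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/docs"))    # False
print(parser.can_fetch("ClaudeBot", "https://example.com/docs")) # True
```

Because each bot gets its own directive group, a site can welcome assistant bots while opting out of training crawlers — which is why we report every user agent individually.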

EMERGING PROTOCOLS

We check protocols most scanners don't know exist: WebMCP declarative APIs, Google's A2A Agent Cards, MCP server discovery, and agents.json. These are the interfaces AI agents will use to interact with your site.

ACTIONABLE CODE SNIPPETS

Every failed checkpoint comes with a prioritized recommendation and copy-paste code to fix it. Your report includes per-checkpoint pass/fail results, specific fixes, and a radar chart of your five category scores.

DEVELOPER TOOLS

Scan and fix from your editor. Install the MCP server to scan, and AI skills to fix.

SCAN
$ claude mcp add isagentready-mcp -- npx -y isagentready-mcp
FIX
$ /plugin marketplace add bartwaardenburg/isagentready-skills

HOW WE SCORE

Multiple checkpoints across 5 categories, each earning points toward your overall 0-100 score. Transparent, weighted, and research-backed.

TYPICAL SITE PROFILE

Average scores across scanned websites

LETTER GRADES

A+: 95–100
A: 80–94
B: 70–79
C: 40–69
D: 20–39
F: 0–19

CATEGORY WEIGHTS

How each category contributes to the overall score

BUILT BY A PRACTITIONER

IsAgentReady exists because the standards are moving fast and most websites are being left behind.

Bart Waardenburg

AI Agent Readiness Expert - The Hague, NL

The web is going through one of its biggest shifts since mobile. AI agents are starting to browse on behalf of people, and new standards like MCP, WebMCP, and llms.txt are rewriting how that works. I find it genuinely fascinating to dig into these protocols and figure out what they mean for the websites we build. IsAgentReady started from that curiosity: making this shift concrete and measurable, so businesses can see where they stand and what to improve.

DIVE DEEPER

Learn how ChatGPT, Google, and Claude decide which websites to cite in their AI-generated answers.

Content Negotiation for AI Agents: Why Sentry Serves Markdown Over HTML
9 min read

Sentry co-founder David Cramer shows how content negotiation — a 25-year-old HTTP standard — saves AI agents 80% of tokens. We break down the implementation: Accept headers, markdown delivery, authenticated page redirects, and what this means for every website preparing for agent traffic.

ai-agents seo getting-started
Cloudflare /crawl Endpoint: One API Call to Crawl Any Website
9 min read

Cloudflare launched a /crawl endpoint that crawls entire websites with one API call — returning HTML, Markdown, or AI-extracted JSON. We break down what this means for AI agent readiness: why your robots.txt, sitemap, semantic HTML, and server-side rendering now matter more than ever.

ai-agents seo getting-started
AI Crawlers Ignore llms.txt — But AI Agents Don't
9 min read

Dries Buytaert's data shows zero AI crawlers use llms.txt. But he measured the wrong thing. Crawlers scrape for training data — agents complete tasks. We break down why the crawler vs agent distinction matters, which coding agents already use llms.txt and content negotiation, and what you should implement today.

ai-agents seo getting-started

FREQUENTLY ASKED QUESTIONS

Common questions about AI agent readiness, how we measure it, and what you can do to improve your score.

What does IsAgentReady measure?

IsAgentReady scans websites across 5 weighted categories: AI Content Discovery (30%), AI Search Signals (20%), Content & Semantics (20%), Agent Protocols (15%), and Security & Trust (15%). Each category evaluates specific checkpoints that determine how well AI agents can find, understand, and interact with your website.

How is the score calculated?

Each category receives a 0-100 score based on individual checkpoints. The overall score is a weighted average of all categories, converted to a letter grade from F (0-19) to A+ (95-100). Discovery carries the most weight at 30% because if AI agents can't find your content, nothing else matters.

Is IsAgentReady free?

Yes, IsAgentReady is completely free. Scan any website and get instant results with detailed scores, checkpoint-level pass/fail results, and actionable recommendations with code snippets.

What is llms.txt?

llms.txt is a plain-text file at /llms.txt that provides a curated entry point to your site for AI agents. Unlike crawlers that scrape everything, agents like Claude Code and Cursor fetch specific pages to complete tasks — llms.txt tells them which pages matter most. Adoption is still under 10%, but growing fast in developer tooling.
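For reference, the proposed llms.txt format is a short Markdown file: an H1 title, a one-line blockquote summary, then sections of annotated links. All names and URLs below are placeholders:

```
# Example Co

> Developer tools for widget automation; docs and API reference below.

## Docs

- [Quickstart](https://example.com/docs/quickstart): install and run your first scan
- [API reference](https://example.com/docs/api): endpoints, auth, and rate limits
```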

What are WebMCP and A2A?

WebMCP (Web Model Context Protocol) and A2A (Agent-to-Agent) let AI agents interact directly with your website's functionality, not just read its content. WebMCP enables declarative APIs via HTML meta tags, while A2A Agent Cards describe your site's capabilities in a machine-readable format. Almost no websites implement these yet.

What's the difference between AI crawlers and AI agents?

AI crawlers (GPTBot, Google-Extended, Bytespider) scrape the web at scale for training data — they process billions of pages and use sitemaps, not llms.txt. AI agents (Claude Code, Cursor, ChatGPT Search) fetch specific pages to complete tasks for users — they care about token efficiency and use llms.txt, content negotiation, and tool manifests. Both read raw HTML without JavaScript, so server-side rendering matters for all AI systems.

How do I improve my score?

Start with your scan's Quick Wins section, which shows the 3 changes with the highest point potential. Common quick wins include adding a robots.txt that allows AI bots, implementing JSON-LD structured data, and ensuring your pages render meaningful HTML server-side. Each recommendation includes copy-paste code snippets.

Can I use IsAgentReady from my editor?

Yes. Install the IsAgentReady MCP server to scan websites directly from Claude Code, Cursor, Codex CLI, or any MCP-compatible editor. For fixing issues, install our open-source AI agent skills: they give your coding agent step-by-step playbooks for all 36 checkpoints. Together they form a scan-fix-verify loop without leaving your editor.

Why does structured data matter for AI search?

AI-powered search engines like ChatGPT, Perplexity, and Google AI Overviews use structured data (JSON-LD with Schema.org vocabulary) to understand what your page represents: product, article, organization, or service. Proper structured data increases the chance of being cited in AI-generated answers and summaries.

EXPLORE MORE

Most websites score under 45. Find out where you stand.

SCAN NOW
TEST YOUR SITE

SCAN NOW

Enter your URL and get a detailed AI readiness report in under a minute. 5 categories, actionable recommendations.
RANKINGS
SEE WHO LEADS

RANKINGS

See how top websites score across all categories. Learn from the best-prepared sites.
COMPARE
HEAD TO HEAD

COMPARE

Compare two websites side-by-side across all 5 categories and 47 checkpoints.