
Anthropic's AI Exposure Index: What Real-World Usage Data Means for Your Website

12 min read
Bart Waardenburg
AI Agent Readiness Expert & Founder

On March 5, 2026, Anthropic published "Labor Market Impacts of AI: A New Measure and Early Evidence", a research paper by Maxim Massenkoff and Peter McCrory. They introduce "observed exposure," a metric that measures how AI is actually being used in the workplace versus how it could theoretically be used. The gap between theory and practice turns out to be enormous. And the occupations hit hardest? Exactly the knowledge workers whose daily work happens on the web.

If you build or manage a website, there's a signal here you can't ignore. The same pattern playing out in the labor market, a massive gap between what AI can do and what it actually does, is mirrored on the web. The gap between websites that could be agent-ready and those that are is just as wide.

A New Way to Measure AI's Real Impact

Previous attempts to measure AI's impact on jobs relied on theoretical assessments: experts rating whether an LLM could speed up a given task. The Anthropic team took a very different approach by combining three data sources:

  1. O*NET Database, task definitions across ~800 US occupations
  2. Anthropic Economic Index, real-world Claude usage patterns from millions of conversations
  3. Eloundou et al. (2023), theoretical LLM task feasibility ratings (β scale: 1 = fully feasible, 0.5 = needs tools, 0 = not feasible)

The result is "observed exposure," a metric that weights tasks by theoretical feasibility, actual usage frequency, work-context relevance, automation level, and task-share within occupations. For the first time we can see where AI is actually being deployed. Not where it could be, but where it really is.
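To make that weighting concrete, here's a minimal sketch of how a composite score like observed exposure could be computed for one occupation. The multiplicative form and the field names are illustrative assumptions on my part; the paper defines the exact estimator.

```typescript
// Illustrative sketch of an observed-exposure score. The paper's exact
// formula differs in detail; this only shows how the five factors combine.
interface Task {
  feasibility: number;      // β from Eloundou et al.: 1, 0.5, or 0
  usageShare: number;       // frequency of this task in real Claude usage
  contextRelevance: number; // 0..1: how work-relevant the observed usage is
  automationLevel: number;  // 0..1: automation vs. mere augmentation
  taskShare: number;        // weight of this task within the occupation
}

// Observed exposure for one occupation: a task-share-weighted sum of
// per-task scores combining theoretical feasibility with actual usage.
function observedExposure(tasks: Task[]): number {
  return tasks.reduce(
    (sum, t) =>
      sum + t.taskShare * t.feasibility * t.usageShare *
            t.contextRelevance * t.automationLevel,
    0
  );
}
```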

The Feasibility Gap: Theory vs Reality

The data reveals that real-world AI usage concentrates heavily on tasks that are theoretically the most feasible. But it falls far short of covering all of them.

97% of what people actually use Claude for falls into categories rated as theoretically feasible. But the coverage within those categories is far from complete. That's the feasibility gap: AI is being used where it works, but it hasn't come close to all the tasks it theoretically could handle.

AI adoption follows a predictable path. It starts with the easiest tasks and gradually expands. The same pattern applies to websites. AI agents start by consuming structured, machine-readable content and then move toward more complex interactions. Whether your website is ready for that expansion is a different question entirely.

Which Occupations Are Most Exposed?

The most exposed occupations are overwhelmingly web-based knowledge work. The very occupations where workers interact with websites, SaaS tools, and digital platforms every day:

Computer programmers lead with 74.5% task coverage. Three quarters of their work tasks are now being done with AI assistance. As a front-end developer, I feel slightly attacked by this. Customer service representatives follow at 70.1%, data entry keyers at 67.1%. Meanwhile, roughly 30% of all occupations register zero AI coverage. Cooks, mechanics, bartenders, lifeguards. The people who do things you can't copy-paste.

The Adoption Gap: Theoretical vs Observed Exposure

The most striking finding is how far actual AI usage lags behind theoretical capability. The gap between what AI could do in an occupation and what it actually does is telling:

Computer & Math occupations have 94% theoretical exposure but only 33% observed. A 61-percentage-point gap. Sixty-one. That gap exists because of legal restrictions, software integration requirements, verification protocols, and model limitations. These are exactly the same friction points that slow AI agent adoption on the web. Websites that lack structured data, block AI crawlers, or don't expose machine-readable interfaces put up the same kind of friction against an adoption wave that's coming regardless.

COMPUTER & MATH: THEORETICAL
94%
COMPUTER & MATH: OBSERVED
33%

Who Are the Exposed Workers?

The demographics of AI-exposed workers tell a clear story. These are experienced, well-educated, high-earning professionals. The knowledge workers who keep the digital economy running:

| Demographic | Exposed Workers | Unexposed Workers | Difference |
| --- | --- | --- | --- |
| Female representation | Higher | Lower | +16 pp |
| Average earnings | Higher | Lower | +47% |
| Graduate degree holders | 17.4% | 4.5% | 3.5x |
| White representation | Higher | Lower | +11 pp |
| Asian representation | Higher | Lower | ~2x |

Exposed workers earn 47% more on average and are 3.5 times more likely to hold a graduate degree. This isn't AI replacing low-skill work. It's AI augmenting the most productive, best-paid workers. Their work lives revolve around digital tools and web-based platforms. When their tools increasingly incorporate AI agents, the websites they use need to keep up.

Employment Impact: The Early Signals

The headline finding on employment sounds reassuring at first: no systematic increase in unemployment for highly exposed workers since ChatGPT launched in late 2022. But zoom in and the data tells a different story.

BLS Growth Projections Correlate with Exposure

The Bureau of Labor Statistics' own employment projections through 2034 show a clear correlation with observed AI exposure:

BLS GROWTH DROP PER 10pp COVERAGE
-0.6pp
CORRELATION WITH THEORETICAL ONLY
None

For every 10-percentage-point increase in observed AI coverage, BLS growth projections drop by 0.6 percentage points. The interesting part: this correlation only appears with the observed exposure measure, not with theoretical exposure alone. The BLS is already factoring in real-world AI adoption patterns. That validates Anthropic's approach.

The Young Worker Signal

The most concerning finding is about young workers entering exposed occupations. Workers aged 22-25 in AI-exposed occupations are seeing measurably reduced hiring:

JOB-FINDING RATE DECLINE
-0.5pp
REDUCTION IN HIRING RATE
-14%
IMPACT ON WORKERS OVER 25
None

Job-finding rates for young workers in exposed occupations have declined roughly 0.5 percentage points since late 2024, a 14% reduction in hiring rates. No equivalent decline for workers over 25. This mirrors findings from Brynjolfsson et al. Existing workers aren't losing their jobs. Companies are simply hiring fewer juniors, presumably because AI tools are handling tasks that would have gone to entry-level staff.

This has direct implications for the web. If companies are substituting junior hires with AI tools, the tools those companies use, including their websites, internal platforms, and customer-facing services, need to work seamlessly with AI agents. The shift is already happening at the hiring level.

The Web Readiness Parallel

The Anthropic report's central finding, a massive gap between theoretical AI capability and actual adoption, maps precisely onto what I see in web AI agent readiness:

THE LABOR MARKET GAP

94% of Computer & Math tasks are theoretically automatable, but only 33% show actual AI usage. Diffusion constraints — legal, software, verification — slow adoption.

THE WEB READINESS GAP

Most websites could serve AI agents, but few actually do. Missing structured data, blocked crawlers, and absent machine-readable interfaces create unnecessary barriers.

The parallels are structural:

| Labor Market Pattern | Web Readiness Equivalent |
| --- | --- |
| Theoretical exposure (94%) vs observed (33%) | Websites that could serve agents vs those that actually do |
| Legal/software barriers slow adoption | Missing robots.txt rules, blocked AI crawlers, no structured data |
| Most exposed = high-skill knowledge workers | Most affected = web-based SaaS, e-commerce, content platforms |
| Young worker hiring slows in exposed fields | AI agents replace routine web tasks (search, forms, data extraction) |
| BLS projects lower growth for exposed occupations | Non-agent-ready websites will lose visibility to those that adapt |
| Adoption follows feasibility (β=1 tasks first) | Agents start with structured content, then expand to complex interactions |

The Adoption Window Is Now

The finding that AI adoption is far below theoretical capacity is actually good news for website owners. It means we're in a transition window. The gap will close. Usage that's at 33% today will expand toward 94% as integration barriers fall, models improve, and workflows mature.

The question is whether your website will be ready when it does. The Yale Budget Lab's parallel research confirms that while economy-wide disruption hasn't materialized yet, the occupational composition is shifting. As Fortune reported, the transition is real but gradual. Exactly the kind of shift where early preparation pays off.

What This Means for Your Website

Based on Anthropic's findings, these are the areas where web readiness matters most:

OPEN THE DOOR TO AI CRAWLERS

Just as AI adoption starts with feasible tasks, agents start with accessible content. Ensure robots.txt allows AI crawlers, serve an llms.txt, and don't block the 13+ AI user-agents that index the web.
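As a starting point, a robots.txt along these lines explicitly welcomes a few of the best-known AI crawlers. The real list is longer and changes over time, so treat these user-agent tokens as examples to verify:

```txt
# robots.txt: explicitly allow the AI crawlers you want reading your site
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://www.example.com/sitemap.xml

# llms.txt is a separate file: serve a curated, plain-text overview of
# your site at https://www.example.com/llms.txt
```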

MAKE YOUR DATA MACHINE-READABLE

The 61-point gap between theoretical and observed exposure exists partly because of poor tooling. JSON-LD, Schema.org, and structured data close the gap between what AI agents could extract and what they actually can.
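A minimal JSON-LD block embedded in the page head might look like this (all values below are placeholders to adapt):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Anthropic's AI Exposure Index: What Real-World Usage Data Means for Your Website",
  "author": { "@type": "Person", "name": "Bart Waardenburg" },
  "datePublished": "2026-03-05",
  "publisher": { "@type": "Organization", "name": "IsAgentReady" }
}
</script>
```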

EXPOSE AGENT PROTOCOLS

WebMCP, MCP discovery, A2A Agent Cards, and OpenAPI specs give AI agents the interfaces they need. The report shows adoption follows capability — make the capability available.
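As one concrete example, an A2A Agent Card is just a JSON document served from a well-known path (the spec has used /.well-known/agent.json; check the current version for the exact path and field names). A hedged sketch:

```json
{
  "name": "Example Store Agent",
  "description": "Search the catalog and check order status.",
  "url": "https://www.example.com/a2a",
  "version": "1.0.0",
  "capabilities": { "streaming": false },
  "skills": [
    {
      "id": "search-catalog",
      "name": "Search catalog",
      "description": "Full-text search over products."
    }
  ]
}
```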

BUILD TRUST SIGNALS

Exposed workers are the most educated and highest-paid. Their tools demand security: HTTPS, HSTS, CSP, CORS headers. AI agents — and the professionals who use them — need to trust your website.
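A minimal hardening pass, shown here as an nginx snippet. The CSP below is deliberately strict; loosen it to match your actual script and asset origins:

```nginx
# Force HTTPS for a year, including subdomains
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

# Strict starting point: most sites will need to extend this CSP
add_header Content-Security-Policy "default-src 'self'" always;

# Prevent MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;

# Keep cross-origin access explicit rather than wildcarded
add_header Access-Control-Allow-Origin "https://app.example.com" always;
```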

These four areas map directly to the IsAgentReady scanner's five categories. I built it to measure exactly these signals:

  • AI Content Discovery (30%) — crawler access, robots.txt, llms.txt, sitemaps. See how ChatGPT selects sources.
  • AI Search Signals (20%) — JSON-LD, Schema.org, entity linking, FAQPage schema. See the State of AEO.
  • Content & Semantics (20%) — SSR, heading hierarchy, semantic HTML, ARIA. See how agents see your website.
  • Agent Protocols (15%) — WebMCP, MCP, A2A, OpenAPI, agents.json. See what is WebMCP.
  • Security & Trust (15%) — HTTPS, HSTS, CSP, security headers.

An Early Warning System — For Jobs and Websites

Anthropic explicitly frames this research as an "early warning system," designed to track AI's labor market impacts before significant disruption occurs. The same principle applies to web readiness. The data shows the wave is building:

  • 75% of programming tasks already have AI coverage
  • Young worker hiring is already slowing in exposed occupations
  • BLS projections already factor in AI-driven growth slowdowns
  • The gap between theoretical and observed exposure is already closing

When Anthropic, the company that builds Claude, publishes data showing that real-world AI usage is expanding rapidly along predictable adoption curves, that's a signal. The websites these AI systems interact with need to be prepared. The companies that make their websites agent-ready now will benefit as the adoption gap closes. The rest can catch up later. And the data says this transition is already underway.
