
What Is agents.json? Advertising AI Agent Capabilities on Your Website

10 min read
Bart Waardenburg

AI Agent Readiness Expert & Founder

robots.txt has been around for thirty years: a simple text file telling crawlers where they cannot go. That model worked great for crawlers. But AI agents aren't crawlers anymore. They browse, reason, and take actions autonomously. They need a different kind of file, one that tells them what they can do. That's where agents.json comes in.

What Is agents.json?

agents.json is a machine-readable JSON file that describes what AI agents can do on or with your website. Where robots.txt is a list of restrictions, agents.json is a list of capabilities. Which endpoints your site exposes, which workflows agents can execute, how they authenticate, what tools are available.

The concept doesn't come from a single standards body. Several independent initiatives converged on the same idea: a discoverable JSON file at /.well-known/agents.json, serving as a machine-readable business card for AI agents visiting your domain. The most prominent version is the agents.json spec by Wildcard AI, an open-source project (v0.1.0) that extends OpenAPI with agent-specific metadata.

ADVERTISE

Tell agents what services, endpoints, and workflows your site offers.

DISCOVER

A single file at a well-known path lets any agent find your capabilities.

CONNECT

Include authentication methods and flow definitions so agents can act immediately.

Why agents.json Matters

The web was built for humans clicking links. APIs for developers writing code. AI agents are neither. They're autonomous software that needs to discover what a website can do, understand how to interact with it, and execute multi-step workflows. No human in the loop.

Right now, when an AI agent visits your website, it has to reverse-engineer your capabilities. Read the HTML, guess at API endpoints, hope for the best. Brittle, error-prone, inefficient. An agents.json file flips that around: instead of agents guessing, you declare what they can do.

Think of it as handing someone a menu versus letting them wander into the kitchen. The menu always wins. agents.json is that menu for AI agents.

| Aspect | Without agents.json | With agents.json |
| --- | --- | --- |
| Discovery | Agent scrapes HTML, guesses at endpoints | Structured list of all available capabilities |
| Authentication | Agent tries common auth patterns | Explicit auth methods with scheme details |
| Workflows | Agent must infer multi-step processes | Pre-defined flows with linked actions |
| Error handling | Trial and error | Documented responses and constraints |
| Reliability | Breaks when UI changes | Stable contract independent of UI |
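
The discovery side of that table can be sketched in a few lines of Python. The URL construction follows the well-known path described above; the sample document and its fields are illustrative assumptions, not fetched from a live site.

```python
import json
from urllib.parse import urlsplit, urlunsplit

def agents_json_url(site: str) -> str:
    """Build the well-known discovery URL for a site (scheme + host only)."""
    parts = urlsplit(site if "//" in site else f"https://{site}")
    return urlunsplit((parts.scheme or "https", parts.netloc,
                       "/.well-known/agents.json", "", ""))

# In a real agent you would fetch this URL; here we parse a sample document.
sample = '{"apiVersion": "0.1.0", "chains": {"create_order": {"description": "Create an order"}}}'
spec = json.loads(sample)
capabilities = {name: chain["description"] for name, chain in spec["chains"].items()}

print(agents_json_url("example.com"))  # https://example.com/.well-known/agents.json
print(capabilities)                    # {'create_order': 'Create an order'}
```

One fetch, one parse, and the agent has a structured capability list instead of scraped HTML.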

The Wildcard agents.json Specification

The most developed agents.json spec comes from Wildcard AI, a Y Combinator-backed startup focused on making APIs work for AI agents. Their spec (v0.1.0, Apache 2.0) builds on OpenAPI with three key additions: flows (multi-step workflows), agent instructions (plain-language guidance for LLMs), and schemas optimized for machine consumption.

The most interesting idea in the Wildcard spec is the concept of flows (also called "chains") and links. A flow describes a complete workflow: a sequence of API calls that together achieve an outcome. Links define how those steps connect. Output from step 1 becomes input for step 2.

agents.json - basic structure json
{
  "apiVersion": "0.1.0",
  "baseUrl": "https://api.example.com",
  "info": {
    "title": "Example API for Agents",
    "description": "Manage orders and customers"
  },
  "chains": {
    "create_order": {
      "description": "Create a new order for an existing customer",
      "agent_instructions": "Look up the customer first, then create the order",
      "steps": [
        {
          "endpoint": "/customers/search",
          "method": "GET",
          "parameters": {
            "query": "$user_input.customer_name"
          }
        },
        {
          "endpoint": "/orders",
          "method": "POST",
          "parameters": {
            "customer_id": "$prev.results[0].id",
            "items": "$user_input.items"
          }
        }
      ]
    }
  }
}

The create_order chain above describes a two-step workflow: search for a customer, then create an order using that customer's ID. The agent_instructions field gives the LLM plain-language context to understand the intent. The steps define the exact API calls.
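
The $user_input and $prev tokens in those parameters are references the agent substitutes at run time. The spec doesn't publish a reference resolver, so the sketch below is a hypothetical mini-interpreter for that substitution syntax, not code from the Wildcard project.

```python
import re

def resolve(ref, user_input, prev):
    """Resolve a $user_input.* or $prev.* reference against available data.

    Supports dotted fields and [n] indexing, e.g. "$prev.results[0].id".
    Illustrative sketch only; not part of the published spec.
    """
    if not isinstance(ref, str) or not ref.startswith("$"):
        return ref  # literal value, pass through unchanged
    root, _, path = ref[1:].partition(".")
    value = {"user_input": user_input, "prev": prev}[root]
    for key, index in re.findall(r"(\w+)|\[(\d+)\]", path):
        value = value[key] if key else value[int(index)]
    return value

# Output of the first step (customer search), plus the user's original request
prev_step = {"results": [{"id": "cust_42", "name": "Ada"}]}
params = {"customer_id": "$prev.results[0].id", "items": "$user_input.items"}
resolved = {k: resolve(v, {"items": ["sku_1"]}, prev_step) for k, v in params.items()}
print(resolved)  # {'customer_id': 'cust_42', 'items': ['sku_1']}
```

This is the "links" idea in miniature: step 1's output flows into step 2's parameters without the LLM having to restate it.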

Authentication

The spec supports multiple auth schemes, the usual suspects:

  • API Key: Key passed in a header, query parameter, or cookie
  • Bearer Token: Standard OAuth 2.0 bearer tokens
  • Basic Auth: Username/password combination
  • OAuth: Full OAuth flows (on the roadmap for v0.2)
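
A client can turn each declared scheme into HTTP headers before calling a chain's endpoints. The scheme type strings and field names below ("api_key", "header") are assumptions for illustration; check the spec's actual auth schema before relying on them.

```python
import base64

def auth_headers(auth: dict, credentials: dict) -> dict:
    """Build request headers for the auth schemes the bullet list describes."""
    kind = auth["type"]
    if kind == "api_key":
        # Key in a header (query-parameter and cookie placement omitted for brevity)
        return {auth.get("header", "X-API-Key"): credentials["api_key"]}
    if kind == "bearer":
        return {"Authorization": f"Bearer {credentials['token']}"}
    if kind == "basic":
        raw = f"{credentials['username']}:{credentials['password']}".encode()
        return {"Authorization": "Basic " + base64.b64encode(raw).decode()}
    raise ValueError(f"unsupported auth type: {kind}")

print(auth_headers({"type": "bearer"}, {"token": "abc123"}))
```

Full OAuth flows would need token acquisition and refresh on top of this, which is presumably why they sit on the v0.2 roadmap rather than in the current spec.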

Design Principles

The Wildcard spec follows four principles that set it apart:

  1. Build on OpenAPI: Don't reinvent what already works. Extend existing API documentation.
  2. Optimize for LLMs: Structure the schema for machine consumption, not human readability.
  3. Stateless by design: Agents manage their own orchestration. The spec defines what to do, not how to manage sessions.
  4. Minimal API changes: Adding agents.json should not require changing your existing API.

How agents.json Compares to Other Discovery Formats

There are now several agent-related JSON files on the web, and at first glance they look similar. But each solves a different problem:

| File | Backed by | Purpose | Discovery path | Maturity |
| --- | --- | --- | --- | --- |
| agents.json | Wildcard AI (YC) | Agent-to-API interaction contracts | /.well-known/agents.json | Early (v0.1.0) |
| agent.json | Google (A2A) | Agent-to-agent communication | /.well-known/agent.json | Active (v0.2.5) |
| mcp.json | Anthropic (MCP) | Tool and context server discovery | GitHub registry / local config | Growing ecosystem |
| ai-card.json | AI Card initiative | Unified AI service metadata | /.well-known/ai-card.json | Proposal stage |

agents.json vs A2A's agent.json

Google's Agent-to-Agent (A2A) protocol uses an "Agent Card" at /.well-known/agent.json for agent-to-agent communication. The difference is directional: A2A's agent.json describes what an agent is (identity, skills, supported modes). agents.json describes what an API can do for agents (endpoints, workflows, auth). A2A is for agents talking to other agents. agents.json is for agents talking to your API.

agents.json vs MCP

Anthropic's Model Context Protocol (MCP) provides a stateful connection between an AI model and external tools. An MCP server needs to be running, and the agent connects to it. agents.json is a static file. No server needed. As Nordic APIs notes, MCP "lacks a discoverability layer" and probably works best alongside something like agents.json that solves the discovery problem.

agents.json vs OpenAPI

OpenAPI documents APIs for human developers. agents.json adds a layer on top for AI agents. The additions: flows (multi-step workflows), agent instructions (plain-language guidance for LLMs), and optimized schemas (structured for machines, not for documentation). You don't replace your OpenAPI spec. You build agents.json on top of it.

FOR AI AGENTS

agents.json tells AI agents what your API can do, with flows and natural-language instructions optimized for LLM consumption.

FOR DEVELOPERS

OpenAPI tells human developers how your API works, with type definitions, examples, and documentation.

The idea of machine-readable agent capabilities is being explored from multiple angles. Beyond Wildcard, a few other projects are tackling adjacent problems:

agent-permissions.json

The Lightweight Agent Standards Working Group (LAS-WG) is developing agent-permissions.json, a proposal published on arXiv for a "robots.txt-style permission manifest for web agents." agents.json declares capabilities. agent-permissions.json declares permissions: what agents are allowed to do on specific pages, down to individual HTML elements.

There's a nice distinction between resource-level rules (machine-enforceable, like "don't click this button") and action-level guidelines (behavioral directives, like "use _bot at the end of usernames when registering"). Discoverable via a <link rel="agent-permissions"> tag or at /.well-known/agent-permissions.json.

JSON Agents (Portable Agent Manifest)

The JSON Agents project defines a Portable Agent Manifest (PAM): a framework-agnostic format for describing AI agents themselves. Capabilities, tools, runtimes, governance policies. agents.json focuses on what your API offers to agents. PAM focuses on making agent definitions portable across frameworks like LangChain, AutoGen, and MCP.

AI Card (Unified AI Service Metadata)

The AI Card initiative wants to unify the proliferation of agent discovery files into a single standard. The idea: a single /.well-known/ai-card.json that includes metadata for all protocols a server supports (MCP, A2A, custom APIs) in one place. Still in proposal stage, but the direction is clear: one entry point instead of five separate files.

How to Implement agents.json

Implementation isn't complicated. Here's a step-by-step approach based on the Wildcard spec:

Step 1: Inventory Your API Capabilities

List every endpoint you want AI agents to access. Searching for products, placing orders, checking account status, booking appointments. Focus on tasks an agent would perform on behalf of a user.

Step 2: Define Agent Flows

For each user task, define the sequence of API calls. Group related endpoints into named chains with clear descriptions and instructions an LLM can understand:

agents.json - e-commerce example json
{
  "apiVersion": "0.1.0",
  "baseUrl": "https://api.mystore.com/v1",
  "info": {
    "title": "MyStore API",
    "description": "E-commerce API for product search, cart management, and checkout"
  },
  "auth": {
    "type": "bearer",
    "description": "OAuth 2.0 access token required for cart and checkout operations"
  },
  "chains": {
    "search_products": {
      "description": "Search the product catalog by keyword, category, or price range",
      "agent_instructions": "Use this to help users find products. Supports filtering by category and price.",
      "steps": [
        {
          "endpoint": "/products/search",
          "method": "GET",
          "parameters": {
            "q": "$user_input.query",
            "category": "$user_input.category",
            "min_price": "$user_input.min_price",
            "max_price": "$user_input.max_price"
          }
        }
      ]
    },
    "add_to_cart_and_checkout": {
      "description": "Add a product to the cart and proceed to checkout",
      "agent_instructions": "First add the item to the cart, then initiate checkout. Requires auth.",
      "steps": [
        {
          "endpoint": "/cart/items",
          "method": "POST",
          "parameters": {
            "product_id": "$user_input.product_id",
            "quantity": "$user_input.quantity"
          }
        },
        {
          "endpoint": "/checkout",
          "method": "POST",
          "parameters": {
            "cart_id": "$prev.cart_id"
          }
        }
      ]
    }
  }
}

Step 3: Host the File

Place your agents.json file at the well-known path on your domain:

Discovery URL plain
https://yourdomain.com/.well-known/agents.json

Serve it with Content-Type: application/json, and don't forget the CORS headers if agents will access it from other origins.
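
If you want to see those two headers in action, a minimal Python handler is enough. The port and the wildcard CORS policy are illustrative choices; any static file host or CDN rule that sets the same headers works just as well.

```python
import http.server
import json

AGENTS_JSON = json.dumps({"apiVersion": "0.1.0", "chains": {}}).encode()

class AgentsJSONHandler(http.server.BaseHTTPRequestHandler):
    """Serves /.well-known/agents.json with JSON and CORS headers."""

    def do_GET(self):
        if self.path == "/.well-known/agents.json":
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            # Allow cross-origin reads so agents running elsewhere can discover the file
            self.send_header("Access-Control-Allow-Origin", "*")
            self.send_header("Content-Length", str(len(AGENTS_JSON)))
            self.end_headers()
            self.wfile.write(AGENTS_JSON)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# To run: http.server.HTTPServer(("", 8080), AgentsJSONHandler).serve_forever()
```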

Step 4: Reference Your OpenAPI Spec

Already have an OpenAPI spec? Reference it from your agents.json so agents can access the detailed type definitions. The agents.json flows are the "what to do" layer; OpenAPI is the "how it works" layer:

agents.json - with OpenAPI reference json
{
  "apiVersion": "0.1.0",
  "baseUrl": "https://api.example.com",
  "info": {
    "title": "Example API",
    "openapi_spec": "https://api.example.com/openapi.json"
  },
  "chains": {
    "get_user_profile": {
      "description": "Retrieve a user profile by ID",
      "agent_instructions": "Use this when the user wants to view profile information",
      "steps": [
        {
          "endpoint": "/users/{user_id}",
          "method": "GET"
        }
      ]
    }
  }
}
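
An agent that wants the type details can follow the openapi_spec link and join each chain step to its OpenAPI operation. How a consumer combines the two layers isn't defined by the spec, so the joining logic below is an assumption, with a stand-in OpenAPI document instead of a network fetch.

```python
import json

agents_doc = json.loads("""{
  "info": {"openapi_spec": "https://api.example.com/openapi.json"},
  "chains": {"get_user_profile": {"steps": [{"endpoint": "/users/{user_id}", "method": "GET"}]}}
}""")

# Stand-in for the document normally fetched from info.openapi_spec
openapi_doc = {
    "paths": {
        "/users/{user_id}": {
            "get": {
                "summary": "Get a user",
                "parameters": [{"name": "user_id", "in": "path", "required": True}],
            }
        }
    }
}

def describe_step(step, openapi):
    """Look up a chain step's operation in the OpenAPI 'how it works' layer."""
    op = openapi["paths"][step["endpoint"]][step["method"].lower()]
    return {
        "endpoint": step["endpoint"],
        "summary": op["summary"],
        "required_params": [p["name"] for p in op.get("parameters", []) if p.get("required")],
    }

step = agents_doc["chains"]["get_user_profile"]["steps"][0]
print(describe_step(step, openapi_doc))
```

The chain tells the agent which call to make and why; the OpenAPI lookup supplies the parameter names and types it needs to make that call correctly.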

Step 5: Test and Iterate

Use the Wildcard Bridge SDK (Python 3.10+) to validate your agents.json and test agent interactions:

Testing with Wildcard Bridge python
# Install the SDK first: pip install wildcard-bridge
from wildcard_bridge import AgentsJSON

# Load and validate your spec
spec = AgentsJSON.from_url("https://yourdomain.com/.well-known/agents.json")

# List available chains
for chain_name, chain in spec.chains.items():
    print(f"{chain_name}: {chain.description}")

Who Is Using agents.json?

According to the Nordic APIs comparison, Wildcard has published agents.json files for major API providers including Resend, Alpaca, Slack, HubSpot, and Stripe. Important detail: these aren't maintained by the API providers themselves. Wildcard created them to demonstrate the format.

OpenBB, a financial data platform, has also adopted the format with their own agents.json reference for describing AI agents that connect to their workspace.

Current Status and Maturity

I'll be honest: agents.json is the least standardized of the agent discovery formats. A2A has Google behind it with a formal protocol spec. MCP has Anthropic with a growing ecosystem. agents.json is a community proposal driven by one startup.

  • Spec version: 0.1.0
  • License: Apache 2.0
  • Status: Early

What we know about the maturity level:

  • No formal RFC: There is no IETF draft or W3C proposal for agents.json. It is an open-source project on GitHub.
  • Single primary author: Wildcard AI drives the specification. Community contributions exist but are limited.
  • No browser or platform support: No major AI platform (OpenAI, Google, Anthropic) officially consumes agents.json files.
  • Incomplete tooling: The Wildcard Bridge SDK is the primary tool. Alternatives are scarce.
  • Active development: The roadmap includes memory management, field transformations, rate-limiting, and conditional logic.

That said, the concept behind agents.json is gaining momentum from multiple directions. Whether the final standard ends up being called agents.json, ai-card.json, or something else entirely doesn't really matter. The web needs a "robots.txt for capabilities," and Wildcard's spec is the most concrete attempt so far.

The New Complement to robots.txt

For thirty years, the web's contract between sites and bots was one-directional. "Here's what you can't do." With robots.txt, site owners blocked paths, restricted crawl rates, and told bots which pages to ignore. Good model, simpler times.

The AI agent era needs a two-directional contract. Sites still need to set boundaries (robots.txt, agent-permissions.json), but they also need to advertise capabilities. An AI agent booking a flight, comparing insurance quotes, or managing a subscription needs to know what's possible. Not just what's forbidden.

The model that's emerging:

  • robots.txt - What crawlers cannot access (restrictions)
  • llms.txt - What your site is about (content summary for LLMs)
  • agents.json - What agents can do (capabilities and workflows)
  • agent.json - What your agent is (identity for agent-to-agent communication)

Together, these files form a layered discovery system. From basic access control, to content understanding, to structured interaction, to inter-agent collaboration.
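
That layered model maps directly to a handful of well-known paths an agent can probe. The layer names below are my shorthand; the file paths come from the list above.

```python
# The four discovery layers and where an agent would look for each.
DISCOVERY_FILES = {
    "restrictions": "/robots.txt",
    "content_summary": "/llms.txt",
    "capabilities": "/.well-known/agents.json",
    "agent_identity": "/.well-known/agent.json",
}

def discovery_urls(origin: str) -> dict:
    """Map each discovery layer to its full URL on a given origin."""
    return {layer: origin.rstrip("/") + path for layer, path in DISCOVERY_FILES.items()}

for layer, url in discovery_urls("https://example.com").items():
    print(f"{layer:16} {url}")
```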

Expected Evolution

The space is moving fast. Based on current trajectories, here's what I expect:

Near-term (2026)

  • Wildcard agents.json reaches v0.2 with OAuth support and conditional logic
  • More API providers publish their own agents.json files (rather than Wildcard doing it for them)
  • The LAS-WG agent-permissions.json gains traction as a complementary standard
  • AI platforms begin experimenting with consuming capability files during browsing

Medium-term (2026-2027)

  • Convergence between competing formats, possibly through the AI Card initiative or a similar unification effort
  • Major AI platforms (OpenAI, Google, Anthropic) begin officially supporting a capability discovery standard
  • Potential IETF or W3C working group for agent-web interaction standards

Long-term (2027+)

  • A unified agent discovery standard emerges, possibly combining elements from agents.json, A2A, and MCP
  • Browser-level support for agent capability files, similar to how browsers handle robots.txt
  • Agent-aware web frameworks generate capability files automatically from API definitions

Who Should Implement agents.json?

Given how early things still are, implementing agents.json pays off most for:

  • API-first companies that want AI agents to reliably consume their services
  • SaaS platforms building AI agent integrations alongside their MCP or A2A support
  • E-commerce sites that want agents to search, compare, and transact on behalf of users
  • Developer tool providers whose APIs are frequently used by coding agents and AI assistants
  • Early adopters who want to be ready when AI platforms begin consuming capability files

Running a content-focused website without APIs (blog, portfolio, informational site)? agents.json is less relevant for you. Focus first on structured data, llms.txt, and semantic HTML. Those have broader support and more immediate impact.

Practical Implementation Checklist

Going for it? Here's a quick checklist:

  1. Start with your existing OpenAPI spec (or create one if you don't have it)
  2. Identify the top 5-10 workflows agents would want to perform
  3. Write chains with clear agent_instructions for each workflow
  4. Define your authentication scheme
  5. Host the file at /.well-known/agents.json
  6. Set appropriate CORS headers for cross-origin access
  7. Test with the Wildcard Bridge SDK
  8. Monitor and iterate as the specification evolves

The Bottom Line

agents.json is a bet on a web where AI agents don't have to guess what your site can do, because you just tell them. The spec is young, the ecosystem is small, no major platform officially supports it. But the need is real, and it's growing.

Whether you implement it today depends on your situation. Running an API that agents should use? Experiment with it now. Running a content site? Your time is better spent on structured data, semantic HTML, and llms.txt. Either way, understanding AI agent readiness and the idea of declaring capabilities (not just restrictions) is the direction the web is heading.

robots.txt told bots what they can't do. agents.json tells agents what they can. That contract is being rewritten.
