
What Is MCP? The Model Context Protocol for AI Agents

10 min read
Bart Waardenburg

AI Agent Readiness Expert & Founder

Every AI assistant has the same problem: isolation. It can generate text, write code, and reason through complex problems, but it can't check your calendar, query your database, or interact with your tools without custom integration work. Connecting an AI model to external systems used to mean building bespoke code for every tool, every API, every data source. Anyone who's tried that knows it doesn't scale.

The Model Context Protocol (MCP) changes that. MCP is an open standard, created by Anthropic and now governed by the Linux Foundation, that gives AI applications a universal way to connect to external tools, data sources, and services. One interface replaces dozens of proprietary connectors.

What Is MCP?

MCP stands for Model Context Protocol. It's an open protocol that gives AI applications a standardized way to connect to external data sources and tools. Instead of each AI tool building its own integration for every service, MCP defines a common language. Any AI client can use it to talk to any MCP-compatible server.

Anthropic released MCP in November 2024. It came from a very practical frustration: they kept duplicating integration code between Claude Desktop and various IDEs. Within a year, it became the fastest-adopted open standard in AI history, with 97 million monthly SDK downloads and adoption by every major AI platform.

  • SDK downloads: 97M+ per month
  • MCP servers: 10,000+
  • Spec version: 2025-11-25

Why MCP Matters

Before MCP, every AI application had to build custom integrations for every tool it wanted to use. Want Claude to access your GitHub repositories, Slack channels, and Postgres database? Three separate integration codebases. Want ChatGPT to access the same tools? Three more. The result was an M×N problem: M applications times N tools, each requiring unique glue code.

MCP collapses this into an M+N solution. Build one MCP server for your tool, and every MCP-compatible client can use it. Build one MCP client, and it talks to every MCP server. Same pattern that made USB successful: standardize the interface, and the ecosystem grows on its own.
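The arithmetic behind that claim is easy to sketch. With hypothetical counts of 5 AI applications and 20 tools:

```python
apps, tools = 5, 20  # hypothetical counts

# Without a standard: every application needs custom glue code for every tool.
integrations_without_mcp = apps * tools
print(integrations_without_mcp)  # 100

# With MCP: one client per application, one server per tool.
implementations_with_mcp = apps + tools
print(implementations_with_mcp)  # 25
```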

UNIVERSAL

Build one MCP server for your tool, and every AI client can connect to it. No more per-application integrations.

OPEN STANDARD

Governed by the Linux Foundation's Agentic AI Foundation. No single company controls the protocol.

ECOSYSTEM

Over 10,000 active MCP servers and 97M+ monthly SDK downloads across Python and TypeScript.

How MCP Works

MCP uses a client-server architecture built on JSON-RPC 2.0, inspired by the Language Server Protocol (LSP) that standardized IDE tooling. If you've worked with LSP, the structure will feel familiar. The protocol defines three roles:

  • Hosts: AI applications that initiate connections (Claude Desktop, Cursor, VS Code)
  • Clients: Protocol connectors within the host that maintain 1:1 connections with servers
  • Servers: Services that expose tools, data, and capabilities to AI models

A host application contains one or more MCP clients, each connected to a different MCP server. Ask Claude to "check my GitHub issues" and the host routes that request through its MCP client to the GitHub MCP server. That server executes the action and returns the result.
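Under the hood, that routing is plain JSON-RPC 2.0. Here's a sketch of what a tools/call request and its matching response could look like on the wire; the tool name, arguments, and result text are hypothetical:

```python
import json

# Client-to-server request asking a (hypothetical) GitHub server to run a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "list_issues", "arguments": {"repo": "me/project"}},
}

# The server's reply carries the same id and a content array.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 open issues"}]},
}

wire = json.dumps(request)  # what actually crosses the transport
print(json.loads(wire)["id"] == response["id"])  # True: reply matched by id
```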

Server Primitives: Tools, Resources, and Prompts

MCP servers expose three types of primitives to clients:

  • Tools: Functions the AI model can call to take actions (search a database, send an email, create a file). The model decides when and how to use them.
  • Resources: Data the server makes available for context (files, database records, API responses). These are read-only and used to enrich the model's understanding.
  • Prompts: Reusable prompt templates and workflows that the server offers. Users can select these to structure their interactions.
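When a client asks a server what it offers, each primitive is described in a predictable shape. For tools, one entry in a tools/list result could look like this sketch (the weather tool itself is made up):

```python
# One entry in a tools/list result: a name, a description, and a JSON
# Schema describing the tool's input. The tool here is hypothetical.
tool = {
    "name": "get-weather",
    "description": "Get current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
print(tool["inputSchema"]["required"])  # ['city']
```

The inputSchema is what lets the model construct valid arguments without ever seeing the server's implementation.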

It works the other way too. MCP clients can offer capabilities back to servers:

  • Sampling: Allows servers to request LLM completions through the client, enabling recursive agent workflows
  • Roots: Lets servers query the filesystem or URI boundaries they should operate within
  • Elicitation: Enables servers to request additional information directly from users during execution

Transport Types

MCP defines two standard transport mechanisms. Which one you pick depends on your deployment model.

stdio: Local Process Communication

In the stdio transport, the client launches the MCP server as a subprocess. Communication happens over standard input and output. The simplest transport, and it works well for local tools like file systems, databases, and development utilities.

Claude Desktop config with a stdio server (JSON):
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
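What the client and that subprocess actually exchange is newline-delimited JSON. The sketch below uses a trivial Python echo process as a stand-in for a real MCP server, purely to show the framing:

```python
import json
import subprocess
import sys

# stdio transport framing: each JSON-RPC message is one line of JSON on
# stdin/stdout. A trivial echo subprocess stands in for a real MCP server
# (which a client would normally launch, e.g. via npx).
echo_server = [sys.executable, "-c",
               "import sys; sys.stdout.write(sys.stdin.readline())"]

request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
proc = subprocess.run(echo_server, input=json.dumps(request) + "\n",
                      capture_output=True, text=True)
reply = json.loads(proc.stdout)
print(reply["method"])  # ping
```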

Streamable HTTP: Remote Server Communication

The Streamable HTTP transport replaced the earlier HTTP+SSE transport in the 2025-03-26 spec revision. It enables remote MCP servers that can handle multiple client connections over the network. The server exposes a single HTTP endpoint that accepts POST and GET requests, with optional Server-Sent Events (SSE) for streaming responses.

This is what makes MCP relevant for web platforms and SaaS products. Instead of requiring users to run a local process, you host an MCP server at a URL like https://api.example.com/mcp and any compatible client can connect remotely.

What Streamable HTTP gives you:

  • Session management via Mcp-Session-Id headers for stateful interactions
  • Resumability through SSE event IDs, allowing clients to reconnect after drops without losing messages
  • Multiple concurrent connections from different clients to the same server
  • Security requirements including Origin header validation to prevent DNS rebinding attacks
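From the client side, a Streamable HTTP exchange is an ordinary HTTP request with MCP-specific headers. This sketch only constructs the request to show its shape; the endpoint URL and session id are hypothetical, and nothing is actually sent:

```python
import json
import urllib.request

# Frame a Streamable HTTP request to a (hypothetical) remote MCP server.
body = json.dumps({"jsonrpc": "2.0", "id": 2,
                   "method": "tools/list"}).encode()
req = urllib.request.Request(
    "https://mcp.example.com/v1",        # hypothetical endpoint
    data=body, method="POST",
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",  # JSON or SSE reply
        "Mcp-Session-Id": "abc123",      # issued by the server at init time
    })
print(req.get_method())  # POST
```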

Discovery: /.well-known/mcp.json

For remote MCP servers, discovery is the first problem you need to solve. How does a client know what MCP servers a domain offers? The current approach is a well-known discovery endpoint at /.well-known/mcp.json that lists available MCP servers and their endpoints.

/.well-known/mcp.json:
{
  "version": "1.0",
  "servers": [
    {
      "name": "primary",
      "description": "Main API server for example.com",
      "endpoint": "https://mcp.example.com/v1"
    },
    {
      "name": "analytics",
      "description": "Analytics data access",
      "endpoint": "https://mcp.example.com/analytics"
    }
  ]
}

Same idea as robots.txt for crawlers or /.well-known/openid-configuration for OAuth: clients automatically find and connect to available services without manual configuration. Don't forget proper CORS headers (Access-Control-Allow-Origin: *) to support browser-based clients.
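A client consuming that file simply fetches and parses it. Here the document from above is inlined instead of fetched over HTTPS, to show the endpoint lookup:

```python
import json

# Parse a /.well-known/mcp.json discovery document (inlined here; fetched
# over HTTPS in practice) and pick an endpoint by server name.
doc = json.loads("""{
  "version": "1.0",
  "servers": [
    {"name": "primary", "endpoint": "https://mcp.example.com/v1"},
    {"name": "analytics", "endpoint": "https://mcp.example.com/analytics"}
  ]
}""")
endpoints = {s["name"]: s["endpoint"] for s in doc["servers"]}
print(endpoints["analytics"])  # https://mcp.example.com/analytics
```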

Who Backs MCP?

MCP is no longer a single-company project. On December 9, 2025, just over a year after launch, Anthropic donated MCP to the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation. The AAIF was co-founded by Anthropic, Block, and OpenAI, with support from Google, Microsoft, AWS, Cloudflare, and Bloomberg.

The AAIF also governs two other founding projects alongside MCP:

  • Goose (by Block): an open-source AI agent framework
  • AGENTS.md (by OpenAI): a standard for describing agent capabilities in repositories

The AAIF's work also ties into the agents.json specification, which provides a standardized way for websites to advertise their AI agent capabilities and supported protocols.

That governance structure matters. MCP sits under the same foundation that stewards Linux, Kubernetes, and PyTorch. It's not going away, and no single company controls its direction.

Current Adoption

RedMonk called MCP "the fastest-adopted standard" they've ever tracked, with an S-curve adoption pattern reminiscent of Docker's rapid market saturation. The timeline:

  • Nov 2024: Anthropic releases MCP with Python and TypeScript SDKs. Early adopters: Block, Apollo, Zed, Replit, Codeium, Sourcegraph.
  • Mar 2025: OpenAI adopts MCP across the Agents SDK, Responses API, and ChatGPT Desktop.
  • Apr 2025: Google DeepMind confirms MCP support in upcoming Gemini models.
  • May 2025: Microsoft and GitHub join the MCP steering committee at Build 2025.
  • Nov 2025: Major spec update (2025-11-25): async tasks, enhanced OAuth, elicitation, server icons, tool calling in sampling.
  • Dec 2025: Anthropic donates MCP to the Agentic AI Foundation under the Linux Foundation.

Today, MCP is supported by virtually every major AI coding tool and assistant:

  • Claude Desktop & Claude Code: Anthropic's own applications, MCP-native from day one
  • ChatGPT: OpenAI added full MCP support in March 2025
  • Cursor: MCP integration is central to its tool-use and context awareness
  • VS Code + GitHub Copilot: Agent mode uses MCP to invoke external tools during coding sessions
  • Windsurf (Codeium): Full MCP server support for AI coding workflows
  • Gemini: Google DeepMind confirmed MCP support for contextually aware agent interactions
  • Microsoft Copilot: Enterprise-wide MCP adoption across the Copilot ecosystem
  • Zed, Cline, Replit, Continue.dev: The broader developer tools ecosystem

The MCP Ecosystem

The ecosystem around MCP has grown fast. Over 10,000 active MCP servers are publicly available, covering everything from developer tools to enterprise data platforms. The community maintains multiple registries and directories where you can find servers for your use case.

Popular MCP servers cover:

  • Developer tools: GitHub, GitLab, Jira, Linear, Sentry
  • Data & databases: PostgreSQL, MongoDB, Snowflake, BigQuery
  • Productivity: Google Drive, Slack, Notion, Confluence
  • Infrastructure: AWS, Cloudflare, Docker, Kubernetes
  • Web & search: Brave Search, Firecrawl, Puppeteer, Playwright
  • Design: Figma, Blender

The official SDKs are available in TypeScript and Python, with community SDKs for Go, Rust, Java, C#, Ruby, and more. The TypeScript SDK is maintained at @modelcontextprotocol/sdk and the Python SDK at mcp (with the high-level FastMCP framework for rapid development).

What's Evolving in the MCP Spec

The November 2025 specification (version 2025-11-25) introduced several features that show where MCP is heading:

  • Async Tasks: Durable request tracking with polling and deferred result retrieval, enabling long-running operations that don't require keeping a connection open
  • URL Mode Elicitation: Servers can send users to a browser URL for sensitive flows like OAuth authentication or payment processing, keeping credentials secure
  • Enhanced OAuth: OpenID Connect Discovery support, incremental scope consent, and Client ID Metadata Documents that eliminate the need for per-server registration
  • Server Icons: Servers can expose icon metadata for tools, resources, and prompts, improving discoverability in client UIs
  • Tool Calling in Sampling: Servers can include tool definitions when requesting LLM completions, enabling more complex agentic workflows
  • Formalized Governance: Working groups, interest groups, SDK tiering, and community communication guidelines

The community is also working on standardized server discovery via /.well-known/mcp.json, better enterprise authorization patterns, and deeper integration with other agentic protocols.

How to Implement MCP

Building an MCP server is pretty straightforward with the official SDKs. Here's a minimal example that exposes a tool to an AI assistant.

Building an MCP Server in TypeScript

MCP server with a weather tool (TypeScript):
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather",
  version: "1.0.0"
});

server.tool(
  "get-weather",
  "Get current weather for a city",
  { city: z.string().describe("City name") },
  async ({ city }) => {
    const response = await fetch(
      // Placeholder API; URL-encode user input before interpolating it
      `https://api.weather.example.com/current?city=${encodeURIComponent(city)}`
    );
    const data = await response.json();
    return {
      content: [{
        type: "text",
        text: `Weather in ${city}: ${data.temperature}°C, ${data.condition}`
      }]
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

Building an MCP Server in Python

The Python SDK includes FastMCP, a high-level framework that makes it even simpler:

MCP server with FastMCP (Python):
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
async def get_weather(city: str) -> str:
    """Get current weather for a city."""
    response = await fetch_weather(city)  # fetch_weather: placeholder for your own data source
    return f"Weather in {city}: {response.temperature}°C, {response.condition}"

if __name__ == "__main__":
    mcp.run()

Adding MCP Discovery to Your Website

Hosting a remote MCP server? Add a discovery endpoint so clients can find it automatically:

/.well-known/mcp.json:
{
  "version": "1.0",
  "servers": [
    {
      "name": "your-platform",
      "description": "Access your platform data and tools via MCP",
      "endpoint": "https://api.yourplatform.com/mcp"
    }
  ]
}

Put this at /.well-known/mcp.json with the proper CORS headers, and MCP clients will find your server without any manual configuration.

What Implementing MCP Means for Your Platform

Adding MCP support to your platform or API gets you concrete benefits:

REACH

Your tool becomes accessible to every MCP-compatible AI assistant: Claude, ChatGPT, Gemini, Cursor, VS Code, and thousands more.

REDUCED EFFORT

Instead of building and maintaining separate integrations for each AI platform, you build one MCP server that works everywhere.

TRUST & SECURITY

MCP's security model includes user consent, OAuth flows, Origin validation, and structured permission systems baked into the protocol.

FUTURE-PROOF

Backed by every major AI company and governed by the Linux Foundation. MCP is the standard that's winning.

MCP vs A2A: How They Complement Each Other

Google's Agent-to-Agent (A2A) protocol is another standard you'll come across in the agentic AI space. The question I get a lot: are MCP and A2A competing? No. They solve different problems at different layers.

  • Purpose: MCP connects an AI model to tools and data; A2A enables agents to communicate with each other.
  • Direction: MCP is vertical (AI to capabilities); A2A is horizontal (agent to agent).
  • Analogy: MCP is a worker using tools from a toolbox; A2A is workers coordinating on a shared project.
  • Created by: MCP by Anthropic (now Linux Foundation); A2A by Google Cloud.
  • Use case: MCP queries a database, calls an API, reads files; A2A delegates tasks between specialized agents.

In practice, a well-architected agentic system uses both. Individual agents use MCP to connect to their tools and data sources (the vertical axis). A2A enables those agents to discover and collaborate with each other (the horizontal axis). MCP is the hands, A2A is the conversation between team members.

MCP vs WebMCP: Server-Side Meets the Browser

MCP connects AI models to backend tools and data. WebMCP does the same thing, but for the browser. WebMCP (Web Model Context Protocol) is a W3C proposal co-authored by Google and Microsoft that lets websites expose their functionality as structured tools. AI agents can discover and invoke them directly in the browser, no backend required.

Two protocols at different layers, same mental model: define tools with names, descriptions, and input schemas, then let AI agents call them.

  • Runs where: MCP on backend servers (local or remote); WebMCP client-side in the browser (JavaScript).
  • Transport: MCP uses stdio or Streamable HTTP; WebMCP uses browser APIs (navigator.modelContext).
  • Best for: MCP suits APIs, databases, and backend services; WebMCP suits interactive websites, forms, and SPAs.
  • Discovery: MCP via /.well-known/mcp.json; WebMCP via /.well-known/webmcp.json plus page-level declarations.
  • Backed by: MCP by the Linux Foundation (Anthropic, OpenAI, Google, Microsoft); WebMCP by the W3C (Google, Microsoft).

MCP gives AI agents access to your backend (query a database, call an API, manage infrastructure). WebMCP gives them access to your frontend (search your product catalog, fill out a booking form, check availability). A travel site might use MCP to expose its booking API to AI assistants, and WebMCP to let browser-based agents search flights and complete bookings directly in the UI.

The protocols complement each other fully. Implement both and you give AI agents the widest possible surface area to interact with your platform, whether they come through a backend integration or through a browser.

Who Should Implement MCP?

MCP isn't just for AI companies. Any organization that exposes data or functionality can benefit from it:

  • API providers: If you have a REST or GraphQL API, wrapping it as an MCP server makes your service accessible to every AI assistant. Stripe, Twilio, and similar platforms are natural MCP candidates.
  • Data platforms: Databases, analytics tools, and data warehouses can expose query interfaces through MCP, letting AI models access structured data directly.
  • SaaS products: Any product with an API can offer an MCP server alongside its REST endpoints. This gives customers AI-powered access without requiring them to write integration code.
  • Developer tools: CI/CD platforms, monitoring tools, and infrastructure services can surface their capabilities through MCP for AI-assisted DevOps workflows.
  • Content platforms: CMS systems, documentation platforms, and knowledge bases can expose content as MCP resources, making their data directly available to AI agents. Understanding how AI models like Claude select sources can help you structure these resources effectively.
  • Enterprise systems: CRMs, ERPs, and internal tools can use MCP to give AI assistants controlled access to business data, with OAuth and permission scoping built into the protocol.

Getting Started with MCP

Ready to make your platform MCP-compatible? Here's a practical path:

  1. Choose your SDK: Use the official TypeScript SDK or Python SDK (with FastMCP for rapid development).
  2. Define your tools: Identify the key actions and data queries your platform offers. Each API endpoint is a potential MCP tool.
  3. Choose a transport: Use stdio for local dev tools, Streamable HTTP for remote/production servers.
  4. Implement authentication: For remote servers, implement OAuth 2.0 flows using the MCP authorization spec. The November 2025 update added Client ID Metadata Documents to simplify registration.
  5. Add discovery: Serve a /.well-known/mcp.json file listing your MCP server endpoints.
  6. Test with real clients: Connect your server to Claude Desktop, Cursor, or VS Code to verify it works across different MCP hosts.

The Standard Has Been Set

MCP has achieved something rare: rapid convergence on a single standard. In just over a year, it went from an internal Anthropic experiment to the universal protocol for AI-to-tool integration. Every major AI company backs it, and the Linux Foundation governs it. It's part of the broader shift from traditional SEO toward AI Engine Optimization, where making your platform machine-readable becomes as important as making it human-readable.

For platform builders, the message is clear: MCP is how AI assistants will connect to your tools. Early adopters are already seeing their services used in AI workflows across Claude, ChatGPT, Gemini, and dozens of coding tools. Build now and ride the momentum rather than catching up later.
