
Build for Agents: Why CLIs Are the New Distribution Channel

11 min read
Bart Waardenburg

AI Agent Readiness Expert & Founder

Andrej Karpathy posted something last week that deserves more attention than the usual Twitter discourse. In a post that pulled nearly 2 million views, the former Tesla AI director argued that CLIs are exciting precisely because they're "legacy" technology. Not despite being old, because of it. stdin/stdout, flags, JSON output, pipes. The Unix philosophy accidentally built the perfect interface for AI agents, decades before anyone knew they'd exist.

The real point wasn't about CLIs, though. It was a question aimed at every product builder: "If you have any kind of product or service, think: can agents access and use them?" If the answer is no, you've got a problem. Your product is becoming invisible to what might be the fastest-growing distribution channel software has ever seen. That's the core question behind AI agent readiness.

Karpathy's Thesis: Legacy Is the New Advantage

The demo was simple but telling. Karpathy asked Claude to install the Polymarket CLI (a Rust-built command-line tool for querying prediction markets) and then to build "any arbitrary dashboards or interfaces or logic." Three minutes later, Claude had a working terminal dashboard showing the highest-volume Polymarket contracts and their 24-hour price changes.

No API integration code. No auth flow. No SDK setup. A CLI that outputs structured data, and an agent that knows what to do with it.

He then did the same thing with the GitHub CLI. Claude navigated repos, inspected issues, reviewed PRs, read code. The agent stitched existing command-line tools into workflows that would've taken a developer hours to wire up by hand.

Think about what makes this work. CLI tools follow conventions that haven't changed in decades. They take flags. They spit out text or JSON to stdout. They compose through pipes. They have --help flags that describe their own capabilities. That's essentially everything an AI agent needs: discoverability, structured I/O, composability. No screenshotting a UI or parsing a DOM.
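The composition property is easy to demonstrate with nothing but standard Unix tools; the sketch below uses printf, sort, and uniq rather than any agent-specific tooling:

```shell
# Count occurrences of each line in a stream, most frequent first.
# Each tool does one job; the pipe composes them -- the same mechanism
# an agent uses to chain CLI tools into a workflow.
printf 'alpha\nbeta\nalpha\n' | sort | uniq -c | sort -rn
# prints "2 alpha" above "1 beta" (counts are left-padded by uniq)
```

Swap printf for any command that emits lines and the rest of the pipeline works unchanged; that stability is what makes pipes a reliable surface for agents.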

Agents Are the New Distribution Channel

Zoom out from the CLI angle and the picture gets bigger. Agents are becoming the primary distribution channel for software. They don't browse your marketing site or watch your demo video. They don't click through an onboarding flow. They call your CLI, hit your MCP server, or read your docs programmatically. If none of those surface areas exist, your product simply doesn't show up in their world.

And this shift happened fast. MCP went from zero to 97 million monthly SDK downloads in twelve months. Over 10,000 active servers are running. OpenAI, Google DeepMind, Microsoft, Cloudflare, they all adopted it. Anthropic donated MCP to the Linux Foundation in December 2025, not as a gesture but because the standard had already won. Running an MCP server is starting to feel like running a web server. It's just what you do.

MCP SDK downloads: 97M+/mo
Active MCP servers: 10,000+
Enterprises with AI agents: 85%

85% of enterprises are expected to have AI agents deployed. Those agents need structured, programmatic access to whatever tools they're working with. Your beautiful React dashboard? Worthless to an agent trying to pull data into a pipeline at 3am. Agents don't see your GUI, they see the accessibility tree. What they need is a CLI, an MCP endpoint, documentation they can actually parse.

Why CLIs Are Accidentally Perfect for AI Agents

There's a reason Karpathy put "legacy" in quotes. He's not saying CLIs are outdated. He's saying the opposite. The things that make them old (stable conventions, text-based I/O, predictable behavior) are exactly the things that make them perfect for agents.

STRUCTURED I/O

stdin/stdout with JSON output. No GUI rendering, no screenshot parsing, no vision model overhead. Agents can read and write directly.

COMPOSABLE

Unix pipes let agents chain tools together. The output of one command becomes the input to the next. Complex workflows emerge from simple building blocks.

SELF-DESCRIBING

--help flags, man pages, and consistent flag conventions let agents discover capabilities at runtime without burning tokens on documentation.

DETERMINISTIC

Same input, same output. No CSS layout shifts, no JavaScript hydration, no race conditions. Agents get predictable, reliable results every time.

Justin Poehnelt, who builds Google Workspace tooling, recently wrote a guide on designing CLIs with agents as the primary user. His framing stuck with me: "Human DX optimizes for discoverability and forgiveness. Agent DX optimizes for predictability and defense-in-depth." Those are very different design targets.

A few of his practical recommendations: accept raw JSON payloads as first-class input (not just convenience flags), expose runtime-queryable schemas so agents can introspect capabilities without loading docs, add field masks to keep responses small and save token budget, and include --dry-run flags so agents can validate before executing. If you're building a CLI today, his post is worth reading in full.
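Two of those patterns, raw JSON payload input and a --dry-run guard, can be sketched as a small shell wrapper. The deploy command below is hypothetical, and python3 -m json.tool merely stands in for real payload validation:

```shell
# Hypothetical "deploy" wrapper demonstrating two agent-DX patterns:
# accept a raw JSON payload, and support --dry-run to validate before acting.
deploy() {
  payload=$1
  # Reject malformed JSON before doing anything irreversible.
  if ! echo "$payload" | python3 -m json.tool >/dev/null 2>&1; then
    echo "invalid payload" >&2
    return 1
  fi
  if [ "$2" = "--dry-run" ]; then
    echo "would deploy: $payload"   # validate-only path an agent can probe
  else
    echo "deployed: $payload"
  fi
}

deploy '{"env": "prod"}' --dry-run   # prints: would deploy: {"env": "prod"}
```

An agent can call the --dry-run path first, confirm the payload parses, and only then run the real command, exactly the defense-in-depth posture Poehnelt describes.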

The Four Agent-Accessible Surface Areas

Karpathy's post ends with a checklist that's worth expanding on. I see four surface areas that determine whether agents can actually find and use your product:

1. Command-Line Interface (CLI)

If your product has an API, it should have a CLI. Full stop. The Polymarket CLI that Karpathy demoed is built in Rust. Agents can query markets, place trades, pull data, all from the terminal with zero overhead. No SDK, no auth dance, no boilerplate.

The design principles worth following: output JSON by default when stdout isn't a TTY, expose an --output json flag, support raw payload input alongside convenience flags, and make the CLI self-describing via schema introspection commands.
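The auto-detect piece is a few lines of portable shell built on the -t file-descriptor test. The report command and its score output below are made up for illustration:

```shell
# Emit human-readable text on a terminal, JSON when stdout is piped --
# the auto-detect pattern that lets one command serve both audiences.
report() {
  if [ -t 1 ]; then
    echo "score: 98 (A+)"                     # human at a terminal
  else
    printf '{"score": 98, "grade": "A+"}\n'   # agent reading a pipe
  fi
}

report           # at a terminal: score: 98 (A+)
report | cat     # piped: {"score": 98, "grade": "A+"}
```

The same detection works in any language; it only requires asking the OS whether file descriptor 1 is attached to a terminal.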

2. MCP Server

I've written about MCP extensively, but the short version: it's become the universal standard for connecting AI models to external tools. Claude, ChatGPT, Gemini, Cursor, VS Code, they all speak MCP. Ship an MCP server and your product becomes a tool any AI assistant can pick up and use.

The discovery piece matters too. Adding a /.well-known/mcp.json file means AI clients can find your endpoints automatically, no manual configuration needed. Our scanner checks for this as the MCP Discovery checkpoint (4.3), one of the signals that directly determine whether agents can find you.
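A quick way to probe a site for that file from the shell; the well-known path comes from the convention above, while curl and python3 -m json.tool are generic stand-ins rather than an MCP client:

```shell
# Build the well-known MCP discovery URL for a domain and try to fetch it.
mcp_discovery_url() {
  echo "https://$1/.well-known/mcp.json"
}

check_mcp_discovery() {
  # Pretty-print the manifest if present; report absence otherwise.
  curl -fsS "$(mcp_discovery_url "$1")" | python3 -m json.tool 2>/dev/null \
    || echo "no MCP discovery file found for $1" >&2
}

check_mcp_discovery example.com
```

Running this against your own domain is a one-line sanity check that the discovery surface actually exists where clients will look for it.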

3. Machine-Readable Documentation

Your tutorial videos and interactive getting-started guides? Agents can't use those. They need docs they can parse. Markdown, llms.txt, OpenAPI specs, structured knowledge bases.

llms.txt (our checkpoint 1.4) was built for exactly this. A single file at your domain root that tells AI models what your site does and how to navigate it. Think of it as robots.txt for comprehension instead of crawling. We also check for OpenAPI specs (checkpoint 4.5) and agents.json (checkpoint 4.6), all formats that make your product machine-discoverable.
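For illustration, a minimal llms.txt might look like the sketch below, following the commonly described shape (an H1 name, a blockquote summary, then sections of links); the product name and URLs are placeholders:

```markdown
# Example Product

> One-sentence summary of what the product does and who it serves.

## Docs

- [API reference](https://example.com/docs/api.md): endpoints and auth
- [Quickstart](https://example.com/docs/quickstart.md): install and first call

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

Pointing the links at markdown versions of your pages, rather than rendered HTML, is what makes the file genuinely useful to an agent with a limited context window.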

4. Agent Protocol Endpoints

The protocol stack beyond MCP is filling in fast. WebMCP handles browser-based agent interactions, Google's A2A lets agents delegate tasks to each other, and agents.json advertises what your AI agents can do. Each one gives agents another structured way to find and work with your product.

Our scanner's Agent Protocols category (15% of total score) evaluates this entire stack: WebMCP, A2A Agent Card, MCP discovery, OpenAPI, agents.json, form quality, and interactive surface coverage. If agents can't find you through any of these, they won't use you.

The Competitive Dynamics Are Brutal

Picture this. Your competitor ships an MCP server. Now every Claude Code user, every Cursor session, every autonomous pipeline can discover and use their product. No human visits the website. No sales call. No onboarding email. The agent finds the tool, reads the schema, and starts using it.

We know this movie. Twenty years ago, companies that invested in SEO early captured organic traffic that compounded for years. Everyone else spent the next decade buying ads to reach the same audience. Agent accessibility is following the same playbook, except the timeline is compressed from years to months.

Goldman projects $780 billion in application software by 2030, with agents capturing 60%+ of those economics. The profit pool shifts from per-seat subscriptions to workflow-completion pricing. If agents can't reach your product, you're not part of that economy.

Application software by 2030: $780B
Captured by agents: 60%+

What to Do Right Now

Enough about why. Here's what to actually do, in order of impact:

  1. Ship a CLI with JSON output. If your product has an API, wrap it in a CLI that accepts JSON input and outputs structured JSON. This is the lowest-friction path to agent accessibility. Use --output json or auto-detect when stdout isn't a TTY.
  2. Deploy an MCP server. Use the official TypeScript or Python SDK. Expose your core actions as MCP tools. Add /.well-known/mcp.json for discovery.
  3. Export docs in markdown. Create an llms.txt file at your domain root. Publish an OpenAPI spec. Make sure your documentation is available in plain text or markdown, not just rendered HTML.
  4. Add agents.json. Advertise your AI agent capabilities with a machine-readable agents.json file. This tells other agents what your product can do and how to interact with it.
  5. Measure your agent readiness. Scan your site to see where you stand across all 47 checkpoints, from crawler access and structured data to agent protocols and security headers.

We Built a CLI Too: @isagentready/cli

I practice what I preach. Alongside our MCP server and Claude Code skill, I've shipped @isagentready/cli, a command-line tool that scans any website for AI agent readiness straight from your terminal. It implements every agent-friendly CLI pattern discussed in this post.

Install it globally or run it directly with npx:

# Install globally
npm install -g @isagentready/cli

# Or run directly
npx @isagentready/cli scan example.com

A single command gives you scores, letter grades, and a category breakdown across all 47 checkpoints:

isagentready scan example.com
A+  example.com  98/100
  Scanned in 1.7s

  Categories

  ███████████████ 100%  AI Content Discovery (30% weight)
  █████████████░░  89%  AI Search Signals (20% weight)
  ███████████████ 100%  Content & Semantics (20% weight)
  ███████████████ 100%  Agent Protocols (15% weight)
  ███████████████ 100%  Security & Trust (15% weight)

Add --verbose to see every individual checkpoint, or --json for machine-readable output. But here's where it gets interesting for agents: the CLI auto-detects when stdout is piped and switches to JSON automatically:

# Piped → auto-JSON, no flag needed
isagentready results example.com | jq '.overall_score'

# Field masks to save token budget
isagentready results example.com --fields domain,overall_score,letter_grade

# Browse the rankings leaderboard
isagentready rankings --grade high --json

# Stream all rankings as NDJSON
isagentready rankings --page-all --fields domain,overall_score | wc -l

# Validate without hitting the API
isagentready scan example.com --dry-run

# Runtime schema introspection for agents
isagentready schema scan

Every pattern Justin Poehnelt recommended is here. Auto-detect non-TTY mode, explicit --output json|text override, field masks for context window constraints, --dry-run for validation, NDJSON streaming, and a schema command for runtime introspection. The CLI also ships with a CONTEXT.md file, structured guidance that AI agents can read to understand conventions and invariants.

The result: any AI agent (Claude Code, Cursor, a custom pipeline) can install the CLI and immediately start scanning websites, pulling scores, and querying rankings without reading a single docs page. Three access paths to the same data: the web scanner, the CLI, and the MCP server. Humans get the GUI, agents get the interface that works for them.

How Our Scanner Measures Agent Accessibility

Everything Karpathy called out maps to signals we already measure. Here's how the "Build for Agents" checklist aligns with our 47 checkpoints:

| Agent Surface Area | Our Checkpoints | Category |
|---|---|---|
| MCP server / discovery | MCP Discovery (4.3), WebMCP Declarative (4.1), WebMCP Manifest (4.2) | Agent Protocols |
| Machine-readable docs | llms.txt (1.4), OpenAPI (4.5), agents.json (4.6) | Discovery + Protocols |
| Structured data for agents | JSON-LD (2.1), Schema types (2.2), Entity linking (2.4) | AI Search Signals |
| Crawler access | robots.txt (1.1), AI crawler directives (1.2), HTTP bot access (1.8) | AI Content Discovery |
| Agent protocol stack | A2A Agent Card (4.4), Form quality (4.7), Manifest schema (4.8) | Agent Protocols |
| Security & trust | HTTPS (5.1), HSTS (5.3), CSP (5.4), CORS (5.7) | Security & Trust |

The Paradigm Shift Is Already Here

None of this is speculative. It's already happening. As one commenter on Karpathy's post put it: "Agents don't browse, they execute." We spent a decade making everything visual because buttons felt like the future. In 2026, the most powerful users of your product don't have eyes.

The web was built for human browsers. The next layer is being built for agents. Products that expose CLIs, MCP servers, and structured docs are already picking up distribution through agent workflows. Distribution their marketing team never had to orchestrate. Products that only offer a GUI are quietly dropping out of the conversation.

Karpathy closed with three words: "Build. For. Agents." It's not a slogan. It's the new minimum viable product.
