Field Notes · AI Development

B2A: Business
to Agent

The third distribution model. Your customers are AI agents finding you before a human ever sees your URL. The conversion funnel is a JSON schema. MCP is the new SEO.

B2B. B2C. B2A.

The conventional model is two-way. You build a product. Humans find it through some discovery surface (search, ads, word of mouth, app stores). Humans use it through some interface (web, mobile, desktop). The two distribution categories everyone knows: B2B for products sold to businesses, and B2C for products sold to consumers. Different sales motions, different pricing, different buyers, same fundamental shape: a human chooses your product.

The emerging model is different. You build a product. An AI agent, tasked by a human with a problem your product solves, discovers your service via MCP, calls your API on the human's behalf, completes the task, and returns the result. The human never typed your URL. The human never saw your landing page. The human never compared you against competitors. The user, in any meaningful sense, never chose you. Their agent did.

The user never chose you. Their agent did.

This is B2A: Business to Agent. The distribution channel is the AI orchestrator. The buyer is the agent. The human is downstream of the transaction, often learning about it only after it has happened. If your product is the one an agent reaches for when tasked with a problem in your domain, you've won that customer. If your product is invisible to agents, you don't exist from the agent's perspective, no matter how good your website looks to a human reader who isn't there.

Websites once optimized for Google's crawler. Products now optimize for LLM orchestrators.

SEO emerged because search was the discovery layer. Google's crawler decided which products humans saw, and structuring your content for that crawler was the difference between visibility and obscurity. A whole industry of practitioners spent two decades getting good at this. Title tags, schema markup, page speed, backlinks, content quality signals.

The discovery layer changed. AI agents using MCP (Model Context Protocol) are now the crawlers. An agent tasked with a problem queries available MCP servers, reads their capability descriptions, and selects the one whose tools fit the task. The agent doesn't read your homepage. The agent reads your tool definitions, your input schemas, your example invocations, your error semantics. Whatever metadata your MCP server exposes is the conversion funnel for the agent.
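Concretely, the surface an agent evaluates looks less like a homepage and more like a tool listing. A minimal sketch in Python of the kind of metadata an MCP server returns from tool discovery; the tool name, description, and schema here are hypothetical, but the shape mirrors an MCP tools/list response:

```python
# A hypothetical MCP-style tool listing. This dict mirrors the shape of a
# tools/list response -- the only "page" an agent ever reads about you.
tool_listing = {
    "tools": [
        {
            "name": "get_shipping_quote",  # hypothetical tool name
            "description": (
                "Return a shipping cost quote in USD for a parcel, given "
                "origin and destination postal codes and weight in kg."
            ),
            "inputSchema": {  # JSON Schema for the tool's inputs
                "type": "object",
                "properties": {
                    "origin_zip": {"type": "string"},
                    "dest_zip": {"type": "string"},
                    "weight_kg": {"type": "number", "minimum": 0.01},
                },
                "required": ["origin_zip", "dest_zip", "weight_kg"],
            },
        }
    ]
}

# An orchestrator's selection step reduces to matching the task against
# these names and descriptions. Nothing else about the product is visible.
def tool_names(listing):
    return [t["name"] for t in listing["tools"]]

print(tool_names(tool_listing))  # ['get_shipping_quote']
```

Everything downstream of discovery (selection, invocation, billing) hangs off this one structure, which is why its wording carries the weight a landing page used to carry.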

You're not fighting for eyeballs anymore. You're fighting for API calls.

The practitioners who get good at this are doing for MCP what SEO experts did for search: studying which capability descriptions get selected, which tool names get matched, which input schemas reduce hallucination on the agent side, which examples raise selection probability. None of this discipline has a name yet. It will. The window for first-movers is the period before there's a name for it.

A standard MCP server isn't a feature you ship alongside your product. It's the only surface an agent will ever see. Your homepage is a courtesy to the humans who happen to visit it. Your MCP server is the actual store.

No homepage. No onboarding. No sign-up button.

In B2C, the conversion funnel has steps. Land on the homepage. Watch the demo. Sign up. Configure. Use. Each step has a drop-off rate, and the discipline of conversion optimization is squeezing drop-off out of every step.

In B2A, the funnel has no steps in the human sense. The agent recognizes your tool as a fit (one step, internal to the agent). The agent calls your API (one step, programmatic). The task completes. There is no landing page to bounce off, no demo to skip, no sign-up form to abandon, no configuration screen to give up on. The human finds out later, when the agent returns with a result, or doesn't find out at all because the result was satisfactory and didn't need explanation.

What replaces the conversion funnel is the capability description. A JSON schema, an OpenAPI doc, an MCP tool definition. That's your funnel. The clarity, specificity, and reliability of that description is the difference between an agent choosing you and an agent choosing a competitor.

Dimension | Traditional API / Product | B2A via MCP
Discovery | Human reads docs (ReadMe, Swagger) | Agent queries via reflection (list_tools)
Integration | Weeks of custom client code | Minutes via standardized protocol
Primary user | Human clicking a UI | Autonomous agent making calls
Billing trigger | User action in UI | Agentic caller, OAuth-delegated
Competition | SEO, ads, word of mouth | MCP catalog position, capability quality
Onboarding | Sign-up, config, tutorial | None; tool is called or not

The implication for builders: your conversion funnel is now a capability description in a JSON schema. The same product can succeed in B2C with a beautiful landing page and fail in B2A with a vague tool description, and vice versa. They are different distribution surfaces serving different buyers.

Solo devs ship MCP servers in an afternoon. Enterprises take quarters.

Every market shift produces a window where small operators can move faster than incumbents. The B2A shift is one of those windows, and the asymmetry is unusually large.

A solo developer using agentic coding tools can ship a functional MCP server for a product in an afternoon. Read the OpenAPI spec, generate MCP tool definitions, deploy on two transports (local STDIO for desktop agents, remote HTTP for cloud agents), register in a public or private MCP catalog. The work is mostly mechanical and the tools to do it are improving rapidly.
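The "mostly mechanical" part can be sketched: given an OpenAPI operation, the corresponding MCP-style tool definition is largely a field mapping. A hedged illustration in Python; the spec fragment and operation name below are invented, and a real converter would also handle request bodies, auth, and responses:

```python
# Sketch: derive an MCP-style tool definition from one OpenAPI operation.
# The operation below is a made-up fragment, not any real product's spec.
openapi_op = {
    "operationId": "searchInvoices",
    "summary": "Search invoices by customer and date range.",
    "parameters": [
        {"name": "customer_id", "schema": {"type": "string"}, "required": True},
        {"name": "after", "schema": {"type": "string", "format": "date"}},
    ],
}

def to_mcp_tool(op):
    """Map an OpenAPI operation onto the tool-definition shape MCP uses:
    name, description, and a JSON Schema for inputs."""
    props, required = {}, []
    for p in op["parameters"]:
        props[p["name"]] = p["schema"]
        if p.get("required"):
            required.append(p["name"])
    return {
        "name": op["operationId"],
        "description": op["summary"],
        "inputSchema": {"type": "object", "properties": props, "required": required},
    }

tool = to_mcp_tool(openapi_op)
print(tool["name"], tool["inputSchema"]["required"])  # searchInvoices ['customer_id']
```

The afternoon-sized job is running this mapping over a whole spec, then hand-tuning the descriptions, which is the part agents actually select on.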

A large company has too much bureaucracy to open its ecosystem at that pace. The MCP exposure has to be reviewed by security, by legal, by product, by partnerships. Approval cycles are measured in quarters, not afternoons. By the time the enterprise ships its first MCP server, the solo dev's server has already been called by agents thousands of times, accumulated usage signals, and tuned its capability descriptions based on which selections agents made and which ones they didn't.

First-mover advantage in B2A is measured in weeks, not years.

The window doesn't stay open forever. The same enterprises that move slowly to ship the first MCP server move heavily once they decide it matters; they will deploy at a scale a solo dev cannot match. But the first wave of agent-tasked workflows is being routed right now, by agents that have to pick from whichever servers exist today, not from the better servers some incumbent will eventually ship. If you're the only MCP server in your category right now, you are the default. Defaults in agent-tasked workflows compound. The agent that called you yesterday is more likely to call you tomorrow because the orchestrator's tool-selection memory is sticky on what worked.

Every product needs an MCP server. The agentic tax is real.

In 2010, every product needed a Twitter integration. The reason was simple: that's where attention was, and ignoring it meant being invisible in the discovery layer that mattered. Some products integrated immediately, some held out, and the holdouts mostly disappeared as the integration became table stakes.

In 2026, every product needs an MCP server. The reason is the same. Agentic workflows are the new discovery layer; ignoring them means being invisible to the buyers (agents) who route the traffic. The "agentic tax" on every new product is the time it takes to author tool definitions, deploy on the required transports, and register where agents will find you. The tax isn't optional. It's the cost of being a buyable product in the new distribution model.

Practical implications:

OAuth delegation becomes load-bearing. The agent acts on behalf of the user, but the bill lands on the user. Standard OAuth flows have to handle agentic-caller scenarios cleanly: which user is being billed, which scopes the agent is authorized for, how revocation works when the user no longer wants the agent calling on their behalf. Solving this correctly is the difference between safe agent-routed transactions and a class of fraud nobody knows how to defend against yet.
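The three questions above (who is billed, what scopes, how revocation works) can be sketched as an authorization check. The subject-versus-actor split follows OAuth token-exchange conventions, but the claim layout and policy details here are illustrative, not any provider's API:

```python
# Sketch of the checks an API runs on an agent-delegated call.
# "sub" is the user on whose behalf the call is made (and who is billed);
# "act" identifies the agent actually making the call. Illustrative only.
revoked_grants = {("user_42", "agent_7")}  # user_42 revoked agent_7

def authorize_agent_call(claims, needed_scope):
    user = claims["sub"]                      # who gets billed
    agent = claims.get("act", {}).get("sub")  # who is actually calling
    if agent is None:
        return False, "not a delegated call; use the normal path"
    if (user, agent) in revoked_grants:
        return False, "delegation revoked by user"
    if needed_scope not in claims.get("scope", "").split():
        return False, f"agent not granted scope {needed_scope!r}"
    return True, f"bill {user}, executed by {agent}"

claims = {"sub": "user_42", "act": {"sub": "agent_9"}, "scope": "quotes:read"}
print(authorize_agent_call(claims, "quotes:read"))
```

The revocation set is the piece most products don't have yet: a user-facing switch that severs one agent's standing permission without touching the user's own access.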

The "single tool that does one thing well" doctrine returns. Agents pick tools that are easy to reason about. A single-purpose MCP tool with clear input/output is selected over a multi-purpose tool with overlapping capabilities. The Unix philosophy is back in style, retrofitted for the agentic era.

Error semantics matter more than they did in B2C. A human sees a vague error message and tries again. An agent sees a vague error message and stops calling your tool, because vagueness is a hallucination risk. Sharp, machine-actionable error responses are now a competitive advantage.
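The contrast can be made concrete: a vague error gives the agent nothing to branch on, while a structured one names the cause and a path forward. A sketch; the error codes and fields are illustrative, not from any particular spec:

```python
# Vague: a human might shrug and retry; an agent learns nothing and
# deselects the tool next time.
vague_error = {"error": "Something went wrong."}

# Machine-actionable: a code to branch on, the offending field, and a
# concrete remedy the agent can apply before retrying.
actionable_error = {
    "error": {
        "code": "INVALID_INPUT",
        "field": "weight_kg",
        "message": "weight_kg must be a positive number",
        "retryable": True,
    }
}

def agent_can_recover(err):
    """An agent can recover only when the error names a cause (a stable
    code) and signals whether a corrected retry is worth attempting."""
    detail = err.get("error")
    return isinstance(detail, dict) and "code" in detail and detail.get("retryable", False)

print(agent_can_recover(vague_error), agent_can_recover(actionable_error))  # False True
```

Stable error codes matter more than prose messages here: the code is what an orchestrator can branch on deterministically, while the message is only context for the model.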

Capability descriptions are marketing copy. Not in the conventional sense (the agent doesn't care about brand voice), but in the sense that this is the only text the buyer reads. Tightness, specificity, and discriminability against competing capabilities are the equivalent of conversion-optimized landing-page copy. The product that describes its capabilities with precision wins the selection contest.

The human-to-product distribution layer didn't disappear. A new layer was added underneath it.

B2B and B2C still exist. Humans still buy products. What's changed is that agents now buy products on behalf of humans, and the agents see a completely different surface than humans see. The same company can have a beautiful homepage that humans love and an MCP server that agents won't touch, or a barely-styled homepage and an MCP server that gets called by every orchestrator in the ecosystem. The two surfaces are independent. Optimizing one does not optimize the other.

Whoever ships the first credible MCP server in a category captures the default. Defaults compound. The window is open right now.