Ismat Samadov

Google's A2A Protocol: How AI Agents Will Talk to Each Other

A2A lets AI agents discover, delegate, and coordinate without knowing each other's internals. Here is how it works.

Tags: AI, LLM, A2A, Python


Google quietly dropped the Agent-to-Agent protocol in April 2025 with a blog post and a GitHub repo. Eleven months later, it has 22,700 GitHub stars, backing from over 150 organizations including AWS, Microsoft, and Salesforce, and a permanent home under the Linux Foundation. The protocol that lets AI agents talk to each other — without knowing anything about each other's internals — is becoming the TCP/IP of the agentic era.

I ignored A2A for months. I thought it was Google trying to counter Anthropic's MCP, another protocol war that would fizzle out. I was wrong. A2A solves a problem that MCP doesn't touch, and every team building multi-agent systems will eventually need it.

The Numbers Behind the Shift

The AI agent market is exploding in a way that makes the protocol question urgent.

The global AI agents market was valued at $7.63 billion in 2025 and is projected to reach $182.97 billion by 2033 — a 49.6% CAGR. The multi-agent system market specifically is expected to grow from $8 billion in 2026 to $25.47 billion by 2030 at a 33.6% CAGR.

Here's the stat that caught my attention: Gartner reported a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025. That's not gradual interest. That's a stampede.

But here's the catch — single-agent systems still held 59.24% of market revenue in 2025. Most companies haven't made the leap to multi-agent yet. When they do, they'll need a way for those agents to coordinate. That's A2A.

The protocol reached v1.0 in early 2026 with gRPC support, signed Agent Cards, and multi-tenancy. Google donated it to the Linux Foundation in June 2025, with AWS, Cisco, Microsoft, Salesforce, SAP, and ServiceNow as founding members. This isn't a Google project anymore. It's an industry standard in the making.

What A2A Actually Does

A2A is a protocol for AI agents to discover each other, negotiate capabilities, delegate tasks, and exchange results — without knowing anything about each other's internal architecture.

That last part is the key insight. Your agent might be built with LangGraph. The agent it's talking to might use CrewAI. A third might be a custom Python system with no framework at all. A2A doesn't care. It treats every agent as an opaque black box that exposes capabilities through a standard interface.

The official specification defines five core concepts:

Agent Card — a JSON document at /.well-known/agent.json that describes what the agent can do. Think of it as a digital business card. It lists the agent's name, description, skills, authentication requirements, and endpoint URL. Any client can fetch this card and understand what the agent offers.

Task — the fundamental unit of work. A client creates a task, the remote agent processes it, and it moves through a lifecycle of states: submitted, working, input-required, completed, or failed. Tasks can be short (answered immediately) or long-running (hours or days).

Message — a communication turn between agents. Each message has a role ("user" from the client, "agent" from the server) and contains one or more Parts.

Part — the actual content. Text, files, structured data, or references to external resources. Parts are the atoms of agent communication.

Artifact — an output generated by the remote agent. A completed task might produce a document, a dataset, an analysis — these are artifacts.
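The task lifecycle is small enough to sketch as a state machine. Here is a minimal legality check; the state names come from the spec, but the transition map is my reading of it, not normative protocol text:

```python
# Task lifecycle states from the A2A spec; the transition map is an
# illustrative interpretation, not copied from the specification.
TRANSITIONS = {
    "submitted": {"working", "failed"},
    "working": {"input-required", "completed", "failed"},
    "input-required": {"working", "failed"},
    "completed": set(),  # terminal
    "failed": set(),     # terminal
}

def can_transition(current: str, new: str) -> bool:
    """Return True if a task may legally move from `current` to `new`."""
    return new in TRANSITIONS.get(current, set())
```

The two terminal states are what make long-running tasks tractable: a client can poll a task ID for days and trust that `completed` and `failed` are final.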

The Architecture

A2A follows a client-server model over HTTPS using JSON-RPC 2.0. One agent (the client) discovers and sends work to another agent (the server). The server processes the work and returns results.

Here's what the flow looks like:

  1. Discovery: The client fetches the remote agent's Agent Card from /.well-known/agent.json
  2. Task creation: The client sends a tasks/send request with a message describing what it needs
  3. Processing: The remote agent works on the task, potentially requesting additional input
  4. Completion: The remote agent returns artifacts and marks the task as completed

The communication is transport-flexible. Version 0.3 added gRPC support alongside HTTP and Server-Sent Events (SSE), giving you streaming responses and better performance for high-throughput scenarios.
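Because the wire format is plain JSON-RPC, a client needs nothing beyond an HTTP library. A sketch of building the discovery URL and a `tasks/send` envelope; the helper names are my own:

```python
def agent_card_url(base_url: str) -> str:
    """Where the Agent Card lives, per the A2A discovery convention."""
    return base_url.rstrip("/") + "/.well-known/agent.json"

def send_task_request(task_id: str, text: str, rpc_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tasks/send` envelope for a text-only task."""
    return {
        "jsonrpc": "2.0",
        "method": "tasks/send",
        "params": {
            "id": task_id,
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
            },
        },
        "id": rpc_id,
    }
```

POST that dict to the agent's endpoint URL from its card, and you have completed steps 1 and 2 of the flow above.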

Here's what an Agent Card looks like in practice:

{
  "name": "Research Agent",
  "description": "Searches academic papers and summarizes findings",
  "url": "https://research-agent.example.com",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "authentication": {
    "schemes": ["bearer"]
  },
  "skills": [
    {
      "id": "paper-search",
      "name": "Search Papers",
      "description": "Search academic papers by topic and return summaries",
      "tags": ["research", "academic", "summarization"],
      "examples": [
        "Find recent papers on transformer architectures",
        "Summarize the top 5 papers on retrieval augmented generation"
      ]
    }
  ],
  "defaultInputModes": ["text/plain"],
  "defaultOutputModes": ["text/plain", "application/json"]
}

And here's a task request:

{
  "jsonrpc": "2.0",
  "method": "tasks/send",
  "params": {
    "id": "task-001",
    "message": {
      "role": "user",
      "parts": [
        {
          "kind": "text",
          "text": "Find the 3 most cited papers on LLM evaluation from 2025"
        }
      ]
    }
  },
  "id": 1
}

The remote agent processes this and returns:

{
  "jsonrpc": "2.0",
  "result": {
    "id": "task-001",
    "status": {
      "state": "completed"
    },
    "artifacts": [
      {
        "parts": [
          {
            "kind": "text",
            "text": "Top 3 most cited LLM evaluation papers from 2025: ..."
          }
        ]
      }
    ]
  },
  "id": 1
}

Clean, stateless on the wire, stateful through task IDs.

Building an A2A Agent in Python

The official Python SDK (`a2a-sdk` on PyPI) makes this straightforward. Here's a minimal agent that exposes one skill; the class and helper names follow recent SDK versions, so check them against your installed release:

from a2a.server.agent_execution import AgentExecutor, RequestContext
from a2a.server.apps import A2AStarletteApplication
from a2a.server.events import EventQueue
from a2a.server.request_handlers import DefaultRequestHandler
from a2a.server.tasks import InMemoryTaskStore
from a2a.types import AgentCard, AgentSkill, AgentCapabilities
from a2a.utils import new_agent_text_message
import uvicorn

# Define what this agent can do
skill = AgentSkill(
    id="translate",
    name="Translate Text",
    description="Translates text between languages",
    tags=["translation", "language"],
    examples=["Translate 'hello world' to Spanish"],
)

# Create the Agent Card (field casing may differ across SDK versions)
agent_card = AgentCard(
    name="Translation Agent",
    description="Translates text between 50+ languages",
    url="http://localhost:8000",
    version="1.0.0",
    skills=[skill],
    capabilities=AgentCapabilities(streaming=False),
    defaultInputModes=["text/plain"],
    defaultOutputModes=["text/plain"],
)

async def translate(text: str) -> str:
    # Placeholder: call your real translation model or API here
    raise NotImplementedError

# Implement the agent logic
class TranslationExecutor(AgentExecutor):
    async def execute(self, context: RequestContext, event_queue: EventQueue) -> None:
        # Get the text of the incoming user message
        text = context.get_user_input()

        # Do the translation (simplified)
        translated = await translate(text)

        # Emit the result back to the client
        await event_queue.enqueue_event(new_agent_text_message(translated))

    async def cancel(self, context: RequestContext, event_queue: EventQueue) -> None:
        # Required by the abstract base class; this agent's tasks are short-lived
        raise NotImplementedError

# Wire the executor into a request handler and start the server
handler = DefaultRequestHandler(
    agent_executor=TranslationExecutor(),
    task_store=InMemoryTaskStore(),
)
app = A2AStarletteApplication(agent_card=agent_card, http_handler=handler)
uvicorn.run(app.build(), host="0.0.0.0", port=8000)

Once running, any A2A client can discover this agent at http://localhost:8000/.well-known/agent.json and start sending translation tasks. The client doesn't need to know this agent uses Python, which translation API it calls, or how it processes requests internally. It just sends text and gets translations back.

A2A vs MCP: Different Layers, Same Stack

This is the comparison everyone asks about, and most articles get it wrong by framing them as competitors.

MCP connects agents to tools. A2A connects agents to agents. They're complementary protocols that operate at different layers of the same system.

| Aspect | MCP | A2A |
|---|---|---|
| Created by | Anthropic (Nov 2024) | Google (Apr 2025) |
| Purpose | Connect agents to tools/data | Connect agents to other agents |
| Communication model | Agent calls a tool | Agent delegates to another agent |
| Discovery | Server exposes tool list | Agent Card at .well-known/agent.json |
| Transparency | Client sees tool internals | Agents are opaque to each other |
| State | Implicit in tool calls | Explicit task lifecycle |
| Best analogy | USB-C port (plug in any tool) | Phone network (call any agent) |
| GitHub stars | 132K (LangChain ecosystem) | 22.7K |
| Downloads | 97M monthly SDK downloads | Early adoption phase |
| Governance | Linux Foundation (AAIF) | Linux Foundation (AAIF) |

Here's how they work together in practice: your customer support agent uses MCP to access the database (check order status), the knowledge base (find help articles), and Slack (notify the team). When it encounters a billing question it can't handle, it uses A2A to delegate to a specialized billing agent — which has its own MCP connections to the payment system.

MCP gives agents hands. A2A gives agents the ability to ask for help.
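In code, that division of labor is just a branch: handle locally with MCP-backed tools, or delegate over A2A. A toy coordinator to make the shape concrete; every function here is a stand-in stub, not a real MCP or A2A call:

```python
def check_order_status(order_id: str) -> str:
    # Stand-in for an MCP tool call into the order database
    return f"order {order_id}: shipped"

def delegate_to_billing_agent(question: str) -> str:
    # Stand-in for an A2A tasks/send to the billing agent's endpoint
    return f"billing agent handled: {question}"

def support_agent(question: str) -> str:
    """Route: use local tools when possible, delegate when out of scope."""
    if "billing" in question.lower():
        return delegate_to_billing_agent(question)  # A2A: ask for help
    return check_order_status("A-42")               # MCP: use your hands
```

In a real system the routing decision would come from the LLM itself, but the two exits, tool call versus delegation, are exactly the two protocols.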

Both protocols are now under the Linux Foundation's Agentic AI Foundation. Google contributed A2A, Anthropic contributed MCP. The fact that they share governance signals that the industry sees them as parts of one stack, not rival standards.

The Full Protocol Stack

Here's how all three layers — function calling, MCP, and A2A — fit together:

| Layer | Protocol | What It Does | Example |
|---|---|---|---|
| 1. Model capability | Function Calling | LLM outputs structured tool-call JSON | GPT decides to call get_weather() |
| 2. Tool integration | MCP | Standardizes tool discovery and execution | Claude connects to Postgres, GitHub, Slack |
| 3. Agent collaboration | A2A | Agents discover and delegate to other agents | Research agent sends task to analysis agent |

Function calling is the mechanism. MCP is the tool standard. A2A is the collaboration standard. You don't choose between them — you use each where it fits.

Most teams start at layer 1 (function calling for a prototype), add layer 2 (MCP when they need multiple tools), and eventually need layer 3 (A2A when they have multiple specialized agents). The progression is natural.

Who's Using A2A in Production

Production deployments are still early, but several major companies are in:

Adobe is using A2A to make its distributed agents interoperable with Google Cloud's ecosystem, enabling cross-platform collaboration for digital experience creation.

S&P Global Market Intelligence adopted A2A as its protocol for inter-agent communication, standardizing how their agents share financial data and analysis across the organization.

ServiceNow built AI Agent Fabric — a multi-agent communication layer connecting ServiceNow, customer, and partner-built agents through A2A.

Tyson Foods and Gordon Food Service are building collaborative A2A systems to share product data and leads between their agents in real-time, reducing supply chain friction.

Huawei announced at MWC 2026 that it would open-source A2A-T, a telecom-specific variant of the A2A protocol for network operations.

The pattern: large enterprises with multiple existing agent systems that need to communicate. That's where A2A's value is clearest.

The Problems With A2A

I wouldn't be honest if I didn't lay out the issues. A2A has real limitations.

O(n²) scaling. A2A uses direct peer-to-peer connections. With 4 agents, you need 6 connections. With 50 agents, you need 1,225. The protocol doesn't include a message broker or routing layer; you build that yourself or watch your connection count explode.
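The arithmetic is unforgiving: a full mesh of n agents needs n(n-1)/2 links.

```python
def mesh_connections(n: int) -> int:
    """Number of point-to-point links in a full mesh of n agents."""
    return n * (n - 1) // 2

# 4 agents -> 6 links, 50 agents -> 1,225, 200 agents -> 19,900
```

Each link also means credentials, retries, and timeout handling, so the operational cost grows faster than the raw number suggests.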

No strong typing for skills. Agent Cards describe skills in natural language, but don't require machine-readable input/output schemas. The "translate" skill says it translates text, but doesn't formally define that it expects a source_language, target_language, and text parameter with specific types. This makes automated orchestration harder than it should be.
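One workaround I've seen teams use is embedding a JSON Schema in the skill metadata and validating inputs themselves. The `inputSchema` key below is a convention, not part of the A2A spec:

```python
skill = {
    "id": "translate",
    "name": "Translate Text",
    # Hypothetical extension field -- NOT defined by the A2A spec
    "inputSchema": {
        "type": "object",
        "required": ["text", "target_language"],
        "properties": {
            "text": {"type": "string"},
            "source_language": {"type": "string"},
            "target_language": {"type": "string"},
        },
    },
}

def missing_fields(payload: dict, schema: dict) -> list[str]:
    """Cheap required-field check; use the `jsonschema` package for full validation."""
    return [f for f in schema.get("required", []) if f not in payload]
```

Until the spec mandates machine-readable schemas, this kind of private convention only works between agents you control.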

Security is immature. The specification supports authentication schemes, but trust establishment between agents is weak. "Tool squatting" — maliciously registering fake agents with legitimate-sounding names — is a real risk. Malicious instructions can propagate between agents via A2A, creating attack chains.

HTTP handling inconsistencies. Different agents might require different headers, authentication schemes, timeout settings, and retry logic. The spec is flexible (which is good), but that flexibility creates interoperability headaches in practice.

Still young. The spec was at v0.2.2 for most of 2025, reaching v1.0 only in early 2026. The ecosystem is thin. The tooling is early. Production examples are limited to large companies with dedicated teams.

Honestly? These are the growing pains of any new protocol. HTTP had similar issues in the early 1990s. The question isn't whether A2A has problems — it's whether the problems are fixable. I think they are.

A Practical Decision Framework

Here's how I'd decide what to use, starting from simple and moving to complex:

You have 1 agent with 1-3 tools
Use function calling directly. No protocol needed. Keep it simple.

# This is fine. Don't overcomplicate it.
# (assumes an OpenAI-style client, with messages and tool_definitions defined)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tool_definitions,
)

You have 1 agent with many tools or multiple LLM providers
Add MCP. Write MCP servers for your tools, connect via the standard client. Switch LLM providers without rewriting integrations.

You have multiple specialized agents that need to coordinate
Add A2A. Each agent publishes an Agent Card. A coordinator agent discovers and delegates tasks. Individual agents still use MCP for their tool access.

You have a large-scale multi-agent system (50+ agents)
A2A plus a message broker. The protocol doesn't solve routing at scale — add something like MQTT, NATS, or a custom gateway. The Linux Foundation's AgentGateway project is building exactly this.
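What a broker buys you is hub-and-spoke instead of full mesh: n agents make n connections to the hub rather than n(n-1)/2 to each other. A toy in-process version to show the shape; real deployments would use NATS, MQTT, or a gateway:

```python
from collections import defaultdict

class Broker:
    """Toy topic broker standing in for NATS/MQTT/a gateway."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        """Register an agent's handler for a topic."""
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, task: dict) -> list:
        """Deliver a task to every agent subscribed to the topic."""
        return [handler(task) for handler in self.subscribers[topic]]

broker = Broker()
broker.subscribe("billing", lambda task: f"billing: {task['text']}")
results = broker.publish("billing", {"text": "refund request"})
```

The A2A task and message formats stay the same; the broker only replaces who connects to whom.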

| Scenario | Recommended Stack | Complexity |
|---|---|---|
| Simple chatbot with tools | Function calling | Low |
| Multi-tool agent, provider-flexible | MCP | Medium |
| 2-5 specialized agents coordinating | MCP + A2A | Medium-High |
| Enterprise multi-agent platform | MCP + A2A + message broker | High |

Timeline expectations:

  • Function calling: hours to implement
  • MCP integration: days
  • A2A between 2 agents: 1-2 weeks
  • Full multi-agent A2A system: months

Don't jump to A2A because it sounds impressive. Start with the simplest thing that solves your problem and add protocols when you hit real limitations.

What Most A2A Articles Get Wrong

The biggest error I see: treating A2A as a replacement for MCP.

It's not. They operate at different layers. An agent that uses A2A to talk to other agents still needs MCP (or direct integrations) to access databases, APIs, and files. Dropping MCP for A2A is like dropping your file system because you have a network protocol. Both exist for a reason.

The second error: assuming A2A means autonomous agent swarms.

A2A is a communication protocol, not an autonomy framework. It doesn't decide which agents to call or how to decompose tasks. Your orchestration logic does that. A2A just gives you a standard way to send tasks and get results. The intelligence is in your code, not in the protocol.

Third error: ignoring the scaling problem.

Most A2A tutorials show two agents talking to each other. That works beautifully. The articles don't mention what happens with 20 agents, or 100. The O(n²) connection issue is real and will bite you if you don't plan for it. Any production multi-agent system needs a routing layer on top of A2A.

What I Actually Think

A2A is going to be important, but not yet.

Right now, most teams don't need it. They're still figuring out how to build one reliable agent, let alone coordinate multiple. The single-agent market share of 59% tells the story — the industry is still in the "make one agent work well" phase.

But that phase is ending fast. The 1,445% surge in multi-agent inquiries at Gartner isn't random. Companies are discovering that one agent can't do everything, that specialized agents working together outperform one generalist agent, and that they need those specialists to coordinate without custom integration code for every pair.

When that wave hits, A2A will be ready. The Linux Foundation governance is right. The backing from 150+ organizations is real. The specification is maturing. The v1.0 release with gRPC and signed Agent Cards closed the biggest gaps.

My bet: by mid-2027, A2A will be as standard for multi-agent systems as MCP is for tool integration today. The companies investing now — Adobe, S&P Global, ServiceNow — will have a head start. Everyone else will scramble to catch up.

If I were starting a new multi-agent project today, I'd build with A2A from day one. Not because I need agent-to-agent communication right now, but because retrofitting it later is harder than building it in. The Agent Card pattern is clean enough that even if you only have two agents, the overhead is minimal.

But if I were building a single agent? I'd skip A2A entirely and focus on MCP for tool integration. Don't add protocols you don't need. The right time to adopt A2A is when you have a second agent that needs to talk to the first. Not before.

The protocol wars are over before they started. MCP won the tool layer. A2A is winning the agent layer. Both live under the same foundation. The future isn't one or the other — it's both, together, at the right layers.


Sources

  1. Announcing the Agent2Agent Protocol (A2A) — Google Developers Blog
  2. Agent2Agent Protocol (A2A) is Getting an Upgrade — Google Cloud Blog
  3. What Is Agent2Agent (A2A) Protocol? — IBM
  4. A2A Protocol Specification
  5. A2A GitHub Repository
  6. A2A Python SDK — GitHub
  7. Linux Foundation Launches A2A Protocol Project
  8. Google Cloud Donates A2A to Linux Foundation — Google Developers Blog
  9. AI Agents Market Report — Grand View Research
  10. Multi-Agent System Market Size and Forecast — Research and Markets
  11. Agentic AI Statistics 2026: 150+ Data Points — Digital Applied
  12. A2A vs MCP: Guide to AI Agent Protocols 2026 — AIMojo
  13. MCP vs A2A: Protocols for Multi-Agent Collaboration — Auth0
  14. Function Calling vs MCP vs A2A — Zilliz
  15. Everything Wrong with A2A Protocol — Medium
  16. A2A for Enterprise-Scale AI Agent Communication — HiveMQ
  17. A2A Protocol Explained — CodiLime
  18. A2A Protocol: Secure Interoperability for Agentic AI — OneReach
  19. Huawei A2A-T Telecom Agent Protocol — TechNode
  20. Linux Foundation AgentGateway Project