A2A Protocol (Agent-to-Agent)
The open standard for connecting AI agents across vendors
The Problem: You built an AI agent with Claude, your colleague built one with GPT, and your partner company uses a custom open-source agent. They can't talk to each other — each uses proprietary formats. Multi-vendor agent orchestration is impossible without a universal standard.
The Solution: A2A — Universal Language for AI Agents
The A2A (Agent-to-Agent) protocol is Google's open standard for inter-agent communication, released in April 2025 with 50+ partners. Each agent publishes an agent card — a JSON document describing its capabilities, skills, and endpoint. Clients discover agents, create tasks with a defined lifecycle (submitted → working → input-required → completed → failed), and receive results via Server-Sent Events streaming or webhook push notifications. The protocol is HTTP-based with enterprise-grade auth (OAuth2, API keys).
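The agent card described above might look like the sketch below. Field names follow the general shape of an A2A agent card (name, capabilities, skills, auth), but treat the exact details as illustrative rather than a spec-exact document:

```javascript
// Sketch of an agent card, the JSON document an agent serves at a
// well-known URL. Field names are illustrative, not spec-exact.
const agentCard = {
  name: "ResearchAgent",
  description: "Searches the web and returns structured findings",
  url: "https://research-agent.example/a2a",
  version: "1.0.0",
  capabilities: { streaming: true, pushNotifications: true },
  authentication: { schemes: ["bearer"] },
  skills: [
    { id: "web-search", name: "Web search", description: "Query the public web" },
  ],
};

// A client can sanity-check a card before delegating work to the agent
// (helper name is our own, not part of the protocol):
function supportsSkill(card, skillId) {
  return card.skills.some((s) => s.id === skillId);
}
```

A client would fetch this card, call `supportsSkill(card, "web-search")`, and only then create a task against the agent's endpoint.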
Think of it like USB-C for AI agents — one universal standard instead of dozens of proprietary cables. An agent card is like a passport (who am I, what can I do, how to reach me), and a task is like a work order (submitted, in progress, done):
- 1. Agent publishes agent card: Each agent describes its capabilities in a standard JSON format: name, description, skills, endpoint URL, auth requirements
- 2. Client discovers capabilities: The client fetches the agent card from a known URL, reads what the agent can do, and decides whether to use it
- 3. Task created & negotiated: Client sends a task. The agent may start working immediately, or request additional input (input-required state) before proceeding
- 4. Results streamed back: Results arrive via SSE streaming (real-time) or webhook push notifications. Supports artifacts (files, structured data) alongside text responses
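The task lifecycle from step 3 (submitted → working → input-required → completed / failed) can be modeled as a small state machine. The transition table below is a sketch derived from the states named above; the helper function is our own, not part of the protocol:

```javascript
// Allowed task-state transitions, mirroring the lifecycle in the text.
// A terminal state (completed, failed) allows no further transitions.
const TRANSITIONS = {
  submitted: ["working"],
  working: ["input-required", "completed", "failed"],
  "input-required": ["working", "failed"],
  completed: [],
  failed: [],
};

// Check whether a state change reported by an agent is legal.
function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```

A client listening to a task's event stream can use a check like this to reject impossible updates, e.g. a task jumping from `completed` back to `working`.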
Where A2A Protocol Is Used
- Multi-vendor orchestration: Connect agents from different providers (Google, Microsoft, open-source) into unified workflows without proprietary lock-in
- Enterprise workflows: Chains of specialized agents: research → analysis → report. Each agent handles its part, results flow via standard A2A protocol
- Marketplace of agents: Standard interface enables an "app store" for AI agents. Discover capabilities via agent cards, integrate via standard protocol
- Common Pitfall: Confusing A2A with MCP. MCP connects an LLM to tools (databases, APIs, file systems) — like USB for peripherals. A2A connects agents to agents — like HTTP for web services. They are complementary, not competing
Fun Fact: A2A was launched with 50+ partners including Salesforce, SAP, MongoDB, and LangChain. The protocol is designed to be the HTTP of the agent world — simple enough to implement in an afternoon, but powerful enough for enterprise-scale multi-agent orchestration with streaming, auth, and artifact exchange.
Try It Yourself!
Explore the interactive visualization below to see how agents discover each other, negotiate tasks, and stream results through the A2A protocol.
How a client discovers and selects an agent via Agent Card
GET https://research-agent.example/.well-known/agent.json

A2A turns each agent into a "microservice with intelligence". Just as HTTP let web services communicate without knowing each other's internals, A2A lets agents collaborate knowing only each other's Agent Card.
Scenario: coordinate a research agent and an analysis agent for a market-research task
Without a standard, the integration is a brittle point-to-point call chain:

```javascript
// Each vendor exposes its own API shape, so the client hard-codes both.
fetch("https://research-vendor.com/api/v2/search", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "AI market" }),
})
  .then((res) => res.json())
  .then((data) =>
    fetch("https://analysis-co.io/api/analyze", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ data }),
    })
  );
```

Problem: if research-vendor changes its response format (v2 → v3), the analysis-co integration breaks, and the parsing code must be rewritten each time.
Step 1: Discovery

```
GET https://research-agent.example/.well-known/agent.json
→ { "name": "ResearchAgent", "skills": ["web-search"], "endpoint": "/tasks" }
```

Step 2: Create task

```
POST https://research-agent.example/tasks
{ "task": { "message": "Research AI agent market 2025" } }
→ { "taskId": "t-123", "state": "working" }
```

Step 3: Get result (SSE)

```
GET https://research-agent.example/tasks/t-123/stream
→ event: state_change
  data: { "state": "completed", "artifacts": [...] }
```

Step 4: Pass artifacts to the Analysis Agent

```
GET https://analysis-agent.example/.well-known/agent.json
POST https://analysis-agent.example/tasks
{ "task": { "message": "Analyze data", "artifacts": [...] } }
```
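The steps above can be sketched as a small client function. Endpoints and payload shapes follow the example requests; the polling loop stands in for the SSE stream, and the per-task GET endpoint is an assumption for illustration:

```javascript
// Minimal A2A client sketch covering discovery, task creation, and
// waiting for completion. fetchFn is injectable for testing.
async function runTask(baseUrl, message, fetchFn = fetch) {
  // Step 1: discovery — read the agent card
  const card = await (await fetchFn(`${baseUrl}/.well-known/agent.json`)).json();

  // Step 2: create a task with the user's message
  const created = await (
    await fetchFn(`${baseUrl}${card.endpoint}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ task: { message } }),
    })
  ).json();

  // Step 3: wait for a terminal state. A real client would subscribe to
  // the SSE stream; polling a per-task URL (assumed here) keeps the
  // sketch simple.
  let task = created;
  while (task.state === "submitted" || task.state === "working") {
    task = await (await fetchFn(`${baseUrl}${card.endpoint}/${created.taskId}`)).json();
  }
  return task; // e.g. { taskId, state: "completed", artifacts: [...] }
}
```

Step 4 is then just calling `runTask` again with the analysis agent's base URL, passing the first agent's artifacts along in the task payload.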
Benefit: swapping out any agent is just a URL change; the message format stays the same.
A2A turns custom API integrations into a standard protocol. Instead of rewriting code when changing vendors — just change the Agent Card URL.
This lesson is part of a structured LLM course.