Ismat Samadov
  • Tags
  • About
16 min read/3 views

MCP Explained: The Protocol Connecting LLMs to Everything

MCP went from Anthropic side project to industry standard in 16 months. Here is how it works and why it matters.

AILLMMCPPython

Related Articles

OWASP Top 10 for LLM Applications: The Attacks Your AI App Isn't Ready For

15 min read

Testing LLM Applications Is Nothing Like Testing Regular Software — Here's What Actually Works

14 min read

Rate Limiting, Circuit Breakers, and Backpressure: The Three Patterns That Keep Distributed Systems Alive

18 min read

Enjoyed this article?

Get new posts delivered to your inbox. No spam, unsubscribe anytime.

On this page

  • The Numbers
  • What MCP Actually Is
  • MCP Servers Worth Knowing
  • MCP vs Function Calling: The Confusion
  • MCP vs A2A: Not a Competition
  • Building Your First MCP Server
  • The Security Problem Nobody Wants to Talk About
  • The Practical Guide: Adopting MCP Safely
  • What Most MCP Articles Get Wrong
  • What I Actually Think
  • Sources

© 2026 Ismat Samadov

RSS

Six months ago, connecting Claude to a Postgres database required writing custom code. Connecting it to Slack required different custom code. Each new integration was a bespoke project. Today, I type one config line, point at an MCP server, and it just works. Same server works with GPT, Gemini, or any other model. No code changes.

The Model Context Protocol went from an Anthropic side project to an industry standard faster than any protocol I've seen in twenty years of software development. Faster than REST replaced SOAP. Faster than GraphQL gained traction. And it's reshaping how every AI application talks to the outside world.

The Numbers

MCP's adoption curve is unlike anything in recent developer tooling history.

Anthropic launched MCP in November 2024 with about 2 million monthly SDK downloads. OpenAI adopted it in April 2025, pushing downloads to 22 million. Microsoft integrated it into Copilot Studio by July 2025 at 45 million. AWS Bedrock added support in November 2025. By March 2026, monthly downloads crossed 97 million.

That's a 48x increase in 16 months.

The ecosystem exploded alongside adoption. There are now over 10,000 MCP servers indexed across public registries, with 5,000+ community-built servers. Major SaaS companies — Atlassian, Figma, Asana — ship official MCP servers. Developer tools like Cursor, Replit, and Zed have native MCP client support.

In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation, co-founded with Block and OpenAI, backed by AWS, Google, Microsoft, Salesforce, and Snowflake. When every major AI company co-signs a protocol, that's not hype. That's infrastructure.

Gartner predicts 40% of enterprise applications will include AI agents by end of 2026, up from less than 5% today. MCP is how most of those agents will connect to the real world.

What MCP Actually Is

Strip away the marketing and MCP is a client-server protocol built on JSON-RPC 2.0. That's it. It defines how an AI application (the client) discovers and calls tools provided by external services (the servers).

The analogy everyone uses is USB-C. Before USB-C, every device had its own charger, its own cable, its own connector. MCP does the same thing for AI integrations. Before MCP, every LLM-to-tool connection was custom. After MCP, you write one server and every compatible AI client can use it.

The architecture has three layers:

Host — the application the user interacts with. Claude Desktop, an IDE, a custom app. The host runs one or more MCP clients.

Client — maintains a connection to a single MCP server. Handles protocol negotiation, capability discovery, and message routing.

Server — exposes tools, resources, and prompts to the client. A Postgres MCP server exposes query tools. A GitHub MCP server exposes repo operations. A Slack MCP server exposes messaging.

The communication looks like this:

{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": {
      "sql": "SELECT count(*) FROM users WHERE created_at > '2026-01-01'"
    }
  },
  "id": 1
}

The server receives this, executes the query, and returns the result. The LLM never touches the database directly — the MCP server mediates everything.

MCP supports two transport mechanisms: stdio for local servers (fast, no network overhead) and streamable HTTP for remote servers (supports authentication, streaming, and multi-tenant deployments).

MCP Servers Worth Knowing

The ecosystem is huge now, but most developers interact with a handful of servers. Here are the ones I actually use:

ServerWhat It DoesBest For
PostgreSQLNatural language to SQL, schema explorationData analysis, reporting
GitHubPR summaries, issue triage, code reviewDev workflow automation
SlackRead/write messages, search history, manage canvasesTeam communication agents
FilesystemRead/write/search local filesCode generation, file management
Brave SearchWeb search with structured resultsResearch agents
PlaywrightBrowser automation, web scrapingTesting, data collection
SentryError tracking, issue analysisDebugging workflows
MemoryPersistent key-value storageLong-running agent state

The official Anthropic servers are the safest starting point. Community servers vary wildly in quality — some are excellent, some are security nightmares. More on that later.

MCP vs Function Calling: The Confusion

This trips up almost everyone. "Isn't MCP just function calling with extra steps?"

No. And understanding why matters.

Function calling is a capability of the LLM itself. When a model decides it needs external data, it outputs structured JSON describing which function to call and with what arguments. Your application code then executes that function and feeds the result back.

MCP is the protocol that standardizes how those functions are discovered, described, and executed. MCP uses function calling under the hood. They're different layers of the same stack.

Here's the practical difference:

# Function calling: you define tools inline, per-provider
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "How many users signed up today?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "query_database",
            "description": "Run a SQL query",
            "parameters": {
                "type": "object",
                "properties": {
                    "sql": {"type": "string"}
                }
            }
        }
    }]
)
# MCP: tools are discovered automatically from the server
# This same server works with ANY MCP-compatible client
# No tool definitions in your code — the server declares them

The key differences in a table:

AspectFunction CallingMCP
Defined byEach LLM provider (different schemas)Open standard (one schema)
Tool discoveryManual — you list tools in each requestAutomatic — client queries server
PortabilityVendor-locked (OpenAI format != Anthropic format)Provider-agnostic
State managementYou build itProtocol handles it
Best forQuick prototypes, 2-3 toolsProduction systems, many integrations
OverheadMinimalSlight (extra protocol layer)

One hidden downside of function calling is vendor lock-in. Each provider has its own schema. Switching from OpenAI to Anthropic means rewriting all your tool definitions. With MCP, the same server works with both. No code changes.

My rule of thumb: if you have fewer than 3 tools and one LLM provider, function calling is fine. Anything beyond that, use MCP.

MCP vs A2A: Not a Competition

Google launched the Agent-to-Agent protocol (A2A) in early 2025. The inevitable "MCP vs A2A" articles followed. But this is a false dichotomy.

MCP connects agents to tools. A2A connects agents to other agents. They solve different problems.

Think of it this way: MCP is how your agent reads a database or sends a Slack message. A2A is how your research agent hands off findings to your writing agent, which hands off a draft to your review agent. Different layers, same system.

Both protocols are now under the Linux Foundation's Agentic AI Foundation. Google contributed A2A, Anthropic contributed MCP. They're designed to work together. A2A reached v1.0 in early 2026 with support for gRPC, signed Agent Cards, and multi-tenancy.

If you're building a single agent that needs to access external tools: MCP. If you're building a multi-agent system where agents delegate tasks to each other: A2A. If you're building a serious production system: probably both.

Building Your First MCP Server

Here's a minimal MCP server in TypeScript. This one exposes a single tool that checks the weather:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({
  name: "weather",
  version: "1.0.0",
});

server.tool(
  "get_weather",
  "Get current weather for a city",
  { city: { type: "string", description: "City name" } },
  async ({ city }) => {
    const res = await fetch(
      "https://api.weatherapi.com/v1/current.json?key=YOUR_KEY&q=" + city
    );
    const data = await res.json();
    return {
      content: [{
        type: "text",
        text: data.location.name + ": " + data.current.temp_c + "C, " + data.current.condition.text
      }]
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

To connect this to Claude Desktop, add it to your config:

{
  "mcpServers": {
    "weather": {
      "command": "npx",
      "args": ["tsx", "weather-server.ts"]
    }
  }
}

That's it. Claude can now check the weather. The same server works with Cursor, Zed, or any MCP-compatible client without modification.

The official SDK supports TypeScript, Python, and Kotlin. TypeScript is the most mature. Python is catching up fast.

The Security Problem Nobody Wants to Talk About

Here's where I get concerned.

MCP's adoption is outpacing its security story. Badly.

Invariant Labs demonstrated that a malicious MCP server could silently exfiltrate a user's entire WhatsApp history by combining tool poisoning with a legitimate server. Hidden instructions in the malicious server's tool descriptions caused the LLM to send hundreds of past messages to an attacker-controlled endpoint. The user saw nothing.

That's not a theoretical attack. It worked.

The top MCP security risks include:

Tool poisoning — malicious instructions embedded in tool descriptions that are visible to the LLM but hidden from the user. The LLM follows these instructions because it treats tool metadata as trusted context. This is the most dangerous attack vector because it's invisible.

Prompt injection via data — a GitHub issue or Slack message contains hidden instructions. When the MCP server retrieves this data and passes it to the LLM, the model follows the injected instructions. Researchers demonstrated this with GitHub, stealing repository data through crafted issue descriptions.

OAuth bypass — many public MCP servers don't verify requests or protect user sessions. Some accept completely unauthenticated calls. CVE-2025-6514 was a command injection bug in mcp-remote that let malicious servers achieve remote code execution on client machines through crafted authorization endpoints.

Cross-server tool shadowing — a malicious MCP server intercepts calls meant for a trusted server by registering tools with similar names. The LLM doesn't verify which server it's talking to.

The protocol itself lacks inherent security enforcement. Authentication, authorization, input validation — all left to individual implementations. That's like designing a network protocol and saying "encryption is optional." Some servers do it right. Many don't.

The 2026 MCP roadmap acknowledges these gaps and promises improvements. But today, right now, if you're running community MCP servers in production, you're accepting significant risk.

The Practical Guide: Adopting MCP Safely

If you're starting with MCP today, here's the path that minimizes risk and maximizes value:

Week 1: Start with official servers only

Install Claude Desktop or Claude Code and connect the official filesystem and PostgreSQL MCP servers. Get comfortable with the protocol using trusted, Anthropic-maintained code.

# Install Claude Code (includes MCP client)
npm install -g @anthropic-ai/claude-code

Week 2: Add one integration that matters

Pick the MCP server that solves your biggest pain point. For most developers, that's GitHub (code review), Slack (team communication), or a database server (data analysis). Stick with servers from the official repository.

Week 3: Build a custom server

You'll inevitably need something specific — your internal API, your custom database, your proprietary tool. Build a minimal MCP server using the official SDK. Start with one tool. Add more as needed.

Week 4: Security audit

Before going to production, review every MCP server you're running:

CheckWhy It Matters
Is it from the official repo or a trusted vendor?Community servers may contain tool poisoning
Does it require authentication?Unauthenticated servers are open attack surfaces
What permissions does it have?Over-permissioned servers can leak data
Are tool descriptions clean?Hidden instructions in descriptions enable attacks
Is the transport encrypted?Stdio is local-only; HTTP needs TLS

Ongoing: Monitor and update

MCP servers are software. They have bugs. They get CVEs. Pin versions, watch for security advisories, and update promptly.

What Most MCP Articles Get Wrong

The biggest misconception: "MCP replaces APIs."

It doesn't. MCP is a protocol layer that sits on top of APIs. Your MCP server still calls REST APIs, database connections, and file system operations under the hood. MCP just standardizes how the LLM discovers and invokes those operations.

For high-performance, low-latency applications, direct API calls are still more efficient. MCP adds a reasoning layer — the model decides which tool to call and how to combine results. That's powerful for AI agents but adds latency you don't want in a hot path.

The second misconception: "MCP is only for Anthropic/Claude."

That was true for about five months. OpenAI, Google, Microsoft — everyone supports it now. The protocol is genuinely provider-agnostic. I've tested the same MCP servers with Claude, GPT-4o, and Gemini without changing a single line of server code.

Third misconception: "MCP is just function calling with a fancy name."

I covered this above, but it bears repeating. Function calling is a model capability. MCP is an integration standard. They're different layers that work together. Confusing them leads to architectural decisions you'll regret.

What I Actually Think

MCP is the most important protocol in AI right now. And it's also dangerously premature for production use without significant caution.

The protocol design is elegant. Client-server over JSON-RPC 2.0 is proven infrastructure. The USB-C analogy is accurate — write one server, connect to any client. The adoption numbers are real. The Linux Foundation governance gives it long-term stability. This is going to be a foundational piece of how AI applications work for the next decade.

But the security situation scares me. Tool poisoning is an attack that most developers don't even know exists, and there's no protocol-level defense against it. The community MCP server ecosystem is a Wild West of unaudited code with access to databases, file systems, and communication platforms. We're repeating the npm supply chain problem, except this time the malicious package gets to talk to your LLM and convince it to do things.

My advice: use MCP, but treat every server like untrusted code. Run official servers when they exist. Audit community servers line by line before connecting them. Never give an MCP server more permissions than it absolutely needs. And watch the security advisories like your production systems depend on it — because they do.

The protocol will mature. The security story will improve. But right now, in April 2026, we're in the "move fast and break things" phase of MCP adoption. The teams that succeed will be the ones that move fast and think about what breaks.


Sources

  1. Introducing the Model Context Protocol — Anthropic
  2. Model Context Protocol — Wikipedia
  3. A Year of MCP: From Internal Experiment to Industry Standard — Pento
  4. 2026: The Year for Enterprise-Ready MCP Adoption — CData
  5. The State of MCP: Adoption, Security and Production Readiness — Zuplo
  6. MCP Architecture Overview — Model Context Protocol
  7. JSON-RPC Protocol in MCP — MCPCat
  8. MCP Transports — Model Context Protocol
  9. Model Context Protocol Servers — GitHub
  10. Slack MCP Server Overview — Slack API
  11. Function Calling vs MCP — Fast.io
  12. MCP vs Function Calling — Descope
  13. MCP vs Function Calling: How They Actually Work Together — Portkey
  14. A2A vs MCP: Guide to AI Agent Protocols 2026 — AIMojo
  15. MCP vs A2A: Protocols for Multi-Agent Collaboration — Auth0
  16. Agent-to-Agent Protocol (A2A) — Google Developers Blog
  17. MCP Horror Stories: GitHub Prompt Injection — Docker
  18. Top 10 MCP Security Risks — Prompt Security
  19. MCP Vulnerabilities Every Developer Should Know — Composio
  20. A Timeline of MCP Security Breaches — AuthZed
  21. MCP Security Vulnerabilities — Practical DevSecOps
  22. Shortcomings of MCP Explained — CData
  23. MCP's Biggest Growing Pains — The New Stack
  24. Why the Model Context Protocol Won — The New Stack
  25. MCP vs APIs: When to Use Which — Tinybird