MCP (Model Context Protocol) Explained: How AI Agents Connect to the Real World

9 March, 2026 AI

I was building a project where Claude Code needed to interact with a Postgres database, a local file system, and a third-party API — all in the same coding session. Without MCP, each integration was a separate, brittle shell command wrapped in natural language. The model would construct a curl command, I'd approve it, parse the output, and feed it back. It worked, but it was slow, error-prone, and felt like duct tape.

Then I set up MCP servers for each of those services. The model could query the database, read files, and call the API directly — through a standardised protocol, with typed schemas, without me manually shuttling data between tools. That was the moment MCP clicked for me. Not as an abstract spec, but as the thing that turns an AI chatbot into an actual agent.

What MCP Is

The Model Context Protocol is an open standard published by Anthropic that defines how AI applications communicate with external data sources and tools. Think of it as a USB-C port for AI agents — a universal connector that any tool can implement, and any AI client can consume.

Before MCP, every integration was bespoke. If you wanted Claude to query a database, you wrote custom code. If you wanted it to read from Jira, you wrote different custom code. Each integration had its own schema, its own error handling, its own authentication flow. N tools meant N custom integrations.

MCP replaces this with a single protocol. A tool implements the MCP server interface once, and any MCP-compatible client — Claude Code, Cursor, Windsurf, or your own application — can use it without additional glue code.

The Architecture: Clients, Servers, and Hosts

MCP uses a client-server model with three roles:

Host — the AI application the user interacts with. Claude Code, Cursor, or a custom app you build. The host manages connections to one or more MCP servers.

Client — a protocol client running inside the host. Each client maintains a 1:1 connection with a single MCP server. The host creates one client per server.

Server — a lightweight process that exposes specific capabilities. A Postgres MCP server exposes database queries. A GitHub MCP server exposes repository operations. A filesystem MCP server exposes file reading and writing.

┌─────────────────────────────────────┐
│  Host (Claude Code / Cursor / App)  │
│                                     │
│  ┌──────────┐  ┌──────────┐         │
│  │ Client 1 │  │ Client 2 │   ...   │
│  └────┬─────┘  └──────┬───┘         │
└───────┼───────────────┼─────────────┘
        │               │
  ┌─────┴──────┐  ┌─────┴──────┐
  │ MCP Server │  │ MCP Server │
  │ (Postgres) │  │ (GitHub)   │
  └────────────┘  └────────────┘

The communication uses JSON-RPC 2.0 over two transport options: stdio (for local processes) or HTTP (for remote servers — earlier spec revisions used HTTP with Server-Sent Events; newer revisions use Streamable HTTP). Stdio is simpler and is what most local MCP servers use — the host spawns the server as a child process and communicates through stdin/stdout.
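A minimal sketch of that framing, assuming newline-delimited messages over stdio (the id and method shown are illustrative):

```typescript
// Sketch of the JSON-RPC 2.0 framing MCP uses over the stdio transport.
// Each message is a single line of JSON: the host writes requests to the
// server's stdin and reads responses from its stdout.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {}
};

// What the host actually writes to the child process's stdin:
const wire = JSON.stringify(request) + "\n";

// The server parses it back into a structured request:
const parsed = JSON.parse(wire);
console.log(parsed.method); // tools/list
```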

What MCP Servers Expose

An MCP server can provide three types of capabilities:

Tools

Tools are functions the model can call. They have a name, a description, and a JSON Schema defining their inputs. When the model decides it needs to call a tool, it sends a request with the arguments, and the server returns the result.

{
  "name": "query",
  "description": "Run a read-only SQL query against the database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "sql": { "type": "string", "description": "The SQL query to execute" }
    },
    "required": ["sql"]
  }
}

This is the most commonly used capability. Most MCP servers are essentially tool servers.
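When the model invokes that tool, the exchange is a pair of JSON-RPC messages: a tools/call request and its result (the id, SQL, and response text here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": { "sql": "SELECT count(*) FROM users" }
  }
}
```

The server replies with a result whose content has the same shape a tool handler returns:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [{ "type": "text", "text": "42" }]
  }
}
```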

Resources

Resources are data the client can read. They're identified by URIs and can represent anything: a file, a database record, a screenshot, API documentation. Unlike tools, resources are read-only and don't perform actions.

postgres://localhost/mydb/tables/users
file:///home/project/src/config.yaml

Resources allow the model to pull context without executing code. Instead of running a query to see the database schema, the model can read the schema as a resource.
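Reading a resource is a plain request/response keyed by URI (the id, MIME type, and contents shown are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/read",
  "params": { "uri": "file:///home/project/src/config.yaml" }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "contents": [
      {
        "uri": "file:///home/project/src/config.yaml",
        "mimeType": "application/yaml",
        "text": "env: production"
      }
    ]
  }
}
```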

Prompts

Prompts are reusable templates the server offers. A Postgres server might expose an "explain-query" prompt that takes a SQL query and wraps it in instructions for the model to explain the execution plan. They're the least commonly used capability, but useful for standardising common workflows.

MCP vs Function Calling

If you've used the Claude or OpenAI API with tool definitions, MCP might sound familiar. Both involve the model calling functions with structured inputs. The difference is where the integration lives.

                 Function calling                  MCP
Definition       Per-application, in your code     Standardised, in the MCP server
Reusability      Custom per project                Any MCP client can use any server
Discovery        You hardcode available tools      Client discovers tools at runtime
Transport        HTTP API request/response         JSON-RPC over stdio or HTTP
Ecosystem        You build everything              Community servers available
Lifecycle        Stateless per API call            Persistent connection with state

Function calling is "I built this tool for my app." MCP is "anyone can build a server, and anyone's client can use it." It's the same shift that took hardware from proprietary connectors to USB.

In practice, you'll use both. Function calling for application-specific logic that only makes sense in your context. MCP for general-purpose integrations — databases, file systems, APIs, version control — that benefit from standardisation.

What a Real MCP Configuration Looks Like

In Claude Code, you configure MCP servers in your project settings:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres",
               "postgresql://localhost:5432/mydb"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem",
               "/home/project/docs"]
    }
  }
}

When Claude Code starts, it spawns each server as a child process, performs a capability handshake, and makes the server's tools available to the model. The model sees:

Available tools:
- postgres.query(sql) - Run a read-only SQL query
- postgres.list_tables() - List all tables in the database
- filesystem.read_file(path) - Read a file's contents
- filesystem.list_directory(path) - List directory contents

From there, the model can decide which tools to call based on the task. No manual data shuttling.
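The capability handshake mentioned above is itself a JSON-RPC exchange: the client sends an initialize request, the server answers with its own info and capabilities, and the client then calls tools/list to discover what's available. The version strings and names here are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "claude-code", "version": "1.0.0" }
  }
}
```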

The Ecosystem Right Now

The MCP ecosystem is growing fast but is still maturing. Here's an honest snapshot:

What works well:

  • File system, Git, and database servers are stable and useful
  • Claude Code, Cursor, and Windsurf all support MCP natively
  • The reference SDK (TypeScript and Python) makes building servers straightforward
  • Community servers for popular services — Slack, GitHub, Linear, Notion — exist and work

What's still rough:

  • Authentication for remote servers is inconsistent. OAuth-based authorisation is now part of the spec, but many servers still expect tokens passed as command-line arguments or environment variables
  • Error handling varies wildly between servers. Some return clean error messages, others crash silently
  • Discovery is manual. You need to know the server exists, find its npm package, and configure it yourself. There's no central registry like npm or PyPI yet
  • Security requires attention. An MCP server has the same permissions as the process running it. A poorly written server with access to your database is a security risk

Building Your Own MCP Server

If you need an integration that doesn't exist, building an MCP server is simpler than it sounds. The reference SDK handles the protocol layer — you implement the tools.

A minimal TypeScript server:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from
  "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

// Register a tool: name, description, a zod schema for the inputs,
// and an async handler that returns the tool's result content
server.tool(
  "greet",
  "Generate a greeting for a name",
  { name: z.string() },
  async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}!` }]
  })
);

// Serve over stdio: the host spawns this process and talks via stdin/stdout
const transport = new StdioServerTransport();
await server.connect(transport);

That's a working MCP server. Compile it and run it with node (or run the TypeScript directly with a tool like tsx), point an MCP client at it, and the model can call the greet tool. Real servers have more tools and handle errors, but the shape is the same.
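For completeness, here's what "point an MCP client at it" can look like using the same SDK's client side — a sketch assuming the server above has been compiled to server.js:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server as a child process and connect over stdio
const client = new Client({ name: "test-client", version: "1.0.0" });
const transport = new StdioClientTransport({
  command: "node",
  args: ["server.js"]
});
await client.connect(transport);

// Discover the server's tools, exactly as a host like Claude Code would
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Call the tool with typed arguments
const result = await client.callTool({
  name: "greet",
  arguments: { name: "Ada" }
});
```

This is the same discovery-then-call loop a host performs on your behalf when you add a server to its configuration.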

How MCP Changes Agent Workflows

The practical impact of MCP becomes clear when you think about how coding agents work. An agent like Claude Code follows a loop: read context, reason about the task, take an action, observe the result, repeat. MCP determines what actions are available in that loop.

Without MCP, an agent's actions are limited to shell commands and file operations. Need to check a deployment status? Construct a curl command. Need to query a database? Build a psql one-liner. Each action requires the model to know the CLI syntax for that specific tool.

With MCP, the agent has typed, documented tools. Instead of constructing curl -H "Authorization: Bearer $TOKEN" https://api.linear.app/..., it calls linear.get_issue(id: "ABC-123"). The model doesn't need to know the API syntax — just the tool's interface.

This matters for reliability. A model constructing shell commands has to get syntax, quoting, escaping, and error handling right every time. A model calling a typed MCP tool just fills in the parameters. The server handles the rest.

When to Use MCP and When Not To

Use MCP when:

  • You need the same integration across multiple AI clients
  • The integration is general-purpose (databases, APIs, file systems)
  • You want the model to have direct access rather than going through shell commands
  • You're building a tool that other developers might also want

Don't use MCP when:

  • The integration is one-off and specific to your application's business logic
  • A simple function call in your code does the job
  • You're working with sensitive systems where you want full control over every operation
  • The overhead of running a separate server process isn't justified

Where to Start

MCP is the standardisation layer that AI tool integration was missing. Instead of every application building custom connectors to every service, MCP provides a common protocol that any client can speak and any server can implement.

The protocol is simple — JSON-RPC over stdio, with tools, resources, and prompts as the three capabilities. The ecosystem is young but growing. The real value isn't in any single server — it's in the composability. Add a Postgres server, a GitHub server, and a filesystem server, and your AI agent can work across all three without you writing any glue code.

If you're building AI-powered applications, start by using existing MCP servers for your common integrations. If you hit a gap, the SDK makes building your own server a matter of hours, not days.
