Advanced MCP Server Development — Build Production-Grade AI Tool Infrastructure

Lesson 1 of 7 · 18 min

MCP Protocol Deep Dive — What Happens Under the Hood

Why Most MCP Tutorials Fail You

You've probably seen the standard MCP tutorial: clone this repo, run this command, it works. Great. Now change anything and it breaks. You don't know why because you don't understand the protocol.

This lesson fixes that. By the end, you'll understand every byte that flows between an AI client (Claude, Cursor, etc.) and your MCP server.

The Protocol Stack

MCP is built on three layers:

  1. Transport: How messages physically move between client and server (stdio, SSE, or Streamable HTTP)
  2. JSON-RPC 2.0: The message format — every MCP message is a JSON-RPC request, response, or notification
  3. MCP Protocol: The specific methods, tool schemas, and lifecycle defined by the Model Context Protocol spec

Transport Layer: stdio vs SSE vs Streamable HTTP

stdio (standard input/output): The simplest transport. The MCP client spawns your server as a child process and communicates via stdin/stdout. Every message is a newline-delimited JSON string. Used by Claude Code, Claude Desktop, and most local MCP clients.

Advantages: zero network configuration, works locally, fast. Disadvantages: one client per server instance, no remote access.
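The newline-delimited framing is simple enough to sketch by hand. Here is a minimal illustration in Python (the function names are mine, not from any SDK): one JSON message per line in, one per line out.

```python
import json


def read_message(stream):
    """Read one newline-delimited JSON-RPC message; returns None at EOF."""
    line = stream.readline()
    if not line:
        return None
    return json.loads(line)


def write_message(stream, message):
    """Serialize a message as a single line and flush immediately."""
    stream.write(json.dumps(message) + "\n")
    stream.flush()
```

In a real stdio server these would be called with sys.stdin and sys.stdout. The flush matters: stdout is block-buffered when not attached to a terminal, so without it the client can sit waiting for a reply your server already "sent".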

SSE (Server-Sent Events over HTTP): The server runs as an HTTP server. The client connects to an SSE endpoint for server-to-client messages and POSTs to another endpoint for client-to-server messages. Used for remote MCP servers.

Advantages: remote access, multiple clients possible, works through firewalls. Disadvantages: more complex setup, latency overhead.

Streamable HTTP: The newest transport (2025). Simpler than SSE — uses standard HTTP POST with streaming responses. The direction MCP is heading for remote servers.

JSON-RPC 2.0: The Message Format

Every MCP message follows JSON-RPC 2.0. Three message types:

Request: Client asks server to do something.

{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "search", "arguments": {"query": "MCP servers"}}}

Response: Server replies with result or error.

{"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "Found 42 results..."}]}}

Notification: One-way message, no response expected.

{"jsonrpc": "2.0", "method": "notifications/progress", "params": {"progressToken": "abc", "progress": 50, "total": 100}}
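The three shapes differ only by which fields are present, which makes them easy to tell apart mechanically. A small classifier (illustrative, not part of any SDK) makes the distinction concrete:

```python
def classify(message: dict) -> str:
    """Classify a JSON-RPC 2.0 message by its fields.

    - request:      has "method" and "id"
    - notification: has "method" but no "id"
    - response:     has "id" plus "result" or "error", no "method"
    """
    if "method" in message:
        return "request" if "id" in message else "notification"
    if "result" in message or "error" in message:
        return "response"
    raise ValueError("not a valid JSON-RPC 2.0 message")
```

The id field is what ties a response back to its request, which is why notifications, having no id, can never be answered.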

The MCP Lifecycle

When an AI client connects to your server, this happens in order:

  1. Initialize: Client sends initialize request with its capabilities. Server responds with its capabilities (tools, resources, prompts).
  2. Initialized notification: Client sends notifications/initialized — handshake complete.
  3. Tool discovery: Client calls tools/list. Server returns all available tools with their JSON Schema parameter definitions.
  4. Normal operation: Client calls tools/call with tool name and arguments. Server executes and returns results.
  5. Shutdown: The transport closes. MCP defines no explicit shutdown method — for stdio, the client closes the server's stdin and terminates the child process.

Understanding this lifecycle is critical. If your server doesn't respond correctly to initialize, no client will ever discover your tools.
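The lifecycle above can be sketched as a dispatch function. This is a hand-rolled illustration, not the official SDK (in practice the MCP SDKs handle the handshake for you), and the search tool is hypothetical — but it shows why a broken initialize handler is fatal: every later step is just another method routed through the same dispatcher.

```python
PROTOCOL_VERSION = "2025-03-26"  # version string used in the handshake example below

# Hypothetical tool definition, with its JSON Schema parameters
TOOLS = [{
    "name": "search",
    "description": "Search the index",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]


def handle(message: dict):
    """Return a JSON-RPC response dict, or None for notifications."""
    method = message.get("method")
    if method == "initialize":
        result = {
            "protocolVersion": PROTOCOL_VERSION,
            "capabilities": {"tools": {"listChanged": True}},
            "serverInfo": {"name": "my-mcp-server", "version": "0.1.0"},
        }
    elif method == "notifications/initialized":
        return None  # notification: no response allowed
    elif method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        args = message["params"]["arguments"]
        result = {"content": [{"type": "text",
                               "text": f"searched for {args['query']}"}]}
    else:
        return {"jsonrpc": "2.0", "id": message.get("id"),
                "error": {"code": -32601,
                          "message": f"method not found: {method}"}}
    return {"jsonrpc": "2.0", "id": message["id"], "result": result}
```

Note that notifications/initialized returns None: replying to a notification violates JSON-RPC, and some clients will treat the stray response as a protocol error.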

Capability Negotiation

During initialization, both sides declare what they support:

Server capabilities:

  • tools: Server provides callable tools
  • resources: Server provides readable resources (files, data)
  • prompts: Server provides prompt templates
  • logging: Server can send log messages to client

Client capabilities:

  • roots: Client can provide filesystem roots for the server
  • sampling: Client can fulfill LLM sampling requests from the server

Your server declares capabilities in the initialize response. Only declare what you actually implement — a client is entitled to exercise any capability you advertise, and a declared-but-unimplemented one will surface as errors the moment the client calls it.
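One way to keep that declaration honest (an illustrative pattern, not an SDK API) is to derive the capabilities object from the handlers you actually registered, so the two can never drift apart:

```python
def build_capabilities(handlers: dict) -> dict:
    """Declare only the capability groups that have a registered handler."""
    capability_names = ("tools", "resources", "prompts", "logging")
    return {name: {} for name in capability_names if name in handlers}
```

A server that registers only tool handlers then advertises exactly {"tools": {}} and nothing else.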

Practical Exercise: Trace a Real MCP Session

Enable debug logging in Claude Code (CLAUDE_MCP_DEBUG=1) and watch the raw JSON-RPC messages when you use an MCP tool. You'll see the exact initialize → tools/list → tools/call flow described above. Understanding this flow is the foundation for everything else in this course.

Code Examples

mcp-initialize-handshake.json
// Initialize request from client
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {
  "protocolVersion": "2025-03-26",
  "capabilities": {"roots": {"listChanged": true}},
  "clientInfo": {"name": "claude-code", "version": "1.0.0"}
}}

// Initialize response from server
{"jsonrpc": "2.0", "id": 1, "result": {
  "protocolVersion": "2025-03-26",
  "capabilities": {"tools": {"listChanged": true}},
  "serverInfo": {"name": "my-mcp-server", "version": "0.1.0"}
}}

Key Takeaways

  • MCP runs on three layers: transport (stdio/SSE/HTTP), JSON-RPC 2.0 (message format), and MCP protocol (methods and schemas)
  • The MCP lifecycle: initialize → capabilities negotiation → tool discovery → tool calls → shutdown
  • stdio transport is simplest for local servers; SSE/Streamable HTTP for remote access
  • Every MCP message is JSON-RPC 2.0 — understanding the format unlocks debugging and custom implementations
