Model Context Protocol (MCP) is an open standard for connecting AI assistants to external tools and data sources. Think of it as a plugin system for AI agents — one protocol, many implementations.
What is MCP?
MCP provides a standardized way for AI models to:
- Discover tools available in connected servers
- Invoke tools with typed parameters
- Receive structured results back
- Maintain context across multiple requests
Instead of each agent implementing custom integrations for Outlook, Notion, GitHub, etc., MCP servers expose standardized interfaces that any MCP client can use.
Analogy: MCP is to AI agents what REST APIs are to web apps — a common protocol for interoperability.
Architecture
graph LR
    subgraph "AI Agent"
        CLIENT[MCP Client]
    end
    subgraph "MCP Servers"
        OUTLINE[Outline Server<br/>search, read, create]
        TOOLS[5eTools Server<br/>D&D reference data]
        MIDJOURNEY[Midjourney Server<br/>image generation]
    end
    subgraph "Backend APIs"
        OUTLINE_API[Outline API]
        MIDJOURNEY_API[Midjourney API]
    end
    CLIENT -->|initialize| OUTLINE
    CLIENT -->|tools/list| OUTLINE
    CLIENT -->|tools/call| OUTLINE
    CLIENT --> TOOLS
    CLIENT --> MIDJOURNEY
    OUTLINE --> OUTLINE_API
    MIDJOURNEY --> MIDJOURNEY_API
    style CLIENT fill:#bbf,stroke:#333,stroke-width:2px
Three components:
- MCP Client — Embedded in the AI agent, discovers and calls tools
- MCP Server — Exposes tools via MCP protocol, wraps backend APIs
- Backend Service — Actual API (Outline, GitHub, etc.)
The server handles authentication, rate limiting, and API-specific logic. The client just calls standardized MCP methods.
Protocol Basics
MCP uses JSON-RPC 2.0 over stdio (subprocess) or HTTP (network).
Core Methods
| Method | Purpose | Example |
|---|---|---|
| initialize | Start session, exchange capabilities | Client announces supported protocol version |
| notifications/initialized | Confirm initialization complete | Client signals readiness after the initialize response |
| tools/list | Discover available tools | Returns list of tool names + schemas |
| tools/call | Invoke a tool with parameters | Call search_documents(query="MCP") |
| resources/list | List available data sources | Optional: expose files, databases, etc. |
| prompts/list | Get pre-defined prompt templates | Optional: suggest workflows to client |
Request/Response Flow
// Client → Server: Initialize
{
  "jsonrpc": "2.0",
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {}
  },
  "id": 1
}
// Server → Client: Initialized response
{
  "jsonrpc": "2.0",
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "tools": {}
    }
  },
  "id": 1
}
// Client → Server: List tools
{
  "jsonrpc": "2.0",
  "method": "tools/list",
  "id": 2
}
// Server → Client: Tool schemas
{
  "jsonrpc": "2.0",
  "result": {
    "tools": [
      {
        "name": "search_documents",
        "description": "Search Outline wiki for documents",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": {"type": "string"}
          },
          "required": ["query"]
        }
      }
    ]
  },
  "id": 2
}
// Client → Server: Call tool
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "search_documents",
    "arguments": {
      "query": "MCP integration"
    }
  },
  "id": 3
}
// Server → Client: Tool result
{
  "jsonrpc": "2.0",
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Found 3 documents matching 'MCP integration':\n1. Model Context Protocol (technology/)"
      }
    ]
  },
  "id": 3
}

Transport Options
stdio (Subprocess)
Default transport: Server runs as subprocess, communicates via stdin/stdout.
Pros:
- Simple to implement (no HTTP server needed)
- Low latency (local process)
- Process isolation (crashes don’t affect client)
Cons:
- Not network-accessible (single machine only)
- Requires subprocess management (lifecycle, cleanup)
Use case: Local tools, file system access, desktop integrations
Example:
import subprocess
import json
# Start MCP server
process = subprocess.Popen(
    ['python', 'mcp_server.py'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE
)

# Send request
request = {"jsonrpc": "2.0", "method": "tools/list", "id": 1}
process.stdin.write(json.dumps(request).encode() + b'\n')
process.stdin.flush()

# Read response
response = json.loads(process.stdout.readline())

HTTP (Network)
Network transport: Server runs as HTTP service, accepts POST requests.
Pros:
- Network-accessible (remote agents, shared infrastructure)
- Standard web tech (proxies, load balancers, auth)
- Easier debugging (curl, Postman)
Cons:
- Requires HTTP wrapper (Flask, Express, etc.)
- Session management needed (MCP expects stateful connections)
Use case: Shared MCP servers, cloud deployments, multi-agent systems
Example:
from flask import Flask, request, jsonify
app = Flask(__name__)
sessions = {}
@app.route('/mcp', methods=['POST'])
def mcp_endpoint():
    session_id = request.headers.get('X-Session-ID', 'default')
    if session_id not in sessions:
        sessions[session_id] = MCPServer()
    mcp_request = request.json
    result = sessions[session_id].handle(mcp_request)
    return jsonify(result)

HTTP Wrapper Patterns
Converting stdio-based MCP servers to HTTP requires careful handling of session state and transport differences.
Session Management
Problem: HTTP is stateless, but MCP expects persistent sessions.
Solution: Session dictionary keyed by client identifier.
sessions = {}

@app.route('/mcp', methods=['POST'])
def mcp_endpoint():
    session_id = request.headers.get('X-Session-ID', 'default')
    # Create session if missing
    if session_id not in sessions:
        sessions[session_id] = {
            'server': MCPServer(),
            'initialized': False
        }
    session = sessions[session_id]
    # Auto-initialize if needed
    if not session['initialized'] and request.json['method'] != 'initialize':
        session['server'].initialize({})
        session['initialized'] = True
    # Handle request
    result = session['server'].handle_request(request.json)
    return jsonify(result)

Key patterns:
- Use X-Session-ID header or generate a UUID per client
- Auto-initialize sessions on the first non-initialize request
- Expire sessions after a timeout (optional, for resource cleanup)
Flask Routing Issues
Problem: The proxy sends /mcp/ with a trailing slash, but the Flask route is /mcp without one, so requests 404.
Solution: Disable strict slash handling.
app = Flask(__name__)
app.url_map.strict_slashes = False  # Accept both /mcp and /mcp/

Type Handling
Problem: Pydantic models expect strict types, but APIs return flexible types.
Example: Model defines error_code: Optional[str], but Midjourney API returns integer 400.
Solution: Use flexible types or coerce in wrapper.
from pydantic import BaseModel, field_validator
from typing import Optional, Union

class TaskResult(BaseModel):
    error_code: Optional[Union[int, str]] = None  # Accept both

    @field_validator('error_code', mode='before')
    @classmethod
    def coerce_error_code(cls, v):
        if v is not None:
            return str(v)  # Normalize to string
        return v

Async Handling
Problem: MCP servers often use asyncio, but Flask is sync by default.
Solution: Run async handlers in event loop.
import asyncio

@app.route('/mcp', methods=['POST'])
def mcp_endpoint():
    session_id = request.headers.get('X-Session-ID', 'default')
    mcp_request = request.json
    # Run async handler in a fresh event loop
    loop = asyncio.new_event_loop()
    result = loop.run_until_complete(
        sessions[session_id].handle_async(mcp_request)
    )
    loop.close()
    return jsonify(result)

Or use async Flask (requires ASGI server like Hypercorn):
from quart import Quart, request, jsonify

app = Quart(__name__)

@app.route('/mcp', methods=['POST'])
async def mcp_endpoint():
    session_id = request.headers.get('X-Session-ID', 'default')
    mcp_request = await request.get_json()
    result = await sessions[session_id].handle_async(mcp_request)
    return jsonify(result)

Shared MCP Infrastructure
A common deployment pattern is to run shared MCP servers behind a gateway. See MCP Gateway for the practical layout (Docker Compose on a dedicated host, token-based permissions, secrets via Vaultwarden).
Why shared servers?
Instead of each agent implementing its own integration with each external system, agents connect to shared MCP servers. This:
- Eliminates code duplication — one implementation, many clients.
- Centralizes credentials — API keys live in one place (the server).
- Enables new capabilities cheaply — add an MCP server, every agent gets access.
- Maintains governance — server permissions controlled via the same review process as code.
Client Support: When Native Support Isn’t There
When an agent host or CLI doesn’t have native MCP support, three workarounds:
- Community plugin — a third-party plugin adds an MCP client to your host.
- HTTP proxy pattern — wrap MCP servers in REST APIs and call via a generic exec tool. More flexible (works with any HTTP client) but loses some MCP benefits (tool discovery, typed schemas).
- Direct integration — agents spawn MCP servers as subprocesses, manage stdio manually. Full control, no plugin needed; requires more agent code (subprocess lifecycle, JSON-RPC handling).
The HTTP proxy pattern via a gateway is the most common pragmatic choice when native support is absent.
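The direct-integration option can be sketched with the standard library alone. This is a minimal client under two assumptions: the server speaks newline-delimited JSON-RPC over stdio, and `mcp_server.py` is a hypothetical server script; real agents would also need to handle notifications and framing per the MCP spec.

```python
import json
import subprocess

class StdioMCPClient:
    """Minimal stdio MCP client: one JSON-RPC message per line."""

    def __init__(self, argv):
        self.proc = subprocess.Popen(
            argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
        self.next_id = 1

    def request(self, method, params=None):
        """Send one request and block for the matching response line."""
        req = {"jsonrpc": "2.0", "method": method, "id": self.next_id}
        if params is not None:
            req["params"] = params
        self.next_id += 1
        self.proc.stdin.write(json.dumps(req) + "\n")
        self.proc.stdin.flush()
        return json.loads(self.proc.stdout.readline())

    def close(self):
        """Close stdin so the server exits, then reap the subprocess."""
        self.proc.stdin.close()
        self.proc.wait()
```

Against a real server the sequence would be `initialize`, then the `notifications/initialized` notification, then `tools/list` and `tools/call`, e.g. `StdioMCPClient(["python", "mcp_server.py"])`.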
Tool Schema Design
Well-designed MCP tools have:
Clear, Action-Oriented Names
// Good
{"name": "search_documents", "description": "Search Outline wiki"}
{"name": "create_page", "description": "Create new wiki page"}
// Bad (vague, noun-based)
{"name": "documents", "description": "Do something with documents"}
{"name": "page", "description": "Page operations"}

Typed Input Schemas
{
  "name": "search_documents",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Search query (keywords or phrases)"
      },
      "limit": {
        "type": "number",
        "description": "Max results to return",
        "default": 10
      }
    },
    "required": ["query"]
  }
}

Specify:
- Parameter types (string, number, boolean, object, array)
- Descriptions (guide the AI on what to pass)
- Required vs optional
- Defaults (when sensible)
- Validation (min/max, pattern, enum)
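A server can enforce these schemas before dispatching a tool call. Full JSON Schema validation would use a library such as jsonschema; the sketch below is a stdlib-only stand-in that covers only required fields and primitive types, and `validate_arguments` is a hypothetical helper name.

```python
# Map JSON Schema primitive type names to Python types.
JSON_TYPES = {
    "string": str, "number": (int, float), "boolean": bool,
    "object": dict, "array": list,
}

def validate_arguments(schema: dict, arguments: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    for name in schema.get("required", []):
        if name not in arguments:
            errors.append(f"missing required parameter: {name}")
    for name, value in arguments.items():
        prop = schema.get("properties", {}).get(name)
        if prop is None:
            errors.append(f"unexpected parameter: {name}")
        elif "type" in prop:
            expected = JSON_TYPES[prop["type"]]
            # bool is a subclass of int in Python; reject it for "number"
            if isinstance(value, bool) and prop["type"] != "boolean":
                errors.append(f"{name}: expected {prop['type']}, got boolean")
            elif not isinstance(value, expected):
                errors.append(f"{name}: expected {prop['type']}")
    return errors
```

Rejecting calls with a descriptive error list gives the AI client something actionable to correct, instead of an opaque 500 from the backend API.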
Structured Outputs
Return consistent, parseable formats:
// Good (structured, parseable)
{
  "content": [
    {
      "type": "text",
      "text": "Found 3 results:\n1. Article A\n2. Article B\n3. Article C"
    },
    {
      "type": "resource",
      "resource": {
        "uri": "outline://doc/123",
        "name": "Article A",
        "mimeType": "text/markdown"
      }
    }
  ]
}
// Bad (unstructured string)
{
  "result": "Found some stuff, here's a blob of text..."
}

Debugging MCP Integrations
Test with curl (HTTP servers)
# Health check
curl http://192.168.0.250:3100/health
# Initialize session
curl -X POST http://192.168.0.250:3100/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-H "X-Session-ID: test-session" \
-d '{"jsonrpc":"2.0","method":"initialize","params":{"protocolVersion":"2024-11-05"},"id":1}'
# List tools
curl -X POST http://192.168.0.250:3100/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-H "X-Session-ID: test-session" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":2}'
# Call tool
curl -X POST http://192.168.0.250:3100/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-H "X-Session-ID: test-session" \
  -d '{
    "jsonrpc":"2.0",
    "method":"tools/call",
    "params":{
      "name":"search_documents",
      "arguments":{"query":"MCP"}
    },
    "id":3
  }'

Check Logs (stdio servers)
Run the server manually and pipe JSON-RPC requests to its stdin:
# Send a request; the response appears on the server's stdout
echo '{"jsonrpc":"2.0","method":"tools/list","id":1}' | python mcp_server.py

Common Errors
| Error | Cause | Fix |
|---|---|---|
| node: command not found | GitHub Actions in Python container | Use node:20-bookworm base image, install Python |
| 404 Not Found | Flask strict slashes | Set app.url_map.strict_slashes = False |
| Session not initialized | Calling tools before initialize | Auto-initialize sessions or require explicit initialize |
| Type validation error | Pydantic strict types vs API responses | Use Union types or coerce values |
| Connection refused | Server not running or wrong port | Check docker ps, verify port mapping |
Real-World Example: Midjourney MCP
A Midjourney MCP wrapper demonstrates several patterns:
Input: Natural language prompt → Output: Generated image URLs
Tools exposed:
- imagine — Generate image from text prompt
- describe — Get prompt suggestions from uploaded image
- blend — Combine multiple images
HTTP wrapper fixes applied:
- Added app.url_map.strict_slashes = False (proxy sends /mcp/)
- Implemented session auto-initialization (MCP requires initialize → tools)
- Changed error_code type from Optional[str] to Optional[int | str] (API returns int 400)
Usage:
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "imagine",
    "arguments": {
      "prompt": "Sahuagin emerging from dark waters around pirate ships at Blue Harbour",
      "version": "v7"
    }
  },
  "id": 1
}

Result: Image generated, saved to ~/artifacts/midjourney/, URL returned in MCP response.
Budget constraint: 2 images/day (Midjourney API pricing). Used for diary headers and D&D campaign visuals.
Fallback Patterns: Working Without MCP
When MCP infrastructure is unavailable (offline work, service outages, or custom requirements), use local tools for visual artifact generation.
Charts: Vega-Lite + vl-convert
Vega-Lite is a declarative grammar for charts. Generate the JSON spec (via LLM or manually), render locally.
# Install once
npm install -g vega vega-lite vl-convert-cli
# Generate spec (example: bar chart)
cat > chart.vl.json <<'EOF'
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "data": {"values": [
    {"category": "A", "value": 28},
    {"category": "B", "value": 55}
  ]},
  "mark": "bar",
  "encoding": {
    "x": {"field": "category", "type": "nominal"},
    "y": {"field": "value", "type": "quantitative"}
  }
}
EOF
# Render to PNG/SVG
vl2png chart.vl.json chart.png
vl2svg chart.vl.json chart.svg

Advantages:
- Works offline
- Spec is versioned in git (reproducible)
- Supports 26+ chart types (line, bar, scatter, treemap, sankey, etc.)
- Deterministic rendering (same spec → same output)
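The spec can also be generated programmatically, which keeps the data and the chart definition together in one versioned artifact. This is a sketch; `bar_chart_spec` is a hypothetical helper, and the field names mirror the shell example above.

```python
import json

def bar_chart_spec(values, x_field, y_field):
    """Build a minimal Vega-Lite v5 bar-chart spec from a list of records."""
    return {
        "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
        "data": {"values": values},
        "mark": "bar",
        "encoding": {
            "x": {"field": x_field, "type": "nominal"},
            "y": {"field": y_field, "type": "quantitative"},
        },
    }

spec = bar_chart_spec(
    [{"category": "A", "value": 28}, {"category": "B", "value": 55}],
    x_field="category", y_field="value")

with open("chart.vl.json", "w") as f:
    json.dump(spec, f, indent=2)  # then render: vl2png chart.vl.json chart.png
```

Since the spec is plain JSON, the same function can feed either the vl-convert CLI shown above or any other Vega-Lite renderer.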
Diagrams: Mermaid CLI
Mermaid renders flowcharts, sequence diagrams, and more from text descriptions.
# Install once
npm install -g @mermaid-js/mermaid-cli
# Create diagram
cat > diagram.mmd <<'EOF'
graph TD
A[Start] --> B{Decision}
B -->|Yes| C[Action 1]
B -->|No| D[Action 2]
EOF
# Render
mmdc -i diagram.mmd -o diagram.png
mmdc -i diagram.mmd -o diagram.svg

Use cases:
- Architecture diagrams
- Gantt charts (timelines)
- State machines
- ER diagrams
SVG Manipulation: ImageMagick
Post-process SVG outputs from Vega-Lite or Mermaid.
# Resize
convert input.svg -resize 800x600 output.png
# Add border/padding
convert input.svg -bordercolor white -border 20x20 output.png
# Composite (layer images)
convert base.png overlay.png -composite result.png

Pattern: Generate Spec → Render → Commit
- LLM generates declarative spec (Vega-Lite JSON, Mermaid DSL)
- Deterministic tool renders to PNG/SVG
- Validate output (file size > 1KB, dimensions correct)
- Commit both spec and output to artifacts repo
This approach:
- Works offline (no API dependencies)
- Auditable (spec is human-readable, versioned in git)
- Reproducible (re-render anytime by re-running the tool)
- Flexible (edit spec manually for fine-tuning)
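The validation step in the pattern above can be sketched with the standard library by parsing the PNG header for dimensions. `validate_png` is a hypothetical helper; the size and dimension thresholds are illustrative, and it assumes PNG output (SVG would need a different check).

```python
import struct

def validate_png(path: str, min_bytes: int = 1024,
                 min_width: int = 100, min_height: int = 100) -> bool:
    """Sanity-check a rendered PNG: non-trivial file size, plausible dimensions."""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < min_bytes:
        return False  # likely an empty or error-page render
    # PNG layout: 8-byte signature, 4-byte chunk length, b"IHDR",
    # then width and height as big-endian 32-bit integers.
    if data[:8] != b"\x89PNG\r\n\x1a\n" or data[12:16] != b"IHDR":
        return False
    width, height = struct.unpack(">II", data[16:24])
    return width >= min_width and height >= min_height
```

A render step would then be `vl2png chart.vl.json chart.png && python validate.py chart.png` before the commit, failing the pipeline on blank or truncated output.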
When to Use MCP vs Local Tools
Use MCP (chart/mermaid/infographic servers):
- Quick iteration during active sessions
- Standard patterns (personal data queries, known templates)
- MCP infrastructure is up and accessible
Use local tools:
- Offline work or unreliable connectivity
- Custom specs requiring manual editing
- Need to version control the spec alongside output
- MCP infrastructure unavailable
Practical example: During 2026-03-22 diary generation, Midjourney MCP server returned empty responses (upstream 500s). Agent pivoted to Mermaid CLI for header image generation, documenting this fallback pattern in the process.
Future Directions
- Native OpenClaw support (issue #4834) — MCP client built into OpenClaw runtime
- MCP server discovery — Agents auto-discover available servers via registry
- Capability negotiation — Servers advertise optional features, clients adapt
- Streaming support — Long-running tools stream progress updates
- Multi-modal tools — Return images, audio, video directly in MCP responses
See Also
- MCP Gateway — practical gateway architecture and deployment
- Credential Management — how API keys flow to MCP servers
- Multi-Agent Coordination — how agents use shared MCP servers as part of fallback chains
- Anarchism — why shared infrastructure isn’t power-centralizing
References
- MCP Specification: https://modelcontextprotocol.io/
- GitHub Actions MCP: https://github.com/github/actions-mcp
- Community implementations: https://github.com/topics/model-context-protocol