Auth and Identity
Series: MCP Deep Dive

The first MCP servers built at most companies have no auth. That is understandable — you are prototyping, the server runs on localhost, and you just want to see if the tool call works. The problem is that prototype servers have a habit of reaching production. A server with no auth is not a "developer convenience" — it is a capability exposed to anything that can reach the endpoint.
Auth in MCP is not optional. It is the mechanism that binds a model action to a human identity, enforces what that identity is allowed to do, and gives you the audit trail to know what happened when something goes wrong.
The Identity Problem is Layered
Three identities are in play in any MCP interaction:
- Human identity — who authorised this session.
- Agent identity — which client is making the call.
- Server identity — which backend credentials the server uses for downstream calls.
Most auth incidents happen because one of these layers is missing. The server authenticates the agent but not the human. Or the server authenticates no-one and runs with a hardcoded API key.
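One way to make the layering concrete is to carry all three identities in a single per-call context object. A minimal sketch (the names here are illustrative, not part of the MCP spec):

```typescript
// Illustrative context object: one per tool call, one field per identity layer.
interface McpCallContext {
  humanSub: string;             // human identity: `sub` claim from the user's token
  agentClientId: string;        // agent identity: OAuth client ID of the MCP client
  downstreamCredential: string; // server identity: name of the credential used for backend calls
}

// A call is fully attributable only when every layer is present.
function isFullyAttributed(ctx: Partial<McpCallContext>): ctx is McpCallContext {
  return Boolean(ctx.humanSub && ctx.agentClientId && ctx.downstreamCredential);
}
```

Treating attribution as a single object makes the gap visible: a request that cannot fill in all three fields should not reach a tool handler.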
Token Flow: OAuth 2.0 with PKCE
The MCP spec recommends OAuth 2.0 for network-based servers. The client obtains a token from an authorisation server, then presents it to the MCP server as a Bearer token in every request.
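The PKCE half of that flow lives in the client: it generates a random verifier, sends the derived S256 challenge with the authorisation request, and reveals the verifier only at token exchange. A Node sketch of generating the pair:

```typescript
import { randomBytes, createHash } from "node:crypto";

// PKCE: generate a one-time code verifier and its S256 challenge.
// The challenge goes in the authorisation request; the verifier is
// presented later at token exchange, proving both requests came
// from the same client.
function generatePkcePair(): { verifier: string; challenge: string } {
  const verifier = randomBytes(32).toString("base64url"); // 43-char URL-safe string
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```

The authorisation server recomputes the hash at exchange time; a stolen authorisation code is useless without the matching verifier.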
On the server side, token validation looks like this:
import { createRemoteJWKSet, jwtVerify, type JWTPayload } from "jose";
const JWKS = createRemoteJWKSet(
new URL("https://auth.example.com/.well-known/jwks.json")
);
async function validateToken(authHeader: string | undefined): Promise<JWTPayload> {
if (!authHeader?.startsWith("Bearer ")) {
throw new Error("Missing or malformed Authorization header");
}
const token = authHeader.slice(7);
const { payload } = await jwtVerify(token, JWKS, {
issuer: "https://auth.example.com",
audience: "mcp-server"
});
return payload;
}
// In your Express handler wrapping the MCP transport:
app.post("/mcp", async (req, res) => {
try {
const claims = await validateToken(req.headers.authorization);
req.user = claims; // attach to request context
} catch {
return res.status(401).json({ error: "Unauthorized" });
}
// ... hand off to MCP transport
});

Scoping: Not Every Tool for Every Token
OAuth scopes let you model least-privilege at the tool level. Define fine-grained scopes and check them in each tool handler.
// Scope definitions
const SCOPES = {
READ_TICKETS: "tickets:read",
CREATE_TICKETS: "tickets:write",
ADMIN: "admin"
} as const;
function requireScope(claims: JWTPayload, scope: string): void {
const tokenScopes = (claims.scope as string ?? "").split(" ");
if (!tokenScopes.includes(scope)) {
throw new Error(`Insufficient scope. Required: ${scope}`);
}
}
// In the tool handler
server.setRequestHandler(CallToolRequestSchema, async (req, extra) => {
  const claims = extra.authInfo?.claims; // populated from middleware
  if (!claims) {
    throw new Error("Unauthenticated request");
  }
  if (req.params.name === "list_tickets") {
    requireScope(claims, SCOPES.READ_TICKETS);
    const tickets = await tracker.list();
    return { content: [{ type: "text", text: JSON.stringify(tickets) }] };
  }
  if (req.params.name === "close_ticket") {
    requireScope(claims, SCOPES.CREATE_TICKETS);
    await tracker.close(req.params.arguments?.id);
    return { content: [{ type: "text", text: "Ticket closed." }] };
  }
  throw new Error(`Unknown tool: ${req.params.name}`);
});

This means a read-only agent token cannot call write tools even if it somehow reaches your server.
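To see that guarantee in isolation, the scope check can be restated as a standalone function (a self-contained sketch for illustration, not the SDK handler above):

```typescript
// Standalone restatement of the scope check: OAuth tokens carry a
// space-delimited scope string; a tool call passes only if its
// required scope appears in that list.
function hasScope(scopeClaim: string | undefined, required: string): boolean {
  return (scopeClaim ?? "").split(" ").includes(required);
}

// A token issued with read-only access:
const readOnly = "tickets:read";

hasScope(readOnly, "tickets:read");  // list_tickets: allowed
hasScope(readOnly, "tickets:write"); // close_ticket: rejected
```

The check is deliberately exact-match: `"tickets:read"` does not imply `"tickets:write"`, and a missing scope claim grants nothing.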
Audit Logging: What You Need to Record
Every tool call must produce an audit record that answers four questions:
- Who called it (user sub + agent client ID).
- What was called (tool name + arguments, redacting secrets).
- When it happened (ISO timestamp, server clock).
- What the outcome was (success / error + result summary).
import logging
import json
from datetime import datetime, timezone
audit_log = logging.getLogger("mcp.audit")
def log_tool_call(user_sub: str, client_id: str, tool: str, args: dict, result: str, error: str | None = None):
# Redact sensitive fields before logging
safe_args = {k: "***REDACTED***" if k in ("password", "token", "secret") else v
for k, v in args.items()}
audit_log.info(json.dumps({
"ts": datetime.now(timezone.utc).isoformat(),
"user": user_sub,
"client": client_id,
"tool": tool,
"args": safe_args,
"outcome": "error" if error else "success",
"error": error,
"result_len": len(result) if result else 0
    }))

Ship these logs to your SIEM or log aggregation system. Do not make them opt-in — they are non-negotiable for any server that touches production data.
Local / stdio Auth
stdio servers present a different threat model: the attacker must already be on the machine. OS-level user permissions, file ownership on the server binary, and sandboxing (macOS sandbox profiles, Linux seccomp) are the relevant controls. You can still pass a token at startup via environment variable and validate it in the server initialisation handler if you want a belt-and-suspenders approach.
const SECRET = process.env.MCP_SHARED_SECRET;
server.setRequestHandler(InitializeRequestSchema, async (req) => {
const clientSecret = req.params.clientInfo?.secret;
if (SECRET && clientSecret !== SECRET) {
throw new Error("Client authentication failed");
}
return { protocolVersion: "2024-11-05", capabilities: {}, serverInfo: { name: "local-tool", version: "1.0.0" } };
});

Key Takeaways
- Every MCP server exposed over a network needs token-based auth — validating Bearer tokens against a JWKS endpoint is the practical standard.
- Three identities matter: human, agent, and server downstream credentials. Missing any one creates a gap.
- OAuth scopes map cleanly to tool-level permissions; check them per handler, not at the transport layer only.
- Audit logs must capture who, what, when, and outcome — redact secrets before writing.
- stdio servers lean on OS-level controls but can layer a shared-secret handshake for defence in depth.
- Prototype auth gaps routinely survive to production; build auth from the first commit.