MCP Resource Design: What to Expose, What to Compute
Every MCP server I have audited has the same gap: the tools are thoughtfully designed, the resources are an afterthought. A single resource that returns the entire database dump. No URI templates. No listing endpoint. No pagination. The LLM either loads ten megabytes of JSON into its context window or never discovers that the resource exists.
Resources are the most powerful and most misunderstood MCP primitive. When designed correctly, they let an agent passively acquire context — reading schemas, enumerating catalogues, and sampling data — without burning tool invocations or context tokens on boilerplate. When designed poorly, they either overwhelm the model or disappear into obscurity.
This post is a complete guide to MCP resource design: URI schemes, listing semantics, lazy fetching strategies, large payload handling, pagination, caching, and change subscriptions.
Why Resources Are Underused
The mental model most developers bring to MCP is REST API design. In a REST API, you expose data at endpoints. In MCP, resources serve a different purpose: they are content the model can discover and inspect without being explicitly directed to do so. A well-designed resource namespace lets the agent browse your domain the way a developer browses an API explorer.
The practical consequence is that resources should answer the question "what exists?" before the agent asks. Tools answer "do this." Resources answer "here is what I know about."
Schema and config resources, for example, feed the agent's understanding of what is possible before it invokes any tool, which dramatically reduces hallucinated arguments and invalid queries.
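To make that concrete, here is a minimal sketch of such a config resource, assuming the official TypeScript SDK; loadFeatureFlags is a hypothetical helper over your own flag store:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
const server = new McpServer({ name: "config-mcp", version: "1.0.0" });
// Static resource the agent can read before calling any tool
server.resource(
"feature-flags",
"config://feature-flags",
{ mimeType: "application/json" },
async () => ({
contents: [{
uri: "config://feature-flags",
mimeType: "application/json",
// loadFeatureFlags() is a hypothetical helper for your flag store
text: JSON.stringify(await loadFeatureFlags()),
}],
})
);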
URI Design: Hierarchy Before Convenience
The first decision in resource design is the URI namespace. Get it wrong and you cannot evolve it without breaking clients. The principles:
Hierarchical and guessable. A model that reads schema://tables should be able to infer that schema://tables/customers probably exists. Guessable URIs reduce the number of list operations the agent must perform.
Stable across versions. The schema://tables/customers URI should return the same conceptual resource regardless of schema migrations. Abstract over implementation details.
No opaque IDs in the path where possible. Prefer customers://by-email/alice@example.com over customers://42 when there is a natural business key. Opaque IDs require the agent to first discover the ID before it can construct the URI.
Separate by domain. Use different URI schemes per domain. Do not put schema and config at the same path prefix.
# Good URI namespace
schema:// → list all schemas
schema://tables → list all tables
schema://tables/{table} → single table DDL + column info
schema://tables/{table}/indexes → indexes for a table
schema://tables/{table}/constraints → constraints for a table
data://customers → paginated customer list
data://customers/{id} → single customer record
data://customers/{id}/orders → orders for a customer
config://rate-limits → current rate limit config
config://feature-flags → current feature flag state
config://regions → available regions
# Bad URI namespace
api://get?resource=schema&table=customers&type=ddl
api://fetch?entity=customer&id=42&include=orders
Listing Semantics
Every resource template must have a corresponding list resource. This is non-negotiable. A model cannot enumerate a resource namespace it cannot list. The list resource should return URIs, not full content — the agent decides what to fetch.
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { db } from "./db.js"; // assumed: a Kysely instance configured elsewhere
const server = new McpServer({ name: "data-mcp", version: "1.0.0" });
// List resource: returns URIs only
server.resource(
"customers-list",
"data://customers",
{ mimeType: "application/json" },
async () => {
const ids = await db
.selectFrom("customers")
.select("id")
.limit(500)
.execute();
return {
contents: [{
uri: "data://customers",
mimeType: "application/json",
text: JSON.stringify({
resources: ids.map(r => ({
uri: `data://customers/${r.id}`,
name: `Customer ${r.id}`,
})),
total: ids.length,
note: "Use data://customers?page=N for pagination beyond 500.",
}),
}],
};
}
);
// Detail resource: returns full record
server.resource(
"customer-detail",
new ResourceTemplate("data://customers/{id}", { list: undefined }),
{ mimeType: "application/json" },
async (uri, { id }) => {
const customer = await db
.selectFrom("customers")
.selectAll()
.where("id", "=", id)
.executeTakeFirst();
if (!customer) {
throw new Error(`Customer ${id} not found`);
}
return {
contents: [{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify(customer),
}],
};
}
);
The list resource returns a bounded result (500 max) with a hint about pagination. This prevents the list from overwhelming the context window while still being useful for discovery.
Lazy Fetching: Don't Push What Wasn't Asked For
A common mistake is building a resource that eagerly joins data. The order resource includes line items, which include product details, which include supplier information. The resulting payload is 50KB for a single order.
Design resources to be lazy by default. Return the core entity at the primary URI and expose related data at child URIs.
// Lazy: order resource returns core fields only
server.resource(
"order-detail",
new ResourceTemplate("orders://{id}", { list: undefined }),
{ mimeType: "application/json" },
async (uri, { id }) => {
const order = await db
.selectFrom("orders")
.select(["id", "status", "created_at", "customer_id", "total_amount"])
.where("id", "=", id)
.executeTakeFirst();
if (!order) {
throw new Error(`Order ${id} not found`);
}
return {
contents: [{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify({
...order,
_links: {
items: `orders://${id}/items`,
customer: `data://customers/${order.customer_id}`,
shipments: `orders://${id}/shipments`,
},
}),
}],
};
}
);
// Separate resource for line items — only fetched if needed
server.resource(
"order-items",
new ResourceTemplate("orders://{id}/items", { list: undefined }),
{ mimeType: "application/json" },
async (uri, { id }) => {
const items = await db
.selectFrom("order_items")
.selectAll()
.where("order_id", "=", id)
.execute();
return {
contents: [{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify(items),
}],
};
}
);
The _links field in the response is a HATEOAS-style hint. The model reads the order, sees the links, and decides whether it needs to follow them. This mirrors how a human developer explores an API — not by downloading everything at once.
Large Payloads and Content Truncation
Sometimes the data at a URI is genuinely large and there is no way to avoid it — a full schema DDL for a table with 80 columns, a report output, a log file. Handle large payloads with a two-phase approach: a summary resource and a raw resource.
// Summary: always fast, always small
server.resource(
"table-summary",
new ResourceTemplate("schema://tables/{table}", { list: undefined }),
{ mimeType: "application/json" },
async (uri, { table }) => {
// introspectTable is your own helper (e.g., a query over information_schema.columns)
const columns = await db.introspectTable(table);
const summary = {
name: table,
column_count: columns.length,
columns: columns.map(c => ({
name: c.column_name,
type: c.data_type,
nullable: c.is_nullable === "YES",
})),
raw_ddl_uri: `schema://tables/${table}/ddl`,
};
return {
contents: [{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify(summary),
}],
};
}
);
// Raw DDL: large but separately addressable
server.resource(
"table-ddl",
new ResourceTemplate("schema://tables/{table}/ddl", { list: undefined }),
{ mimeType: "text/plain" },
async (uri, { table }) => {
// getDDL is your own helper (e.g., pg_dump or a pg_catalog query)
const ddl = await db.getDDL(table);
return {
contents: [{
uri: uri.href,
mimeType: "text/plain",
text: ddl,
}],
};
}
);
The agent reads the summary (fast, small) and only fetches the raw DDL if it actually needs it for a complex query generation task. Most of the time, column names and types are sufficient.
Pagination for Large Collections
When a collection has thousands of items, the list resource must support pagination. Encode page parameters in the URI or as query parameters.
server.resource(
"customers-paged",
new ResourceTemplate("data://customers{?page,page_size}", { list: undefined }),
{ mimeType: "application/json" },
async (uri) => {
const url = new URL(uri.href);
const page = parseInt(url.searchParams.get("page") ?? "1", 10);
const pageSize = Math.min(
parseInt(url.searchParams.get("page_size") ?? "50", 10),
200 // hard cap
);
const offset = (page - 1) * pageSize;
const [rows, total] = await Promise.all([
db.selectFrom("customers")
.select(["id", "name", "email"])
.limit(pageSize)
.offset(offset)
.execute(),
db.selectFrom("customers")
.select(db.fn.countAll<number>().as("count"))
.executeTakeFirstOrThrow()
.then(r => Number(r.count)),
]);
const totalPages = Math.ceil(total / pageSize);
return {
contents: [{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify({
page,
page_size: pageSize,
total_pages: totalPages,
total_items: total,
resources: rows.map(r => ({
uri: `data://customers/${r.id}`,
name: r.name,
email: r.email,
})),
next_page: page < totalPages ? `data://customers?page=${page + 1}&page_size=${pageSize}` : null,
prev_page: page > 1 ? `data://customers?page=${page - 1}&page_size=${pageSize}` : null,
}),
}],
};
}
);
Include next_page and prev_page links in the response. The model reads these and knows exactly what URI to fetch next without having to construct it from scratch.
Caching Strategy
Resources that are expensive to compute but change infrequently should be cached. Implement a simple in-memory cache with a TTL for static resources and a Redis cache for shared state across server instances.
// Process-local cache: fine for a single instance; use Redis when state must be shared
const resourceCache = new Map<string, { data: string; expires: number }>();
function cachedResource(ttlSeconds: number) {
return function <T>(fn: (key: string) => Promise<T>) {
return async (key: string): Promise<T> => {
const cached = resourceCache.get(key);
if (cached && cached.expires > Date.now()) {
return JSON.parse(cached.data) as T;
}
const result = await fn(key);
resourceCache.set(key, {
data: JSON.stringify(result),
expires: Date.now() + ttlSeconds * 1000,
});
return result;
};
};
}
const fetchTableList = cachedResource(300)(async (_key) => {
return db.introspection.getTables();
});
Schema resources: cache for 5–15 minutes. Config resources: cache for 1–5 minutes. Data resources: cache for seconds or not at all, depending on consistency requirements.
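For shared state across server instances, the same wrapper shape can sit in front of Redis instead of a Map. A minimal sketch, assuming ioredis and a REDIS_URL environment variable:
import Redis from "ioredis";
const redis = new Redis(process.env.REDIS_URL!); // connection string assumed
// Same contract as cachedResource, but the cache survives restarts and is
// shared by every server instance pointing at the same Redis.
function redisCachedResource(ttlSeconds: number) {
return function <T>(fn: (key: string) => Promise<T>) {
return async (key: string): Promise<T> => {
const cached = await redis.get(`resource:${key}`);
if (cached !== null) {
return JSON.parse(cached) as T;
}
const result = await fn(key);
// "EX" sets the TTL in seconds atomically with the write
await redis.set(`resource:${key}`, JSON.stringify(result), "EX", ttlSeconds);
return result;
};
};
}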
Resource Subscriptions and Change Notifications
The MCP protocol supports resources/subscribe — clients can subscribe to a resource URI and receive notifications when it changes. This is powerful for config and feature flag resources.
import { SubscribeRequestSchema } from "@modelcontextprotocol/sdk/types.js";
// Handlers are registered on the low-level Server that McpServer wraps,
// keyed by a typed request schema rather than a raw method string. The
// server must also declare the resources.subscribe capability.
// subscriptionManager is a placeholder for your own pub/sub layer.
server.server.setRequestHandler(SubscribeRequestSchema, async (req) => {
await subscriptionManager.subscribe(req.params.uri);
return {};
});
// When data changes, notify subscribed clients. Each connected session has
// its own server instance, so a multi-session deployment would look up the
// right instance per subscriber here.
async function notifyResourceChanged(uri: string) {
if (await subscriptionManager.hasSubscribers(uri)) {
await server.server.sendResourceUpdated({ uri });
}
}
Use subscriptions sparingly. They add complexity and require pub/sub infrastructure. Reserve them for resources where the agent genuinely needs to react to changes in real time — live feature flags, circuit breaker state, or queue depth metrics.
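When a subscription is warranted, the server's own write path is the natural trigger point. A sketch using the notifyResourceChanged helper above, assuming a hypothetical feature_flags table:
// Toggle a flag, then push the change to every subscribed client
async function setFeatureFlag(name: string, enabled: boolean) {
await db.updateTable("feature_flags")
.set({ enabled })
.where("name", "=", name)
.execute();
await notifyResourceChanged("config://feature-flags");
}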
Key Takeaways
- Resources answer "what exists?" — design them to let the agent passively acquire context before invoking any tools, which reduces hallucinated arguments.
- Design your URI namespace hierarchically and completely before writing any handler; stable, guessable URIs enable the model to construct child URIs without an extra list round-trip.
- Every resource template must have a corresponding list resource that returns URIs and metadata, not full content — let the agent decide what to fetch.
- Implement lazy fetching by default: expose core entity fields at the primary URI and related data at child URIs linked via _links or equivalent; eager joins produce context-window-busting payloads.
- Handle large collections with explicit pagination; include next_page and prev_page URIs in every paged response so the model can navigate without constructing URIs manually.
- Cache expensive-to-compute resources with a TTL appropriate to the data's change frequency; schema resources can be cached for minutes, live data should not be cached at all.