lainlog

Chapter 5 of 9 · Model Context Protocol

Tools, resources, prompts: the server's three voices

three primitives, three controllers — model, application, user.

The handshake closed. Both sides agreed on a version, the capability sets locked in, and the server's reply listed what it offers: tools, resources, prompts. Three names. Three primitives. And a load-bearing question the spec settles but most readers arrive without an answer to: which of these should this thing be?

Pick wrong and the integration feels wrong. A tool that should have been a resource gets called too aggressively. A resource that should have been a prompt never surfaces to the user. The chapter's whole job is the picking discipline. Start with it.

Pick the primitive
Six integration sketches. Predict before you read the verdict — the chapter exists because most readers default to tool for all six. The wrong-answer verdict is the lesson.

If your first reflex on most of those scenarios was tool — that's the OpenAI function-calling habit talking. Function-calling normalised a world where every server-side capability is something the model invokes, with typed parameters, when it decides to. MCP says: that's one of three. The other two have different controllers. Naming them is the rest of the chapter.

The first instinct is wrong#

“Everything is a tool” is the function-calling brain defaulting. It's the most common mistake new MCP server authors make, and it's the lesson the picker just exposed. The fix is a single question, asked of every integration sketch:

Who decides when this fires?

Three answers, three primitives. The model decides → it's a tool. The host application decides → it's a resource. The user decides → it's a prompt. That's the controller axis, and it's the only axis that matters for picking.

Tools — model-controlled#

A tool is a function the LLM can invoke during reasoning. The server registers it with a name, a description, and a JSON Schema for its parameters; the host advertises tools/list results to the model; the model decides when to call. The wire methods are exactly two:

  • tools/list — the host asks for the catalogue; the server returns the names, descriptions, and input schemas.
  • tools/call — the host invokes a named tool with arguments; the server runs it and returns a structured content array (text, images, or references to resources).
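On the wire, both methods are plain JSON-RPC 2.0 exchanges. A sketch of one round of each — the method names and result shapes follow the spec, while the request ids, the search_flights tool, and the flight values are invented for illustration:

```typescript
// Hypothetical tools/list exchange: the host asks for the catalogue.
const listRequest = { jsonrpc: "2.0", id: 1, method: "tools/list" };

const listResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "search_flights",
        description: "Find flights between two airports on a date.",
        inputSchema: {
          type: "object",
          properties: {
            from: { type: "string" },
            to: { type: "string" },
            date: { type: "string" },
          },
          required: ["from", "to", "date"],
        },
      },
    ],
  },
};

// Hypothetical tools/call exchange: the model has decided to invoke.
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "search_flights",
    arguments: { from: "LHR", to: "JFK", date: "2026-05-01" },
  },
};

// The server runs the tool and replies with a structured content array.
const callResponse = {
  jsonrpc: "2.0",
  id: 2,
  result: {
    content: [{ type: "text", text: "flights LHR → JFK on 2026-05-01" }],
  },
};
```

Note the asymmetry: the schema travels in the list response, the arguments travel in the call request. The model only ever sees the catalogue; the host does the invoking on its behalf.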

The mental model: tools are verbs the model gets to use. search_flights, convert_currency, send_email. Each has a clear input contract; each does something. The model fires them when it judges it useful — eagerly, sometimes too eagerly. That eagerness is the cost of the model-controlled axis, and it's why not everything should be a tool.

Resources — application-controlled#

A resource is a piece of read-only data the server exposes — but the LLM doesn't fetch it. The host application decides when to surface a resource as context. The reader can think of resources as the server's catalogue of things to read; the host folds the right thing into the conversation when the moment calls for it.

Resources are addressed by a URI template: a short scheme-and-path with named slots that get filled at request time. The host calls resources/templates/list to learn the templates, substitutes values, and reads the concrete URI with resources/read. Try it.

Read a URI template
Resources advertise themselves as templates with named slots. The host substitutes values at request time and calls resources/read on the concrete URI.

The slot syntax is borrowed from RFC 6570 — minus most of its operators; MCP uses the simple {name} form. The point is that resources aren't one URL each — they're a parametrised shape. A calendar server doesn't list one URI per year; it lists a template (calendar://events/{year}) and the host reads whichever year fits the conversation. The full method set:

  • resources/list — the concrete resources currently available (no slots).
  • resources/templates/list — the parametrised templates the server exposes.
  • resources/read — read a concrete URI; returns content with a MIME type.
  • resources/subscribe — optionally subscribe to changes (named here; deep-dive later).
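The slot substitution itself is tiny. A minimal expander for the simple {name} form the chapter describes — deliberately not full RFC 6570, just straight substitution of named values (the function name and error behaviour are this sketch's own choices, not the SDK's):

```typescript
// Expand {name} slots in a URI template with concrete values.
// Simple-form only: no RFC 6570 operators, one value per slot.
function expandTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\{(\w+)\}/g, (_, name) => {
    const value = vars[name];
    if (value === undefined) throw new Error(`missing slot: ${name}`);
    return encodeURIComponent(value);
  });
}

// The calendar template from the text, filled for one year:
const uri = expandTemplate("calendar://events/{year}", { year: "2026" });
// uri === "calendar://events/2026"
```

This is the host's half of the resource dance: learn the template, fill the slots, then hand the concrete URI to resources/read.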

The mental model: resources are nouns the host gets to read. They're passive context, not invocations. The host owns the decision; the model never reaches for them directly.
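For symmetry with the tool sketch, here is the resource side of the wire — a hedged sketch where the method names and result shapes follow the spec, and the trips:// scheme and example year are invented (they mirror the travel server later in the chapter):

```typescript
// Hypothetical resources/templates/list result: the server advertises
// a parametrised shape, not one URI per year.
const templatesResponse = {
  jsonrpc: "2.0",
  id: 3,
  result: {
    resourceTemplates: [
      {
        uriTemplate: "trips://upcoming/{year}",
        name: "Upcoming trips",
        mimeType: "application/json",
      },
    ],
  },
};

// The host — not the model — fills the slot and reads the concrete URI.
const readRequest = {
  jsonrpc: "2.0",
  id: 4,
  method: "resources/read",
  params: { uri: "trips://upcoming/2026" },
};

// Content comes back with a MIME type, ready to fold into context.
const readResponse = {
  jsonrpc: "2.0",
  id: 4,
  result: {
    contents: [
      {
        uri: "trips://upcoming/2026",
        mimeType: "application/json",
        text: '{ "year": "2026", "trips": [] }',
      },
    ],
  },
};
```

Nothing here is an invocation: no arguments schema, no side effects, just an address and a read. That is the application-controlled axis in wire form.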

Prompts — user-controlled#

A prompt is a templated message the user invokes — typically as a slash command in the host's chat input. The server registers a prompt with a name, a description, and an optional argument list; the host surfaces it as an autocomplete-able command; when the user picks it, the host calls prompts/get with arguments and the server returns the expanded message text the user is about to send.

Prompts are how servers teach users to start complex requests. /plan-vacation with destination and budget arguments expands into a paragraph the user can edit and send. The method set is small:

  • prompts/list — the catalogue: names, descriptions, argument schemas.
  • prompts/get — expand a prompt with arguments; returns the message the user is about to send.
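And the same wire treatment for the /plan-vacation example above — method name and result shape per the spec; the request id, destination, and budget are invented:

```typescript
// Hypothetical prompts/get exchange: the user picked the slash command,
// the host forwards the name and arguments.
const getRequest = {
  jsonrpc: "2.0",
  id: 5,
  method: "prompts/get",
  params: {
    name: "plan-vacation",
    arguments: { destination: "Kyoto", budget: "moderate" },
  },
};

// The server expands the template into the message the user will send —
// editable text, not an executed action.
const getResponse = {
  jsonrpc: "2.0",
  id: 5,
  result: {
    description: "Scaffold a vacation plan from destination + budget.",
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: "Plan a trip to Kyoto on a moderate budget.",
        },
      },
    ],
  },
};
```

The result is a messages array rather than a content array: a prompt hands back conversation turns for the user to approve, where a tool hands back output the model consumes.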

The mental model: prompts are discoverable starting points. The user is the controller; prompts are directly discoverable to users in a way tools and resources are not.

The controller table — the shape to remember#

Three primitives, three controllers. The single table the rest of the course leans on:

The controller table
The difference between the three is the difference between who decides when:

  • Tool — controller: the model. Wire methods: tools/list, tools/call. Mental model: verbs the model invokes.
  • Resource — controller: the host application. Wire methods: resources/list, resources/templates/list, resources/read (plus optional resources/subscribe). Mental model: nouns the host reads.
  • Prompt — controller: the user. Wire methods: prompts/list, prompts/get. Mental model: starting points the user picks.

The whole chapter compresses to the cells of that table. Memorise the controller column especially — it's the question to ask of every integration sketch you'll see in chapter 6, every threat model you'll see in chapter 9, and every client-side primitive you'll see in chapter 8.

The travel server, all three at once#

A worked sketch. A travel-planning server registers one of each primitive: a tool to search flights, a resource for the user's upcoming trips, and a prompt to start a vacation plan. In TypeScript, against the official SDK, the surface looks like this:

travel-server.ts (sketch) — TypeScript
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "travel-server",
  version: "0.1.0",
});

// 1. Tool — model-controlled. The LLM decides when to invoke.
server.registerTool(
  "search_flights",
  {
    title: "Search flights",
    description: "Find flights between two airports on a date.",
    inputSchema: {
      from: z.string().length(3),
      to: z.string().length(3),
      date: z.string(),
    },
  },
  async ({ from, to, date }) => ({
    content: [
      { type: "text", text: `flights ${from} → ${to} on ${date}` },
    ],
  }),
);

// 2. Resource — application-controlled. URI-template addressed; templated
// URIs need a ResourceTemplate, not a plain string.
server.registerResource(
  "trips",
  new ResourceTemplate("trips://upcoming/{year}", { list: undefined }),
  { title: "Upcoming trips", mimeType: "application/json" },
  async (uri, { year }) => ({
    contents: [{ uri: uri.href, text: `{ "year": "${year}", "trips": [] }` }],
  }),
);

// 3. Prompt — user-controlled. Surfaced as a slash-command in the host.
server.registerPrompt(
  "plan-vacation",
  {
    title: "Plan a vacation",
    description: "Scaffold a vacation plan from destination + budget.",
    argsSchema: { destination: z.string(), budget: z.string() },
  },
  async ({ destination, budget }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Plan a trip to ${destination} on a ${budget} budget.`,
        },
      },
    ],
  }),
);

Three registrations, three controllers, one server. Chapter 6 extends this same shape into a working server you can ping in the page; this is the silhouette to keep in mind on the way there.

Why three, not one#

A reasonable instinct, looking at the three: couldn't I just make resources tools that return data? Couldn't I just make prompts tools that return a string?

Yes, technically. And you'd be lying about who controls them. The cost of that lie shows up in three places:

Cost

Tool calls fire on the model's decision, and models call them eagerly. A weather tool gets invoked on every turn that touches travel; a weather resource gets surfaced once, by the host, when the conversation actually needs it. The token cost compounds across a session. Same data, very different behaviour.

Privacy

Resources stay under the host's control — the host chooses what to fold into context, and can redact, summarise, or skip entirely based on the user's settings. Tools delegate that decision to the model. If the data is sensitive, the controller axis is a privacy boundary, not a stylistic preference.

UX

Prompts are discoverable to users — they show up in the chat input's slash-command menu. Tools and resources are not. If you want a user to find and start a workflow, it's a prompt. Hide it behind a tool name and only the model will ever invoke it.

Comprehension check#

A travel server exposes three things: search_flights (a tool), trips://upcoming/2026 (a resource), and /summarize-trip (a prompt). The user types /summarize-trip in the chat input. Walk through who invokes what next, in two sentences.

Answer

The host catches the slash-command, calls prompts/get on the server with the prompt's arguments, and inserts the expanded message into the conversation. Once the model is reasoning about that message, it may decide to call tools/call on search_flights (model-controlled), and the host may fold in trips://upcoming/2026 via resources/read (application-controlled) when the model's reply touches scheduling. Three controllers, in order — user → model → application — across a single turn.

The three voices, named. Now build one.#

The chapter cataloged what a server can show: tools the model fires, resources the host surfaces, prompts the user invokes. Three primitives, three controllers, three sets of methods on the wire.

That's the silhouette of every MCP server you'll ever meet. The next chapter sharpens the silhouette into something running. Chapter 6 — build a server, in the page.