The Model Context Protocol (MCP) changes how AI applications connect to external data and services. While most developers have experience with tool calling, MCP offers three distinct interaction types that work together to create richer experiences: prompts, resources, and tools. Understanding these three primitives and when to use each one gives you more control over building AI-powered applications.

The MCP interaction model

These three interaction types work together through what’s known as the “MCP interaction model”:
  • Prompts are user-driven, typically exposed through slash commands or menu options
  • Resources are application-driven, where the client decides how to use the data
  • Tools are model-driven, where the AI chooses when and how to call them
This gives you coverage across all three major actors in an AI application: the user, the application, and the model itself.
Please refer to the full Model Context Protocol specification for more details.

Prompts: User-driven templates for AI interactions

Prompts in MCP are predefined templates that users can invoke directly. Think of them as shortcuts or examples that help users get started with your MCP server’s capabilities.

Why prompts matter

As the creator of an MCP server, you know best how your tools should be used. Prompts let you provide users with working examples they can invoke immediately, rather than expecting them to figure out the right way to phrase their requests.

How prompts work

The prompt interaction follows a specific flow between the user, client application, and MCP server: the user picks a prompt (for example, from a slash-command menu), the client sends a prompts/get request with any arguments, and the server returns the assembled messages for the client to hand to the model.

Dynamic prompt capabilities

Under the hood, prompts are just code, which means they can be dynamic. They can:
  • Fetch live data from APIs
  • Include current system state
  • Offer autocomplete for arguments
  • Adapt based on user context
Here’s how you might implement a dynamic prompt in TypeScript:
const PROMPTS = {
  "analyze-project": {
    name: "analyze-project",
    description: "Analyze project logs and code",
    arguments: [
      {
        name: "timeframe",
        description: "Time period to analyze logs",
        required: true
      },
      {
        name: "fileUri", 
        description: "URI of code file to review",
        required: true
      }
    ]
  }
};

server.setRequestHandler(GetPromptRequestSchema, async (request) => {
  if (request.params.name === "analyze-project") {
    const timeframe = request.params.arguments?.timeframe;
    const fileUri = request.params.arguments?.fileUri;

    return {
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: "Analyze these system logs and code file for issues:"
          }
        },
        {
          role: "user",
          content: {
            type: "resource",
            resource: {
              uri: `logs://recent?timeframe=${timeframe}`,
              // fetchRecentLogs is a helper defined elsewhere in the server
              text: await fetchRecentLogs(timeframe),
              mimeType: "text/plain"
            }
          }
        },
        {
          role: "user",
          content: {
            type: "resource",
            resource: {
              uri: fileUri,
              // readFileContents is a helper defined elsewhere in the server
              text: await readFileContents(fileUri),
              mimeType: "text/plain"
            }
          }
        }
      ]
    };
  }

  throw new Error(`Unknown prompt: ${request.params.name}`);
});
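On the wire, invoking a prompt is a JSON-RPC prompts/get request. Here's a minimal sketch of the payload a client builds when the user fills in the prompt's arguments (the builder function is hypothetical; the "prompts/get" method name and params shape follow the MCP specification):

```typescript
// Build the JSON-RPC payload a client sends when the user invokes a prompt.
// buildGetPromptRequest is a hypothetical helper for illustration only.
function buildGetPromptRequest(
  id: number,
  name: string,
  args: Record<string, string>
) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "prompts/get",
    params: { name, arguments: args }
  };
}

const request = buildGetPromptRequest(1, "analyze-project", {
  timeframe: "1h",
  fileUri: "file:///src/app.ts"
});
console.log(JSON.stringify(request, null, 2));
```

The server's GetPrompt handler receives exactly this `params` object, which is why the handler above reads `request.params.name` and `request.params.arguments`.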

When to use prompts

Use prompts when you want to:
  • Provide examples of how to use your MCP server
  • Give users shortcuts for common workflows
  • Include dynamic context that would be tedious to type manually
  • Onboard new users with working examples

Resources: Application-driven data exposure

Resources represent raw data that your MCP server can expose to client applications. Unlike prompts that users invoke or tools that models call, resources are consumed by the application itself.

The power of application choice

Resources give applications complete freedom in how they use your data. A client might:
  • Build embeddings for retrieval-augmented generation (RAG)
  • Cache frequently accessed data
  • Transform data for specific use cases
  • Combine multiple resources in novel ways

Resource types

MCP supports two types of resources.

Direct resources have fixed URIs and represent specific data:
{
  uri: "file:///logs/app.log",
  name: "Application Logs", 
  mimeType: "text/plain"
}
Resource templates use URI templates for dynamic resources:
{
  uriTemplate: "database://table/{tableName}/schema",
  name: "Database Schema",
  description: "Schema for any table",
  mimeType: "application/json"
}
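A client turns a resource template into a concrete URI by substituting the braced variables (RFC 6570-style). Here's a minimal sketch of that expansion, handling only plain `{name}` placeholders rather than the full template syntax:

```typescript
// Expand simple {variable} placeholders in a resource URI template.
// This covers only plain substitution, not the full RFC 6570 grammar.
function expandUriTemplate(
  template: string,
  vars: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in vars ? encodeURIComponent(vars[name]) : match
  );
}

const uri = expandUriTemplate("database://table/{tableName}/schema", {
  tableName: "users"
});
// uri is "database://table/users/schema"
```

The expanded URI is what the client then passes to the server's read_resource handler.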

Implementing resources

Here’s a Python example showing how to expose database schemas as resources:
import json

from pydantic import AnyUrl
import mcp.types as types

@app.list_resources()
async def list_resources() -> list[types.Resource]:
    return [
        types.Resource(
            uri="database://schema/users",
            name="Users Table Schema",
            mimeType="application/json"
        ),
        types.Resource(
            uri="database://schema/orders",
            name="Orders Table Schema",
            mimeType="application/json"
        )
    ]

@app.read_resource()
async def read_resource(uri: AnyUrl) -> str:
    if str(uri).startswith("database://schema/"):
        table_name = str(uri).split("/")[-1]
        # get_table_schema is a helper defined elsewhere in the server
        schema = await get_table_schema(table_name)
        return json.dumps(schema)

    raise ValueError(f"Resource not found: {uri}")

When to use resources

Use resources when you want to:
  • Expose raw data for the application to process
  • Enable RAG implementations
  • Provide data that applications might cache or index
  • Support multiple data consumption patterns

Tools: Model-driven actions

Tools are the most familiar MCP primitive—functions that the AI model can choose to call during conversations. They represent actions your MCP server can perform.

Tool design principles

Effective tools should:
  • Have clear, descriptive names
  • Include comprehensive descriptions
  • Define precise input schemas
  • Return structured, helpful results

Tool implementation

Here’s a TypeScript example of a calculation tool:
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [{
      name: "calculate_sum",
      description: "Add two numbers together",
      inputSchema: {
        type: "object", 
        properties: {
          a: { type: "number" },
          b: { type: "number" }
        },
        required: ["a", "b"]
      }
    }]
  };
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "calculate_sum") {
    // arguments may be absent on the request, so guard before destructuring
    const { a, b } = (request.params.arguments ?? {}) as { a: number; b: number };
    return {
      content: [
        {
          type: "text",
          text: String(a + b)
        }
      ]
    };
  }
  throw new Error("Tool not found");
});

How tools work in practice

Tools follow the familiar function calling pattern, but within the MCP framework: the server advertises its tools via tools/list, the model decides mid-conversation that a call is needed, the client sends a tools/call request to the server, and the result flows back to the model so the conversation can continue.
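To make the dispatch step concrete, here's a minimal sketch of a tool registry: the model names a tool and supplies arguments, and the server routes the call to the matching implementation. The registry and handler types here are illustrative, not part of the MCP SDK:

```typescript
// A minimal tool registry and dispatcher illustrating the
// model-driven call pattern. Names and types are illustrative.
type ToolHandler = (args: Record<string, unknown>) => string;

const toolRegistry: Record<string, ToolHandler> = {
  calculate_sum: (args) => String(Number(args.a) + Number(args.b))
};

// The model emits a tool name plus arguments; the server dispatches.
function dispatchToolCall(
  name: string,
  args: Record<string, unknown>
): string {
  const handler = toolRegistry[name];
  if (!handler) throw new Error(`Tool not found: ${name}`);
  return handler(args);
}

dispatchToolCall("calculate_sum", { a: 2, b: 3 }); // "5"
```

The real SDK handlers above follow this same shape: match on `request.params.name`, run the implementation, and throw for unknown tools.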

When to use tools

Use tools when you want the AI to:
  • Perform actions on behalf of users
  • Query external systems
  • Transform or process data
  • Make decisions about when to invoke functionality

Bringing it all together: A GitHub issue tracker example

Here’s how these three primitives work together in a GitHub issue tracker MCP server:
  • Prompts provide shortcuts like “summarize recent issues” with autocomplete for project repositories and milestones, giving users an easy way to catch up on project status and outstanding work.
  • Resources expose repository metadata, issue lists, pull request data, and commit histories that applications can use for embeddings, caching, or building comprehensive project dashboards.
  • Tools handle actions like creating issues, updating labels, assigning team members, and searching across repositories that the AI can invoke as needed based on user requests.
This combination allows users to interact with GitHub repositories through natural language while giving applications the flexibility to process GitHub data in sophisticated ways. By using all three interaction types together, you create a much richer experience than tool calling alone could provide.
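As a sketch, the surface of such a server might be declared like this, grouped by primitive (all names are hypothetical examples, not an actual GitHub MCP server):

```typescript
// Illustrative surface of a GitHub issue tracker MCP server,
// grouped by primitive. All names are hypothetical examples.
const githubServerSurface = {
  prompts: ["summarize-recent-issues", "triage-milestone"],
  resources: [
    "github://repo/{owner}/{name}/issues",
    "github://repo/{owner}/{name}/pulls"
  ],
  tools: ["create_issue", "update_labels", "assign_member", "search_repos"]
};
```

Listing the surface this way makes the division of labor explicit: users reach for the prompts, the application reads the resources, and the model calls the tools.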

Building richer MCP experiences with Upsun

When you’re building MCP servers that take advantage of these three interaction types, you need a platform that can handle the complexity. Upsun’s Cloud Application Platform provides the infrastructure you need:
  • Preview environments let you test MCP server changes in production-like environments
  • Multi-app architecture supports complex MCP implementations with multiple services
  • Built-in observability helps you monitor MCP server performance and usage
  • Git-driven infrastructure ensures your MCP server deployments are consistent and version-controlled
The combination of prompts, resources, and tools gives you powerful building blocks for AI applications. With Upsun handling the infrastructure complexity, you can focus on creating innovative MCP servers that provide real value to users. Ready to build your own MCP server? Start with a free Upsun account and explore how these interaction types can transform your AI applications.
Last modified on April 27, 2026