SkillHub Club · Ship Full Stack · Full Stack

fastmcp


Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars
25
Hot score
88
Updated
March 20, 2026
Overall rating
C (1.9)
Composite score
1.9
Best-practice grade
F (19.6)

Install command

npx @skill-hub/cli install ovachiever-droid-tings-fastmcp

Repository

ovachiever/droid-tings

Skill path: skills/fastmcp


Open repository

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: MIT.

Original source

Catalog source: SkillHub Club.

Repository owner: ovachiever.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install fastmcp into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/ovachiever/droid-tings before adding fastmcp to shared team environments
  • Use fastmcp for development workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: fastmcp
description: |
  Build MCP servers in Python with FastMCP framework to expose tools, resources, and prompts to LLMs. Supports
  storage backends (memory/disk/Redis), middleware, OAuth Proxy, OpenAPI integration, and FastMCP Cloud deployment.

  Use when: creating MCP servers, defining tools or resources, implementing OAuth authentication, configuring
  storage backends for tokens/cache, adding middleware for logging/rate limiting, deploying to FastMCP Cloud,
  or troubleshooting module-level server, storage, lifespan, middleware order, circular imports, or OAuth errors.

  Keywords: FastMCP, MCP server Python, Model Context Protocol Python, fastmcp framework, mcp tools, mcp resources, mcp prompts, fastmcp storage, fastmcp memory storage, fastmcp disk storage, fastmcp redis, fastmcp dynamodb, fastmcp lifespan, fastmcp middleware, fastmcp oauth proxy, server composition mcp, fastmcp import, fastmcp mount, fastmcp cloud, fastmcp deployment, mcp authentication, fastmcp icons, openapi mcp, claude mcp server, fastmcp testing, storage misconfiguration, lifespan issues, middleware order, circular imports, module-level server, async await mcp
license: MIT
metadata:
  version: "2.0.0"
  package_version: "fastmcp>=2.13.0"
  python_version: ">=3.10"
  token_savings: "90-95%"
  errors_prevented: 25
  production_tested: true
  last_updated: "2025-11-04"
---

# FastMCP - Build MCP Servers in Python

FastMCP is a Python framework for building Model Context Protocol (MCP) servers that expose tools, resources, and prompts to Large Language Models like Claude. This skill provides production-tested patterns, error prevention, and deployment strategies for building robust MCP servers.

## Quick Start

### Installation

```bash
pip install fastmcp
# or
uv pip install fastmcp
```

### Minimal Server

```python
from fastmcp import FastMCP

# MUST be at module level for FastMCP Cloud
mcp = FastMCP("My Server")

@mcp.tool()
async def hello(name: str) -> str:
    """Say hello to someone."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()
```

**Run it:**
```bash
# Local development
python server.py

# With FastMCP CLI
fastmcp dev server.py

# HTTP mode
python server.py --transport http --port 8000
```

## Core Concepts

### 1. Tools

Tools are functions that LLMs can call to perform actions:

```python
@mcp.tool()
def calculate(operation: str, a: float, b: float) -> float:
    """Perform mathematical operations.

    Args:
        operation: add, subtract, multiply, or divide
        a: First number
        b: Second number

    Returns:
        Result of the operation
    """
    operations = {
        "add": lambda x, y: x + y,
        "subtract": lambda x, y: x - y,
        "multiply": lambda x, y: x * y,
        "divide": lambda x, y: x / y,
    }
    if operation not in operations:
        raise ValueError(f"Unknown operation: {operation}")
    if operation == "divide" and b == 0:
        raise ValueError("Cannot divide by zero")
    return operations[operation](a, b)
```

**Best Practices:**
- Clear, descriptive function names
- Comprehensive docstrings (LLMs read these!)
- Strong type hints (Pydantic validates automatically)
- Return structured data (dicts/lists)
- Handle errors gracefully

**Sync vs Async:**

```python
import httpx

# Sync tool (for quick, CPU-bound work; avoid blocking I/O here)
@mcp.tool()
def sync_tool(param: str) -> dict:
    return {"result": param.upper()}

# Async tool (for I/O-bound work such as API calls)
@mcp.tool()
async def async_tool(url: str) -> dict:
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.json()
```

### 2. Resources

Resources expose static or dynamic data to LLMs:

```python
import os
from datetime import datetime

# Static resource
@mcp.resource("data://config")
def get_config() -> dict:
    """Provide application configuration."""
    return {
        "version": "1.0.0",
        "features": ["auth", "api", "cache"]
    }

# Dynamic resource
@mcp.resource("info://status")
async def server_status() -> dict:
    """Get current server status."""
    return {
        "status": "healthy",
        "timestamp": datetime.now().isoformat(),
        "api_configured": bool(os.getenv("API_KEY"))
    }
```

**Resource URI Schemes:**
- `data://` - Generic data
- `file://` - File resources
- `resource://` - General resources
- `info://` - Information/metadata
- `api://` - API endpoints
- Custom schemes allowed

### 3. Resource Templates

Dynamic resources with parameters in the URI:

```python
# Single parameter
@mcp.resource("user://{user_id}/profile")
async def get_user_profile(user_id: str) -> dict:
    """Get user profile by ID."""
    user = await fetch_user_from_db(user_id)
    return {
        "id": user_id,
        "name": user.name,
        "email": user.email
    }

# Multiple parameters
@mcp.resource("org://{org_id}/team/{team_id}/members")
async def get_team_members(org_id: str, team_id: str) -> list:
    """Get team members with org context."""
    return await db.query(
        "SELECT * FROM members WHERE org_id = ? AND team_id = ?",
        [org_id, team_id]
    )
```

**Critical:** Parameter names must match exactly between URI template and function signature.
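
The matching rule can be sanity-checked with plain Python; this is an illustration using `re` and `inspect`, not a FastMCP API:

```python
import inspect
import re

def template_params(uri_template: str) -> set[str]:
    """Extract the {param} names from a URI template."""
    return set(re.findall(r"\{(\w+)\}", uri_template))

def params_match(uri_template: str, func) -> bool:
    """True if template parameters exactly match the function's arguments."""
    return template_params(uri_template) == set(inspect.signature(func).parameters)

async def get_user_profile(user_id: str) -> dict:
    return {"id": user_id}

ok = params_match("user://{user_id}/profile", get_user_profile)
# A template like "user://{uid}/profile" would not match the signature,
# and FastMCP would reject the registration.
```

FastMCP performs an equivalent check when the resource template is registered, so mismatches fail fast rather than at request time.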

### 4. Prompts

Pre-configured prompts for LLMs:

```python
@mcp.prompt("analyze")
def analyze_prompt(topic: str) -> str:
    """Generate analysis prompt."""
    return f"""
    Analyze {topic} considering:
    1. Current state
    2. Challenges
    3. Opportunities
    4. Recommendations

    Use available tools to gather data.
    """

@mcp.prompt("help")
def help_prompt() -> str:
    """Generate help text for server."""
    return """
    Welcome to My Server!

    Available tools:
    - search: Search for items
    - process: Process data

    Available resources:
    - info://status: Server status
    """
```

## Context Features

FastMCP provides advanced features through context injection:

### 1. Elicitation (User Input)

Request user input during tool execution:

```python
from fastmcp import Context

@mcp.tool()
async def confirm_action(action: str, context: Context) -> dict:
    """Perform action with user confirmation."""
    # Request confirmation from user
    confirmed = await context.request_elicitation(
        prompt=f"Confirm {action}? (yes/no)",
        response_type=str
    )

    if confirmed.lower() == "yes":
        result = await perform_action(action)
        return {"status": "completed", "action": action}
    else:
        return {"status": "cancelled", "action": action}
```

### 2. Progress Tracking

Report progress for long-running operations:

```python
@mcp.tool()
async def batch_import(file_path: str, context: Context) -> dict:
    """Import data with progress updates."""
    data = await read_file(file_path)
    total = len(data)

    imported = []
    for i, item in enumerate(data):
        # Report progress
        await context.report_progress(
            progress=i + 1,
            total=total,
            message=f"Importing item {i + 1}/{total}"
        )

        result = await import_item(item)
        imported.append(result)

    return {"imported": len(imported), "total": total}
```

### 3. Sampling (LLM Integration)

Request LLM completions from within tools:

```python
@mcp.tool()
async def enhance_text(text: str, context: Context) -> str:
    """Enhance text using LLM."""
    response = await context.request_sampling(
        messages=[{
            "role": "system",
            "content": "You are a professional copywriter."
        }, {
            "role": "user",
            "content": f"Enhance this text: {text}"
        }],
        temperature=0.7,
        max_tokens=500
    )

    return response["content"]
```

## Storage Backends

FastMCP supports pluggable storage backends built on the `py-key-value-aio` library. Storage backends enable persistent state for OAuth tokens, response caching, and client-side token storage.

### Available Backends

**Memory Store (Default)**:
- Ephemeral storage (lost on restart)
- Fast, no configuration needed
- Good for development

**Disk Store**:
- Persistent storage on local filesystem
- Encrypted by default with `FernetEncryptionWrapper`
- Platform-aware defaults (Mac/Windows use disk, Linux uses memory)

**Redis Store**:
- Distributed storage for production
- Supports multi-instance deployments
- Ideal for response caching across servers

**Other Supported**:
- DynamoDB (AWS)
- MongoDB
- Elasticsearch
- Memcached
- RocksDB
- Valkey

### Basic Usage

```python
from fastmcp import FastMCP
from key_value.stores import DiskStore, RedisStore
import os

# Memory storage (default)
mcp = FastMCP("My Server")

# Disk storage (persistent)
mcp = FastMCP(
    "My Server",
    storage=DiskStore(path="/app/data/storage")
)

# Redis storage (production)
mcp = FastMCP(
    "My Server",
    storage=RedisStore(
        host=os.getenv("REDIS_HOST", "localhost"),
        port=int(os.getenv("REDIS_PORT", "6379")),
        password=os.getenv("REDIS_PASSWORD")
    )
)
```

### Encrypted Storage

Storage backends support automatic encryption:

```python
from cryptography.fernet import Fernet
from key_value.encryption import FernetEncryptionWrapper
from key_value.stores import DiskStore
import os

# Generate encryption key (store in environment!)
# key = Fernet.generate_key()

# Use encrypted storage
encrypted_storage = FernetEncryptionWrapper(
    key_value=DiskStore(path="/app/data/storage"),
    fernet=Fernet(os.getenv("STORAGE_ENCRYPTION_KEY"))
)

mcp = FastMCP("My Server", storage=encrypted_storage)
```

### OAuth Token Storage

Storage backends automatically persist OAuth tokens:

```python
from fastmcp.auth import OAuthProxy
from key_value.stores import RedisStore
from key_value.encryption import FernetEncryptionWrapper
from cryptography.fernet import Fernet
import os

# Production OAuth with encrypted Redis storage
auth = OAuthProxy(
    jwt_signing_key=os.environ["JWT_SIGNING_KEY"],
    client_storage=FernetEncryptionWrapper(
        key_value=RedisStore(
            host=os.getenv("REDIS_HOST"),
            password=os.getenv("REDIS_PASSWORD")
        ),
        fernet=Fernet(os.environ["STORAGE_ENCRYPTION_KEY"])
    ),
    upstream_authorization_endpoint="https://provider.com/oauth/authorize",
    upstream_token_endpoint="https://provider.com/oauth/token",
    upstream_client_id=os.getenv("OAUTH_CLIENT_ID"),
    upstream_client_secret=os.getenv("OAUTH_CLIENT_SECRET")
)

mcp = FastMCP("OAuth Server", auth=auth)
```

### Platform-Aware Defaults

FastMCP automatically chooses storage based on platform:

- **Mac/Windows**: Disk storage (persistent)
- **Linux**: Memory storage (ephemeral)
- **Override**: Set `storage` parameter explicitly

```python
# Explicitly use disk storage on Linux
from key_value.stores import DiskStore

mcp = FastMCP(
    "My Server",
    storage=DiskStore(path="/var/lib/mcp/storage")
)
```

## Server Lifespans

Server lifespans provide initialization and cleanup hooks that run once per server instance (NOT per client session). This is critical for managing database connections, API clients, and other resources.

**⚠️ Breaking Change in v2.13.0**: Lifespan behavior changed from per-session to per-server-instance.

### Basic Pattern

```python
from fastmcp import FastMCP
from contextlib import asynccontextmanager
from typing import AsyncIterator
from dataclasses import dataclass
import os

import httpx

@dataclass
class AppContext:
    """Shared application state."""
    db: Database
    api_client: httpx.AsyncClient

@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """
    Initialize resources on startup, cleanup on shutdown.
    Runs ONCE per server instance, NOT per client session.
    """
    # Startup: Initialize resources
    db = await Database.connect(os.getenv("DATABASE_URL"))
    api_client = httpx.AsyncClient(
        base_url=os.getenv("API_BASE_URL"),
        headers={"Authorization": f"Bearer {os.getenv('API_KEY')}"},
        timeout=30.0
    )

    print("Server initialized")

    try:
        # Yield context to tools
        yield AppContext(db=db, api_client=api_client)
    finally:
        # Shutdown: Cleanup resources
        await db.disconnect()
        await api_client.aclose()
        print("Server shutdown complete")

# Create server with lifespan
mcp = FastMCP("My Server", lifespan=app_lifespan)

# Access context in tools
from fastmcp import Context

@mcp.tool()
async def query_database(sql: str, context: Context) -> list:
    """Query database using shared connection."""
    # Access lifespan context
    app_context: AppContext = context.fastmcp_context.lifespan_context
    return await app_context.db.query(sql)

@mcp.tool()
async def api_request(endpoint: str, context: Context) -> dict:
    """Make API request using shared client."""
    app_context: AppContext = context.fastmcp_context.lifespan_context
    response = await app_context.api_client.get(endpoint)
    return response.json()
```

### ASGI Integration

When using FastMCP with ASGI apps (FastAPI, Starlette), you **must** pass the lifespan explicitly:

```python
from fastapi import FastAPI
from fastmcp import FastMCP
from contextlib import asynccontextmanager

# FastMCP lifespan
@asynccontextmanager
async def mcp_lifespan(server: FastMCP):
    print("MCP server starting")
    yield
    print("MCP server stopping")

mcp = FastMCP("My Server", lifespan=mcp_lifespan)

# FastAPI app MUST include MCP lifespan
app = FastAPI(lifespan=mcp.lifespan)

# Add routes
@app.get("/")
def root():
    return {"message": "Hello World"}
```

**❌ WRONG**: Not passing lifespan to parent app
```python
app = FastAPI()  # MCP lifespan won't run!
```

**✅ CORRECT**: Pass MCP lifespan to parent app
```python
app = FastAPI(lifespan=mcp.lifespan)
```

### State Management

Store and retrieve state during server lifetime:

```python
from fastmcp import Context

@mcp.tool()
async def set_config(key: str, value: str, context: Context) -> dict:
    """Store configuration value."""
    context.fastmcp_context.set_state(key, value)
    return {"status": "saved", "key": key}

@mcp.tool()
async def get_config(key: str, context: Context) -> dict:
    """Retrieve configuration value."""
    value = context.fastmcp_context.get_state(key, default=None)
    if value is None:
        return {"error": f"Key '{key}' not found"}
    return {"key": key, "value": value}
```

## Middleware System

FastMCP provides an MCP-native middleware system for cross-cutting functionality like logging, rate limiting, caching, and error handling.

### Built-in Middleware (8 Types)

1. **TimingMiddleware** - Performance monitoring
2. **ResponseCachingMiddleware** - TTL-based caching with pluggable storage
3. **LoggingMiddleware** - Human-readable and JSON-structured logging
4. **RateLimitingMiddleware** - Token bucket and sliding window algorithms
5. **ErrorHandlingMiddleware** - Consistent error management
6. **ToolInjectionMiddleware** - Dynamic tool injection
7. **PromptToolMiddleware** - Tool-based prompt access for limited clients
8. **ResourceToolMiddleware** - Tool-based resource access for limited clients

### Basic Usage

```python
from fastmcp import FastMCP
from fastmcp.middleware import (
    TimingMiddleware,
    LoggingMiddleware,
    RateLimitingMiddleware,
    ResponseCachingMiddleware,
    ErrorHandlingMiddleware
)

mcp = FastMCP("My Server")

# Add middleware (order matters!)
mcp.add_middleware(ErrorHandlingMiddleware())
mcp.add_middleware(TimingMiddleware())
mcp.add_middleware(LoggingMiddleware(level="INFO"))
mcp.add_middleware(RateLimitingMiddleware(
    max_requests=100,
    window_seconds=60,
    algorithm="token_bucket"
))
mcp.add_middleware(ResponseCachingMiddleware(
    ttl_seconds=300,
    storage=RedisStore(host="localhost")
))
```

### Middleware Execution Order

Middleware executes in order added:

```
Request Flow:
  → ErrorHandlingMiddleware (catches errors)
    → TimingMiddleware (starts timer)
      → LoggingMiddleware (logs request)
        → RateLimitingMiddleware (checks rate limit)
          → ResponseCachingMiddleware (checks cache)
            → Tool/Resource Handler
          ← ResponseCachingMiddleware (stores in cache)
        ← RateLimitingMiddleware
      ← LoggingMiddleware (logs response)
    ← TimingMiddleware (stops timer, logs duration)
  ← ErrorHandlingMiddleware (returns error if any)
```
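
The onion-style flow above can be demonstrated with plain Python callables; this illustrates the ordering only, not FastMCP's middleware API:

```python
def make_middleware(name: str, log: list):
    """Wrap a handler so it logs on the way in and on the way out."""
    def wrap(handler):
        def wrapped(request):
            log.append(f"-> {name}")
            result = handler(request)
            log.append(f"<- {name}")
            return result
        return wrapped
    return wrap

log: list[str] = []

def handler(request):
    return "result"

# First middleware added wraps outermost, so wrap in reverse add-order
for name in reversed(["ErrorHandling", "Timing", "Logging"]):
    handler = make_middleware(name, log)(handler)

result = handler("req")
# log: ['-> ErrorHandling', '-> Timing', '-> Logging',
#       '<- Logging', '<- Timing', '<- ErrorHandling']
```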

### Custom Middleware

Create custom middleware using hooks:

```python
from fastmcp.middleware import BaseMiddleware
from fastmcp import Context

class AccessControlMiddleware(BaseMiddleware):
    """Check authorization before tool execution."""

    def __init__(self, allowed_users: list[str]):
        self.allowed_users = allowed_users

    async def on_call_tool(self, tool_name: str, arguments: dict, context: Context):
        """Hook runs before tool execution."""
        # Get user from context (from auth)
        user = context.fastmcp_context.get_state("user_id")

        if user not in self.allowed_users:
            raise PermissionError(f"User '{user}' not authorized")

        # Continue to tool
        return await self.next(tool_name, arguments, context)

# Add to server
mcp.add_middleware(AccessControlMiddleware(
    allowed_users=["alice", "bob", "charlie"]
))
```

### Hook Hierarchy

Middleware hooks from most general to most specific:

1. **`on_message`** - All messages (requests and notifications)
2. **`on_request`** / **`on_notification`** - By message type
3. **`on_call_tool`**, **`on_read_resource`**, **`on_get_prompt`** - Operation-specific
4. **`on_list_tools`**, **`on_list_resources`**, **`on_list_prompts`**, **`on_list_resource_templates`** - List operations

```python
class ComprehensiveMiddleware(BaseMiddleware):
    async def on_message(self, message: dict, context: Context):
        """Runs for ALL messages."""
        print(f"Message: {message['method']}")
        return await self.next(message, context)

    async def on_call_tool(self, tool_name: str, arguments: dict, context: Context):
        """Runs only for tool calls."""
        print(f"Tool: {tool_name}")
        return await self.next(tool_name, arguments, context)

    async def on_read_resource(self, uri: str, context: Context):
        """Runs only for resource reads."""
        print(f"Resource: {uri}")
        return await self.next(uri, context)
```

### Response Caching Middleware

Improve performance by caching expensive operations:

```python
from fastmcp.middleware import ResponseCachingMiddleware
from key_value.stores import RedisStore

# Cache responses for 5 minutes
cache_middleware = ResponseCachingMiddleware(
    ttl_seconds=300,
    storage=RedisStore(host="localhost"),  # Shared across instances
    cache_tools=True,       # Cache tool calls
    cache_resources=True,   # Cache resource reads
    cache_prompts=False     # Don't cache prompts
)

mcp.add_middleware(cache_middleware)

# Tools/resources are automatically cached
@mcp.tool()
async def expensive_computation(data: str) -> dict:
    """This will be cached for 5 minutes."""
    import asyncio
    await asyncio.sleep(5)  # Simulate expensive work without blocking the event loop
    return {"result": process(data)}  # process() stands in for your real computation
```

## Server Composition

Organize tools, resources, and prompts into modular components using server composition.

### Two Strategies

**1. `import_server()` - Static Snapshot**:
- One-time copy of components at import time
- Changes to subserver don't propagate
- Fast (no runtime delegation)
- Use for: Bundling finalized components

**2. `mount()` - Dynamic Link**:
- Live runtime link to subserver
- Changes to subserver immediately visible
- Runtime delegation (slower)
- Use for: Modular runtime composition

### Import Server (Static)

```python
from fastmcp import FastMCP

# Subserver with tools
api_server = FastMCP("API Server")

@api_server.tool()
def api_tool():
    return "API result"

@api_server.resource("api://status")
def api_status():
    return {"status": "ok"}

# Main server imports components
main_server = FastMCP("Main Server")

# Import all components from subserver
main_server.import_server(api_server)

# Now main_server has api_tool and api://status
# Changes to api_server won't affect main_server
```

### Mount Server (Dynamic)

```python
from fastmcp import FastMCP

# Create servers
api_server = FastMCP("API Server")
db_server = FastMCP("DB Server")

@api_server.tool()
def fetch_data():
    return "API data"

@db_server.tool()
def query_db():
    return "DB result"

# Main server mounts subservers
main_server = FastMCP("Main Server")

# Mount with prefix
main_server.mount(api_server, prefix="api")
main_server.mount(db_server, prefix="db")

# Tools are namespaced:
# - api.fetch_data
# - db.query_db

# Resources are prefixed:
# - resource://api/path/to/resource
# - resource://db/path/to/resource
```

### Mounting Modes

**Direct Mounting (Default)**:
```python
# In-memory access, subserver runs in same process
main_server.mount(subserver, prefix="sub")
```

**Proxy Mounting**:
```python
# Treats subserver as separate entity with own lifecycle
main_server.mount(
    subserver,
    prefix="sub",
    mode="proxy"
)
```

### Tag Filtering

Filter components when importing/mounting:

```python
# Tag subserver components
@api_server.tool(tags=["public"])
def public_api():
    return "Public"

@api_server.tool(tags=["admin"])
def admin_api():
    return "Admin only"

# Import only public tools
main_server.import_server(
    api_server,
    include_tags=["public"]
)

# Or exclude admin tools
main_server.import_server(
    api_server,
    exclude_tags=["admin"]
)

# Tag filtering is recursive with mount()
main_server.mount(
    api_server,
    prefix="api",
    include_tags=["public"]
)
```

### Resource Prefix Formats

**Path Format (Default since v2.4.0)**:
```
resource://prefix/path/to/resource
```

**Protocol Format (Legacy)**:
```
prefix+resource://path/to/resource
```

Configure format:

```python
main_server.mount(
    subserver,
    prefix="api",
    resource_prefix_format="path"  # or "protocol"
)
```

## OAuth Proxy & Authentication

FastMCP provides comprehensive authentication support for HTTP-based transports, including an OAuth Proxy for providers that don't support Dynamic Client Registration (DCR).

### Four Authentication Patterns

1. **Token Validation** (`TokenVerifier`/`JWTVerifier`) - Validate external tokens
2. **External Identity Providers** (`RemoteAuthProvider`) - OAuth 2.0/OIDC with DCR
3. **OAuth Proxy** (`OAuthProxy`) - Bridge to traditional OAuth providers
4. **Full OAuth** (`OAuthProvider`) - Complete authorization server

### Pattern 1: Token Validation

Validate tokens issued by external systems:

```python
from fastmcp import FastMCP, Context
from fastmcp.auth import JWTVerifier
import os

# JWT verification
auth = JWTVerifier(
    issuer="https://auth.example.com",
    audience="my-mcp-server",
    public_key=os.getenv("JWT_PUBLIC_KEY")
)

mcp = FastMCP("Secure Server", auth=auth)

@mcp.tool()
async def secure_operation(context: Context) -> dict:
    """Only accessible with valid JWT."""
    # Token validated automatically
    user = context.fastmcp_context.get_state("user_id")
    return {"user": user, "status": "authorized"}
```

### Pattern 2: External Identity Providers

Use OAuth 2.0/OIDC providers with Dynamic Client Registration:

```python
from fastmcp.auth import RemoteAuthProvider

auth = RemoteAuthProvider(
    issuer="https://auth.example.com",
    # Provider must support DCR
)

mcp = FastMCP("OAuth Server", auth=auth)
```

### Pattern 3: OAuth Proxy (Recommended for Production)

Bridge to OAuth providers without DCR support (GitHub, Google, Azure, AWS, Discord, Facebook, etc.):

```python
from fastmcp.auth import OAuthProxy
from key_value.stores import RedisStore
from key_value.encryption import FernetEncryptionWrapper
from cryptography.fernet import Fernet
import os

auth = OAuthProxy(
    # JWT signing for issued tokens
    jwt_signing_key=os.environ["JWT_SIGNING_KEY"],

    # Encrypted storage for upstream tokens
    client_storage=FernetEncryptionWrapper(
        key_value=RedisStore(
            host=os.getenv("REDIS_HOST"),
            password=os.getenv("REDIS_PASSWORD")
        ),
        fernet=Fernet(os.environ["STORAGE_ENCRYPTION_KEY"])
    ),

    # Upstream OAuth provider
    upstream_authorization_endpoint="https://github.com/login/oauth/authorize",
    upstream_token_endpoint="https://github.com/login/oauth/access_token",
    upstream_client_id=os.getenv("GITHUB_CLIENT_ID"),
    upstream_client_secret=os.getenv("GITHUB_CLIENT_SECRET"),

    # Scopes
    upstream_scope="read:user user:email",

    # Security: Enable consent screen (prevents confused deputy attacks)
    enable_consent_screen=True
)

mcp = FastMCP("GitHub Auth Server", auth=auth)
```

### OAuth Proxy Features

**Token Factory Pattern**:
- Proxy issues its own JWTs (not forwarding upstream tokens)
- Upstream tokens stored encrypted
- Proxy tokens can have custom claims

**Consent Screens**:
- Prevents authorization bypass attacks
- Shows user what permissions are being granted
- Required for security compliance

**PKCE Support**:
- End-to-end validation from client to upstream
- Protects against authorization code interception

**RFC 7662 Token Introspection**:
- Validate tokens with upstream provider
- Check revocation status
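
The S256 PKCE exchange is standardized in RFC 7636: the client sends `base64url(sha256(code_verifier))` with the authorization request, then proves possession by presenting the raw verifier at the token endpoint. In plain Python:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """The check performed at the token endpoint."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge
```

An intercepted authorization code is useless without the verifier, which never leaves the client until the token request.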

### Pattern 4: Full OAuth Provider

Run complete authorization server:

```python
from fastmcp.auth import OAuthProvider
from key_value.stores import RedisStore

auth = OAuthProvider(
    issuer="https://my-auth-server.com",
    client_storage=RedisStore(host="localhost"),
    # Full OAuth 2.0 server implementation
)

mcp = FastMCP("Auth Server", auth=auth)
```

### Environment-Based Configuration

Auto-detect auth from environment:

```bash
export FASTMCP_SERVER_AUTH='{"type": "oauth_proxy", "upstream_authorization_endpoint": "...", ...}'
```

```python
# Automatically configures from FASTMCP_SERVER_AUTH
mcp = FastMCP("Auto Auth Server")
```

### Supported OAuth Providers

- **GitHub**: `https://github.com/login/oauth/authorize`
- **Google**: `https://accounts.google.com/o/oauth2/v2/auth`
- **Azure**: `https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize`
- **AWS Cognito**: `https://{domain}.auth.{region}.amazoncognito.com/oauth2/authorize`
- **Discord**: `https://discord.com/api/oauth2/authorize`
- **Facebook**: `https://www.facebook.com/v12.0/dialog/oauth`
- **WorkOS**: Enterprise identity
- **AuthKit**: Authentication toolkit
- **Descope**: Auth platform
- **Scalekit**: Enterprise SSO

## Icons Support

Add visual representations to servers, tools, resources, and prompts for better UX in MCP clients.

### Server-Level Icons

```python
from fastmcp import FastMCP, Icon

mcp = FastMCP(
    name="Weather Service",
    website_url="https://weather.example.com",
    icons=[
        Icon(
            url="https://example.com/icon-small.png",
            size="small"
        ),
        Icon(
            url="https://example.com/icon-large.png",
            size="large"
        )
    ]
)
```

### Component-Level Icons

```python
from fastmcp import Icon

@mcp.tool(icons=[
    Icon(url="https://example.com/tool-icon.png")
])
async def analyze_data(data: str) -> dict:
    """Analyze data with visual icon."""
    return {"result": "analyzed"}

@mcp.resource(
    "user://{user_id}/profile",
    icons=[Icon(url="https://example.com/user-icon.png")]
)
async def get_user(user_id: str) -> dict:
    """User profile with icon."""
    return {"id": user_id, "name": "Alice"}

@mcp.prompt(
    "analyze",
    icons=[Icon(url="https://example.com/prompt-icon.png")]
)
def analysis_prompt(topic: str) -> str:
    """Analysis prompt with icon."""
    return f"Analyze {topic}"
```

### Data URI Support

Embed images directly (useful for self-contained deployments):

```python
from fastmcp import Icon, Image

# Convert local file to data URI
icon = Icon.from_file("/path/to/icon.png", size="medium")

# Or use Image utility
image_data_uri = Image.to_data_uri("/path/to/icon.png")
icon = Icon(url=image_data_uri, size="medium")

# Use in server
mcp = FastMCP(
    "My Server",
    icons=[icon]
)
```

### Multiple Sizes

Provide different sizes for different contexts:

```python
mcp = FastMCP(
    "Responsive Server",
    icons=[
        Icon(url="icon-16.png", size="small"),    # 16x16
        Icon(url="icon-32.png", size="medium"),   # 32x32
        Icon(url="icon-64.png", size="large"),    # 64x64
    ]
)
```

## API Integration

FastMCP provides multiple patterns for API integration:

### Pattern 1: Manual API Integration

```python
import httpx
import os

# Create reusable client
client = httpx.AsyncClient(
    base_url=os.getenv("API_BASE_URL"),
    headers={"Authorization": f"Bearer {os.getenv('API_KEY')}"},
    timeout=30.0
)

@mcp.tool()
async def fetch_data(endpoint: str) -> dict:
    """Fetch data from API."""
    try:
        response = await client.get(endpoint)
        response.raise_for_status()
        return {"success": True, "data": response.json()}
    except httpx.HTTPStatusError as e:
        return {"error": f"HTTP {e.response.status_code}"}
    except Exception as e:
        return {"error": str(e)}
```

### Pattern 2: OpenAPI/Swagger Auto-Generation

```python
from fastmcp import FastMCP
from fastmcp.server.openapi import RouteMap, MCPType
import httpx
import os

API_TOKEN = os.getenv("API_TOKEN")

# Load OpenAPI spec
spec = httpx.get("https://api.example.com/openapi.json").json()

# Create authenticated client
client = httpx.AsyncClient(
    base_url="https://api.example.com",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30.0
)

# Auto-generate MCP server from OpenAPI
mcp = FastMCP.from_openapi(
    openapi_spec=spec,
    client=client,
    name="API Server",
    route_maps=[
        # GET with parameters → Resource Templates
        RouteMap(
            methods=["GET"],
            pattern=r".*\{.*\}.*",
            mcp_type=MCPType.RESOURCE_TEMPLATE
        ),
        # GET without parameters → Resources
        RouteMap(
            methods=["GET"],
            mcp_type=MCPType.RESOURCE
        ),
        # POST/PUT/DELETE → Tools
        RouteMap(
            methods=["POST", "PUT", "DELETE"],
            mcp_type=MCPType.TOOL
        ),
    ]
)

# Optionally add custom tools
@mcp.tool()
async def custom_operation(data: dict) -> dict:
    """Custom tool on top of generated ones."""
    return process_data(data)
```

### Pattern 3: FastAPI Conversion

```python
from fastapi import FastAPI
from fastmcp import FastMCP

# Existing FastAPI app
app = FastAPI()

@app.get("/items/{item_id}")
def get_item(item_id: int):
    return {"id": item_id, "name": "Item"}

# Convert to MCP server
mcp = FastMCP.from_fastapi(
    app=app,
    httpx_client_kwargs={
        "headers": {"Authorization": "Bearer token"}
    }
)
```

## Cloud Deployment (FastMCP Cloud)

### Critical Requirements

**❗️ IMPORTANT:** These requirements are mandatory for FastMCP Cloud:

1. **Module-level server object** named `mcp`, `server`, or `app`
2. **PyPI dependencies only** in requirements.txt
3. **Public GitHub repository** (or accessible to FastMCP Cloud)
4. **Environment variables** for configuration
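Requirement 2 means `requirements.txt` may list published PyPI packages only — a hypothetical minimal example (no `git+` URLs, no local `-e` paths):

```
fastmcp>=2.13.0
httpx
pydantic
```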

### Cloud-Ready Server Pattern

```python
# server.py
from fastmcp import FastMCP
import os

# ✅ CORRECT: Module-level server object
mcp = FastMCP(
    name="production-server"
)

# ✅ Use environment variables
API_KEY = os.getenv("API_KEY")
DATABASE_URL = os.getenv("DATABASE_URL")

@mcp.tool()
async def production_tool(data: str) -> dict:
    """Production-ready tool."""
    if not API_KEY:
        return {"error": "API_KEY not configured"}

    # Your implementation
    return {"status": "success", "data": data}

# ✅ Optional: for local testing
if __name__ == "__main__":
    mcp.run()
```

### Common Cloud Deployment Errors

**❌ WRONG: Function-wrapped server**
```python
def create_server():
    mcp = FastMCP("my-server")
    return mcp

if __name__ == "__main__":
    server = create_server()  # Too late for cloud!
    server.run()
```

**✅ CORRECT: Factory with module export**
```python
def create_server() -> FastMCP:
    mcp = FastMCP("my-server")
    # Complex setup logic
    return mcp

# Export at module level
mcp = create_server()

if __name__ == "__main__":
    mcp.run()
```

### Deployment Steps

1. **Prepare Repository:**
```bash
git init
git add .
git commit -m "Initial MCP server"
gh repo create my-mcp-server --public
git push -u origin main
```

2. **Deploy on FastMCP Cloud:**
   - Visit https://fastmcp.cloud
   - Sign in with GitHub
   - Click "Create Project"
   - Select your repository
   - Configure:
     - **Server Name**: Your project name
     - **Entrypoint**: `server.py`
     - **Environment Variables**: Add any needed

3. **Access Your Server:**
   - URL: `https://your-project.fastmcp.app/mcp`
   - Automatic deployment on push to main
   - PR preview deployments

## Client Configuration

### Claude Desktop

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "my-server": {
      "url": "https://your-project.fastmcp.app/mcp",
      "transport": "http"
    }
  }
}
```

### Local Development

```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": {
        "API_KEY": "your-key",
        "DATABASE_URL": "your-db-url"
      }
    }
  }
}
```

### Claude Code CLI

```json
{
  "mcpServers": {
    "my-server": {
      "command": "uv",
      "args": ["run", "python", "/absolute/path/to/server.py"]
    }
  }
}
```

## 25 Common Errors (With Solutions)

### Error 1: Missing Server Object

**Error:**
```
RuntimeError: No server object found at module level
```

**Cause:** Server object not exported at module level (FastMCP Cloud requirement)

**Solution:**
```python
# ❌ WRONG
def create_server():
    return FastMCP("server")

# ✅ CORRECT
mcp = FastMCP("server")  # At module level
```

**Source:** FastMCP Cloud documentation, deployment failures

---

### Error 2: Async/Await Confusion

**Error:**
```
RuntimeError: no running event loop
RuntimeWarning: coroutine 'async_function' was never awaited
```

**Cause:** Mixing sync/async incorrectly

**Solution:**
```python
# ❌ WRONG: Sync function using await
@mcp.tool()
def bad_tool():
    result = await async_function()  # SyntaxError: 'await' outside async function

# ✅ CORRECT: Async tool
@mcp.tool()
async def good_tool():
    result = await async_function()
    return result

# ✅ CORRECT: Sync tool with sync code
@mcp.tool()
def sync_tool():
    return "Hello"
```

**Source:** GitHub issues #156, #203

---

### Error 3: Context Not Injected

**Error:**
```
TypeError: missing 1 required positional argument: 'context'
```

**Cause:** Missing `Context` type annotation for context parameter

**Solution:**
```python
from fastmcp import Context

# ❌ WRONG: No type hint
@mcp.tool()
async def bad_tool(context):  # Missing type!
    await context.report_progress(...)

# ✅ CORRECT: Proper type hint
@mcp.tool()
async def good_tool(context: Context):
    await context.report_progress(0, 100, "Starting")
```

**Source:** FastMCP v2 migration guide

---

### Error 4: Resource URI Syntax

**Error:**
```
ValueError: Invalid resource URI: missing scheme
```

**Cause:** Resource URI missing scheme prefix

**Solution:**
```python
# ❌ WRONG: Missing scheme
@mcp.resource("config")
def get_config(): pass

# ✅ CORRECT: Include scheme
@mcp.resource("data://config")
def get_config(): pass

# ✅ Valid schemes
@mcp.resource("file://config.json")
@mcp.resource("api://status")
@mcp.resource("info://health")
```
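A quick stdlib check illustrates the scheme requirement (illustrative only — FastMCP performs its own validation internally):

```python
from urllib.parse import urlsplit

# A bare name has no scheme; a valid resource URI does.
print(urlsplit("config").scheme)         # → ''
print(urlsplit("data://config").scheme)  # → 'data'
```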

**Source:** MCP Protocol specification

---

### Error 5: Resource Template Parameter Mismatch

**Error:**
```
TypeError: get_user() missing 1 required positional argument: 'user_id'
```

**Cause:** Function parameter names don't match URI template

**Solution:**
```python
# ❌ WRONG: Parameter name mismatch
@mcp.resource("user://{user_id}/profile")
def get_user(id: str):  # Wrong name!
    pass

# ✅ CORRECT: Matching names
@mcp.resource("user://{user_id}/profile")
def get_user(user_id: str):  # Matches {user_id}
    return {"id": user_id}
```
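The placeholder names can be pulled out of a URI template with a simple regex — a sketch of why the function parameter must be named exactly `user_id` (FastMCP's actual matching is internal; this only illustrates the naming rule):

```python
import re

# Extract placeholder names from a URI template.
template = "user://{user_id}/profile"
params = re.findall(r"\{(\w+)\}", template)
print(params)  # → ['user_id']
```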

**Source:** FastMCP patterns documentation

---

### Error 6: Pydantic Validation Error

**Error:**
```
ValidationError: value is not a valid integer
```

**Cause:** Type hints don't match provided data

**Solution:**
```python
from pydantic import BaseModel, Field

# ✅ Use Pydantic models for complex validation
class SearchParams(BaseModel):
    query: str = Field(min_length=1, max_length=100)
    limit: int = Field(default=10, ge=1, le=100)

@mcp.tool()
async def search(params: SearchParams) -> dict:
    # Validation automatic
    return await perform_search(params.query, params.limit)
```

**Source:** Pydantic documentation, FastMCP examples

---

### Error 7: Transport/Protocol Mismatch

**Error:**
```
ConnectionError: Server using different transport
```

**Cause:** Client and server using incompatible transports

**Solution:**
```python
# Server using stdio (default)
mcp.run()  # or mcp.run(transport="stdio")

# Client configuration must match
{
  "command": "python",
  "args": ["server.py"]
}

# OR for HTTP:
mcp.run(transport="http", port=8000)

# Client:
{
  "url": "http://localhost:8000/mcp",
  "transport": "http"
}
```

**Source:** MCP transport specification

---

### Error 8: Import Errors (Editable Package)

**Error:**
```
ModuleNotFoundError: No module named 'my_package'
```

**Cause:** Package not properly installed in editable mode

**Solution:**
```bash
# ✅ Install in editable mode
pip install -e .

# ✅ Or use absolute imports
from src.tools import my_tool

# ✅ Or add to PYTHONPATH
export PYTHONPATH="${PYTHONPATH}:/path/to/project"
```

**Source:** Python packaging documentation

---

### Error 9: Deprecation Warnings

**Error:**
```
DeprecationWarning: 'mcp.settings' is deprecated, use global Settings instead
```

**Cause:** Using old FastMCP v1 API

**Solution:**
```python
# ❌ OLD: FastMCP v1
from fastmcp import FastMCP
mcp = FastMCP()
api_key = mcp.settings.get("API_KEY")

# ✅ NEW: FastMCP v2
import os
api_key = os.getenv("API_KEY")
```

**Source:** FastMCP v2 migration guide

---

### Error 10: Port Already in Use

**Error:**
```
OSError: [Errno 48] Address already in use
```

**Cause:** Port 8000 already occupied

**Solution:**
```bash
# ✅ Use different port
python server.py --transport http --port 8001

# ✅ Or kill process on port
lsof -ti:8000 | xargs kill -9
```
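Alternatively, let the OS pick an unused port at startup (plain stdlib, independent of FastMCP):

```python
import socket

# Bind to port 0 and the OS assigns an unused ephemeral port.
with socket.socket() as s:
    s.bind(("127.0.0.1", 0))
    free_port = s.getsockname()[1]

print(free_port)  # an unused port number
```

The result can then be passed to `mcp.run(transport="http", port=free_port)`.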

**Source:** Common networking issue

---

### Error 11: Schema Generation Failures

**Error:**
```
TypeError: Object of type 'ndarray' is not JSON serializable
```

**Cause:** Unsupported type hints (NumPy arrays, custom classes)

**Solution:**
```python
# ❌ WRONG: NumPy array
import numpy as np

@mcp.tool()
def bad_tool() -> np.ndarray:  # Not JSON serializable
    return np.array([1, 2, 3])

# ✅ CORRECT: Use JSON-compatible types
@mcp.tool()
def good_tool() -> list[float]:
    return [1.0, 2.0, 3.0]

# ✅ Or convert to dict
@mcp.tool()
def array_tool() -> dict:
    data = np.array([1, 2, 3])
    return {"values": data.tolist()}
```

**Source:** JSON serialization requirements

---

### Error 12: JSON Serialization

**Error:**
```
TypeError: Object of type 'datetime' is not JSON serializable
```

**Cause:** Returning non-JSON-serializable objects

**Solution:**
```python
from datetime import datetime

# ❌ WRONG: Return datetime object
@mcp.tool()
def bad_tool() -> dict:
    return {"timestamp": datetime.now()}  # Not serializable

# ✅ CORRECT: Convert to string
@mcp.tool()
def good_tool() -> dict:
    return {"timestamp": datetime.now().isoformat()}

# ✅ Use helper function
def make_serializable(obj):
    """Convert object to JSON-serializable format."""
    if isinstance(obj, datetime):
        return obj.isoformat()
    elif isinstance(obj, bytes):
        return obj.decode('utf-8')
    # Add more conversions as needed
    return obj
```
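The helper plugs straight into the stdlib `default=` hook of `json.dumps`. Note that in hook form it should raise `TypeError` for truly unknown types rather than return them unchanged, otherwise `json` cannot make progress:

```python
import json
from datetime import datetime, timezone

def make_serializable(obj):
    """Fallback converter for types json can't handle natively."""
    if isinstance(obj, datetime):
        return obj.isoformat()
    if isinstance(obj, bytes):
        return obj.decode("utf-8")
    raise TypeError(f"Not serializable: {type(obj)!r}")

payload = {"timestamp": datetime(2025, 1, 1, tzinfo=timezone.utc)}
encoded = json.dumps(payload, default=make_serializable)
print(encoded)  # → {"timestamp": "2025-01-01T00:00:00+00:00"}
```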

**Source:** JSON specification

---

### Error 13: Circular Import Errors

**Error:**
```
ImportError: cannot import name 'X' from partially initialized module
```

**Cause:** Modules import from each other creating circular dependency (common in cloud deployment)

**Solution:**
```python
# ❌ WRONG: Factory function in __init__.py
# shared/__init__.py
_client = None
def get_api_client():
    from .api_client import APIClient  # Circular!
    return APIClient()

# shared/monitoring.py
from . import get_api_client  # Creates circle

# ✅ CORRECT: Direct imports
# shared/__init__.py
from .api_client import APIClient
from .cache import CacheManager

# shared/monitoring.py
from .api_client import APIClient
client = APIClient()  # Create directly

# ✅ ALTERNATIVE: Lazy import
# shared/monitoring.py
def get_client():
    from .api_client import APIClient
    return APIClient()
```

**Source:** Production cloud deployment errors, Python import system

---

### Error 14: Python Version Compatibility

**Error:**
```
DeprecationWarning: datetime.utcnow() is deprecated
```

**Cause:** Using deprecated Python 3.12+ methods

**Solution:**
```python
# ❌ DEPRECATED (Python 3.12+)
from datetime import datetime
timestamp = datetime.utcnow()

# ✅ CORRECT: Future-proof
from datetime import datetime, timezone
timestamp = datetime.now(timezone.utc)
```

**Source:** Python 3.12 release notes

---

### Error 15: Import-Time Execution

**Error:**
```
RuntimeError: Event loop is closed
```

**Cause:** Creating async resources at module import time

**Solution:**
```python
# ❌ WRONG: Module-level async execution
import asyncpg
connection = asyncpg.connect('postgresql://...')  # Runs at import!

# ✅ CORRECT: Lazy initialization
import asyncpg

class Database:
    connection = None

    @classmethod
    async def connect(cls):
        if cls.connection is None:
            cls.connection = await asyncpg.connect('postgresql://...')
        return cls.connection

# Usage: connection happens when needed, not at import
@mcp.tool()
async def get_users():
    conn = await Database.connect()
    return await conn.fetch("SELECT * FROM users")
```

**Source:** Async event loop management, cloud deployment requirements

---

### Error 16: Storage Backend Not Configured

**Error:**
```
RuntimeError: OAuth tokens lost on restart
ValueError: Cache not persisting across server instances
```

**Cause:** Using default memory storage in production without persistence

**Solution:**
```python
# ❌ WRONG: Memory storage in production
mcp = FastMCP("Production Server")  # Tokens lost on restart!

# ✅ CORRECT: Use disk or Redis storage
import os

from key_value.stores import DiskStore, RedisStore
from key_value.encryption import FernetEncryptionWrapper
from cryptography.fernet import Fernet

# Disk storage (single instance)
mcp = FastMCP(
    "Production Server",
    storage=FernetEncryptionWrapper(
        key_value=DiskStore(path="/var/lib/mcp/storage"),
        fernet=Fernet(os.getenv("STORAGE_ENCRYPTION_KEY"))
    )
)

# Redis storage (multi-instance)
mcp = FastMCP(
    "Production Server",
    storage=FernetEncryptionWrapper(
        key_value=RedisStore(
            host=os.getenv("REDIS_HOST"),
            password=os.getenv("REDIS_PASSWORD")
        ),
        fernet=Fernet(os.getenv("STORAGE_ENCRYPTION_KEY"))
    )
)
```

**Source:** FastMCP v2.13.0 storage backends documentation

---

### Error 17: Lifespan Not Passed to ASGI App

**Error:**
```
RuntimeError: Database connection never initialized
Warning: MCP lifespan hooks not running
```

**Cause:** Using FastMCP with FastAPI/Starlette without passing lifespan

**Solution:**
```python
from fastapi import FastAPI
from fastmcp import FastMCP

# ❌ WRONG: Lifespan not passed
mcp = FastMCP("My Server", lifespan=my_lifespan)
app = FastAPI()  # MCP lifespan won't run!

# ✅ CORRECT: Pass MCP lifespan to parent app
mcp = FastMCP("My Server", lifespan=my_lifespan)
app = FastAPI(lifespan=mcp.lifespan)
```

**Source:** FastMCP v2.13.0 breaking changes, ASGI integration guide

---

### Error 18: Middleware Execution Order Error

**Error:**
```
RuntimeError: Rate limit not checked before caching
AttributeError: Context state not available in middleware
```

**Cause:** Incorrect middleware ordering (order matters!)

**Solution:**
```python
# ❌ WRONG: Cache before rate limiting
mcp.add_middleware(ResponseCachingMiddleware())
mcp.add_middleware(RateLimitingMiddleware())  # Too late!

# ✅ CORRECT: Rate limit before cache
mcp.add_middleware(ErrorHandlingMiddleware())  # First: catch errors
mcp.add_middleware(TimingMiddleware())         # Second: time requests
mcp.add_middleware(LoggingMiddleware())        # Third: log
mcp.add_middleware(RateLimitingMiddleware())   # Fourth: check limits
mcp.add_middleware(ResponseCachingMiddleware()) # Last: cache
```

**Source:** FastMCP middleware documentation, best practices

---

### Error 19: Circular Middleware Dependencies

**Error:**
```
RecursionError: maximum recursion depth exceeded
RuntimeError: Middleware loop detected
```

**Cause:** Middleware calling `self.next()` incorrectly or circular dependencies

**Solution:**
```python
# ❌ WRONG: Not calling next() or calling incorrectly
class BadMiddleware(BaseMiddleware):
    async def on_call_tool(self, tool_name, arguments, context):
        # Forgot to call next()!
        return {"error": "blocked"}

# ✅ CORRECT: Always call next() to continue chain
class GoodMiddleware(BaseMiddleware):
    async def on_call_tool(self, tool_name, arguments, context):
        # Do preprocessing
        print(f"Before: {tool_name}")

        # MUST call next() to continue
        result = await self.next(tool_name, arguments, context)

        # Do postprocessing
        print(f"After: {tool_name}")
        return result
```

**Source:** FastMCP middleware system documentation

---

### Error 20: Import vs Mount Confusion

**Error:**
```
RuntimeError: Subserver changes not reflected
ValueError: Unexpected tool namespacing
```

**Cause:** Using `import_server()` when `mount()` was needed (or vice versa)

**Solution:**
```python
# ❌ WRONG: Using import when you want dynamic updates
main_server.import_server(subserver)
# Later: changes to subserver won't appear in main_server

# ✅ CORRECT: Use mount() for dynamic composition
main_server.mount(subserver, prefix="sub")
# Changes to subserver are immediately visible

# ❌ WRONG: Using mount when you want static bundle
main_server.mount(third_party_server, prefix="vendor")
# Runtime overhead for static components

# ✅ CORRECT: Use import_server() for static bundles
main_server.import_server(third_party_server)
# One-time copy, no runtime delegation
```

**Source:** FastMCP server composition patterns

---

### Error 21: Resource Prefix Format Mismatch

**Error:**
```
ValueError: Resource not found: resource://api/users
ValueError: Unexpected resource URI format
```

**Cause:** Using wrong resource prefix format (path vs protocol)

**Solution:**
```python
# Path format (default since v2.4.0)
main_server.mount(api_server, prefix="api")
# Resources: resource://api/users

# ❌ WRONG: Expecting protocol format
# resource://api+users (doesn't exist)

# ✅ CORRECT: Use path format
uri = "resource://api/users"

# OR explicitly set protocol format (legacy)
main_server.mount(
    api_server,
    prefix="api",
    resource_prefix_format="protocol"
)
# Resources: api+resource://users
```

**Source:** FastMCP v2.4.0+ resource prefix changes

---

### Error 22: OAuth Proxy Without Consent Screen

**Error:**
```
SecurityWarning: Authorization bypass possible
RuntimeError: Confused deputy attack vector
```

**Cause:** OAuth Proxy configured without consent screen (security vulnerability)

**Solution:**
```python
# ❌ WRONG: No consent screen (security risk!)
auth = OAuthProxy(
    jwt_signing_key=os.getenv("JWT_KEY"),
    upstream_authorization_endpoint="...",
    upstream_token_endpoint="...",
    # Missing: enable_consent_screen
)

# ✅ CORRECT: Enable consent screen
auth = OAuthProxy(
    jwt_signing_key=os.getenv("JWT_KEY"),
    upstream_authorization_endpoint="...",
    upstream_token_endpoint="...",
    enable_consent_screen=True  # Prevents bypass attacks
)
```

**Source:** FastMCP v2.13.0 OAuth security enhancements, RFC 7662

---

### Error 23: Missing JWT Signing Key in Production

**Error:**
```
ValueError: JWT signing key required for OAuth Proxy
RuntimeError: Cannot issue tokens without signing key
```

**Cause:** OAuth Proxy missing `jwt_signing_key` in production

**Solution:**
```python
# ❌ WRONG: No JWT signing key
auth = OAuthProxy(
    upstream_authorization_endpoint="...",
    upstream_token_endpoint="...",
    # Missing: jwt_signing_key
)

# ✅ CORRECT: Provide signing key from environment
import os
import secrets

# Generate once (in setup):
# signing_key = secrets.token_urlsafe(32)
# Store in: FASTMCP_JWT_SIGNING_KEY environment variable

auth = OAuthProxy(
    jwt_signing_key=os.environ["FASTMCP_JWT_SIGNING_KEY"],
    client_storage=encrypted_storage,  # e.g. the encrypted store from Error 16
    upstream_authorization_endpoint="...",
    upstream_token_endpoint="...",
    upstream_client_id=os.getenv("OAUTH_CLIENT_ID"),
    upstream_client_secret=os.getenv("OAUTH_CLIENT_SECRET")
)
```

**Source:** OAuth Proxy production requirements

---

### Error 24: Icon Data URI Format Error

**Error:**
```
ValueError: Invalid data URI format
TypeError: Icon URL must be string or data URI
```

**Cause:** Incorrectly formatted data URI for icons

**Solution:**
```python
from fastmcp import Icon, Image

# ❌ WRONG: Invalid data URI
icon = Icon(url="base64,iVBORw0KG...")  # Missing data:image/png;

# ✅ CORRECT: Use Image utility
icon = Icon.from_file("/path/to/icon.png", size="medium")

# ✅ CORRECT: Manual data URI
import base64

with open("/path/to/icon.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode()
    data_uri = f"data:image/png;base64,{image_data}"
    icon = Icon(url=data_uri, size="medium")
```

**Source:** FastMCP icons documentation, Data URI specification

---

### Error 25: Lifespan Behavior Change (v2.13.0)

**Error:**
```
Warning: Lifespan runs per-server, not per-session
RuntimeError: Resources initialized multiple times
```

**Cause:** Expecting v2.12 lifespan behavior (per-session) in v2.13.0+ (per-server)

**Solution:**
```python
# v2.12.0 and earlier: Lifespan ran per client session
# v2.13.0+: Lifespan runs once per server instance

# ✅ CORRECT: v2.13.0+ pattern (per-server)
@asynccontextmanager
async def app_lifespan(server: FastMCP):
    """Runs ONCE when server starts, not per client session."""
    db = await Database.connect()
    print("Server starting - runs once")

    try:
        yield {"db": db}
    finally:
        await db.disconnect()
        print("Server stopping - runs once")

mcp = FastMCP("My Server", lifespan=app_lifespan)

# For per-session logic, use middleware instead:
import uuid

class SessionMiddleware(BaseMiddleware):
    async def on_message(self, message, context):
        # Runs per client message
        session_id = context.fastmcp_context.get_state("session_id")
        if not session_id:
            session_id = str(uuid.uuid4())
            context.fastmcp_context.set_state("session_id", session_id)

        return await self.next(message, context)
```

**Source:** FastMCP v2.13.0 release notes, breaking changes documentation

---

## Production Patterns

### Pattern 1: Self-Contained Utils Module

Best practice for maintaining all utilities in one place:

```python
# src/utils.py - Single file with all utilities
import os
from typing import Dict, Any
from datetime import datetime

class Config:
    """Application configuration."""
    SERVER_NAME = os.getenv("SERVER_NAME", "FastMCP Server")
    SERVER_VERSION = "1.0.0"
    API_BASE_URL = os.getenv("API_BASE_URL")
    API_KEY = os.getenv("API_KEY")
    CACHE_TTL = int(os.getenv("CACHE_TTL", "300"))

def format_success(data: Any, message: str = "Success") -> Dict[str, Any]:
    """Format successful response."""
    return {
        "success": True,
        "message": message,
        "data": data,
        "timestamp": datetime.now().isoformat()
    }

def format_error(error: str, code: str = "ERROR") -> Dict[str, Any]:
    """Format error response."""
    return {
        "success": False,
        "error": error,
        "code": code,
        "timestamp": datetime.now().isoformat()
    }

# Usage in tools
from .utils import format_success, format_error, Config

@mcp.tool()
async def process_data(data: dict) -> dict:
    try:
        result = await process(data)
        return format_success(result)
    except Exception as e:
        return format_error(str(e))
```
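The response envelope is plain JSON-friendly data and can be sanity-checked in isolation (standalone copy of `format_error`, with hypothetical inputs):

```python
from datetime import datetime

def format_error(error, code="ERROR"):
    """Same envelope shape as utils.format_error above."""
    return {
        "success": False,
        "error": error,
        "code": code,
        "timestamp": datetime.now().isoformat(),
    }

resp = format_error("upstream timeout", code="E_TIMEOUT")
print(resp["success"], resp["code"])  # → False E_TIMEOUT
```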

### Pattern 2: Connection Pooling

Efficient resource management:

```python
import httpx
import os
from typing import Optional

class APIClient:
    _instance: Optional[httpx.AsyncClient] = None

    @classmethod
    async def get_client(cls) -> httpx.AsyncClient:
        if cls._instance is None:
            cls._instance = httpx.AsyncClient(
                base_url=os.getenv("API_BASE_URL"),
                headers={"Authorization": f"Bearer {os.getenv('API_KEY')}"},
                timeout=httpx.Timeout(30.0),
                limits=httpx.Limits(max_keepalive_connections=5)
            )
        return cls._instance

    @classmethod
    async def cleanup(cls):
        if cls._instance:
            await cls._instance.aclose()
            cls._instance = None

@mcp.tool()
async def api_request(endpoint: str) -> dict:
    """Make API request with managed client."""
    client = await APIClient.get_client()
    response = await client.get(endpoint)
    return response.json()
```

### Pattern 3: Error Handling with Retry

Resilient API calls:

```python
import asyncio
import httpx
from typing import Awaitable, Callable, TypeVar

T = TypeVar('T')

async def retry_with_backoff(
    func: Callable[[], Awaitable[T]],
    max_retries: int = 3,
    initial_delay: float = 1.0,
    exponential_base: float = 2.0
) -> T:
    """Retry function with exponential backoff."""
    delay = initial_delay
    last_exception = None

    for attempt in range(max_retries):
        try:
            return await func()
        except Exception as e:
            last_exception = e
            if attempt < max_retries - 1:
                await asyncio.sleep(delay)
                delay *= exponential_base

    raise last_exception

@mcp.tool()
async def resilient_api_call(endpoint: str) -> dict:
    """API call with automatic retry."""
    async def make_call():
        async with httpx.AsyncClient() as client:
            response = await client.get(endpoint)
            response.raise_for_status()
            return response.json()

    try:
        data = await retry_with_backoff(make_call)
        return {"success": True, "data": data}
    except Exception as e:
        return {"error": f"Failed after retries: {e}"}
```
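A deterministic, self-contained run of the backoff logic — shortened delays and a hypothetical flaky coroutine that fails twice before succeeding:

```python
import asyncio

async def retry_with_backoff(func, max_retries=3, initial_delay=0.01,
                             exponential_base=2.0):
    """Same logic as the helper above, with tiny delays for the demo."""
    delay = initial_delay
    last_exception = None
    for attempt in range(max_retries):
        try:
            return await func()
        except Exception as e:
            last_exception = e
            if attempt < max_retries - 1:
                await asyncio.sleep(delay)
                delay *= exponential_base
    raise last_exception

attempts = {"n": 0}

async def flaky():
    # Fails on the first two calls, succeeds on the third.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = asyncio.run(retry_with_backoff(flaky))
print(result, attempts["n"])  # → ok 3
```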

### Pattern 4: Time-Based Caching

Reduce API load:

```python
import time
from typing import Any, Optional

class TimeBasedCache:
    def __init__(self, ttl: int = 300):
        self.ttl = ttl
        self.cache = {}
        self.timestamps = {}

    def get(self, key: str) -> Optional[Any]:
        if key in self.cache:
            if time.time() - self.timestamps[key] < self.ttl:
                return self.cache[key]
            else:
                del self.cache[key]
                del self.timestamps[key]
        return None

    def set(self, key: str, value: Any):
        self.cache[key] = value
        self.timestamps[key] = time.time()

cache = TimeBasedCache(ttl=300)

@mcp.tool()
async def cached_fetch(resource_id: str) -> dict:
    """Fetch with caching."""
    cache_key = f"resource:{resource_id}"

    cached_data = cache.get(cache_key)
    if cached_data:
        return {"data": cached_data, "from_cache": True}

    data = await fetch_from_api(resource_id)
    cache.set(cache_key, data)

    return {"data": data, "from_cache": False}
```
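The TTL behaviour can be exercised standalone with a fractional-second TTL (demo values only; production would use the 300-second default):

```python
import time

class TimeBasedCache:
    """Compact version of the cache above; TTL in seconds."""
    def __init__(self, ttl=300):
        self.ttl = ttl
        self.cache = {}
        self.timestamps = {}

    def get(self, key):
        if key in self.cache and time.time() - self.timestamps[key] < self.ttl:
            return self.cache[key]
        # Expired or absent: drop any stale entry.
        self.cache.pop(key, None)
        self.timestamps.pop(key, None)
        return None

    def set(self, key, value):
        self.cache[key] = value
        self.timestamps[key] = time.time()

cache = TimeBasedCache(ttl=0.05)
cache.set("resource:42", {"name": "Item"})
hit = cache.get("resource:42")
print(hit)   # hit within TTL
time.sleep(0.1)
miss = cache.get("resource:42")
print(miss)  # → None (expired)
```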

## Testing

### Unit Testing Tools

```python
import pytest
from fastmcp import FastMCP
from fastmcp.testing import create_test_client

@pytest.fixture
def test_server():
    """Create test server instance."""
    mcp = FastMCP("test-server")

    @mcp.tool()
    async def test_tool(param: str) -> str:
        return f"Result: {param}"

    return mcp

@pytest.mark.asyncio
async def test_tool_execution(test_server):
    """Test tool execution."""
    async with create_test_client(test_server) as client:
        result = await client.call_tool("test_tool", {"param": "test"})
        assert result.data == "Result: test"
```

### Integration Testing

```python
import asyncio
from fastmcp import Client

async def test_server():
    """Test all server functionality."""
    async with Client("server.py") as client:
        # Test tools
        tools = await client.list_tools()
        print(f"Tools: {len(tools)}")

        for tool in tools:
            try:
                result = await client.call_tool(tool.name, {})
                print(f"✓ {tool.name}: {result}")
            except Exception as e:
                print(f"✗ {tool.name}: {e}")

        # Test resources
        resources = await client.list_resources()
        for resource in resources:
            try:
                data = await client.read_resource(resource.uri)
                print(f"✓ {resource.uri}")
            except Exception as e:
                print(f"✗ {resource.uri}: {e}")

if __name__ == "__main__":
    asyncio.run(test_server())
```

## CLI Commands

**Development:**
```bash
# Run with inspector (recommended)
fastmcp dev server.py

# Run normally
fastmcp run server.py

# Inspect server without running
fastmcp inspect server.py
```

**Installation:**
```bash
# Install to Claude Desktop
fastmcp install server.py

# Install with custom name
fastmcp install server.py --name "My Server"
```

**Debugging:**
```bash
# Enable debug logging
FASTMCP_LOG_LEVEL=DEBUG fastmcp dev server.py

# Run with HTTP transport
fastmcp run server.py --transport http --port 8000
```

## Best Practices

### 1. Server Structure

```python
from fastmcp import FastMCP
import os

def create_server() -> FastMCP:
    """Factory function for complex setup."""
    mcp = FastMCP("Server Name")

    # Configure server
    setup_tools(mcp)
    setup_resources(mcp)

    return mcp

def setup_tools(mcp: FastMCP):
    """Register all tools."""
    @mcp.tool()
    def example_tool():
        pass

def setup_resources(mcp: FastMCP):
    """Register all resources."""
    @mcp.resource("data://config")
    def get_config():
        return {"version": "1.0.0"}

# Export at module level
mcp = create_server()

if __name__ == "__main__":
    mcp.run()
```

### 2. Environment Configuration

```python
import os
from dotenv import load_dotenv

load_dotenv()

class Config:
    API_KEY = os.getenv("API_KEY", "")
    BASE_URL = os.getenv("BASE_URL", "https://api.example.com")
    DEBUG = os.getenv("DEBUG", "false").lower() == "true"

    @classmethod
    def validate(cls):
        if not cls.API_KEY:
            raise ValueError("API_KEY is required")
        return True

# Validate on startup
Config.validate()
```

### 3. Documentation

```python
@mcp.tool()
def complex_tool(
    query: str,
    filters: dict | None = None,
    limit: int = 10
) -> dict:
    """
    Search with advanced filtering.

    Args:
        query: Search query string
        filters: Optional filters dict with keys:
            - category: Filter by category
            - date_from: Start date (ISO format)
            - date_to: End date (ISO format)
        limit: Maximum results (1-100)

    Returns:
        Dict with 'results' list and 'total' count

    Examples:
        >>> complex_tool("python", {"category": "tutorial"}, 5)
        {'results': [...], 'total': 5}
    """
    pass
```

### 4. Health Checks

```python
@mcp.resource("health://status")
async def health_check() -> dict:
    """Comprehensive health check."""
    checks = {}

    # Check API connectivity
    try:
        async with httpx.AsyncClient() as client:
            response = await client.get(f"{BASE_URL}/health", timeout=5)
            checks["api"] = response.status_code == 200
    except Exception:
        checks["api"] = False

    # Check database
    try:
        checks["database"] = await check_db_connection()
    except Exception:
        checks["database"] = False

    all_healthy = all(checks.values())

    return {
        "status": "healthy" if all_healthy else "degraded",
        "timestamp": datetime.now().isoformat(),
        "checks": checks
    }
```

## Project Structure

### Simple Server

```
my-mcp-server/
├── server.py          # Main server file
├── requirements.txt   # Dependencies
├── .env              # Environment variables (git-ignored)
├── .gitignore        # Git ignore file
└── README.md         # Documentation
```

### Production Server

```
my-mcp-server/
├── src/
│   ├── server.py         # Main entry point
│   ├── utils.py          # Shared utilities
│   ├── tools/           # Tool modules
│   │   ├── __init__.py
│   │   ├── api_tools.py
│   │   └── data_tools.py
│   ├── resources/       # Resource definitions
│   │   ├── __init__.py
│   │   └── static.py
│   └── prompts/         # Prompt templates
│       ├── __init__.py
│       └── templates.py
├── tests/
│   ├── test_tools.py
│   └── test_resources.py
├── requirements.txt
├── pyproject.toml
├── .env
├── .gitignore
└── README.md
```

## References

**Official Documentation:**
- FastMCP: https://github.com/jlowin/fastmcp
- FastMCP Cloud: https://fastmcp.cloud
- MCP Protocol: https://modelcontextprotocol.io
- Context7 Docs: `/jlowin/fastmcp`

**Related Skills:**
- `openai-api` - OpenAI integration
- `claude-api` - Claude API
- `cloudflare-worker-base` - Deploy MCP as Worker

**Package Versions:**
- fastmcp >= 2.13.0
- Python >= 3.10
- httpx (recommended for async API calls)
- pydantic (for validation)
- py-key-value-aio (for storage backends)
- cryptography (for encrypted storage)

## Summary

FastMCP enables rapid development of production-ready MCP servers with advanced features for storage, authentication, middleware, and composition. Key takeaways:

1. **Always export server at module level** for FastMCP Cloud compatibility
2. **Use persistent storage backends** (Disk/Redis) in production for OAuth tokens and caching
3. **Configure server lifespans** for proper resource management (DB connections, API clients)
4. **Add middleware strategically** - order matters! (errors → timing → logging → rate limiting → caching)
5. **Choose composition wisely** - `import_server()` for static bundles, `mount()` for dynamic composition
6. **Secure OAuth properly** - Enable consent screens, encrypt token storage, use JWT signing keys
7. **Use async/await properly** - don't block the event loop
8. **Handle errors gracefully** with structured responses and ErrorHandlingMiddleware
9. **Avoid circular imports** especially with factory functions
10. **Test locally before deploying** using `fastmcp dev`
11. **Use environment variables** for all configuration (never hardcode secrets)
12. **Document thoroughly** - LLMs read your docstrings
13. **Follow production patterns** for self-contained, maintainable code
14. **Leverage OpenAPI** for instant API integration
15. **Monitor with health checks** and middleware for production reliability

**Production Readiness:**
- **Storage**: Encrypted persistence for OAuth tokens and response caching
- **Authentication**: 4 auth patterns (Token Validation, Remote OAuth, OAuth Proxy, Full OAuth)
- **Middleware**: 8 built-in types for logging, rate limiting, caching, error handling
- **Composition**: Modular server architecture with import/mount strategies
- **Security**: Consent screens, PKCE, RFC 7662 token introspection, encrypted storage
- **Performance**: Response caching, connection pooling, timing middleware

This skill prevents 25+ common errors and provides 90-95% token savings compared to manual implementation.


---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### README.md

```markdown
# fastmcp

> Build MCP (Model Context Protocol) servers in Python with FastMCP

## What This Skill Does

This skill provides production-tested patterns, templates, and error prevention for building MCP servers with FastMCP in Python. It covers:

- **Server Creation**: Tools, resources, resource templates, and prompts
- **Storage Backends**: Memory, Disk, Redis, DynamoDB with encrypted persistence
- **Server Lifespans**: Resource management for DB connections and API clients
- **Middleware System**: 8 built-in types (logging, rate limiting, caching, error handling, timing)
- **Server Composition**: Modular architecture with import/mount strategies
- **Authentication**: 4 patterns (Token Validation, Remote OAuth, OAuth Proxy, Full OAuth)
- **OAuth Proxy**: Bridge to GitHub, Google, Azure, AWS, Discord, Facebook
- **Icons Support**: Visual representations for better UX
- **API Integration**: OpenAPI/Swagger auto-generation, FastAPI conversion, manual integration
- **Cloud Deployment**: FastMCP Cloud requirements and common pitfalls
- **Error Prevention**: 25 documented errors with solutions
- **Production Patterns**: Self-contained architecture, connection pooling, caching, retry logic
- **Context Features**: Elicitation, progress tracking, sampling, state management
- **Testing**: Unit and integration testing patterns
- **Client Configuration**: Claude Desktop, Claude Code CLI

## When to Use This Skill

**Use this skill when you need to:**
- Build an MCP server to expose tools/resources/prompts to LLMs
- Configure persistent storage for OAuth tokens or response caching
- Set up server lifespans for database connections or API client pooling
- Add middleware for logging, rate limiting, caching, or error handling
- Compose modular servers with import/mount strategies
- Implement OAuth authentication (GitHub, Google, Azure, AWS, Discord, Facebook)
- Secure MCP servers with JWT verification or OAuth Proxy
- Add icons to servers, tools, resources, or prompts
- Integrate an external API with Claude (via MCP)
- Deploy an MCP server to FastMCP Cloud
- Convert an OpenAPI/Swagger spec to MCP
- Convert a FastAPI app to MCP
- Wrap a database, file system, or service for LLM access
- Debug MCP server errors (storage, lifespan, middleware, OAuth, circular imports)
- Test MCP servers with FastMCP Client
- Implement elicitation (user input during execution)
- Add progress tracking to long-running operations
- Use sampling (LLM completions from within tools)
- Manage server state with context

**Don't use this skill if:**
- You're building an MCP *client* (not server)
- You're using a different MCP framework (not FastMCP)
- You're working in a language other than Python
- You're building with Anthropic's TypeScript SDK for MCP

## Auto-Trigger Keywords

This skill should automatically trigger when you mention:

### Primary Keywords
- `fastmcp`, `fast mcp`, `FastMCP`
- `MCP server`, `mcp server`, `MCP server python`, `python mcp server`
- `model context protocol`, `model context protocol python`
- `mcp tools`, `mcp resources`, `mcp prompts`
- `mcp integration`, `mcp framework`

### Use Case Keywords
- `build mcp server`, `create mcp server`, `make mcp server`
- `python mcp`, `mcp python`, `mcp with python`
- `integrate api with claude`, `expose api to llm`, `api for claude`
- `openapi to mcp`, `swagger to mcp`, `fastapi to mcp`
- `mcp cloud`, `fastmcp cloud`, `deploy mcp`
- `mcp testing`, `test mcp server`

### Storage & Persistence Keywords
- `mcp storage`, `fastmcp storage backends`, `persistent storage mcp`
- `redis storage mcp`, `disk storage mcp`, `encrypted storage`
- `oauth token storage`, `cache persistence mcp`
- `py-key-value-aio`, `fernet encryption mcp`

### Middleware Keywords
- `mcp middleware`, `fastmcp middleware`, `middleware system`
- `rate limiting mcp`, `response caching mcp`, `logging middleware`
- `timing middleware`, `error handling middleware`
- `middleware order`, `middleware hooks`

### Authentication Keywords
- `oauth mcp`, `oauth proxy mcp`, `jwt verification mcp`
- `github oauth mcp`, `google oauth mcp`, `azure oauth mcp`
- `token validation mcp`, `auth patterns mcp`
- `consent screen`, `pkce mcp`, `token introspection`

### Server Composition Keywords
- `import server mcp`, `mount server mcp`, `server composition`
- `modular mcp`, `subservers mcp`, `tag filtering mcp`

### Lifespan Keywords
- `server lifespan mcp`, `mcp lifespan`, `resource management mcp`
- `database connection mcp`, `cleanup hooks mcp`
- `asgi integration mcp`, `fastapi lifespan`

### Icons Keywords
- `mcp icons`, `server icons mcp`, `visual mcp`
- `data uri icons`, `icon sizes mcp`

### Error Keywords
- `mcp server not found`, `no server object found`
- `storage backend error`, `lifespan not running`, `middleware order error`
- `oauth not persisting`, `consent screen missing`
- `circular import fastmcp`, `import error mcp`
- `module-level server`, `fastmcp cloud deployment`
- `mcp async await`, `mcp context injection`
- `resource uri scheme`, `invalid resource uri`
- `pydantic validation mcp`, `mcp json serializable`

### Feature Keywords
- `mcp elicitation`, `user input during tool execution`
- `mcp progress tracking`, `progress updates mcp`
- `mcp sampling`, `llm from mcp tool`
- `resource templates mcp`, `dynamic resources`
- `tool transformation mcp`, `client handlers`
- `state management mcp`, `context state`

### Integration Keywords
- `openapi integration`, `swagger integration`, `fastapi mcp`
- `api wrapper mcp`, `database mcp`, `file system mcp`
- `connection pooling mcp`, `caching mcp`, `retry logic mcp`

### Claude Integration Keywords
- `claude desktop mcp`, `claude code mcp`
- `claude_desktop_config.json`, `mcp configuration`
- `expose tools to claude`, `claude tools`

## Token Efficiency

- **Without skill**: ~50-70k tokens, 8-15 errors
- **With skill**: ~3-5k tokens, 0 errors
- **Savings**: 90-95% token reduction

This is the highest token savings in the skills collection!

## Errors Prevented

This skill prevents 25 common errors:

### Core Server Errors (1-5)
1. **Missing server object** - Module-level export for FastMCP Cloud
2. **Async/await confusion** - Proper async/sync patterns
3. **Context not injected** - Type hints for context parameter
4. **Resource URI syntax** - Missing scheme prefixes
5. **Resource template mismatch** - Parameter name alignment

### Validation & Serialization (6-12)
6. **Pydantic validation errors** - Type hint consistency
7. **Transport/protocol mismatch** - Client/server compatibility
8. **Import errors** - Editable package installation
9. **Deprecation warnings** - FastMCP v2 migration
10. **Port conflicts** - Address already in use
11. **Schema generation failures** - Unsupported type hints
12. **JSON serialization** - Non-serializable objects

### Architecture & Lifecycle (13-15)
13. **Circular imports** - Factory function anti-patterns
14. **Python version compatibility** - Deprecated methods
15. **Import-time execution** - Async resource creation

### Storage & Persistence (16)
16. **Storage backend not configured** - Production persistence requirements

### Lifespan & Integration (17)
17. **Lifespan not passed to ASGI app** - FastAPI/Starlette integration

### Middleware (18-19)
18. **Middleware execution order error** - Incorrect middleware ordering
19. **Circular middleware dependencies** - Middleware loop errors

### Server Composition (20-21)
20. **Import vs mount confusion** - Static vs dynamic composition
21. **Resource prefix format mismatch** - Path vs protocol formats

### OAuth & Security (22-23)
22. **OAuth proxy without consent screen** - Security vulnerabilities
23. **Missing JWT signing key** - Production auth requirements

### Icons & Breaking Changes (24-25)
24. **Icon data URI format error** - Invalid data URI format
25. **Lifespan behavior change (v2.13.0)** - Per-server vs per-session

## What's Included

### Templates (19)
**Basic Server Templates:**
- `basic-server.py` - Minimal working server
- `tools-examples.py` - Sync/async tools
- `resources-examples.py` - Static/dynamic resources
- `prompts-examples.py` - Prompt templates

**Production Features:**
- `storage-backends-example.py` - Memory, Disk, Redis storage
- `server-lifespan-example.py` - Database connection lifecycle
- `middleware-examples.py` - All 8 built-in middleware types
- `server-composition-example.py` - Import vs mount patterns
- `oauth-proxy-example.py` - Full OAuth proxy configuration
- `authentication-patterns.py` - 4 auth strategies
- `icons-example.py` - Server and component icons

**Integration & Testing:**
- `openapi-integration.py` - OpenAPI auto-generation
- `api-client-pattern.py` - Manual API integration
- `client-example.py` - Testing with Client
- `error-handling.py` - Structured errors with retry
- `self-contained-server.py` - Production pattern

**Configuration:**
- `.env.example` - Environment variables
- `requirements.txt` - Package dependencies (fastmcp>=2.13.0)
- `pyproject.toml` - Package configuration

### Reference Docs (11)
**Error & Deployment:**
- `common-errors.md` - 25 errors with solutions
- `cloud-deployment.md` - FastMCP Cloud guide
- `cli-commands.md` - FastMCP CLI reference

**Production Features:**
- `storage-backends.md` - Complete storage options guide
- `server-lifespans.md` - Lifecycle management patterns
- `middleware-guide.md` - Middleware system deep dive
- `oauth-security.md` - OAuth Proxy and security features
- `performance-optimization.md` - Caching and middleware strategies

**Integration & Patterns:**
- `integration-patterns.md` - OpenAPI, FastAPI patterns
- `production-patterns.md` - Self-contained architecture
- `context-features.md` - Elicitation, progress, sampling, state

### Scripts (3)
- `check-versions.sh` - Verify package versions
- `test-server.sh` - Test with FastMCP Client
- `deploy-cloud.sh` - Deployment checklist

## Quick Start

### Install the Skill

```bash
cd /path/to/claude-skills
./scripts/install-skill.sh fastmcp
```

### Use the Skill

Just mention "fastmcp" or "build an mcp server" in your conversation with Claude Code, and the skill will automatically load.

Example prompts:
- "Help me build a FastMCP server"
- "Create an MCP server that wraps this API"
- "Convert this OpenAPI spec to an MCP server"
- "My MCP server has a circular import error"
- "Deploy my MCP server to FastMCP Cloud"

## Production Validation

**Tested With:**
- FastMCP 2.13.0+ (v2.13.0 "Cache Me If You Can" release)
- Python 3.10, 3.11, 3.12
- Storage backends: Memory, Disk, Redis
- Middleware: All 8 built-in types
- OAuth Proxy: GitHub, Google authentication
- FastMCP Cloud deployments
- OpenAPI integrations
- FastAPI conversions
- Server composition (import/mount)

**Based On:**
- Official FastMCP v2.13.0 documentation
- FastMCP updates: https://gofastmcp.com/updates.md
- Storage backends: https://gofastmcp.com/servers/storage-backends.md
- Icons: https://gofastmcp.com/servers/icons.md
- Progress: https://gofastmcp.com/servers/progress.md
- Real-world production patterns
- SimPro MCP server case study
- FastMCP Cloud deployment experience

## Package Info

- **Package**: `fastmcp>=2.13.0`
- **Python**: `>=3.10`
- **Repository**: https://github.com/jlowin/fastmcp
- **Cloud**: https://fastmcp.cloud
- **Context7**: `/jlowin/fastmcp`
- **Dependencies**:
  - `py-key-value-aio` (storage backends)
  - `cryptography` (encrypted storage)
  - `httpx` (async HTTP)
  - `pydantic` (validation)

## Related Skills

- `openai-api` - OpenAI API integration
- `claude-api` - Claude API integration
- `cloudflare-worker-base` - Deploy as Cloudflare Worker
- `google-gemini-api` - Gemini API integration
- `clerk-auth` - Alternative auth solution
- `better-auth` - Better Auth for authentication

## Skill Metadata

- **Version**: 2.0.0
- **License**: MIT
- **Token Savings**: 90-95%
- **Errors Prevented**: 25
- **Production Tested**: ✅
- **Last Updated**: 2025-11-04
- **Breaking Changes**: v2.13.0 lifespan behavior (per-server vs per-session)

---

**Questions or issues?** Check the templates and references in this skill, or consult the official FastMCP documentation at https://github.com/jlowin/fastmcp

```

### references/cli-commands.md

```markdown
# FastMCP CLI Commands Reference

Complete reference for FastMCP command-line interface.

## Installation

```bash
# Install FastMCP
pip install fastmcp

# or with uv
uv pip install fastmcp

# Check version
fastmcp --version
```

## Development Commands

### `fastmcp dev`

Run server with inspector interface (recommended for development).

```bash
# Basic usage
fastmcp dev server.py

# With options
fastmcp dev server.py --port 8000

# Enable debug logging
FASTMCP_LOG_LEVEL=DEBUG fastmcp dev server.py
```

**Features:**
- Interactive inspector UI
- Hot reload on file changes
- Detailed logging
- Tool/resource inspection

### `fastmcp run`

Run server normally (production-like).

```bash
# stdio transport (default)
fastmcp run server.py

# HTTP transport
fastmcp run server.py --transport http --port 8000

# SSE transport
fastmcp run server.py --transport sse
```

**Options:**
- `--transport`: `stdio`, `http`, or `sse`
- `--port`: Port number (for HTTP/SSE)
- `--host`: Host address (default: 127.0.0.1)

### `fastmcp inspect`

Inspect server without running it.

```bash
# Inspect tools and resources
fastmcp inspect server.py

# Output as JSON
fastmcp inspect server.py --json

# Show detailed information
fastmcp inspect server.py --verbose
```

**Output includes:**
- List of tools
- List of resources
- List of prompts
- Configuration details

## Installation Commands

### `fastmcp install`

Install server to Claude Desktop.

```bash
# Basic installation
fastmcp install server.py

# With custom name
fastmcp install server.py --name "My Server"

# Specify config location
fastmcp install server.py --config-path ~/.config/Claude/claude_desktop_config.json
```

**What it does:**
- Adds server to Claude Desktop configuration
- Sets up proper command and arguments
- Validates server before installing

### Claude Desktop Configuration

Manual configuration (if not using `fastmcp install`):

```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": {
        "API_KEY": "your-key"
      }
    }
  }
}
```

**Config locations:**
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`

## Python Direct Execution

### Run with Python

```bash
# stdio (default)
python server.py

# HTTP transport
python server.py --transport http --port 8000

# With arguments
python server.py --transport http --port 8000 --host 0.0.0.0
```

### Custom Argument Parsing

```python
# server.py
if __name__ == "__main__":
    import sys

    # Parse custom arguments
    if "--test" in sys.argv:
        run_tests()
    elif "--migrate" in sys.argv:
        run_migrations()
    else:
        mcp.run()
```

## Environment Variables

### FastMCP-Specific Variables

```bash
# Logging
export FASTMCP_LOG_LEVEL=DEBUG  # DEBUG, INFO, WARNING, ERROR
export FASTMCP_LOG_FILE=/path/to/log.txt

# Environment
export FASTMCP_ENV=production  # development, staging, production

# Custom variables (your server)
export API_KEY=your-key
export DATABASE_URL=postgres://...
```

### Using with Commands

```bash
# Inline environment variables
API_KEY=test fastmcp dev server.py

# From .env file
set -a && source .env && set +a && fastmcp dev server.py
```

## Testing Commands

### Run Tests with Client

```python
# test.py
import asyncio
from fastmcp import Client

async def test():
    async with Client("server.py") as client:
        tools = await client.list_tools()
        print(f"Tools: {[t.name for t in tools]}")

asyncio.run(test())
```

```bash
# Run tests
python test.py
```

### Integration Testing

```bash
# Start server in background
fastmcp run server.py --transport http --port 8000 &
SERVER_PID=$!

# Run tests
pytest tests/

# Kill server
kill $SERVER_PID
```

## Debugging Commands

### Enable Debug Logging

```bash
# Full debug output
FASTMCP_LOG_LEVEL=DEBUG fastmcp dev server.py

# Python logging
PYTHONVERBOSE=1 fastmcp dev server.py

# Trace imports
PYTHONPATH=. python -v server.py
```

### Check Python Environment

```bash
# Check Python version
python --version

# Check installed packages
pip list | grep fastmcp

# Check import paths
python -c "import sys; print('\n'.join(sys.path))"
```

### Validate Server

```bash
# Check syntax
python -m py_compile server.py

# Check imports
python -c "import server; print('OK')"

# Inspect structure
fastmcp inspect server.py --verbose
```

## Deployment Commands

### Prepare for Deployment

```bash
# Freeze dependencies
pip freeze > requirements.txt

# Or pin only the packages you actually need
echo "fastmcp>=2.13.0" > requirements.txt
echo "httpx>=0.27.0" >> requirements.txt

# Test with clean environment
python -m venv test_env
source test_env/bin/activate
pip install -r requirements.txt
python server.py
```

### Git Commands for Deployment

```bash
# Prepare for cloud deployment
git add server.py requirements.txt
git commit -m "Prepare for deployment"

# Create GitHub repo
gh repo create my-mcp-server --public

# Push
git push -u origin main
```

## Performance Commands

### Profiling

```bash
# Profile with cProfile
python -m cProfile -o profile.stats server.py

# Analyze profile
python -m pstats profile.stats
```

### Memory Profiling

```bash
# Install memory_profiler
pip install memory_profiler

# Run with memory profiling
python -m memory_profiler server.py
```

## Batch Operations

### Multiple Servers

```bash
# Start multiple servers
fastmcp run server1.py --port 8000 &
fastmcp run server2.py --port 8001 &
fastmcp run server3.py --port 8002 &

# Kill only the servers started above
# (killall -9 python would kill every Python process on the machine)
kill $(jobs -p)
```

### Process Management

```bash
# Use screen/tmux for persistent sessions
screen -S fastmcp
fastmcp dev server.py
# Detach: Ctrl+A, D

# Reattach
screen -r fastmcp
```

## Common Command Patterns

### Local Development

```bash
# Quick iteration cycle
fastmcp dev server.py  # Edit, save, auto-reload
```

### Testing with HTTP Client

```bash
# Start HTTP server
fastmcp run server.py --transport http --port 8000

# Test with curl (MCP messages are JSON-RPC 2.0)
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'
```

### Production-like Testing

```bash
# Set production environment
export ENVIRONMENT=production
export FASTMCP_LOG_LEVEL=WARNING

# Run
fastmcp run server.py
```

## Troubleshooting Commands

### Server Won't Start

```bash
# Check for syntax errors
python -m py_compile server.py

# Check for missing dependencies
pip check

# Verify FastMCP installation
python -c "import fastmcp; print(fastmcp.__version__)"
```

### Port Already in Use

```bash
# Find process using port
lsof -i :8000

# Kill process
lsof -ti:8000 | xargs kill -9

# Use different port
fastmcp run server.py --port 8001
```

### Permission Issues

```bash
# Make server executable
chmod +x server.py

# Fix Python path
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
```

## Resources

- **FastMCP CLI Docs**: https://github.com/jlowin/fastmcp#cli
- **MCP Protocol**: https://modelcontextprotocol.io
- **Context7**: `/jlowin/fastmcp`

```

### references/cloud-deployment.md

```markdown
# FastMCP Cloud Deployment Guide

Complete guide for deploying FastMCP servers to FastMCP Cloud.

## Critical Requirements

**❗️ MUST HAVE** for FastMCP Cloud:

1. **Module-level server object** named `mcp`, `server`, or `app`
2. **PyPI dependencies only** in `requirements.txt`
3. **Public GitHub repository** (or accessible to FastMCP Cloud)
4. **Environment variables** for configuration (no hardcoded secrets)

## Cloud-Ready Server Pattern

```python
# server.py
from fastmcp import FastMCP
import os

# ✅ CORRECT: Module-level export
mcp = FastMCP("production-server")

# ✅ Use environment variables
API_KEY = os.getenv("API_KEY")
DATABASE_URL = os.getenv("DATABASE_URL")

@mcp.tool()
async def production_tool(data: str) -> dict:
    if not API_KEY:
        return {"error": "API_KEY not configured"}
    return {"status": "success", "data": data}

if __name__ == "__main__":
    mcp.run()
```

## Common Anti-Patterns

### ❌ WRONG: Function-Wrapped Server

```python
def create_server():
    mcp = FastMCP("server")
    return mcp

if __name__ == "__main__":
    server = create_server()  # Too late for cloud!
    server.run()
```

### ✅ CORRECT: Factory with Module Export

```python
def create_server() -> FastMCP:
    mcp = FastMCP("server")
    # Complex setup logic here
    return mcp

# Export at module level
mcp = create_server()

if __name__ == "__main__":
    mcp.run()
```

## Deployment Steps

### 1. Prepare Repository

```bash
# Initialize git
git init

# Add files
git add .

# Commit
git commit -m "Initial MCP server"

# Create GitHub repo
gh repo create my-mcp-server --public

# Push
git push -u origin main
```

### 2. Deploy to FastMCP Cloud

1. Visit https://fastmcp.cloud
2. Sign in with GitHub
3. Click "Create Project"
4. Select your repository
5. Configure:
   - **Server Name**: Your project name
   - **Entrypoint**: `server.py`
   - **Environment Variables**: Add all needed variables

### 3. Configure Environment Variables

In FastMCP Cloud dashboard, add:
- `API_KEY`
- `DATABASE_URL`
- `CACHE_TTL`
- Any custom variables

### 4. Access Your Server

- **URL**: `https://your-project.fastmcp.app/mcp`
- **Auto-deploy**: Pushes to main branch auto-deploy
- **PR Previews**: Pull requests get preview deployments

## Project Structure Requirements

### Minimal Structure

```
my-mcp-server/
├── server.py          # Main entry point (required)
├── requirements.txt   # PyPI dependencies (required)
├── .env              # Local dev only (git-ignored)
├── .gitignore        # Must ignore .env
└── README.md         # Documentation (recommended)
```

### Production Structure

```
my-mcp-server/
├── src/
│   ├── server.py         # Main entry point
│   ├── utils.py          # Self-contained utilities
│   └── tools/           # Tool modules
│       ├── __init__.py
│       └── api_tools.py
├── requirements.txt
├── .env.example         # Template for .env
├── .gitignore
└── README.md
```

## Requirements.txt Rules

### ✅ ALLOWED: PyPI Packages

```txt
fastmcp>=2.13.0
httpx>=0.27.0
python-dotenv>=1.0.0
pydantic>=2.0.0
```

### ❌ NOT ALLOWED: Non-PyPI Dependencies

```txt
# Don't use these in cloud:
git+https://github.com/user/repo.git
-e ./local-package
./wheels/package.whl
```

## Environment Variables Best Practices

### ✅ GOOD: Environment-based Configuration

```python
import os

class Config:
    API_KEY = os.getenv("API_KEY", "")
    BASE_URL = os.getenv("BASE_URL", "https://api.example.com")
    DEBUG = os.getenv("DEBUG", "false").lower() == "true"

    @classmethod
    def validate(cls):
        if not cls.API_KEY:
            raise ValueError("API_KEY is required")
```

### ❌ BAD: Hardcoded Values

```python
# Never do this in cloud:
API_KEY = "sk-1234567890"  # Exposed in repository!
DATABASE_URL = "postgresql://user:pass@host/db"  # Insecure!
```

## Avoiding Circular Imports

**Critical for cloud deployment!**

### ❌ WRONG: Factory Function in `__init__.py`

```python
# shared/__init__.py
def get_api_client():
    from .api_client import APIClient  # Circular import risk
    return APIClient()

# shared/monitoring.py
from . import get_api_client  # Creates circle!
```

### ✅ CORRECT: Direct Imports

```python
# shared/__init__.py
from .api_client import APIClient
from .cache import CacheManager

# shared/monitoring.py
from .api_client import APIClient
client = APIClient()  # Create directly
```

## Testing Before Deployment

### Local Testing

```bash
# Test with stdio (default)
fastmcp dev server.py

# Test with HTTP
python server.py --transport http --port 8000
```

### Pre-Deployment Checklist

- [ ] Server object exported at module level
- [ ] Only PyPI dependencies in requirements.txt
- [ ] No hardcoded secrets (all in environment variables)
- [ ] `.env` file in `.gitignore`
- [ ] No circular imports
- [ ] No import-time async execution
- [ ] Works with `fastmcp dev server.py`
- [ ] Git repository committed and pushed
- [ ] All required environment variables documented
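The module-level-export item can be checked mechanically before pushing. A minimal AST-based sketch (this helper is illustrative, not part of FastMCP):

```python
import ast

def has_module_level_server(source: str) -> bool:
    """True if the module binds a top-level name mcp, server, or app."""
    tree = ast.parse(source)
    for node in tree.body:
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id in {"mcp", "server", "app"}:
                    return True
    return False

good = 'from fastmcp import FastMCP\nmcp = FastMCP("demo")\n'
bad = "def create_server():\n    return None\n"
```

Run it against your `server.py` source; a `False` result predicts the "No server object found" deployment error.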

## Monitoring Deployment

### Check Deployment Logs

FastMCP Cloud provides:
- Build logs
- Runtime logs
- Error logs

### Health Check Endpoint

Add a health check resource:

```python
from datetime import datetime

@mcp.resource("health://status")
async def health_check() -> dict:
    return {
        "status": "healthy",
        "timestamp": datetime.now().isoformat(),
        "version": "1.0.0"
    }
```

### Common Deployment Errors

1. **"No server object found"**
   - Fix: Export server at module level

2. **"Module not found"**
   - Fix: Use only PyPI packages

3. **"Import error: circular dependency"**
   - Fix: Avoid factory functions in `__init__.py`

4. **"Environment variable not set"**
   - Fix: Add variables in FastMCP Cloud dashboard

## Continuous Deployment

FastMCP Cloud automatically deploys when you push to main:

```bash
# Make changes
git add .
git commit -m "Add new feature"
git push

# Deployment happens automatically!
# Check status at fastmcp.cloud
```

## Rollback Strategy

If deployment fails:

```bash
# Revert to previous commit
git revert HEAD
git push

# Or reset to specific commit
git reset --hard <commit-hash>
git push --force  # Use with caution!
```

## Resources

- **FastMCP Cloud**: https://fastmcp.cloud
- **FastMCP GitHub**: https://github.com/jlowin/fastmcp
- **Deployment Docs**: Check FastMCP Cloud documentation

```

### references/common-errors.md

```markdown
# FastMCP Common Errors Reference

Quick reference for the 15 most common FastMCP errors and their solutions.

## Error 1: Missing Server Object
**Error:** `RuntimeError: No server object found at module level`
**Fix:** Export server at module level: `mcp = FastMCP("name")`
**Why:** FastMCP Cloud requires module-level server object
**Source:** FastMCP Cloud documentation

## Error 2: Async/Await Confusion
**Error:** `RuntimeError: no running event loop`
**Fix:** Use `async def` for async operations, don't mix sync/async
**Example:** Use `await client.get()` not `client.get()`
**Source:** GitHub issues #156, #203
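The failure mode is easiest to see outside FastMCP. A plain-asyncio sketch of the rule (forgetting `await` yields a coroutine object instead of a result):

```python
import asyncio

async def fetch() -> str:
    # Stand-in for an async call such as httpx.AsyncClient.get()
    await asyncio.sleep(0)
    return "data"

async def tool() -> str:
    result = await fetch()   # correct: awaited inside an async def
    # wrong = fetch()        # missing await -> coroutine object, not "data"
    return result

result = asyncio.run(tool())
```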

## Error 3: Context Not Injected
**Error:** `TypeError: missing required argument 'context'`
**Fix:** Add type hint: `async def tool(context: Context):`
**Why:** Type hint is required for context injection
**Source:** FastMCP v2 migration guide

## Error 4: Resource URI Syntax
**Error:** `ValueError: Invalid resource URI`
**Fix:** Include scheme: `@mcp.resource("data://config")`
**Common schemes:** `data://`, `file://`, `info://`, `api://` (any scheme works, but one is required)
**Source:** MCP Protocol specification

## Error 5: Resource Template Parameter Mismatch
**Error:** `TypeError: missing positional argument`
**Fix:** Match parameter names: `user://{user_id}` → `def get_user(user_id: str)`
**Why:** Parameter names must exactly match URI template
**Source:** FastMCP patterns documentation
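The rule can be verified without running a server: extract the placeholder names from the URI template and compare them to the function signature. A stdlib sketch (illustrative only; FastMCP performs its own check at registration time):

```python
import inspect
import string

def template_params(uri_template: str) -> set:
    """Placeholder names in a URI template such as 'user://{user_id}'."""
    return {field for _, field, _, _ in string.Formatter().parse(uri_template) if field}

def get_user(user_id: str) -> dict:
    return {"id": user_id}

# Parameter names must exactly match the template placeholders
assert template_params("user://{user_id}") == set(inspect.signature(get_user).parameters)
```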

## Error 6: Pydantic Validation Error
**Error:** `ValidationError: value is not valid`
**Fix:** Ensure type hints match data types
**Best practice:** Use Pydantic models for complex validation
**Source:** Pydantic documentation

## Error 7: Transport/Protocol Mismatch
**Error:** `ConnectionError: different transport`
**Fix:** Match client/server transport (stdio or http)
**Server:** `mcp.run(transport="http")`
**Client:** `{"transport": "http", "url": "..."}`
**Source:** MCP transport specification

## Error 8: Import Errors (Editable Package)
**Error:** `ModuleNotFoundError: No module named 'my_package'`
**Fix:** Install in editable mode: `pip install -e .`
**Alternative:** Use absolute imports or add to PYTHONPATH
**Source:** Python packaging documentation

## Error 9: Deprecation Warnings
**Error:** `DeprecationWarning: 'mcp.settings' deprecated`
**Fix:** Use `os.getenv()` instead of `mcp.settings.get()`
**Why:** FastMCP v2 removed settings API
**Source:** FastMCP v2 migration guide

## Error 10: Port Already in Use
**Error:** `OSError: [Errno 48] Address already in use`
**Fix:** Use different port: `--port 8001`
**Alternative:** Kill process: `lsof -ti:8000 | xargs kill -9`
**Source:** Common networking issue

## Error 11: Schema Generation Failures
**Error:** `TypeError: not JSON serializable`
**Fix:** Use JSON-compatible types (no NumPy arrays, custom classes)
**Example:** Convert: `data.tolist()` or `data.to_dict()`
**Source:** JSON serialization requirements

## Error 12: JSON Serialization
**Error:** `TypeError: Object of type 'datetime' not JSON serializable`
**Fix:** Convert to string: `datetime.now().isoformat()`
**Apply to:** datetime, bytes, custom objects
**Source:** JSON specification
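A minimal stdlib sketch of both fixes: convert values up front, or supply a `default` hook to `json.dumps`:

```python
import json
from datetime import datetime, timezone

now = datetime.now(timezone.utc)

# Option 1: convert before building the payload
encoded = json.dumps({"timestamp": now.isoformat()})

# Option 2: a default hook handles any remaining datetimes
def encode_extra(obj):
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"not JSON serializable: {type(obj)!r}")

encoded2 = json.dumps({"timestamp": now}, default=encode_extra)
```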

## Error 13: Circular Import Errors
**Error:** `ImportError: cannot import name 'X' from partially initialized module`
**Fix:** Avoid factory functions in `__init__.py`, use direct imports
**Example:** Import `APIClient` directly, don't use `get_api_client()` factory
**Why:** Cloud deployment initializes modules differently
**Source:** Production cloud deployment errors

## Error 14: Python Version Compatibility
**Error:** `DeprecationWarning: datetime.utcnow() deprecated`
**Fix:** Use `datetime.now(timezone.utc)` (Python 3.12+)
**Why:** Python 3.12+ deprecated some datetime methods
**Source:** Python 3.12 release notes

## Error 15: Import-Time Execution
**Error:** `RuntimeError: Event loop is closed`
**Fix:** Don't create async resources at module level
**Example:** Use lazy initialization: create resources when needed, not at import
**Why:** Event loop not available during module import
**Source:** Async event loop management
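A plain-asyncio sketch of the lazy pattern (names illustrative): the resource is created on first use inside a running loop, never at import time:

```python
import asyncio

_client = None

async def get_client():
    """Create the async resource lazily, inside a running event loop."""
    global _client
    if _client is None:
        await asyncio.sleep(0)  # stand-in for async setup, e.g. opening a pool
        _client = {"connected": True}
    return _client

# BAD (module level): client = asyncio.run(get_client()) at import time
# would tie the resource to an event loop that is closed before the server runs.

async def tool() -> dict:
    return await get_client()

result = asyncio.run(tool())
```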

---

## Quick Debugging Checklist

When encountering errors:

1. ✅ Check server is exported at module level
2. ✅ Verify async/await usage is correct
3. ✅ Ensure Context type hints are present
4. ✅ Validate resource URIs have scheme prefixes
5. ✅ Match resource template parameters exactly
6. ✅ Use JSON-serializable types only
7. ✅ Avoid circular imports (especially in `__init__.py`)
8. ✅ Don't execute async code at module level
9. ✅ Test locally with `fastmcp dev server.py` before deploying

## Getting Help

- **FastMCP GitHub**: https://github.com/jlowin/fastmcp/issues
- **Context7 Docs**: `/jlowin/fastmcp`
- **This Skill**: See SKILL.md for detailed solutions

```

### references/context-features.md

```markdown
# FastMCP Context Features Reference

Complete reference for FastMCP's advanced context features: elicitation, progress tracking, and sampling.

## Context Injection

To use context features, inject Context into your tool:

```python
from fastmcp import Context

@mcp.tool()
async def tool_with_context(param: str, context: Context) -> dict:
    """Tool that uses context features."""
    # Access context features here
    pass
```

**Important:** Context parameter MUST have type hint `Context` for injection to work.

## Feature 1: Elicitation (User Input)

Request user input during tool execution.

### Basic Usage

```python
from fastmcp import Context

@mcp.tool()
async def confirm_action(action: str, context: Context) -> dict:
    """Request user confirmation."""
    # Request user input
    user_response = await context.request_elicitation(
        prompt=f"Confirm {action}? (yes/no)",
        response_type=str
    )

    if user_response.lower() == "yes":
        await perform_action(action)  # your actual operation
        return {"status": "completed", "action": action}
    else:
        return {"status": "cancelled", "action": action}
```

### Type-Based Elicitation

```python
@mcp.tool()
async def collect_user_info(context: Context) -> dict:
    """Collect information from user."""
    # String input
    name = await context.request_elicitation(
        prompt="What is your name?",
        response_type=str
    )

    # Boolean input
    confirmed = await context.request_elicitation(
        prompt="Do you want to continue?",
        response_type=bool
    )

    # Numeric input
    count = await context.request_elicitation(
        prompt="How many items?",
        response_type=int
    )

    return {
        "name": name,
        "confirmed": confirmed,
        "count": count
    }
```

### Custom Type Elicitation

```python
from dataclasses import dataclass

@dataclass
class UserChoice:
    option: str
    reason: str

@mcp.tool()
async def get_user_choice(options: list[str], context: Context) -> dict:
    """Get user choice with reasoning."""
    choice = await context.request_elicitation(
        prompt=f"Choose from: {', '.join(options)}",
        response_type=UserChoice
    )

    return {
        "selected": choice.option,
        "reason": choice.reason
    }
```

### Client Handler for Elicitation

Client must provide handler:

```python
from fastmcp import Client

async def elicitation_handler(message: str, response_type: type, context: dict):
    """Handle elicitation requests."""
    if response_type == str:
        return input(f"{message}: ")
    elif response_type == bool:
        response = input(f"{message} (y/n): ")
        return response.lower() == 'y'
    elif response_type == int:
        return int(input(f"{message}: "))
    else:
        return input(f"{message}: ")

async with Client(
    "server.py",
    elicitation_handler=elicitation_handler
) as client:
    result = await client.call_tool("collect_user_info", {})
```

## Feature 2: Progress Tracking

Report progress for long-running operations.

### Basic Progress

```python
import asyncio

@mcp.tool()
async def long_operation(count: int, context: Context) -> dict:
    """Operation with progress tracking."""
    for i in range(count):
        # Report progress
        await context.report_progress(
            progress=i + 1,
            total=count,
            message=f"Processing item {i + 1}/{count}"
        )

        # Do work
        await asyncio.sleep(0.1)

    return {"status": "completed", "processed": count}
```

### Multi-Phase Progress

```python
@mcp.tool()
async def multi_phase_operation(data: list, context: Context) -> dict:
    """Operation with multiple phases."""
    # Phase 1: Loading
    await context.report_progress(0, 3, "Phase 1: Loading data")
    loaded = await load_data(data)

    # Phase 2: Processing
    await context.report_progress(1, 3, "Phase 2: Processing")
    for i, item in enumerate(loaded):
        await context.report_progress(
            progress=i,
            total=len(loaded),
            message=f"Processing {i + 1}/{len(loaded)}"
        )
        await process_item(item)

    # Phase 3: Saving
    await context.report_progress(2, 3, "Phase 3: Saving results")
    await save_results()

    await context.report_progress(3, 3, "Complete!")

    return {"status": "completed", "items": len(loaded)}
```

### Indeterminate Progress

For operations where total is unknown:

```python
@mcp.tool()
async def indeterminate_operation(context: Context) -> dict:
    """Operation with unknown duration."""
    stages = [
        "Initializing",
        "Loading data",
        "Processing",
        "Finalizing"
    ]

    for stage in stages:
        # No total - shows as spinner/indeterminate
        await context.report_progress(
            progress=stages.index(stage),
            total=None,
            message=stage
        )
        await perform_stage(stage)

    return {"status": "completed"}
```

### Client Handler for Progress

```python
async def progress_handler(progress: float, total: float | None, message: str | None):
    """Handle progress updates."""
    if total:
        pct = (progress / total) * 100
        # Use \r for same-line update
        print(f"\r[{pct:.1f}%] {message}", end="", flush=True)
    else:
        # Indeterminate progress
        print(f"\n[PROGRESS] {message}")

async with Client(
    "server.py",
    progress_handler=progress_handler
) as client:
    result = await client.call_tool("long_operation", {"count": 100})
```

## Feature 3: Sampling (LLM Integration)

Request LLM completions from within tools.

### Basic Sampling

```python
@mcp.tool()
async def enhance_text(text: str, context: Context) -> str:
    """Enhance text using LLM."""
    response = await context.request_sampling(
        messages=[{
            "role": "system",
            "content": "You are a professional copywriter."
        }, {
            "role": "user",
            "content": f"Enhance this text: {text}"
        }],
        temperature=0.7,
        max_tokens=500
    )

    return response["content"]
```

### Structured Output with Sampling

```python
@mcp.tool()
async def classify_text(text: str, context: Context) -> dict:
    """Classify text using LLM."""
    prompt = f"""
    Classify this text: {text}

    Return a JSON object with:
    - category: one of [news, blog, academic, social]
    - sentiment: one of [positive, negative, neutral]
    - topics: list of main topics
    """

    response = await context.request_sampling(
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,  # Lower for consistency
        response_format="json"
    )

    import json
    return json.loads(response["content"])
```

### Multi-Turn Sampling

```python
@mcp.tool()
async def interactive_analysis(topic: str, context: Context) -> dict:
    """Multi-turn analysis with LLM."""
    # First turn: Generate questions
    questions_response = await context.request_sampling(
        messages=[{
            "role": "user",
            "content": f"Generate 3 key questions about: {topic}"
        }],
        max_tokens=200
    )

    # Second turn: Answer questions
    analysis_response = await context.request_sampling(
        messages=[{
            "role": "user",
            "content": f"Answer these questions about {topic}:\n{questions_response['content']}"
        }],
        max_tokens=500
    )

    return {
        "topic": topic,
        "questions": questions_response["content"],
        "analysis": analysis_response["content"]
    }
```

### Client Handler for Sampling

Client provides LLM access:

```python
async def sampling_handler(messages, params, context):
    """Handle LLM sampling requests."""
    # Call your LLM API
    from openai import AsyncOpenAI

    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model=params.get("model", "gpt-4"),
        messages=messages,
        temperature=params.get("temperature", 0.7),
        max_tokens=params.get("max_tokens", 1000)
    )

    return {
        "content": response.choices[0].message.content,
        "model": response.model,
        "usage": {
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens
        }
    }

async with Client(
    "server.py",
    sampling_handler=sampling_handler
) as client:
    result = await client.call_tool("enhance_text", {"text": "Hello world"})
```

## Combined Example

All context features together:

```python
@mcp.tool()
async def comprehensive_task(data: list, context: Context) -> dict:
    """Task using all context features."""
    # 1. Elicitation: Confirm operation
    confirmed = await context.request_elicitation(
        prompt="Start processing?",
        response_type=bool
    )

    if not confirmed:
        return {"status": "cancelled"}

    # 2. Progress: Track processing
    results = []
    for i, item in enumerate(data):
        await context.report_progress(
            progress=i + 1,
            total=len(data),
            message=f"Processing {i + 1}/{len(data)}"
        )

        # 3. Sampling: Use LLM for processing
        enhanced = await context.request_sampling(
            messages=[{
                "role": "user",
                "content": f"Analyze this item: {item}"
            }],
            temperature=0.5
        )

        results.append({
            "item": item,
            "analysis": enhanced["content"]
        })

    return {
        "status": "completed",
        "total": len(data),
        "results": results
    }
```

## Best Practices

### Elicitation

- **Clear prompts**: Be specific about what you're asking
- **Type validation**: Use appropriate response_type
- **Handle cancellation**: Allow users to cancel operations
- **Provide context**: Explain why input is needed

### Progress Tracking

- **Regular updates**: Report every 5-10% or every item
- **Meaningful messages**: Describe what's happening
- **Phase indicators**: Show which phase of operation
- **Final confirmation**: Report 100% completion
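
One way to honor the "every 5-10%" guidance without flooding the client is a small throttle. The `ProgressThrottle` helper below is a hypothetical sketch, not part of FastMCP:

```python
class ProgressThrottle:
    """Decide whether a progress value is worth reporting.

    Forwards an update only when progress has advanced by `step_pct`
    percent since the last report, or when the operation finishes.
    """
    def __init__(self, total: int, step_pct: float = 5.0):
        self.total = total
        self.step = max(1, int(total * step_pct / 100))
        self.last_reported = -1

    def should_report(self, progress: int) -> bool:
        if progress >= self.total or progress - self.last_reported >= self.step:
            self.last_reported = progress
            return True
        return False
```

Inside a tool, wrap each update: `if throttle.should_report(i + 1): await context.report_progress(...)`. For 100 items at 5% this cuts 100 updates down to about 21.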

### Sampling

- **System prompts**: Set clear instructions
- **Temperature control**: Lower for factual, higher for creative
- **Token limits**: Set reasonable max_tokens
- **Error handling**: Handle API failures gracefully
- **Cost awareness**: Sampling uses LLM API (costs money)

## Error Handling

### Context Not Available

```python
@mcp.tool()
async def safe_context_usage(context: Context) -> dict:
    """Safely use context features."""
    # Check if feature is available
    if hasattr(context, 'report_progress'):
        await context.report_progress(0, 100, "Starting")

    if hasattr(context, 'request_elicitation'):
        confirmed = await context.request_elicitation(
            prompt="Continue?",
            response_type=bool
        )
    else:
        # Fallback when the client doesn't support elicitation
        confirmed = True

    if not confirmed:
        return {"status": "cancelled"}

    return {"status": "completed"}
```

### Timeout Handling

```python
import asyncio

@mcp.tool()
async def elicitation_with_timeout(context: Context) -> dict:
    """Elicitation with timeout."""
    try:
        response = await asyncio.wait_for(
            context.request_elicitation(
                prompt="Your input (30 seconds):",
                response_type=str
            ),
            timeout=30.0
        )
        return {"status": "completed", "input": response}
    except asyncio.TimeoutError:
        return {"status": "timeout", "message": "No input received"}
```

## Context Feature Availability

| Feature | Claude Desktop | Claude Code CLI | FastMCP Cloud | Custom Client |
|---------|---------------|----------------|---------------|---------------|
| Elicitation | ✅ | ✅ | ⚠️ Depends | ✅ With handler |
| Progress | ✅ | ✅ | ✅ | ✅ With handler |
| Sampling | ✅ | ✅ | ⚠️ Depends | ✅ With handler |

⚠️ = Feature availability depends on client implementation

## Resources

- **Context API**: See SKILL.md for full Context API reference
- **Client Handlers**: See `client-example.py` template
- **MCP Protocol**: https://modelcontextprotocol.io

```

### references/integration-patterns.md

```markdown
# FastMCP Integration Patterns Reference

Quick reference for API integration patterns with FastMCP.

## Pattern 1: Manual API Integration

Best for simple APIs or when you need fine control.

```python
import os

import httpx
from fastmcp import FastMCP

mcp = FastMCP("API Integration")

API_KEY = os.getenv("API_KEY", "")

# Reusable client (shared connection pool across tool calls)
client = httpx.AsyncClient(
    base_url="https://api.example.com",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30.0
)

@mcp.tool()
async def fetch_data(endpoint: str) -> dict:
    """Fetch from API."""
    response = await client.get(endpoint)
    response.raise_for_status()
    return response.json()
```

**Pros:**
- Full control over requests
- Easy to customize
- Simple to understand

**Cons:**
- Manual tool creation for each endpoint
- More boilerplate code

## Pattern 2: OpenAPI/Swagger Auto-Generation

Best for well-documented APIs with OpenAPI specs.

```python
from fastmcp import FastMCP
from fastmcp.server.openapi import RouteMap, MCPType
import httpx

# Load spec
spec = httpx.get("https://api.example.com/openapi.json").json()

# Create client
client = httpx.AsyncClient(
    base_url="https://api.example.com",
    headers={"Authorization": f"Bearer {API_KEY}"}
)

# Auto-generate server
mcp = FastMCP.from_openapi(
    openapi_spec=spec,
    client=client,
    name="API Server",
    route_maps=[
        # GET + params → Resource Templates
        RouteMap(
            methods=["GET"],
            pattern=r".*\{.*\}.*",
            mcp_type=MCPType.RESOURCE_TEMPLATE
        ),
        # GET no params → Resources
        RouteMap(
            methods=["GET"],
            mcp_type=MCPType.RESOURCE
        ),
        # POST/PUT/DELETE → Tools
        RouteMap(
            methods=["POST", "PUT", "PATCH", "DELETE"],
            mcp_type=MCPType.TOOL
        ),
    ]
)
```

**Pros:**
- Instant integration (minutes not hours)
- Auto-updates with spec changes
- No manual endpoint mapping

**Cons:**
- Requires OpenAPI/Swagger spec
- Less control over individual endpoints
- May include unwanted endpoints

## Pattern 3: FastAPI Conversion

Best for converting existing FastAPI applications.

```python
from fastapi import FastAPI
from fastmcp import FastMCP

# Existing FastAPI app
app = FastAPI()

@app.get("/users/{user_id}")
def get_user(user_id: int):
    return {"id": user_id, "name": "User"}

# Convert to MCP
mcp = FastMCP.from_fastapi(
    app=app,
    httpx_client_kwargs={
        "headers": {"Authorization": "Bearer token"}
    }
)
```

**Pros:**
- Reuse existing FastAPI code
- Minimal changes needed
- Familiar FastAPI patterns

**Cons:**
- FastAPI must be running separately
- Extra HTTP hop (slower)

## Route Mapping Strategies

### Strategy 1: By HTTP Method

```python
route_maps = [
    RouteMap(methods=["GET"], mcp_type=MCPType.RESOURCE),
    RouteMap(methods=["POST"], mcp_type=MCPType.TOOL),
    RouteMap(methods=["PUT", "PATCH"], mcp_type=MCPType.TOOL),
    RouteMap(methods=["DELETE"], mcp_type=MCPType.TOOL),
]
```

### Strategy 2: By Path Pattern

```python
route_maps = [
    # Admin endpoints → Exclude
    RouteMap(
        pattern=r"/admin/.*",
        mcp_type=MCPType.EXCLUDE
    ),
    # Internal → Exclude
    RouteMap(
        pattern=r"/internal/.*",
        mcp_type=MCPType.EXCLUDE
    ),
    # Health → Exclude
    RouteMap(
        pattern=r"/(health|healthz)",
        mcp_type=MCPType.EXCLUDE
    ),
    # Everything else (maps apply in order; the first match wins)
    RouteMap(mcp_type=MCPType.TOOL),
]
```

### Strategy 3: By Parameters

```python
route_maps = [
    # Has path parameters → Resource Template
    RouteMap(
        pattern=r".*\{[^}]+\}.*",
        mcp_type=MCPType.RESOURCE_TEMPLATE
    ),
    # No parameters → Static Resource or Tool
    RouteMap(
        methods=["GET"],
        mcp_type=MCPType.RESOURCE
    ),
    RouteMap(
        methods=["POST", "PUT", "DELETE"],
        mcp_type=MCPType.TOOL
    ),
]
```
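
The parameter pattern in Strategy 3 is just a regex over the route path; a quick standalone check of what it matches:

```python
import re

# Same pattern as Strategy 3: any path containing a {param} segment
has_path_param = re.compile(r".*\{[^}]+\}.*")

assert has_path_param.match("/users/{user_id}")
assert has_path_param.match("/orgs/{org}/repos/{repo}")
assert not has_path_param.match("/users")
```

Because maps are checked in order, putting this pattern first ensures parameterized GET routes become resource templates before the plain GET rule can claim them.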

## Authentication Patterns

### API Key Authentication

```python
client = httpx.AsyncClient(
    base_url="https://api.example.com",
    headers={"X-API-Key": os.getenv("API_KEY")}
)
```

### Bearer Token

```python
client = httpx.AsyncClient(
    base_url="https://api.example.com",
    headers={"Authorization": f"Bearer {os.getenv('API_TOKEN')}"}
)
```

### OAuth2 with Token Refresh

```python
class OAuth2Client:
    def __init__(self):
        self.access_token = None
        self.expires_at = None

    async def get_token(self) -> str:
        if not self.expires_at or datetime.now() > self.expires_at:
            await self.refresh_token()
        return self.access_token

    async def refresh_token(self):
        async with httpx.AsyncClient() as client:
            response = await client.post(
                "https://auth.example.com/token",
                data={
                    "grant_type": "client_credentials",
                    "client_id": CLIENT_ID,
                    "client_secret": CLIENT_SECRET
                }
            )
            data = response.json()
            self.access_token = data["access_token"]
            self.expires_at = datetime.now() + timedelta(
                seconds=data["expires_in"] - 60
            )

oauth = OAuth2Client()

@mcp.tool()
async def authenticated_request(endpoint: str) -> dict:
    token = await oauth.get_token()
    async with httpx.AsyncClient() as client:
        response = await client.get(
            endpoint,
            headers={"Authorization": f"Bearer {token}"}
        )
        return response.json()
```
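
One caveat with the sketch above: if several tools call `get_token()` while the token is expired, each triggers its own refresh. A lock serializes the refresh so concurrent callers share one token-endpoint call. This is a hedged variant; `fetch_token` is a stand-in for the actual token request:

```python
import asyncio
from datetime import datetime, timedelta

class LockedOAuth2Client:
    """OAuth2 client whose refresh is guarded by a lock, so concurrent
    callers share one refresh instead of racing the token endpoint."""
    def __init__(self, fetch_token):
        self._fetch_token = fetch_token  # async () -> (access_token, expires_in)
        self._lock = asyncio.Lock()
        self.access_token = None
        self.expires_at = None

    async def get_token(self) -> str:
        async with self._lock:
            if self.expires_at is None or datetime.now() >= self.expires_at:
                token, expires_in = await self._fetch_token()
                self.access_token = token
                # Refresh one minute early to avoid using a token mid-expiry
                self.expires_at = datetime.now() + timedelta(seconds=expires_in - 60)
            return self.access_token
```

Five concurrent `get_token()` calls now produce exactly one request to the auth server; the other four wait on the lock and reuse the cached token.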

## Error Handling Patterns

### Basic Error Handling

```python
@mcp.tool()
async def safe_api_call(endpoint: str) -> dict:
    try:
        response = await client.get(endpoint)
        response.raise_for_status()
        return {"success": True, "data": response.json()}
    except httpx.HTTPStatusError as e:
        return {
            "success": False,
            "error": f"HTTP {e.response.status_code}",
            "message": e.response.text
        }
    except httpx.TimeoutException:
        return {"success": False, "error": "Request timeout"}
    except Exception as e:
        return {"success": False, "error": str(e)}
```

### Retry with Exponential Backoff

```python
import asyncio

import httpx

async def retry_with_backoff(func, max_retries=3):
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return await func()
        except (httpx.TimeoutException, httpx.NetworkError):
            if attempt < max_retries - 1:
                await asyncio.sleep(delay)
                delay *= 2
            else:
                raise
```
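
A generic variant makes the helper easy to sanity-check without a network: the retryable exception types become a parameter (`ConnectionError` below stands in for the httpx errors, and `retry_generic`/`flaky` are names introduced for this sketch):

```python
import asyncio

async def retry_generic(func, max_retries=3, initial_delay=0.01,
                        retry_on=(ConnectionError,)):
    """Same shape as retry_with_backoff, with injectable exception types."""
    delay = initial_delay
    for attempt in range(max_retries):
        try:
            return await func()
        except retry_on:
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(delay)
            delay *= 2

# A function that fails twice, then succeeds
failures = [ConnectionError("down"), ConnectionError("down")]

async def flaky():
    if failures:
        raise failures.pop()
    return "ok"
```

Running `asyncio.run(retry_generic(flaky))` returns `"ok"`: the first two attempts raise and are retried, the third succeeds.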

## Caching Patterns

### Simple Time-Based Cache

```python
import time

class SimpleCache:
    def __init__(self, ttl=300):
        self.cache = {}
        self.timestamps = {}
        self.ttl = ttl

    def get(self, key: str):
        if key in self.cache:
            if time.time() - self.timestamps[key] < self.ttl:
                return self.cache[key]
        return None

    def set(self, key: str, value):
        self.cache[key] = value
        self.timestamps[key] = time.time()

cache = SimpleCache()

@mcp.tool()
async def cached_fetch(endpoint: str) -> dict:
    # Check cache
    cached = cache.get(endpoint)
    if cached:
        return {"data": cached, "from_cache": True}

    # Fetch from API
    data = await fetch_from_api(endpoint)
    cache.set(endpoint, data)

    return {"data": data, "from_cache": False}
```

## Rate Limiting Patterns

### Simple Rate Limiter

```python
import asyncio
from collections import deque
from datetime import datetime, timedelta

class RateLimiter:
    def __init__(self, max_requests: int, time_window: int):
        self.max_requests = max_requests
        self.time_window = timedelta(seconds=time_window)
        self.requests = deque()

    async def acquire(self):
        now = datetime.now()

        # Remove old requests
        while self.requests and now - self.requests[0] > self.time_window:
            self.requests.popleft()

        # Check limit
        if len(self.requests) >= self.max_requests:
            sleep_time = (self.requests[0] + self.time_window - now).total_seconds()
            await asyncio.sleep(sleep_time)
            return await self.acquire()

        self.requests.append(now)

limiter = RateLimiter(100, 60)  # 100 requests per minute

@mcp.tool()
async def rate_limited_call(endpoint: str) -> dict:
    await limiter.acquire()
    return await api_call(endpoint)
```

## Connection Pooling

### Singleton Client Pattern

```python
class APIClient:
    _instance = None

    @classmethod
    async def get_client(cls):
        if cls._instance is None:
            cls._instance = httpx.AsyncClient(
                base_url=API_BASE_URL,
                timeout=30.0,
                limits=httpx.Limits(
                    max_keepalive_connections=5,
                    max_connections=10
                )
            )
        return cls._instance

    @classmethod
    async def cleanup(cls):
        if cls._instance:
            await cls._instance.aclose()
            cls._instance = None

# Use in tools
@mcp.tool()
async def api_request(endpoint: str) -> dict:
    client = await APIClient.get_client()
    response = await client.get(endpoint)
    return response.json()
```

## Batch Request Patterns

### Parallel Batch Requests

```python
@mcp.tool()
async def batch_fetch(endpoints: list[str]) -> dict:
    """Fetch multiple endpoints in parallel."""
    async def fetch_one(endpoint: str):
        try:
            response = await client.get(endpoint)
            return {"endpoint": endpoint, "success": True, "data": response.json()}
        except Exception as e:
            return {"endpoint": endpoint, "success": False, "error": str(e)}

    results = await asyncio.gather(*[fetch_one(ep) for ep in endpoints])

    return {
        "total": len(endpoints),
        "successful": len([r for r in results if r["success"]]),
        "results": results
    }
```

## Webhook Patterns

### Webhook Receiver

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/webhook")
async def handle_webhook(request: Request):
    data = await request.json()
    # Process webhook
    return {"status": "received"}

# Add to MCP server
mcp = FastMCP.from_fastapi(app)
```

## When to Use Each Pattern

| Pattern | Use When | Avoid When |
|---------|----------|------------|
| Manual Integration | Simple API, custom logic needed | API has 50+ endpoints |
| OpenAPI Auto-gen | Well-documented API, many endpoints | No OpenAPI spec available |
| FastAPI Conversion | Existing FastAPI app | Starting from scratch |
| Custom Route Maps | Need precise control | Simple use case |
| Connection Pooling | High-frequency requests | Single request needed |
| Caching | Expensive API calls, data rarely changes | Real-time data required |
| Rate Limiting | API has rate limits | No limits or internal API |

## Resources

- **FastMCP OpenAPI**: FastMCP.from_openapi documentation
- **FastAPI Integration**: FastMCP.from_fastapi documentation
- **HTTPX Docs**: https://www.python-httpx.org
- **OpenAPI Spec**: https://spec.openapis.org

```

### references/production-patterns.md

```markdown
# FastMCP Production Patterns Reference

Battle-tested patterns for production-ready FastMCP servers.

## Self-Contained Server Pattern

**Problem:** Circular imports break cloud deployment
**Solution:** Keep all utilities in one file

```python
# src/utils.py - All utilities in one place
import os
from typing import Dict, Any
from datetime import datetime

class Config:
    """Configuration from environment."""
    SERVER_NAME = os.getenv("SERVER_NAME", "FastMCP Server")
    API_KEY = os.getenv("API_KEY", "")
    CACHE_TTL = int(os.getenv("CACHE_TTL", "300"))

def format_success(data: Any) -> Dict[str, Any]:
    """Format successful response."""
    return {
        "success": True,
        "data": data,
        "timestamp": datetime.now().isoformat()
    }

def format_error(error: str, code: str = "ERROR") -> Dict[str, Any]:
    """Format error response."""
    return {
        "success": False,
        "error": error,
        "code": code,
        "timestamp": datetime.now().isoformat()
    }

# Usage in tools (e.g. src/server.py)
from .utils import format_success, format_error, Config

@mcp.tool()
async def process_data(data: dict) -> dict:
    try:
        result = await process(data)
        return format_success(result)
    except Exception as e:
        return format_error(str(e))
```

**Why it works:**
- No circular dependencies
- Cloud deployment safe
- Easy to maintain
- Single source of truth

## Lazy Initialization Pattern

**Problem:** Creating expensive resources at import time fails in cloud
**Solution:** Initialize resources only when needed

```python
class ResourceManager:
    """Manages expensive resources with lazy initialization."""
    _db_pool = None
    _cache = None

    @classmethod
    async def get_db(cls):
        """Get database pool (create on first use)."""
        if cls._db_pool is None:
            cls._db_pool = await create_db_pool()
        return cls._db_pool

    @classmethod
    async def get_cache(cls):
        """Get cache (create on first use)."""
        if cls._cache is None:
            cls._cache = await create_cache()
        return cls._cache

# Usage - no initialization at module level
manager = ResourceManager()  # Lightweight

@mcp.tool()
async def database_operation():
    db = await manager.get_db()  # Initialization happens here
    return await db.query("SELECT * FROM users")
```

## Connection Pooling Pattern

**Problem:** Creating new connections for each request is slow
**Solution:** Reuse HTTP clients with connection pooling

```python
from typing import Optional

import httpx

class APIClient:
    _instance: Optional[httpx.AsyncClient] = None

    @classmethod
    async def get_client(cls) -> httpx.AsyncClient:
        """Get or create shared HTTP client."""
        if cls._instance is None:
            cls._instance = httpx.AsyncClient(
                base_url=API_BASE_URL,
                timeout=httpx.Timeout(30.0),
                limits=httpx.Limits(
                    max_keepalive_connections=5,
                    max_connections=10
                )
            )
        return cls._instance

    @classmethod
    async def cleanup(cls):
        """Cleanup on shutdown."""
        if cls._instance:
            await cls._instance.aclose()
            cls._instance = None

@mcp.tool()
async def api_request(endpoint: str) -> dict:
    client = await APIClient.get_client()
    response = await client.get(endpoint)
    return response.json()
```

## Retry with Exponential Backoff

**Problem:** Transient failures cause tool failures
**Solution:** Automatic retry with exponential backoff

```python
import asyncio

async def retry_with_backoff(
    func,
    max_retries: int = 3,
    initial_delay: float = 1.0,
    exponential_base: float = 2.0
):
    """Retry function with exponential backoff."""
    delay = initial_delay
    last_exception = None

    for attempt in range(max_retries):
        try:
            return await func()
        except (httpx.TimeoutException, httpx.NetworkError) as e:
            last_exception = e
            if attempt < max_retries - 1:
                await asyncio.sleep(delay)
                delay *= exponential_base

    raise last_exception

@mcp.tool()
async def resilient_api_call(endpoint: str) -> dict:
    """API call with automatic retry."""
    async def make_call():
        client = await APIClient.get_client()
        response = await client.get(endpoint)
        response.raise_for_status()
        return response.json()

    try:
        data = await retry_with_backoff(make_call)
        return {"success": True, "data": data}
    except Exception as e:
        return {"success": False, "error": str(e)}
```

## Time-Based Caching Pattern

**Problem:** Repeated API calls for same data waste time/money
**Solution:** Cache with TTL (time-to-live)

```python
import time

class TimeBasedCache:
    def __init__(self, ttl: int = 300):
        self.ttl = ttl
        self.cache = {}
        self.timestamps = {}

    def get(self, key: str):
        if key in self.cache:
            if time.time() - self.timestamps[key] < self.ttl:
                return self.cache[key]
            else:
                del self.cache[key]
                del self.timestamps[key]
        return None

    def set(self, key: str, value):
        self.cache[key] = value
        self.timestamps[key] = time.time()

cache = TimeBasedCache(ttl=300)

@mcp.tool()
async def cached_fetch(resource_id: str) -> dict:
    """Fetch with caching."""
    cache_key = f"resource:{resource_id}"

    cached = cache.get(cache_key)
    if cached:
        return {"data": cached, "from_cache": True}

    data = await fetch_from_api(resource_id)
    cache.set(cache_key, data)

    return {"data": data, "from_cache": False}
```

## Structured Error Responses

**Problem:** Inconsistent error formats make debugging hard
**Solution:** Standardized error response format

```python
from datetime import datetime
from enum import Enum

class ErrorCode(Enum):
    VALIDATION_ERROR = "VALIDATION_ERROR"
    NOT_FOUND = "NOT_FOUND"
    API_ERROR = "API_ERROR"
    TIMEOUT = "TIMEOUT"
    UNKNOWN = "UNKNOWN"

def create_error(code: ErrorCode, message: str, details: dict | None = None):
    """Create structured error response."""
    return {
        "success": False,
        "error": {
            "code": code.value,
            "message": message,
            "details": details or {},
            "timestamp": datetime.now().isoformat()
        }
    }

@mcp.tool()
async def validated_operation(data: str) -> dict:
    if not data:
        return create_error(
            ErrorCode.VALIDATION_ERROR,
            "Data is required",
            {"field": "data"}
        )

    try:
        result = await process(data)
        return {"success": True, "data": result}
    except Exception as e:
        return create_error(ErrorCode.UNKNOWN, str(e))
```

## Environment-Based Configuration

**Problem:** Different settings for dev/staging/production
**Solution:** Environment-based configuration class

```python
import os
from enum import Enum

class Environment(Enum):
    DEVELOPMENT = "development"
    STAGING = "staging"
    PRODUCTION = "production"

class Config:
    ENV = Environment(os.getenv("ENVIRONMENT", "development"))

    SETTINGS = {
        Environment.DEVELOPMENT: {
            "debug": True,
            "cache_ttl": 60,
            "log_level": "DEBUG"
        },
        Environment.STAGING: {
            "debug": True,
            "cache_ttl": 300,
            "log_level": "INFO"
        },
        Environment.PRODUCTION: {
            "debug": False,
            "cache_ttl": 3600,
            "log_level": "WARNING"
        }
    }

    @classmethod
    def get(cls, key: str):
        return cls.SETTINGS[cls.ENV].get(key)

# Use configuration
cache_ttl = Config.get("cache_ttl")
debug_mode = Config.get("debug")
```

## Health Check Pattern

**Problem:** Need to monitor server health in production
**Solution:** Comprehensive health check resource

```python
@mcp.resource("health://status")
async def health_check() -> dict:
    """Comprehensive health check."""
    checks = {}

    # Check API connectivity
    try:
        client = await APIClient.get_client()
        response = await client.get("/health", timeout=5)
        checks["api"] = response.status_code == 200
    except Exception:
        checks["api"] = False

    # Check database (if applicable)
    try:
        db = await ResourceManager.get_db()
        await db.execute("SELECT 1")
        checks["database"] = True
    except Exception:
        checks["database"] = False

    # System resources
    import psutil
    checks["memory_percent"] = psutil.virtual_memory().percent
    checks["cpu_percent"] = psutil.cpu_percent()

    # Overall status
    all_healthy = (
        checks["api"] and
        checks["database"] and
        checks["memory_percent"] < 90 and
        checks["cpu_percent"] < 90
    )

    return {
        "status": "healthy" if all_healthy else "degraded",
        "timestamp": datetime.now().isoformat(),
        "checks": checks
    }
```

## Parallel Processing Pattern

**Problem:** Sequential processing is slow for batch operations
**Solution:** Process items in parallel

```python
import asyncio

@mcp.tool()
async def batch_process(items: list[str]) -> dict:
    """Process multiple items in parallel."""
    async def process_single(item: str):
        try:
            result = await process_item(item)
            return {"item": item, "success": True, "result": result}
        except Exception as e:
            return {"item": item, "success": False, "error": str(e)}

    # Process all items in parallel
    tasks = [process_single(item) for item in items]
    results = await asyncio.gather(*tasks)

    successful = [r for r in results if r["success"]]
    failed = [r for r in results if not r["success"]]

    return {
        "total": len(items),
        "successful": len(successful),
        "failed": len(failed),
        "results": results
    }
```
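
`asyncio.gather` launches every task at once, which can overwhelm an upstream API on large batches. A minimal sketch of bounding concurrency with a semaphore — `process_item` here is a stand-in for whatever per-item work your tool does:

```python
import asyncio

async def process_item(item: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for real per-item work (assumption)
    return item.upper()

async def batch_process_bounded(items: list[str], limit: int = 10) -> list[str]:
    sem = asyncio.Semaphore(limit)  # at most `limit` items in flight

    async def guarded(item: str) -> str:
        async with sem:
            return await process_item(item)

    # gather preserves input order in its result list
    return await asyncio.gather(*(guarded(i) for i in items))

results = asyncio.run(batch_process_bounded(["a", "b", "c"], limit=2))
print(results)  # -> ['A', 'B', 'C']
```

Tune `limit` to the rate limits of the service you call; the error-wrapping from `batch_process` above composes with this unchanged.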

## State Management Pattern

**Problem:** Shared state causes race conditions
**Solution:** Thread-safe state management with locks

```python
import asyncio

class StateManager:
    def __init__(self):
        self._state = {}
        self._locks = {}

    async def get(self, key: str, default=None):
        return self._state.get(key, default)

    async def set(self, key: str, value):
        if key not in self._locks:
            self._locks[key] = asyncio.Lock()

        async with self._locks[key]:
            self._state[key] = value

    async def update(self, key: str, updater):
        """Atomically apply a synchronous updater function to the current value."""
        if key not in self._locks:
            self._locks[key] = asyncio.Lock()

        async with self._locks[key]:
            current = self._state.get(key)
            # updater is a plain function (e.g. a lambda), so call it directly;
            # awaiting a non-coroutine return value would raise TypeError
            self._state[key] = updater(current)
            return self._state[key]

state = StateManager()

@mcp.tool()
async def increment_counter(name: str) -> dict:
    new_value = await state.update(
        f"counter_{name}",
        lambda x: (x or 0) + 1
    )
    return {"counter": name, "value": new_value}
```

## Anti-Patterns to Avoid

### ❌ Factory Functions in __init__.py

```python
# DON'T DO THIS
# shared/__init__.py
def get_api_client():
    from .api_client import APIClient  # Circular import risk
    return APIClient()
```
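
A safer shape, sketched with stdlib only: keep the accessor in the module that owns the class (here `APIClient` is a stand-in for the real `shared/api_client.py` class), and cache the singleton with `functools.lru_cache` instead of routing through `__init__.py`:

```python
from functools import lru_cache

class APIClient:  # stand-in for shared/api_client.py's class (assumption)
    def __init__(self):
        self.connected = True

@lru_cache(maxsize=1)
def get_api_client() -> APIClient:
    # created once on first call, reused everywhere after
    return APIClient()

assert get_api_client() is get_api_client()
```

Consumers then `from shared.api_client import get_api_client`, and `__init__.py` never has to import back into its own package.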

### ❌ Blocking Operations in Async

```python
# DON'T DO THIS
@mcp.tool()
async def bad_async():
    time.sleep(5)  # Blocks entire event loop!
    return "done"

# DO THIS INSTEAD
@mcp.tool()
async def good_async():
    await asyncio.sleep(5)
    return "done"
```
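
When the blocking call cannot be rewritten as async (a legacy SDK, synchronous file IO), a sketch using `asyncio.to_thread` keeps the event loop responsive by running it in a worker thread — `legacy_blocking_call` is a hypothetical stand-in:

```python
import asyncio
import time

def legacy_blocking_call() -> str:
    time.sleep(0.05)  # stands in for a blocking SDK call (assumption)
    return "done"

async def good_async_wrapper() -> str:
    # run the blocking call in a worker thread; the event loop stays free
    return await asyncio.to_thread(legacy_blocking_call)

result = asyncio.run(good_async_wrapper())
print(result)  # -> done
```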

### ❌ Global Mutable State

```python
# DON'T DO THIS
results = []  # Race conditions!

@mcp.tool()
async def add_result(data: str):
    results.append(data)
```
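
A corrected sketch guards the shared list with an `asyncio.Lock`. Note that in single-threaded asyncio a bare `append` is already atomic; the lock earns its keep once an operation spans an `await` between reading and writing the shared state:

```python
import asyncio

results: list[str] = []
results_lock = asyncio.Lock()  # serializes read-modify-write sequences

async def add_result(data: str) -> None:
    async with results_lock:
        # safe even if validation or IO is awaited before the append
        results.append(data)

async def main():
    await asyncio.gather(*(add_result(f"item-{i}") for i in range(5)))

asyncio.run(main())
print(len(results))  # -> 5
```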

## Production Deployment Checklist

- [ ] Module-level server object
- [ ] Environment variables for all config
- [ ] Connection pooling for HTTP clients
- [ ] Retry logic for transient failures
- [ ] Caching for expensive operations
- [ ] Structured error responses
- [ ] Health check endpoint
- [ ] Logging configured
- [ ] No circular imports
- [ ] No import-time async execution
- [ ] Rate limiting if needed
- [ ] Graceful shutdown handling
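
The graceful-shutdown item can be satisfied with a lifespan context manager. This stdlib-only sketch shows the shape; wiring it into the server via a `lifespan=` parameter on `FastMCP(...)` is an assumption — check your fastmcp version's docs for the exact hook:

```python
import asyncio
from contextlib import asynccontextmanager

events: list[str] = []

@asynccontextmanager
async def lifespan(server=None):
    events.append("startup")       # open pools, warm caches here
    try:
        yield
    finally:
        events.append("shutdown")  # close clients, flush state here

async def main():
    async with lifespan():
        events.append("serving")   # the server would run here

asyncio.run(main())
print(events)  # -> ['startup', 'serving', 'shutdown']
```

The `try/finally` guarantees the shutdown branch runs even when the server exits with an error.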

## Resources

- **Production Examples**: See `self-contained-server.py` template
- **Error Handling**: See `error-handling.py` template
- **API Patterns**: See `api-client-pattern.py` template

```

### scripts/check-versions.sh

```bash
#!/bin/bash
# FastMCP Version Checker
# Verifies that FastMCP and dependencies are up to date

set -e

echo "======================================"
echo "FastMCP Version Checker"
echo "======================================"
echo ""

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

# Check if Python is installed
if ! command -v python3 &> /dev/null; then
    echo -e "${RED}✗${NC} Python 3 is not installed"
    exit 1
fi

echo -e "${GREEN}✓${NC} Python $(python3 --version)"
echo ""

# Check Python version
PYTHON_VERSION=$(python3 -c 'import sys; print(f"{sys.version_info.major}.{sys.version_info.minor}")')
REQUIRED_VERSION="3.10"

if [ "$(printf '%s\n' "$REQUIRED_VERSION" "$PYTHON_VERSION" | sort -V | head -n1)" != "$REQUIRED_VERSION" ]; then
    echo -e "${RED}✗${NC} Python $PYTHON_VERSION is too old. FastMCP requires Python $REQUIRED_VERSION or later"
    exit 1
fi

echo -e "${GREEN}✓${NC} Python version $PYTHON_VERSION meets requirements"
echo ""

# Check if pip is installed
if ! command -v pip3 &> /dev/null; then
    echo -e "${RED}✗${NC} pip3 is not installed"
    exit 1
fi

echo "Checking package versions..."
echo ""

# Function to check package version
check_package() {
    local package=$1
    local min_version=$2

    if pip3 show "$package" &> /dev/null; then
        local installed_version=$(pip3 show "$package" | grep "Version:" | awk '{print $2}')
        echo -e "${GREEN}✓${NC} $package: $installed_version (required: >=$min_version)"

        # Simple semver check via sort -V; for production use a proper
        # version parser instead
        if [ "$(printf '%s\n' "$min_version" "$installed_version" | sort -V | head -n1)" != "$min_version" ]; then
            echo -e "  ${YELLOW}⚠${NC}  Installed version is older than minimum required"
        fi
    else
        echo -e "${RED}✗${NC} $package: Not installed (required: >=$min_version)"
    fi
}

# Check core packages
check_package "fastmcp" "2.12.0"
check_package "httpx" "0.27.0"
check_package "python-dotenv" "1.0.0"
check_package "pydantic" "2.0.0"

echo ""
echo "Checking optional packages..."
echo ""

# Check optional packages
if pip3 show "psutil" &> /dev/null; then
    check_package "psutil" "5.9.0"
else
    echo -e "${YELLOW}○${NC} psutil: Not installed (optional, for health checks)"
fi

if pip3 show "pytest" &> /dev/null; then
    check_package "pytest" "8.0.0"
else
    echo -e "${YELLOW}○${NC} pytest: Not installed (optional, for testing)"
fi

echo ""
echo "======================================"
echo "Version check complete!"
echo "======================================"
echo ""

# Suggestions
echo "Suggestions:"
echo "  - To update FastMCP: pip install --upgrade fastmcp"
echo "  - To update all dependencies: pip install --upgrade -r requirements.txt"
echo "  - To see outdated packages: pip list --outdated"
echo ""

```
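
The script's comment recommends a more robust version comparison than `sort -V`. A minimal stdlib-only sketch for dotted numeric versions — it deliberately does not handle pre-release suffixes like `1.0.0rc1` (for those, the `packaging` library's `packaging.version.parse` is the production-grade choice):

```python
def parse_ver(v: str) -> tuple[int, ...]:
    # naive: handles dotted numeric versions only (no rc/dev suffixes)
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed: str, minimum: str) -> bool:
    # tuple comparison is element-wise, so (2, 12, 1) >= (2, 12, 0)
    return parse_ver(installed) >= parse_ver(minimum)

print(meets_minimum("2.12.1", "2.12.0"))  # -> True
print(meets_minimum("2.9.0", "2.12.0"))   # -> False
```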

### scripts/deploy-cloud.sh

```bash
#!/bin/bash
# FastMCP Cloud Deployment Checker
# Validates server is ready for FastMCP Cloud deployment

set -e

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo "======================================"
echo "FastMCP Cloud Deployment Checker"
echo "======================================"
echo ""

# Check arguments
if [ $# -eq 0 ]; then
    echo "Usage: $0 <server.py>"
    echo ""
    echo "Example:"
    echo "  $0 server.py"
    exit 1
fi

SERVER_PATH=$1
ERRORS=0
WARNINGS=0

# Function to check requirement
check_required() {
    local description=$1
    local command=$2

    if eval "$command" &> /dev/null; then
        echo -e "${GREEN}✓${NC} $description"
        return 0
    else
        echo -e "${RED}✗${NC} $description"
        ERRORS=$((ERRORS + 1))
        return 1
    fi
}

# Function to check warning
check_warning() {
    local description=$1
    local command=$2

    if eval "$command" &> /dev/null; then
        echo -e "${GREEN}✓${NC} $description"
        return 0
    else
        echo -e "${YELLOW}⚠${NC} $description"
        WARNINGS=$((WARNINGS + 1))
        return 1
    fi
}

# 1. Check server file exists
echo "Checking server file..."
check_required "Server file exists: $SERVER_PATH" "test -f '$SERVER_PATH'"
echo ""

# 2. Check Python syntax
echo "Checking Python syntax..."
check_required "Python syntax is valid" "python3 -m py_compile '$SERVER_PATH'"
echo ""

# 3. Check for module-level server object
echo "Checking module-level server object..."
if grep -q "^mcp = FastMCP\|^server = FastMCP\|^app = FastMCP" "$SERVER_PATH"; then
    echo -e "${GREEN}✓${NC} Found module-level server object (mcp/server/app)"
else
    echo -e "${RED}✗${NC} No module-level server object found"
    echo "   Expected: mcp = FastMCP(...) at module level"
    ERRORS=$((ERRORS + 1))
fi
echo ""

# 4. Check requirements.txt
echo "Checking requirements.txt..."
if [ -f "requirements.txt" ]; then
    echo -e "${GREEN}✓${NC} requirements.txt exists"

    # Check for non-PyPI dependencies
    if grep -q "^git+\|^-e \|\.whl$\|\.tar\.gz$" requirements.txt; then
        echo -e "${RED}✗${NC} requirements.txt contains non-PyPI dependencies"
        echo "   FastMCP Cloud requires PyPI packages only"
        ERRORS=$((ERRORS + 1))
    else
        echo -e "${GREEN}✓${NC} All dependencies are PyPI packages"
    fi

    # Check for fastmcp
    if grep -q "^fastmcp" requirements.txt; then
        echo -e "${GREEN}✓${NC} FastMCP is in requirements.txt"
    else
        echo -e "${YELLOW}⚠${NC} FastMCP not found in requirements.txt"
        WARNINGS=$((WARNINGS + 1))
    fi
else
    echo -e "${RED}✗${NC} requirements.txt not found"
    ERRORS=$((ERRORS + 1))
fi
echo ""

# 5. Check for hardcoded secrets
echo "Checking for hardcoded secrets..."
if grep -iE "api_key[[:space:]]*=[[:space:]]*[\"']" "$SERVER_PATH" | grep -v "os.getenv\|os.environ" > /dev/null; then
    echo -e "${RED}✗${NC} Found hardcoded API keys (possible security issue)"
    ERRORS=$((ERRORS + 1))
else
    echo -e "${GREEN}✓${NC} No hardcoded API keys found"
fi

if grep -iE "password[[:space:]]*=[[:space:]]*[\"']|secret[[:space:]]*=[[:space:]]*[\"']" "$SERVER_PATH" | grep -v "os.getenv\|os.environ" > /dev/null; then
    echo -e "${YELLOW}⚠${NC} Found possible hardcoded passwords/secrets"
    WARNINGS=$((WARNINGS + 1))
fi
echo ""

# 6. Check .gitignore
echo "Checking .gitignore..."
if [ -f ".gitignore" ]; then
    echo -e "${GREEN}✓${NC} .gitignore exists"

    if grep -q "\.env$" .gitignore; then
        echo -e "${GREEN}✓${NC} .env is in .gitignore"
    else
        echo -e "${YELLOW}⚠${NC} .env not in .gitignore"
        WARNINGS=$((WARNINGS + 1))
    fi
else
    echo -e "${YELLOW}⚠${NC} .gitignore not found"
    WARNINGS=$((WARNINGS + 1))
fi
echo ""

# 7. Check for circular imports
echo "Checking for potential circular imports..."
if grep -r "from __init__ import\|from . import.*get_" . --include="*.py" 2>/dev/null | grep -v ".git" > /dev/null; then
    echo -e "${YELLOW}⚠${NC} Possible circular import pattern detected (factory functions)"
    WARNINGS=$((WARNINGS + 1))
else
    echo -e "${GREEN}✓${NC} No obvious circular import patterns"
fi
echo ""

# 8. Check git repository
echo "Checking git repository..."
if [ -d ".git" ]; then
    echo -e "${GREEN}✓${NC} Git repository initialized"

    # Check if there are uncommitted changes
    if [ -z "$(git status --porcelain)" ]; then
        echo -e "${GREEN}✓${NC} No uncommitted changes"
    else
        echo -e "${YELLOW}⚠${NC} There are uncommitted changes"
        WARNINGS=$((WARNINGS + 1))
    fi

    # Check if remote is set
    if git remote -v | grep -q "origin"; then
        echo -e "${GREEN}✓${NC} Git remote (origin) configured"
        REMOTE_URL=$(git remote get-url origin)
        echo "   Remote: $REMOTE_URL"
    else
        echo -e "${YELLOW}⚠${NC} No git remote configured"
        echo "   Run: gh repo create <name> --public"
        WARNINGS=$((WARNINGS + 1))
    fi
else
    echo -e "${YELLOW}⚠${NC} Not a git repository"
    echo "   Run: git init"
    WARNINGS=$((WARNINGS + 1))
fi
echo ""

# 9. Test server can run
echo "Testing server execution..."
if timeout 5 python3 "$SERVER_PATH" --help &> /dev/null || timeout 5 fastmcp inspect "$SERVER_PATH" &> /dev/null; then
    echo -e "${GREEN}✓${NC} Server can be loaded"
else
    echo -e "${YELLOW}⚠${NC} Could not verify server loads correctly"
    WARNINGS=$((WARNINGS + 1))
fi
echo ""

# Summary
echo "======================================"
echo "Deployment Check Summary"
echo "======================================"
echo ""

if [ $ERRORS -eq 0 ] && [ $WARNINGS -eq 0 ]; then
    echo -e "${GREEN}✓ Ready for deployment!${NC}"
    echo ""
    echo "Next steps:"
    echo "  1. Commit changes: git add . && git commit -m 'Ready for deployment'"
    echo "  2. Push to GitHub: git push -u origin main"
    echo "  3. Visit https://fastmcp.cloud"
    echo "  4. Connect your repository"
    echo "  5. Add environment variables"
    echo "  6. Deploy!"
    exit 0
elif [ $ERRORS -eq 0 ]; then
    echo -e "${YELLOW}⚠ Ready with warnings (${WARNINGS} warnings)${NC}"
    echo ""
    echo "Review warnings above before deploying."
    echo ""
    echo "To deploy anyway:"
    echo "  1. git add . && git commit -m 'Ready for deployment'"
    echo "  2. git push -u origin main"
    echo "  3. Visit https://fastmcp.cloud"
    exit 0
else
    echo -e "${RED}✗ Not ready for deployment (${ERRORS} errors, ${WARNINGS} warnings)${NC}"
    echo ""
    echo "Fix the errors above before deploying."
    echo ""
    echo "Common fixes:"
    echo "  - Export server at module level: mcp = FastMCP('name')"
    echo "  - Use only PyPI packages in requirements.txt"
    echo "  - Use os.getenv() for secrets, not hardcoded values"
    echo "  - Initialize git: git init"
    echo "  - Create .gitignore with .env"
    exit 1
fi

```

### scripts/test-server.sh

```bash
#!/bin/bash
# FastMCP Server Tester
# Tests a FastMCP server using the FastMCP Client

set -e

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

echo "======================================"
echo "FastMCP Server Tester"
echo "======================================"
echo ""

# Check arguments
if [ $# -eq 0 ]; then
    echo "Usage: $0 <server.py> [--http] [--port 8000]"
    echo ""
    echo "Examples:"
    echo "  $0 server.py                    # Test stdio server"
    echo "  $0 server.py --http --port 8000 # Test HTTP server"
    exit 1
fi

SERVER_PATH=$1
TRANSPORT="stdio"
PORT="8000"

# Parse arguments
shift
while [[ $# -gt 0 ]]; do
    case $1 in
        --http)
            TRANSPORT="http"
            shift
            ;;
        --port)
            PORT="$2"
            shift 2
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

# Check if server file exists
if [ ! -f "$SERVER_PATH" ]; then
    echo -e "${RED}✗${NC} Server file not found: $SERVER_PATH"
    exit 1
fi

echo -e "${GREEN}✓${NC} Found server: $SERVER_PATH"
echo -e "${GREEN}✓${NC} Transport: $TRANSPORT"
if [ "$TRANSPORT" = "http" ]; then
    echo -e "${GREEN}✓${NC} Port: $PORT"
fi
echo ""

# Create test script
TEST_SCRIPT=$(mktemp)
cat > "$TEST_SCRIPT" << 'EOF'
import asyncio
import sys
from fastmcp import Client

async def test_server(server_path, transport, port):
    """Test MCP server."""
    print("Starting server test...\n")

    try:
        if transport == "http":
            server_url = f"http://localhost:{port}/mcp"
            print(f"Connecting to HTTP server at {server_url}...")
            client_context = Client(server_url)
        else:
            print(f"Connecting to stdio server: {server_path}...")
            client_context = Client(server_path)

        async with client_context as client:
            print("✓ Connected to server\n")

            # Test: List tools
            print("Testing: List tools")
            tools = await client.list_tools()
            print(f"✓ Found {len(tools)} tools")
            for tool in tools:
                print(f"  - {tool.name}: {(tool.description or 'No description')[:60]}...")
            print()

            # Test: List resources
            print("Testing: List resources")
            resources = await client.list_resources()
            print(f"✓ Found {len(resources)} resources")
            for resource in resources:
                print(f"  - {resource.uri}: {resource.description[:60] if resource.description else 'No description'}...")
            print()

            # Test: List prompts
            print("Testing: List prompts")
            prompts = await client.list_prompts()
            print(f"✓ Found {len(prompts)} prompts")
            for prompt in prompts:
                print(f"  - {prompt.name}: {prompt.description[:60] if prompt.description else 'No description'}...")
            print()

            # Test: Call first tool (if any)
            if tools:
                print(f"Testing: Call tool '{tools[0].name}'")
                try:
                    # Try calling with empty args (may fail if required params)
                    result = await client.call_tool(tools[0].name, {})
                    print(f"✓ Tool executed successfully")
                    print(f"  Result: {str(result.data)[:100]}...")
                except Exception as e:
                    print(f"⚠ Tool call failed (may require parameters): {e}")
                print()

            # Test: Read first resource (if any)
            if resources:
                print(f"Testing: Read resource '{resources[0].uri}'")
                try:
                    data = await client.read_resource(resources[0].uri)
                    print(f"✓ Resource read successfully")
                    print(f"  Data: {str(data)[:100]}...")
                except Exception as e:
                    print(f"✗ Failed to read resource: {e}")
                print()

            print("=" * 50)
            print("✓ Server test completed successfully!")
            print("=" * 50)
            return 0

    except Exception as e:
        print(f"\n✗ Server test failed: {e}")
        import traceback
        traceback.print_exc()
        return 1

if __name__ == "__main__":
    server_path = sys.argv[1]
    transport = sys.argv[2] if len(sys.argv) > 2 else "stdio"
    port = sys.argv[3] if len(sys.argv) > 3 else "8000"

    exit_code = asyncio.run(test_server(server_path, transport, port))
    sys.exit(exit_code)
EOF

# Run test
echo "Running tests..."
echo ""

if [ "$TRANSPORT" = "http" ]; then
    # For HTTP, start server in background
    echo "Starting HTTP server in background..."
    python3 "$SERVER_PATH" --transport http --port "$PORT" &
    SERVER_PID=$!

    # Wait for server to start
    sleep 2

    # Run test
    # Capture the exit code without tripping `set -e`
    python3 "$TEST_SCRIPT" "$SERVER_PATH" "$TRANSPORT" "$PORT" && TEST_EXIT=0 || TEST_EXIT=$?

    # Kill server
    kill $SERVER_PID 2>/dev/null || true

    # Cleanup
    rm "$TEST_SCRIPT"

    exit $TEST_EXIT
else
    # For stdio, run test directly
    # Capture the exit code without tripping `set -e`
    python3 "$TEST_SCRIPT" "$SERVER_PATH" "$TRANSPORT" && TEST_EXIT=0 || TEST_EXIT=$?

    # Cleanup
    rm "$TEST_SCRIPT"

    exit $TEST_EXIT
fi

```
