
openai-agents-sdk

OpenAI Agents SDK (Python) development. Use when building AI agents, multi-agent workflows, tool integrations, or streaming applications with the openai-agents package.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 13
Hot score: 85
Updated: March 20, 2026
Overall rating: C (1.5)
Composite score: 1.5
Best-practice grade: B (77.6)

Install command

npx @skill-hub/cli install laguagu-claude-code-nextjs-skills-openai-agents-sdk
Tags: openai, ai-agents, python, sdk, workflow

Repository

laguagu/claude-code-nextjs-skills

Skill path: skills/openai-agents-sdk

OpenAI Agents SDK (Python) development. Use when building AI agents, multi-agent workflows, tool integrations, or streaming applications with the openai-agents package.

Open repository

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: laguagu.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install openai-agents-sdk into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/laguagu/claude-code-nextjs-skills before adding openai-agents-sdk to shared team environments
  • Use openai-agents-sdk for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: openai-agents-sdk
description: OpenAI Agents SDK (Python) development. Use when building AI agents, multi-agent workflows, tool integrations, or streaming applications with the openai-agents package.
---

# OpenAI Agents SDK (Python)

Use this skill when developing AI agents using OpenAI Agents SDK (`openai-agents` package).

## Quick Reference

### Installation

```bash
pip install openai-agents
```

### Environment Variables

```bash
# OpenAI (direct)
OPENAI_API_KEY=sk-...
LLM_PROVIDER=openai

# Azure OpenAI (via LiteLLM)
LLM_PROVIDER=azure
AZURE_API_KEY=...
AZURE_API_BASE=https://your-resource.openai.azure.com
AZURE_API_VERSION=2024-12-01-preview
```
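These variables can be mapped to a model identifier before any SDK call. A minimal sketch (the `resolve_model` helper and its defaults are illustrative, not part of the SDK), following the LiteLLM convention that an `azure/` prefix routes the request to an Azure endpoint:

```python
import os

def resolve_model(default: str = "gpt-5.2") -> str:
    """Pick a model identifier from LLM_PROVIDER / MODEL env vars.

    Illustrative helper: with LiteLLM, names prefixed with "azure/"
    are routed to the Azure endpoint; bare names go straight to OpenAI.
    """
    provider = os.getenv("LLM_PROVIDER", "openai")
    model = os.getenv("MODEL", default)
    return f"azure/{model}" if provider == "azure" else model

os.environ["LLM_PROVIDER"] = "azure"
os.environ["MODEL"] = "gpt-5.2"
print(resolve_model())  # azure/gpt-5.2
```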

### Basic Agent

```python
from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-5.2",  # or "gpt-5", "gpt-5.2-nano"
)

# Synchronous
result = Runner.run_sync(agent, "Tell me a joke")
print(result.final_output)

# Asynchronous (run inside an async function / event loop)
result = await Runner.run(agent, "Tell me a joke")
```

### Key Patterns

| Pattern | Purpose |
|---------|---------|
| Basic Agent | Simple Q&A with instructions |
| Azure/LiteLLM | Azure OpenAI integration |
| AgentOutputSchema | Strict JSON validation with Pydantic |
| Function Tools | External actions (@function_tool) |
| Streaming | Real-time UI (Runner.run_streamed) |
| Handoffs | Specialized agents, delegation |
| Agents as Tools | Orchestration (agent.as_tool) |
| LLM as Judge | Iterative improvement loop |
| Guardrails | Input/output validation |
| Sessions | Automatic conversation history |
| Multi-Agent Pipeline | Multi-step workflows |

## Reference Documentation

For detailed information, see:

- [agents.md](references/agents.md) - Agent creation, Azure/LiteLLM integration
- [tools.md](references/tools.md) - Function tools, hosted tools, agents as tools
- [structured-output.md](references/structured-output.md) - Pydantic output, AgentOutputSchema
- [streaming.md](references/streaming.md) - Streaming patterns, SSE with FastAPI
- [handoffs.md](references/handoffs.md) - Agent delegation
- [guardrails.md](references/guardrails.md) - Input/output validation
- [sessions.md](references/sessions.md) - Sessions, conversation history
- [patterns.md](references/patterns.md) - Multi-agent workflows, LLM as judge, tracing

## Official Documentation

- **Docs:** https://openai.github.io/openai-agents-python/
- **Examples:** https://github.com/openai/openai-agents-python/tree/main/examples


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/agents.md

```markdown
# Agents

## Basic Agent Creation

```python
from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-5.2",  # or "gpt-5", "gpt-5.2-nano"
)

# Synchronous execution
result = Runner.run_sync(agent, "Tell me a joke")
print(result.final_output)

# Asynchronous execution (run inside an async function / event loop)
result = await Runner.run(agent, "Tell me a joke")
```

## Azure OpenAI (LiteLLM)

```python
import os
from typing import Union
from agents import Agent, ModelSettings
from agents.extensions.models.litellm_model import LitellmModel

LLM_PROVIDER = os.getenv("LLM_PROVIDER", "azure")
MODEL = os.getenv("MODEL", "gpt-5.2")

def get_model() -> Union[str, LitellmModel]:
    """Get model based on provider."""
    if LLM_PROVIDER == "azure":
        # azure/ prefix tells LiteLLM to use Azure endpoint
        return LitellmModel(model=f"azure/{MODEL}")
    # Direct OpenAI
    return MODEL

agent = Agent(
    name="Assistant",
    instructions="You are helpful.",
    model=get_model(),  # Works with both Azure and OpenAI
)
```

## Dynamic System Prompt

```python
from agents import Agent, Runner, RunContextWrapper

def dynamic_instructions(
    ctx: RunContextWrapper[dict], agent: Agent[dict]
) -> str:
    user_name = ctx.context.get("user_name", "User")
    return f"You are helping {user_name}. Be friendly and helpful."

agent = Agent(
    name="DynamicBot",
    instructions=dynamic_instructions,  # Function instead of string
    model="gpt-5.2",
)

result = await Runner.run(
    agent,
    "Hello!",
    context={"user_name": "Alice"},
)
```

## Loading Prompts from Files

```python
from pathlib import Path

PROMPTS_DIR = Path(__file__).parent / "prompts"

def load_prompt(filename: str) -> str:
    return (PROMPTS_DIR / filename).read_text(encoding="utf-8")

agent = Agent(
    name="Planner",
    instructions=load_prompt("planner.md"),
    model="gpt-5.2",
)
```
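The loader above is plain `pathlib`; its behavior can be checked against a temporary prompts directory (file names here are illustrative):

```python
import tempfile
from pathlib import Path

def load_prompt(prompts_dir: Path, filename: str) -> str:
    # Same idea as above, with the directory passed in explicitly
    return (prompts_dir / filename).read_text(encoding="utf-8")

with tempfile.TemporaryDirectory() as tmp:
    prompts_dir = Path(tmp)
    (prompts_dir / "planner.md").write_text("You are a planner.", encoding="utf-8")
    print(load_prompt(prompts_dir, "planner.md"))  # You are a planner.
```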

## Agent Configuration Options

| Option | Description |
|--------|-------------|
| `name` | Agent identifier |
| `instructions` | System prompt (string or function) |
| `model` | Model name or LitellmModel instance |
| `tools` | List of tools the agent can use |
| `handoffs` | List of agents to delegate to |
| `output_type` | Pydantic model for structured output |
| `model_settings` | ModelSettings for fine-tuning |
| `input_guardrails` | Input validation functions |
| `output_guardrails` | Output validation functions |

```

### references/tools.md

```markdown
# Tools

## Function Tools (@function_tool)

```python
from typing import Annotated
from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: Annotated[str, "City name"]) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: Sunny, 20C"

@function_tool
async def search_database(query: Annotated[str, "Search query"]) -> list[dict]:
    """Search products in database."""
    # Async function - can await database calls
    return [{"id": "1", "name": "Hiking boots"}]

agent = Agent(
    name="Assistant",
    instructions="Help users find information.",
    tools=[get_weather, search_database],
)
```

## Tool with Multiple Parameters

```python
@function_tool
def book_flight(
    origin: Annotated[str, "Departure city"],
    destination: Annotated[str, "Arrival city"],
    date: Annotated[str, "Travel date (YYYY-MM-DD)"],
    passengers: Annotated[int, "Number of passengers"] = 1,
) -> dict:
    """Book a flight between two cities."""
    return {
        "confirmation": "ABC123",
        "route": f"{origin} -> {destination}",
        "date": date,
        "passengers": passengers,
    }
```

## Hosted Tools (Built-in)

```python
from agents import Agent, WebSearchTool, CodeInterpreterTool

agent = Agent(
    name="Researcher",
    instructions="Search the web and analyze data.",
    tools=[
        WebSearchTool(user_location={"type": "approximate", "city": "Helsinki"}),
        CodeInterpreterTool(),
    ],
)
```

## Agents as Tools

Use other agents as tools for orchestration:

```python
from agents import Agent, Runner

translator_es = Agent(
    name="SpanishTranslator",
    instructions="Translate to Spanish.",
)

translator_fr = Agent(
    name="FrenchTranslator",
    instructions="Translate to French.",
)

orchestrator = Agent(
    name="Orchestrator",
    instructions="Use translation tools as needed.",
    tools=[
        translator_es.as_tool(
            tool_name="translate_spanish",
            tool_description="Translate text to Spanish",
        ),
        translator_fr.as_tool(
            tool_name="translate_french",
            tool_description="Translate text to French",
        ),
    ],
)

result = await Runner.run(orchestrator, "Translate 'hello' to Spanish and French")
```

## Tool Guardrails

```python
from typing import Annotated
from agents import Agent, function_tool, tool_guardrail
from agents import ToolGuardrailFunctionOutput, RunContextWrapper

@tool_guardrail
async def validate_query(
    ctx: RunContextWrapper, agent: Agent, tool_input: dict
) -> ToolGuardrailFunctionOutput:
    query = tool_input.get("query", "")
    if len(query) < 3:
        return ToolGuardrailFunctionOutput(
            tripwire_triggered=True,
            output_info="Query too short",
        )
    return ToolGuardrailFunctionOutput(tripwire_triggered=False)

@function_tool(guardrails=[validate_query])
def search(query: Annotated[str, "Search query"]) -> list[str]:
    """Search for items."""
    return ["result1", "result2"]
```

## Forcing Tool Use

```python
from agents import Agent, ModelSettings

agent = Agent(
    name="ToolUser",
    instructions="Always use tools to answer.",
    tools=[get_weather, search_database],
    model_settings=ModelSettings(
        tool_choice="required",  # Force tool usage
    ),
)
```

```

### references/structured-output.md

```markdown
# Structured Output

## AgentOutputSchema with Pydantic

```python
from pydantic import BaseModel, Field
from agents import Agent, Runner, AgentOutputSchema, ModelSettings
from openai.types.shared.reasoning import Reasoning

# Pydantic model for response structure
class ProductRecommendationLite(BaseModel):
    product_id: str = Field(description="Unique product ID")
    name: str = Field(description="Product name")
    relevance_reason: str = Field(description="Why this product matches")
    match_score: float = Field(ge=0, le=1, description="Match score 0-1")

class ProductSelectionOutput(BaseModel):
    products: list[ProductRecommendationLite] = Field(description="Selected products")

# Agent with strict JSON schema output
agent = Agent(
    name="ProductSelector",
    instructions="Select the 10 best products matching user request...",
    model=get_model(),  # get_model() helper from references/agents.md (Azure/OpenAI switch)
    model_settings=ModelSettings(
        max_tokens=64000,
        # Reasoning effort: "none", "low", "medium", "high"
        reasoning=Reasoning(effort="low"),
    ),
    # strict_json_schema=True forces LLM to return valid JSON
    output_type=AgentOutputSchema(ProductSelectionOutput, strict_json_schema=True),
)

result = await Runner.run(agent, "Find products for family hiking trip")
output: ProductSelectionOutput = result.final_output

# Use the result
for product in output.products:
    print(f"{product.name}: {product.match_score} - {product.relevance_reason}")
```

## Simple Output Type

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class EvaluationFeedback:
    feedback: str
    score: Literal["pass", "needs_improvement", "fail"]

evaluator = Agent[None](
    name="Evaluator",
    instructions="Evaluate content and provide feedback.",
    output_type=EvaluationFeedback,
)

result = await Runner.run(evaluator, "Review this story outline...")
evaluation: EvaluationFeedback = result.final_output
print(f"Score: {evaluation.score}, Feedback: {evaluation.feedback}")
```

## ModelSettings

```python
from agents import Agent, ModelSettings
from openai.types.shared.reasoning import Reasoning

agent = Agent(
    name="Assistant",
    instructions="Be helpful.",
    model="gpt-5.2",
    model_settings=ModelSettings(
        max_tokens=32000,
        temperature=0.7,
        tool_choice="required",  # Force tool usage
        reasoning=Reasoning(effort="medium"),  # GPT-5 reasoning
    ),
)
```

## ModelSettings Options

| Option | Description |
|--------|-------------|
| `max_tokens` | Maximum tokens in response |
| `temperature` | Randomness (0.0-2.0) |
| `top_p` | Nucleus sampling |
| `tool_choice` | "auto", "required", "none" |
| `reasoning` | Reasoning effort for GPT-5 models |
| `presence_penalty` | Penalize repeated topics |
| `frequency_penalty` | Penalize repeated tokens |

## Non-Strict Output

For schemas that don't support strict mode:

```python
from pydantic import BaseModel
from agents import Agent, AgentOutputSchema

class FlexibleOutput(BaseModel):
    data: dict  # dict type not supported in strict mode
    notes: str

agent = Agent(
    name="Flexible",
    instructions="Return flexible data.",
    output_type=AgentOutputSchema(FlexibleOutput, strict_json_schema=False),
)
```

```

### references/streaming.md

```markdown
# Streaming

## Basic Streaming

```python
from openai.types.responses import ResponseTextDeltaEvent
from agents import Agent, Runner

agent = Agent(name="Writer", instructions="Write stories.")

result = Runner.run_streamed(agent, input="Write a short story")

async for event in result.stream_events():
    if event.type == "raw_response_event":
        if isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)
```

## Stream Items

```python
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="Be helpful.")

result = Runner.run_streamed(agent, input="Tell me about Python")

# Run items (messages, tool calls/outputs) arrive as run_item_stream_event
async for event in result.stream_events():
    if event.type == "run_item_stream_event":
        print(f"Item type: {event.item.type}")
```

## SSE Streaming with FastAPI

```python
import json
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from openai.types.responses import ResponseTextDeltaEvent
from agents import Agent, Runner

app = FastAPI()

agent = Agent(name="Assistant", instructions="Be helpful.")

def sse(event: str, data: dict) -> str:
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

@app.post("/stream")
async def stream_response(prompt: str):
    async def generate():
        result = Runner.run_streamed(agent, input=prompt)

        async for event in result.stream_events():
            if event.type == "raw_response_event":
                if isinstance(event.data, ResponseTextDeltaEvent):
                    yield sse("delta", {"text": event.data.delta})

        yield sse("done", {})

    return StreamingResponse(
        generate(),
        media_type="text/event-stream",
    )
```
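The `sse` helper above emits the standard Server-Sent-Events wire framing that the browser `EventSource` API (and most SSE clients) parse; the format can be checked in isolation:

```python
import json

def sse(event: str, data: dict) -> str:
    # Same framing as the FastAPI example: an "event:" line, a "data:" line
    # carrying a JSON payload, and a blank line terminating the frame
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

print(repr(sse("delta", {"text": "Hi"})))
```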

## Streaming with Tool Calls

```python
from agents import Agent, Runner, function_tool
from openai.types.responses import ResponseTextDeltaEvent, ResponseFunctionCallArgumentsDeltaEvent

@function_tool
def get_data(query: str) -> str:
    return f"Data for {query}"

agent = Agent(
    name="DataBot",
    instructions="Fetch data when asked.",
    tools=[get_data],
)

result = Runner.run_streamed(agent, input="Get data about sales")

async for event in result.stream_events():
    if event.type == "raw_response_event":
        if isinstance(event.data, ResponseTextDeltaEvent):
            print(f"Text: {event.data.delta}", end="")
        elif isinstance(event.data, ResponseFunctionCallArgumentsDeltaEvent):
            print(f"Tool args: {event.data.delta}", end="")
```

## Streaming with Guardrails

```python
from agents import Agent, Runner, input_guardrail
from agents import GuardrailFunctionOutput, RunContextWrapper

@input_guardrail
async def check_input(
    ctx: RunContextWrapper, agent: Agent, input: str
) -> GuardrailFunctionOutput:
    if "bad" in input.lower():
        return GuardrailFunctionOutput(
            tripwire_triggered=True,
            output_info="Inappropriate content",
        )
    return GuardrailFunctionOutput(tripwire_triggered=False)

agent = Agent(
    name="SafeBot",
    instructions="Be helpful.",
    input_guardrails=[check_input],
)

try:
    result = Runner.run_streamed(agent, input="Hello")
    async for event in result.stream_events():
        # Process events
        pass
except Exception as e:
    print(f"Guardrail triggered: {e}")
```

## Collecting Full Response

```python
result = Runner.run_streamed(agent, input="Tell me a story")

# Stream deltas first
async for event in result.stream_events():
    if event.type == "raw_response_event":
        if isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="")

# Once the stream is exhausted, the final output is on the result object
print(f"\n\nFull output: {result.final_output}")
```

```

### references/handoffs.md

```markdown
# Handoffs

## Basic Handoffs

Handoffs allow agents to delegate tasks to specialized agents:

```python
from agents import Agent, handoff

billing_agent = Agent(
    name="BillingAgent",
    instructions="Handle billing questions. You can help with invoices, payments, and subscriptions.",
)

support_agent = Agent(
    name="SupportAgent",
    instructions="Handle general support. Handoff billing questions to the billing agent.",
    handoffs=[billing_agent],
)

# LLM automatically decides when to delegate to another agent
result = await Runner.run(support_agent, "I have a question about my invoice")
```

## Multiple Handoffs

```python
billing_agent = Agent(
    name="BillingAgent",
    instructions="Handle billing and payment questions.",
)

technical_agent = Agent(
    name="TechnicalAgent",
    instructions="Handle technical issues and troubleshooting.",
)

sales_agent = Agent(
    name="SalesAgent",
    instructions="Handle sales inquiries and pricing.",
)

triage_agent = Agent(
    name="TriageAgent",
    instructions="""You are a customer service triage agent.
    Route customers to the appropriate specialist:
    - Billing questions -> BillingAgent
    - Technical issues -> TechnicalAgent
    - Sales/pricing -> SalesAgent
    """,
    handoffs=[billing_agent, technical_agent, sales_agent],
)

result = await Runner.run(triage_agent, "My app keeps crashing")
# -> Delegates to TechnicalAgent
```

## Handoff with Context

```python
from agents import Agent, handoff, RunContextWrapper

def escalation_instructions(
    ctx: RunContextWrapper[dict], agent: Agent[dict]
) -> str:
    priority = ctx.context.get("priority", "normal")
    return f"""You are handling an escalated case.
    Priority level: {priority}
    Be thorough and professional."""

escalation_agent = Agent(
    name="EscalationAgent",
    instructions=escalation_instructions,
)

support_agent = Agent(
    name="SupportAgent",
    instructions="Handle support. Escalate complex issues.",
    handoffs=[escalation_agent],
)

result = await Runner.run(
    support_agent,
    "This is urgent, I need help immediately!",
    context={"priority": "high"},
)
```

## Handoff vs Agents as Tools

| Feature | Handoffs | Agents as Tools |
|---------|----------|-----------------|
| Control flow | LLM decides when to delegate | Parent agent calls child explicitly |
| Return | Child agent takes over | Returns result to parent |
| Use case | Specialized routing | Orchestration, parallel tasks |
| Conversation | Child continues conversation | Parent continues after tool result |

### Handoff Example

```python
# Child takes over the conversation
support_agent = Agent(
    name="Support",
    handoffs=[billing_agent],  # Billing agent takes over
)
```

### Agent as Tool Example

```python
# Parent stays in control
orchestrator = Agent(
    name="Orchestrator",
    tools=[
        billing_agent.as_tool(
            tool_name="check_billing",
            tool_description="Get billing info",
        ),
    ],
)
```

## Message Filtering

Control what messages are passed during handoff:

```python
from agents import Agent, handoff
from agents.extensions import handoff_filters

specialist = Agent(
    name="Specialist",
    instructions="Handle specialized tasks.",
)

agent = Agent(
    name="Router",
    instructions="Route to specialist when needed.",
    handoffs=[
        handoff(
            agent=specialist,
            # input_filter receives HandoffInputData (not a raw message list);
            # remove_all_tools strips prior tool calls/outputs from the history
            input_filter=handoff_filters.remove_all_tools,
        ),
    ],
)
```

```

### references/guardrails.md

```markdown
# Guardrails

## Input Guardrails

Validate and filter input before the agent processes it:

```python
from agents import Agent, Runner, input_guardrail
from agents import GuardrailFunctionOutput, RunContextWrapper

@input_guardrail
async def check_appropriate(
    ctx: RunContextWrapper, agent: Agent, input: str
) -> GuardrailFunctionOutput:
    # Check input for inappropriate content
    is_inappropriate = "bad_word" in input.lower()
    return GuardrailFunctionOutput(
        tripwire_triggered=is_inappropriate,
        output_info="Inappropriate content detected" if is_inappropriate else None,
    )

@input_guardrail
async def check_length(
    ctx: RunContextWrapper, agent: Agent, input: str
) -> GuardrailFunctionOutput:
    if len(input) > 10000:
        return GuardrailFunctionOutput(
            tripwire_triggered=True,
            output_info="Input too long (max 10000 characters)",
        )
    return GuardrailFunctionOutput(tripwire_triggered=False)

agent = Agent(
    name="SafeAgent",
    instructions="Be helpful.",
    input_guardrails=[check_appropriate, check_length],
)
```

## Output Guardrails

Validate agent output before returning:

```python
from agents import Agent, output_guardrail
from agents import GuardrailFunctionOutput, RunContextWrapper

@output_guardrail
async def check_no_pii(
    ctx: RunContextWrapper, agent: Agent, output: str
) -> GuardrailFunctionOutput:
    # Check for potential PII in output
    import re

    # Simple email pattern check
    has_email = bool(re.search(r'\b[\w.-]+@[\w.-]+\.\w+\b', output))
    # Simple phone pattern check
    has_phone = bool(re.search(r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b', output))

    if has_email or has_phone:
        return GuardrailFunctionOutput(
            tripwire_triggered=True,
            output_info="Output contains potential PII",
        )
    return GuardrailFunctionOutput(tripwire_triggered=False)

agent = Agent(
    name="PIISafeAgent",
    instructions="Help users with their questions.",
    output_guardrails=[check_no_pii],
)
```
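The regex checks inside `check_no_pii` are plain Python and can be exercised without running an agent; a standalone version of the same patterns:

```python
import re

# Same simple email/phone patterns as check_no_pii above
EMAIL_RE = re.compile(r'\b[\w.-]+@[\w.-]+\.\w+\b')
PHONE_RE = re.compile(r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b')

def contains_pii(text: str) -> bool:
    """True if text matches the email or phone heuristics."""
    return bool(EMAIL_RE.search(text) or PHONE_RE.search(text))

print(contains_pii("Reach me at jane@example.com"))  # True
print(contains_pii("No contact details here"))       # False
```

These are deliberately loose heuristics; production PII detection would need a dedicated library or service.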

## Guardrail with Context

```python
from agents import Agent, input_guardrail
from agents import GuardrailFunctionOutput, RunContextWrapper

@input_guardrail
async def check_user_permissions(
    ctx: RunContextWrapper[dict], agent: Agent, input: str
) -> GuardrailFunctionOutput:
    user_role = ctx.context.get("user_role", "guest")

    # Check if user can access admin features
    if "admin" in input.lower() and user_role != "admin":
        return GuardrailFunctionOutput(
            tripwire_triggered=True,
            output_info="Admin access not permitted for your role",
        )
    return GuardrailFunctionOutput(tripwire_triggered=False)

agent = Agent(
    name="RoleBasedAgent",
    instructions="Help users based on their role.",
    input_guardrails=[check_user_permissions],
)

# User without admin access
result = await Runner.run(
    agent,
    "Show me admin settings",
    context={"user_role": "user"},
)
# -> Guardrail triggered
```

## Tool Guardrails

Validate tool inputs before execution:

```python
from agents import Agent, function_tool, tool_guardrail
from agents import ToolGuardrailFunctionOutput, RunContextWrapper
from typing import Annotated

@tool_guardrail
async def validate_file_path(
    ctx: RunContextWrapper, agent: Agent, tool_input: dict
) -> ToolGuardrailFunctionOutput:
    path = tool_input.get("file_path", "")

    # Block access to sensitive directories
    forbidden = ["/etc", "/root", "~/.ssh"]
    for forbidden_path in forbidden:
        if path.startswith(forbidden_path):
            return ToolGuardrailFunctionOutput(
                tripwire_triggered=True,
                output_info=f"Access to {forbidden_path} not allowed",
            )
    return ToolGuardrailFunctionOutput(tripwire_triggered=False)

@function_tool(guardrails=[validate_file_path])
def read_file(file_path: Annotated[str, "Path to file"]) -> str:
    """Read contents of a file."""
    with open(file_path) as f:
        return f.read()
```

## Handling Guardrail Errors

```python
from agents import Agent, Runner, InputGuardrailTripwireTriggered

agent = Agent(
    name="SafeBot",
    instructions="Be helpful.",
    input_guardrails=[check_appropriate],
)

try:
    result = await Runner.run(agent, "Some bad_word input")
except InputGuardrailTripwireTriggered as e:
    print(f"Input blocked: {e.guardrail_result.output_info}")
```

## GuardrailFunctionOutput Fields

| Field | Description |
|-------|-------------|
| `tripwire_triggered` | True if guardrail should block |
| `output_info` | Human-readable explanation |

```

### references/sessions.md

```markdown
# Sessions

## Conversation History with to_input_list()

Manual conversation history management:

```python
from agents import Agent, Runner, TResponseInputItem

agent = Agent(name="ChatBot", instructions="Be helpful.")

# First message
result = await Runner.run(agent, "Hello!")

# Continue conversation with history
inputs = result.to_input_list()
inputs.append({"role": "user", "content": "Tell me more"})

result = await Runner.run(agent, inputs)
```

## SQLite Session

Automatic conversation history with SQLite:

```python
from agents import Agent, Runner, SQLiteSession

agent = Agent(name="ChatBot", instructions="Remember our conversation.")

# Session stores and loads history automatically
session = SQLiteSession("conversation_123")

result1 = await Runner.run(agent, "My name is John", session=session)
result2 = await Runner.run(agent, "What's my name?", session=session)
# -> "Your name is John"
```

## Advanced SQLite Session

```python
from agents import Agent, Runner, SQLiteSession

# Custom database path
session = SQLiteSession(
    session_id="user_456_chat",
    db_path="./data/conversations.db",
)

agent = Agent(
    name="MemoryBot",
    instructions="Remember user preferences and history.",
)

# Multiple conversations with same agent
await Runner.run(agent, "I prefer dark mode", session=session)
await Runner.run(agent, "Set language to Finnish", session=session)

# Later session retrieval
session2 = SQLiteSession(session_id="user_456_chat", db_path="./data/conversations.db")
result = await Runner.run(agent, "What are my preferences?", session=session2)
# -> Remembers dark mode and Finnish language
```

## Redis Session

For distributed systems:

```python
from agents import Agent, Runner
from agents.extensions.sessions import RedisSession

session = RedisSession(
    session_id="user_789",
    redis_url="redis://localhost:6379",
    ttl=3600,  # 1 hour expiry
)

agent = Agent(name="ScalableBot", instructions="Be helpful.")

result = await Runner.run(agent, "Hello!", session=session)
```

## OpenAI Session

Using OpenAI's built-in memory:

```python
from agents import Agent, Runner
from agents.extensions.sessions import OpenAISession

session = OpenAISession(session_id="openai_session_123")

agent = Agent(
    name="OpenAIMemoryBot",
    instructions="Use your memory to help users.",
)

result = await Runner.run(agent, "Remember I like Python", session=session)
```

## Compaction Session

Automatically summarize long conversations:

```python
from agents import Agent, Runner
from agents.extensions.sessions import CompactionSession, SQLiteSession

base_session = SQLiteSession("long_conversation")

# Compacts history when it exceeds threshold
session = CompactionSession(
    base_session=base_session,
    max_messages=20,  # Compact after 20 messages
    summary_model="gpt-5.2-mini",  # Model for summarization
)

agent = Agent(name="LongChatBot", instructions="Have long conversations.")

# After many messages, older ones are summarized
for i in range(30):
    await Runner.run(agent, f"Message {i}", session=session)
```

## Encrypted Session

For sensitive conversations:

```python
from agents import Agent, Runner
from agents.extensions.sessions import EncryptedSession, SQLiteSession

base_session = SQLiteSession("sensitive_chat")

session = EncryptedSession(
    base_session=base_session,
    encryption_key="your-32-byte-encryption-key-here",
)

agent = Agent(name="SecureBot", instructions="Handle sensitive information.")

result = await Runner.run(agent, "My SSN is 123-45-6789", session=session)
# Data stored encrypted in SQLite
```

## Session Comparison

| Session Type | Storage | Use Case |
|--------------|---------|----------|
| Manual (to_input_list) | Memory | Simple, single-request |
| SQLiteSession | Local file | Single-server apps |
| RedisSession | Redis | Distributed systems |
| OpenAISession | OpenAI | Using OpenAI memory |
| CompactionSession | Wrapper | Long conversations |
| EncryptedSession | Wrapper | Sensitive data |

```

### references/patterns.md

```markdown
# Patterns

## Multi-Agent Workflow Pipeline

Example: 3-stage pipeline (ProductSelector -> SetOptimizer -> PlanGenerator)

```python
from pathlib import Path
from pydantic import BaseModel, Field
from agents import Agent, AgentOutputSchema, ModelSettings, RunConfig, Runner
from openai.types.responses import ResponseTextDeltaEvent
from openai.types.shared.reasoning import Reasoning
from collections.abc import AsyncIterator

# --- Pydantic Output Schemas ---

class ProductLite(BaseModel):
    product_id: str
    name: str
    score: float = Field(ge=0, le=1)

class ProductsOutput(BaseModel):
    products: list[ProductLite]

class TravelSet(BaseModel):
    set_id: str  # "compact", "balanced", "extended"
    name: str
    product_ids: list[str]

class SetsOutput(BaseModel):
    sets: list[TravelSet]
    recommended_set_id: str

# --- Prompt Loading ---

PROMPTS_DIR = Path(__file__).parent / "prompts"

def load_prompt(name: str) -> str:
    return (PROMPTS_DIR / name).read_text(encoding="utf-8")

# --- Agents ---

# Step 1: Select products (structured output)
product_selector = Agent(
    name="ProductSelector",
    instructions=load_prompt("product_selector.md"),
    model=get_model(),
    model_settings=ModelSettings(max_tokens=64000),
    output_type=AgentOutputSchema(ProductsOutput, strict_json_schema=True),
)

# Step 2: Optimize sets (structured output)
set_optimizer = Agent(
    name="SetOptimizer",
    instructions=load_prompt("set_optimizer.md"),
    model=get_model(),
    model_settings=ModelSettings(
        max_tokens=16000,
        reasoning=Reasoning(effort="low"),
    ),
    output_type=AgentOutputSchema(SetsOutput, strict_json_schema=True),
)

# Step 3: Generate plan (streaming, no schema)
plan_generator = Agent(
    name="PlanGenerator",
    instructions=load_prompt("plan_generator.md"),
    model=get_model(),
    model_settings=ModelSettings(
        max_tokens=32000,
        reasoning=Reasoning(effort="low"),
    ),
    # No output_type = free text for streaming
)

# --- Runner Functions ---

async def select_products(user_prompt: str, context: str) -> list[ProductLite]:
    """Step 1: Select products."""
    result = await Runner.run(
        product_selector,
        input=f"User: {user_prompt}\n\nProducts:\n{context}",
        run_config=RunConfig(
            workflow_name="ProductSelector",
            trace_metadata={"step": "select"},
        ),
    )
    output: ProductsOutput = result.final_output
    return output.products

async def optimize_sets(products: list[dict]) -> tuple[list[TravelSet], str]:
    """Step 2: Create optimized sets."""
    result = await Runner.run(
        set_optimizer,
        input=f"Products:\n{products}",
        run_config=RunConfig(workflow_name="SetOptimizer"),
    )
    output: SetsOutput = result.final_output
    return output.sets, output.recommended_set_id

async def generate_plan_stream(products: list[dict]) -> AsyncIterator[str]:
    """Step 3: Generate plan with streaming."""
    result = Runner.run_streamed(
        plan_generator,
        input=f"Create travel plan for:\n{products}",
        run_config=RunConfig(workflow_name="PlanGenerator"),
    )
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            if isinstance(event.data, ResponseTextDeltaEvent):
                yield event.data.delta

# --- Full Workflow ---

async def travel_workflow(user_prompt: str, products_context: str):
    # Step 1
    products = await select_products(user_prompt, products_context)
    print(f"Selected {len(products)} products")

    # Step 2
    sets, recommended = await optimize_sets([p.model_dump() for p in products])
    print(f"Created {len(sets)} sets, recommended: {recommended}")

    # Step 3 - stream the plan for the recommended set
    by_id = {p.product_id: p for p in products}
    chosen = next(s for s in sets if s.set_id == recommended)
    async for chunk in generate_plan_stream(
        [by_id[pid].model_dump() for pid in chosen.product_ids if pid in by_id]
    ):
        print(chunk, end="", flush=True)
```
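
The `load_prompt` helper above expects a `prompts/` directory next to the module, with one Markdown file per agent (the filenames come from the three `load_prompt` calls):

```
prompts/
  product_selector.md
  set_optimizer.md
  plan_generator.md
```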

## LLM as a Judge

Iterative improvement with evaluator agent:

```python
from dataclasses import dataclass
from typing import Literal
from agents import Agent, Runner, TResponseInputItem, trace

@dataclass
class Evaluation:
    score: Literal["pass", "needs_improvement", "fail"]
    feedback: str

generator = Agent(
    name="Generator",
    instructions="Generate content based on feedback.",
)

evaluator = Agent(
    name="Evaluator",
    instructions="Evaluate and provide feedback.",
    output_type=Evaluation,
)

async def generate_with_feedback(prompt: str) -> str:
    inputs: list[TResponseInputItem] = [{"role": "user", "content": prompt}]

    with trace("LLM as a judge"):
        while True:
            gen_result = await Runner.run(generator, inputs)
            inputs = gen_result.to_input_list()

            eval_result = await Runner.run(evaluator, inputs)
            evaluation: Evaluation = eval_result.final_output

            if evaluation.score == "pass":
                return gen_result.final_output

            inputs.append({"role": "user", "content": f"Feedback: {evaluation.feedback}"})
```
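
The `while True` loop above runs until the evaluator returns `"pass"`; if it never does, the loop never terminates and keeps spending tokens. A minimal sketch of the same loop with an iteration cap, using plain callables as stand-ins for the two `Runner.run` calls (the stub parameters are illustrative, not part of the SDK):

```python
from dataclasses import dataclass
from typing import Callable, Literal

@dataclass
class Evaluation:
    score: Literal["pass", "needs_improvement", "fail"]
    feedback: str

def generate_with_feedback_capped(
    prompt: str,
    generate: Callable[[str], str],         # stand-in for Runner.run(generator, ...)
    evaluate: Callable[[str], Evaluation],  # stand-in for Runner.run(evaluator, ...)
    max_rounds: int = 3,
) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds - 1):
        evaluation = evaluate(draft)
        if evaluation.score == "pass":
            break
        # Feed the evaluator's feedback into the next generation round
        draft = generate(f"{prompt}\nFeedback: {evaluation.feedback}")
    # After max_rounds generations, return the best effort regardless of score
    return draft
```

The same cap can be added to the SDK version by replacing `while True` with a bounded `for` loop and returning the last draft when the budget runs out.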

## Tracing

Group related agent runs together:

```python
from agents import Agent, Runner, trace, RunConfig

async def workflow(user_input: str):
    with trace("MyWorkflow"):
        # All Runner.run() calls inside this block
        # appear in the same trace
        result1 = await Runner.run(agent1, user_input)
        result2 = await Runner.run(agent2, result1.to_input_list())

    return result2.final_output

# RunConfig for metadata
result = await Runner.run(
    agent,
    input=message,
    run_config=RunConfig(
        workflow_name="ProductSelector",
        trace_metadata={"agent": "selector", "locale": "fi"},
    ),
)
```

## Parallelization

Run multiple agents concurrently:

```python
import asyncio
from agents import Agent, Runner

agent1 = Agent(name="Researcher", instructions="Research topics.")
agent2 = Agent(name="Analyzer", instructions="Analyze data.")
agent3 = Agent(name="Writer", instructions="Write content.")

async def parallel_workflow(topic: str):
    # Run research and analysis in parallel
    research_task = Runner.run(agent1, f"Research: {topic}")
    analysis_task = Runner.run(agent2, f"Analyze: {topic}")

    research_result, analysis_result = await asyncio.gather(
        research_task, analysis_task
    )

    # Combine results for writer
    combined_input = (
        f"Research: {research_result.final_output}\n"
        f"Analysis: {analysis_result.final_output}"
    )

    writer_result = await Runner.run(agent3, combined_input)
    return writer_result.final_output
```
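
One caveat with plain `asyncio.gather`: if one branch raises, `gather` re-raises that exception and the sibling result is lost. A sketch of the same fan-out with `return_exceptions=True`, which delivers failures as values so the surviving results can still be used — stub coroutines stand in for `Runner.run` here (the names are illustrative):

```python
import asyncio

async def research(topic: str) -> str:
    return f"research on {topic}"

async def analyze(topic: str) -> str:
    # Simulate one branch failing
    raise RuntimeError("analysis backend down")

async def parallel_with_errors(topic: str) -> list[str]:
    # return_exceptions=True returns exceptions as results
    # instead of raising the first one and discarding the rest
    results = await asyncio.gather(
        research(topic), analyze(topic), return_exceptions=True
    )
    return [r for r in results if not isinstance(r, BaseException)]
```

Calling `asyncio.run(parallel_with_errors("pricing"))` yields only the successful branch, `["research on pricing"]`, while the `RuntimeError` is filtered out (in a real workflow you would log it instead of silently dropping it).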

## Routing

Route to specialized agents based on input:

```python
from agents import Agent, Runner, function_tool
from typing import Literal

@function_tool
def classify_intent(query: str) -> Literal["billing", "technical", "sales"]:
    """Classify user intent."""
    # In a real app, this could call another LLM or a trained classifier
    q = query.lower()  # match case-insensitively
    if "invoice" in q or "payment" in q:
        return "billing"
    elif "error" in q or "bug" in q:
        return "technical"
    return "sales"

router = Agent(
    name="Router",
    instructions=(
        "Classify the user's intent with the classify_intent tool "
        "and respond with only the category name."
    ),
    tools=[classify_intent],
)

agents = {
    "billing": Agent(name="Billing", instructions="Handle billing."),
    "technical": Agent(name="Technical", instructions="Handle tech support."),
    "sales": Agent(name="Sales", instructions="Handle sales."),
}

async def route_and_handle(query: str):
    # First, classify
    router_result = await Runner.run(router, query)
    intent = router_result.final_output.strip().lower()  # "billing", "technical", or "sales"

    # Route to specialist
    specialist = agents[intent]
    result = await Runner.run(specialist, query)
    return result.final_output
```
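
The lookup `agents[intent]` raises `KeyError` if the model's reply is anything other than one of the three expected labels (extra whitespace, different casing, a full sentence). A small sketch of normalizing the router output before dispatch, with a default category as a fallback — the normalization rules are an assumption for illustration, not SDK behavior:

```python
def normalize_intent(raw: str, default: str = "sales") -> str:
    """Map free-form router output onto a known category."""
    text = raw.strip().lower()
    # Accept the label anywhere in the reply, not only an exact match
    for category in ("billing", "technical", "sales"):
        if category in text:
            return category
    return default

# normalize_intent("  Billing ") -> "billing"
# normalize_intent("I think this is a technical issue") -> "technical"
# normalize_intent("no idea") -> "sales"
```

With this in place, `agents[normalize_intent(router_result.final_output)]` always resolves to a specialist.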

## Deterministic Workflows

Force specific tool execution order:

```python
from agents import Agent, ModelSettings

# Phase 1: Must search
search_agent = Agent(
    name="Searcher",
    instructions="Search for information.",
    tools=[search_tool],
    model_settings=ModelSettings(tool_choice="required"),
)

# Phase 2: Must analyze
analyzer = Agent(
    name="Analyzer",
    instructions="Analyze the search results.",
    tools=[analyze_tool],
    model_settings=ModelSettings(tool_choice="required"),
)

# Phase 3: Free response
writer = Agent(
    name="Writer",
    instructions="Write based on analysis.",
    # No tool_choice = free text response
)

async def deterministic_workflow(query: str):
    # Guaranteed order: search -> analyze -> write
    search_result = await Runner.run(search_agent, query)
    analysis_result = await Runner.run(analyzer, search_result.to_input_list())
    final_result = await Runner.run(writer, analysis_result.to_input_list())
    return final_result.final_output
```
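
The ordering guarantee above comes purely from awaiting each run before starting the next — no scheduler or graph is involved. The same pattern sketched with plain async functions standing in for the three agents (stub names are illustrative):

```python
import asyncio
from typing import Awaitable, Callable

async def search(q: str) -> str:
    return f"results({q})"

async def analyze(s: str) -> str:
    return f"analysis({s})"

async def write(a: str) -> str:
    return f"report({a})"

async def pipeline(query: str) -> str:
    # Each await completes before the next phase starts,
    # so the order search -> analyze -> write is guaranteed
    steps: list[Callable[[str], Awaitable[str]]] = [search, analyze, write]
    value = query
    for step in steps:
        value = await step(value)
    return value
```

`asyncio.run(pipeline("x"))` produces `"report(analysis(results(x)))"` — each phase's output becomes the next phase's input, mirroring the `to_input_list()` chaining in the agent version.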
