openai-agents
Build AI applications with OpenAI Agents SDK - text agents, voice agents, multi-agent handoffs, tools with Zod schemas, guardrails, and streaming. Prevents 11 documented errors. Use when: building agents with tools, voice agents with WebRTC, multi-agent workflows, or troubleshooting MaxTurnsExceededError, tool call failures, reasoning defaults, JSON output leaks.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install jezweb-claude-skills-openai-agents
Repository
Skill path: skills/openai-agents
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Data / AI.
Target audience: Developers building AI applications with OpenAI's Agents SDK, particularly those needing multi-agent workflows or voice interfaces.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: jezweb.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install openai-agents into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/jezweb/claude-skills before adding openai-agents to shared team environments
- Use openai-agents for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: openai-agents
description: |
Build AI applications with OpenAI Agents SDK - text agents, voice agents, multi-agent handoffs, tools with Zod schemas, guardrails, and streaming. Prevents 11 documented errors.
Use when: building agents with tools, voice agents with WebRTC, multi-agent workflows, or troubleshooting MaxTurnsExceededError, tool call failures, reasoning defaults, JSON output leaks.
user-invocable: true
---
# OpenAI Agents SDK
Build AI applications with text agents, voice agents (realtime), multi-agent workflows, tools, guardrails, and human-in-the-loop patterns.
---
## Quick Start
```bash
npm install @openai/agents zod@4 # v0.4.0+ requires Zod 4 (breaking change)
npm install @openai/agents-realtime # Voice agents
export OPENAI_API_KEY="your-key"
```
**Breaking Change (v0.4.0)**: Zod 3 no longer supported. Upgrade to `zod@4`.
**Runtimes**: Node.js 22+, Deno, Bun, Cloudflare Workers (experimental)
---
## Core Concepts
**Agents**: LLMs with instructions + tools
```typescript
import { Agent } from '@openai/agents';
const agent = new Agent({ name: 'Assistant', tools: [myTool], model: 'gpt-5-mini' });
```
**Tools**: Functions with Zod schemas
```typescript
import { tool } from '@openai/agents';
import { z } from 'zod';
const weatherTool = tool({
name: 'get_weather',
parameters: z.object({ city: z.string() }),
execute: async ({ city }) => `Weather in ${city}: sunny`,
});
```
**Handoffs**: Multi-agent delegation
```typescript
const triageAgent = Agent.create({ handoffs: [specialist1, specialist2] });
```
**Guardrails**: Input/output validation
```typescript
const agent = new Agent({ inputGuardrails: [detector], outputGuardrails: [filter] });
```
**Structured Outputs**: Type-safe responses
```typescript
const agent = new Agent({ outputType: z.object({ sentiment: z.enum(['positive', 'negative']) }) });
```
---
## Text Agents
**Basic**: `const result = await run(agent, 'What is 2+2?')`
**Streaming**:
```typescript
const stream = await run(agent, 'Tell me a story', { stream: true });
for await (const event of stream) {
if (event.type === 'raw_model_stream_event') process.stdout.write(event.data?.choices?.[0]?.delta?.content || '');
}
```
---
## Multi-Agent Handoffs
```typescript
const billingAgent = new Agent({ name: 'Billing', handoffDescription: 'For billing questions', tools: [refundTool] });
const techAgent = new Agent({ name: 'Technical', handoffDescription: 'For tech issues', tools: [ticketTool] });
const triageAgent = Agent.create({ name: 'Triage', handoffs: [billingAgent, techAgent] });
```
**Agent-as-Tool Context Isolation**: When using `agent.asTool()`, sub-agents do NOT share parent conversation history (intentional design to simplify debugging).
**Workaround**: Pass context via tool parameters:
```typescript
const helperTool = tool({
name: 'use_helper',
parameters: z.object({
query: z.string(),
context: z.string().optional(),
}),
execute: async ({ query, context }) => {
return await run(subAgent, `${context}\n\n${query}`);
},
});
```
**Source**: [Issue #806](https://github.com/openai/openai-agents-js/issues/806)
---
## Guardrails
**Input**: Validate before processing
```typescript
const guardrail: InputGuardrail = {
execute: async ({ input }) => ({ tripwireTriggered: detectHomework(input) })
};
const agent = new Agent({ inputGuardrails: [guardrail] });
```
**Output**: Filter responses (PII detection, content safety)
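A minimal sketch of an output guardrail for PII filtering, following the same `tripwireTriggered` result shape as the input example above. `containsEmail` and `piiGuardrail` are hypothetical names, and the regex is illustrative only, not production-grade PII detection; verify the field names against the SDK's `OutputGuardrail` type.

```typescript
// Hypothetical PII check: flags anything that looks like an email address.
function containsEmail(text: string): boolean {
  return /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/.test(text);
}

// Output guardrail sketch using the { tripwireTriggered } result shape shown
// in the input-guardrail example. The execute argument shape is an assumption.
const piiGuardrail = {
  name: 'pii_filter',
  execute: async ({ agentOutput }: { agentOutput: string }) => ({
    tripwireTriggered: containsEmail(agentOutput),
  }),
};
```

Attach it via `outputGuardrails: [piiGuardrail]`, as in the Core Concepts example.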
---
## Human-in-the-Loop
```typescript
const refundTool = tool({ name: 'process_refund', requiresApproval: true, execute: async ({ amount }) => `Refunded $${amount}` });
let result = await runner.run(input);
while (result.interruption?.type === 'tool_approval') {
  const approved = await promptUser(result.interruption);
  result = approved
    ? await result.state.approve(result.interruption)
    : await result.state.reject(result.interruption);
}
```
**Streaming HITL**: When using `stream: true` with `requiresApproval`, you must explicitly check for interruptions after the stream completes:
```typescript
const stream = await run(agent, input, { stream: true });
let result = await stream.finalResult();
while (result.interruption?.type === 'tool_approval') {
const approved = await promptUser(result.interruption);
result = approved
? await result.state.approve(result.interruption)
: await result.state.reject(result.interruption);
}
```
**Example**: [human-in-the-loop-stream.ts](https://github.com/openai/openai-agents-js/blob/main/examples/agent-patterns/human-in-the-loop-stream.ts)
---
## Realtime Voice Agents
**Create**:
```typescript
import { RealtimeAgent } from '@openai/agents-realtime';
const voiceAgent = new RealtimeAgent({
voice: 'alloy', // alloy, echo, fable, onyx, nova, shimmer
model: 'gpt-5-realtime',
tools: [weatherTool],
});
```
**Browser Session**:
```typescript
import { RealtimeSession } from '@openai/agents-realtime';
const session = new RealtimeSession(voiceAgent, { apiKey: sessionApiKey, transport: 'webrtc' });
await session.connect();
```
**CRITICAL**: Never send OPENAI_API_KEY to browser! Generate ephemeral session tokens server-side.
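A hedged server-side sketch of minting that ephemeral token. The `/v1/realtime/sessions` endpoint, payload fields, and `client_secret` response shape are assumptions about the Realtime API; `buildSessionRequest` and `mintEphemeralKey` are hypothetical helpers — check the current OpenAI documentation before relying on them.

```typescript
// Hypothetical helper: payload for an ephemeral-session request.
export function buildSessionRequest(model: string, voice: string) {
  return { model, voice };
}

// Server-side only — uses the real OPENAI_API_KEY and returns a short-lived
// client token the browser can safely hold. Endpoint and response shape are
// assumptions; verify against current OpenAI Realtime docs.
export async function mintEphemeralKey(): Promise<string> {
  const res = await fetch('https://api.openai.com/v1/realtime/sessions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(buildSessionRequest('gpt-5-realtime', 'alloy')),
  });
  if (!res.ok) throw new Error(`Session mint failed: ${res.status}`);
  const data = await res.json();
  return data.client_secret.value;
}
```

The browser then fetches this token from your backend and passes it to `RealtimeSession` as `apiKey`.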
**Voice Handoffs**: Voice/model must match across agents (cannot change during handoff)
**Limitations**:
- **Video streaming NOT supported**: Despite camera examples, realtime video streaming is not natively supported. Model may not proactively speak based on video events. ([Issue #694](https://github.com/openai/openai-agents-js/issues/694))
**Templates**:
- `templates/realtime-agents/realtime-agent-basic.ts`
- `templates/realtime-agents/realtime-session-browser.tsx`
- `templates/realtime-agents/realtime-handoffs.ts`
**References**:
- `references/realtime-transports.md` - WebRTC vs WebSocket
---
## Framework Integration
**Cloudflare Workers** (experimental):
```typescript
export default {
async fetch(request: Request, env: Env) {
// Disable tracing or use startTracingExportLoop()
process.env.OTEL_SDK_DISABLED = 'true';
process.env.OPENAI_API_KEY = env.OPENAI_API_KEY;
const agent = new Agent({ name: 'Assistant', model: 'gpt-5-mini' });
const result = await run(agent, (await request.json()).message);
return Response.json({ response: result.finalOutput, tokens: result.usage.totalTokens });
}
};
```
**Limitations**:
- No voice agents
- 30s CPU limit, 128MB memory
- **Tracing requires manual setup** - set `OTEL_SDK_DISABLED=true` or call `startTracingExportLoop()` ([Issue #16](https://github.com/openai/openai-agents-js/issues/16))
**Next.js**: `app/api/agent/route.ts` → `POST` handler with `run(agent, message)`
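A hedged sketch of that route handler. `extractMessage` and `handleAgentPost` are hypothetical helpers; the SDK call is injected as a parameter so the handler itself stays SDK-free — in the real route you would pass something like `(msg) => run(agent, msg).then((r) => String(r.finalOutput))`.

```typescript
// Hypothetical helper: validate the POST body shape.
export function extractMessage(body: unknown): string {
  if (
    body &&
    typeof body === 'object' &&
    typeof (body as { message?: unknown }).message === 'string'
  ) {
    return (body as { message: string }).message;
  }
  throw new Error('Expected JSON body with a string "message" field');
}

// Sketch of the POST handler for app/api/agent/route.ts. runAgent stands in
// for the SDK's run(agent, message) call.
export async function handleAgentPost(
  request: Request,
  runAgent: (message: string) => Promise<string>,
): Promise<Response> {
  const message = extractMessage(await request.json());
  const output = await runAgent(message);
  return Response.json({ response: output });
}
```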
**Templates**: `cloudflare-workers/`, `nextjs/`
---
## Error Handling (11+ Errors Prevented)
### 1. Zod Schema Type Errors
**Error**: Type errors with tool parameters.
**Workaround**: Define schemas inline.
```typescript
// ❌ Can cause type errors
parameters: mySchema
// ✅ Works reliably
parameters: z.object({ field: z.string() })
```
**Note**: As of v0.4.1, invalid JSON in tool call arguments is handled gracefully (previously caused SyntaxError crashes). ([PR #887](https://github.com/openai/openai-agents-js/pull/887))
**Source**: [GitHub #188](https://github.com/openai/openai-agents-js/issues/188)
### 2. MCP Tracing Errors
**Error**: "No existing trace found" with MCP servers.
**Workaround**:
```typescript
import { initializeTracing } from '@openai/agents/tracing';
await initializeTracing();
```
**Source**: [GitHub #580](https://github.com/openai/openai-agents-js/issues/580)
### 3. MaxTurnsExceededError
**Error**: Agent loops infinitely.
**Solution**: Increase maxTurns or improve instructions:
```typescript
const result = await run(agent, input, {
maxTurns: 20, // Increase limit
});
// Or improve instructions
instructions: `After using tools, provide a final answer.
Do not loop endlessly.`
```
### 4. ToolCallError
**Error**: Tool execution fails.
**Solution**: Retry with exponential backoff:
```typescript
for (let attempt = 1; attempt <= 3; attempt++) {
try {
return await run(agent, input);
} catch (error) {
if (error instanceof ToolCallError && attempt < 3) {
await sleep(1000 * Math.pow(2, attempt - 1));
continue;
}
throw error;
}
}
```
### 5. Schema Mismatch
**Error**: Output doesn't match `outputType`.
**Solution**: Use stronger model or add validation instructions:
```typescript
const agent = new Agent({
model: 'gpt-5', // More reliable than gpt-5-mini
instructions: 'CRITICAL: Return JSON matching schema exactly',
outputType: mySchema,
});
```
### 6. Reasoning Effort Defaults Changed (v0.4.0)
**Error**: Unexpected reasoning behavior after upgrading to v0.4.0.
**Why It Happens**: Default reasoning effort for gpt-5.1/5.2 changed from `"low"` to `"none"` in v0.4.0.
**Prevention**: Explicitly set reasoning effort if you need it.
```typescript
// v0.4.0+ - default is now "none"
const agent = new Agent({
model: 'gpt-5.1',
reasoning: { effort: 'low' }, // Explicitly set if needed: 'low', 'medium', 'high'
});
```
**Source**: [Release v0.4.0](https://github.com/openai/openai-agents-js/releases/tag/v0.4.0) | [PR #876](https://github.com/openai/openai-agents-js/pull/876)
### 7. Reasoning Content Leaks into JSON Output
**Error**: `response_reasoning` field appears in structured output unexpectedly.
**Why It Happens**: Model endpoint issue (not SDK bug) when using `outputType` with reasoning models.
**Workaround**: Filter out `response_reasoning` from output.
```typescript
const result = await run(agent, input);
const { response_reasoning, ...cleanOutput } = result.finalOutput;
return cleanOutput;
```
**Source**: [Issue #844](https://github.com/openai/openai-agents-js/issues/844)
**Status**: Model-side issue; a fix is being coordinated with OpenAI teams
**All Errors**: See `references/common-errors.md`
**Template**: `templates/shared/error-handling.ts`
---
## Orchestration Patterns
**LLM-Based**: Agent decides routing autonomously (adaptive, higher tokens)
**Code-Based**: Explicit control flow with conditionals (predictable, lower cost)
**Parallel**: `Promise.all([run(agent1, text), run(agent2, text)])` (concurrent execution)
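The code-based pattern above can be sketched as a deterministic router. `routeRequest` is a hypothetical helper and the keyword lists are illustrative; the point is that routing costs zero extra tokens and is fully predictable.

```typescript
type Route = 'billing' | 'technical' | 'general';

// Explicit, predictable routing — no extra LLM call, so no extra tokens.
export function routeRequest(input: string): Route {
  const text = input.toLowerCase();
  if (/\b(bill|billing|invoice|refund|payment)\b/.test(text)) return 'billing';
  if (/\b(error|bug|crash|api|login)\b/.test(text)) return 'technical';
  return 'general';
}

// Dispatch to the matching agent (agents defined as in the handoffs section):
// const agents = { billing: billingAgent, technical: techAgent, general: triageAgent };
// const result = await run(agents[routeRequest(userInput)], userInput);
```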
---
## Debugging
```typescript
process.env.DEBUG = '@openai/agents:*'; // Verbose logging
const result = await run(agent, input);
console.log(result.usage.totalTokens, result.history.length, result.currentAgent?.name);
```
❌ **Don't use when**:
- Simple OpenAI API calls (use `openai-api` skill instead)
- Non-OpenAI models exclusively
- Production voice at massive scale (consider LiveKit Agents)
---
## Production Checklist
- [ ] Set `OPENAI_API_KEY` as environment secret
- [ ] Implement error handling for all agent calls
- [ ] Add guardrails for safety-critical applications
- [ ] Enable tracing for debugging
- [ ] Set reasonable `maxTurns` to prevent runaway costs
- [ ] Use `gpt-5-mini` where possible for cost efficiency
- [ ] Implement rate limiting
- [ ] Log token usage for cost monitoring
- [ ] Test handoff flows thoroughly
- [ ] Never expose API keys to browsers (use session tokens)
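For the token-logging item above, a minimal sketch that accumulates `result.usage.totalTokens` across runs. `TokenMeter` is a hypothetical helper, and the cost math assumes a single blended per-million-token rate.

```typescript
// Hypothetical cost monitor: feed it result.usage.totalTokens after each run.
export class TokenMeter {
  private total = 0;

  record(totalTokens: number): void {
    this.total += totalTokens;
  }

  // Rough estimate; pricePerMillion is your model's blended USD rate.
  costUSD(pricePerMillion: number): number {
    return (this.total / 1_000_000) * pricePerMillion;
  }

  get totalTokens(): number {
    return this.total;
  }
}

// const meter = new TokenMeter();
// const result = await run(agent, input);
// meter.record(result.usage.totalTokens);
```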
---
## Token Efficiency
**Estimated Savings**: ~60%
| Task | Without Skill | With Skill | Savings |
|------|---------------|------------|---------|
| Multi-agent setup | ~12k tokens | ~5k tokens | 58% |
| Voice agent | ~10k tokens | ~4k tokens | 60% |
| Error debugging | ~8k tokens | ~3k tokens | 63% |
| **Average** | **~10k** | **~4k** | **~60%** |
**Errors Prevented**: 11 documented issues, each with a documented workaround
---
## Templates Index
**Text Agents** (8):
1. `agent-basic.ts` - Simple agent with tools
2. `agent-handoffs.ts` - Multi-agent triage
3. `agent-structured-output.ts` - Zod schemas
4. `agent-streaming.ts` - Real-time events
5. `agent-guardrails-input.ts` - Input validation
6. `agent-guardrails-output.ts` - Output filtering
7. `agent-human-approval.ts` - HITL pattern
8. `agent-parallel.ts` - Concurrent execution
**Realtime Agents** (3):
9. `realtime-agent-basic.ts` - Voice setup
10. `realtime-session-browser.tsx` - React client
11. `realtime-handoffs.ts` - Voice delegation
**Framework Integration** (4):
12. `worker-text-agent.ts` - Cloudflare Workers
13. `worker-agent-hono.ts` - Hono framework
14. `api-agent-route.ts` - Next.js API
15. `api-realtime-route.ts` - Next.js voice
**Utilities** (2):
16. `error-handling.ts` - Comprehensive errors
17. `tracing-setup.ts` - Debugging
---
## References
1. `agent-patterns.md` - Orchestration strategies
2. `common-errors.md` - 9 errors with workarounds
3. `realtime-transports.md` - WebRTC vs WebSocket
4. `cloudflare-integration.md` - Workers limitations
5. `official-links.md` - Documentation links
---
## Official Resources
- **Docs**: https://openai.github.io/openai-agents-js/
- **GitHub**: https://github.com/openai/openai-agents-js
- **npm**: https://www.npmjs.com/package/@openai/agents
- **Issues**: https://github.com/openai/openai-agents-js/issues
---
**Version**: SDK v0.4.1
**Last Verified**: 2026-01-21
**Skill Author**: Jeremy Dawes (Jezweb)
**Production Tested**: Yes
**Changes**: Added v0.4.0 breaking changes (Zod 4, reasoning defaults), invalid JSON handling (v0.4.1), reasoning output leaks, streaming HITL pattern, agent-as-tool context isolation, video limitations, Cloudflare tracing setup
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### templates/realtime-agents/realtime-agent-basic.ts
```typescript
/**
* Basic Realtime Voice Agent
*
* Demonstrates:
* - Creating a realtime voice agent
* - Defining tools for voice agents
* - Configuring voice and instructions
* - Understanding WebRTC vs WebSocket transports
*
* NOTE: This runs in the browser or in a Node.js environment with WebRTC support
*/
import { z } from 'zod';
import { RealtimeAgent, tool } from '@openai/agents-realtime';
// ========================================
// Tools for Voice Agent
// ========================================
// Note: Tools for realtime agents execute in the client environment
// For sensitive operations, make HTTP requests to your backend
const checkWeatherTool = tool({
name: 'check_weather',
description: 'Check current weather for a city',
parameters: z.object({
city: z.string().describe('City name'),
units: z.enum(['celsius', 'fahrenheit']).optional().default('celsius'),
}),
execute: async ({ city, units }) => {
// In production, call a real weather API
const temp = Math.floor(Math.random() * 30) + 10;
return `The weather in ${city} is sunny and ${temp}°${units === 'celsius' ? 'C' : 'F'}`;
},
});
const setReminderTool = tool({
name: 'set_reminder',
description: 'Set a reminder for the user',
parameters: z.object({
message: z.string(),
timeMinutes: z.number().describe('Minutes from now'),
}),
execute: async ({ message, timeMinutes }) => {
// In production, save to database via API call
console.log(`Reminder set: "${message}" in ${timeMinutes} minutes`);
return `I'll remind you about "${message}" in ${timeMinutes} minutes`;
},
});
const searchDocsTool = tool({
name: 'search_docs',
description: 'Search documentation',
parameters: z.object({
query: z.string(),
}),
execute: async ({ query }) => {
// In production, call your search API
return `Found documentation about: ${query}`;
},
});
// ========================================
// Create Realtime Voice Agent
// ========================================
const voiceAssistant = new RealtimeAgent({
name: 'Voice Assistant',
// Instructions for the agent's behavior
instructions: `You are a friendly and helpful voice assistant.
- Keep responses concise and conversational
- Use natural speech patterns
- When using tools, explain what you're doing
- Be proactive in offering help`,
// Tools available to the agent
tools: [checkWeatherTool, setReminderTool, searchDocsTool],
// Voice configuration (OpenAI voice options)
voice: 'alloy', // Options: alloy, echo, fable, onyx, nova, shimmer
// Model (realtime API uses specific models)
model: 'gpt-5-realtime', // Default for realtime
// Turn detection (when to consider user done speaking)
turnDetection: {
type: 'server_vad', // Voice Activity Detection on server
threshold: 0.5, // Sensitivity (0-1)
prefix_padding_ms: 300, // Audio before speech starts
silence_duration_ms: 500, // Silence to end turn
},
// Additional configuration
temperature: 0.7, // Response creativity (0-1)
maxOutputTokens: 4096, // Maximum response length
});
// ========================================
// Example: Create Session (Node.js)
// ========================================
/**
* For Node.js environments, you need to manually manage the session.
* See realtime-session-browser.tsx for browser usage.
*/
async function createNodeSession() {
// Note: WebRTC transport requires browser environment
// For Node.js, use WebSocket transport
const { OpenAIRealtimeWebSocket } = await import('@openai/agents-realtime');
const transport = new OpenAIRealtimeWebSocket({
apiKey: process.env.OPENAI_API_KEY,
});
// Create session
const session = await voiceAssistant.createSession({
transport,
});
// Handle events
session.on('connected', () => {
console.log('✅ Voice session connected');
});
session.on('disconnected', () => {
console.log('🔌 Voice session disconnected');
});
session.on('error', (error) => {
console.error('❌ Session error:', error);
});
// Audio transcription events
session.on('audio.transcription.completed', (event) => {
console.log('User said:', event.transcript);
});
session.on('agent.audio.done', (event) => {
console.log('Agent said:', event.transcript);
});
// Tool call events
session.on('tool.call', (event) => {
console.log('Tool called:', event.name, event.arguments);
});
session.on('tool.result', (event) => {
console.log('Tool result:', event.result);
});
// Connect to start session
await session.connect();
// To disconnect later
// await session.disconnect();
return session;
}
// ========================================
// Transport Options
// ========================================
/**
* WebRTC Transport (recommended for browser)
* - Lower latency
* - Better for real-time voice
* - Requires browser environment
*
* WebSocket Transport
* - Works in Node.js
* - Slightly higher latency
* - Simpler setup
*/
// Uncomment to run in Node.js
// createNodeSession().catch(console.error);
export {
voiceAssistant,
checkWeatherTool,
setReminderTool,
searchDocsTool,
createNodeSession,
};
```
### templates/realtime-agents/realtime-session-browser.tsx
```tsx
/**
* Realtime Voice Session - React Browser Client
*
* Demonstrates:
* - Creating a voice session in the browser
* - Using WebRTC transport for low latency
* - Handling audio I/O automatically
* - Managing session lifecycle
* - Displaying transcripts and tool calls
*
* IMPORTANT: Generate ephemeral API keys server-side, never expose your main API key
*/
import React, { useState, useEffect, useRef } from 'react';
import { RealtimeSession, RealtimeAgent } from '@openai/agents-realtime';
import { z } from 'zod';
// ========================================
// Voice Agent Definition
// ========================================
import { tool } from '@openai/agents-realtime';
const weatherTool = tool({
name: 'get_weather',
description: 'Get weather for a city',
parameters: z.object({
city: z.string(),
}),
execute: async ({ city }) => {
// Call your backend API
const response = await fetch(`/api/weather?city=${city}`);
const data = await response.json();
return data.weather;
},
});
const voiceAgent = new RealtimeAgent({
name: 'Voice Assistant',
instructions: 'You are a helpful voice assistant. Keep responses concise and friendly.',
tools: [weatherTool],
voice: 'alloy',
});
// ========================================
// React Component
// ========================================
interface Message {
role: 'user' | 'assistant';
content: string;
timestamp: Date;
}
interface ToolCall {
name: string;
arguments: Record<string, any>;
result?: any;
}
export function VoiceAssistant() {
const [isConnected, setIsConnected] = useState(false);
const [isListening, setIsListening] = useState(false);
const [messages, setMessages] = useState<Message[]>([]);
const [toolCalls, setToolCalls] = useState<ToolCall[]>([]);
const [error, setError] = useState<string | null>(null);
const sessionRef = useRef<RealtimeSession | null>(null);
// ========================================
// Initialize Session
// ========================================
useEffect(() => {
let session: RealtimeSession;
async function initSession() {
try {
// Get ephemeral API key from your backend
const response = await fetch('/api/generate-session-key');
const { apiKey } = await response.json();
// Create session with WebRTC transport (low latency)
session = new RealtimeSession(voiceAgent, {
apiKey,
transport: 'webrtc', // or 'websocket'
});
sessionRef.current = session;
// ========================================
// Session Event Handlers
// ========================================
session.on('connected', () => {
console.log('✅ Connected to voice session');
setIsConnected(true);
setError(null);
});
session.on('disconnected', () => {
console.log('🔌 Disconnected from voice session');
setIsConnected(false);
setIsListening(false);
});
session.on('error', (err) => {
console.error('❌ Session error:', err);
setError(err.message);
});
// ========================================
// Transcription Events
// ========================================
session.on('audio.transcription.completed', (event) => {
// User finished speaking
setMessages(prev => [...prev, {
role: 'user',
content: event.transcript,
timestamp: new Date(),
}]);
setIsListening(false);
});
session.on('audio.transcription.started', () => {
// User started speaking
setIsListening(true);
});
session.on('agent.audio.done', (event) => {
// Agent finished speaking
setMessages(prev => [...prev, {
role: 'assistant',
content: event.transcript,
timestamp: new Date(),
}]);
});
// ========================================
// Tool Call Events
// ========================================
session.on('tool.call', (event) => {
console.log('🛠️ Tool call:', event.name, event.arguments);
setToolCalls(prev => [...prev, {
name: event.name,
arguments: event.arguments,
}]);
});
session.on('tool.result', (event) => {
console.log('✅ Tool result:', event.result);
setToolCalls(prev => prev.map(tc =>
tc.name === event.name
? { ...tc, result: event.result }
: tc
));
});
// Connect to start session
await session.connect();
} catch (err: any) {
console.error('Failed to initialize session:', err);
setError(err.message);
}
}
initSession();
// Cleanup on unmount
return () => {
if (session) {
session.disconnect();
}
};
}, []);
// ========================================
// Manual Control Functions
// ========================================
const handleInterrupt = () => {
if (sessionRef.current) {
sessionRef.current.interrupt();
}
};
const handleDisconnect = () => {
if (sessionRef.current) {
sessionRef.current.disconnect();
}
};
// ========================================
// Render UI
// ========================================
return (
<div className="voice-assistant">
<div className="status-bar">
<div className={`status ${isConnected ? 'connected' : 'disconnected'}`}>
{isConnected ? '🟢 Connected' : '🔴 Disconnected'}
</div>
{isListening && <div className="listening">🎤 Listening...</div>}
</div>
{error && (
<div className="error">
❌ Error: {error}
</div>
)}
<div className="messages">
{messages.map((msg, i) => (
<div key={i} className={`message ${msg.role}`}>
<div className="role">{msg.role === 'user' ? '👤' : '🤖'}</div>
<div className="content">
<p>{msg.content}</p>
<span className="timestamp">
{msg.timestamp.toLocaleTimeString()}
</span>
</div>
</div>
))}
</div>
{toolCalls.length > 0 && (
<div className="tool-calls">
<h3>🛠️ Tool Calls</h3>
{toolCalls.map((tc, i) => (
<div key={i} className="tool-call">
<strong>{tc.name}</strong>
<pre>{JSON.stringify(tc.arguments, null, 2)}</pre>
{tc.result && (
<div className="result">
Result: {JSON.stringify(tc.result)}
</div>
)}
</div>
))}
</div>
)}
<div className="controls">
<button
onClick={handleInterrupt}
disabled={!isConnected}
>
⏸️ Interrupt
</button>
<button
onClick={handleDisconnect}
disabled={!isConnected}
>
🔌 Disconnect
</button>
</div>
<style jsx>{`
.voice-assistant {
max-width: 600px;
margin: 0 auto;
padding: 20px;
}
.status-bar {
display: flex;
gap: 20px;
margin-bottom: 20px;
}
.status {
padding: 8px 16px;
border-radius: 20px;
font-size: 14px;
}
.status.connected {
background: #d4edda;
color: #155724;
}
.status.disconnected {
background: #f8d7da;
color: #721c24;
}
.listening {
padding: 8px 16px;
background: #fff3cd;
color: #856404;
border-radius: 20px;
font-size: 14px;
}
.error {
padding: 12px;
background: #f8d7da;
color: #721c24;
border-radius: 8px;
margin-bottom: 20px;
}
.messages {
height: 400px;
overflow-y: auto;
border: 1px solid #ddd;
border-radius: 8px;
padding: 16px;
margin-bottom: 20px;
}
.message {
display: flex;
gap: 12px;
margin-bottom: 16px;
}
.message.user {
justify-content: flex-end;
}
.content {
max-width: 70%;
padding: 12px;
border-radius: 12px;
}
.message.user .content {
background: #007bff;
color: white;
}
.message.assistant .content {
background: #f1f3f4;
color: #000;
}
.timestamp {
font-size: 11px;
opacity: 0.6;
}
.tool-calls {
margin-bottom: 20px;
padding: 12px;
background: #f8f9fa;
border-radius: 8px;
}
.tool-call {
margin: 8px 0;
padding: 8px;
background: white;
border-radius: 4px;
}
.controls {
display: flex;
gap: 12px;
}
button {
flex: 1;
padding: 12px;
border: none;
border-radius: 8px;
background: #007bff;
color: white;
cursor: pointer;
}
button:disabled {
background: #ccc;
cursor: not-allowed;
}
button:hover:not(:disabled) {
background: #0056b3;
}
`}</style>
</div>
);
}
export default VoiceAssistant;
```
### templates/realtime-agents/realtime-handoffs.ts
```typescript
/**
* Realtime Agent Handoffs (Voice)
*
* Demonstrates:
* - Multi-agent voice workflows
* - Handoffs between voice agents
* - Automatic conversation history passing
* - Voice/model constraints during handoffs
*
* IMPORTANT: Unlike text agents, realtime agent handoffs have constraints:
* - Cannot change voice during handoff
* - Cannot change model during handoff
* - Conversation history automatically passed
*/
import { z } from 'zod';
import { RealtimeAgent, tool } from '@openai/agents-realtime';
// ========================================
// Specialized Agent Tools
// ========================================
const checkAccountTool = tool({
name: 'check_account',
description: 'Look up account information',
parameters: z.object({
accountId: z.string(),
}),
execute: async ({ accountId }) => {
return `Account ${accountId}: Premium tier, billing current, last login: 2025-10-20`;
},
});
const processPaymentTool = tool({
name: 'process_payment',
description: 'Process a payment',
parameters: z.object({
accountId: z.string(),
amount: z.number(),
}),
execute: async ({ accountId, amount }) => {
return `Payment of $${amount} processed for account ${accountId}`;
},
});
const checkSystemTool = tool({
name: 'check_system',
description: 'Check system status',
parameters: z.object({}),
execute: async () => {
return 'All systems operational: API ✅, Database ✅, CDN ✅';
},
});
const createTicketTool = tool({
name: 'create_ticket',
description: 'Create support ticket',
parameters: z.object({
title: z.string(),
priority: z.enum(['low', 'medium', 'high']),
}),
execute: async ({ title, priority }) => {
const id = `TICKET-${Math.floor(Math.random() * 10000)}`;
return `Created ${priority} priority ticket ${id}: ${title}`;
},
});
// ========================================
// Specialized Voice Agents
// ========================================
const billingAgent = new RealtimeAgent({
name: 'Billing Specialist',
instructions: `You handle billing and payment questions.
- Be professional and empathetic
- Explain charges clearly
- Process payments when requested
- Keep responses concise for voice`,
handoffDescription: 'Transfer for billing, payments, or account questions',
tools: [checkAccountTool, processPaymentTool],
voice: 'nova', // All agents must use same voice as parent
});
const technicalAgent = new RealtimeAgent({
name: 'Technical Support',
instructions: `You handle technical issues and system problems.
- Diagnose issues systematically
- Provide clear troubleshooting steps
- Create tickets for complex issues
- Use simple language for voice`,
handoffDescription: 'Transfer for technical problems, bugs, or system issues',
tools: [checkSystemTool, createTicketTool],
voice: 'nova', // Must match triage agent voice
});
// ========================================
// Triage Agent (Entry Point)
// ========================================
const triageVoiceAgent = new RealtimeAgent({
name: 'Customer Service',
instructions: `You are the first point of contact.
- Greet customers warmly
- Understand their issue
- Route to the right specialist
- Explain the transfer before handing off`,
handoffs: [billingAgent, technicalAgent],
voice: 'nova', // This voice will be used by all agents
model: 'gpt-5-realtime', // This model will be used by all agents
});
// ========================================
// Important Notes about Voice Handoffs
// ========================================
/**
* KEY DIFFERENCES from text agent handoffs:
*
* 1. VOICE CONSTRAINT
* - All agents in a handoff chain must use the same voice
* - Voice is set by the initial agent
* - Cannot change voice during handoff
*
* 2. MODEL CONSTRAINT
* - All agents must use the same model
* - Model is set by the initial agent
* - Cannot change model during handoff
*
* 3. AUTOMATIC HISTORY
* - Conversation history automatically passed to delegated agent
* - No need to manually manage context
* - Specialist agents can see full conversation
*
* 4. SEAMLESS AUDIO
* - Audio stream continues during handoff
* - User doesn't need to reconnect
* - Tools execute in same session
*/
// ========================================
// Example: Create Session with Handoffs
// ========================================
async function createVoiceSessionWithHandoffs() {
const { OpenAIRealtimeWebSocket } = await import('@openai/agents-realtime');
const transport = new OpenAIRealtimeWebSocket({
apiKey: process.env.OPENAI_API_KEY,
});
const session = await triageVoiceAgent.createSession({
transport,
});
// Track which agent is currently active
let currentAgent = 'Customer Service';
session.on('connected', () => {
console.log('✅ Voice session connected');
console.log('🎙️ Current agent:', currentAgent);
});
// Listen for agent changes (handoffs)
session.on('agent.changed', (event: any) => {
currentAgent = event.agent.name;
console.log('\n🔄 HANDOFF to:', currentAgent);
});
session.on('audio.transcription.completed', (event) => {
console.log(`👤 User: ${event.transcript}`);
});
session.on('agent.audio.done', (event) => {
console.log(`🤖 ${currentAgent}: ${event.transcript}`);
});
session.on('tool.call', (event) => {
console.log(`\n🛠️ Tool: ${event.name}`);
console.log(` Arguments:`, event.arguments);
});
session.on('tool.result', (event) => {
console.log(`✅ Result:`, event.result, '\n');
});
await session.connect();
console.log('\n💡 Try saying:');
console.log(' - "I have a question about my bill"');
console.log(' - "The API is returning errors"');
console.log(' - "I need to update my payment method"');
console.log('\n');
return session;
}
// ========================================
// Example: Manual Handoff Triggering
// ========================================
/**
* While handoffs usually happen automatically via LLM routing,
* you can also programmatically trigger them if needed via
* backend delegation patterns (see agent-patterns.md reference).
*/
// Uncomment to run
// createVoiceSessionWithHandoffs().catch(console.error);
export {
triageVoiceAgent,
billingAgent,
technicalAgent,
createVoiceSessionWithHandoffs,
};
```
### references/realtime-transports.md
```markdown
# Realtime Transport Options: WebRTC vs WebSocket
This reference explains the two transport options for realtime voice agents and when to use each.
---
## Overview
OpenAI Agents Realtime SDK supports two transport mechanisms:
1. **WebRTC** (Web Real-Time Communication)
2. **WebSocket** (WebSocket Protocol)
Both enable bidirectional audio streaming, but have different characteristics.
---
## WebRTC Transport
### Characteristics
- **Lower latency**: ~100-200ms typical
- **Better audio quality**: Built-in adaptive bitrate
- **Peer-to-peer optimizations**: Direct media paths when possible
- **Browser-native**: Designed for browser environments
### When to Use
- ✅ Browser-based voice UI
- ✅ Low latency critical (conversational AI)
- ✅ Real-time voice interactions
- ✅ Production voice applications
### Browser Example
```typescript
import { RealtimeSession, RealtimeAgent } from '@openai/agents-realtime';
const voiceAgent = new RealtimeAgent({
name: 'Voice Assistant',
instructions: 'You are helpful.',
voice: 'alloy',
});
const session = new RealtimeSession(voiceAgent, {
apiKey: sessionApiKey, // From your backend
transport: 'webrtc', // ← WebRTC
});
await session.connect();
```
### Pros
- Best latency for voice
- Handles network jitter better
- Automatic echo cancellation
- NAT traversal built-in
### Cons
- Requires browser environment (or WebRTC libraries in Node.js)
- Slightly more complex setup
- STUN/TURN servers may be needed for some networks
---
## WebSocket Transport
### Characteristics
- **Slightly higher latency**: ~300-500ms typical
- **Simpler protocol**: Standard WebSocket connection
- **Works anywhere**: Node.js, browser, serverless
- **Easier debugging**: Text-based protocol
### When to Use
- ✅ Node.js server environments
- ✅ Simpler implementation preferred
- ✅ Testing and development
- ✅ Non-latency-critical use cases
### Node.js Example
```typescript
import { RealtimeAgent } from '@openai/agents-realtime';
import { OpenAIRealtimeWebSocket } from '@openai/agents-realtime';
const voiceAgent = new RealtimeAgent({
name: 'Voice Assistant',
instructions: 'You are helpful.',
voice: 'alloy',
});
const transport = new OpenAIRealtimeWebSocket({
apiKey: process.env.OPENAI_API_KEY,
});
const session = await voiceAgent.createSession({
transport, // ← WebSocket
});
await session.connect();
```
### Browser Example
```typescript
const session = new RealtimeSession(voiceAgent, {
apiKey: sessionApiKey,
transport: 'websocket', // ← WebSocket
});
```
### Pros
- Works in Node.js without extra libraries
- Simpler to debug (Wireshark, browser DevTools)
- More predictable behavior
- Easier proxy/firewall setup
### Cons
- Higher latency than WebRTC
- No built-in jitter buffering
- Manual echo cancellation needed
---
## Comparison Table
| Feature | WebRTC | WebSocket |
|---------|--------|-----------|
| **Latency** | ~100-200ms | ~300-500ms |
| **Audio Quality** | Adaptive bitrate | Fixed bitrate |
| **Browser Support** | Native | Native |
| **Node.js Support** | Requires libraries | Native |
| **Setup Complexity** | Medium | Low |
| **Debugging** | Harder | Easier |
| **Best For** | Production voice UI | Development, Node.js |
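The trade-offs in the table can be encoded as a small selection helper. This is a hypothetical sketch (`chooseTransport` and its options object are not part of the SDK); it simply captures the table's recommendations:

```typescript
// Hypothetical helper encoding the comparison table: prefer WebRTC in
// browsers when latency matters; fall back to WebSocket everywhere else.
type Transport = 'webrtc' | 'websocket';

function chooseTransport(opts: { inBrowser: boolean; latencyCritical: boolean }): Transport {
  // WebRTC requires a browser (or extra Node.js libraries) and wins on latency.
  if (opts.inBrowser && opts.latencyCritical) return 'webrtc';
  // WebSocket is the simpler default for Node.js, testing, and development.
  return 'websocket';
}

console.log(chooseTransport({ inBrowser: true, latencyCritical: true }));  // → "webrtc"
console.log(chooseTransport({ inBrowser: false, latencyCritical: true })); // → "websocket"
```

The returned string can be passed as the `transport` option shown in the examples above.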
---
## Audio I/O Handling
### Automatic (Default)
Both transports handle audio I/O automatically in browser:
```typescript
const session = new RealtimeSession(voiceAgent, {
transport: 'webrtc', // or 'websocket'
});
// Audio automatically captured from microphone
// Audio automatically played through speakers
await session.connect();
```
### Manual (Advanced)
For custom audio sources/sinks:
```typescript
import { OpenAIRealtimeWebRTC } from '@openai/agents-realtime';
// Custom media stream (e.g., from canvas capture)
const customStream = await navigator.mediaDevices.getDisplayMedia();
const transport = new OpenAIRealtimeWebRTC({
mediaStream: customStream,
});
const session = await voiceAgent.createSession({
transport,
});
```
---
## Network Considerations
### WebRTC
- **Firewall**: May require STUN/TURN servers
- **NAT Traversal**: Handles automatically
- **Bandwidth**: Adaptive (300 Kbps typical)
- **Port**: Dynamic (UDP preferred)
### WebSocket
- **Firewall**: Standard HTTPS port (443)
- **NAT Traversal**: Not needed
- **Bandwidth**: ~100 Kbps typical
- **Port**: 443 (wss://) or 80 (ws://)
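As a back-of-envelope check on the bandwidth figures above, a hypothetical helper (assuming a sustained stream at the quoted bitrate) converts them to per-minute data volumes:

```typescript
// Convert a sustained bitrate in kilobits per second to megabytes per minute.
function kbpsToMBPerMinute(kbps: number): number {
  return (kbps * 1000 * 60) / 8 / 1_000_000; // bits/min → bytes/min → MB/min
}

console.log(kbpsToMBPerMinute(300)); // WebRTC typical: → 2.25 MB/min
console.log(kbpsToMBPerMinute(100)); // WebSocket typical: → 0.75 MB/min
```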
---
## Security
### WebRTC
- Encrypted by default (DTLS-SRTP)
- Peer identity verification
- Media plane encryption
### WebSocket
- TLS encryption (wss://)
- Standard HTTPS security model
**Both are secure for production use.**
---
## Debugging Tips
### WebRTC
```javascript
// Enable WebRTC debug logs
localStorage.setItem('debug', 'webrtc:*');
// Monitor connection stats
session.transport.getStats().then(stats => {
console.log('RTT:', stats.roundTripTime);
console.log('Jitter:', stats.jitter);
});
```
### WebSocket
```javascript
// Monitor WebSocket frames in browser DevTools (Network tab)
// Or programmatically
session.transport.on('message', (data) => {
console.log('WS message:', data);
});
```
---
## Recommendations
### Production Voice UI (Browser)
```typescript
// Use WebRTC for best latency
transport: 'webrtc'
```
### Backend Processing (Node.js)
```typescript
// Use WebSocket for simplicity
const transport = new OpenAIRealtimeWebSocket({
apiKey: process.env.OPENAI_API_KEY,
});
```
### Development/Testing
```typescript
// Use WebSocket for easier debugging
transport: 'websocket'
```
### Mobile Apps
```typescript
// Use WebRTC for better quality
// Ensure WebRTC support in your framework
transport: 'webrtc'
```
---
## Migration Between Transports
Switching transports is simple: change one line.
```typescript
// From WebSocket
const session = new RealtimeSession(agent, {
transport: 'websocket',
});
// To WebRTC (just change transport)
const session = new RealtimeSession(agent, {
transport: 'webrtc',
});
// Everything else stays the same!
```
---
**Last Updated**: 2025-10-26
**Source**: [OpenAI Agents Docs - Voice Agents](https://openai.github.io/openai-agents-js/guides/voice-agents)
```
### references/common-errors.md
```markdown
# Common Errors and Solutions
This reference documents known issues with OpenAI Agents SDK and their workarounds.
---
## Error 1: Zod Schema Type Errors with Tool Parameters
**Issue**: Type errors occur when using Zod schemas as tool parameters, even when structurally compatible.
**GitHub Issue**: [#188](https://github.com/openai/openai-agents-js/issues/188)
**Symptoms**:
```typescript
// This causes TypeScript errors
const myTool = tool({
name: 'my_tool',
parameters: myZodSchema, // ❌ Type error
execute: async (input) => { /* ... */ },
});
```
**Workaround**:
```typescript
// Define schema inline
const myTool = tool({
name: 'my_tool',
parameters: z.object({
field1: z.string(),
field2: z.number(),
}), // ✅ Works
execute: async (input) => { /* ... */ },
});
// Or use type assertion (temporary fix)
const myTool = tool({
name: 'my_tool',
parameters: myZodSchema as any, // ⚠️ Loses type safety
execute: async (input) => { /* ... */ },
});
```
**Status**: Known issue as of SDK v0.2.1
**Expected Fix**: Future SDK version
---
## Error 2: MCP Server Tracing Errors
**Issue**: "No existing trace found" error when initializing RealtimeAgent with MCP servers.
**GitHub Issue**: [#580](https://github.com/openai/openai-agents-js/issues/580)
**Symptoms**:
```
UnhandledPromiseRejection: Error: No existing trace found
at RealtimeAgent.init with MCP server
```
**Workaround**:
```typescript
// Ensure tracing is initialized before creating agent
import { initializeTracing } from '@openai/agents/tracing';
await initializeTracing();
// Then create realtime agent with MCP
const agent = new RealtimeAgent({
// ... agent config with MCP servers
});
```
**Status**: Reported October 2025
**Affects**: @openai/agents-realtime v0.0.8 - v0.1.9
---
## Error 3: MaxTurnsExceededError
**Issue**: Agent enters an infinite loop and hits the turn limit.
**Cause**: The agent keeps calling tools or delegating without reaching a conclusion.
**Symptoms**:
```
MaxTurnsExceededError: Agent exceeded maximum turns (10)
```
**Solutions**:
1. **Increase maxTurns**:
```typescript
const result = await run(agent, input, {
maxTurns: 20, // Increase limit
});
```
2. **Improve Instructions**:
```typescript
const agent = new Agent({
instructions: `You are a helpful assistant.
IMPORTANT: After using tools or delegating, provide a final answer.
Do not endlessly loop or delegate back and forth.`,
});
```
3. **Add Exit Criteria**:
```typescript
const agent = new Agent({
instructions: `Answer the question using up to 3 tool calls.
After 3 tool calls, synthesize a final answer.`,
});
```
**Prevention**: Write clear instructions with explicit completion criteria.
---
## Error 4: ToolCallError (Transient Failures)
**Issue**: Tool execution fails temporarily (network, rate limits, external API issues).
**Symptoms**:
```
ToolCallError: Failed to execute tool 'search_api'
```
**Solution**: Implement retry logic with exponential backoff.
```typescript
import { ToolCallError } from '@openai/agents';
async function runWithRetry(agent, input, maxRetries = 3) {
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await run(agent, input);
} catch (error) {
if (error instanceof ToolCallError && attempt < maxRetries) {
const delay = 1000 * Math.pow(2, attempt - 1);
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
throw error;
}
}
}
```
**See Template**: `templates/shared/error-handling.ts`
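The backoff formula in the retry loop doubles the delay on each attempt; a quick standalone check of the schedule it produces:

```typescript
// Delays produced by 1000 * 2^(attempt - 1) for attempts 1..4.
const delays = [1, 2, 3, 4].map(attempt => 1000 * Math.pow(2, attempt - 1));
console.log(delays); // → [1000, 2000, 4000, 8000]
```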
---
## Error 5: GuardrailExecutionError with Fallback
**Issue**: The guardrail itself fails (e.g., the guardrail agent is unavailable).
**Symptoms**:
```
GuardrailExecutionError: Guardrail 'safety_check' failed to execute
```
**Solution**: Implement fallback guardrails.
```typescript
import { GuardrailExecutionError } from '@openai/agents';
const primaryGuardrail = { /* ... */ };
const fallbackGuardrail = { /* simple keyword filter */ };
const agent = new Agent({
inputGuardrails: [primaryGuardrail],
});
try {
const result = await run(agent, input);
} catch (error) {
if (error instanceof GuardrailExecutionError && error.state) {
// Retry with fallback guardrail
agent.inputGuardrails = [fallbackGuardrail];
const result = await run(agent, error.state);
}
}
```
**See Template**: `templates/text-agents/agent-guardrails-input.ts`
---
## Error 6: Schema Mismatch (outputType vs Actual Output)
**Issue**: Agent returns data that doesn't match declared `outputType` schema.
**Cause**: The model sometimes deviates from the schema despite instructions.
**Symptoms**:
```
Validation Error: Output does not match schema
```
**Solutions**:
1. **Add Validation Instructions**:
```typescript
const agent = new Agent({
instructions: `You MUST return data matching this exact schema.
Double-check your output before finalizing.`,
outputType: mySchema,
});
```
2. **Use Stricter Models**:
```typescript
const agent = new Agent({
model: 'gpt-5', // More reliable than gpt-5-mini for structured output
outputType: mySchema,
});
```
3. **Catch and Retry**:
```typescript
try {
const result = await run(agent, input);
// Validate output
mySchema.parse(result.finalOutput);
} catch (error) {
// Retry with stronger prompt
const retryResult = await run(agent,
`CRITICAL: Your previous output was invalid. Return valid JSON matching the schema exactly. ${input}`
);
}
```
---
## Error 7: Ollama Integration Failures
**Issue**: TypeScript Agent SDK fails to connect with Ollama models.
**GitHub Issue**: [#136](https://github.com/openai/openai-agents-js/issues/136)
**Symptoms**:
```
TypeError: Cannot read properties of undefined (reading 'completions')
```
**Cause**: The SDK is designed for the OpenAI API format; Ollama requires an adapter.
**Workaround**: Use a Vercel AI SDK adapter or stick to OpenAI-compatible models.
**Status**: Experimental; not officially supported.
---
## Error 8: Built-in webSearchTool Intermittent Errors
**Issue**: Built-in `webSearchTool()` sometimes throws exceptions.
**Symptoms**: Unpredictable failures when invoking web search.
**Workaround**:
```typescript
// Use custom search tool with error handling
const customSearchTool = tool({
name: 'search',
description: 'Search the web',
parameters: z.object({ query: z.string() }),
execute: async ({ query }) => {
try {
// Your search API (Tavily, Google, etc.)
const results = await fetch(`https://api.example.com/search?q=${encodeURIComponent(query)}`);
return await results.json();
} catch (error) {
return { error: 'Search temporarily unavailable' };
}
},
});
```
**Status**: Known issue in early SDK versions.
---
## Error 9: Agent Builder Export Bugs
**Issue**: Code exported from Agent Builder has bugs (template string escaping, state typing).
**Source**: [OpenAI Community](https://community.openai.com/t/bugs-in-agent-builder-exported-code-typescript-template-string-escaping-state-typing-and-property-naming/1362119)
**Symptoms**: Exported code doesn't compile or run.
**Solution**: Manually review and fix exported code before use.
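One class of bug from that thread is template-string escaping. As a hypothetical illustration (not taken from actual Agent Builder output), a stray backslash before `${` turns interpolation into literal text:

```typescript
const user = 'Ada';

// Buggy: the backslash suppresses interpolation, so "${user}" prints literally.
const buggy = `Hello \${user}`;

// Fixed: remove the stray backslash so interpolation runs.
const fixed = `Hello ${user}`;

console.log(buggy); // → "Hello ${user}"
console.log(fixed); // → "Hello Ada"
```

When reviewing exported code, search for `\${` sequences and confirm each one is intentional.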
---
## General Error Handling Pattern
**Comprehensive error handling template**:
```typescript
import {
MaxTurnsExceededError,
InputGuardrailTripwireTriggered,
OutputGuardrailTripwireTriggered,
ToolCallError,
GuardrailExecutionError,
ModelBehaviorError,
} from '@openai/agents';
try {
const result = await run(agent, input, { maxTurns: 10 });
return result;
} catch (error) {
if (error instanceof MaxTurnsExceededError) {
// Agent hit turn limit - logic issue
console.error('Agent looped too many times');
throw error;
} else if (error instanceof InputGuardrailTripwireTriggered) {
// Input blocked by guardrail - don't retry
console.error('Input blocked:', error.outputInfo);
return { error: 'Input not allowed' };
} else if (error instanceof OutputGuardrailTripwireTriggered) {
// Output blocked by guardrail - don't retry
console.error('Output blocked:', error.outputInfo);
return { error: 'Response blocked for safety' };
} else if (error instanceof ToolCallError) {
// Tool failed - retry with backoff
console.error('Tool failed:', error.toolName);
return retryWithBackoff(agent, input);
} else if (error instanceof GuardrailExecutionError) {
// Guardrail failed - use fallback
console.error('Guardrail failed');
return runWithFallbackGuardrail(agent, input);
} else if (error instanceof ModelBehaviorError) {
// Unexpected model behavior - don't retry
console.error('Model behavior error');
throw error;
} else {
// Unknown error
console.error('Unknown error:', error);
throw error;
}
}
```
**See Template**: `templates/shared/error-handling.ts`
---
**Last Updated**: 2025-10-26
**Sources**:
- [GitHub Issues](https://github.com/openai/openai-agents-js/issues)
- [OpenAI Community](https://community.openai.com/)
- SDK Documentation
```
### templates/shared/error-handling.ts
```typescript
/**
* Comprehensive error handling patterns for OpenAI Agents SDK
*
* Covers all major error types:
* - MaxTurnsExceededError: Agent hit maximum turns limit
* - InputGuardrailTripwireTriggered: Input blocked by guardrail
* - OutputGuardrailTripwireTriggered: Output blocked by guardrail
* - ToolCallError: Tool execution failed
* - ModelBehaviorError: Unexpected model behavior
* - GuardrailExecutionError: Guardrail itself failed
*/
import {
Agent,
run,
MaxTurnsExceededError,
InputGuardrailTripwireTriggered,
OutputGuardrailTripwireTriggered,
ModelBehaviorError,
ToolCallError,
GuardrailExecutionError,
} from '@openai/agents';
/**
* Run agent with comprehensive error handling and retry logic
*/
export async function runAgentWithErrorHandling(
agent: Agent,
input: string,
options: {
maxRetries?: number;
maxTurns?: number;
onError?: (error: Error, attempt: number) => void;
} = {}
) {
const { maxRetries = 3, maxTurns = 10, onError } = options;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
const result = await run(agent, input, { maxTurns });
return result;
} catch (error) {
// Notify error callback
if (onError) {
onError(error as Error, attempt);
}
// Handle specific error types
if (error instanceof MaxTurnsExceededError) {
console.error('❌ Agent exceeded maximum turns');
console.error(`   Agent did not reach a final answer within ${maxTurns} turns`);
throw error; // Don't retry - this is a logic issue
} else if (error instanceof InputGuardrailTripwireTriggered) {
console.error('❌ Input blocked by guardrail');
console.error(' Reason:', error.outputInfo);
throw error; // Don't retry - input is invalid
} else if (error instanceof OutputGuardrailTripwireTriggered) {
console.error('❌ Output blocked by guardrail');
console.error(' Reason:', error.outputInfo);
throw error; // Don't retry - output violates policy
} else if (error instanceof ToolCallError) {
console.error(`⚠️ Tool call failed (attempt ${attempt}/${maxRetries})`);
console.error(' Tool:', error.toolName);
console.error(' Error:', error.message);
if (attempt === maxRetries) {
throw error; // Give up after max retries
}
// Exponential backoff
const delay = 1000 * Math.pow(2, attempt - 1);
console.log(` Retrying in ${delay}ms...`);
await new Promise(resolve => setTimeout(resolve, delay));
} else if (error instanceof ModelBehaviorError) {
console.error('❌ Unexpected model behavior');
console.error(' Details:', error.message);
throw error; // Don't retry - model is behaving incorrectly
} else if (error instanceof GuardrailExecutionError) {
console.error('❌ Guardrail execution failed');
console.error(' Guardrail:', error.guardrailName);
console.error(' Error:', error.message);
// Option to retry with fallback guardrail
// See common-errors.md for fallback pattern
throw error;
} else {
// Unknown error - retry with exponential backoff
console.error(`⚠️ Unknown error (attempt ${attempt}/${maxRetries})`);
console.error(' Error:', error);
if (attempt === maxRetries) {
throw error;
}
const delay = 1000 * Math.pow(2, attempt - 1);
console.log(` Retrying in ${delay}ms...`);
await new Promise(resolve => setTimeout(resolve, delay));
}
}
}
throw new Error('Max retries exceeded');
}
/**
* Example usage
*/
export async function exampleUsage() {
const agent = new Agent({
name: 'Assistant',
instructions: 'You are a helpful assistant.',
});
try {
const result = await runAgentWithErrorHandling(
agent,
'What is 2+2?',
{
maxRetries: 3,
maxTurns: 10,
onError: (error, attempt) => {
console.log(`Error on attempt ${attempt}:`, error.message);
},
}
);
console.log('✅ Success:', result.finalOutput);
console.log('Tokens used:', result.usage.totalTokens);
} catch (error) {
console.error('❌ Final error:', error);
process.exit(1);
}
}
// Uncomment to run example
// exampleUsage();
```