
agents

Build voice AI agents with ElevenLabs. Use when creating voice assistants, customer service bots, interactive voice characters, or any real-time voice conversation experience.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source lives below.

Stars
137
Hot score
95
Updated
March 20, 2026
Overall rating
C (3.8)
Composite score
3.8
Best-practice grade
B (79.6)

Install command

npx @skill-hub/cli install elevenlabs-skills-agents

Repository

elevenlabs/skills

Skill path: agents

Build voice AI agents with ElevenLabs. Use when creating voice assistants, customer service bots, interactive voice characters, or any real-time voice conversation experience.

Open repository

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI.

Target audience: everyone.

License: MIT.

Original source

Catalog source: SkillHub Club.

Repository owner: elevenlabs.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install agents into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/elevenlabs/skills before adding agents to shared team environments
  • Use the agents skill to build and manage ElevenLabs voice agents from your development workflow

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: agents
description: Build voice AI agents with ElevenLabs. Use when creating voice assistants, customer service bots, interactive voice characters, or any real-time voice conversation experience.
license: MIT
compatibility: Requires internet access and an ElevenLabs API key (ELEVENLABS_API_KEY).
metadata: {"openclaw": {"requires": {"env": ["ELEVENLABS_API_KEY"]}, "primaryEnv": "ELEVENLABS_API_KEY"}}
---

# ElevenLabs Agents Platform

Build voice AI agents with natural conversations, multiple LLM providers, custom tools, and easy web embedding.

> **Setup:** See [Installation Guide](references/installation.md) for CLI and SDK setup.

## Quick Start with CLI

The ElevenLabs CLI is the recommended way to create and manage agents:

```bash
# Install CLI and authenticate
npm install -g @elevenlabs/cli
elevenlabs auth login

# Initialize project and create an agent
elevenlabs agents init
elevenlabs agents add "My Assistant" --template default

# Push to ElevenLabs platform
elevenlabs agents push
```

**Available templates:** `default`, `minimal`, `voice-only`, `text-only`, `customer-service`, `assistant`

### Python

```python
from elevenlabs.client import ElevenLabs

client = ElevenLabs()

agent = client.conversational_ai.agents.create(
    name="My Assistant",
    conversation_config={
        "agent": {"first_message": "Hello! How can I help?", "language": "en"},
        "tts": {"voice_id": "JBFqnCBsd6RMkjVDRZzb"}
    },
    prompt={
        "prompt": "You are a helpful assistant. Be concise and friendly.",
        "llm": "gpt-4o-mini",
        "temperature": 0.7
    }
)
```

### JavaScript

```javascript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
const client = new ElevenLabsClient();

const agent = await client.conversationalAi.agents.create({
  name: "My Assistant",
  conversationConfig: {
    agent: { firstMessage: "Hello! How can I help?", language: "en" },
    tts: { voiceId: "JBFqnCBsd6RMkjVDRZzb" }
  },
  prompt: { prompt: "You are a helpful assistant.", llm: "gpt-4o-mini", temperature: 0.7 }
});
```

### cURL

```bash
curl -X POST "https://api.elevenlabs.io/v1/convai/agents/create" \
  -H "xi-api-key: $ELEVENLABS_API_KEY" -H "Content-Type: application/json" \
  -d '{"name": "My Assistant", "conversation_config": {"agent": {"first_message": "Hello!", "language": "en"}, "tts": {"voice_id": "JBFqnCBsd6RMkjVDRZzb"}}, "prompt": {"prompt": "You are helpful.", "llm": "gpt-4o-mini"}}'
```

## Starting Conversations

**Server-side (Python):** Get signed URL for client connection:
```python
signed_url = client.conversational_ai.conversations.get_signed_url(agent_id="your-agent-id")
```

**Client-side (JavaScript):**
```javascript
import { Conversation } from "@elevenlabs/client";

const conversation = await Conversation.startSession({
  agentId: "your-agent-id",
  onMessage: (msg) => console.log("Agent:", msg.message),
  onUserTranscript: (t) => console.log("User:", t.message),
  onError: (e) => console.error(e)
});
```

**React Hook:**
```typescript
import { useConversation } from "@elevenlabs/react";

const conversation = useConversation({ onMessage: (msg) => console.log(msg) });
// Get signed URL from backend, then:
await conversation.startSession({ signedUrl: token });
```

## Configuration

| Provider | Models |
|----------|--------|
| OpenAI | `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo` |
| Anthropic | `claude-3-5-sonnet`, `claude-3-5-haiku` |
| Google | `gemini-1.5-pro`, `gemini-1.5-flash` |
| Custom | `custom-llm` (bring your own endpoint) |

**Popular voices:** `JBFqnCBsd6RMkjVDRZzb` (George), `EXAVITQu4vr4xnSDxMaL` (Sarah), `onwK4e9ZLuTAKqWW03F9` (Daniel), `XB0fDUnXU5powFXDhCwa` (Charlotte)

**Turn-taking modes:** `server_vad` (auto-detect speech end) or `turn_based` (explicit signals)

See [Agent Configuration](references/agent-configuration.md) for all options.
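As a sketch of how these pieces fit together, the voice and turn-taking options above can be combined in a single `conversation_config` (the threshold value here is an illustrative choice, not a recommended default):

```python
# Sketch: pair a documented voice ID with server-side voice activity
# detection. Field names follow the tables above; values are illustrative.
conversation_config = {
    "agent": {"first_message": "Hi! How can I help?", "language": "en"},
    "tts": {"voice_id": "JBFqnCBsd6RMkjVDRZzb"},  # George
    "turn": {
        "mode": "server_vad",         # auto-detect the end of user speech
        "silence_threshold_ms": 500,  # wait 500 ms of silence before replying
    },
}
```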

## Tools

Extend agents with webhook, client, or system tools:

```python
tools=[
    # Webhook: server-side API call
    {"type": "webhook", "name": "get_weather", "description": "Get weather",
     "webhook": {"url": "https://api.example.com/weather", "method": "POST"},
     "parameters": {"type": "object", "properties": {"location": {"type": "string"}}, "required": ["location"]}},
    # System: built-in capabilities
    {"type": "system", "name": "end_call"},
    {"type": "system", "name": "transfer_to_number", "phone_number": "+1234567890"}
]
```

**Client tools** run in browser:
```javascript
clientTools: {
  show_product: async ({ productId }) => {
    document.getElementById("product").src = `/products/${productId}`;
    return { success: true };
  }
}
```

See [Client Tools Reference](references/client-tools.md) for complete documentation.

## Widget Embedding

```html
<elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
<script src="https://unpkg.com/@elevenlabs/convai-widget-embed" async type="text/javascript"></script>
```

Customize with attributes: `avatar-image-url`, `action-text`, `start-call-text`, `end-call-text`.

See [Widget Embedding Reference](references/widget-embedding.md) for all options.

## Outbound Calls

Make outbound phone calls using your agent via Twilio integration:

### Python

```python
response = client.conversational_ai.twilio.outbound_call(
    agent_id="your-agent-id",
    agent_phone_number_id="your-phone-number-id",
    to_number="+1234567890"
)
print(f"Call initiated: {response.conversation_id}")
```

### JavaScript

```javascript
const response = await client.conversationalAi.twilio.outboundCall({
  agentId: "your-agent-id",
  agentPhoneNumberId: "your-phone-number-id",
  toNumber: "+1234567890",
});
```

### cURL

```bash
curl -X POST "https://api.elevenlabs.io/v1/convai/twilio/outbound-call" \
  -H "xi-api-key: $ELEVENLABS_API_KEY" -H "Content-Type: application/json" \
  -d '{"agent_id": "your-agent-id", "agent_phone_number_id": "your-phone-number-id", "to_number": "+1234567890"}'
```

See [Outbound Calls Reference](references/outbound-calls.md) for configuration overrides and dynamic variables.

## Managing Agents

### Using CLI (Recommended)

```bash
# List agents and check status
elevenlabs agents list
elevenlabs agents status

# Import agents from platform to local config
elevenlabs agents pull                      # Import all agents
elevenlabs agents pull --agent <agent-id>   # Import specific agent

# Push local changes to platform
elevenlabs agents push              # Upload configurations
elevenlabs agents push --dry-run    # Preview changes first

# Add tools to agents
elevenlabs agents tools add "Weather API" --type webhook --config-path ./weather.json
```

### Project Structure

The CLI creates a project structure for managing agents:

```
your_project/
├── agents.json       # Agent definitions
├── tools.json        # Tool configurations
├── agent_configs/    # Individual agent configs
└── tool_configs/     # Individual tool configs
```

### SDK Examples

```python
# List
agents = client.conversational_ai.agents.list()

# Get
agent = client.conversational_ai.agents.get(agent_id="your-agent-id")

# Update (partial - only include fields to change)
client.conversational_ai.agents.update(agent_id="your-agent-id", name="New Name")
client.conversational_ai.agents.update(agent_id="your-agent-id",
    prompt={"prompt": "New instructions", "llm": "claude-3-5-sonnet"})

# Delete
client.conversational_ai.agents.delete(agent_id="your-agent-id")
```

See [Agent Configuration](references/agent-configuration.md) for all configuration options and SDK examples.

## Error Handling

```python
try:
    agent = client.conversational_ai.agents.create(...)
except Exception as e:
    print(f"API error: {e}")
```

Common errors: **401** (invalid key), **404** (not found), **422** (invalid config), **429** (rate limit)
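A minimal sketch for turning those status codes into actionable handling; the helper and its retry policy are illustrative, not part of the SDK:

```python
# Illustrative mapping of the common API errors listed above.
ERROR_HINTS = {
    401: "Invalid API key - check ELEVENLABS_API_KEY",
    404: "Agent not found - verify the agent_id",
    429: "Rate limited - back off and retry",
    422: "Invalid configuration - inspect the request payload",
}

def should_retry(status_code: int) -> bool:
    """Only rate limits and server-side errors are worth retrying;
    auth and validation failures need a code or config fix."""
    return status_code == 429 or status_code >= 500
```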

## References

- [Installation Guide](references/installation.md) - SDK setup and migration
- [Agent Configuration](references/agent-configuration.md) - All config options and CRUD examples
- [Client Tools](references/client-tools.md) - Webhook, client, and system tools
- [Widget Embedding](references/widget-embedding.md) - Website integration
- [Outbound Calls](references/outbound-calls.md) - Twilio phone call integration


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/installation.md

```markdown
# Installation

## CLI (Recommended)

The ElevenLabs CLI is the recommended way to create and manage agents:

```bash
npm install -g @elevenlabs/cli
# or
pnpm add -g @elevenlabs/cli
# or
yarn global add @elevenlabs/cli
```

Requires Node.js 16.0.0 or higher.

### Authentication

```bash
elevenlabs auth login          # Authenticate with API key
elevenlabs auth whoami         # Verify current login status
elevenlabs auth logout         # Remove stored credentials
```

API keys are securely stored in `~/.agents/api_keys.json`.

### Quick Start

```bash
# Initialize a new project
elevenlabs agents init

# Create an agent from template
elevenlabs agents add "My Assistant" --template default

# Push to ElevenLabs platform
elevenlabs agents push
```

## JavaScript / TypeScript SDK

For programmatic access and client-side integration:

```bash
npm install @elevenlabs/elevenlabs-js
```

> **Important:** Always use `@elevenlabs/elevenlabs-js`. The old `elevenlabs` npm package (v1.x) is deprecated and should not be used.

```javascript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";

// Option 1: Environment variable (recommended)
// Set ELEVENLABS_API_KEY in your environment
const client = new ElevenLabsClient();

// Option 2: Pass the API key directly
const clientWithKey = new ElevenLabsClient({ apiKey: "your-api-key" });
```

### Migrating from deprecated packages

If you have old packages installed, remove them:

```bash
# Remove deprecated packages
npm uninstall elevenlabs

# Install the current packages
npm install @elevenlabs/elevenlabs-js

# For client-side/browser usage, also install:
npm install @elevenlabs/client  # Browser client
npm install @elevenlabs/react   # React hooks
```

**Import changes:**
```javascript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import { Conversation } from "@elevenlabs/client";
import { useConversation } from "@elevenlabs/react";
```

## Python

```bash
pip install elevenlabs
```

```python
from elevenlabs.client import ElevenLabs

# Option 1: Environment variable (recommended)
# Set ELEVENLABS_API_KEY in your environment
client = ElevenLabs()

# Option 2: Pass directly
client = ElevenLabs(api_key="your-api-key")
```

## cURL / REST API

Set your API key as an environment variable:

```bash
export ELEVENLABS_API_KEY="your-api-key"
```

Include in requests via the `xi-api-key` header:

```bash
curl -X POST "https://api.elevenlabs.io/v1/convai/agents/create" \
  -H "xi-api-key: $ELEVENLABS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "My Agent", "prompt": {"prompt": "You are helpful.", "llm": "gpt-4o-mini"}}'
```

## Getting an API Key

1. Sign up at [elevenlabs.io](https://elevenlabs.io)
2. Go to [API Keys](https://elevenlabs.io/app/settings/api-keys)
3. Click **Create API Key**
4. Copy and store securely

Or use the `setup-api-key` skill for guided setup.

## Environment Variables

| Variable | Description |
|----------|-------------|
| `ELEVENLABS_API_KEY` | Your ElevenLabs API key (required) |

```

### references/agent-configuration.md

```markdown
# Agent Configuration

Complete reference for configuring conversational AI agents.

## Configuration Structure

```python
agent = client.conversational_ai.agents.create(
    name="My Agent",
    conversation_config={...},  # TTS, ASR, turn-taking settings
    prompt={...},               # LLM and system prompt
    tools=[...],                # Webhook, client, and system tools
    platform_settings={...}     # Auth, privacy, call limits
)
```

## conversation_config

Controls the real-time conversation behavior.

### agent

```python
conversation_config={
    "agent": {
        "first_message": "Hello! How can I help you today?",
        "language": "en",
        "max_tokens_agent_response": 500
    }
}
```

| Field | Type | Description |
|-------|------|-------------|
| `first_message` | string | What the agent says when conversation starts |
| `language` | string | ISO 639-1 language code (en, es, fr, etc.) |
| `max_tokens_agent_response` | int | Max tokens per agent response |

### tts (Text-to-Speech)

```python
conversation_config={
    "tts": {
        "voice_id": "JBFqnCBsd6RMkjVDRZzb",
        "model_id": "eleven_flash_v2_5",
        "stability": 0.5,
        "similarity_boost": 0.75,
        "optimize_streaming_latency": 3
    }
}
```

| Field | Type | Description |
|-------|------|-------------|
| `voice_id` | string | Voice to use (required) |
| `model_id` | string | TTS model - use flash models for low latency |
| `stability` | float | 0-1, lower = more expressive |
| `similarity_boost` | float | 0-1, higher = closer to original voice |
| `optimize_streaming_latency` | int | 0-4, higher = faster but lower quality |

**Recommended TTS models for real-time:**
- `eleven_flash_v2_5` - Ultra-low latency (~75ms)
- `eleven_turbo_v2_5` - Balanced quality/speed

### asr (Automatic Speech Recognition)

```python
conversation_config={
    "asr": {
        "model_id": "scribe_v2_realtime",
        "keyterms": ["ElevenLabs", "TechCorp"]
    }
}
```

| Field | Type | Description |
|-------|------|-------------|
| `model_id` | string | ASR model (default: scribe_v2_realtime) |
| `keyterms` | array | Words to recognize accurately |

### turn (Turn-Taking)

```python
conversation_config={
    "turn": {
        "mode": "server_vad",
        "silence_threshold_ms": 500,
        "interrupt_sensitivity": 0.5
    }
}
```

| Field | Type | Description |
|-------|------|-------------|
| `mode` | string | `server_vad` (auto) or `turn_based` (manual) |
| `silence_threshold_ms` | int | Silence duration before agent responds |
| `interrupt_sensitivity` | float | 0-1, how easily user can interrupt |

## prompt

Configures the LLM behavior.

```python
prompt={
    "prompt": "You are a helpful customer service agent...",
    "llm": "gpt-4o-mini",
    "temperature": 0.7,
    "max_tokens": 500,
    "tools_strict_mode": True
}
```

| Field | Type | Description |
|-------|------|-------------|
| `prompt` | string | System prompt defining agent behavior |
| `llm` | string | Model ID (see LLM providers below) |
| `temperature` | float | 0-1, higher = more creative |
| `max_tokens` | int | Max tokens for LLM response |
| `tools_strict_mode` | bool | Enforce strict tool parameter validation |

### LLM Providers

| Provider | Model IDs |
|----------|-----------|
| OpenAI | `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo` |
| Anthropic | `claude-3-5-sonnet`, `claude-3-5-haiku` |
| Google | `gemini-1.5-pro`, `gemini-1.5-flash` |
| Custom | `custom-llm` (requires custom_llm config) |

### Custom LLM

```python
prompt={
    "prompt": "You are helpful.",
    "llm": "custom-llm",
    "custom_llm": {
        "url": "https://your-llm-endpoint.com/v1/chat/completions",
        "model_id": "your-model-id",
        "api_key": "your-api-key"
    }
}
```

## platform_settings

Platform-level configuration for security and limits.

```python
platform_settings={
    "auth": {
        "enable_auth": True,
        "allowlist": ["https://example.com"]
    },
    "privacy": {
        "record_conversation": False,
        "retention_days": 30
    },
    "call_limits": {
        "max_call_duration_secs": 600,
        "max_concurrent_calls": 10
    }
}
```

### auth

| Field | Type | Description |
|-------|------|-------------|
| `enable_auth` | bool | Require signed URLs for connections |
| `allowlist` | array | Allowed origins for CORS |

### privacy

| Field | Type | Description |
|-------|------|-------------|
| `record_conversation` | bool | Store conversation audio/transcripts |
| `retention_days` | int | How long to keep recordings |

### call_limits

| Field | Type | Description |
|-------|------|-------------|
| `max_call_duration_secs` | int | Max conversation length |
| `max_concurrent_calls` | int | Max simultaneous conversations |

## Knowledge Base / RAG

Add documents for the agent to reference:

```python
# Upload a document
doc = client.conversational_ai.knowledge_base.upload(
    file=open("product_guide.pdf", "rb"),
    name="Product Guide"
)

# Create agent with knowledge base
agent = client.conversational_ai.agents.create(
    name="Support Agent",
    knowledge_base=[doc.document_id],
    prompt={
        "prompt": "You are a support agent. Use the knowledge base to answer questions.",
        "llm": "gpt-4o-mini"
    }
)
```

## CRUD Operations

### Using CLI (Recommended)

```bash
# Initialize project
elevenlabs agents init

# Create agent from template
elevenlabs agents add "My Agent" --template default
elevenlabs agents add "Support Bot" --template customer-service

# List agents
elevenlabs agents list

# Check status
elevenlabs agents status

# Push local changes to platform
elevenlabs agents push
elevenlabs agents push --dry-run    # Preview changes first

# Import agents from platform
elevenlabs agents pull                      # Import all
elevenlabs agents pull --agent <agent-id>   # Import specific agent
elevenlabs agents pull --update             # Override local configs

# View available templates
elevenlabs agents templates list
elevenlabs agents templates show <template-name>

# Add tools
elevenlabs agents tools add "API Tool" --type webhook --config-path ./config.json

# Generate widget code
elevenlabs agents widget <agent-id>
```

### SDK: List Agents

```python
agents = client.conversational_ai.agents.list()
for agent in agents.agents:
    print(f"{agent.name}: {agent.agent_id}")
```

```javascript
const agents = await client.conversationalAi.agents.list();
```

```bash
curl -X GET "https://api.elevenlabs.io/v1/convai/agents" -H "xi-api-key: $ELEVENLABS_API_KEY"
```

### SDK: Get Agent

```python
agent = client.conversational_ai.agents.get(agent_id="your-agent-id")
```

```javascript
const agent = await client.conversationalAi.agents.get("your-agent-id");
```

```bash
curl -X GET "https://api.elevenlabs.io/v1/convai/agents/your-agent-id" -H "xi-api-key: $ELEVENLABS_API_KEY"
```

### SDK: Update Agent

Only include fields you want to change. All other settings remain unchanged.

**Python:**
```python
# Update name
client.conversational_ai.agents.update(agent_id="id", name="New Name")

# Update conversation config
client.conversational_ai.agents.update(agent_id="id", conversation_config={
    "agent": {"first_message": "Welcome back!"},
    "tts": {"voice_id": "EXAVITQu4vr4xnSDxMaL", "model_id": "eleven_flash_v2_5"}
})

# Update prompt/LLM
client.conversational_ai.agents.update(agent_id="id", prompt={
    "prompt": "New instructions.", "llm": "claude-3-5-sonnet", "temperature": 0.8
})

# Update tools (replaces existing)
client.conversational_ai.agents.update(agent_id="id", tools=[
    {"type": "webhook", "name": "check_inventory", ...},
    {"type": "system", "name": "end_call"}
])

# Update platform settings
client.conversational_ai.agents.update(agent_id="id", platform_settings={
    "auth": {"enable_auth": True, "allowlist": ["https://myapp.com"]},
    "call_limits": {"max_concurrent_calls": 20}
})
```

**JavaScript:**
```javascript
await client.conversationalAi.agents.update("id", { name: "New Name" });
await client.conversationalAi.agents.update("id", {
  conversationConfig: { tts: { voiceId: "EXAVITQu4vr4xnSDxMaL" } }
});
await client.conversationalAi.agents.update("id", {
  prompt: { prompt: "New instructions.", llm: "claude-3-5-sonnet" }
});
```

**cURL:**
```bash
curl -X PATCH "https://api.elevenlabs.io/v1/convai/agents/your-agent-id" \
  -H "xi-api-key: $ELEVENLABS_API_KEY" -H "Content-Type: application/json" \
  -d '{"name": "New Name"}'
```

#### Updatable Fields

| Section | Fields |
|---------|--------|
| Root | `name` |
| `conversation_config.agent` | `first_message`, `language`, `max_tokens_agent_response` |
| `conversation_config.tts` | `voice_id`, `model_id`, `stability`, `similarity_boost`, `optimize_streaming_latency` |
| `conversation_config.asr` | `model_id`, `keyterms` |
| `conversation_config.turn` | `mode`, `silence_threshold_ms`, `interrupt_sensitivity` |
| `prompt` | `prompt`, `llm`, `temperature`, `max_tokens`, `tools_strict_mode`, `custom_llm` |
| `tools` | Array of tools (replaces existing) |
| `platform_settings.auth` | `enable_auth`, `allowlist` |
| `platform_settings.privacy` | `record_conversation`, `retention_days` |
| `platform_settings.call_limits` | `max_call_duration_secs`, `max_concurrent_calls` |
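Because updates are partial, one defensive pattern is to filter a candidate payload down to the root-level sections listed above before calling `update`. This helper is illustrative, not part of the SDK:

```python
# Illustrative helper: keep only the top-level update sections the table
# above documents, and fail loudly on anything unexpected.
UPDATABLE_SECTIONS = {"name", "conversation_config", "prompt", "tools", "platform_settings"}

def build_update_payload(**changes) -> dict:
    unknown = set(changes) - UPDATABLE_SECTIONS
    if unknown:
        raise ValueError(f"Not updatable: {sorted(unknown)}")
    return changes

# Usage: pass the result as keyword arguments to agents.update()
payload = build_update_payload(name="New Name", prompt={"llm": "gpt-4o-mini"})
```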

### SDK: Delete Agent

```python
client.conversational_ai.agents.delete(agent_id="your-agent-id")
```

```javascript
await client.conversationalAi.agents.delete("your-agent-id");
```

```bash
curl -X DELETE "https://api.elevenlabs.io/v1/convai/agents/your-agent-id" -H "xi-api-key: $ELEVENLABS_API_KEY"
```

## CI/CD Integration

Use the CLI in your deployment pipeline:

```bash
# Set API key as environment variable
export ELEVENLABS_API_KEY="your-api-key"

# Push changes (non-interactive)
elevenlabs agents push
```

## Example Configurations

### Customer Support Agent

```python
agent = client.conversational_ai.agents.create(
    name="Support Agent",
    conversation_config={
        "agent": {"first_message": "Hi! Thanks for calling TechCorp support.", "language": "en"},
        "tts": {"voice_id": "XB0fDUnXU5powFXDhCwa", "model_id": "eleven_flash_v2_5"},
        "turn": {"mode": "server_vad", "silence_threshold_ms": 700}
    },
    prompt={
        "prompt": "You are a customer support agent. Be helpful, professional, concise.",
        "llm": "gpt-4o-mini", "temperature": 0.5
    },
    tools=[{"type": "system", "name": "end_call"}, {"type": "system", "name": "transfer_to_number", "phone_number": "+1234567890"}],
    platform_settings={"call_limits": {"max_call_duration_secs": 900}}
)
```

### Low-Latency Assistant

```python
agent = client.conversational_ai.agents.create(
    name="Quick Assistant",
    conversation_config={
        "agent": {"first_message": "Hey! What do you need?", "max_tokens_agent_response": 100},
        "tts": {"voice_id": "JBFqnCBsd6RMkjVDRZzb", "model_id": "eleven_flash_v2_5", "optimize_streaming_latency": 4},
        "turn": {"mode": "server_vad", "silence_threshold_ms": 300, "interrupt_sensitivity": 0.8}
    },
    prompt={"prompt": "Fast, efficient assistant. Brief answers.", "llm": "gpt-4o-mini", "temperature": 0.3, "max_tokens": 100}
)
```

```

### references/client-tools.md

```markdown
# Client Tools

Extend your agent with custom capabilities. Tools let the agent take actions beyond just talking.

## Tool Types

| Type | Execution | Use Case |
|------|-----------|----------|
| **Webhook** | Server-side via HTTP | Database queries, API calls, secure operations |
| **Client** | Browser-side JavaScript | UI updates, local storage, navigation |
| **System** | Built-in ElevenLabs | End call, transfer, standard actions |

## Webhook Tools

Execute server-side logic when the agent needs external data or actions.

### Basic Webhook

```python
agent = client.conversational_ai.agents.create(
    name="Weather Assistant",
    tools=[{
        "type": "webhook",
        "name": "get_weather",
        "description": "Get current weather for a city. Use when user asks about weather.",
        "webhook": {
            "url": "https://api.example.com/weather",
            "method": "POST",
            "headers": {
                "Authorization": "Bearer {{API_KEY}}"
            }
        },
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g., 'San Francisco'"
                },
                "units": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature units"
                }
            },
            "required": ["city"]
        }
    }],
    prompt={
        "prompt": "You are a helpful assistant that can check the weather.",
        "llm": "gpt-4o-mini"
    }
)
```

### Webhook Request Format

When the agent calls a webhook tool, ElevenLabs sends:

```json
{
  "tool_call_id": "call_abc123",
  "tool_name": "get_weather",
  "parameters": {
    "city": "San Francisco",
    "units": "fahrenheit"
  },
  "conversation_id": "conv_xyz789"
}
```

### Webhook Response Format

Your server should respond with:

```json
{
  "result": "The weather in San Francisco is 68°F and sunny."
}
```

Or for structured data:

```json
{
  "result": {
    "temperature": 68,
    "condition": "sunny",
    "humidity": 45
  }
}
```

### Webhook with Authentication

```python
tools=[{
    "type": "webhook",
    "name": "lookup_order",
    "description": "Look up order status by order ID",
    "webhook": {
        "url": "https://api.mystore.com/orders/lookup",
        "method": "POST",
        "headers": {
            "Authorization": "Bearer {{ORDER_API_KEY}}",
            "X-Store-ID": "store_123"
        },
        "timeout_ms": 5000
    },
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order ID (e.g., ORD-12345)"
            }
        },
        "required": ["order_id"]
    }
}]
```

### Server Implementation (Node.js)

```javascript
app.post("/webhook/get_weather", async (req, res) => {
  const { parameters, conversation_id } = req.body;
  const { city, units = "fahrenheit" } = parameters;

  // Fetch weather from your data source
  const weather = await weatherService.get(city, units);

  res.json({
    result: `It's ${weather.temp}°${units === "celsius" ? "C" : "F"} and ${weather.condition} in ${city}.`,
  });
});
```

### Server Implementation (Python)

```python
@app.post("/webhook/get_weather")
async def get_weather(request: Request):
    data = await request.json()
    city = data["parameters"]["city"]
    units = data["parameters"].get("units", "fahrenheit")

    # Fetch weather from your data source
    weather = weather_service.get(city, units)

    return {
        "result": f"It's {weather['temp']}°{'C' if units == 'celsius' else 'F'} and {weather['condition']} in {city}."
    }
```

## Client Tools

Execute JavaScript in the user's browser. Useful for UI updates, navigation, or accessing browser APIs.

### Defining Client Tools

Client tools are registered when starting a conversation:

```javascript
import { Conversation } from "@elevenlabs/client";

const conversation = await Conversation.startSession({
  agentId: "your-agent-id",
  clientTools: {
    show_product: async ({ productId }) => {
      // Update UI to show product
      const modal = document.getElementById("product-modal");
      modal.innerHTML = await fetchProductCard(productId);
      modal.showModal();
      return { success: true, message: "Showing product" };
    },

    navigate_to: async ({ page }) => {
      // Navigate to a page
      window.location.href = `/${page}`;
      return { success: true };
    },

    save_preference: async ({ key, value }) => {
      // Store in localStorage
      localStorage.setItem(key, value);
      return { saved: true };
    },
  },
});
```

### Registering Client Tools with Agent

Tell the agent about available client tools in the agent config:

```python
agent = client.conversational_ai.agents.create(
    name="Shopping Assistant",
    tools=[
        {
            "type": "client",
            "name": "show_product",
            "description": "Display a product card to the user",
            "parameters": {
                "type": "object",
                "properties": {
                    "productId": {
                        "type": "string",
                        "description": "Product ID to display"
                    }
                },
                "required": ["productId"]
            }
        },
        {
            "type": "client",
            "name": "navigate_to",
            "description": "Navigate user to a different page",
            "parameters": {
                "type": "object",
                "properties": {
                    "page": {
                        "type": "string",
                        "enum": ["cart", "checkout", "account", "home"],
                        "description": "Page to navigate to"
                    }
                },
                "required": ["page"]
            }
        }
    ],
    prompt={
        "prompt": """You are a shopping assistant.
When users want to see a product, use show_product.
When users want to go somewhere, use navigate_to.""",
        "llm": "gpt-4o-mini"
    }
)
```

### Client Tool Return Values

Return data that the agent can use in conversation:

```javascript
clientTools: {
  check_cart: async () => {
    const cart = JSON.parse(localStorage.getItem("cart") || "[]");
    return {
      itemCount: cart.length,
      total: cart.reduce((sum, item) => sum + item.price, 0),
      items: cart.map((item) => item.name),
    };
  },
}
```

The agent receives this data and can say: "You have 3 items in your cart totaling $45.99."

## System Tools

Built-in tools provided by ElevenLabs.

### end_call

Ends the current conversation:

```python
tools=[
    {"type": "system", "name": "end_call"}
]
```

The agent can say "Goodbye!" and then end the call programmatically.

### transfer_to_number

Transfer to a phone number (requires telephony integration):

```python
tools=[
    {
        "type": "system",
        "name": "transfer_to_number",
        "phone_number": "+1234567890",
        "description": "Transfer to human support"
    }
]
```

### transfer_to_agent

Transfer to another ElevenLabs agent:

```python
tools=[
    {
        "type": "system",
        "name": "transfer_to_agent",
        "agent_id": "other-agent-id",
        "description": "Transfer to sales specialist"
    }
]
```

## Best Practices

### Tool Descriptions

Write clear descriptions so the LLM knows when to use tools:

```python
# Good - specific and actionable
"description": "Look up order status. Use when customer asks about their order, delivery, or shipping."

# Bad - vague
"description": "Order tool"
```

### Parameter Descriptions

Help the LLM extract correct values:

```python
"parameters": {
    "type": "object",
    "properties": {
        "order_id": {
            "type": "string",
            "description": "Order ID in format ORD-XXXXX (e.g., ORD-12345)"
        },
        "email": {
            "type": "string",
            "description": "Customer email address for verification"
        }
    }
}
```

### Error Handling

Return helpful error messages:

```javascript
// Server webhook
app.post("/webhook/lookup_order", async (req, res) => {
  const { order_id } = req.body.parameters;

  const order = await db.orders.find(order_id);

  if (!order) {
    return res.json({
      result: {
        error: true,
        message: `Order ${order_id} not found. Please verify the order ID.`,
      },
    });
  }

  res.json({ result: order });
});
```

### Timeouts

Set reasonable timeouts for webhooks:

```python
"webhook": {
    "url": "https://api.example.com/slow-operation",
    "method": "POST",
    "timeout_ms": 10000  # 10 seconds
}
```

## Complete Example

```python
agent = client.conversational_ai.agents.create(
    name="E-commerce Assistant",
    tools=[
        # Webhook: Server-side order lookup
        {
            "type": "webhook",
            "name": "lookup_order",
            "description": "Look up order status by order ID or email",
            "webhook": {
                "url": "https://api.mystore.com/orders/lookup",
                "method": "POST",
                "headers": {"Authorization": "Bearer {{API_KEY}}"}
            },
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"},
                    "email": {"type": "string"}
                }
            }
        },
        # Client: Browser-side product display
        {
            "type": "client",
            "name": "show_product",
            "description": "Display product details to the customer",
            "parameters": {
                "type": "object",
                "properties": {
                    "product_id": {"type": "string"}
                },
                "required": ["product_id"]
            }
        },
        # System: Built-in call control
        {"type": "system", "name": "end_call"},
        {
            "type": "system",
            "name": "transfer_to_number",
            "phone_number": "+1234567890"
        }
    ],
    prompt={
        "prompt": """You are an e-commerce support assistant.

Available actions:
- lookup_order: Check order status
- show_product: Display products to customer
- end_call: End conversation politely
- transfer_to_number: Transfer to human support

Always verify order ID before lookup. Offer transfer for complex issues.""",
        "llm": "gpt-4o-mini"
    }
)
```

```

### references/widget-embedding.md

```markdown
# Widget Embedding

Add a voice AI agent to any website with the ElevenLabs conversation widget.

## Basic Embed

```html
<elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
<script src="https://unpkg.com/@elevenlabs/convai-widget-embed" async type="text/javascript"></script>
```

This creates a floating button that users can click to start a voice conversation.

> **Note:** Widgets currently require public agents with authentication disabled. For authenticated flows, use the SDKs.

## Widget Attributes

### Required

| Attribute | Description |
|-----------|-------------|
| `agent-id` | Your ElevenLabs agent ID |
| `signed-url` | Alternative to `agent-id` when using signed URLs |

### Appearance

| Attribute | Description | Default |
|-----------|-------------|---------|
| `avatar-image-url` | URL for agent avatar image | ElevenLabs logo |
| `avatar-orb-color-1` | Primary orb gradient color | `#2792dc` |
| `avatar-orb-color-2` | Secondary orb gradient color | `#9ce6e6` |

### Text Labels

| Attribute | Description | Default |
|-----------|-------------|---------|
| `action-text` | Tooltip when hovering | "Talk to AI" |
| `start-call-text` | Button to start call | "Start call" |
| `end-call-text` | Button to end call | "End call" |
| `expand-text` | Expand chat button | "Open" |
| `collapse-text` | Collapse chat button | "Close" |
| `listening-text` | Listening state label | "Listening..." |
| `speaking-text` | Speaking state label | "Assistant speaking" |

### Behavior

| Attribute | Description | Default |
|-----------|-------------|---------|
| `variant` | Widget style: `compact` or `expanded` | `compact` |
| `server-location` | Server region (`us`, `eu-residency`, `in-residency`, `global`) | `us` |
| `dismissible` | Allow the user to minimize the widget | `false` |
| `disable-banner` | Hide "Powered by ElevenLabs" | `false` |

## Examples

### Custom Avatar

```html
<elevenlabs-convai
  agent-id="your-agent-id"
  avatar-image-url="https://example.com/your-avatar.png"
></elevenlabs-convai>
```

### Custom Colors

```html
<elevenlabs-convai
  agent-id="your-agent-id"
  avatar-orb-color-1="#ff6b6b"
  avatar-orb-color-2="#ffd93d"
></elevenlabs-convai>
```

### Custom Text

```html
<elevenlabs-convai
  agent-id="your-agent-id"
  action-text="Chat with our AI assistant"
  start-call-text="Begin conversation"
  end-call-text="Hang up"
></elevenlabs-convai>
```

### Expanded Variant

```html
<elevenlabs-convai
  agent-id="your-agent-id"
  variant="expanded"
></elevenlabs-convai>
```

### Full Customization

```html
<elevenlabs-convai
  agent-id="your-agent-id"
  avatar-image-url="https://example.com/support-agent.png"
  avatar-orb-color-1="#4f46e5"
  avatar-orb-color-2="#818cf8"
  action-text="Talk to Support"
  start-call-text="Start voice chat"
  end-call-text="End conversation"
  expand-text="Open assistant"
  collapse-text="Minimize"
></elevenlabs-convai>
```

## CSS Customization

The widget uses Shadow DOM but exposes CSS custom properties:

```css
elevenlabs-convai {
  --elevenlabs-convai-widget-width: 400px;
  --elevenlabs-convai-widget-height: 600px;
}
```

### Positioning

By default, the widget appears in the bottom-right corner. Override with CSS:

```css
elevenlabs-convai {
  position: fixed;
  bottom: 20px;
  right: 20px;
  /* Or position differently */
  left: 20px;
  right: auto;
}
```

### Z-Index

```css
elevenlabs-convai {
  z-index: 9999;
}
```

## JavaScript Control

Access the widget element to control it programmatically:

```html
<elevenlabs-convai id="my-widget" agent-id="your-agent-id"></elevenlabs-convai>

<script>
  const widget = document.getElementById("my-widget");

  // Start a conversation
  widget.startConversation();

  // End the conversation
  widget.endConversation();

  // Listen for events
  widget.addEventListener("conversationStarted", () => {
    console.log("Conversation started");
  });

  widget.addEventListener("conversationEnded", () => {
    console.log("Conversation ended");
  });
</script>
```

### Custom Trigger Button

Hide the default widget and use your own button:

```html
<style>
  elevenlabs-convai {
    display: none;
  }
</style>

<button onclick="document.getElementById('widget').startConversation()">
  Talk to AI
</button>

<elevenlabs-convai id="widget" agent-id="your-agent-id"></elevenlabs-convai>
```

## Authentication

For agents with authentication enabled, pass a signed URL:

```html
<elevenlabs-convai id="widget" agent-id="your-agent-id"></elevenlabs-convai>

<script>
  async function startAuthenticatedConversation() {
    // Get signed URL from your backend
    const response = await fetch("/api/get-signed-url");
    const { signedUrl } = await response.json();

    const widget = document.getElementById("widget");
    widget.setAttribute("signed-url", signedUrl);
    widget.startConversation();
  }
</script>
```

Your backend:

```python
@app.get("/api/get-signed-url")
def get_signed_url():
    signed_url = client.conversational_ai.conversations.get_signed_url(
        agent_id="your-agent-id"
    )
    return {"signedUrl": signed_url.signed_url}
```

## Mobile Considerations

### Responsive Positioning

```css
/* Desktop: bottom-right */
elevenlabs-convai {
  position: fixed;
  bottom: 20px;
  right: 20px;
}

/* Mobile: full-width bottom */
@media (max-width: 768px) {
  elevenlabs-convai {
    bottom: 0;
    right: 0;
    left: 0;
    --elevenlabs-convai-widget-width: 100%;
  }
}
```

### Touch-Friendly

The widget is touch-optimized by default. For better mobile UX:

```css
@media (max-width: 768px) {
  elevenlabs-convai {
    /* Larger touch target */
    transform: scale(1.1);
    transform-origin: bottom right;
  }
}
```

## Multiple Widgets

You can have multiple widgets for different agents:

```html
<elevenlabs-convai
  agent-id="support-agent-id"
  action-text="Support"
  style="right: 20px"
></elevenlabs-convai>

<elevenlabs-convai
  agent-id="sales-agent-id"
  action-text="Sales"
  style="right: 100px"
></elevenlabs-convai>
```

## Framework Integration

### React

```jsx
import { useEffect } from "react";

function App() {
  useEffect(() => {
    // Load widget script
    const script = document.createElement("script");
    script.src = "https://unpkg.com/@elevenlabs/convai-widget-embed";
    script.async = true;
    document.body.appendChild(script);

    return () => document.body.removeChild(script);
  }, []);

  return (
    <div>
      <elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
    </div>
  );
}
```

### Vue

```vue
<template>
  <div>
    <elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
  </div>
</template>

<script setup>
import { onMounted } from "vue";

onMounted(() => {
  const script = document.createElement("script");
  script.src = "https://unpkg.com/@elevenlabs/convai-widget-embed";
  script.async = true;
  document.body.appendChild(script);
});
</script>
```

### Next.js

```jsx
import Script from "next/script";

export default function Page() {
  return (
    <>
      <Script
        src="https://unpkg.com/@elevenlabs/convai-widget-embed"
        strategy="lazyOnload"
      />
      <elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
    </>
  );
}
```

## Troubleshooting

### Widget Not Appearing

1. Check that the agent ID is correct
2. Verify the script is loaded (check Network tab)
3. Check for JavaScript errors in console
4. Ensure no CSS is hiding the widget

### Audio Issues

1. Ensure HTTPS (microphone requires secure context)
2. Check browser permissions for microphone
3. Test in a supported browser (Chrome, Firefox, Safari, Edge)

### CORS Errors

If using authentication, ensure your domain is in the agent's allowlist:

```python
platform_settings={
    "auth": {
        "enable_auth": True,
        "allowlist": ["https://yourdomain.com"]
    }
}
```

```

### references/outbound-calls.md

```markdown
# Outbound Calls

Make outbound phone calls using your ElevenLabs agent via Twilio integration.

## Prerequisites

1. A configured ElevenLabs agent
2. A Twilio phone number linked to your agent (obtain `agent_phone_number_id` from ElevenLabs dashboard)
3. Your ElevenLabs API key

## Basic Usage

See the [main agents skill](../SKILL.md#outbound-calls) for basic Python, JavaScript, and cURL examples.

## Request Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `agent_id` | string | Yes | The ID of your ElevenLabs agent |
| `agent_phone_number_id` | string | Yes | The ID of the Twilio phone number linked to your agent |
| `to_number` | string | Yes | The destination phone number (E.164 format) |
| `conversation_initiation_client_data` | object | No | Override conversation settings for this call |
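Since `to_number` must be in E.164 format, it can save failed API calls to validate numbers up front. The helper below is a hypothetical pre-check, not part of the ElevenLabs SDK:

```python
import re

# E.164: a "+" followed by 2-15 digits, with no leading zero.
E164_RE = re.compile(r"\+[1-9]\d{1,14}")

def is_e164(number: str) -> bool:
    """Return True if number looks like a valid E.164 phone number."""
    return bool(E164_RE.fullmatch(number))
```

For example, `is_e164("+14155552671")` is `True`, while `is_e164("0415555267")` and `is_e164("+0987654321")` are `False`.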

## Response

```json
{
  "success": true,
  "message": "Call initiated successfully",
  "conversation_id": "conv_abc123",
  "callSid": "CA1234567890abcdef"
}
```

| Field | Type | Description |
|-------|------|-------------|
| `success` | boolean | Whether the call was initiated successfully |
| `message` | string | Status message |
| `conversation_id` | string | ElevenLabs conversation ID for tracking |
| `callSid` | string | Twilio Call SID for reference |
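A minimal sketch of consuming these fields, assuming the response has been parsed into a plain dict keyed exactly as in the table above (the SDK may instead return a typed object):

```python
def summarize_call(response: dict) -> str:
    """Build a log line from the outbound-call response fields."""
    if not response.get("success"):
        return f"Call failed: {response.get('message', 'unknown error')}"
    return (
        f"Call started (conversation {response['conversation_id']}, "
        f"Twilio SID {response['callSid']})"
    )
```

Keep the `conversation_id` if you want to fetch the transcript later, and the `callSid` for cross-referencing in the Twilio console.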

## Customizing the Call

Override agent settings for a specific call using `conversation_initiation_client_data`:

### Python

```python
response = client.conversational_ai.twilio.outbound_call(
    agent_id="your-agent-id",
    agent_phone_number_id="your-phone-number-id",
    to_number="+1234567890",
    conversation_initiation_client_data={
        "conversation_config_override": {
            "agent": {
                "first_message": "Hello! This is a reminder about your appointment tomorrow.",
                "language": "en"
            },
            "tts": {
                "voice_id": "JBFqnCBsd6RMkjVDRZzb"
            }
        },
        "dynamic_variables": {
            "customer_name": "John",
            "appointment_time": "2:00 PM"
        }
    }
)
```

### JavaScript

```javascript
const response = await client.conversationalAi.twilio.outboundCall({
  agentId: "your-agent-id",
  agentPhoneNumberId: "your-phone-number-id",
  toNumber: "+1234567890",
  conversationInitiationClientData: {
    conversationConfigOverride: {
      agent: {
        firstMessage: "Hello! This is a reminder about your appointment tomorrow.",
        language: "en",
      },
      tts: {
        voiceId: "JBFqnCBsd6RMkjVDRZzb",
      },
    },
    dynamicVariables: {
      customer_name: "John",
      appointment_time: "2:00 PM",
    },
  },
});
```

## Configuration Overrides

### Agent Settings

| Option | Type | Description |
|--------|------|-------------|
| `first_message` | string | Custom greeting for this call |
| `language` | string | Language code (e.g., "en", "es", "fr") |
| `prompt` | object | Override agent prompt and LLM settings |

### TTS Settings

| Option | Type | Description |
|--------|------|-------------|
| `voice_id` | string | Voice ID to use for this call |
| `stability` | number | Voice stability (0.0-1.0) |
| `similarity_boost` | number | Voice similarity boost (0.0-1.0) |
| `speed` | number | Speech speed multiplier |

### Dynamic Variables

Pass custom data to your agent's prompt using `dynamic_variables`. Reference them in your agent's prompt with `{{variable_name}}` syntax.
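ElevenLabs performs this substitution server-side, but the sketch below illustrates how `{{variable_name}}` placeholders in a prompt line up with the `dynamic_variables` payload (illustrative only, not SDK code):

```python
import re

def render_prompt(template: str, dynamic_variables: dict) -> str:
    """Replace {{name}} placeholders with values from dynamic_variables.

    Placeholders with no matching variable are left untouched.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(dynamic_variables.get(m.group(1), m.group(0))),
        template,
    )
```

For example, `render_prompt("Remind {{customer_name}} about the {{appointment_time}} appointment.", {"customer_name": "John", "appointment_time": "2:00 PM"})` yields `"Remind John about the 2:00 PM appointment."`.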

## Complete Example

```python
from elevenlabs import ElevenLabs

client = ElevenLabs()

# Make personalized outbound calls
customers = [
    {"name": "Alice", "phone": "+1234567890", "balance": "$150.00"},
    {"name": "Bob", "phone": "+0987654321", "balance": "$75.50"},
]

for customer in customers:
    try:
        response = client.conversational_ai.twilio.outbound_call(
            agent_id="payment-reminder-agent",
            agent_phone_number_id="your-phone-number-id",
            to_number=customer["phone"],
            conversation_initiation_client_data={
                "conversation_config_override": {
                    "agent": {
                        "first_message": f"Hello {customer['name']}, this is a friendly reminder about your account."
                    }
                },
                "dynamic_variables": {
                    "customer_name": customer["name"],
                    "balance": customer["balance"]
                }
            }
        )
        print(f"Called {customer['name']}: {response.conversation_id}")
    except Exception as e:
        print(f"Failed to call {customer['name']}: {e}")
```

```