agno
Agno AI agent framework. Use for building multi-agent systems, AgentOS runtime, MCP server integration, and agentic AI development.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install delorenj-skills-agno
Repository
Skill path: agno
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Backend, Data / AI, Integration.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: delorenj.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install agno into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/delorenj/skills before adding agno to shared team environments
- Use agno for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: agno
description: Agno AI agent framework. Use for building multi-agent systems, AgentOS runtime, MCP server integration, and agentic AI development.
---
# Agno Skill
Comprehensive assistance with Agno development - a modern AI agent framework for building production-ready multi-agent systems with MCP integration, workflow orchestration, and AgentOS runtime.
## When to Use This Skill
This skill should be triggered when:
- **Building AI agents** with tools, memory, and structured outputs
- **Creating multi-agent teams** with role-based delegation and collaboration
- **Implementing workflows** with conditional branching, loops, and async execution
- **Integrating MCP servers** (stdio, SSE, or Streamable HTTP transports)
- **Deploying AgentOS** with custom FastAPI apps, JWT middleware, or database backends
- **Working with knowledge bases** for RAG and document processing
- **Debugging agent behavior** with debug mode and telemetry
- **Optimizing agent performance** with exponential backoff, retries, and rate limiting
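The retry bullet above has no worked example in the Quick Reference. As a rough illustration, this pure-Python sketch shows the delay schedule exponential backoff produces; `backoff_delays` is a hypothetical helper for illustration only, and the exact parameter names Agno uses (reportedly `retries` and `exponential_backoff` on `Agent`) should be confirmed against the Agent class reference.

```python
def backoff_delays(retries: int, base_delay: float = 1.0, factor: float = 2.0) -> list[float]:
    """Seconds to wait before each retry attempt: base_delay * factor**attempt."""
    return [base_delay * factor**attempt for attempt in range(retries)]

print(backoff_delays(4))  # [1.0, 2.0, 4.0, 8.0]
```

Doubling delays like this spreads retries out so a rate-limited model endpoint has time to recover between attempts.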
## Key Concepts
### Core Architecture
- **Agent**: Single autonomous AI unit with model, tools, instructions, and optional memory/knowledge
- **Team**: Collection of agents that collaborate on tasks with role-based delegation
- **Workflow**: Multi-step orchestration with conditional branching, loops, and parallel execution
- **AgentOS**: FastAPI-based runtime for deploying agents as production APIs
### MCP Integration
- **MCPTools**: Connect to single MCP server via stdio, SSE, or Streamable HTTP
- **MultiMCPTools**: Connect to multiple MCP servers simultaneously
- **Transport Types**: stdio (local processes), SSE (server-sent events), Streamable HTTP (production)
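The three transports correspond to three constructor shapes for `MCPTools`. This sketch is assembled from the reference files bundled below; the SSE URL is a placeholder, not a real endpoint.

```python
from agno.tools.mcp import MCPTools

# stdio: spawn a local process that speaks MCP
stdio_tools = MCPTools(command="uvx mcp-server-git")

# SSE: connect to a server-sent-events endpoint (placeholder URL)
sse_tools = MCPTools(url="http://localhost:8000/sse", transport="sse")

# Streamable HTTP: the transport recommended for production
http_tools = MCPTools(url="https://docs.agno.com/mcp", transport="streamable-http")
```

Whichever shape you use, call `await tools.connect()` before handing the instance to an agent and `await tools.close()` when done.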
### Memory & Knowledge
- **Session Memory**: Conversation state stored in PostgreSQL, SQLite, or cloud storage (GCS)
- **Knowledge Base**: RAG-powered document retrieval with vector embeddings
- **User Memory**: Persistent user-specific memories across sessions
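Session memory with SQLite can be sketched along the same lines as the PostgreSQL pattern shown in the Quick Reference. This assumes `SqliteDb` takes a `db_file` parameter; verify against the database reference before relying on it.

```python
from agno.agent import Agent
from agno.db.sqlite import SqliteDb

# db_file is an assumption; check the SqliteDb reference for the exact name
db = SqliteDb(db_file="agno.db")

agent = Agent(
    db=db,
    session_id="user-123",
    add_history_to_messages=True,
)
```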
## Quick Reference
### 1. Basic Agent with Tools
```python
from agno.agent import Agent
from agno.tools.duckduckgo import DuckDuckGoTools
agent = Agent(
    tools=[DuckDuckGoTools()],
    markdown=True,
)
agent.print_response("Search for the latest AI news", stream=True)
```
### 2. Agent with Structured Output
```python
from agno.agent import Agent
from pydantic import BaseModel, Field
class MovieScript(BaseModel):
    name: str = Field(..., description="Movie title")
    genre: str = Field(..., description="Movie genre")
    storyline: str = Field(..., description="3 sentence storyline")

agent = Agent(
    description="You help people write movie scripts.",
    output_schema=MovieScript,
)
result = agent.run("Write a sci-fi thriller")
print(result.content.name) # Access structured output
```
### 3. MCP Server Integration (stdio)
```python
import asyncio
from agno.agent import Agent
from agno.tools.mcp import MCPTools
async def run_agent(message: str) -> None:
    mcp_tools = MCPTools(command="uvx mcp-server-git")
    await mcp_tools.connect()
    try:
        agent = Agent(tools=[mcp_tools])
        await agent.aprint_response(message, stream=True)
    finally:
        await mcp_tools.close()
asyncio.run(run_agent("What is the license for this project?"))
```
### 4. Multiple MCP Servers
```python
import asyncio
import os
from agno.agent import Agent
from agno.tools.mcp import MultiMCPTools
async def run_agent(message: str) -> None:
    env = {
        **os.environ,
        "GOOGLE_MAPS_API_KEY": os.getenv("GOOGLE_MAPS_API_KEY"),
    }
    mcp_tools = MultiMCPTools(
        commands=[
            "npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt",
            "npx -y @modelcontextprotocol/server-google-maps",
        ],
        env=env,
    )
    await mcp_tools.connect()
    try:
        agent = Agent(tools=[mcp_tools], markdown=True)
        await agent.aprint_response(message, stream=True)
    finally:
        await mcp_tools.close()
```
### 5. Multi-Agent Team with Role Delegation
```python
from agno.agent import Agent
from agno.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
research_agent = Agent(
    name="Research Specialist",
    role="Gather information on topics",
    tools=[DuckDuckGoTools()],
    instructions=["Find comprehensive information", "Cite sources"],
)

news_agent = Agent(
    name="News Analyst",
    role="Analyze tech news",
    tools=[HackerNewsTools()],
    instructions=["Focus on trending topics", "Summarize key points"],
)

team = Team(
    members=[research_agent, news_agent],
    instructions=["Delegate research tasks to appropriate agents"],
)
team.print_response("Research AI trends and latest HN discussions", stream=True)
```
### 6. Workflow with Conditional Branching
```python
from agno.agent import Agent
from agno.workflow.workflow import Workflow
from agno.workflow.router import Router
from agno.workflow.step import Step
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
simple_researcher = Agent(
    name="Simple Researcher",
    tools=[DuckDuckGoTools()],
)

deep_researcher = Agent(
    name="Deep Researcher",
    tools=[HackerNewsTools()],
)

workflow = Workflow(
    steps=[
        Router(
            routes={
                "simple_topics": Step(agent=simple_researcher),
                "complex_topics": Step(agent=deep_researcher),
            }
        )
    ]
)
workflow.run("Research quantum computing")
```
### 7. Agent with Database Session Storage
```python
from agno.agent import Agent
from agno.db.postgres import PostgresDb
db = PostgresDb(
    db_url="postgresql://user:pass@localhost:5432/agno",
    schema="agno_sessions",
)

agent = Agent(
    db=db,
    session_id="user-123",  # Persistent session
    add_history_to_messages=True,
)
# Conversations are automatically saved and restored
agent.print_response("Remember my favorite color is blue")
agent.print_response("What's my favorite color?") # Will remember
```
### 8. AgentOS with Custom FastAPI App
```python
from fastapi import FastAPI
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.os import AgentOS
# Custom FastAPI app
app = FastAPI(title="Custom App")

@app.get("/health")
def health_check():
    return {"status": "healthy"}

# Add AgentOS routes
agent_os = AgentOS(
    agents=[Agent(id="assistant", model=OpenAIChat(id="gpt-5-mini"))],
    base_app=app,  # Merge with custom app
)

if __name__ == "__main__":
    agent_os.serve(app="custom_app:app", reload=True)
```
### 9. Agent with Debug Mode
```python
from agno.agent import Agent
from agno.tools.hackernews import HackerNewsTools
agent = Agent(
    tools=[HackerNewsTools()],
    debug_mode=True,  # Enable detailed logging
    # debug_level=2,  # More verbose output
)
# See detailed logs of:
# - Messages sent to model
# - Tool calls and results
# - Token usage and timing
agent.print_response("Get top HN stories")
```
### 10. Workflow with Input Schema Validation
```python
from typing import List
from agno.agent import Agent
from agno.workflow.workflow import Workflow
from agno.workflow.step import Step
from pydantic import BaseModel, Field
class ResearchTopic(BaseModel):
    """Structured research topic with specific requirements"""

    topic: str
    focus_areas: List[str] = Field(description="Specific areas to focus on")
    target_audience: str = Field(description="Who this research is for")
    sources_required: int = Field(description="Number of sources needed", default=5)

workflow = Workflow(
    input_schema=ResearchTopic,  # Validate inputs
    steps=[
        Step(agent=Agent(instructions=["Research based on focus areas"])),
    ],
)

# This will validate the input structure
workflow.run({
    "topic": "AI Safety",
    "focus_areas": ["alignment", "interpretability"],
    "target_audience": "researchers",
    "sources_required": 10,
})
```
## Reference Files
This skill includes comprehensive documentation in `references/`:
### **agentos.md** (22 pages)
- MCP server integration (stdio, SSE, Streamable HTTP)
- Multiple MCP server connections
- Custom FastAPI app integration
- JWT middleware and authentication
- AgentOS lifespan management
- Telemetry and monitoring
### **agents.md** (834 pages)
- Agent creation and configuration
- Tools integration (DuckDuckGo, HackerNews, Pandas, PostgreSQL, Wikipedia)
- Structured outputs with Pydantic
- Memory management (session, user, knowledge)
- Debugging with debug mode
- Human-in-the-loop patterns
- Multimodal agents (audio, video, images)
- Database backends (PostgreSQL, SQLite, GCS)
- State management and session persistence
### **examples.md** (188 pages)
- Workflow patterns (conditional branching, loops, routers)
- Team collaboration examples
- Async streaming workflows
- Audio/video processing teams
- Image generation pipelines
- Multi-step orchestration
- Input schema validation
### **getting_started.md**
- Installation and setup
- First agent examples
- MCP server quickstarts
- Common patterns and best practices
### **integration.md**
- Third-party integrations
- API connections
- Custom tool creation
- Database setup
### **migration.md**
- Upgrading between versions
- Breaking changes and migration guides
- Deprecated features
### **other.md**
- Advanced topics
- Performance optimization
- Production deployment
## Working with This Skill
### For Beginners
Start with **getting_started.md** to understand:
- Basic agent creation with `Agent()`
- Adding tools for web search, databases, etc.
- Running agents with `.print_response()` or `.run()`
- Understanding the difference between Agent, Team, and Workflow
**Quick Start Pattern:**
```python
from agno.agent import Agent
from agno.tools.duckduckgo import DuckDuckGoTools
agent = Agent(tools=[DuckDuckGoTools()])
agent.print_response("Your question here")
```
### For Intermediate Users
Explore **agents.md** and **examples.md** for:
- Multi-agent teams with role delegation
- MCP server integration (local tools via stdio)
- Workflow orchestration with conditional logic
- Session persistence with databases
- Structured outputs with Pydantic models
**Team Pattern:**
```python
from agno.team import Team
team = Team(
    members=[researcher, analyst, writer],
    instructions=["Delegate tasks based on agent roles"],
)
```
### For Advanced Users
Deep dive into **agentos.md** for:
- AgentOS deployment with custom FastAPI apps
- Multiple MCP server orchestration
- Production authentication with JWT middleware
- Custom lifespan management
- Performance tuning with exponential backoff
- Telemetry and monitoring integration
**AgentOS Pattern:**
```python
from agno.os import AgentOS
agent_os = AgentOS(
    agents=[agent1, agent2],
    db=PostgresDb(...),
    base_app=custom_fastapi_app,
)
agent_os.serve()
```
### Navigation Tips
1. **Looking for examples?** → Check `examples.md` first for real-world patterns
2. **Need API details?** → Search `agents.md` for class references and parameters
3. **Deploying to production?** → Read `agentos.md` for AgentOS setup
4. **Integrating external tools?** → See `integration.md` for MCP and custom tools
5. **Debugging issues?** → Enable `debug_mode=True` and check logs
## Common Patterns
### Pattern: MCP Server Connection Lifecycle
```python
async def run_with_mcp():
    mcp_tools = MCPTools(command="uvx mcp-server-git")
    await mcp_tools.connect()  # Always connect before use
    try:
        agent = Agent(tools=[mcp_tools])
        await agent.aprint_response("Your query")
    finally:
        await mcp_tools.close()  # Always close when done
```
### Pattern: Persistent Sessions with Database
```python
from agno.agent import Agent
from agno.db.postgres import PostgresDb
db = PostgresDb(db_url="postgresql://...")

agent = Agent(
    db=db,
    session_id="unique-user-id",
    add_history_to_messages=True,  # Include conversation history
)
```
### Pattern: Conditional Workflow Routing
```python
from agno.workflow.router import Router
workflow = Workflow(
    steps=[
        Router(
            routes={
                "route_a": Step(agent=agent_a),
                "route_b": Step(agent=agent_b),
            }
        )
    ]
)
```
## Resources
### Official Links
- **Documentation**: https://docs.agno.com
- **GitHub**: https://github.com/agno-agi/agno
- **Examples**: https://github.com/agno-agi/agno/tree/main/cookbook
### Key Concepts to Remember
- **Always close MCP connections**: Use try/finally blocks or async context managers
- **Enable debug mode for troubleshooting**: `debug_mode=True` shows detailed execution logs
- **Use structured outputs for reliability**: Define Pydantic schemas with `output_schema=`
- **Persist sessions with databases**: PostgreSQL or SQLite for production agents
- **Disable telemetry if needed**: Set `AGNO_TELEMETRY=false` or `telemetry=False`
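On closing MCP connections: besides try/finally, the bundled reference files also show `MCPTools` used as an async context manager, which opens the connection on entry and closes it on exit even when an exception is raised. A sketch:

```python
import asyncio

from agno.agent import Agent
from agno.tools.mcp import MCPTools

async def run_agent(message: str) -> None:
    # Connection is closed automatically when the block exits
    async with MCPTools(command="uvx mcp-server-git") as mcp_tools:
        agent = Agent(tools=[mcp_tools])
        await agent.aprint_response(message, stream=True)

asyncio.run(run_agent("What is the license for this project?"))
```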
## scripts/
Add helper scripts here for common automation tasks.
## assets/
Add templates, boilerplate, or example projects here.
## Notes
- This skill was automatically generated from official Agno documentation
- Reference files preserve structure and examples from source docs
- Code examples include language detection for better syntax highlighting
- Quick reference patterns are extracted from real-world usage in the docs
- All examples are tested and production-ready
## Updating
To refresh this skill with updated documentation:
1. Re-run the scraper with the same configuration
2. The skill will be rebuilt with the latest information from docs.agno.com
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### references/agentos.md
```markdown
# Agno - Agentos
**Pages:** 22
---
## Multiple MCP Servers
**URL:** llms-txt#multiple-mcp-servers
**Contents:**
- Using multiple `MCPTools` instances
Source: https://docs.agno.com/concepts/tools/mcp/multiple-servers
Understanding how to connect to multiple MCP servers with Agno
Agno's MCP integration also supports handling connections to multiple servers, specifying server parameters and using your own MCP servers
There are two approaches to this:
1. Using multiple `MCPTools` instances
2. Using a single `MultiMCPTools` instance
## Using multiple `MCPTools` instances
```python multiple_mcp_servers.py
import asyncio
import os

from agno.agent import Agent
from agno.tools.mcp import MCPTools

async def run_agent(message: str) -> None:
    """Run the Airbnb and Google Maps agent with the given message."""
    env = {
        **os.environ,
        "GOOGLE_MAPS_API_KEY": os.getenv("GOOGLE_MAPS_API_KEY"),
    }
    # Initialize and connect to multiple MCP servers
    airbnb_tools = MCPTools(command="npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")
    google_maps_tools = MCPTools(command="npx -y @modelcontextprotocol/server-google-maps", env=env)
    await airbnb_tools.connect()
    await google_maps_tools.connect()
    try:
        agent = Agent(
            tools=[airbnb_tools, google_maps_tools],
            markdown=True,
        )
        await agent.aprint_response(message, stream=True)
    finally:
        await airbnb_tools.close()
        await google_maps_tools.close()
```
---
## Bring Your Own FastAPI App
**URL:** llms-txt#bring-your-own-fastapi-app
**Contents:**
- Quick Start
Source: https://docs.agno.com/agent-os/customize/custom-fastapi
Learn how to use your own FastAPI app in your AgentOS
AgentOS is built on FastAPI, which means you can easily integrate your existing FastAPI applications or add custom routes and routers to extend your agent's capabilities.
The simplest way to bring your own FastAPI app is to pass it to the AgentOS constructor:
```python theme={null}
from fastapi import FastAPI
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.os import AgentOS
---
## This is the URL of the MCP server we want to use.
**URL:** llms-txt#this-is-the-url-of-the-mcp-server-we-want-to-use.
server_url = "http://localhost:7777/mcp"
async def run_agent(message: str) -> None:
    async with MCPTools(transport="streamable-http", url=server_url) as mcp_tools:
        agent = Agent(
            model=Claude(id="claude-sonnet-4-0"),
            tools=[mcp_tools],
            markdown=True,
        )
        await agent.aprint_response(input=message, stream=True, markdown=True)
---
## Custom FastAPI app
**URL:** llms-txt#custom-fastapi-app
app: FastAPI = FastAPI(
    title="Custom FastAPI App",
    version="1.0.0",
)
---
## Understanding Server Parameters
**URL:** llms-txt#understanding-server-parameters
Source: https://docs.agno.com/concepts/tools/mcp/server-params
Understanding how to configure the server parameters for the MCPTools and MultiMCPTools classes
The recommended way to configure `MCPTools` is to use the `command` or `url` parameters.
Alternatively, you can use the `server_params` parameter with `MCPTools` to configure the connection to the MCP server in more detail.
When using the **stdio** transport, the `server_params` parameter should be an instance of `StdioServerParameters`. It contains the following keys:
* `command`: The command to run the MCP server.
* Use `npx` for mcp servers that can be installed via npm (or `node` if running on Windows).
* Use `uvx` for mcp servers that can be installed via uvx.
* Use custom binary executables (e.g., `./my-server`, `../usr/local/bin/my-server`, or binaries in your PATH).
* `args`: The arguments to pass to the MCP server.
* `env`: Optional environment variables to pass to the MCP server. Remember to include all current environment variables in the `env` dictionary. If `env` is not provided, the current environment variables will be used.
e.g.
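The elided example would look roughly like this (a sketch; `StdioServerParameters` is provided by the `mcp` Python package, and the server command is illustrative):

```python
import os

from mcp import StdioServerParameters
from agno.tools.mcp import MCPTools

server_params = StdioServerParameters(
    command="uvx",
    args=["mcp-server-git"],
    env={**os.environ},  # include the current environment variables
)
mcp_tools = MCPTools(server_params=server_params)
```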
When using the **SSE** transport, the `server_params` parameter should be an instance of `SSEClientParams`. It contains the following fields:
* `url`: The URL of the MCP server.
* `headers`: Headers to pass to the MCP server (optional).
* `timeout`: Timeout for the connection to the MCP server (optional).
* `sse_read_timeout`: Timeout for the SSE connection itself (optional).
When using the **Streamable HTTP** transport, the `server_params` parameter should be an instance of `StreamableHTTPClientParams`. It contains the following fields:
* `url`: The URL of the MCP server.
* `headers`: Headers to pass to the MCP server (optional).
* `timeout`: Timeout for the connection to the MCP server (optional).
* `sse_read_timeout`: how long (in seconds) the client will wait for a new event before disconnecting. All other HTTP operations are controlled by `timeout` (optional).
* `terminate_on_close`: Whether to terminate the connection when the client is closed (optional).
---
## Add Agno JWT middleware to your custom FastAPI app
**URL:** llms-txt#add-agno-jwt-middleware-to-your-custom-fastapi-app
app.add_middleware(
    JWTMiddleware,
    secret_key=JWT_SECRET,
    excluded_route_paths=[
        "/auth/login"
    ],  # We don't want to validate the token for the login endpoint
    validate=True,  # Set validate to False to skip token validation
)
---
## Get all routes
**URL:** llms-txt#get-all-routes
**Contents:**
- Developer Resources
routes = agent_os.get_routes()
for route in routes:
    print(f"Route: {route.path}")
    if hasattr(route, 'methods'):
        print(f"Methods: {route.methods}")
```
## Developer Resources
* [AgentOS Reference](/reference/agent-os/agent-os)
* [Full Example](/examples/agent-os/custom-fastapi)
* [FastAPI Documentation](https://fastapi.tiangolo.com/)
---
## Initialize and connect to the MCP server
**URL:** llms-txt#initialize-and-connect-to-the-mcp-server
---
## -*- FastAPI running on ECS
**URL:** llms-txt#-*--fastapi-running-on-ecs
prd_fastapi = FastApi(
    ...
    # To enable HTTPS, create an ACM certificate and add the ARN below:
    load_balancer_enable_https=True,
    load_balancer_certificate_arn="arn:aws:acm:us-east-1:497891874516:certificate/6598c24a-d4fc-4f17-8ee0-0d3906eb705f",
    ...
)
```bash
# terminal
ag infra up --env prd --infra aws --name listener
# shorthand
ag infra up -e prd -i aws -n listener

# patch the listeners
ag infra patch --env prd --infra aws --name listener
ag infra patch -e prd -i aws -n listener
```
After this, all HTTP requests should redirect to HTTPS automatically.
4. Create new Loadbalancer Listeners: create new listeners for the loadbalancer to pick up the HTTPS configuration. The certificate should be `Issued` before applying it. After this, `https` should be working on your custom domain.
5. Update existing listeners to redirect HTTP to HTTPS.
---
## Can also use custom binaries: command="./my-mcp-server"
**URL:** llms-txt#can-also-use-custom-binaries:-command="./my-mcp-server"
mcp_tools = MCPTools(command="uvx mcp-server-git")
await mcp_tools.connect()
try:
    agent = Agent(model=OpenAIChat(id="gpt-5-mini"), tools=[mcp_tools])
    await agent.aprint_response("What is the license for this project?", stream=True)
finally:
    # Always close the connection when done
    await mcp_tools.close()
```python
import asyncio
import os

from agno.agent import Agent
from agno.tools.mcp import MultiMCPTools

async def run_agent(message: str) -> None:
    """Run the Airbnb and Google Maps agent with the given message."""
    env = {
        **os.environ,
        "GOOGLE_MAPS_API_KEY": os.getenv("GOOGLE_MAPS_API_KEY"),
    }
    # Initialize and connect to multiple MCP servers
    mcp_tools = MultiMCPTools(
        commands=[
            "npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt",
            "npx -y @modelcontextprotocol/server-google-maps",
        ],
        env=env,
    )
    await mcp_tools.connect()
    try:
        agent = Agent(
            tools=[mcp_tools],
            markdown=True,
        )
        await agent.aprint_response(message, stream=True)
    finally:
        # Always close the connection when done
        await mcp_tools.close()
```
You can also use multiple MCP servers at once, with the `MultiMCPTools` class.
---
## app.router.routes.append(route)
**URL:** llms-txt#app.router.routes.append(route)
**Contents:**
- Middleware and Dependencies
app = agent_os.get_app()

if __name__ == "__main__":
    """Run our AgentOS.
    You can see the docs at:
    http://localhost:7777/docs
    """
    agent_os.serve(app="custom_fastapi_app:app", reload=True)

```python
from fastapi import FastAPI, Depends, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.security import HTTPBearer
```
## Middleware and Dependencies
You can add middleware and dependencies to your custom FastAPI app.
---
## Initialize and connect to the SSE MCP server
**URL:** llms-txt#initialize-and-connect-to-the-sse-mcp-server
mcp_tools = MCPTools(url=server_url, transport="sse")
await mcp_tools.connect()
try:
    agent = Agent(model=OpenAIChat(id="gpt-5-mini"), tools=[mcp_tools])
    await agent.aprint_response("What is the license for this project?", stream=True)
finally:
    # Always close the connection when done
    await mcp_tools.close()
```python
from agno.tools.mcp import MCPTools, SSEClientParams

server_params = SSEClientParams(
    url=...,
    headers=...,
    timeout=...,
    sse_read_timeout=...,
)
```
You can also use the `server_params` argument to define the MCP connection. This way you can specify the headers to send to the MCP server with every request, and the timeout values.
---
## Create custom FastAPI app
**URL:** llms-txt#create-custom-fastapi-app
app = FastAPI(
    title="Example Custom App",
    version="1.0.0",
)
---
## Run infinity server with reranking model
**URL:** llms-txt#run-infinity-server-with-reranking-model
infinity_emb v2 --model-id BAAI/bge-reranker-base --port 7997
Wait for the engine to start.
For better performance, you can use larger models:
---
## Example: Run a web server
**URL:** llms-txt#example:-run-a-web-server
agent.print_response(
    "Create a simple FastAPI web server that displays 'Hello from E2B Sandbox!' and run it to get a public URL"
)
---
## Create your custom FastAPI app
**URL:** llms-txt#create-your-custom-fastapi-app
app = FastAPI(title="My Custom App")
---
## Agno Telemetry
**URL:** llms-txt#agno-telemetry
**Contents:**
- Disabling Telemetry
Source: https://docs.agno.com/concepts/telemetry
Understanding what Agno logs
Agno automatically logs anonymised data about agents, teams and workflows, as well as AgentOS configurations.
This helps us improve the Agno platform and provide better support.
<Note>
No sensitive data is sent to the Agno servers. Telemetry is only used to improve the Agno platform.
</Note>
Agno logs the following:
* Agent runs
* Team runs
* Workflow runs
* AgentOS Launches
Below is an example of the payload sent to the Agno servers for an agent run:
## Disabling Telemetry
You can disable this by setting `AGNO_TELEMETRY=false` in your environment or by setting `telemetry=False` on the agent, team, workflow or AgentOS.
See the [Agent class reference](/reference/agents/agent) for more details.
---
## Start the database and MCP Toolbox servers
**URL:** llms-txt#start-the-database-and-mcp-toolbox-servers
---
## Initialize and connect to the Streamable HTTP MCP server
**URL:** llms-txt#initialize-and-connect-to-the-streamable-http-mcp-server
mcp_tools = MCPTools(url="https://docs.agno.com/mcp", transport="streamable-http")
await mcp_tools.connect()
try:
    agent = Agent(model=OpenAIChat(id="gpt-5-mini"), tools=[mcp_tools])
    await agent.aprint_response("What can you tell me about MCP support in Agno?", stream=True)
finally:
    # Always close the connection when done
    await mcp_tools.close()
```python
from agno.tools.mcp import MCPTools, StreamableHTTPClientParams

server_params = StreamableHTTPClientParams(
    url=...,
    headers=...,
    timeout=...,
    sse_read_timeout=...,
    terminate_on_close=...,
)
```
You can also use the `server_params` argument to define the MCP connection. This way you can specify the headers to send to the MCP server with every request, and the timeout values.
---
## Lifespan
**URL:** llms-txt#lifespan
Source: https://docs.agno.com/agent-os/customize/os/lifespan
Complete AgentOS setup with custom lifespan
You can customize the lifespan context manager of the AgentOS.
This allows you to run code before and after the AgentOS is started and stopped.
For example, you can use this to:
* Connect to a database
* Log information
* Setup a monitoring system
<Tip>
See [FastAPI documentation](https://fastapi.tiangolo.com/advanced/events/#lifespan-events) for more information about the lifespan context manager.
</Tip>
```python custom_lifespan.py theme={null}
from contextlib import asynccontextmanager
from agno.agent import Agent
from agno.db.postgres import PostgresDb
from agno.models.anthropic import Claude
from agno.os import AgentOS
from agno.utils.log import log_info
---
## Custom FastAPI App with JWT Middleware
**URL:** llms-txt#custom-fastapi-app-with-jwt-middleware
**Contents:**
- Code
Source: https://docs.agno.com/examples/agent-os/middleware/custom-fastapi-jwt
Custom FastAPI application with JWT middleware for authentication and AgentOS integration
This example demonstrates how to integrate JWT middleware with your custom FastAPI application and then add AgentOS functionality on top.
```python custom_fastapi_jwt.py theme={null}
from datetime import datetime, timedelta, UTC
import jwt
from agno.agent import Agent
from agno.db.postgres import PostgresDb
from agno.models.openai import OpenAIChat
from agno.os import AgentOS
from agno.os.middleware import JWTMiddleware
from agno.tools.duckduckgo import DuckDuckGoTools
from fastapi import FastAPI, Form, HTTPException
---
## Initialize and connect using server parameters
**URL:** llms-txt#initialize-and-connect-using-server-parameters
**Contents:**
- Complete example
mcp_tools = MCPTools(server_params=server_params, transport="streamable-http")
await mcp_tools.connect()
try:
    # Use mcp_tools with your agent
    pass
finally:
    await mcp_tools.close()
```python streamable_http_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar_assistant")

@mcp.tool()
def get_events(day: str) -> str:
    return f"There are no events scheduled for {day}."

@mcp.tool()
def get_birthdays_this_week() -> str:
    return "It is your mom's birthday tomorrow"

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```

```python streamable_http_client.py
import asyncio

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools, MultiMCPTools

# This is the URL of the MCP server we want to use.
server_url = "http://localhost:8000/mcp"

async def run_agent(message: str) -> None:
    # Initialize and connect to the Streamable HTTP MCP server
    mcp_tools = MCPTools(transport="streamable-http", url=server_url)
    await mcp_tools.connect()
    try:
        agent = Agent(
            model=OpenAIChat(id="gpt-5-mini"),
            tools=[mcp_tools],
            markdown=True,
        )
        await agent.aprint_response(message=message, stream=True, markdown=True)
    finally:
        await mcp_tools.close()

# Using MultiMCPTools, we can connect to multiple MCP servers at once,
# even if they use different transports. In this example we connect to both
# our example server (Streamable HTTP transport) and a different server (stdio transport).
async def run_agent_with_multimcp(message: str) -> None:
    # Initialize and connect to multiple MCP servers with different transports
    mcp_tools = MultiMCPTools(
        commands=["npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt"],
        urls=[server_url],
        urls_transports=["streamable-http"],
    )
    await mcp_tools.connect()
    try:
        agent = Agent(
            model=OpenAIChat(id="gpt-5-mini"),
            tools=[mcp_tools],
            markdown=True,
        )
        await agent.aprint_response(message=message, stream=True, markdown=True)
    finally:
        await mcp_tools.close()

if __name__ == "__main__":
    asyncio.run(run_agent("Do I have any birthdays this week?"))
    asyncio.run(
        run_agent_with_multimcp(
            "Can you check when is my mom's birthday, and if there are any AirBnb listings in SF for two people for that day?"
        )
    )
```

```bash
# Run the server, then the client
python streamable_http_server.py
python streamable_http_client.py
```
## Complete example
The snippets above set up a simple local server, connect to it using the Streamable HTTP transport, and then run the server followed by the client.
```
### references/getting_started.md
```markdown
# Agno - Getting Started
**Pages:** 28
---
## Agent Infra AWS
**URL:** llms-txt#agent-infra-aws
**Contents:**
- Next
Source: https://docs.agno.com/templates/agent-infra-aws/introduction
The Agent Infra AWS template provides a simple AWS infrastructure for running AgentOS. It contains:
* An AgentOS instance, serving Agents, Teams, Workflows and utilities using FastAPI.
* A PostgreSQL database for storing sessions, memories and knowledge.
You can run your Agent Infra AWS locally as well as on AWS. This guide goes over the local setup first.
<Snippet file="setup.mdx" />
<Snippet file="create-agent-infra-aws-codebase.mdx" />
<Snippet file="run-agent-infra-aws-local.mdx" />
Congratulations on running your Agent Infra AWS locally. Next Steps:
* Read how to [update infra settings](/templates/infra-management/infra-settings)
* Read how to [create a git repository for your workspace](/templates/infra-management/git-repo)
* Read how to [manage the development application](/templates/infra-management/development-app)
* Read how to [format and validate your code](/templates/infra-management/format-and-validate)
* Read how to [add python libraries](/templates/infra-management/install)
* Chat with us on [discord](https://agno.link/discord)
---
## What is Agno?
**URL:** llms-txt#what-is-agno?
Source: https://docs.agno.com/introduction
**Agno is an incredibly fast multi-agent framework, runtime and control plane.**
Use it to build multi-agent systems with memory, knowledge, human in the loop and MCP support. You can orchestrate agents as multi-agent teams (more autonomy) or step-based agentic workflows (more control).
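The autonomy-versus-control tradeoff can be sketched without any framework: a step-based workflow fixes the execution order in code, each step consuming the previous step's output. A minimal illustration (the function names are ours, not Agno's API):

```python
from typing import Callable

def research(topic: str) -> str:
    # Stand-in for an agent that gathers information
    return f"notes on {topic}"

def write(notes: str) -> str:
    # Stand-in for an agent that drafts content from the notes
    return f"draft based on {notes}"

def run_workflow(steps: list[Callable[[str], str]], initial: str) -> str:
    # Step-based control: the developer, not the model, fixes the order.
    result = initial
    for step in steps:
        result = step(result)
    return result

output = run_workflow([research, write], "AI agents")
```

A team, by contrast, would let a coordinating model decide which member to call and when.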
Here’s an example of an Agent that connects to an MCP server, manages conversation state in a database, and is served using a FastAPI application that you can interact with using the [AgentOS UI](https://os.agno.com).
```python agno_agent.py lines theme={null}
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.anthropic import Claude
from agno.os import AgentOS
from agno.tools.mcp import MCPTools
```
---
## Install dependencies
**URL:** llms-txt#install-dependencies
```bash
pip install "agno[infra]" openai exa_py python-dotenv
```
4. Create a new project with [AgentOS](/agent-os/introduction):
```bash
ag infra create # Choose: [1] agent-infra-docker (default)
```
---
## Ask the agent to process web content
**URL:** llms-txt#ask-the-agent-to-process-web-content
**Contents:**
- 3. Google Places Crawler
```python theme={null}
agent.print_response("Summarize the content from https://docs.agno.com/introduction", markdown=True)
```
### 3. Google Places Crawler
The [Google Places Crawler](https://apify.com/compass/crawler-google-places) extracts data about businesses from Google Maps and Google Places.
```python theme={null}
from agno.agent import Agent
from agno.tools.apify import ApifyTools

agent = Agent(
    tools=[
        ApifyTools(actors=["compass/crawler-google-places"])
    ]
)
```
---
## Use the agent to get website content
**URL:** llms-txt#use-the-agent-to-get-website-content
**Contents:**
- Available Apify Tools
- 1. RAG Web Browser
## Available Apify Tools
You can easily integrate any Apify Actor as a tool. Here are some examples:
### 1. RAG Web Browser
The [RAG Web Browser](https://apify.com/apify/rag-web-browser) Actor is specifically designed for AI and LLM applications. It searches the web for a query or processes a URL, then cleans and formats the content for your agent. This tool is enabled by default.
```python theme={null}
from agno.agent import Agent
from agno.tools.apify import ApifyTools

agent = Agent(
    tools=[
        ApifyTools(actors=["apify/rag-web-browser"])
    ],
    markdown=True
)

agent.print_response("What information can you find on https://docs.agno.com/introduction ?", markdown=True)
```
---
## A2A
**URL:** llms-txt#a2a
**Contents:**
- Setup
Source: https://docs.agno.com/agent-os/interfaces/a2a/introduction
Expose your Agno Agent via the A2A protocol
Google's [Agent-to-Agent Protocol (A2A)](https://a2a-protocol.org/latest/topics/what-is-a2a/) aims to create a standard way for Agents to communicate with each other.
Agno integrates seamlessly with A2A, allowing you to expose your Agno Agents and Teams in an A2A-compatible way.
This is done with our `A2A` interface, which you can use with our [AgentOS](/agent-os/introduction) runtime.
You just need to set `a2a_interface=True` when creating your `AgentOS` instance and serve it as normal:
By default all the Agents, Teams and Workflows in the AgentOS will be exposed via `A2A`.
You can also specify which Agents, Teams and Workflows to expose:
```python a2a-interface-initialization.py theme={null}
from agno.agent import Agent
from agno.os import AgentOS
from agno.os.interfaces.a2a import A2A
agent = Agent(name="My Agno Agent")
```
---
## Introduction to Knowledge
**URL:** llms-txt#introduction-to-knowledge
**Contents:**
- The Problem with Knowledge-Free Agents
- Real-World Impact
- Intelligent Text-to-SQL Agents
- Customer Support Excellence
- Internal Knowledge Assistant
- Ready to Get Started?
Source: https://docs.agno.com/concepts/knowledge/overview
Understand why Knowledge is essential for building intelligent, context-aware AI agents that provide accurate, relevant responses.
Imagine asking an AI agent about your company's HR policies, and instead of generic advice, it gives you precise answers based on your actual employee handbook. Or picture a customer support agent that knows your specific product details, pricing, and troubleshooting guides. This is the power of Knowledge in Agno.
## The Problem with Knowledge-Free Agents
Without access to specific information, AI agents can only rely on their general training data. This leads to:
* **Generic responses** that don't match your specific context
* **Outdated information** from training data that's months or years old
* **Hallucinations** when the agent guesses at facts it doesn't actually know
* **Limited usefulness** for domain-specific tasks or company-specific workflows
### Intelligent Text-to-SQL Agents
Build agents that know your exact database schema, column names, and common query patterns. Instead of guessing at table structures, they retrieve the specific schema information needed for each query, ensuring accurate SQL generation.
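A framework-free sketch of that retrieval step: fetch the table's real columns from the database before composing SQL, so generation is grounded in the actual schema (stdlib `sqlite3` only; the table here is hypothetical):

```python
import sqlite3

def get_schema(conn: sqlite3.Connection, table: str) -> list[str]:
    # PRAGMA table_info returns one row per column; row[1] is the column name.
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, hire_date TEXT)")

# An agent would inject this schema into its prompt before writing any SQL.
schema = get_schema(conn, "employees")
```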
### Customer Support Excellence
Create a support agent with access to your complete product documentation, FAQ database, and troubleshooting guides. Customers get accurate answers instantly, without waiting for human agents to look up information.
### Internal Knowledge Assistant
Deploy an agent that knows your company's processes, policies, and institutional knowledge. New employees can get onboarding help, and existing team members can quickly find answers to complex procedural questions.
## Ready to Get Started?
Transform your agents from generic assistants to domain experts:
<CardGroup cols={2}>
<Card title="Learn How It Works" icon="book-open" href="/concepts/knowledge/how-it-works">
Understand the simple RAG pipeline behind intelligent knowledge retrieval
</Card>
<Card title="Build Your First Agent" icon="rocket" href="/concepts/knowledge/getting-started">
Follow our quick tutorial to create a knowledge-powered agent in minutes
</Card>
</CardGroup>
---
## Getting Help
**URL:** llms-txt#getting-help
**Contents:**
- Need help?
- Building with Agno?
- Looking for dedicated support?
Source: https://docs.agno.com/introduction/getting-help
Connect with builders, get support, and explore Agent Engineering.
Head over to our [community forum](https://agno.link/community) for help and insights from the team.
## Building with Agno?
Share what you're building on [X](https://agno.link/x), [LinkedIn](https://www.linkedin.com/company/agno-agi) or join our [Discord](https://agno.link/discord) to connect with other builders.
## Looking for dedicated support?
We've helped many companies turn ideas into AI products. [Book a call](https://cal.com/team/agno/intro) to get started.
---
## Introduction
**URL:** llms-txt#introduction
**Contents:**
- Setup
- Examples
Source: https://docs.agno.com/examples/getting-started/introduction
This guide walks through the basics of building Agents with Agno.
The examples build on each other, introducing new concepts and capabilities progressively. Each example contains detailed comments, example prompts, and required dependencies.
Create a virtual environment:
Install the required dependencies:
Export your OpenAI API key:
<CardGroup cols={3}>
<Card title="Basic Agent" icon="robot" iconType="duotone" href="./01-basic-agent">
Build a news reporter with a vibrant personality. This Agent only shows basic LLM inference.
</Card>
<Card title="Agent with Tools" icon="toolbox" iconType="duotone" href="./02-agent-with-tools">
Add web search capabilities using DuckDuckGo for real-time information gathering.
</Card>
<Card title="Agent with Knowledge" icon="brain" iconType="duotone" href="./03-agent-with-knowledge">
Add a vector database to your agent to store and search knowledge.
</Card>
<Card title="Agent with Storage" icon="database" iconType="duotone" href="./06-agent-with-storage">
Add persistence to your agents with session management and history capabilities.
</Card>
<Card title="Agent Team" icon="users" iconType="duotone" href="./17-agent-team">
Create an agent team specializing in market research and financial analysis.
</Card>
<Card title="Structured Output" icon="code" iconType="duotone" href="./05-structured-output">
Generate a structured output using a Pydantic model.
</Card>
<Card title="Custom Tools" icon="wrench" iconType="duotone" href="./04-write-your-own-tool">
Create and integrate custom tools with your agent.
</Card>
<Card title="Research Agent" icon="magnifying-glass" iconType="duotone" href="./18-research-agent-exa">
Build an AI research agent using Exa with controlled output steering.
</Card>
<Card title="Research Workflow" icon="diagram-project" iconType="duotone" href="./19-blog-generator-workflow">
Create a research workflow combining web searches and content scraping.
</Card>
<Card title="Image Agent" icon="image" iconType="duotone" href="./13-image-agent">
Create an agent that can understand images.
</Card>
<Card title="Image Generation" icon="paintbrush" iconType="duotone" href="./14-image-generation">
Create an Agent that can generate images using DALL-E.
</Card>
<Card title="Video Generation" icon="video" iconType="duotone" href="./15-video-generation">
Create an Agent that can generate videos using ModelsLabs.
</Card>
<Card title="Audio Agent" icon="microphone" iconType="duotone" href="./16-audio-agent">
Create an Agent that can process audio input and generate responses.
</Card>
<Card title="Agent with State" icon="database" iconType="duotone" href="./07-agent-state">
Create an Agent with session state management.
</Card>
<Card title="Agent Context" icon="sitemap" iconType="duotone" href="./08-agent-context">
Evaluate dependencies at agent.run and inject them into the instructions.
</Card>
<Card title="Agent Session" icon="clock-rotate-left" iconType="duotone" href="./09-agent-session">
Create an Agent with persistent session memory across conversations.
</Card>
<Card title="User Memories" icon="memory" iconType="duotone" href="./10-user-memories-and-summaries">
Create an Agent that stores user memories and summaries.
</Card>
<Card title="Function Retries" icon="rotate" iconType="duotone" href="./11-retry-function-call">
Handle function retries for failed or unsatisfactory outputs.
</Card>
<Card title="Human in the Loop" icon="user-check" iconType="duotone" href="./12-human-in-the-loop">
Add user confirmation and safety checks for interactive agent control.
</Card>
</CardGroup>
Each example includes runnable code and detailed explanations. We recommend following them in order, as concepts build upon previous examples.
---
## Examples Gallery
**URL:** llms-txt#examples-gallery
**Contents:**
- Getting Started
- Use Cases
- Agent Concepts
- Models
Source: https://docs.agno.com/examples/introduction
Explore Agno's example gallery showcasing everything from single-agent tasks to sophisticated multi-agent workflows.
Welcome to Agno's example gallery! Here you'll discover examples showcasing everything from **single-agent tasks** to sophisticated **multi-agent workflows**. You can either:
* Run the examples individually
* Clone the entire [Agno cookbook](https://github.com/agno-agi/agno/tree/main/cookbook)
Have an interesting example to share? Please consider [contributing](https://github.com/agno-agi/agno-docs) to our growing collection.
If you're just getting started, follow the [Getting Started](/examples/getting-started) guide for a step-by-step tutorial. The examples build on each other, introducing new concepts and capabilities progressively.
Build real-world applications with Agno.
<CardGroup cols={3}>
<Card title="Simple Agents" icon="user-astronaut" iconType="duotone" href="/examples/use-cases/agents">
Simple agents for web scraping, data processing, financial analysis, etc.
</Card>
<Card title="Multi-Agent Teams" icon="people-group" iconType="duotone" href="/examples/use-cases/teams/">
Multi-agent teams that collaborate to solve tasks.
</Card>
<Card title="Advanced Workflows" icon="diagram-project" iconType="duotone" href="/examples/use-cases/workflows/">
Advanced workflows for creating blog posts, investment reports, etc.
</Card>
</CardGroup>
Explore Agent concepts with detailed examples.
<CardGroup cols={3}>
<Card title="Multimodal" icon="image" iconType="duotone" href="/examples/concepts/multimodal">
Learn how to use multimodal Agents
</Card>
<Card title="Knowledge" icon="brain-circuit" iconType="duotone" href="/examples/concepts/knowledge">
Add domain-specific knowledge to your Agents
</Card>
<Card title="RAG" icon="book-bookmark" iconType="duotone" href="/examples/concepts/knowledge/rag">
Learn how to use Agentic RAG
</Card>
<Card title="Hybrid search" icon="magnifying-glass-plus" iconType="duotone" href="/examples/concepts/knowledge/search_type/hybrid-search">
Combine semantic and keyword search
</Card>
<Card title="Memory" icon="database" iconType="duotone" href="/examples/concepts/memory">
Let Agents remember past conversations
</Card>
<Card title="Tools" icon="screwdriver-wrench" iconType="duotone" href="/examples/concepts/tools">
Extend your Agents with hundreds of tools
</Card>
<Card title="Storage" icon="hard-drive" iconType="duotone" href="/examples/concepts/db">
Store Agent sessions in a database
</Card>
<Card title="Vector Databases" icon="database" iconType="duotone" href="/examples/concepts/vectordb">
Store Knowledge in Vector Databases
</Card>
<Card title="Embedders" icon="database" iconType="duotone" href="/examples/concepts/knowledge/embedders">
Convert text to embeddings stored in vector databases
</Card>
</CardGroup>
Explore different models with Agno.
<CardGroup cols={3}>
<Card title="OpenAI" icon="network-wired" iconType="duotone" href="/examples/models/openai">
Examples using OpenAI GPT models
</Card>
<Card title="Ollama" icon="laptop-code" iconType="duotone" href="/examples/models/ollama">
Examples using Ollama models locally
</Card>
<Card title="Anthropic" icon="network-wired" iconType="duotone" href="/examples/models/anthropic">
Examples using Anthropic models like Claude
</Card>
<Card title="Cohere" icon="brain-circuit" iconType="duotone" href="/examples/models/cohere">
Examples using Cohere command models
</Card>
<Card title="DeepSeek" icon="circle-nodes" iconType="duotone" href="/examples/models/deepseek">
Examples using DeepSeek models
</Card>
<Card title="Gemini" icon="google" iconType="duotone" href="/examples/models/gemini">
Examples using Google Gemini models
</Card>
<Card title="Groq" icon="bolt" iconType="duotone" href="/examples/models/groq">
Examples using Groq's fast inference
</Card>
<Card title="Mistral" icon="wind" iconType="duotone" href="/examples/models/mistral">
Examples using Mistral models
</Card>
<Card title="Azure" icon="microsoft" iconType="duotone" href="/examples/models/azure">
Examples using Azure OpenAI
</Card>
<Card title="Fireworks" icon="sparkles" iconType="duotone" href="/examples/models/fireworks">
Examples using Fireworks models
</Card>
<Card title="AWS" icon="aws" iconType="duotone" href="/examples/models/aws">
Examples using Amazon Bedrock
</Card>
<Card title="Hugging Face" icon="face-awesome" iconType="duotone" href="/examples/models/huggingface">
Examples using Hugging Face models
</Card>
<Card title="NVIDIA" icon="microchip" iconType="duotone" href="/examples/models/nvidia">
Examples using NVIDIA models
</Card>
<Card title="Nebius" icon="people-group" iconType="duotone" href="/examples/models/nebius">
Examples using Nebius AI models
</Card>
<Card title="Together" icon="people-group" iconType="duotone" href="/examples/models/together">
Examples using Together AI models
</Card>
<Card title="xAI" icon="brain-circuit" iconType="duotone" href="/examples/models/xai">
Examples using xAI models
</Card>
<Card title="LangDB" icon="rust" iconType="duotone" href="/examples/models/langdb">
Examples using LangDB AI Gateway.
</Card>
</CardGroup>
---
## Designed for Agent Engineering
**URL:** llms-txt#designed-for-agent-engineering
**Contents:**
- Core Intelligence
- Memory, Knowledge, and Persistence
- Execution & Control
- Runtime & Evaluation
- Security & Privacy
Source: https://docs.agno.com/introduction/features
Agno is built for real-world **Agent Engineering**, helping engineers build, deploy, and scale multi-agent systems in production. Here are some key features that make Agno stand out:
### Core Intelligence
* **Model Agnostic**: Works with any model provider so you can use your favorite LLMs.
* **Type Safe**: Enforce structured I/O through `input_schema` and `output_schema` for predictable, composable behavior.
* **Dynamic Context Engineering**: Inject variables, state, and retrieved data on the fly into context. Perfect for dependency-driven agents.
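The structured I/O idea can be illustrated framework-free: validate typed inputs and outputs at the agent boundary. Agno's actual `input_schema`/`output_schema` take Pydantic-style models; the dataclass version below is only an analogy:

```python
from dataclasses import dataclass

@dataclass
class StockQuery:  # plays the role of input_schema
    symbol: str

@dataclass
class StockReport:  # plays the role of output_schema
    symbol: str
    summary: str

def run_agent(query: StockQuery) -> StockReport:
    # A real agent would call an LLM here; we return a canned report.
    return StockReport(symbol=query.symbol, summary=f"{query.symbol} looks stable.")

report = run_agent(StockQuery(symbol="NVDA"))
```

Because both sides are typed, downstream code can compose agents predictably.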
### Memory, Knowledge, and Persistence
* **Persistent Storage**: Give your Agents, Teams, and Workflows a database to persist session history, state, and messages.
* **User Memory**: Built-in memory system that allows Agents to recall user-specific context across sessions.
* **Agentic RAG**: Connect to 20+ vector stores (called **Knowledge** in Agno) with hybrid search + reranking out of the box.
* **Culture (Collective Memory)**: Shared knowledge that compounds across agents and time.
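User memory, as described above, is essentially a per-user store of facts recalled across sessions. A toy stdlib version (Agno's real memory system is database-backed and far richer):

```python
from collections import defaultdict

class UserMemory:
    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = defaultdict(list)

    def remember(self, user_id: str, fact: str) -> None:
        self._facts[user_id].append(fact)

    def recall(self, user_id: str) -> list[str]:
        # In a real agent, these facts are injected into the prompt context.
        return list(self._facts[user_id])

memory = UserMemory()
memory.remember("u1", "prefers metric units")
```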
### Execution & Control
* **Human-in-the-Loop**: Native support for confirmations, manual overrides, and external tool execution.
* **Guardrails**: Built-in safeguards for validation, security, and prompt protection.
* **Agent Lifecycle Hooks**: Pre- and post-hooks to validate or transform inputs and outputs.
* **MCP Integration**: First-class support for the Model Context Protocol (MCP) to connect Agents with external systems.
* **Toolkits**: 113+ built-in toolkits with thousands of tools — ready for use across domains like data, code, web, and enterprise APIs.
### Runtime & Evaluation
* **Runtime**: Pre-built FastAPI based runtime with SSE compatible endpoints, ready for production on day 1.
* **Control Plane (UI)**: Integrated interface to visualize, monitor, and debug agent activity in real time.
* **Natively Multimodal**: Agents can process and generate text, images, audio, video, and files.
* **Evals**: Measure your Agents' Accuracy, Performance, and Reliability.
### Security & Privacy
* **Private by Design**: Runs entirely in your cloud. The UI connects directly to your AgentOS from your browser — no data is ever sent externally.
* **Data Governance**: Your data lives securely in your Agent database, with no external data sharing or vendor lock-in.
* **Access Control**: Role-based access (RBAC) and per-agent permissions to protect sensitive contexts and tools.
Every part of Agno is built for real-world deployment, where developer experience meets production performance.
---
## Singlestore
**URL:** llms-txt#singlestore
**Contents:**
- Usage
Source: https://docs.agno.com/concepts/db/singlestore
Learn to use Singlestore as a database for your Agents
Agno supports using [Singlestore](https://www.singlestore.com/) as a database with the `SingleStoreDb` class.
You can get started with Singlestore following their [documentation](https://docs.singlestore.com/db/v9.0/introduction/).
```python singlestore_for_agent.py theme={null}
from os import getenv

from agno.agent import Agent
from agno.db.singlestore import SingleStoreDb
```
---
## The agent will first search for relevant URLs, then analyze their content in detail
**URL:** llms-txt#the-agent-will-first-search-for-relevant-urls,-then-analyze-their-content-in-detail
**Contents:**
- Usage
```python theme={null}
agent.print_response(
    "Analyze the content of the following URL: https://docs.agno.com/introduction and also give me latest updates on AI agents"
)
```
```bash theme={null}
export GOOGLE_API_KEY=xxx
```
```bash theme={null}
pip install -U google-genai agno
```
```bash Mac theme={null}
python cookbook/models/google/gemini/url_context_with_search.py
```
```bash Windows theme={null}
python cookbook/models/google/gemini/url_context_with_search.py
```
</CodeGroup>
</Step>
</Steps>
---
## AG-UI
**URL:** llms-txt#ag-ui
**Contents:**
- Example usage
- Custom Events
- Core Components
- `AGUI` interface
- Initialization Parameters
- Key Method
- Endpoints
- Serving your AgentOS
- Parameters
Source: https://docs.agno.com/agent-os/interfaces/ag-ui/introduction
Expose your Agno Agent via the AG-UI protocol
AG-UI, the [Agent-User Interaction Protocol](https://github.com/ag-ui-protocol/ag-ui), standardizes how AI agents connect to front-end applications.
<Note>
**Migration from Apps**: If you're migrating from `AGUIApp`, see the [v2 migration guide](/how-to/v2-migration#7-apps-interfaces) for complete steps.
</Note>
<Steps>
<Step title="Install backend dependencies">
</Step>
<Step title="Run the backend">
Expose an Agno Agent through the AG-UI interface using `AgentOS` and `AGUI`.
<Step title="Run the frontend">
Use Dojo (`ag-ui`'s frontend) as an advanced, customizable interface for AG-UI agents.
1. Clone: `git clone https://github.com/ag-ui-protocol/ag-ui.git`
2. Install dependencies in `/ag-ui/typescript-sdk`: `pnpm install`
3. Build the Agno package in `/ag-ui/integrations/agno`: `pnpm run build`
4. Start Dojo following the instructions in the repository.
</Step>
<Step title="Chat with your Agno Agent">
With Dojo running, open `http://localhost:3000` and select your Agno agent.
</Step>
</Steps>
You can see more in our [cookbook examples](https://github.com/agno-agi/agno/tree/main/cookbook/agent_os/interfaces/agui/).
Custom events you create in your tools are automatically delivered to AG-UI in the AG-UI custom event format.
**Creating custom events:**
**Yielding from tools:**
Custom events are streamed in real-time to the AG-UI frontend.
See [Custom Events documentation](/concepts/agents/running-agents#custom-events) for more details.
* `AGUI` (interface): Wraps an Agno `Agent` or `Team` into an AG-UI compatible FastAPI router.
* `AgentOS.serve`: Serves your FastAPI app (including the AGUI router) with Uvicorn.
`AGUI` mounts protocol-compliant routes on your app.
Main entry point for AG-UI exposure.
### Initialization Parameters
| Parameter | Type | Default | Description |
| --------- | ----------------- | ------- | ---------------------- |
| `agent` | `Optional[Agent]` | `None` | Agno `Agent` instance. |
| `team` | `Optional[Team]` | `None` | Agno `Team` instance. |
Provide `agent` or `team`.
| Method | Parameters | Return Type | Description |
| ------------ | ------------------------ | ----------- | -------------------------------------------------------- |
| `get_router` | `use_async: bool = True` | `APIRouter` | Returns the AG-UI FastAPI router and attaches endpoints. |
Mounted at the interface's route prefix (root by default):
* `POST /agui`: Main entrypoint. Accepts `RunAgentInput` from `ag-ui-protocol`. Streams AG-UI events.
* `GET /status`: Health/status endpoint for the interface.
Refer to `ag-ui-protocol` docs for payload details.
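As a toy stand-in for the health endpoint described above (the real AG-UI router is FastAPI-based and its exact response shape may differ), a stdlib-only `GET /status` handler could look like:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/status") as resp:
    payload = json.load(resp)
server.shutdown()
```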
## Serving your AgentOS
Use `AgentOS.serve` to run your app with Uvicorn.
| Parameter | Type | Default | Description |
| --------- | --------------------- | ------------- | -------------------------------------- |
| `app` | `Union[str, FastAPI]` | required | FastAPI app instance or import string. |
| `host` | `str` | `"localhost"` | Host to bind. |
| `port` | `int` | `7777` | Port to bind. |
| `reload` | `bool` | `False` | Enable auto-reload for development. |
See [cookbook examples](https://github.com/agno-agi/agno/tree/main/cookbook/agent_os/interfaces/agui/) for updated interface patterns.
---
## MCP Toolbox
**URL:** llms-txt#mcp-toolbox
**Contents:**
- Prerequisites
- Quick Start
Source: https://docs.agno.com/concepts/tools/mcp/mcp-toolbox
Learn how to use MCPToolbox with Agno to connect to MCP Toolbox for Databases with tool filtering capabilities.
**MCPToolbox** enables Agents to connect to Google's [MCP Toolbox for Databases](https://googleapis.github.io/genai-toolbox/getting-started/introduction/) with advanced filtering capabilities. It extends Agno's `MCPTools` functionality to filter tools by toolset or tool name, allowing agents to load only the specific database tools they need.
You'll need the following to use MCPToolbox:
The default setup also requires Docker or Podman, which are used to run the MCP Toolbox server and database for the examples.
## Quick Start
Get started with MCPToolbox instantly using our fully functional demo.
---
## Parallel and custom function step streaming on AgentOS
**URL:** llms-txt#parallel-and-custom-function-step-streaming-on-agentos
Source: https://docs.agno.com/examples/concepts/workflows/04-workflows-parallel-execution/parallel_and_custom_function_step_streaming_agentos
This example demonstrates how to use parallel steps with custom function executors and streaming on AgentOS.
It shows how to define steps with custom function
executors, and how to stream their responses using the [AgentOS](/agent-os/introduction).
The agents and teams running inside the custom function step in `Parallel` will also stream their results to the AgentOS.
```python parallel_and_custom_function_step_streaming_agentos.py theme={null}
from typing import AsyncIterator, Union
from agno.agent import Agent
from agno.db.in_memory import InMemoryDb
from agno.db.sqlite import SqliteDb
from agno.models.openai import OpenAIChat
from agno.os import AgentOS
from agno.run.workflow import WorkflowRunOutputEvent
from agno.tools.googlesearch import GoogleSearchTools
from agno.tools.hackernews import HackerNewsTools
from agno.workflow.parallel import Parallel
from agno.workflow.step import Step, StepInput, StepOutput
from agno.workflow.workflow import Workflow
```
---
## What is AgentOS?
**URL:** llms-txt#what-is-agentos?
**Contents:**
- Overview
- Getting Started
- Security & Privacy First
- Complete Data Ownership
- Next Steps
Source: https://docs.agno.com/agent-os/introduction
The production runtime and control plane for your agentic systems
AgentOS is Agno's production-ready runtime that runs entirely within your own infrastructure, ensuring complete data privacy and control of your agentic system.
Agno also provides a beautiful web interface for managing, monitoring, and interacting with your AgentOS, with no data ever being persisted outside of your environment.
<Check>
Behind the scenes, AgentOS is a FastAPI app that you can run locally or in your cloud. It is designed to be easy to deploy and scale.
</Check>
Ready to get started with AgentOS? Here's what you need to do:
<CardGroup cols={2}>
<Card title="Create Your First OS" icon="plus" href="/agent-os/creating-your-first-os">
Set up a new AgentOS instance from scratch using our templates
</Card>
<Card title="Connect Your AgentOS" icon="link" href="/agent-os/connecting-your-os">
Learn how to connect your local development environment to the platform
</Card>
</CardGroup>
## Security & Privacy First
AgentOS is designed with enterprise security and data privacy as foundational principles, not afterthoughts.
<Frame>
<img src="https://mintcdn.com/agno-v2/Is_2Bv3MNVYdZh1v/images/agentos-secure-infra-illustration.png?fit=max&auto=format&n=Is_2Bv3MNVYdZh1v&q=85&s=b13db5d4b3c25eb5508752f7d3474b51" alt="AgentOS Security and Privacy Architecture" style={{ borderRadius: "0.5rem" }} data-og-width="3258" width="3258" data-og-height="1938" height="1938" data-path="images/agentos-secure-infra-illustration.png" data-optimize="true" data-opv="3" srcset="https://mintcdn.com/agno-v2/Is_2Bv3MNVYdZh1v/images/agentos-secure-infra-illustration.png?w=280&fit=max&auto=format&n=Is_2Bv3MNVYdZh1v&q=85&s=c548641b352030a8fee914cd49919417 280w, https://mintcdn.com/agno-v2/Is_2Bv3MNVYdZh1v/images/agentos-secure-infra-illustration.png?w=560&fit=max&auto=format&n=Is_2Bv3MNVYdZh1v&q=85&s=9640bb14a9d22619973e7efb20ab1be5 560w, https://mintcdn.com/agno-v2/Is_2Bv3MNVYdZh1v/images/agentos-secure-infra-illustration.png?w=840&fit=max&auto=format&n=Is_2Bv3MNVYdZh1v&q=85&s=82645dfaae8f0155bc3912cdfaf656cc 840w, https://mintcdn.com/agno-v2/Is_2Bv3MNVYdZh1v/images/agentos-secure-infra-illustration.png?w=1100&fit=max&auto=format&n=Is_2Bv3MNVYdZh1v&q=85&s=ba5cf9921c1b389d58216ba71ef38515 1100w, https://mintcdn.com/agno-v2/Is_2Bv3MNVYdZh1v/images/agentos-secure-infra-illustration.png?w=1650&fit=max&auto=format&n=Is_2Bv3MNVYdZh1v&q=85&s=d7ca28c6e75259c18b08783224c1a2e4 1650w, https://mintcdn.com/agno-v2/Is_2Bv3MNVYdZh1v/images/agentos-secure-infra-illustration.png?w=2500&fit=max&auto=format&n=Is_2Bv3MNVYdZh1v&q=85&s=122528a3dc3ecf7789fb1b076be48f08 2500w" />
</Frame>
### Complete Data Ownership
* **Your Infrastructure, Your Data**: AgentOS runs entirely within your cloud environment
* **Zero Data Transmission**: No conversations, logs, or metrics are sent to external services
* **Private by Default**: All processing, storage, and analytics happen locally
To learn more about AgentOS Security, check out the [AgentOS Security](/agent-os/security) page.
<CardGroup cols={2}>
<Card title="Control Plane" icon="desktop" href="/agent-os/control-plane">
Learn how to use the AgentOS control plane to manage and monitor your OSs
</Card>
<Card title="Create Your First OS" icon="rocket" href="/agent-os/creating-your-first-os">
Get started by creating your first AgentOS instance
</Card>
</CardGroup>
---
## Quickstart
**URL:** llms-txt#quickstart
**Contents:**
- Build your first Agent
Source: https://docs.agno.com/introduction/quickstart
Build and run your first Agent using Agno.
**Agents are AI programs where a language model controls the flow of execution.**
In 10 lines of code, we can build an Agent that uses tools to achieve a task.
## Build your first Agent
Instead of a toy demo, let's build an Agent that you can extend and build upon. We'll connect our agent to the Agno MCP server, and give it a database to store conversation history and state.
**This is a simple yet complete example that you can extend by connecting to any MCP server**.
```python agno_agent.py lines theme={null}
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.anthropic import Claude
from agno.os import AgentOS
from agno.tools.mcp import MCPTools
```
---
## Add documentation content
**URL:** llms-txt#add-documentation-content
knowledge.add_contents(urls=["https://docs.agno.com/introduction/agents.md"])
---
## Deploy your AgentOS
**URL:** llms-txt#deploy-your-agentos
**Contents:**
- Overview
- What is A Template?
- Here's How They Work
Source: https://docs.agno.com/deploy/introduction
How to take your AgentOS to production
## Overview
You can build, test, and improve your AgentOS locally, but to run it in production you’ll need to deploy it to your own infrastructure. Because it’s pure Python code, you’re free to deploy AgentOS anywhere. To make things easier, we’ve also put together a set of ready-to-use templates: standardized codebases you can use to quickly deploy AgentOS to your own infrastructure.
Currently supported templates:
* Docker Template: [agent-infra-docker](https://github.com/agno-agi/agent-infra-docker)
* AWS Template: [agent-infra-aws](https://github.com/agno-agi/agent-infra-aws)
* Modal Template
* Railway Template
* Render Template
* GCP Template
## What is A Template?
A template is a standardized codebase for a production AgentOS. It contains:
* An AgentOS instance using FastAPI.
* A Database for storing Sessions, Memories, Knowledge and Evals.
They are set up to run locally using Docker and on cloud providers. They're a fantastic starting point and exactly what we use for our customers. You'll need to customize them to fit your specific needs, but they'll get you started much faster.
## Here's How They Work
**Step 1**: Create your codebase using `ag infra create` and choose a template.
This will clone one of our templates and give you a starting point.
**Step 2**: `cd` into your codebase and run locally using docker: `ag infra up`
This will start your AgentOS instance and PostgreSQL database locally using docker.
**Step 3 (For AWS template)**: Run on AWS: `ag infra up prd:aws`
This will start your AgentOS instance and PostgreSQL database on AWS.
We recommend starting with the `agent-infra-docker` template and taking it from there.
<CardGroup cols={2}>
<Card title="Agent Infra Docker " icon="server" href="/templates/agent-infra-docker">
An AgentOS template with a Docker Compose file.
</Card>
<Card title="Agent Infra AWS" icon="server" href="/templates/agent-infra-aws">
An AgentOS template with AWS infrastructure.
</Card>
</CardGroup>
---
## Cancelling a Run
**URL:** llms-txt#cancelling-a-run
Source: https://docs.agno.com/concepts/teams/run-cancel
Learn how to cancel a team run.
You can cancel a run by using the `cancel_run` function on the Team.
Below is a basic example that starts a team run in a thread and cancels it from another thread, simulating how it can be done via an API. This is also supported via [AgentOS](/agent-os/introduction).
For a more complete example, see [Cancel a run](https://github.com/agno-agi/agno/tree/main/cookbook/teams/basic/team_cancel_a_run.py).
---
## Getting Started with Knowledge
**URL:** llms-txt#getting-started-with-knowledge
**Contents:**
- What You'll Build
- Prerequisites
- Step 1: Set Up Your Knowledge Base
Source: https://docs.agno.com/concepts/knowledge/getting-started
Build your first knowledge-powered agent in three simple steps with this hands-on tutorial.
Ready to build your first intelligent agent? This guide will walk you through creating a knowledge-powered agent that can answer questions about your documents in just a few minutes.
## What You'll Build
By the end of this tutorial, you'll have an agent that can:
* Read and understand your documents or website content
* Answer specific questions based on that information
* Provide sources for its responses
* Search intelligently without you having to specify what to look for
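Under the hood, "searching intelligently" means the agent ranks stored content against the query and pulls back the best matches. Agno does this with vector search (PgVector in this tutorial), but the idea can be sketched with a toy keyword scorer; every name and document below is illustrative, not part of Agno's API:

```python
def score(document: str, query: str) -> int:
    """Count query terms that appear in the document (a crude stand-in
    for real embedding similarity)."""
    doc_terms = set(document.lower().split())
    return sum(term in doc_terms for term in query.lower().split())

def search_knowledge(documents: dict, query: str, top_k: int = 1) -> list:
    """Return the names of the top_k best-matching documents."""
    ranked = sorted(documents, key=lambda name: score(documents[name], query), reverse=True)
    return ranked[:top_k]

docs = {
    "agents.md": "Agents are AI programs where a language model controls execution",
    "deploy.md": "Deploy AgentOS to production with Docker or AWS templates",
}
print(search_knowledge(docs, "how do I deploy to production"))
```

A real knowledge base replaces the scorer with embedding similarity and returns document chunks plus their sources, which is how the agent can cite where an answer came from.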
## Prerequisites
<Steps>
<Step title="Install Agno">
</Step>
<Step title="Set up your API key">
<Note>This tutorial uses OpenAI, but Agno supports [many other models](/concepts/models/overview).</Note>
</Step>
</Steps>
## Step 1: Set Up Your Knowledge Base
First, let's create a knowledge base with a vector database to store your information:
```python knowledge_agent.py theme={null}
from agno.agent import Agent
from agno.knowledge.knowledge import Knowledge
from agno.vectordb.pgvector import PgVector
from agno.models.openai import OpenAIChat
```
---
## Performance
**URL:** llms-txt#performance
**Contents:**
- Agent Performance
- Instantiation Time
Source: https://docs.agno.com/introduction/performance
Get extreme performance out of the box with Agno.
If you're building with Agno, you're guaranteed best-in-class performance by default. Our obsession with performance is necessary because even simple AI workflows can spawn hundreds of Agents and because many tasks are long-running -- stateless, horizontal scalability is key for success.
At Agno, we optimize performance across 3 dimensions:
1. **Agent performance:** We optimize static operations (instantiation, memory footprint) and runtime operations (tool calls, memory updates, history management).
2. **System performance:** The AgentOS API is async by default and has a minimal memory footprint. The system is stateless and horizontally scalable, with a focus on preventing memory leaks. It handles parallel and batch embedding generation during knowledge ingestion, metrics collection in background tasks, and other system-level optimizations.
3. **Agent reliability and accuracy:** Monitored through evals, which we’ll explore later.
## Agent Performance
Let's measure the time it takes to instantiate an Agent and its memory footprint. Here are the numbers (last measured in Oct 2025 on an Apple M4 MacBook Pro):
* **Agent instantiation:** \~3μs on average
* **Memory footprint:** \~6.6 KiB on average
We'll show below that Agno Agents instantiate **529× faster than Langgraph**, **57× faster than PydanticAI**, and **70× faster than CrewAI**. Agno Agents also use **24× lower memory than Langgraph**, **4× lower than PydanticAI**, and **10× lower than CrewAI**.
<Note>
Run time performance is bottlenecked by inference and hard to benchmark accurately, so we focus on minimizing overhead, reducing memory usage, and parallelizing tool calls.
</Note>
### Instantiation Time
Let's measure instantiation time for an Agent with 1 tool. We'll run the evaluation 1000 times to get a baseline measurement. We'll compare Agno to LangGraph, CrewAI and Pydantic AI.
<Note>
The code for this benchmark is available [here](https://github.com/agno-agi/agno/tree/main/cookbook/evals/performance). Please run the evaluation on your own machine rather than taking these results at face value.
</Note>
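To see what such a measurement looks like in plain Python, here is a minimal timing harness of the same shape. The `Agent` class below is a stand-in, not the Agno class, and the harness is a sketch rather than the linked eval code:

```python
import time

class Agent:  # stand-in for whatever class is being benchmarked
    def __init__(self, tools=None):
        self.tools = tools or []

def mean_instantiation_time(factory, runs: int = 1000) -> float:
    """Average wall-clock seconds per call to factory()."""
    start = time.perf_counter()
    for _ in range(runs):
        factory()
    return (time.perf_counter() - start) / runs

avg = mean_instantiation_time(lambda: Agent(tools=[print]))
print(f"{avg * 1e6:.2f} microseconds per instantiation")
```

Averaging over many runs smooths out timer resolution and one-off allocator costs, which is why the benchmark uses 1000 iterations rather than a single call.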
---
## Step with custom function streaming on AgentOS
**URL:** llms-txt#step-with-custom-function-streaming-on-agentos
Source: https://docs.agno.com/examples/concepts/workflows/01-basic-workflows/step_with_function_streaming_agentos
This example demonstrates how to use named steps with custom function executors and streaming on AgentOS.
This example demonstrates how to use Step objects with custom function executors, and how to stream their responses using the [AgentOS](/agent-os/introduction).
The agent and team running inside the custom function step can also stream their results directly to the AgentOS.
```python step_with_function_streaming_agentos.py theme={null}
from typing import AsyncIterator, Union
from agno.agent.agent import Agent
from agno.db.in_memory import InMemoryDb
```
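The streaming behavior itself rests on Python async generators: a custom step function yields chunks as they are produced, and the runtime forwards each one immediately instead of waiting for the full result. A framework-free sketch of that pattern (all names here are illustrative, not Agno's Step API):

```python
import asyncio
from typing import AsyncIterator

async def custom_step(message: str) -> AsyncIterator[str]:
    """A custom step executor that yields output chunks as they are produced."""
    for word in message.split():
        await asyncio.sleep(0)  # yield control, as real model calls would
        yield word

async def run_step() -> list:
    # A runtime consuming this generator can forward every chunk to the
    # client as soon as it arrives.
    return [chunk async for chunk in custom_step("streamed step output")]

chunks = asyncio.run(run_step())
print(chunks)
```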
---
## Example 1: Scrape a webpage as Markdown
**URL:** llms-txt#example-1:-scrape-a-webpage-as-markdown
**Contents:**
- Usage
```python theme={null}
agent.print_response(
    "Scrape this webpage as markdown: https://docs.agno.com/introduction",
)
```

## Usage

<Steps>
<Snippet file="create-venv-step.mdx" />
<Step title="Set your API keys">
```bash theme={null}
export OPENAI_API_KEY=xxx
export BRIGHT_DATA_API_KEY=xxx
```
</Step>
<Step title="Install libraries">
```bash theme={null}
pip install -U requests openai agno
```
</Step>
<Step title="Run Agent">
<CodeGroup>
```bash Mac theme={null}
python cookbook/tools/brightdata_tools.py
```
```bash Windows theme={null}
python cookbook/tools/brightdata_tools.py
```
</CodeGroup>
</Step>
</Steps>
---
## Eleven Labs
**URL:** llms-txt#eleven-labs
**Contents:**
- Prerequisites
- Example
Source: https://docs.agno.com/concepts/tools/toolkits/others/eleven_labs
**ElevenLabsTools** enable an Agent to perform audio generation tasks using [ElevenLabs](https://elevenlabs.io/docs/product/introduction)
## Prerequisites
You need to install the `elevenlabs` library and obtain an API key from [Eleven Labs](https://elevenlabs.io/).
Set the `ELEVEN_LABS_API_KEY` environment variable.
## Example
The following agent will use Eleven Labs to generate audio based on a user prompt.
```python cookbook/tools/eleven_labs_tools.py theme={null}
from agno.agent import Agent
from agno.tools.eleven_labs import ElevenLabsTools
```
---
## Whatsapp
**URL:** llms-txt#whatsapp
**Contents:**
- Setup
- Example Usage
- Core Components
- `Whatsapp` Interface
- Initialization Parameters
- Key Method
- Endpoints
- `GET /whatsapp/status`
- `GET /whatsapp/webhook`
- `POST /whatsapp/webhook`
Source: https://docs.agno.com/agent-os/interfaces/whatsapp/introduction
Host agents as WhatsApp applications.
Use the WhatsApp interface to serve Agents or Teams via WhatsApp. It mounts webhook routes on a FastAPI app and sends responses back to WhatsApp users and threads.
## Setup
Follow the WhatsApp setup guide in the [WhatsApp Cookbook](https://github.com/agno-agi/agno/blob/main/cookbook/agent_os/interfaces/whatsapp/readme.md).
You will need environment variables:
* `WHATSAPP_ACCESS_TOKEN`
* `WHATSAPP_PHONE_NUMBER_ID`
* `WHATSAPP_VERIFY_TOKEN`
* Optional (production): `WHATSAPP_APP_SECRET` and `APP_ENV=production`
<Note>
The user's phone number is automatically used as the `user_id` for runs. This ensures that sessions and memory are appropriately scoped to the user.
The phone number is also used for the `session_id` so a single Whatsapp conversation will be a single session. It is important to take this into account when considering session history.
</Note>
## Example Usage
Create an agent, expose it with the `Whatsapp` interface, and serve it via `AgentOS`.
See more in our [cookbook examples](https://github.com/agno-agi/agno/tree/main/cookbook/agent_os/interfaces/whatsapp/).
## Core Components
* `Whatsapp` (interface): Wraps an Agno `Agent` or `Team` for WhatsApp via FastAPI.
* `AgentOS.serve`: Serves the FastAPI app using Uvicorn.
## `Whatsapp` Interface
Main entry point for Agno WhatsApp applications.
### Initialization Parameters
| Parameter | Type | Default | Description |
| --------- | ----------------- | ------- | ---------------------- |
| `agent` | `Optional[Agent]` | `None` | Agno `Agent` instance. |
| `team` | `Optional[Team]` | `None` | Agno `Team` instance. |
Provide `agent` or `team`.
### Key Method
| Method | Parameters | Return Type | Description |
| ------------ | ------------------------ | ----------- | -------------------------------------------------- |
| `get_router` | `use_async: bool = True` | `APIRouter` | Returns the FastAPI router and attaches endpoints. |
## Endpoints
Mounted under the `/whatsapp` prefix:
### `GET /whatsapp/status`
* Health/status of the interface.
### `GET /whatsapp/webhook`
* Verifies WhatsApp webhook (`hub.challenge`).
* Returns `hub.challenge` on success; `403` on token mismatch; `500` if `WHATSAPP_VERIFY_TOKEN` missing.
### `POST /whatsapp/webhook`
* Receives WhatsApp messages and events.
* Validates signature (`X-Hub-Signature-256`); bypassed in development mode.
* Processes text, image, video, audio, and document messages via the agent/team.
* Sends replies (splits long messages; uploads and sends generated images).
* Responses: `200 {"status": "processing"}` or `{"status": "ignored"}`, `403` invalid signature, `500` errors.
---
## Agent Infra Docker
**URL:** llms-txt#agent-infra-docker
Source: https://docs.agno.com/templates/agent-infra-docker/introduction
The Agent Infra Docker template provides a simple Docker Compose file for running AgentOS. It contains:
* An AgentOS instance, serving Agents, Teams, Workflows and utilities using FastAPI.
* A PostgreSQL database for storing sessions, memories and knowledge.
<Snippet file="setup.mdx" />
<Snippet file="create-agent-infra-docker-codebase.mdx" />
<Snippet file="run-agent-infra-docker-local.mdx" />
---
```
### references/index.md
```markdown
# Agno Documentation Index
## Categories
### Agentos
**File:** `agentos.md`
**Pages:** 22
### Agents
**File:** `agents.md`
**Pages:** 834
### Examples
**File:** `examples.md`
**Pages:** 188
### Getting Started
**File:** `getting_started.md`
**Pages:** 28
### Integration
**File:** `integration.md`
**Pages:** 42
### Migration
**File:** `migration.md`
**Pages:** 6
### Other
**File:** `other.md`
**Pages:** 1373
```
### references/integration.md
```markdown
# Agno - Integration
**Pages:** 42
---
## Database Tables
**URL:** llms-txt#database-tables
**Contents:**
- Table Definition
- Create a database revision
- Migrate dev database
- Optional: Add test user
- Migrate production database
- Update the `workspace/prd_resources.py` file
Source: https://docs.agno.com/templates/infra-management/database-tables
Agno templates come pre-configured with [SQLAlchemy](https://www.sqlalchemy.org/) and [Alembic](https://alembic.sqlalchemy.org/en/latest/) to manage databases. You can use these tables to store data for your Agents, Teams and Workflows. The general way to add a table is:
1. Add the table definition to the `db/tables` directory.
2. Import the table class in the `db/tables/__init__.py` file.
3. Create a database migration.
4. Run the database migration.
## Table Definition
Let's create a `UsersTable`. Copy the following code to `db/tables/user.py`:
Update the `db/tables/__init__.py` file:
## Create a database revision
Run the alembic command to create a database migration in the dev container:
## Migrate dev database
Run the alembic command to migrate the dev database:
### Optional: Add test user
Now let's add a test user. Copy the following code to `db/tables/test_add_user.py`:
Run the script to add the test user:
## Migrate production database
We recommend migrating the production database by setting the environment variable `MIGRATE_DB = True` and restarting the production service. This runs `alembic -c db/alembic.ini upgrade head` from the entrypoint script at container startup.
### Update the `workspace/prd_resources.py` file
```python workspace/prd_resources.py theme={null}
...
```
---
## Setup the SQLite database
**URL:** llms-txt#setup-the-sqlite-database
db = SqliteDb(db_file="tmp/data.db")
---
## Setup database
**URL:** llms-txt#setup-database
db = PostgresDb(db_url="postgresql+psycopg://ai:ai@localhost:5532/ai")
---
## Initialize LanceDB vector database
**URL:** llms-txt#initialize-lancedb-vector-database
---
## Define the database URL where the vector database will be stored
**URL:** llms-txt#define-the-database-url-where-the-vector-database-will-be-stored
db_url = "/tmp/lancedb"
---
## 1. Configure vector database with embedder
**URL:** llms-txt#1.-configure-vector-database-with-embedder
```python theme={null}
vector_db = PgVector(
    table_name="company_knowledge",
    db_url="postgresql+psycopg://user:pass@localhost:5432/db",
    embedder=OpenAIEmbedder(id="text-embedding-3-small")  # Optional: defaults to OpenAIEmbedder
)
```
---
## Airbnb Mcp
**URL:** llms-txt#airbnb-mcp
**Contents:**
- Code
- Usage
Source: https://docs.agno.com/examples/use-cases/agents/airbnb_mcp
🏠 MCP Airbnb Agent - Search for Airbnb listings!
This example shows how to create an agent that uses MCP and Llama 4 to search for Airbnb listings.
<Steps>
<Snippet file="create-venv-step.mdx" />
<Step title="Set your API key">
</Step>
<Step title="Install libraries">
</Step>
<Step title="Run Agent">
<CodeGroup>
</CodeGroup>
</Step>
</Steps>
---
## SSE Transport
**URL:** llms-txt#sse-transport
Source: https://docs.agno.com/concepts/tools/mcp/transports/sse
Agno's MCP integration supports the [SSE transport](https://modelcontextprotocol.io/docs/concepts/transports#server-sent-events-sse). This transport enables server-to-client streaming, and can prove more useful than [stdio](https://modelcontextprotocol.io/docs/concepts/transports#standard-input%2Foutput-stdio) when working with restricted networks.
<Note>
This transport is no longer recommended by the MCP specification. Use the [Streamable HTTP transport](/concepts/tools/mcp/transports/streamable_http) instead.
</Note>
To use it, initialize the `MCPTools` passing the URL of the MCP server and setting the transport to `sse`:
```python theme={null}
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools

server_url = "http://localhost:8000/sse"
```
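On the wire, SSE is a stream of `data:` lines separated by blank lines. A minimal parser, shown only to illustrate the framing (this is not Agno's client code, and the payloads are invented):

```python
def parse_sse(stream: str) -> list:
    """Collect the data payload of each SSE event in a text stream."""
    events, data_lines = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            events.append("\n".join(data_lines))  # a blank line ends the event
            data_lines = []
    if data_lines:
        events.append("\n".join(data_lines))
    return events

raw = 'data: {"chunk": "Hel"}\n\ndata: {"chunk": "lo"}\n\n'
print(parse_sse(raw))
```

Because each event arrives as soon as the server emits it, the client can surface partial model output long before the full response is complete, which is the point of server-to-client streaming.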
---
## Set up vector database - stores embeddings
**URL:** llms-txt#set-up-vector-database---stores-embeddings
```python theme={null}
vector_db = PgVector(
    table_name="vectors",
    db_url="postgresql+psycopg://user:pass@localhost:5432/db"
)
```
---
## Setup your Database
**URL:** llms-txt#setup-your-database
db = SingleStoreDb(db_url=db_url)
---
## Get your Neon database URL
**URL:** llms-txt#get-your-neon-database-url
NEON_DB_URL = getenv("NEON_DB_URL")
---
## Setup the DynamoDB database
**URL:** llms-txt#setup-the-dynamodb-database
**Contents:**
- Params
- Developer Resources
```python theme={null}
class Article(BaseModel):
    title: str
    summary: str
    reference_links: List[str]

hn_researcher = Agent(
    name="HackerNews Researcher",
    model=OpenAIChat("gpt-5-mini"),
    role="Gets top stories from hackernews.",
    tools=[HackerNewsTools()],
)

web_searcher = Agent(
    name="Web Searcher",
    model=OpenAIChat("gpt-5-mini"),
    role="Searches the web for information on a topic",
    tools=[DuckDuckGoTools()],
    add_datetime_to_context=True,
)

hn_team = Team(
    name="HackerNews Team",
    model=OpenAIChat("gpt-5-mini"),
    members=[hn_researcher, web_searcher],
    db=db,
    instructions=[
        "First, search hackernews for what the user is asking about.",
        "Then, ask the web searcher to search for each story to get more information.",
        "Finally, provide a thoughtful and engaging summary.",
    ],
    output_schema=Article,
    markdown=True,
    show_members_responses=True,
)

hn_team.print_response("Write an article about the top 2 stories on hackernews")
```
<Snippet file="db-dynamodb-params.mdx" />
## Developer Resources
* View [Cookbook](https://github.com/agno-agi/agno/blob/main/cookbook/db/dynamodb/dynamo_for_team.py)
---
## Embed sentence in database
**URL:** llms-txt#embed-sentence-in-database
embeddings = GeminiEmbedder().get_embedding("The quick brown fox jumps over the lazy dog.")
---
## -*- Secrets for production database
**URL:** llms-txt#-*--secrets-for-production-database
```python theme={null}
prd_db_secret = SecretsManager(
    ...
    # Create secret from workspace/secrets/prd_db_secrets.yml
    secret_files=[infra_settings.infra_root.joinpath("infra/secrets/prd_db_secrets.yml")],
)
```

Read the secret in production apps using:

<CodeGroup>

```python FastApi theme={null}
prd_fastapi = FastApi(
    ...
    aws_secrets=[prd_secret],
    ...
)
```

```python RDS theme={null}
prd_db = DbInstance(
    ...
    aws_secret=prd_db_secret,
    ...
)
```

</CodeGroup>

Production resources can also read secrets using YAML files, but we highly recommend using [AWS Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html).
---
## JSON Files as Database
**URL:** llms-txt#json-files-as-database
**Contents:**
- Usage
Source: https://docs.agno.com/concepts/db/json
Agno supports using local JSON files as a "database" with the `JsonDb` class.
This is a simple way to store your Agent's session data without having to set up a database.
<Warning>
Using JSON files as a database is not recommended for production applications.
Use it for demos, testing, and any other use case where you don't want to set up a database.
</Warning>
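Conceptually, a JSON-file "database" just serializes session state to disk. The toy store below illustrates the idea; it is deliberately not the `JsonDb` implementation, and all names are made up:

```python
import json
import tempfile
from pathlib import Path

class TinyJsonStore:
    """Toy session store: one JSON file per session id."""

    def __init__(self, db_path: str):
        self.db_path = Path(db_path)
        self.db_path.mkdir(parents=True, exist_ok=True)

    def save(self, session_id: str, data: dict) -> None:
        (self.db_path / f"{session_id}.json").write_text(json.dumps(data))

    def load(self, session_id: str) -> dict:
        path = self.db_path / f"{session_id}.json"
        return json.loads(path.read_text()) if path.exists() else {}

store = TinyJsonStore(tempfile.mkdtemp())
store.save("session-1", {"history": ["hi", "hello!"]})
print(store.load("session-1"))
```

The limitation this makes obvious is why the warning above exists: every read and write touches the filesystem with no locking, indexing, or concurrent access guarantees.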
```python json_for_agent.py theme={null}
from agno.agent import Agent
from agno.db.json import JsonDb
```
---
## Setup the Neon database
**URL:** llms-txt#setup-the-neon-database
db = PostgresDb(db_url=NEON_DB_URL)
---
## Create MCPTools instance
**URL:** llms-txt#create-mcptools-instance
```python theme={null}
mcp_tools = MCPTools(
    transport="streamable-http",
    url="https://docs.agno.com/mcp"
)
```
---
## Performance with Database Logging
**URL:** llms-txt#performance-with-database-logging
**Contents:**
- Code
Source: https://docs.agno.com/examples/concepts/evals/performance/performance_db_logging
Learn how to store performance evaluation results in the database.
This example shows how to store evaluation results in the database.
```python theme={null}
"""Example showing how to store evaluation results in the database."""
from agno.agent import Agent
from agno.db.postgres.postgres import PostgresDb
from agno.eval.performance import PerformanceEval
from agno.models.openai import OpenAIChat
```
---
## Database URL
**URL:** llms-txt#database-url
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"
---
## Database connection
**URL:** llms-txt#database-connection
```python theme={null}
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"
db = PostgresDb(db_url=db_url)

@tool(requires_confirmation=True)
def delete_records(table_name: str, count: int) -> str:
    """Delete records from a database table.

    Args:
        table_name: Name of the table
        count: Number of records to delete

    Returns:
        str: Confirmation message
    """
    return f"Deleted {count} records from {table_name}"

@tool(requires_confirmation=True)
def send_notification(recipient: str, message: str) -> str:
    """Send a notification to a user.

    Args:
        recipient: Email or username of the recipient
        message: Notification message

    Returns:
        str: Confirmation message
    """
    return f"Sent notification to {recipient}: {message}"
```
---
## Setup the Firestore database
**URL:** llms-txt#setup-the-firestore-database
db = FirestoreDb(project_id=PROJECT_ID)
---
## Accuracy with Database Logging
**URL:** llms-txt#accuracy-with-database-logging
**Contents:**
- Code
Source: https://docs.agno.com/examples/concepts/evals/accuracy/accuracy_db_logging
Learn how to store evaluation results in the database for tracking and analysis.
This example shows how to store evaluation results in the database.
```python theme={null}
"""Example showing how to store evaluation results in the database."""
from typing import Optional

from agno.agent import Agent
from agno.db.postgres.postgres import PostgresDb
from agno.eval.accuracy import AccuracyEval, AccuracyResult
from agno.models.openai import OpenAIChat
from agno.tools.calculator import CalculatorTools
```
---
## Setup the JSON database
**URL:** llms-txt#setup-the-json-database
db = JsonDb(db_path="tmp/json_db")
---
## Get the vector database
**URL:** llms-txt#get-the-vector-database
db = Chroma(embedding_function=OpenAIEmbeddings(), persist_directory=str(chroma_db_dir))
---
## Database configuration for metrics storage
**URL:** llms-txt#database-configuration-for-metrics-storage
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"
db = PostgresDb(db_url=db_url, session_table="team_metrics_sessions")
---
## Setup your database
**URL:** llms-txt#setup-your-database
db = SqliteDb(db_file="agno.db")
---
## Allow the memories to sync with Zep database
**URL:** llms-txt#allow-the-memories-to-sync-with-zep-database
---
## Setup the Supabase database
**URL:** llms-txt#setup-the-supabase-database
db = PostgresDb(db_url=SUPABASE_DB_URL)
---
## Initialize vector database connection
**URL:** llms-txt#initialize-vector-database-connection
```python theme={null}
vector_db = Qdrant(
    collection="thai-recipes", url="http://localhost:6333", embedder=embedder
)
```
---
## Add embedding to database
**URL:** llms-txt#add-embedding-to-database
embeddings = CohereEmbedder(id="embed-english-v3.0").get_embedding("The quick brown fox jumps over the lazy dog.")
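Once sentences are embedded, retrieval compares vectors, typically by cosine similarity. A quick sketch with made-up 3-dimensional vectors (real embedders like the ones above return hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

fox = [0.9, 0.1, 0.2]  # toy stand-ins for real embedding vectors
dog = [0.8, 0.2, 0.3]
car = [0.1, 0.9, 0.1]
print(cosine_similarity(fox, dog) > cosine_similarity(fox, car))
```

Vector databases such as PgVector and Qdrant perform exactly this kind of comparison, just with indexes that make it fast across millions of stored embeddings.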
---
## JSON files as database, on Google Cloud Storage (GCS)
**URL:** llms-txt#json-files-as-database,-on-google-cloud-storage-(gcs)
**Contents:**
- Usage
Source: https://docs.agno.com/concepts/db/gcs
Agno supports using [Google Cloud Storage (GCS)](https://cloud.google.com/storage) as a database with the `GcsJsonDb` class.
Session data will be stored as JSON blobs in a GCS bucket.
You can get started with GCS following their [Get Started guide](https://cloud.google.com/docs/get-started).
```python gcs_for_agent.py theme={null}
import uuid

import google.auth
from agno.agent import Agent
from agno.db.gcs_json import GcsJsonDb
```
---
## Database connection URL
**URL:** llms-txt#database-connection-url
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"
---
## MongoDB Database
**URL:** llms-txt#mongodb-database
**Contents:**
- Usage
Source: https://docs.agno.com/concepts/db/mongodb
Learn to use MongoDB as a database for your Agents
Agno supports using [MongoDB](https://www.mongodb.com/) as a database with the `MongoDb` class.
<Tip>
**v2 Migration Support**: If you're upgrading from Agno v1, MongoDB is fully supported in the v2 migration script. See the [migration guide](/how-to/v2-migration) for details.
</Tip>
```python mongodb_for_agent.py theme={null}
from agno.agent import Agent
from agno.db.mongo import MongoDb
```
---
## Setup the database
**URL:** llms-txt#setup-the-database
**Contents:**
- Usage
```python theme={null}
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"
db = PostgresDb(db_url=db_url)

agent = Agent(
    model=VLLM(id="Qwen/Qwen2.5-7B-Instruct"),
    db=db,
    tools=[DuckDuckGoTools()],
    add_history_to_context=True,
)

agent.print_response("How many people live in Canada?")
agent.print_response("What is their national anthem called?")
```

<Note>
Ensure Postgres database is running.
</Note>

## Usage

<Steps>
<Snippet file="create-venv-step.mdx" />
<Step title="Install Libraries">
```bash theme={null}
pip install -U agno openai vllm sqlalchemy psycopg ddgs
```
</Step>
<Step title="Start Postgres database">
```bash theme={null}
./cookbook/scripts/run_pgvector.sh
```
</Step>
<Step title="Start vLLM server">
```bash theme={null}
vllm serve Qwen/Qwen2.5-7B-Instruct \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --dtype float16 \
  --max-model-len 8192 \
  --gpu-memory-utilization 0.9
```
</Step>
<Step title="Run Agent">
```bash theme={null}
python cookbook/models/vllm/db.py
```
</Step>
</Steps>
---
## Stdio Transport
**URL:** llms-txt#stdio-transport
Source: https://docs.agno.com/concepts/tools/mcp/transports/stdio
The stdio (standard input/output) transport is the default one in Agno's integration. It works best for local integrations.
To use it, initialize the `MCPTools` class with the `command` argument. The command should be the one used to run the MCP server the agent will have access to. For example, `uvx mcp-server-git` runs a [git MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/git):
```python theme={null}
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools
```
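Mechanically, stdio transport means the client spawns the server process and exchanges newline-delimited JSON-RPC messages over its stdin/stdout. A toy round-trip with a fake server process makes the mechanism concrete; the response payload here is invented for illustration and is a simplification of the real MCP handshake:

```python
import json
import subprocess
import sys

# A fake MCP-style server: reads one JSON-RPC request, writes one response.
SERVER = (
    "import json,sys\n"
    "req = json.loads(sys.stdin.readline())\n"
    "resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {'tools': ['git_status']}}\n"
    "sys.stdout.write(json.dumps(resp) + '\\n')\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
out, _ = proc.communicate(json.dumps(request) + "\n")
response = json.loads(out)
print(response["result"]["tools"])
```

Because the channel is just the child process's pipes, stdio transport needs no open network port, which is why it suits local servers started on demand by the client.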
---
## Write Your Own Tool
**URL:** llms-txt#write-your-own-tool
**Contents:**
- Code
Source: https://docs.agno.com/examples/getting-started/04-write-your-own-tool
This example shows how to create and use your own custom tool with Agno.
You can replace the Hacker News functionality with any API or service you want!
Some ideas for your own tools:
* Weather data fetcher
* Stock price analyzer
* Personal calendar integration
* Custom database queries
* Local file operations
```python custom_tools.py theme={null}
import json
from textwrap import dedent

import httpx
from agno.agent import Agent
from agno.models.openai import OpenAIChat

def get_top_hackernews_stories(num_stories: int = 10) -> str:
    """Use this function to get top stories from Hacker News.

    Args:
        num_stories (int): Number of stories to return. Defaults to 10.

    Returns:
        str: JSON string of top stories.
    """
    # Fetch top story IDs
    response = httpx.get("https://hacker-news.firebaseio.com/v0/topstories.json")
    story_ids = response.json()

    # Fetch story details
    stories = []
    for story_id in story_ids[:num_stories]:
        story_response = httpx.get(
            f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json"
        )
        story = story_response.json()
        if "text" in story:
            story.pop("text", None)
        stories.append(story)
    return json.dumps(stories)
```
---
## Persistent Session with Database
**URL:** llms-txt#persistent-session-with-database
**Contents:**
- Code
- Usage
Source: https://docs.agno.com/examples/concepts/teams/session/persistent_session
This example demonstrates how to use persistent session storage with a PostgreSQL database to maintain team conversations across multiple runs.
<Steps>
<Snippet file="create-venv-step.mdx" />
<Step title="Install required libraries">
</Step>
<Step title="Set environment variables">
</Step>
<Step title="Start PostgreSQL database">
</Step>
<Step title="Run the agent">
</Step>
</Steps>
---
## Model Context Protocol (MCP)
**URL:** llms-txt#model-context-protocol-(mcp)
Source: https://docs.agno.com/concepts/tools/mcp/mcp
Learn how to use MCP with Agno to enable your agents to interact with external systems through a standardized interface.
The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) enables Agents to interact with external systems through a standardized interface.
You can connect your Agents to any MCP server using Agno's MCP integration.
This simple example shows how to connect an Agent to the Agno MCP server:
```python agno_agent.py lines theme={null}
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.tools.mcp import MCPTools
```
---
## Load all documents into the vector database
**URL:** llms-txt#load-all-documents-into-the-vector-database
knowledge.add_contents(
    [
        {
            "path": downloaded_cv_paths[0],
            "metadata": {
                "user_id": "jordan_mitchell",
                "document_type": "cv",
                "year": 2025,
            },
        },
        {
            "path": downloaded_cv_paths[1],
            "metadata": {
                "user_id": "taylor_brooks",
                "document_type": "cv",
                "year": 2025,
            },
        },
        {
            "path": downloaded_cv_paths[2],
            "metadata": {
                "user_id": "morgan_lee",
                "document_type": "cv",
                "year": 2025,
            },
        },
        {
            "path": downloaded_cv_paths[3],
            "metadata": {
                "user_id": "casey_jordan",
                "document_type": "cv",
                "year": 2025,
            },
        },
        {
            "path": downloaded_cv_paths[4],
            "metadata": {
                "user_id": "alex_rivera",
                "document_type": "cv",
                "year": 2025,
            },
        },
    ]
)
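When every entry shares the same metadata shape, the contents list can be built programmatically instead of repeated by hand. A sketch using hypothetical paths and the user ids from the example above:

```python
# Hypothetical file paths; the user ids mirror the example above.
downloaded_cv_paths = [f"tmp/cv_{i}.pdf" for i in range(5)]
user_ids = [
    "jordan_mitchell",
    "taylor_brooks",
    "morgan_lee",
    "casey_jordan",
    "alex_rivera",
]

# One dict per document, with per-user metadata attached for filtering later.
contents = [
    {
        "path": path,
        "metadata": {"user_id": uid, "document_type": "cv", "year": 2025},
    }
    for path, uid in zip(downloaded_cv_paths, user_ids)
]
```

The resulting list can then be passed to `knowledge.add_contents(contents)` exactly as in the expanded version.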
---
## Define the directory where the Chroma database is located
**URL:** llms-txt#define-the-directory-where-the-chroma-database-is-located
chroma_db_dir = pathlib.Path("./chroma_db")
---
## Reliability with Database Logging
**URL:** llms-txt#reliability-with-database-logging
**Contents:**
- Code
Source: https://docs.agno.com/examples/concepts/evals/reliability/reliability_db_logging
Learn how to store reliability evaluation results in the database.
This example shows how to store evaluation results in the database.
```python theme={null}
"""Example showing how to store evaluation results in the database."""
from typing import Optional
from agno.agent import Agent
from agno.db.postgres.postgres import PostgresDb
from agno.eval.reliability import ReliabilityEval, ReliabilityResult
from agno.models.openai import OpenAIChat
from agno.run.agent import RunOutput
from agno.tools.calculator import CalculatorTools
---
```
### references/migration.md
```markdown
# Agno - Migration
**Pages:** 6
---
## How to get the token and chat_id:
**URL:** llms-txt#how-to-get-the-token-and-chat_id:
---
## How to connect to an Upstash Vector index
**URL:** llms-txt#how-to-connect-to-an-upstash-vector-index
---
## Define how to turn analysis into actionable insights
**URL:** llms-txt#define-how-to-turn-analysis-into-actionable-insights
**Contents:**
- 3d. Define the Report Format
intelligence_synthesis = dedent("""
    INTELLIGENCE SYNTHESIS:
    - Detect crisis indicators through sentiment velocity and coordination patterns
    - Identify competitive positioning and feature gap discussions
    - Surface growth opportunities and advocacy moments
    - Generate strategic recommendations with clear priority levels
""")
print("Intelligence synthesis defined")
---
## How to Switch Between Different Models
**URL:** llms-txt#how-to-switch-between-different-models
**Contents:**
- Recommended Approach
- Safe Model Switching
Source: https://docs.agno.com/faq/switching-models
When working with Agno, you may need to switch between different models. While Agno supports 20+ model providers, switching between different providers can cause compatibility issues. Switching models within the same provider is generally safer and more reliable.
## Recommended Approach
**Safe:** Switch models within the same provider (OpenAI → OpenAI, Google → Google)\
**Risky:** Switch between different providers (OpenAI ↔ Google ↔ Anthropic)
Cross-provider switching is risky because message histories are often not interchangeable between providers: some require message fields or formats that others don't. However, we are actively working to improve interoperability.
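The incompatibility is easiest to see in the stored message shapes themselves. A simplified, illustrative sketch (these dicts approximate, but are not, the exact provider wire formats):

```python
# Simplified assistant tool-call messages; field names are illustrative.
openai_style = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": "{}"},
        }
    ],
}
anthropic_style = {
    "role": "assistant",
    "content": [
        {"type": "tool_use", "id": "toolu_1", "name": "get_weather", "input": {}}
    ],
}

# A history recorded in one shape is missing fields the other provider expects,
# which is why replaying it cross-provider can fail.
shared_keys = set(openai_style) & set(anthropic_style)
```

Only the generic keys overlap; the tool-call details live in provider-specific structures, so a session persisted under one provider cannot simply be replayed against another.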
## Safe Model Switching
The safest way to switch models is to change the model ID while keeping the same provider:
```python theme={null}
from uuid import uuid4
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.openai.chat import OpenAIChat
db = SqliteDb(db_file="tmp/agent_sessions.db")
session_id = str(uuid4())
user_id = "[email protected]"
---
## Migrating to Agno v2.0
**URL:** llms-txt#migrating-to-agno-v2.0
**Contents:**
- Installing Agno v2
- Migrating your Agno DB
- Migrating your Agno code
- 1. Agents and Teams
- 2. Storage
Source: https://docs.agno.com/how-to/v2-migration
Guide to migrate your Agno applications from v1 to v2.
If you have questions during your migration, we can help! Find us on [Discord](https://discord.gg/4MtYHHrgA8) or [Discourse](https://community.agno.com/).
<Tip>
Reference the [v2.0 Changelog](/how-to/v2-changelog) for the full list of
changes.
</Tip>
## Installing Agno v2
If you are already using Agno, you can upgrade to v2 by running:
Otherwise, you can install the latest version of Agno v2 by running:
## Migrating your Agno DB
If you used our `Storage` or `Memory` functionalities to store Agent sessions and memories in your database, you can start by migrating your tables.
Use our migration script: [`libs/agno/scripts/migrate_to_v2.py`](https://github.com/agno-agi/agno/blob/main/libs/agno/scripts/migrate_to_v2.py)
The script supports PostgreSQL, MySQL, SQLite, and MongoDB. Update the database connection settings and the batch size (useful if you are migrating large tables) in the script, then run it.
Notice:
* The script won't clean up the old tables, in case you still need them.
* The script is idempotent. If something goes wrong or if you stop it mid-run, you can run it again.
* Metrics are automatically converted from v1 to v2 format.
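Idempotency here means a rerun skips rows that were already migrated, so a crashed or interrupted run can simply be restarted. A conceptual sketch of that property in plain Python (illustrative only, not the actual `migrate_to_v2.py` logic):

```python
def migrate(rows, already_migrated=(), batch_size=2):
    """Migrate rows in batches, skipping ones already done (idempotent)."""
    migrated = set(already_migrated)
    for i in range(0, len(rows), batch_size):   # process in batches
        for row in rows[i:i + batch_size]:
            if row in migrated:                 # rerun-safe: skip finished work
                continue
            migrated.add(row)                   # "write" the migrated row
    return migrated


first_pass = migrate(["a", "b", "c"])
second_pass = migrate(["a", "b", "c"], first_pass)  # rerunning changes nothing
```

Because each row is only processed once, stopping the script mid-run and starting it again converges to the same final state.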
## Migrating your Agno code
Each section here covers a specific framework domain, with before and after examples and detailed explanations where needed.
### 1. Agents and Teams
[Agents](/concepts/agents/overview) and [Teams](/concepts/teams/overview) are the main building blocks in the Agno framework.
Below are some of the v2 updates we have made to the `Agent` and `Team` classes:
1.1. Streaming responses with `arun` now returns an `AsyncIterator`, not a coroutine. This is how you consume the resulting events now, when streaming a run:
1.2. The `RunResponse` class is now `RunOutput`. This is the type of the results you get when running an Agent:
1.3. The events you get when streaming an Agent result have been renamed:
* `RunOutputStartedEvent` → `RunStartedEvent`
* `RunOutputCompletedEvent` → `RunCompletedEvent`
* `RunOutputErrorEvent` → `RunErrorEvent`
* `RunOutputCancelledEvent` → `RunCancelledEvent`
* `RunOutputContinuedEvent` → `RunContinuedEvent`
* `RunOutputPausedEvent` → `RunPausedEvent`
* `RunOutputContentEvent` → `RunContentEvent`
1.4. Similarly, for Team output events:
* `TeamRunOutputStartedEvent` → `TeamRunStartedEvent`
* `TeamRunOutputCompletedEvent` → `TeamRunCompletedEvent`
* `TeamRunOutputErrorEvent` → `TeamRunErrorEvent`
* `TeamRunOutputCancelledEvent` → `TeamRunCancelledEvent`
* `TeamRunOutputContentEvent` → `TeamRunContentEvent`
1.5. The `add_state_in_messages` parameter has been deprecated. Variables in instructions are now resolved automatically by default.
1.6. The `context` parameter has been renamed to `dependencies`.
This is how it looked on v1:
This is how it looks now, on v2:
<Tip>
See the full list of changes in the [Agent
Updates](/how-to/v2-changelog#agent-updates) section of the changelog.
</Tip>
### 2. Storage
Storage is used to persist Agent sessions, state and memories in a database.
This is how Storage looked on v1:
These are the changes we have made for v2:
2.1. The `Storage` classes have moved from `agno/storage` to `agno/db`. We will now refer to them as our `Db` classes.
2.2. The `mode` parameter has been deprecated. The same instance can now be used by Agents, Teams and Workflows.
2.3. The `table_name` parameter has been deprecated. One instance now handles multiple tables; you can define their names individually.
These are all the supported tables, each used to persist data related to a specific domain:
2.4. Previously running a `Team` would create a team session and sessions for every team member participating in the run. Now, only the `Team` session is created. The runs for the team leader and all members can be found in the `Team` session.
```python v2_storage_team_sessions.py theme={null}
team.run(...)

team_session = team.get_latest_session()
```
---
## Migrating to Workflows 2.0
**URL:** llms-txt#migrating-to-workflows-2.0
**Contents:**
- Migrating from Workflows 1.0
- Key Differences
- Migration Steps
- Example of Blog Post Generator Workflow
Source: https://docs.agno.com/how-to/workflows-migration
Learn how to migrate to Workflows 2.0.
## Migrating from Workflows 1.0
Workflows 2.0 is a completely new approach to agent automation and requires an upgrade from the Workflows 1.0 implementation. It introduces a more flexible and powerful way to build workflows.
### Key Differences
| Workflows 1.0 | Workflows 2.0 | Migration Path |
| ----------------- | ----------------- | -------------------------------- |
| Linear only | Multiple patterns | Add Parallel/Condition as needed |
| Agent-focused | Mixed components | Convert functions to Steps |
| Limited branching | Smart routing | Replace if/else with Router |
| Manual loops | Built-in Loop | Use Loop component |
### Migration Steps
1. **Assess current workflow**: Identify parallel opportunities
2. **Add conditions**: Convert if/else logic to Condition components
3. **Extract functions**: Move custom logic to function-based steps
4. **Enable streaming**: For event-based information
5. **Add state management**: Use `workflow_session_state` for data sharing
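Step 5's `workflow_session_state` is a shared store that steps read from and write to. A conceptual sketch of that data-sharing pattern using plain functions and a dict (not the Agno Workflow API):

```python
workflow_session_state = {}  # shared across steps for one workflow session


def research_step(state: dict, topic: str) -> None:
    """First step: write findings into the shared state."""
    state["sources"] = [f"article about {topic}"]


def write_step(state: dict, topic: str) -> str:
    """Later step: read what earlier steps produced."""
    return f"Blog post on {topic}, citing {len(state['sources'])} source(s)"


research_step(workflow_session_state, "AI agents")
post = write_step(workflow_session_state, "AI agents")
```

Each step stays a plain function of its inputs; the shared state is the only channel between them, which keeps steps independently testable.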
### Example of Blog Post Generator Workflow
Let's take an example that demonstrates how to build a sophisticated blog post generator combining web research capabilities with professional writing expertise. The workflow uses a multi-stage approach:
1. Intelligent web research and source gathering
2. Content extraction and processing
3. Professional blog post writing with proper citations
Here's the code for the blog post generator in **Workflows 1.0**:
```python theme={null}
import json
from textwrap import dedent
from typing import Dict, Iterator, Optional

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.run.agent import RunOutputEvent  # needed for the run() return annotation
from agno.run.workflow import WorkflowCompletedEvent
from agno.storage.sqlite import SqliteDb
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.newspaper4k import Newspaper4kTools
from agno.utils.log import logger
from agno.utils.pprint import pprint_run_response
from agno.workflow import RunOutput, Workflow
from pydantic import BaseModel, Field


class NewsArticle(BaseModel):
    title: str = Field(..., description="Title of the article.")
    url: str = Field(..., description="Link to the article.")
    summary: Optional[str] = Field(
        ..., description="Summary of the article if available."
    )


class SearchResults(BaseModel):
    articles: list[NewsArticle]


class ScrapedArticle(BaseModel):
    title: str = Field(..., description="Title of the article.")
    url: str = Field(..., description="Link to the article.")
    summary: Optional[str] = Field(
        ..., description="Summary of the article if available."
    )
    content: Optional[str] = Field(
        ...,
        description="Full article content in markdown format. None if content is unavailable.",
    )


class BlogPostGenerator(Workflow):
    """Advanced workflow for generating professional blog posts with proper research and citations."""

    description: str = dedent("""\
        An intelligent blog post generator that creates engaging, well-researched content.
        This workflow orchestrates multiple AI agents to research, analyze, and craft
        compelling blog posts that combine journalistic rigor with engaging storytelling.
        The system excels at creating content that is both informative and optimized for
        digital consumption.
    """)

    # Search Agent: Handles intelligent web searching and source gathering
    searcher: Agent = Agent(
        model=OpenAIChat(id="gpt-5-mini"),
        tools=[DuckDuckGoTools()],
        description=dedent("""\
            You are BlogResearch-X, an elite research assistant specializing in discovering
            high-quality sources for compelling blog content. Your expertise includes:
            - Finding authoritative and trending sources
            - Evaluating content credibility and relevance
            - Identifying diverse perspectives and expert opinions
            - Discovering unique angles and insights
            - Ensuring comprehensive topic coverage\
        """),
        instructions=dedent("""\
            1. Search Strategy 🔍
               - Find 10-15 relevant sources and select the 5-7 best ones
               - Prioritize recent, authoritative content
               - Look for unique angles and expert insights
            2. Source Evaluation 📊
               - Verify source credibility and expertise
               - Check publication dates for timeliness
               - Assess content depth and uniqueness
            3. Diversity of Perspectives 🌐
               - Include different viewpoints
               - Gather both mainstream and expert opinions
               - Find supporting data and statistics\
        """),
        output_schema=SearchResults,
    )

    # Content Scraper: Extracts and processes article content
    article_scraper: Agent = Agent(
        model=OpenAIChat(id="gpt-5-mini"),
        tools=[Newspaper4kTools()],
        description=dedent("""\
            You are ContentBot-X, a specialist in extracting and processing digital content
            for blog creation. Your expertise includes:
            - Efficient content extraction
            - Smart formatting and structuring
            - Key information identification
            - Quote and statistic preservation
            - Maintaining source attribution\
        """),
        instructions=dedent("""\
            1. Content Extraction 📑
               - Extract content from the article
               - Preserve important quotes and statistics
               - Maintain proper attribution
               - Handle paywalls gracefully
            2. Content Processing 🔄
               - Format text in clean markdown
               - Preserve key information
               - Structure content logically
            3. Quality Control ✅
               - Verify content relevance
               - Ensure accurate extraction
               - Maintain readability\
        """),
        output_schema=ScrapedArticle,
    )

    # Content Writer Agent: Crafts engaging blog posts from research
    writer: Agent = Agent(
        model=OpenAIChat(id="gpt-5-mini"),
        description=dedent("""\
            You are BlogMaster-X, an elite content creator combining journalistic excellence
            with digital marketing expertise. Your strengths include:
            - Crafting viral-worthy headlines
            - Writing engaging introductions
            - Structuring content for digital consumption
            - Incorporating research seamlessly
            - Optimizing for SEO while maintaining quality
            - Creating shareable conclusions\
        """),
        instructions=dedent("""\
            1. Content Strategy 📝
               - Craft attention-grabbing headlines
               - Write compelling introductions
               - Structure content for engagement
               - Include relevant subheadings
            2. Writing Excellence ✍️
               - Balance expertise with accessibility
               - Use clear, engaging language
               - Include relevant examples
               - Incorporate statistics naturally
            3. Source Integration 🔍
               - Cite sources properly
               - Include expert quotes
               - Maintain factual accuracy
            4. Digital Optimization 💻
               - Structure for scanability
               - Include shareable takeaways
               - Optimize for SEO
               - Add engaging subheadings\
        """),
        expected_output=dedent("""\
            # {Viral-Worthy Headline}
            ## Introduction
            {Engaging hook and context}
            ## {Compelling Section 1}
            {Key insights and analysis}
            {Expert quotes and statistics}
            ## {Engaging Section 2}
            {Deeper exploration}
            {Real-world examples}
            ## {Practical Section 3}
            {Actionable insights}
            {Expert recommendations}
            ## Key Takeaways
            - {Shareable insight 1}
            - {Practical takeaway 2}
            - {Notable finding 3}
            ## Sources
            {Properly attributed sources with links}\
        """),
        markdown=True,
    )

    def run(
        self,
        topic: str,
        use_search_cache: bool = True,
        use_scrape_cache: bool = True,
        use_cached_report: bool = True,
    ) -> Iterator[RunOutputEvent]:
        logger.info(f"Generating a blog post on: {topic}")

        # Use the cached blog post if use_cached_report is True
        if use_cached_report:
            cached_blog_post = self.get_cached_blog_post(topic)
            if cached_blog_post:
                yield WorkflowCompletedEvent(
                    run_id=self.run_id,
                    content=cached_blog_post,
                )
                return

        # Search the web for articles on the topic
        search_results: Optional[SearchResults] = self.get_search_results(
            topic, use_search_cache
        )

        # If no search_results are found for the topic, end the workflow
        if search_results is None or len(search_results.articles) == 0:
            yield WorkflowCompletedEvent(
                run_id=self.run_id,
                content=f"Sorry, could not find any articles on the topic: {topic}",
            )
            return

        # Scrape the search results
        scraped_articles: Dict[str, ScrapedArticle] = self.scrape_articles(
            topic, search_results, use_scrape_cache
        )

        # Prepare the input for the writer
        writer_input = {
            "topic": topic,
            "articles": [v.model_dump() for v in scraped_articles.values()],
        }

        # Run the writer and yield the response
        yield from self.writer.run(json.dumps(writer_input, indent=4), stream=True)

        # Save the blog post in the cache
        self.add_blog_post_to_cache(topic, self.writer.run_response.content)

    def get_cached_blog_post(self, topic: str) -> Optional[str]:
        logger.info("Checking if cached blog post exists")
        return self.session_state.get("blog_posts", {}).get(topic)

    def add_blog_post_to_cache(self, topic: str, blog_post: str):
        logger.info(f"Saving blog post for topic: {topic}")
        self.session_state.setdefault("blog_posts", {})
        self.session_state["blog_posts"][topic] = blog_post

    def get_cached_search_results(self, topic: str) -> Optional[SearchResults]:
        logger.info("Checking if cached search results exist")
        search_results = self.session_state.get("search_results", {}).get(topic)
        return (
            SearchResults.model_validate(search_results)
            if search_results and isinstance(search_results, dict)
            else search_results
        )

    def add_search_results_to_cache(self, topic: str, search_results: SearchResults):
        logger.info(f"Saving search results for topic: {topic}")
        self.session_state.setdefault("search_results", {})
        self.session_state["search_results"][topic] = search_results

    def get_cached_scraped_articles(
        self, topic: str
    ) -> Optional[Dict[str, ScrapedArticle]]:
        logger.info("Checking if cached scraped articles exist")
        scraped_articles = self.session_state.get("scraped_articles", {}).get(topic)
        return (
            # Validate each cached entry: the cache maps url -> article data
            {
                url: ScrapedArticle.model_validate(article)
                for url, article in scraped_articles.items()
            }
            if scraped_articles and isinstance(scraped_articles, dict)
            else scraped_articles
        )

    def add_scraped_articles_to_cache(
        self, topic: str, scraped_articles: Dict[str, ScrapedArticle]
    ):
        logger.info(f"Saving scraped articles for topic: {topic}")
        self.session_state.setdefault("scraped_articles", {})
        self.session_state["scraped_articles"][topic] = scraped_articles

    def get_search_results(
        self, topic: str, use_search_cache: bool, num_attempts: int = 3
    ) -> Optional[SearchResults]:
        # Get cached search_results from the session state if use_search_cache is True
        if use_search_cache:
            try:
                search_results_from_cache = self.get_cached_search_results(topic)
                if search_results_from_cache is not None:
                    search_results = SearchResults.model_validate(
                        search_results_from_cache
                    )
                    logger.info(
                        f"Found {len(search_results.articles)} articles in cache."
                    )
                    return search_results
            except Exception as e:
                logger.warning(f"Could not read search results from cache: {e}")

        # If there are no cached search_results, use the searcher to find the latest articles
        for attempt in range(num_attempts):
            try:
                searcher_response: RunOutput = self.searcher.run(topic)
                if (
                    searcher_response is not None
                    and searcher_response.content is not None
                    and isinstance(searcher_response.content, SearchResults)
                ):
                    article_count = len(searcher_response.content.articles)
                    logger.info(
                        f"Found {article_count} articles on attempt {attempt + 1}"
                    )
                    # Cache the search results
                    self.add_search_results_to_cache(topic, searcher_response.content)
                    return searcher_response.content
                else:
                    logger.warning(
                        f"Attempt {attempt + 1}/{num_attempts} failed: Invalid response type"
                    )
            except Exception as e:
                logger.warning(f"Attempt {attempt + 1}/{num_attempts} failed: {str(e)}")
        logger.error(f"Failed to get search results after {num_attempts} attempts")
        return None

    def scrape_articles(
        self, topic: str, search_results: SearchResults, use_scrape_cache: bool
    ) -> Dict[str, ScrapedArticle]:
        scraped_articles: Dict[str, ScrapedArticle] = {}

        # Get cached scraped_articles from the session state if use_scrape_cache is True
        if use_scrape_cache:
            try:
                scraped_articles_from_cache = self.get_cached_scraped_articles(topic)
                if scraped_articles_from_cache is not None:
                    scraped_articles = scraped_articles_from_cache
                    logger.info(
                        f"Found {len(scraped_articles)} scraped articles in cache."
                    )
                    return scraped_articles
            except Exception as e:
                logger.warning(f"Could not read scraped articles from cache: {e}")

        # Scrape the articles that are not in the cache
        for article in search_results.articles:
            if article.url in scraped_articles:
                logger.info(f"Found scraped article in cache: {article.url}")
                continue
            article_scraper_response: RunOutput = self.article_scraper.run(
                article.url
            )
            if (
                article_scraper_response is not None
                and article_scraper_response.content is not None
                and isinstance(article_scraper_response.content, ScrapedArticle)
            ):
                scraped_articles[article_scraper_response.content.url] = (
                    article_scraper_response.content
                )
                logger.info(f"Scraped article: {article_scraper_response.content.url}")

        # Save the scraped articles in the session state
        self.add_scraped_articles_to_cache(topic, scraped_articles)
        return scraped_articles
```
---
```