anthropic
Anthropic Claude API integration — chat completions, streaming, vision, tool use, and batch processing via the Anthropic Messages API. Generate text with Claude Opus, Sonnet, and Haiku models, process images, use tool calling, and manage conversations. Built for AI agents — Python stdlib only, zero dependencies. Use for AI text generation, multimodal analysis, tool-augmented AI, batch processing, and Claude model interaction.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install openclaw-skills-anthropic
Repository
Skill path: skills/aiwithabidi/anthropic
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Backend, Data / AI, Integration.
Target audience: everyone.
License: MIT.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install anthropic into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding anthropic to shared team environments
- Use anthropic for AI text generation, multimodal analysis, tool-augmented AI, and batch-processing workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: anthropic
description: "Anthropic Claude API integration — chat completions, streaming, vision, tool use, and batch processing via the Anthropic Messages API. Generate text with Claude Opus, Sonnet, and Haiku models, process images, use tool calling, and manage conversations. Built for AI agents — Python stdlib only, zero dependencies. Use for AI text generation, multimodal analysis, tool-augmented AI, batch processing, and Claude model interaction."
homepage: https://www.agxntsix.ai
license: MIT
compatibility: Python 3.10+ (stdlib only — no dependencies)
metadata: {"openclaw": {"emoji": "🔮", "requires": {"env": ["ANTHROPIC_API_KEY"]}, "primaryEnv": "ANTHROPIC_API_KEY", "homepage": "https://www.agxntsix.ai"}}
---
# 🔮 Anthropic
Anthropic Claude API integration — chat completions, streaming, vision, tool use, and batch processing via the Anthropic Messages API.
## Features
- **Messages API** — Claude Opus, Sonnet, Haiku completions
- **Streaming** — real-time token streaming responses
- **Vision** — image analysis and understanding
- **Tool use** — function calling with structured output
- **System prompts** — custom system instructions
- **Multi-turn conversations** — context management
- **Batch API** — bulk message processing
- **Token counting** — estimate usage before sending
- **Extended thinking** — deep reasoning mode
- **Model listing** — available models and capabilities
## Requirements
| Variable | Required | Description |
|----------|----------|-------------|
| `ANTHROPIC_API_KEY` | ✅ | API key/token for Anthropic |
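The key only needs to be visible in the environment of the process that runs the script (the script also falls back to a `.env` file in the OpenClaw workspace). A minimal setup for an interactive shell — the key value shown is a placeholder:

```bash
# Placeholder key — substitute your real Anthropic API key
export ANTHROPIC_API_KEY="sk-ant-xxxxxxxx"
# Confirm the variable is visible to child processes such as the CLI script
python3 -c 'import os; print("set" if os.environ.get("ANTHROPIC_API_KEY") else "missing")'
```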
## Quick Start
```bash
# Send a message to Claude
python3 {baseDir}/scripts/anthropic.py chat "What is the meaning of life?" --model claude-sonnet-4-20250514
```
```bash
# Chat with system prompt
python3 {baseDir}/scripts/anthropic.py chat-system --system "You are a financial analyst" "Analyze AAPL stock"
```
```bash
# Analyze an image
python3 {baseDir}/scripts/anthropic.py chat-image --image photo.jpg 'What do you see in this image?'
```
```bash
# Stream a response
python3 {baseDir}/scripts/anthropic.py stream "Write a short story about a robot" --model claude-sonnet-4-20250514
```
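All of the commands above wrap the same HTTP call. As a sketch of what a `chat` invocation builds under the hood — the endpoint and headers follow the public Messages API, while the helper name `build_messages_request` is illustrative and not part of the skill:

```python
import json
import urllib.request

API_BASE = "https://api.anthropic.com/v1"

def build_messages_request(prompt: str,
                           api_key: str,
                           model: str = "claude-sonnet-4-20250514",
                           max_tokens: int = 1024) -> urllib.request.Request:
    """Build (but do not send) a Messages API request for a single user turn."""
    body = json.dumps({
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/messages",
        data=body,
        method="POST",
        headers={
            # Anthropic authenticates with x-api-key, not a Bearer token,
            # and requires an API version header on every request.
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )

req = build_messages_request("What is the meaning of life?", api_key="sk-ant-placeholder")
print(req.get_method(), req.full_url)  # → POST https://api.anthropic.com/v1/messages
```

Sending the request (e.g. via `urllib.request.urlopen(req)`) returns a JSON message object whose `content` array holds the generated text.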
## Commands
### `chat`
Send a message to Claude.
```bash
python3 {baseDir}/scripts/anthropic.py chat "What is the meaning of life?" --model claude-sonnet-4-20250514
```
### `chat-system`
Chat with system prompt.
```bash
python3 {baseDir}/scripts/anthropic.py chat-system --system "You are a financial analyst" "Analyze AAPL stock"
```
### `chat-image`
Analyze an image.
```bash
python3 {baseDir}/scripts/anthropic.py chat-image --image photo.jpg 'What do you see in this image?'
```
### `stream`
Stream a response.
```bash
python3 {baseDir}/scripts/anthropic.py stream "Write a short story about a robot" --model claude-sonnet-4-20250514
```
### `batch-create`
Create a batch request.
```bash
python3 {baseDir}/scripts/anthropic.py batch-create requests.jsonl
```
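The layout of `requests.jsonl` is not documented in this entry. A plausible shape, mirroring the request objects of the Message Batches API (one JSON object per line; `custom_id` is chosen by you to correlate results), is:

```json
{"custom_id": "req-1", "params": {"model": "claude-sonnet-4-20250514", "max_tokens": 1024, "messages": [{"role": "user", "content": "Summarize document A"}]}}
```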
### `batch-list`
List batch jobs.
```bash
python3 {baseDir}/scripts/anthropic.py batch-list
```
### `batch-get`
Get batch status.
```bash
python3 {baseDir}/scripts/anthropic.py batch-get batch_abc123
```
### `batch-results`
Get batch results.
```bash
python3 {baseDir}/scripts/anthropic.py batch-results batch_abc123
```
### `count-tokens`
Count tokens in a message.
```bash
python3 {baseDir}/scripts/anthropic.py count-tokens "How many tokens is this message?"
```
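This maps to the `POST /v1/messages/count_tokens` endpoint, which accepts the same `model` and `messages` fields as a normal request but returns only a count (a single `input_tokens` field) without generating anything:

```json
{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "How many tokens is this message?"}]}
```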
### `models`
List available models.
```bash
python3 {baseDir}/scripts/anthropic.py models
```
### `tools`
Chat with tool use.
```bash
python3 {baseDir}/scripts/anthropic.py tools --tools '[{"name":"get_weather","description":"Get weather","input_schema":{"type":"object","properties":{"location":{"type":"string"}}}}]' "What is the weather in NYC?"
```
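When Claude elects to call one of the supplied tools, the response `content` array contains a `tool_use` block instead of (or alongside) plain text; your agent runs the tool and sends the output back in a follow-up `tool_result` block. The block has this shape (ID abbreviated):

```json
{"type": "tool_use", "id": "toolu_...", "name": "get_weather", "input": {"location": "NYC"}}
```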
### `thinking`
Extended thinking mode.
```bash
python3 {baseDir}/scripts/anthropic.py thinking "Solve this math problem step by step: what is 123 * 456?" --budget 10000
```
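The `--budget` flag corresponds to the extended-thinking field of the Messages API; in a raw request the same setting is expressed as below (note that the request's `max_tokens` must exceed the thinking budget):

```json
{"thinking": {"type": "enabled", "budget_tokens": 10000}}
```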
## Output Format
All commands output JSON by default. Add `--human` for readable formatted output.
```bash
# JSON (default, for programmatic use)
python3 {baseDir}/scripts/anthropic.py models --limit 5
# Human-readable
python3 {baseDir}/scripts/anthropic.py models --limit 5 --human
```
## Script Reference
| Script | Description |
|--------|-------------|
| `{baseDir}/scripts/anthropic.py` | Main CLI — all Anthropic operations |
## Data Policy
This skill **never stores data locally**. All requests go directly to the Anthropic API and results are returned to stdout; the skill keeps no copies, so your data is handled only by Anthropic's servers.
## Credits
---
Built by [M. Abidi](https://www.linkedin.com/in/mohammad-ali-abidi) | [agxntsix.ai](https://www.agxntsix.ai)
[YouTube](https://youtube.com/@aiwithabidi) | [GitHub](https://github.com/aiwithabidi)
Part of the **AgxntSix Skill Suite** for OpenClaw agents.
📅 **Need help setting up OpenClaw for your business?** [Book a free consultation](https://cal.com/agxntsix/abidi-openclaw)
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### _meta.json
```json
{
"owner": "aiwithabidi",
"slug": "anthropic",
"displayName": "Anthropic",
"latest": {
"version": "1.0.0",
"publishedAt": 1771452325905,
"commit": "https://github.com/openclaw/skills/commit/150f914cc7d1a2e77e2368a985d87f5092c8b121"
},
"history": []
}
```
### scripts/anthropic.py
```python
#!/usr/bin/env python3
"""Anthropic CLI — comprehensive API integration for AI agents.
Full CRUD operations, search, reporting, and automation.
Zero dependencies beyond Python stdlib.
"""
import argparse
import json
import os
import sys
import urllib.request
import urllib.error
import urllib.parse
API_BASE = "https://api.anthropic.com/v1"
def get_token():
"""Get API token from environment."""
token = os.environ.get("ANTHROPIC_API_KEY", "")
if not token:
env_path = os.path.join(
os.environ.get("WORKSPACE", os.path.expanduser("~/.openclaw/workspace")),
".env"
)
if os.path.exists(env_path):
with open(env_path) as f:
for line in f:
line = line.strip()
if line.startswith("ANTHROPIC_API_KEY="):
token = line.split("=", 1)[1].strip().strip('"').strip("'")
break
if not token:
print(f"Error: ANTHROPIC_API_KEY not set", file=sys.stderr)
sys.exit(1)
return token
def api(method, path, data=None, params=None):
"""Make an API request."""
token = get_token()
url = f"{API_BASE}{path}"
if params:
qs = urllib.parse.urlencode({k: v for k, v in params.items() if v is not None})
if qs:
url = f"{url}?{qs}"
body = json.dumps(data).encode() if data else None
    req = urllib.request.Request(url, data=body, method=method)
    # The Anthropic API authenticates with the x-api-key header (not Bearer auth)
    # and requires an anthropic-version header on every request.
    req.add_header("x-api-key", token)
    req.add_header("anthropic-version", "2023-06-01")
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
try:
resp = urllib.request.urlopen(req, timeout=30)
raw = resp.read().decode()
return json.loads(raw) if raw.strip() else {"ok": True}
except urllib.error.HTTPError as e:
err_body = e.read().decode()
print(json.dumps({"error": True, "code": e.code, "message": err_body}), file=sys.stderr)
sys.exit(1)
def output(data, human=False):
"""Output data as JSON or human-readable."""
if human and isinstance(data, list):
for item in data:
if isinstance(item, dict):
for k, v in item.items():
print(f" {k}: {v}")
print()
else:
print(item)
elif human and isinstance(data, dict):
for k, v in data.items():
print(f" {k}: {v}")
else:
print(json.dumps(data, indent=2, default=str))
DEFAULT_MODEL = "claude-sonnet-4-20250514"

def _prompt(args):
    """Join positional arguments into the user prompt."""
    return " ".join(args.args)

def _model(args):
    return getattr(args, "model", None) or DEFAULT_MODEL

def _max_tokens(args):
    return getattr(args, "max_tokens", None) or 1024

def cmd_chat(args):
    """Send a message to Claude via POST /messages."""
    data = api("POST", "/messages", data={
        "model": _model(args),
        "max_tokens": _max_tokens(args),
        "messages": [{"role": "user", "content": _prompt(args)}],
    })
    output(data, getattr(args, "human", False))

def cmd_chat_system(args):
    """Chat with a custom system prompt."""
    data = api("POST", "/messages", data={
        "model": _model(args),
        "max_tokens": _max_tokens(args),
        "system": getattr(args, "system", None) or "",
        "messages": [{"role": "user", "content": _prompt(args)}],
    })
    output(data, getattr(args, "human", False))

def cmd_chat_image(args):
    """Analyze a local image, sent as a base64 image content block."""
    import base64
    import mimetypes
    path = getattr(args, "image", None)
    if not path:
        print("Error: --image is required", file=sys.stderr)
        sys.exit(1)
    media_type = mimetypes.guess_type(path)[0] or "image/jpeg"
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    data = api("POST", "/messages", data={
        "model": _model(args),
        "max_tokens": _max_tokens(args),
        "messages": [{"role": "user", "content": [
            {"type": "image", "source": {"type": "base64", "media_type": media_type, "data": b64}},
            {"type": "text", "text": _prompt(args) or "Describe this image."},
        ]}],
    })
    output(data, getattr(args, "human", False))

def cmd_stream(args):
    """Stream a response via server-sent events, printing text deltas as they arrive."""
    token = get_token()
    body = json.dumps({
        "model": _model(args),
        "max_tokens": _max_tokens(args),
        "stream": True,
        "messages": [{"role": "user", "content": _prompt(args)}],
    }).encode()
    req = urllib.request.Request(f"{API_BASE}/messages", data=body, method="POST")
    req.add_header("x-api-key", token)
    req.add_header("anthropic-version", "2023-06-01")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req, timeout=300) as resp:
        for raw in resp:
            line = raw.decode().strip()
            if not line.startswith("data:"):
                continue
            event = json.loads(line[5:])
            if event.get("type") == "content_block_delta":
                print(event["delta"].get("text", ""), end="", flush=True)
    print()

def cmd_batch_create(args):
    """Create a batch from a JSONL file of request objects."""
    if not args.args:
        print("Error: path to a requests .jsonl file is required", file=sys.stderr)
        sys.exit(1)
    with open(args.args[0]) as f:
        requests_ = [json.loads(line) for line in f if line.strip()]
    data = api("POST", "/messages/batches", data={"requests": requests_})
    output(data, getattr(args, "human", False))

def cmd_batch_list(args):
    """List batch jobs."""
    data = api("GET", "/messages/batches", params={"limit": getattr(args, "limit", None)})
    output(data, getattr(args, "human", False))

def _batch_id(args):
    batch_id = (args.args[0] if args.args else None) or getattr(args, "id", None)
    if not batch_id:
        print("Error: batch ID is required", file=sys.stderr)
        sys.exit(1)
    return batch_id

def cmd_batch_get(args):
    """Get the status of a batch by ID."""
    data = api("GET", f"/messages/batches/{_batch_id(args)}")
    output(data, getattr(args, "human", False))

def cmd_batch_results(args):
    """Fetch the results of an ended batch (streamed back as JSONL)."""
    token = get_token()
    req = urllib.request.Request(f"{API_BASE}/messages/batches/{_batch_id(args)}/results")
    req.add_header("x-api-key", token)
    req.add_header("anthropic-version", "2023-06-01")
    with urllib.request.urlopen(req, timeout=300) as resp:
        sys.stdout.write(resp.read().decode())

def cmd_count_tokens(args):
    """Count tokens in a message without generating a response."""
    data = api("POST", "/messages/count_tokens", data={
        "model": _model(args),
        "messages": [{"role": "user", "content": _prompt(args)}],
    })
    output(data, getattr(args, "human", False))

def cmd_models(args):
    """List available models."""
    data = api("GET", "/models", params={"limit": getattr(args, "limit", None)})
    output(data, getattr(args, "human", False))

def cmd_tools(args):
    """Chat with tool use; pass tool definitions as a JSON array via --tools."""
    tools = json.loads(getattr(args, "tools", None) or "[]")
    data = api("POST", "/messages", data={
        "model": _model(args),
        "max_tokens": _max_tokens(args),
        "tools": tools,
        "messages": [{"role": "user", "content": _prompt(args)}],
    })
    output(data, getattr(args, "human", False))

def cmd_thinking(args):
    """Extended thinking mode with a reasoning token budget."""
    budget = getattr(args, "budget", None) or 10000
    data = api("POST", "/messages", data={
        "model": _model(args),
        # max_tokens must exceed the thinking budget
        "max_tokens": budget + 1024,
        "thinking": {"type": "enabled", "budget_tokens": budget},
        "messages": [{"role": "user", "content": _prompt(args)}],
    })
    output(data, getattr(args, "human", False))
COMMANDS = {
"chat": cmd_chat,
"chat-system": cmd_chat_system,
"chat-image": cmd_chat_image,
"stream": cmd_stream,
"batch-create": cmd_batch_create,
"batch-list": cmd_batch_list,
"batch-get": cmd_batch_get,
"batch-results": cmd_batch_results,
"count-tokens": cmd_count_tokens,
"models": cmd_models,
"tools": cmd_tools,
"thinking": cmd_thinking,
}
def main():
    parser = argparse.ArgumentParser(
        description="Anthropic CLI — AI agent integration",
        formatter_class=argparse.RawDescriptionHelpFormatter,
    )
    parser.add_argument("command", choices=list(COMMANDS.keys()), help="Command to run")
    parser.add_argument("args", nargs="*", help="Command arguments")
    parser.add_argument("--human", action="store_true", help="Human-readable output")
    parser.add_argument("--limit", type=int, help="Limit results")
    parser.add_argument("--id", help="Resource ID")
    parser.add_argument("--model", help="Model ID (default: claude-sonnet-4-20250514)")
    parser.add_argument("--max-tokens", type=int, dest="max_tokens", help="Maximum tokens to generate")
    parser.add_argument("--system", help="System prompt (chat-system)")
    parser.add_argument("--image", help="Path to an image file (chat-image)")
    parser.add_argument("--tools", help="Tool definitions as a JSON array (tools)")
    parser.add_argument("--budget", type=int, help="Thinking token budget (thinking)")
    parsed = parser.parse_args()
    COMMANDS[parsed.command](parsed)
if __name__ == "__main__":
main()
```