SkillHub Club · Ship Full Stack · Full Stack
deep-search
Imported from https://github.com/openclaw/skills.
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source is reproduced below.
Stars
3,111
Hot score
99
Updated
March 20, 2026
Overall rating
C (4.0)
Composite score
4.0
Best-practice grade
F (36.0)
Install command
npx @skill-hub/cli install openclaw-skills-deep-search
Repository
openclaw/skills
Skill path: skills/aiwithabidi/deep-search
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: MIT.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install deep-search into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding deep-search to shared team environments
- Use deep-search for research, fact-checking, and competitive-analysis workflows
Works across
Claude Code · Codex CLI · Gemini CLI · OpenCode
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: deep-search
description: Three-tier AI search routing – quick facts (sonar), research comparisons (sonar-pro), and deep analysis (sonar-reasoning-pro). Auto-selects model tier based on query complexity. Focus modes: internet, academic, news, youtube, reddit. Use for research, fact-checking, competitive analysis, or any web search task.
homepage: https://www.agxntsix.ai
license: MIT
compatibility: Python 3.10+, Perplexity API key
metadata: {"openclaw": {"emoji": "\ud83d\udd0e", "requires": {"env": ["PERPLEXITY_API_KEY"]}, "primaryEnv": "PERPLEXITY_API_KEY", "homepage": "https://www.agxntsix.ai"}}
---
# Deep Search 🔎
Multi-tier Perplexity-powered search with automatic Langfuse observability tracing.
## When to Use
- Quick facts and simple lookups → `quick` tier
- Standard research, comparisons, how-to → `pro` tier
- Deep analysis, market research, complex questions → `deep` tier
- Academic paper search, news monitoring, Reddit/YouTube research
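The bullets above map roughly onto a routing heuristic. As an illustration only (the shipped script takes the tier as an explicit CLI argument; `choose_tier` is a hypothetical helper, not part of the skill):

```python
def choose_tier(query: str) -> str:
    """Hypothetical keyword heuristic for the quick/pro/deep split.

    The real script takes the tier as a CLI argument; this only
    sketches how the 'When to Use' guidance could be automated.
    """
    q = query.lower()
    if any(w in q for w in ("analyze", "market research", "landscape", "in depth")):
        return "deep"   # deep analysis, complex questions
    if any(w in q for w in ("compare", " vs ", "how to", "best")):
        return "pro"    # standard research, comparisons, how-to
    return "quick"      # simple facts and lookups
```

A real router would likely use the model itself (or query length and structure) rather than keywords, but the three-way split is the same.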
## Usage
```bash
# Quick search (sonar, ~2s)
python3 {baseDir}/scripts/deep_search.py quick "what is OpenClaw"
# Pro search (sonar-pro, ~5-8s)
python3 {baseDir}/scripts/deep_search.py pro "compare Claude vs GPT-4o for coding"
# Deep research (sonar-reasoning-pro, ~10-20s)
python3 {baseDir}/scripts/deep_search.py deep "full market analysis of AI agent frameworks"
# Focus modes
python3 {baseDir}/scripts/deep_search.py pro "query" --focus academic
python3 {baseDir}/scripts/deep_search.py pro "query" --focus news
python3 {baseDir}/scripts/deep_search.py pro "query" --focus youtube
python3 {baseDir}/scripts/deep_search.py pro "query" --focus reddit
```
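Each command prints a single JSON object to stdout with fields such as `tier`, `model`, `query`, `elapsed_seconds`, `answer`, and `citations` (see `scripts/deep_search.py`). A minimal sketch of consuming that output from Python; the sample payload here is illustrative, not a real API response:

```python
import json

def summarize_result(raw: str) -> str:
    """Condense a deep_search.py JSON result into a one-line summary."""
    r = json.loads(raw)
    n = len(r.get("citations", []))
    return f'[{r["tier"]}/{r["model"]}] {r["answer"][:60]} ({n} citations)'

# Illustrative sample shaped like the script's output:
sample = json.dumps({
    "tier": "quick",
    "model": "sonar",
    "query": "what is OpenClaw",
    "elapsed_seconds": 1.8,
    "answer": "OpenClaw is an open-source agent framework.",
    "citations": ["https://example.com"],
})
print(summarize_result(sample))
```

In practice you would capture the script's stdout (for example via `subprocess.run(..., capture_output=True)`) and feed it to `summarize_result`.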
## Tiers
| Tier | Model | Speed | Best For |
|------|-------|-------|----------|
| quick | sonar | ~2s | Simple facts, quick lookups |
| pro | sonar-pro | ~5-8s | Research, comparisons |
| deep | sonar-reasoning-pro | ~10-20s | Deep analysis, complex questions |
## Environment
- `PERPLEXITY_API_KEY` – Required. Your Perplexity API key.
- `OPENROUTER_API_KEY` – Optional. Used for model pricing in Langfuse traces.
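The script resolves the key in two steps: environment variables first (`PERPLEXITY_API_KEY`, then the `PPLX_API_KEY` alias), then the OpenClaw config file at `~/.openclaw/openclaw.json`. A standalone sketch of that lookup, mirroring the logic in `scripts/deep_search.py`:

```python
import json
import os

def resolve_perplexity_key(config_path: str = "~/.openclaw/openclaw.json"):
    """Mirror the script's key lookup: env vars first, then the OpenClaw config."""
    key = os.environ.get("PERPLEXITY_API_KEY") or os.environ.get("PPLX_API_KEY")
    if key:
        return key
    try:
        with open(os.path.expanduser(config_path)) as f:
            config = json.load(f)
        # Nested path: tools.web.search.perplexity.apiKey
        return (config.get("tools", {}).get("web", {}).get("search", {})
                      .get("perplexity", {}).get("apiKey", "")) or None
    except (OSError, json.JSONDecodeError):
        return None
```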
## Credits
Built by [M. Abidi](https://www.linkedin.com/in/mohammad-ali-abidi) | [agxntsix.ai](https://www.agxntsix.ai)
[YouTube](https://youtube.com/@aiwithabidi) | [GitHub](https://github.com/aiwithabidi)
Part of the **AgxntSix Skill Suite** for OpenClaw agents.
**Need help setting up OpenClaw for your business?** [Book a free consultation](https://cal.com/agxntsix/abidi-openclaw)
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### _meta.json
```json
{
  "owner": "aiwithabidi",
  "slug": "deep-search",
  "displayName": "Deep Search",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1772578346096,
    "commit": "https://github.com/openclaw/skills/commit/258bb79d4d6919916b67ebe2c7d5301dab3624b0"
  },
  "history": []
}
```
### scripts/deep_search.py
```python
#!/usr/bin/env python3
"""
AgxntSix Deep Search – Multi-tier Perplexity search with Langfuse tracing

Three tiers of search depth:
    quick → sonar (fast, simple lookups, ~1-2s)
    pro   → sonar-pro (multi-step reasoning, ~3-5s)
    deep  → sonar-reasoning-pro (chain-of-thought, thorough, ~10-20s)

Usage:
    deep_search.py quick "what time is it in Austin TX"
    deep_search.py pro "compare Neo4j vs FalkorDB for AI agent memory"
    deep_search.py deep "analyze the current state of AI agent memory architectures"
"""
import argparse
import json
import os
from datetime import datetime

import requests

# Langfuse tracing defaults (point at a local Langfuse instance)
os.environ.setdefault("LANGFUSE_SECRET_KEY", "sk-lf-115cb6b4-7153-4fe6-9255-bf28f8b115de")
os.environ.setdefault("LANGFUSE_PUBLIC_KEY", "pk-lf-8a9322b9-5eb1-4e8b-815e-b3428dc69bc4")
os.environ.setdefault("LANGFUSE_HOST", "http://langfuse-web:3000")

try:
    from langfuse import observe, get_client, Langfuse
    TRACING = True
except ImportError:
    TRACING = False

    def observe(**kwargs):
        # No-op stand-in so @observe(...) still works without langfuse installed.
        def decorator(fn):
            return fn
        return decorator


def get_session_id():
    """Generate session ID based on date+hour for grouping related calls."""
    return datetime.now().strftime("session-%Y%m%d-%H")


DEFAULT_USER_ID = "agxntsix"
API_KEY = os.environ.get("PERPLEXITY_API_KEY") or os.environ.get("PPLX_API_KEY")
BASE_URL = "https://api.perplexity.ai"

# Fall back to the OpenClaw config file if no env var is set.
if not API_KEY:
    try:
        config_path = os.path.expanduser("~/.openclaw/openclaw.json")
        with open(config_path) as f:
            config = json.load(f)
        API_KEY = config.get("tools", {}).get("web", {}).get("search", {}).get("perplexity", {}).get("apiKey", "")
    except Exception:
        pass

TIERS = {
    "quick": {
        "model": "sonar",
        "description": "Fast lookup (~1-2s)",
        "system_prompt": "Be concise. Answer in 2-3 sentences max."
    },
    "pro": {
        "model": "sonar-pro",
        "description": "Multi-step reasoning (~3-5s)",
        "system_prompt": "Provide a thorough, well-structured answer with key details and sources."
    },
    "deep": {
        "model": "sonar-reasoning-pro",
        "description": "Deep chain-of-thought analysis (~10-20s)",
        "system_prompt": "You are a research analyst. Provide comprehensive, deeply-reasoned analysis. Include multiple perspectives, cite sources, identify trends, and highlight what matters most. Structure your response with clear sections."
    }
}


@observe(as_type="generation")
def search(tier: str, query: str, focus: str = "internet"):
    if not API_KEY:
        print(json.dumps({"error": "No Perplexity API key found."}))
        return
    tier_config = TIERS.get(tier)
    if not tier_config:
        print(json.dumps({"error": f"Unknown tier: {tier}. Use: quick, pro, deep"}))
        return

    # Update Langfuse trace with session/user context
    if TRACING:
        try:
            lf = get_client()
            lf.update_current_trace(
                session_id=get_session_id(),
                user_id=DEFAULT_USER_ID,
                tags=[f"search-{tier}", f"focus-{focus}"],
                metadata={"tier": tier, "focus": focus}
            )
        except Exception:
            pass

    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    }
    payload = {
        "model": tier_config["model"],
        "messages": [
            {"role": "system", "content": tier_config["system_prompt"]},
            {"role": "user", "content": query}
        ],
        "search_domain_filter": [],
        "return_citations": True,
        "return_related_questions": tier == "deep"
    }
    if focus != "internet":
        payload["search_focus"] = focus

    start = datetime.now()
    try:
        resp = requests.post(
            f"{BASE_URL}/chat/completions",
            headers=headers,
            json=payload,
            timeout=60
        )
        resp.raise_for_status()
        data = resp.json()
        elapsed = (datetime.now() - start).total_seconds()

        result = {
            "tier": tier,
            "model": tier_config["model"],
            "query": query,
            "elapsed_seconds": round(elapsed, 1),
            "answer": data["choices"][0]["message"]["content"],
            "citations": data.get("citations", []),
        }
        if data.get("related_questions"):
            result["related_questions"] = data["related_questions"]
        if data.get("usage"):
            result["tokens"] = {
                "prompt": data["usage"].get("prompt_tokens"),
                "completion": data["usage"].get("completion_tokens"),
                "total": data["usage"].get("total_tokens")
            }

        if TRACING:
            try:
                lf = get_client()
                lf.update_current_generation(
                    model=tier_config["model"],
                    input=query,
                    output=result.get("answer", ""),
                    usage_details={
                        "input": result.get("tokens", {}).get("prompt", 0),
                        "output": result.get("tokens", {}).get("completion", 0),
                    },
                    metadata={
                        "tier": tier,
                        "focus": focus,
                        "citations": result.get("citations", []),
                        "elapsed_seconds": result.get("elapsed_seconds"),
                    }
                )
            except Exception:
                pass

        print(json.dumps(result, indent=2))
        return result
    except requests.exceptions.HTTPError as e:
        # Note: `if e.response` would be falsy for 4xx/5xx responses
        # (Response.__bool__ reflects status), so compare against None.
        print(json.dumps({
            "error": f"HTTP {e.response.status_code}",
            "detail": e.response.text[:500] if e.response is not None else str(e)
        }))
    except Exception as e:
        print(json.dumps({"error": str(e)}))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="AgxntSix Deep Search (Perplexity)")
    parser.add_argument("tier", choices=["quick", "pro", "deep"],
                        help="Search depth: quick (sonar), pro (sonar-pro), deep (sonar-reasoning-pro)")
    parser.add_argument("query", help="Search query")
    parser.add_argument("--focus", default="internet",
                        help="Search focus (internet, academic, news, youtube, reddit)")
    args = parser.parse_args()
    search(args.tier, args.query, args.focus)
    if TRACING:
        try:
            get_client().flush()
        except Exception:
            pass
```