
Deep Search

3-tier Perplexity AI search routing with auto model selection

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 3,131
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: C (61.2)

Install command

npx @skill-hub/cli install openclaw-skills-agxntsix-deep-search

Repository

openclaw/skills

Skill path: skills/aiwithabidi/agxntsix-deep-search


Open repository

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install Deep Search into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding Deep Search to shared team environments
  • Route tiered Perplexity web searches (quick, pro, deep) from development workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: Deep Search
version: 1.0.0
description: 3-tier Perplexity AI search routing with auto model selection
author: aiwithabidi
---

# Deep Search 🔍

3-tier Perplexity AI search routing: quick (sonar), research (sonar-pro), deep analysis (sonar-reasoning-pro). Auto-selects the model tier based on query complexity. Focus modes: internet, academic, news, youtube, reddit.

## Usage

```bash
# Quick lookup (sonar)
python3 scripts/deep_search.py quick "what is OpenClaw?"

# Research-grade (sonar-pro)
python3 scripts/deep_search.py pro "compare LangChain vs LlamaIndex"

# Deep analysis (sonar-reasoning-pro)
python3 scripts/deep_search.py deep "full market analysis of AI agent frameworks"

# Focus modes
python3 scripts/deep_search.py pro "query" --focus academic
python3 scripts/deep_search.py pro "query" --focus news
python3 scripts/deep_search.py pro "query" --focus youtube
python3 scripts/deep_search.py pro "query" --focus reddit
```
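
The tier argument maps one-to-one onto a Perplexity model (the model names come from the script source further down). A minimal illustrative sketch of that routing, not part of the skill itself:

```python
# Tier -> model routing, mirroring scripts/deep_search.py.
TIERS = {
    "quick": "sonar",               # fast, simple lookups
    "pro": "sonar-pro",             # multi-step reasoning
    "deep": "sonar-reasoning-pro",  # chain-of-thought analysis
}

def pick_model(tier: str) -> str:
    """Return the Perplexity model for a tier, rejecting unknown tiers."""
    try:
        return TIERS[tier]
    except KeyError:
        raise ValueError(f"Unknown tier: {tier}. Use: quick, pro, deep")

print(pick_model("pro"))  # sonar-pro
```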

## Requirements

- `PERPLEXITY_API_KEY` environment variable
- Python 3.10+
- `requests` package
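
The script resolves its API key in a fixed order: `PERPLEXITY_API_KEY`, then `PPLX_API_KEY`, then the OpenClaw config file at `~/.openclaw/openclaw.json`. A self-contained sketch of that lookup, mirroring the logic in the script source rather than replacing it:

```python
import json
import os

def resolve_api_key(env=os.environ, config_path="~/.openclaw/openclaw.json"):
    """Resolve the Perplexity key the way scripts/deep_search.py does:
    PERPLEXITY_API_KEY, then PPLX_API_KEY, then the OpenClaw config file."""
    key = env.get("PERPLEXITY_API_KEY") or env.get("PPLX_API_KEY")
    if key:
        return key
    try:
        with open(os.path.expanduser(config_path)) as f:
            config = json.load(f)
        nested = (config.get("tools", {}).get("web", {}).get("search", {})
                        .get("perplexity", {}).get("apiKey", ""))
        return nested or None
    except (OSError, json.JSONDecodeError):
        return None
```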

## Credits

Built by **AgxntSix**, an AI ops agent by [M. Abidi](https://www.linkedin.com/in/mohammad-ali-abidi)
🌐 [agxntsix.ai](https://www.agxntsix.ai) | Part of the **AgxntSix Skill Suite** for OpenClaw agents


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### scripts/deep_search.py

```python
#!/usr/bin/env python3
"""
AgxntSix Deep Search: multi-tier Perplexity search with Langfuse tracing

Three tiers of search depth:
  quick   → sonar               (fast, simple lookups, ~1-2s)
  pro     → sonar-pro           (multi-step reasoning, ~3-5s)
  deep    → sonar-reasoning-pro (chain-of-thought, thorough, ~10-20s)

Usage:
  deep_search.py quick "what time is it in Austin TX"
  deep_search.py pro "compare Neo4j vs FalkorDB for AI agent memory"
  deep_search.py deep "analyze the current state of AI agent memory architectures"
"""
import argparse
import json
import os
import sys
import requests
from datetime import datetime

# Langfuse tracing
os.environ.setdefault("LANGFUSE_SECRET_KEY", "sk-lf-115cb6b4-7153-4fe6-9255-bf28f8b115de")
os.environ.setdefault("LANGFUSE_PUBLIC_KEY", "pk-lf-8a9322b9-5eb1-4e8b-815e-b3428dc69bc4")
os.environ.setdefault("LANGFUSE_HOST", "http://langfuse-web:3000")

try:
    from langfuse import observe, get_client, Langfuse
    TRACING = True
except ImportError:
    TRACING = False
    def observe(**kwargs):
        def decorator(fn):
            return fn
        return decorator

def get_session_id():
    """Generate session ID based on date+hour for grouping related calls."""
    return datetime.now().strftime("session-%Y%m%d-%H")

DEFAULT_USER_ID = "agxntsix"

API_KEY = os.environ.get("PERPLEXITY_API_KEY") or os.environ.get("PPLX_API_KEY")
BASE_URL = "https://api.perplexity.ai"

if not API_KEY:
    try:
        config_path = os.path.expanduser("~/.openclaw/openclaw.json")
        with open(config_path) as f:
            config = json.load(f)
        API_KEY = config.get("tools", {}).get("web", {}).get("search", {}).get("perplexity", {}).get("apiKey", "")
    except (OSError, json.JSONDecodeError):
        pass

TIERS = {
    "quick": {
        "model": "sonar",
        "description": "Fast lookup (~1-2s)",
        "system_prompt": "Be concise. Answer in 2-3 sentences max."
    },
    "pro": {
        "model": "sonar-pro", 
        "description": "Multi-step reasoning (~3-5s)",
        "system_prompt": "Provide a thorough, well-structured answer with key details and sources."
    },
    "deep": {
        "model": "sonar-reasoning-pro",
        "description": "Deep chain-of-thought analysis (~10-20s)",
        "system_prompt": "You are a research analyst. Provide comprehensive, deeply-reasoned analysis. Include multiple perspectives, cite sources, identify trends, and highlight what matters most. Structure your response with clear sections."
    }
}

@observe(as_type="generation")
def search(tier: str, query: str, focus: str = "internet"):
    if not API_KEY:
        print(json.dumps({"error": "No Perplexity API key found."}))
        return
    
    tier_config = TIERS.get(tier)
    if not tier_config:
        print(json.dumps({"error": f"Unknown tier: {tier}. Use: quick, pro, deep"}))
        return
    
    # Update Langfuse trace with session/user context
    if TRACING:
        try:
            lf = get_client()
            lf.update_current_trace(
                session_id=get_session_id(),
                user_id=DEFAULT_USER_ID,
                tags=[f"search-{tier}", f"focus-{focus}"],
                metadata={"tier": tier, "focus": focus}
            )
        except Exception:
            pass
    
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    }
    
    payload = {
        "model": tier_config["model"],
        "messages": [
            {"role": "system", "content": tier_config["system_prompt"]},
            {"role": "user", "content": query}
        ],
        "search_domain_filter": [],
        "return_citations": True,
        "return_related_questions": tier == "deep"
    }
    
    if focus != "internet":
        payload["search_focus"] = focus
    
    start = datetime.now()
    
    try:
        resp = requests.post(
            f"{BASE_URL}/chat/completions",
            headers=headers,
            json=payload,
            timeout=60
        )
        resp.raise_for_status()
        data = resp.json()
        
        elapsed = (datetime.now() - start).total_seconds()
        
        result = {
            "tier": tier,
            "model": tier_config["model"],
            "query": query,
            "elapsed_seconds": round(elapsed, 1),
            "answer": data["choices"][0]["message"]["content"],
            "citations": data.get("citations", []),
        }
        
        if data.get("related_questions"):
            result["related_questions"] = data["related_questions"]
        
        if data.get("usage"):
            result["tokens"] = {
                "prompt": data["usage"].get("prompt_tokens"),
                "completion": data["usage"].get("completion_tokens"),
                "total": data["usage"].get("total_tokens")
            }
        
        if TRACING:
            try:
                lf = get_client()
                lf.update_current_generation(
                    model=tier_config["model"],
                    input=query,
                    output=result.get("answer", ""),
                    usage_details={
                        "input": result.get("tokens", {}).get("prompt", 0),
                        "output": result.get("tokens", {}).get("completion", 0),
                    },
                    metadata={
                        "tier": tier,
                        "focus": focus,
                        "citations": result.get("citations", []),
                        "elapsed_seconds": result.get("elapsed_seconds"),
                    }
                )
            except Exception:
                pass
        
        print(json.dumps(result, indent=2))
        return result
        
    except requests.exceptions.HTTPError as e:
        print(json.dumps({
            "error": f"HTTP {e.response.status_code}",
            "detail": e.response.text[:500] if e.response else str(e)
        }))
    except Exception as e:
        print(json.dumps({"error": str(e)}))

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="AgxntSix Deep Search (Perplexity)")
    parser.add_argument("tier", choices=["quick", "pro", "deep"], 
                        help="Search depth: quick (sonar), pro (sonar-pro), deep (sonar-reasoning-pro)")
    parser.add_argument("query", help="Search query")
    parser.add_argument("--focus", default="internet", 
                        help="Search focus (internet, academic, news, youtube, reddit)")
    
    args = parser.parse_args()
    search(args.tier, args.query, args.focus)
    
    if TRACING:
        try:
            get_client().flush()
        except Exception:
            pass

```
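
The script prints a single JSON object to stdout, so a caller can shell out to it and parse the result. A hedged sketch of that consumption pattern; the sample payload mirrors the `result` dict built in the script above, and the real subprocess call is shown only as a comment since it needs the script and an API key:

```python
import json

# In a real workflow you would capture the script's stdout, e.g.:
#   out = subprocess.run(["python3", "scripts/deep_search.py", "pro", "query"],
#                        capture_output=True, text=True).stdout
# Here we use a sample payload shaped like the script's `result` dict.
out = json.dumps({
    "tier": "pro",
    "model": "sonar-pro",
    "query": "compare LangChain vs LlamaIndex",
    "elapsed_seconds": 3.4,
    "answer": "Both frameworks ...",
    "citations": ["https://example.com/source"],
})

result = json.loads(out)
# The script prints {"error": ...} on failure, so check for that first.
if "error" in result:
    raise RuntimeError(result["error"])
print(result["model"], len(result["citations"]))  # sonar-pro 1
```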



---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### _meta.json

```json
{
  "owner": "aiwithabidi",
  "slug": "agxntsix-deep-search",
  "displayName": "Deep Search",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1771136180729,
    "commit": "https://github.com/openclaw/skills/commit/7b11316976565962a72994074585d606951fd5d5"
  },
  "history": []
}

```
