model-intel
Live LLM model pricing and capabilities from OpenRouter. List top models, search by name, compare side-by-side, find best model for a use case, check pricing. Always up-to-date from the OpenRouter API. Triggers: model pricing, compare models, best model for, cheapest model, model cost, LLM comparison, what models are available.
Packaged view
This page reorganizes the original catalog entry to foreground fit, installability, and workflow context. The original raw source appears below.
Install command
npx @skill-hub/cli install openclaw-skills-model-intel-pro
Repository
Skill path: skills/aiwithabidi/model-intel-pro
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack, Backend.
Target audience: everyone.
License: MIT.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install model-intel into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding model-intel to shared team environments
- Use model-intel during development to check model pricing and compare capabilities
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: model-intel
version: 1.0.0
description: >
  Live LLM model pricing and capabilities from OpenRouter. List top models, search by name,
  compare side-by-side, find best model for a use case, check pricing. Always up-to-date
  from the OpenRouter API. Triggers: model pricing, compare models, best model for,
  cheapest model, model cost, LLM comparison, what models are available.
license: MIT
compatibility:
  openclaw: ">=0.10"
metadata:
  openclaw:
    requires:
      bins: ["python3"]
      env: ["OPENROUTER_API_KEY"]
---
# Model Intel 🧠💰
Live LLM model intelligence — pricing, capabilities, and comparisons from OpenRouter.
## When to Use
- Finding the best model for a specific task (coding, reasoning, creative, fast, cheap)
- Comparing model pricing and capabilities
- Checking current model availability and context lengths
- Answering "what's the cheapest model that can do X?"
## Usage
```bash
# List top models by provider
python3 {baseDir}/scripts/model_intel.py list
# Search by name
python3 {baseDir}/scripts/model_intel.py search "claude"
# Side-by-side comparison
python3 {baseDir}/scripts/model_intel.py compare "claude-opus" "gpt-4o"
# Best model for a use case
python3 {baseDir}/scripts/model_intel.py best fast
python3 {baseDir}/scripts/model_intel.py best code
python3 {baseDir}/scripts/model_intel.py best reasoning
python3 {baseDir}/scripts/model_intel.py best cheap
python3 {baseDir}/scripts/model_intel.py best vision
python3 {baseDir}/scripts/model_intel.py best long-context
# Pricing details
python3 {baseDir}/scripts/model_intel.py price "gemini-flash"
```
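The script reads `OPENROUTER_API_KEY` from the environment (falling back to `~/.openclaw/workspace/.env`), so export it before running. The variable name comes from the skill's frontmatter; the key value below is a placeholder:

```shell
# Make the key available to the script (value shown is a placeholder)
export OPENROUTER_API_KEY="sk-or-..."
python3 {baseDir}/scripts/model_intel.py list
```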
## Use Cases
| Command | When |
|---------|------|
| `best fast` | Need lowest latency |
| `best cheap` | Budget-constrained |
| `best code` | Programming tasks |
| `best reasoning` | Complex logic/math |
| `best vision` | Image understanding |
| `best long-context` | Large document processing |
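Each `best <task>` row maps to a simple predicate over the model metadata returned by the API. As a self-contained sketch, the long-context and cheap filters from the script look like this (the sample record below is hypothetical):

```python
def is_long_context(model):
    # The script treats >= 100k tokens of context as "long-context".
    return (model.get("context_length") or 0) >= 100_000

def is_cheap(model):
    # "Cheap" means an input price under $0.000001 per token, i.e. $1 per 1M tokens.
    price = model.get("pricing", {}).get("prompt")
    return price is not None and float(price) < 0.000001

# Hypothetical record with the fields the filters inspect
m = {"context_length": 200000, "pricing": {"prompt": "0.0000005"}}
print(is_long_context(m), is_cheap(m))  # → True True
```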
## Credits
Built by [M. Abidi](https://www.linkedin.com/in/mohammad-ali-abidi) | [agxntsix.ai](https://www.agxntsix.ai)
[YouTube](https://youtube.com/@aiwithabidi) | [GitHub](https://github.com/aiwithabidi)
Part of the **AgxntSix Skill Suite** for OpenClaw agents.
📅 **Need help setting up OpenClaw for your business?** [Book a free consultation](https://cal.com/agxntsix/abidi-openclaw)
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### _meta.json
```json
{
  "owner": "aiwithabidi",
  "slug": "model-intel-pro",
  "displayName": "Model Intel Pro",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1771140081254,
    "commit": "https://github.com/openclaw/skills/commit/97a12a0333b021558fc8ff7eb6069b4300f6b436"
  },
  "history": []
}
```
### scripts/model_intel.py
```python
#!/usr/bin/env python3
"""
Model Intelligence — Live model pricing & capabilities from OpenRouter.

Usage:
    model_intel.py list                      # Top models by category
    model_intel.py price <model_id>          # Pricing for a specific model
    model_intel.py compare <model1> <model2> # Side-by-side comparison
    model_intel.py best <task>               # Best model for a task type
    model_intel.py search <query>            # Search models by name

Tasks: code, reasoning, creative, fast, cheap, vision, long-context
"""
import argparse
import json
import os

import requests

# Read the API key from the environment, falling back to the workspace .env file.
OPENROUTER_KEY = os.environ.get("OPENROUTER_API_KEY", "")
if not OPENROUTER_KEY:
    try:
        with open(os.path.expanduser("~/.openclaw/workspace/.env")) as f:
            for line in f:
                if line.startswith("OPENROUTER_API_KEY="):
                    OPENROUTER_KEY = line.strip().split("=", 1)[1]
    except OSError:
        pass

CACHE = {}


def get_models():
    """Fetch (and memoize) the model catalog from the OpenRouter API."""
    if "models" in CACHE:
        return CACHE["models"]
    # The models endpoint is public, but send the key when we have one.
    headers = {"Authorization": f"Bearer {OPENROUTER_KEY}"} if OPENROUTER_KEY else {}
    resp = requests.get("https://openrouter.ai/api/v1/models", headers=headers, timeout=15)
    resp.raise_for_status()
    models = resp.json().get("data", [])
    CACHE["models"] = models
    return models


def fmt_price(p):
    """Convert a per-token USD price to a $/1M-tokens string."""
    if p is None:
        return "?"
    val = float(p) * 1_000_000
    if val == 0:
        return "FREE"
    if val < 0.01:
        return f"${val:.4f}/1M"
    return f"${val:.2f}/1M"


def model_summary(m):
    pricing = m.get("pricing", {})
    return {
        "id": m["id"],
        "name": m.get("name", m["id"]),
        "context": m.get("context_length", "?"),
        "input": fmt_price(pricing.get("prompt")),
        "output": fmt_price(pricing.get("completion")),
        "modalities": m.get("architecture", {}).get("modality", "?"),
    }


def cmd_list(args):
    models = get_models()
    # Group by provider (the prefix before "/" in the model id).
    by_provider = {}
    for m in models:
        provider = m["id"].split("/")[0] if "/" in m["id"] else "other"
        by_provider.setdefault(provider, []).append(m)
    top_providers = ["anthropic", "google", "openai", "meta-llama", "deepseek", "mistralai"]
    for p in top_providers:
        if p in by_provider:
            print(f"\n=== {p} ===")
            for m in sorted(by_provider[p], key=lambda x: x.get("name", ""))[:8]:
                s = model_summary(m)
                print(f"  {s['id']:50s} ctx:{s['context']:>8} in:{s['input']:>14} out:{s['output']:>14}")


def cmd_price(args):
    models = get_models()
    matches = [m for m in models if args.model.lower() in m["id"].lower()]
    if not matches:
        print(f"No models matching '{args.model}'")
        return
    for m in matches[:5]:
        s = model_summary(m)
        print(json.dumps(s, indent=2))


def cmd_compare(args):
    models = get_models()
    results = []
    for target in [args.model1, args.model2]:
        match = [m for m in models if target.lower() in m["id"].lower()]
        if match:
            results.append(model_summary(match[0]))
        else:
            results.append({"id": target, "error": "not found"})
    print(json.dumps(results, indent=2))


def cmd_search(args):
    models = get_models()
    q = args.query.lower()
    matches = [m for m in models if q in m["id"].lower() or q in m.get("name", "").lower()]
    for m in matches[:15]:
        s = model_summary(m)
        print(f"  {s['id']:55s} ctx:{s['context']:>8} in:{s['input']:>14} out:{s['output']:>14}")


def cmd_best(args):
    """Recommend best models for a task type based on live data."""
    models = get_models()
    task_filters = {
        "code": lambda m: any(k in m["id"].lower() for k in ["claude", "gpt-4", "deepseek-coder", "codestral"]),
        "reasoning": lambda m: any(k in m["id"].lower() for k in ["o1", "o3", "reasoning", "think", "r1"]),
        "creative": lambda m: any(k in m["id"].lower() for k in ["claude", "gpt-4", "gemini"]),
        "fast": lambda m: any(k in m["id"].lower() for k in ["flash", "mini", "haiku", "instant", "nano"]),
        "cheap": lambda m: float(m.get("pricing", {}).get("prompt") or "999") < 0.000001,
        "vision": lambda m: "image" in m.get("architecture", {}).get("modality", ""),
        "long-context": lambda m: (m.get("context_length") or 0) >= 100000,
    }
    filt = task_filters.get(args.task)
    if not filt:
        print(f"Unknown task: {args.task}. Options: {', '.join(task_filters.keys())}")
        return
    matches = [m for m in models if filt(m)]
    matches.sort(key=lambda m: float(m.get("pricing", {}).get("prompt") or "999"))
    print(f"\nBest models for '{args.task}' (sorted by input price):\n")
    for m in matches[:10]:
        s = model_summary(m)
        print(f"  {s['id']:55s} ctx:{s['context']:>8} in:{s['input']:>14} out:{s['output']:>14}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Model Intelligence (OpenRouter)")
    sub = parser.add_subparsers(dest="cmd")
    sub.add_parser("list", help="List top models by provider")
    p = sub.add_parser("price", help="Get pricing for a model")
    p.add_argument("model")
    p = sub.add_parser("compare", help="Compare two models")
    p.add_argument("model1")
    p.add_argument("model2")
    p = sub.add_parser("search", help="Search models")
    p.add_argument("query")
    p = sub.add_parser("best", help="Best model for a task")
    p.add_argument("task", help="code|reasoning|creative|fast|cheap|vision|long-context")
    args = parser.parse_args()
    if not args.cmd:
        parser.print_help()
    else:
        globals()[f"cmd_{args.cmd}"](args)
```
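The `pricing` fields in the API response are USD per token, which the script scales to dollars per million tokens before printing. A minimal offline sketch of that conversion, using a hypothetical record shaped like the fields the script reads:

```python
def per_million(price):
    # OpenRouter reports prices as strings in USD per token;
    # scale to the conventional $/1M-tokens figure.
    if price is None:
        return None
    return float(price) * 1_000_000

# Hypothetical record with the fields model_summary() relies on
sample = {
    "id": "example/model-a",
    "pricing": {"prompt": "0.000003", "completion": "0.000015"},
    "context_length": 200000,
}

print(f"in: ${per_million(sample['pricing']['prompt']):.2f}/1M")       # → in: $3.00/1M
print(f"out: ${per_million(sample['pricing']['completion']):.2f}/1M")  # → out: $15.00/1M
```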