llm-inference
Use when wanting to interact with any LLM - Explains available inference endpoints so the agent selects suitable models.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install dave1010-tools-llm-inference
Repository
Skill path: .agents/skills/llm-inference
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: dave1010.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install llm-inference into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/dave1010/tools before adding llm-inference to shared team environments
- Use llm-inference for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: llm-inference
description: "Use when wanting to interact with any LLM - Explains available inference endpoints so the agent selects suitable models."
---

## LLM Inference

The Cloudflare Pages function `functions/cerebras-chat.ts` provides OpenAI-compatible LLM inference. See `tools/cerebras-llm-inference/index.html` for a working example.

### Available models

| Model | Max context tokens | Requests / minute | Tokens / minute |
| --- | --- | --- | --- |
| gpt-oss-120b | 65,536 | 30 | 64,000 |
| llama-3.3-70b | 65,536 | 30 | 64,000 |
| llama3.1-8b | 8,192 | 30 | 60,000 |
| qwen-3-235b-a22b-instruct-2507 | 65,536 | 30 | 64,000 |
| qwen-3-235b-a22b-thinking-2507 | 65,536 | 30 | 60,000 |
| qwen-3-32b | 65,536 | 30 | 64,000 |
| zai-glm-4.6 | 64,000 | 10 | 150,000 |

- `llama3.1-8b` is the fastest option.
- `zai-glm-4.6` is the most powerful option.
- `gpt-oss-120b` remains the best all-rounder.

LLMs are not just for chat: they can be used to process any string in any arbitrary way. If you are making a tool that requires the LLM to respond in a specific way or format, be very clear and explicit in its system prompt, e.g. what to include/exclude, plain/markdown formatting, length, etc.
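To make the system-prompt advice concrete, here is a minimal client-side sketch of calling the inference endpoint. The exact request and response contract of `functions/cerebras-chat.ts` is not documented in this entry, so the sketch assumes it accepts and returns the OpenAI chat-completions shape it is described as compatible with, and that Cloudflare Pages serves it at `/cerebras-chat` (the default routing for files under `functions/`); the `oneSentenceSummary` helper name is illustrative only.

```typescript
// Hypothetical client call against the OpenAI-compatible Pages function.
// Assumptions: the endpoint lives at /cerebras-chat and follows the
// chat-completions request/response shape.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function oneSentenceSummary(text: string): Promise<string> {
  const messages: ChatMessage[] = [
    {
      role: "system",
      // Be explicit about format, length, and exclusions, as the skill
      // advises when a tool needs output in a specific shape.
      content:
        "Summarise the user's text in exactly one plain-text sentence. " +
        "No markdown, no preamble, no quotation marks.",
    },
    { role: "user", content: text },
  ];

  const response = await fetch("/cerebras-chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-oss-120b", // the "best all-rounder" per the table above
      messages,
    }),
  });

  if (!response.ok) {
    throw new Error(`Inference request failed: ${response.status}`);
  }

  // OpenAI-compatible responses put the completion at choices[0].message.content.
  const data = await response.json();
  return data.choices[0].message.content as string;
}
```

Picking `llama3.1-8b` instead would trade quality for speed; the system prompt stays the same, which is the point of keeping format constraints in the prompt rather than in model-specific post-processing.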