groq-inference
Fast LLM inference with Groq API - chat, vision, audio STT/TTS, tool use. Use when: groq, fast inference, low latency, whisper, PlayAI TTS, Llama, vision API, tool calling, voice agents, real-time AI.
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.
Install command
npx @skill-hub/cli install scientiacapital-skills-groq-inference-skill
Repository
Skill path: active/groq-inference-skill
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Backend, Data / AI.
Target audience: Development teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: scientiacapital.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install groq-inference into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/scientiacapital/skills before adding groq-inference to shared team environments
- Use groq-inference in development workflows that need low-latency chat, vision, speech, or tool-calling inference (see the API sketch below)
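For orientation, here is a minimal sketch of the kind of call this skill wraps: a chat completion against the Groq API via the official `groq` Python SDK. It assumes `pip install groq` and a `GROQ_API_KEY` environment variable; the model id shown is illustrative and may have rotated out of Groq's catalog, so verify it against Groq's current model list.

```python
# Minimal Groq chat-completion sketch (not the skill's own code).
# Assumes: `pip install groq` and GROQ_API_KEY set in the environment.
import os

from groq import Groq

# The client reads GROQ_API_KEY from the environment if api_key is omitted.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # illustrative model id; check Groq's catalog
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Why does low latency matter for voice agents?"},
    ],
    temperature=0.5,
    max_tokens=256,
)

print(completion.choices[0].message.content)
```

Speech-to-text and tool calling follow the same client pattern: the SDK exposes `client.audio.transcriptions.create` for Whisper models, and chat completions accept an OpenAI-style `tools` parameter. Check Groq's current documentation, and the skill's own prompts, for the exact surface this skill automates.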
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.