
groq-inference

Fast LLM inference with Groq API - chat, vision, audio STT/TTS, tool use. Use when: groq, fast inference, low latency, whisper, PlayAI TTS, Llama, vision API, tool calling, voice agents, real-time AI.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 6
Hot score: 82
Updated: March 20, 2026
Overall rating: C (1.1)
Composite score: 1.1
Best-practice grade: B (71.9)

Install command

npx @skill-hub/cli install scientiacapital-skills-groq-inference-skill

Repository

scientiacapital/skills

Skill path: active/groq-inference-skill


Open repository

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Backend, Data / AI.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: scientiacapital.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install groq-inference into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/scientiacapital/skills before adding groq-inference to shared team environments
  • Use groq-inference for development workflows
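The skill's description covers chat, vision, audio, and tool use against the Groq API. As context for what such a workflow looks like, below is a minimal single-turn chat sketch using only the Python standard library, assuming Groq's OpenAI-compatible chat completions endpoint; the model name and prompt are illustrative and not taken from this catalog entry.

```python
import json
import os
import urllib.request

# Assumption: Groq serves an OpenAI-compatible chat completions endpoint here.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_chat_request(model, user_prompt, temperature=0.2):
    """Assemble the JSON payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "temperature": temperature,
    }


def run_chat(payload, api_key):
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # "llama-3.3-70b-versatile" is an illustrative model name.
    payload = build_chat_request("llama-3.3-70b-versatile", "Say hello in one word.")
    key = os.environ.get("GROQ_API_KEY")
    if key:  # only hit the network when a key is configured
        print(run_chat(payload, key))
    else:
        print(json.dumps(payload, indent=2))
```

Set `GROQ_API_KEY` in the environment to make a live call; without it, the script just prints the request payload it would send.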

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
