
llama-cpp

Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable. Supports GGUF quantization (1.5-8 bit) for reduced memory and 4-10× speedup vs PyTorch on CPU.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 8,996
Hot score: 99
Updated: March 20, 2026
Overall rating: C
Composite score: 4.5
Best-practice grade: B (81.2)

Install command

npx @skill-hub/cli install nousresearch-hermes-agent-llama-cpp

Repository

NousResearch/hermes-agent

Skill path: skills/mlops/inference/llama-cpp

Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable. Supports GGUF quantization (1.5-8 bit) for reduced memory and 4-10× speedup vs PyTorch on CPU.
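
As a rough sketch of what those quantization levels mean in practice, the estimate below multiplies parameter count by bits per weight. The bits-per-weight figures are approximate averages for common GGUF quant types and the 7B parameter count is an illustrative assumption, not something fixed by this skill.

    # Back-of-envelope GGUF memory estimate: params x bits-per-weight / 8.
    # Bits-per-weight values are approximate averages for common quant
    # types (assumption; real files carry small metadata overheads).
    QUANT_BITS = {"F16": 16.0, "Q8_0": 8.5, "Q4_K_M": 4.8, "Q2_K": 2.6, "IQ1_S": 1.6}

    def gguf_size_gib(n_params: float, quant: str) -> float:
        """Approximate on-disk / in-RAM size of a quantized model in GiB."""
        return n_params * QUANT_BITS[quant] / 8 / 2**30

    for quant in QUANT_BITS:
        # 7e9 parameters ~ a 7B model (illustrative assumption)
        print(f"7B @ {quant:7s} ~ {gguf_size_gib(7e9, quant):.1f} GiB")

By this estimate a 7B model drops from roughly 13 GiB at F16 to about 4 GiB at Q4_K_M, which is what lets it fit on laptop-class hardware.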

Open repository

Best for

Primary workflow: Run DevOps.

Technical facets: Full Stack, DevOps.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: NousResearch.

This is still a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install llama-cpp into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/NousResearch/hermes-agent before adding llama-cpp to shared team environments
  • Use llama-cpp for development workflows (a minimal inference sketch follows this list)
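
For the development-workflow case, here is a minimal local-inference sketch using the llama-cpp-python bindings. This is one common way to drive llama.cpp from code, not something prescribed by the skill itself, and the model path is a placeholder for any local GGUF file.

    # Minimal llama.cpp inference via the llama-cpp-python bindings.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-8b.Q4_K_M.gguf",  # placeholder GGUF path
        n_ctx=2048,       # context window size
        n_gpu_layers=-1,  # offload all layers when a supported GPU is present
    )

    out = llm("Q: Why quantize an LLM? A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"].strip())

Without a GPU backend compiled in, the same call runs entirely on CPU and n_gpu_layers has no effect.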

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode.

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
