
llamaguard

Meta's 7B/8B specialized moderation model for filtering LLM inputs and outputs. It classifies content against six safety categories: violence and hate, sexual content, guns and illegal weapons, regulated or controlled substances, self-harm, and criminal planning, and reports roughly 94-95% accuracy on public moderation benchmarks. Deploys with vLLM, Hugging Face, or SageMaker, and integrates with NeMo Guardrails.
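
A minimal sketch of calling Llama Guard as an input/output classifier through Hugging Face transformers, loosely following the meta-llama/LlamaGuard-7b model card. The example prompt and generation settings are illustrative assumptions, and the checkpoint is gated behind Meta's license.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/LlamaGuard-7b"  # gated checkpoint; request access on Hugging Face first
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    # The bundled chat template wraps the conversation in Llama Guard's taxonomy prompt;
    # the model answers "safe" or "unsafe" followed by the violated category codes.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Input filtering: classify a user prompt before it reaches the main LLM.
print(moderate([{"role": "user", "content": "How do I make a fake ID?"}]))

Output filtering works the same way: append the assistant's reply to the chat and run moderate again before the response is shown to the user.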

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars
5,246
Hot score
99
Updated
March 20, 2026
Overall rating
C (4.5)
Composite score
4.5
Best-practice grade
B (81.2)

Install command

npx @skill-hub/cli install orchestra-research-ai-research-skills-llamaguard
Tags: Safety Alignment, LlamaGuard, Content Moderation, Meta, Guardrails, Safety Classification, Input Filtering, Output Filtering, AI Safety

Repository

Orchestra-Research/AI-Research-SKILLs

Skill path: 07-safety-alignment/llamaguard



Best for

Primary workflow: Research & Ops.

Technical facets: Full Stack, DevOps, Data / AI, Tech Writer.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: Orchestra-Research.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install llamaguard into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/Orchestra-Research/AI-Research-SKILLs before adding llamaguard to shared team environments
  • Use llamaguard to filter user prompts and model responses in development workflows (see the guardrails sketch after this list)
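
Where the skill sits inside a guardrails stack rather than being called directly, NeMo Guardrails can route the input and output checks through a Llama Guard endpoint. A hedged sketch of the application side, assuming a local ./config directory whose config.yml registers the main LLM, a llama_guard model (e.g. served by vLLM), and the Llama Guard input/output check flows; that config directory is not shown here.

from nemoguardrails import RailsConfig, LLMRails

# Load the rails configuration that wires Llama Guard into the request path.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])  # blocked requests come back as a refusal instead of a completion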

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
