
llm-evaluation

LLM evaluation and testing patterns including prompt testing, hallucination detection, benchmark creation, and quality metrics. Use when testing LLM applications, validating prompt quality, implementing systematic evaluation, or measuring LLM performance.
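
To make that scope concrete, here is a minimal, illustrative Python sketch of the kind of prompt test and quality metric these patterns cover. It is not taken from the skill itself; the PromptCase fields, the run_model stub, and the example case are assumptions made for illustration only.

  # Illustrative prompt-test harness with a pass-rate quality metric and a
  # crude grounding/hallucination check. Names here (PromptCase, run_model,
  # CASES) are assumptions for this sketch, not part of the llm-evaluation skill.
  from dataclasses import dataclass

  @dataclass
  class PromptCase:
      prompt: str                 # input sent to the model
      required_facts: list[str]   # substrings a grounded answer must contain
      forbidden_facts: list[str]  # substrings that signal a hallucinated answer

  def run_model(prompt: str) -> str:
      # Stub: replace with a real call to your LLM provider.
      raise NotImplementedError

  def evaluate(cases: list[PromptCase]) -> float:
      passed = 0
      for case in cases:
          answer = run_model(case.prompt).lower()
          grounded = all(fact.lower() in answer for fact in case.required_facts)
          hallucinated = any(fact.lower() in answer for fact in case.forbidden_facts)
          if grounded and not hallucinated:
              passed += 1
      return passed / len(cases)  # quality metric: share of cases that pass

  CASES = [
      PromptCase(
          prompt="In what year did Apollo 11 land on the Moon?",
          required_facts=["1969"],
          forbidden_facts=["1968", "1970"],
      ),
  ]
  # evaluate(CASES) returns the benchmark pass rate once run_model is wired up.

Swapping the stub for a real provider call and growing the case list into a benchmark gives a simple quality metric (the pass rate), with the forbidden_facts check acting as a rough stand-in for hallucination detection.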

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 45
Hot score: 91
Updated: March 20, 2026
Overall rating: C (2.7)
Composite score: 2.7
Best-practice grade: B (71.9)

Install command

npx @skill-hub/cli install applied-artificial-intelligence-claude-code-toolkit-llm-evaluation

Repository

applied-artificial-intelligence/claude-code-toolkit

Skill path: skills/llm-evaluation



Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI, Testing.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: applied-artificial-intelligence.

This is still a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install llm-evaluation into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/applied-artificial-intelligence/claude-code-toolkit before adding llm-evaluation to shared team environments
  • Use llm-evaluation for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
