llm-evaluation
LLM evaluation and testing patterns including prompt testing, hallucination detection, benchmark creation, and quality metrics. Use when testing LLM applications, validating prompt quality, implementing systematic evaluation, or measuring LLM performance.
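To give a flavor of the prompt-testing and quality-metric patterns this skill covers, here is a minimal, hedged sketch of a prompt test suite scored by exact-match accuracy. Everything here is illustrative: `call_model`, `PromptCase`, and the pass threshold are assumptions, not part of the packaged skill.

```python
# Minimal sketch of a prompt-testing pattern: run a fixed set of
# prompt cases, score with exact match, fail below a threshold.
# `call_model` is a hypothetical stand-in for your LLM client.
from dataclasses import dataclass


@dataclass
class PromptCase:
    prompt: str
    expected: str  # reference answer used for exact-match scoring


def call_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError


def run_suite(cases: list[PromptCase], threshold: float = 0.9) -> bool:
    """Score each case with exact match and pass/fail on accuracy."""
    hits = sum(call_model(c.prompt).strip() == c.expected for c in cases)
    accuracy = hits / len(cases)
    print(f"accuracy: {accuracy:.2%} over {len(cases)} cases")
    return accuracy >= threshold
```

Exact match is the simplest possible quality metric; real suites typically swap in fuzzier scorers (semantic similarity, rubric-based LLM judges) behind the same harness shape.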
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source is reproduced below.
Install command
npx @skill-hub/cli install applied-artificial-intelligence-claude-code-toolkit-llm-evaluation
Repository
Skill path: skills/llm-evaluation
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Data / AI, Testing.
Target audience: Development teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: applied-artificial-intelligence.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install llm-evaluation into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/applied-artificial-intelligence/claude-code-toolkit before adding llm-evaluation to shared team environments
- Use llm-evaluation to add systematic evaluation checks to day-to-day development workflows (see the sketch after this list)
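As one example of the hallucination-detection pattern named in the description, here is a hedged sketch of a simple lexical grounding check: flag answer sentences with little token overlap against the source context. The function name and the 0.5 overlap threshold are assumptions for illustration, not the skill's actual implementation.

```python
# Simple grounding check: split the answer into sentences and flag
# any sentence whose tokens barely overlap with the source context.
import re


def ungrounded_sentences(
    answer: str, context: str, min_overlap: float = 0.5
) -> list[str]:
    """Return answer sentences with token overlap below min_overlap."""
    context_tokens = set(re.findall(r"\w+", context.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        tokens = set(re.findall(r"\w+", sentence.lower()))
        if not tokens:
            continue
        overlap = len(tokens & context_tokens) / len(tokens)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged
```

Run against a RAG answer, an empty return list means every sentence cleared the overlap threshold; anything flagged is worth a closer look. Production pipelines usually layer an NLI model or LLM judge on top of a cheap lexical pass like this one.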
Works across
Claude Code, Codex CLI, Gemini CLI, and OpenCode.
Favorites: 0.
Sub-skills: 0.
Aggregator: No.