ai-evals
Create an AI Evals Pack (eval PRD, test set, rubric, judge plan, results + iteration loop). Use for LLM evaluation, benchmarks, rubrics, error analysis/open coding, and ship/no-ship quality gates for AI features.
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install liqiongyu-lenny-skills-plus-ai-evals
Repository
Skill path: skills/ai-evals
Best for
Primary workflow: Research & Ops.
Technical facets: Full Stack, Data / AI, Testing.
Target audience: Development teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: liqiongyu.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install ai-evals into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/liqiongyu/lenny_skills_plus before adding ai-evals to shared team environments
- Use ai-evals for LLM evaluation workflows: test sets, rubrics, judge plans, and ship/no-ship quality gates (see the sketch after this list)
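The skill's actual templates live in the linked repository; as a rough illustration of the workflow its description names (test set, rubric, judge plan, ship/no-ship gate), here is a minimal Python sketch. Every name in it (Criterion, TestCase, toy_judge, the example criteria and the 0.9 threshold) is hypothetical, invented for this example rather than taken from ai-evals.

```python
# Hypothetical sketch of a rubric-driven eval loop with a ship gate.
# None of these names or values come from the ai-evals skill itself.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    name: str
    description: str
    weight: float  # relative importance in the final score

@dataclass
class TestCase:
    prompt: str
    reference: str  # expected behavior, consulted by the judge

# A judge maps (test case, model output) -> per-criterion pass/fail verdicts.
Judge = Callable[[TestCase, str], dict[str, bool]]

def score(case: TestCase, output: str, rubric: list[Criterion], judge: Judge) -> float:
    """Weighted rubric score in [0, 1] for one model output."""
    verdicts = judge(case, output)
    total = sum(c.weight for c in rubric)
    earned = sum(c.weight for c in rubric if verdicts.get(c.name, False))
    return earned / total

def gate(scores: list[float], threshold: float = 0.9) -> bool:
    """Ship/no-ship gate: the average score must clear the threshold."""
    return sum(scores) / len(scores) >= threshold

rubric = [
    Criterion("faithful", "Answer is grounded in the reference", 2.0),
    Criterion("concise", "Answer stays under the length budget", 1.0),
]

def toy_judge(case: TestCase, output: str) -> dict[str, bool]:
    # Stand-in for an LLM judge: a real pack would call a model here.
    return {
        "faithful": case.reference.lower() in output.lower(),
        "concise": len(output) <= 200,
    }

if __name__ == "__main__":
    case = TestCase(prompt="Summarize the refund policy.",
                    reference="30-day refund")
    output = "Customers may request a 30-day refund on any plan."
    s = score(case, output, rubric, toy_judge)
    print(f"score={s:.2f}, ship={gate([s])}")
```

The iteration loop the entry mentions would sit around this: run the gate over the whole test set, open-code the failures into new rubric criteria or test cases, and re-run until the gate passes.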
Works across
Claude Code, Codex CLI, Gemini CLI, and OpenCode (the install targets listed above).
Catalog stats
Favorites: 0.
Sub-skills: 0.
Aggregator: No.