
ai-evals

Create an AI Evals Pack (eval PRD, test set, rubric, judge plan, results + iteration loop). Use for LLM evaluation, benchmarks, rubrics, error analysis/open coding, and ship/no-ship quality gates for AI features.
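For orientation, here is a minimal sketch of how the pack's artifacts could fit together; the field names and types below are illustrative assumptions, not the schema this skill actually emits.

```python
# Hypothetical shape of an AI Evals Pack. All field names here are
# illustrative assumptions, not the pack's real schema.
from dataclasses import dataclass, field

@dataclass
class EvalsPack:
    eval_prd: str                                         # what "good" means and why it is measured
    test_set: list[dict] = field(default_factory=list)    # prompts plus expected behavior
    rubric: dict[str, str] = field(default_factory=dict)  # criterion name -> pass/fail definition
    judge_plan: str = ""                                  # how outputs get scored (human or LLM judge)
    results: list[dict] = field(default_factory=list)     # scored runs, fed into the next iteration
```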

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first; the original raw source lives below.

Stars: 36
Hot score: 90
Updated: March 20, 2026
Overall rating: C (composite score 2.1)
Best-practice grade: A (88.4)

Install command

npx @skill-hub/cli install liqiongyu-lenny-skills-plus-ai-evals

Repository

liqiongyu/lenny_skills_plus

Skill path: skills/ai-evals

Open repository

Best for

Primary workflow: Research & Ops.

Technical facets: Full Stack, Data / AI, Testing.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: liqiongyu.

This is a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install ai-evals into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/liqiongyu/lenny_skills_plus before adding ai-evals to shared team environments
  • Use ai-evals to drive LLM evaluation workflows: drafting an eval PRD, building a test set, and gating releases against a rubric (a minimal sketch of such a loop follows below)
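To make the last item concrete, here is a minimal sketch of a rubric-scored evaluation loop with a ship/no-ship gate. The rubric criteria, the 90% threshold, and the judge stub are all assumptions for illustration; a real setup would replace `judge` with an actual LLM-judge call and use the pack's own test set.

```python
# Minimal rubric + judge + gate loop. Criteria, threshold, and the judge
# heuristic are illustrative assumptions, not part of the ai-evals pack.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    prompt: str
    reference: str  # expected behavior the judge checks against

RUBRIC = {
    "faithful": "Grounded in the reference; no fabrication.",
    "complete": "Addresses every part of the prompt.",
}
SHIP_THRESHOLD = 0.90  # assumed gate: ship only if >= 90% of checks pass

def judge(case: Case, answer: str, criterion: str) -> bool:
    """Stand-in for an LLM-judge call scoring one rubric criterion."""
    return case.reference.lower() in answer.lower()  # placeholder heuristic

def run_evals(cases: list[Case], system_under_test: Callable[[str], str]) -> bool:
    passes = total = 0
    for case in cases:
        answer = system_under_test(case.prompt)
        for criterion in RUBRIC:
            total += 1
            if judge(case, answer, criterion):
                passes += 1
            else:
                print(f"FAIL [{criterion}] prompt={case.prompt!r}")
    score = passes / total if total else 0.0
    print(f"pass rate: {score:.1%} (gate: {SHIP_THRESHOLD:.0%})")
    return score >= SHIP_THRESHOLD  # the ship/no-ship decision

if __name__ == "__main__":
    demo = [Case("What is 2+2?", "4")]
    print("SHIP" if run_evals(demo, lambda p: "The answer is 4.") else "NO-SHIP")
```

Failures printed along the way feed the error-analysis/open-coding step: read them, refine the rubric or test set, and re-run.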

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
