
agent-evals

Provides a systematic approach to evaluating AI agents using binary criteria instead of subjective scales. Focuses on error analysis, trace examination, and creating component-level tests. Includes practical templates for graders and workflows to identify failure patterns in agent reasoning.
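
The grader templates themselves live in the repository; as a rough illustration of the binary-criteria idea only, a pass/fail grader might look like the sketch below. All names and criteria here are hypothetical, not the skill's actual API.

    # Minimal sketch of a binary grader over an agent trace. The names
    # (Criterion, grade_trace) and the criteria are illustrative only;
    # the skill ships its own templates in the repository.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Criterion:
        name: str
        check: Callable[[str], bool]  # hard pass/fail, never a 1-5 scale

    CRITERIA = [
        # Each criterion tests one component of the run, so a failure
        # points at a specific step rather than a vague overall score.
        Criterion("used_search_tool", lambda trace: "tool:search" in trace),
        Criterion("no_apology_loop", lambda trace: trace.count("I apologize") < 2),
    ]

    def grade_trace(trace: str) -> dict[str, bool]:
        return {c.name: c.check(trace) for c in CRITERIA}

    if __name__ == "__main__":
        sample = "tool:search(q='refund policy')\nfinal: Refunds within 30 days."
        print(grade_trace(sample))
        # {'used_search_tool': True, 'no_apology_loop': True}

Binary verdicts like these aggregate cleanly into per-criterion failure counts across many traces, which is what makes the error analysis and regression testing the skill describes tractable.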

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars: 159
Hot score: 96
Updated: March 19, 2026
Overall rating: A (8.2)
Composite score: 6.6
Best-practice grade: A (88.4)

Install command

npx @skill-hub/cli install panaversity-agentfactory-agent-evals

Tags: agent-evaluation, testing-framework, error-analysis, llm-judge, regression-testing

Repository

panaversity/agentfactory

Skill path: .claude/skills/agent-evals


Best for

Primary workflow: Analyze Data & AI.

Technical facets: Data / AI, Testing.

Target audience: AI/ML teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: panaversity.

This is still a mirrored public skill entry: review the repository before installing it into production workflows.

What it helps with

  • Install agent-evals into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/panaversity/agentfactory before adding agent-evals to shared team environments
  • Use agent-evals for AI/ML evaluation workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
