agent-evaluation
Provides a detailed framework for evaluating Claude Code agents and skills, covering evaluation methods, metrics, and practical implementation guidance. Focuses on outcome-based assessment rather than exact execution paths, with specific metrics for different evaluation scenarios.
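To make the outcome-based approach concrete, here is a minimal sketch of an outcome grader. This is illustrative only, not the skill's actual API: the `EvalCase`, `AgentResult`, and `gradeOutcome` names are assumptions. The point is that the grader inspects only the final artifact, never the sequence of tool calls that produced it.

```typescript
// Hypothetical types -- illustrative only, not part of agent-evaluation's API.
interface AgentResult {
  finalOutput: string; // what the agent ultimately produced
  toolCalls: string[]; // the execution path (deliberately NOT graded)
}

interface EvalCase {
  prompt: string;
  // Outcome predicate: passes if the end state is acceptable,
  // regardless of which path the agent took to get there.
  acceptOutcome: (result: AgentResult) => boolean;
}

// Grade a single run by its outcome, ignoring the execution path.
function gradeOutcome(testCase: EvalCase, result: AgentResult): boolean {
  return testCase.acceptOutcome(result);
}

// Example: any run that ends with valid JSON passes,
// whether the agent used one tool call or ten.
const jsonFixCase: EvalCase = {
  prompt: "Fix the syntax error in config.json",
  acceptOutcome: (r) => {
    try {
      JSON.parse(r.finalOutput);
      return true;
    } catch {
      return false;
    }
  },
};
```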
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install neolabhq-context-engineering-kit-agent-evaluation
Repository
https://github.com/NeoLabHQ/context-engineering-kit
Skill path: plugins/customaize-agent/skills/agent-evaluation
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Data / AI, Testing.
Target audience: Meta teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: NeoLabHQ.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install agent-evaluation into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/NeoLabHQ/context-engineering-kit before adding agent-evaluation to shared team environments (a trial-based metric sketch follows this list)
- Use agent-evaluation in meta workflows, i.e. evaluating agents and skills themselves
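Because agent runs are non-deterministic, a single pass/fail is weak evidence. One common metric (assumed here, not taken from the skill itself) is the pass rate over repeated trials; a minimal sketch, reusing the hypothetical types from the grader above:

```typescript
// Hypothetical runner -- assumes some way to execute the agent per trial.
async function passRate(
  testCase: EvalCase,
  runAgent: (prompt: string) => Promise<AgentResult>,
  trials: number = 5,
): Promise<number> {
  let passes = 0;
  for (let i = 0; i < trials; i++) {
    const result = await runAgent(testCase.prompt);
    if (gradeOutcome(testCase, result)) passes++;
  }
  return passes / trials; // e.g. 0.8 = 4 of 5 trials reached an acceptable outcome
}
```

A team might, for example, require a pass rate of at least 0.8 before promoting a skill into a shared environment; the threshold and trial count are judgment calls, not values prescribed by this skill.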
Works across
Claude Code, Codex CLI, Gemini CLI, and OpenCode.
Favorites: 0.
Sub-skills: 0.
Aggregator: No.