
llm-evaluation

Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.
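For orientation, here is a minimal sketch of the kind of automated-metric harness this skill is meant to help you build: a fixed set of test cases, a scoring function, and an aggregate score over a pluggable model call. Everything below (CASES, match_score, run_eval, and the stubbed model) is a hypothetical illustration, not code taken from the skill.

from typing import Callable

# Hypothetical test set; real suites would load cases from a file or dataset.
CASES = [
    {"prompt": "What is 2 + 2?", "expected": "4"},
    {"prompt": "Name the capital of France.", "expected": "Paris"},
]

def match_score(output: str, expected: str) -> float:
    # Simple containment metric: 1.0 if the expected answer appears in the output.
    return 1.0 if expected.strip().lower() in output.strip().lower() else 0.0

def run_eval(generate: Callable[[str], str]) -> float:
    # Run every case through the model and average the per-case scores.
    scores = [match_score(generate(c["prompt"]), c["expected"]) for c in CASES]
    return sum(scores) / len(scores)

# Stubbed model for demonstration; swap in a real LLM client call.
print(run_eval(lambda p: "4" if "2 + 2" in p else "Paris"))

In practice a harness like this grows richer metrics (semantic similarity, LLM-as-judge scoring) plus human-feedback and benchmarking layers, which is the territory the skill description covers.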

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source follows below.

Stars: 0.
Hot score: 74.
Updated: March 20, 2026.
Overall rating: C (2.5).
Composite score: 2.5.
Best-practice grade: A (92.0).

Install command

npx @skill-hub/cli install agentgptsmith-monadframework-llm-evaluation

Repository

agentgptsmith/MonadFramework

Skill path: .claude/skills/llm-evaluation
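
Assuming the repository follows the standard Claude Code skill layout (an assumption; verify against the repository before installing), that path would contain a SKILL.md file holding the skill's name, description, and instructions:

.claude/skills/llm-evaluation/
  SKILL.md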

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI, Testing.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: agentgptsmith.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install llm-evaluation into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/agentgptsmith/MonadFramework before adding llm-evaluation to shared team environments
  • Use llm-evaluation in testing and data/AI development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode.

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
