promptfoo-evaluation
Configures and runs LLM evaluation using Promptfoo framework. Use when setting up prompt testing, creating evaluation configs (promptfooconfig.yaml), writing Python custom assertions, implementing llm-rubric for LLM-as-judge, or managing few-shot examples in prompts. Triggers on keywords like "promptfoo", "eval", "LLM evaluation", "prompt testing", or "model comparison".
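For orientation, here is roughly what the artifacts named above look like in practice. The config below is a minimal sketch, not taken from this repository; the provider ID, prompt, rubric wording, and file names are illustrative assumptions.

```yaml
# promptfooconfig.yaml - minimal sketch (all values illustrative)
description: One-sentence summarization eval
prompts:
  - "Summarize the following text in one sentence: {{text}}"
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      text: "Promptfoo is a framework for testing and comparing LLM prompts."
    assert:
      # LLM-as-judge: a grader model scores the output against this rubric
      - type: llm-rubric
        value: Is a single, faithful sentence about prompt testing
      # Custom Python assertion loaded from a local file
      - type: python
        value: file://custom_assert.py
```

The referenced custom_assert.py is where a Python custom assertion would live. Promptfoo calls a get_assert(output, context) function in that file and accepts a bool, a float score, or a GradingResult-style dict; the specific check here is a hypothetical example.

```python
# custom_assert.py - hypothetical custom assertion sketch
def get_assert(output: str, context) -> dict:
    # Pass only if the summary is a single sentence of reasonable length
    ok = output.count(".") <= 1 and len(output.split()) >= 5
    return {
        "pass": ok,
        "score": 1.0 if ok else 0.0,
        "reason": "concise single sentence" if ok else "too short or multi-sentence",
    }
```

With both files in place, `npx promptfoo@latest eval` runs the test matrix and `promptfoo view` opens the local results viewer.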
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context. The original raw source appears below.
Install command
npx @skill-hub/cli install nguyendinhquocx-code-ai-promptfoo-evaluation
Repository
Skill path: skills/promptfoo-evaluation
Best for
Primary workflow: Write Technical Docs.
Technical facets: Full Stack, Data / AI, Tech Writer, Testing.
Target audience: Development teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: nguyendinhquocx.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install promptfoo-evaluation into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/nguyendinhquocx/code-ai before adding promptfoo-evaluation to shared team environments
- Use promptfoo-evaluation for development workflows
Works across
Claude Code, Codex CLI, Gemini CLI, and OpenCode (the install targets listed above).
Catalog stats
Favorites: 0.
Sub-skills: 0.
Aggregator: No.