ref-hallucination-arena
Benchmark LLM reference recommendation capabilities by verifying every cited paper against Crossref, PubMed, arXiv, and DBLP. Measures hallucination rate, per-field accuracy (title/author/year/DOI), discipline breakdown, and year constraint compliance. Supports tool-augmented (ReAct + web search) mode. Use when the user asks to evaluate, benchmark, or compare models on academic reference hallucination, literature recommendation quality, or citation accuracy.
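To make the verification step concrete, below is a minimal sketch of the kind of per-reference check the arena performs, using only the Crossref works API via the `requests` library. The function names, the exact-title match, and the single-source lookup are illustrative assumptions, not the skill's actual API; the real benchmark also queries PubMed, arXiv, and DBLP and scores per-field accuracy for title, author, year, and DOI.

```python
import requests

def crossref_lookup(title: str, rows: int = 3) -> list[dict]:
    """Query Crossref for candidate records matching a cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

def looks_hallucinated(cited: dict) -> bool:
    """Illustrative check: treat a reference as hallucinated if no Crossref
    candidate matches its title. A real pipeline would likely use fuzzy
    matching and consult multiple bibliographic sources."""
    for item in crossref_lookup(cited["title"]):
        found = (item.get("title") or [""])[0]
        if found.strip().lower() == cited["title"].strip().lower():
            return False
    return True

# Hypothetical model output: one recommended reference.
refs = [{"title": "Attention Is All You Need", "year": 2017}]
rate = sum(looks_hallucinated(r) for r in refs) / len(refs)
print(f"hallucination rate: {rate:.2%}")
```

Hallucination rate here is simply the fraction of cited papers that cannot be matched to any bibliographic record; the skill layers per-field accuracy, discipline breakdown, and year-constraint checks on top of that basic verification.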
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install agentscope-ai-openjudge-ref-hallucination-arena
Repository
Skill path: skills/ref-hallucination-arena
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack, Frontend.
Target audience: Development teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: agentscope-ai.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install ref-hallucination-arena into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/agentscope-ai/OpenJudge before adding ref-hallucination-arena to shared team environments
- Use ref-hallucination-arena to evaluate, benchmark, or compare models on reference hallucination and citation accuracy
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.