peer-review
Structured manuscript/grant review with checklist-based evaluation. Use when writing formal peer reviews against specific criteria: methodology assessment, statistical validity, reporting standards compliance (CONSORT/STROBE), and constructive feedback. Best for actual review writing and manuscript revision. For evaluating claims or evidence quality, use scientific-critical-thinking; for quantitative scoring frameworks, use scholar-evaluation.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install agentgptsmith-monadframework-peer-review
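Assuming the SkillHub CLI is available through npx, a typical install-and-verify session might look like the following sketch (the install command is the one listed above; the verification path is the skill path documented below):

```shell
# Install the mirrored skill entry via the SkillHub CLI
npx @skill-hub/cli install agentgptsmith-monadframework-peer-review

# Confirm the skill files landed at the documented skill path
ls .claude/skills/scientific/peer-review
```

If the second command lists the skill's files, the entry is available to your local agent workflows.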
Repository
Skill path: .claude/skills/scientific/peer-review
Best for
Primary workflow: Write Technical Docs.
Technical facets: Full Stack, Data / AI, Tech Writer.
Target audience: Development teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: agentgptsmith.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install peer-review into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/agentgptsmith/MonadFramework before adding peer-review to shared team environments
- Use peer-review in manuscript- and grant-review development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.