gemini-peer-review
Get a second opinion from Gemini on code, architecture, debugging, or security. Uses direct Gemini API calls — no CLI dependencies. Trigger with 'ask gemini', 'gemini review', 'second opinion', 'peer review', or 'consult gemini'.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install jezweb-claude-skills-gemini-peer-review
Repository
Skill path: plugins/dev-tools/skills/gemini-peer-review
Open repository
Best for
Primary workflow: Run DevOps.
Technical facets: Full Stack, Backend, Security.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: jezweb.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install gemini-peer-review into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/jezweb/claude-skills before adding gemini-peer-review to shared team environments
- Use gemini-peer-review for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: gemini-peer-review
description: "Get a second opinion from Gemini on code, architecture, debugging, or security. Uses direct Gemini API calls — no CLI dependencies. Trigger with 'ask gemini', 'gemini review', 'second opinion', 'peer review', or 'consult gemini'."
compatibility: claude-code-only
---
# Gemini Peer Review
Consult Gemini as a coding peer for a second opinion on code quality, architecture decisions, debugging, or security reviews.
## Setup
**API Key**: Set `GEMINI_API_KEY` as an environment variable. Get a key from https://aistudio.google.com/apikey if you don't have one.
```bash
export GEMINI_API_KEY="your-key-here"
```
## Workflow
1. **Determine mode** from user request (review, architect, debug, security, quick)
2. **Read target files** into context
3. **Build prompt** using the AI-to-AI template from [references/prompt-templates.md](references/prompt-templates.md)
4. **Write prompt to file** at `.claude/artifacts/gemini-prompt.txt` (avoids shell escaping issues)
5. **Call the API** — generate a Python script that:
- Reads `GEMINI_API_KEY` from environment
- Reads the prompt from `.claude/artifacts/gemini-prompt.txt`
- POSTs to `https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent`
- Payload: `{"contents": [{"parts": [{"text": prompt}]}], "generationConfig": {"temperature": 0.3, "maxOutputTokens": 8192}}`
- Extracts text from `candidates[0].content.parts[0].text`
- Prints result to stdout
Write the script to `.claude/scripts/gemini-review.py` and run it.
6. **Synthesize** — present Gemini's findings, add your own perspective (agree/disagree), let the user decide what to implement
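The script described in step 5 might be sketched as follows. This is an illustrative implementation, not the skill's canonical script: the payload shape, paths, and response field access come from the workflow above, while the helper names (`build_payload`, `extract_text`) are assumptions for clarity.

```python
# Sketch of the generated API-call script (workflow step 5).
import json
import os
import urllib.request

PROMPT_PATH = ".claude/artifacts/gemini-prompt.txt"
MODEL = "gemini-2.5-flash"  # swap for gemini-2.5-pro in architect/security modes

def build_payload(prompt: str) -> dict:
    """Single user turn with the generation config from step 5."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {"temperature": 0.3, "maxOutputTokens": 8192},
    }

def extract_text(response: dict) -> str:
    """Pull the reply text out of the first candidate."""
    return response["candidates"][0]["content"]["parts"][0]["text"]

def main() -> None:
    api_key = os.environ["GEMINI_API_KEY"]  # fails loudly if unset
    with open(PROMPT_PATH, encoding="utf-8") as f:
        prompt = f.read()
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"{MODEL}:generateContent?key={api_key}"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(extract_text(json.load(resp)))

if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    main()
```

Using only the standard library keeps the script dependency-free, matching the skill's "no CLI dependencies" claim.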
## Modes
### Code Review
Review specific files for bugs, logic errors, security vulnerabilities, performance issues, and best practice violations.
Read the target files, build a prompt using the Code Review template, call with `gemini-2.5-flash`.
### Architecture Advice
Get feedback on design decisions with trade-off analysis. Include project context (CLAUDE.md, relevant source files).
Read project context, build a prompt using the Architecture template, call with `gemini-2.5-pro`.
### Debugging Help
Analyse errors when stuck after 2+ failed fix attempts. Gemini sees the code fresh without your debugging context bias.
Read the problematic files, build a prompt using the Debug template (include error message and previous attempts), call with `gemini-2.5-flash`.
### Security Scan
Scan code for security vulnerabilities (injection, auth bypass, data exposure).
Read the target directory's source files, build a prompt using the Security template, call with `gemini-2.5-pro`.
### Quick Question
Fast question without file context. Build prompt inline, write to file, call with `gemini-2.5-flash`.
## Model Selection
| Mode | Model | Why |
|------|-------|-----|
| review, debug, quick | `gemini-2.5-flash` | Fast, good for straightforward analysis |
| architect, security-scan | `gemini-2.5-pro` | Better reasoning for complex trade-offs |
Check current model IDs if errors occur — they change frequently:
```bash
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY" | python3 -c "import sys,json; [print(m['name']) for m in json.load(sys.stdin)['models'] if 'gemini' in m['name']]"
```
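The mode-to-model mapping in the table can be expressed as a simple lookup. The constant name is hypothetical; the model IDs are the ones the skill specifies, and as noted above they should be re-verified when API errors occur.

```python
# Assumed lookup table mirroring the Model Selection table above.
MODEL_FOR_MODE = {
    "review": "gemini-2.5-flash",
    "debug": "gemini-2.5-flash",
    "quick": "gemini-2.5-flash",
    "architect": "gemini-2.5-pro",
    "security-scan": "gemini-2.5-pro",
}

def model_for(mode: str) -> str:
    """Default to flash for unknown modes, since it is the faster option."""
    return MODEL_FOR_MODE.get(mode, "gemini-2.5-flash")
```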
## When to Use
**Good use cases**:
- Before committing major changes (final review)
- When stuck debugging after multiple attempts
- Architecture decisions with multiple valid options
- Security-sensitive code review
**Avoid using for**:
- Simple syntax checks (Claude handles these faster)
- Every single edit (too slow, unnecessary)
- Questions with obvious answers
## Prompt Construction
**Critical**: Always use the AI-to-AI prompting format. Write the full prompt to a file — never pass code inline via bash arguments (shell escaping will break it).
When building the prompt:
1. Start with the AI-to-AI header from [references/prompt-templates.md](references/prompt-templates.md)
2. Append the mode-specific template
3. Append the file contents with clear `--- filename ---` separators
4. Write to `.claude/artifacts/gemini-prompt.txt`
5. Generate and run the API call script
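The assembly steps above can be sketched as a small helper. `build_prompt` and `write_prompt` are hypothetical names; the `--- filename ---` separator and the artifact path come directly from steps 3 and 4.

```python
# Sketch of prompt assembly: header + mode template + file contents
# joined with clear separators, then written to the artifact path.
from pathlib import Path

def build_prompt(header: str, template: str, files: list[str]) -> str:
    sections = [header, template]
    for name in files:
        body = Path(name).read_text(encoding="utf-8")
        sections.append(f"--- {name} ---\n{body}")
    return "\n\n".join(sections)

def write_prompt(prompt: str,
                 path: str = ".claude/artifacts/gemini-prompt.txt") -> None:
    out = Path(path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(prompt, encoding="utf-8")
```

Writing the assembled prompt to a file, rather than passing it as a shell argument, sidesteps the escaping problems called out above.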
## Reference Files
| When | Read |
|------|------|
| Building prompts for any mode | [references/prompt-templates.md](references/prompt-templates.md) |
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### references/prompt-templates.md
````markdown
# Prompt Templates
## AI-to-AI Prompting Format
Always use this format when constructing prompts for Gemini. It prevents role confusion — Gemini knows it's advising Claude Code, not talking to the human.
```
[Claude Code consulting Gemini for peer review]
Task: [Specific task description]
Provide direct analysis with file:line references. I will synthesize your findings with mine before presenting to the developer.
```
## Per-Mode Templates
### Code Review
```
[Claude Code consulting Gemini for peer review]
Task: Code review — check for bugs, logic errors, security vulnerabilities (SQL injection, XSS, etc.), performance issues, best practice violations, type safety problems, and missing error handling.
[If reference docs available:]
Check against these official docs:
- [URL]
Provide direct analysis with file:line references. I will synthesize your findings with mine before presenting to the developer.
```
### Architecture Advice
```
[Claude Code consulting Gemini for peer review]
Task: Architecture advice — [description of the decision or problem]
Analyse for: architectural anti-patterns, scalability concerns, maintainability issues, better alternatives, potential technical debt.
Provide specific, actionable recommendations and rationale. I will synthesize your findings with mine before presenting to the developer.
```
### Debugging Help
```
[Claude Code consulting Gemini for peer review]
Task: Debug analysis — identify root cause (not just symptoms), explain why it's happening, suggest specific fix with code example, and how to prevent in future.
Error: [error message/description]
What was tried: [previous attempts if any]
Provide direct analysis with file:line references. I will synthesize your findings with mine before presenting to the developer.
```
### Security Scan
```
[Claude Code consulting Gemini for peer review]
Task: Security audit — check for injection vulnerabilities, authentication/authorisation issues, data exposure, insecure defaults, missing input validation, CORS misconfiguration, and credential handling.
Provide direct analysis with file:line references and severity ratings. I will synthesize your findings with mine before presenting to the developer.
```
## Flash vs Pro
- **Flash** (default): Faster (~5-15s), good for code reviews, debugging, most tasks. Safer with directory scanning.
- **Pro**: Better reasoning (~15-30s), preferred for complex architecture decisions and security reviews. May try to use tools aggressively when scanning directories.
Both models have similar quality on straightforward tasks. Pro's advantage shows on complex reasoning where trade-off analysis matters.
````