
check-your-plan

Validates AI implementation plans before execution. Use when user says "check your plan", "validate this plan", "review the plan", or "is this plan good". Launches 5 parallel validators + devil's advocate.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first; the original raw source appears below.

Stars: 30
Hot score: 89
Updated: March 20, 2026
Overall rating: C (3.3)
Composite score: 3.3
Best-practice grade: B (73.6)

Install command

npx @skill-hub/cli install enbyaugust-zacs-claude-skills-check-your-plan

Repository

enbyaugust/zacs-claude-skills

Skill path: skills/check-your-plan


Open repository

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: enbyaugust.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install check-your-plan into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/enbyaugust/zacs-claude-skills before adding check-your-plan to shared team environments
  • Use check-your-plan for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: check-your-plan
description: Validates AI implementation plans before execution. Use when user says "check your plan", "validate this plan", "review the plan", or "is this plan good". Launches 5 parallel validators + devil's advocate.
allowed-tools: Task, Read, Glob, Grep, AskUserQuestion
version: 1.1.0
---

# Check Your Plan

> Validate AI-generated implementation plans before execution to catch hallucinations, pattern violations, and scope creep.

<when_to_use>

## When to Use

Invoke when user says:

- "check your plan"
- "validate this plan"
- "review the plan"
- "is this plan good"
- After Claude presents an implementation plan
- Before starting significant implementation work

**Different from check-your-code/work**: check-your-code and check-your-work review written code. check-your-plan reviews the PLAN before code is written.
</when_to_use>

<workflow>

## Workflow Overview

| Phase | Agents     | Action                                        |
| ----- | ---------- | --------------------------------------------- |
| 1     | -          | Plan Discovery (locate plan, extract content) |
| 2     | 5 parallel | Plan Validation (specialized reviewers)       |
| 3     | 1          | Devil's Advocate (challenge all findings)     |
| 4     | -          | Report + User Decision                        |

For Phase 2 details: [references/phase-2-plan-validators.md](references/phase-2-plan-validators.md)
For Phase 3 details: [references/phase-3-devil-advocate.md](references/phase-3-devil-advocate.md)
For report format: [templates/plan-assessment.md](templates/plan-assessment.md)
</workflow>

<agents>

## Agent Summary

### Phase 2 (5 Parallel)

| Agent                      | Focus                                                   |
| -------------------------- | ------------------------------------------------------- |
| completeness-checker       | All requirements addressed? 70% problem? Vague steps?   |
| pattern-compliance-checker | CLAUDE.md rules? TanStack Query? Zod? File conventions? |
| feasibility-checker        | Hallucinated APIs? Real file paths? Valid dependencies? |
| risk-assessor              | Security gaps? Missing error handling? Rollback plan?   |
| scope-discipline-checker   | Over-engineering? Scope creep? Simplest solution?       |

### Phase 3 (Devil's Advocate)

| Agent          | Focus                                          |
| -------------- | ---------------------------------------------- |
| devil-advocate | Challenge all findings, reduce false positives |

</agents>

<severity>

## Severity Classification

| Level | Meaning                 | Example                                            |
| ----- | ----------------------- | -------------------------------------------------- |
| P0    | Plan will fail          | Hallucinated API, wrong file path, missing dep     |
| P1    | Major pattern violation | useState+service, handwritten interface, no org_id |
| P2    | Could be better         | Minor pattern deviation, missing edge case         |
| P3    | Suggestion              | Over-engineering detected, style preference        |

**P0 requires evidence**: "File X doesn't exist" not just "might be wrong"
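
As a rough illustration only (the `Finding` shape here is a simplified assumption, not the skill's internal format), the severity table and the evidence rule could be encoded like this:

```typescript
// Hypothetical sketch: group validator findings by severity for reporting,
// enforcing the "P0 requires evidence" rule from the table above.
type Severity = "P0" | "P1" | "P2" | "P3";

interface Finding {
  severity: Severity;
  description: string;
  evidence?: string; // required in practice for P0 findings
}

function groupBySeverity(findings: Finding[]): Record<Severity, Finding[]> {
  const groups: Record<Severity, Finding[]> = { P0: [], P1: [], P2: [], P3: [] };
  for (const f of findings) {
    if (f.severity === "P0" && !f.evidence) {
      // An unevidenced "plan will fail" claim is treated as a major concern,
      // not a blocker.
      groups.P1.push({ ...f, severity: "P1" });
    } else {
      groups[f.severity].push(f);
    }
  }
  return groups;
}
```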
</severity>

<approval_gates>

## Approval Gates

| Gate   | Phase | Question                                    |
| ------ | ----- | ------------------------------------------- |
| Scope  | 1     | "Review this plan?" (if plan >50 lines)     |
| Action | 4     | "Revise plan / Proceed as-is / Start over?" |

</approval_gates>

<execution>

## Phase 1: Plan Discovery

1. Locate the plan to validate:
   - Check for plan file path in conversation context
   - Look in `.claude/plans/` for recent plan files
   - If no plan file, check if plan was stated inline in conversation

2. Extract plan content and metadata:
   - Files to be modified
   - Steps/tasks count
   - Dependencies mentioned

3. If plan >50 lines, use AskUserQuestion to confirm scope
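
The "recent plan file" selection in step 1 can be sketched as follows. The `.claude/plans/` layout comes from the text; the function shape and field names are assumptions, with the directory listing passed in so the logic stays pure:

```typescript
// Hypothetical sketch: pick the most recently modified markdown plan file
// from a .claude/plans/ directory listing.
interface PlanFile {
  path: string;    // e.g. ".claude/plans/some-plan.md"
  mtimeMs: number; // modification time, as reported by fs.stat
}

function mostRecentPlan(files: PlanFile[]): PlanFile | undefined {
  return files
    .filter((f) => f.path.endsWith(".md"))
    .reduce<PlanFile | undefined>(
      (latest, f) => (!latest || f.mtimeMs > latest.mtimeMs ? f : latest),
      undefined,
    );
}
```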

## Phase 2: Plan Validation (5 Parallel)

Launch ALL 5 agents in a single message with multiple Task calls.
See [references/phase-2-plan-validators.md](references/phase-2-plan-validators.md) for agent prompts.

Provide each agent:

- Full plan content
- Original user request (from conversation context)
- Relevant pattern file content (load from claude-patterns/)

## Phase 3: Devil's Advocate

Launch 1 agent to challenge ALL findings from Phase 2.
See [references/phase-3-devil-advocate.md](references/phase-3-devil-advocate.md) for agent prompt.

Provide:

- All findings from Phase 2
- Plan content for context verification

Output: Validated findings with status (CONFIRMED/DOWNGRADED/DISMISSED/UPGRADED)
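
Folding the devil's-advocate verdicts back into the findings list could look like this sketch (types simplified from the interfaces in the reference files):

```typescript
// Hypothetical sketch: DISMISSED findings are dropped; DOWNGRADED/UPGRADED
// take the adjusted severity; CONFIRMED pass through unchanged.
type Severity = "P0" | "P1" | "P2" | "P3";
type Status = "CONFIRMED" | "DOWNGRADED" | "DISMISSED" | "UPGRADED";

interface Verdict {
  severity: Severity;
  description: string;
  status: Status;
  adjustedSeverity?: Severity;
}

function applyVerdicts(verdicts: Verdict[]): { severity: Severity; description: string }[] {
  return verdicts
    .filter((v) => v.status !== "DISMISSED")
    .map((v) => ({
      severity: v.adjustedSeverity ?? v.severity,
      description: v.description,
    }));
}
```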

## Phase 4: Report + User Decision

1. Generate report using [templates/plan-assessment.md](templates/plan-assessment.md)
2. Present findings grouped by severity (P0, P1, P2, P3)
3. Use AskUserQuestion:

```typescript
{
  questions: [
    {
      question: "How would you like to proceed with this plan?",
      header: "Action",
      options: [
        {
          label: "Revise plan",
          description:
            "Update plan to address P0/P1 findings before implementing",
        },
        {
          label: "Proceed as-is",
          description: "Accept the plan and start implementation",
        },
        {
          label: "Start over",
          description: "Request a completely new plan approach",
        },
      ],
      multiSelect: false,
    },
  ]
}
```

</execution>

<key_checks>

## Key Validation Checks

### Completeness (The "70% Problem")

- Are hard parts (error handling, edge cases, testing) as detailed as easy parts?
- Does plan address ALL original requirements?
- Are there vague steps like "implement the business logic"?

### Pattern Compliance

- TanStack Query for data fetching (not useState + service calls)
- Zod schemas in `src/types/forms/` (not handwritten interfaces)
- `mutateAsync` in modal forms (not `mutate()`)
- `contactFilters.ts` for contact filtering (not inline filters)
- `notifyApi` for notifications (not direct useToast)
- `organization_id` filter on all queries
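
For illustration only, the red-flag strings above could feed a naive textual pre-screen like this sketch. The real pattern-compliance check is done by an LLM agent, not string matching; this only shows the kind of signals it looks for:

```typescript
// Hypothetical sketch: naive textual pre-screen for pattern red flags in a
// plan, drawn from the checklist above.
const RED_FLAGS: { pattern: RegExp; violation: string }[] = [
  { pattern: /useState.*(fetch|service)/i, violation: "useState + service call (use TanStack Query)" },
  { pattern: /\binterface\s+\w+Form\b/, violation: "handwritten form interface (use Zod schema)" },
  { pattern: /\bmutate\(\)/, violation: "mutate() in modal form (use mutateAsync)" },
  { pattern: /\buseToast\b/, violation: "direct useToast (use notifyApi)" },
];

function scanPlan(plan: string): string[] {
  return RED_FLAGS.filter((f) => f.pattern.test(plan)).map((f) => f.violation);
}
```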

### Feasibility (Hallucination Detection)

- Do referenced files actually exist?
- Do referenced functions have correct signatures?
- Are dependencies at correct versions?
- Are API endpoints real?
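
The dependency check in particular reduces to a set difference; a sketch under the usual package.json field names (the function itself is an assumption):

```typescript
// Hypothetical sketch: flag packages a plan mentions that are absent from
// package.json -- each miss is a P0 "missing dependency" finding.
interface PackageJson {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}

function missingDependencies(mentioned: string[], pkg: PackageJson): string[] {
  const installed = new Set([
    ...Object.keys(pkg.dependencies ?? {}),
    ...Object.keys(pkg.devDependencies ?? {}),
  ]);
  return mentioned.filter((name) => !installed.has(name));
}
```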

### Risk Assessment

- Missing security considerations?
- No error handling for failure scenarios?
- No rollback plan for data mutations?
- No testing strategy?

### Scope Discipline

- Does plan stay focused on original request?
- Signs of over-engineering (abstractions for one-time ops)?
- Signs of scope creep (unrelated "improvements")?

</key_checks>

<limitations>

## What This Skill Does NOT Check

- Runtime behavior (requires execution)
- Actual code quality (use check-your-code after implementation)
- Bug detection (use check-your-work after implementation)
- Test coverage (use test runner)
- Build errors (use typecheck/lint)

**For comprehensive quality**: check-your-plan (before) + check-your-code + check-your-work (after)
</limitations>

<quick_reference>

## Quick Reference

**Pattern files checked for compliance**:

- `CLAUDE.md`
- `tanstack-query-patterns.md`
- `zod-form-patterns.md`
- `react-typescript-antipatterns.md`
- `modal-form-patterns.md`
- `notification-patterns.md`
- `settings-patterns.md`

**Common P0 findings**:

- Hallucinated file path: `src/services/foo.ts` doesn't exist
- Hallucinated API: `useAccounts()` hook doesn't exist
- Missing dependency: Plan uses package not in package.json

**Common P1 findings**:

- useState + service call pattern (should use TanStack Query)
- Handwritten interface (should use Zod schema)
- Missing org_id filter (security violation)

</quick_reference>

<references>

## References

- [references/phase-2-plan-validators.md](references/phase-2-plan-validators.md) - All 5 validator agents
- [references/phase-3-devil-advocate.md](references/phase-3-devil-advocate.md) - Challenge agent
- [templates/plan-assessment.md](templates/plan-assessment.md) - Report format

</references>

<version_history>

## Version History

- **v1.1.0** (2025-01-18): AI optimization updates
  - Add blockquote summary after title
  - Eliminate vague pronouns ("Those skills" → explicit skill names)

- **v1.0.0** (2025-01-12): Initial release
  - 4-phase workflow with 5 parallel validators
  - Devil's advocate challenge phase
  - P0-P3 severity aligned with codebase patterns
  - Based on research: 3 Cs framework, 70% problem detection, hallucination checking

</version_history>


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/phase-2-plan-validators.md

```markdown
# Phase 2: Plan Validators (5 Parallel Agents)

## Goal

Validate implementation plan across 5 dimensions before execution begins.

## Pre-check: Load Context

Before launching agents, gather:

1. Full plan content (from plan file or conversation)
2. Original user request (from conversation context)
3. Relevant pattern files based on plan scope:
   - Components → `react-typescript-antipatterns.md`
   - Services → `service-refactoring-patterns.md`
   - Forms → `zod-form-patterns.md`
   - Data fetching → `tanstack-query-patterns.md`

## Agents (Launch All 5 in Parallel)

Launch all 5 agents in a single message with multiple Task calls.

---

### Agent 1: completeness-checker

**Uses**: `general-purpose` agent

**Prompt template**:

```
You are a completeness validator for AI implementation plans. Your job is to catch the "70% Problem" - where AI details the easy parts but hand-waves the hard parts.

PLAN TO VALIDATE:
{plan_content}

ORIGINAL USER REQUEST:
{user_request}

Check these areas:

1. **Requirements Coverage**
   - Does the plan address ALL requirements from the original request?
   - Are there requirements mentioned but not planned for?
   - Did the plan interpret the request correctly?

2. **The 70% Problem**
   - Are easy parts (scaffolding, file creation) detailed?
   - Are hard parts (error handling, edge cases, integration) equally detailed?
   - Look for vague steps like:
     - "implement the business logic"
     - "handle edge cases"
     - "add error handling"
     - "integrate with existing code"

3. **Step Completeness**
   - Can each step be executed without additional clarification?
   - Are inputs and outputs for each step clear?
   - Are there missing intermediate steps?

4. **Testing & Verification**
   - Is there a testing strategy?
   - How will success be verified?
   - Are there acceptance criteria?

5. **Missing Considerations**
   - What did the plan forget to consider?
   - "We're building X - what about Y?"
   - Common misses: undo/rollback, empty states, loading states, error messages

For each issue found, report:
- Category: requirements | 70%-problem | step-completeness | testing | missing
- Severity: P0 (plan will fail) | P1 (major gap) | P2 (could be better) | P3 (suggestion)
- Description: What's missing or vague
- Impact: Why this matters
- Suggestion: What should be added
```

---

### Agent 2: pattern-compliance-checker

**Uses**: `Explore` agent

**Prompt template**:

```
You are a pattern compliance validator. Check if this implementation plan follows the project's established patterns from CLAUDE.md and claude-patterns/.

PLAN TO VALIDATE:
{plan_content}

Explore the codebase and check the plan against these critical patterns:

1. **Data Fetching Pattern**
   - Plan should use TanStack Query hooks for data fetching
   - RED FLAG: Plan mentions useState + useEffect + service.fetch()
   - Correct: useQuery, useMutation, custom hooks in src/hooks/

2. **Form Validation Pattern**
   - Plan should use Zod schemas in src/types/forms/
   - RED FLAG: Plan creates handwritten TypeScript interfaces for forms
   - Correct: z.object() schema with z.infer<typeof schema>

3. **Modal Form Pattern**
   - Plan should use mutateAsync with try/catch
   - RED FLAG: Plan uses mutate() in modal forms
   - Correct: await mutateAsync() with proper error handling

4. **Contact Filtering Pattern**
   - Plan should use functions from src/utils/contactFilters.ts
   - RED FLAG: Plan uses inline filters like contacts.filter(c => c.is_lt_member)
   - Correct: getLTMembers(contacts), getActiveContacts(contacts)

5. **Notification Pattern**
   - Plan should use notifyApi from @/utils/notify
   - RED FLAG: Plan imports useToast directly
   - Correct: notifyApi.success(), notifyApi.error()

6. **Multi-tenant Security**
   - All database queries must filter by organization_id
   - RED FLAG: Plan queries without org_id filter
   - Correct: .eq("organization_id", orgId) on every query

7. **Service Architecture**
   - Large services (>500 lines) should use facade pattern
   - RED FLAG: Plan adds to already-large service file
   - Correct: Split into repository + orchestrator

8. **File Location Conventions**
   - Zod schemas: src/types/forms/<entity>.schema.ts
   - Hooks: src/hooks/
   - Services: src/services/
   - RED FLAG: Files in wrong directories

For each violation found, report:
- Pattern violated: [pattern name from above]
- Location in plan: [which step/section]
- Severity: P1 (major violation) | P2 (minor deviation)
- Current approach: What the plan proposes
- Correct approach: What it should do instead
- Pattern reference: Which pattern file to consult
```

---

### Agent 3: feasibility-checker

**Uses**: `general-purpose` agent

**Prompt template**:

```
You are a feasibility validator checking for hallucinations and technical accuracy in AI plans. AI often references files, functions, or APIs that don't actually exist.

PLAN TO VALIDATE:
{plan_content}

Use Glob and Grep to verify every technical reference in the plan:

1. **File Path Verification**
   - For each file path mentioned, verify it exists
   - Use: Glob to check if file exists
   - RED FLAG: Plan references src/services/foo.ts but file doesn't exist
   - Report: "P0 - Hallucinated file path: {path}"

2. **Function/Hook Verification**
   - For each function or hook referenced, verify it exists
   - Use: Grep to search for function definitions
   - RED FLAG: Plan uses useAccounts() but no such hook exists
   - Report: "P0 - Hallucinated function: {name}"

3. **Import Verification**
   - For each import statement proposed, verify the export exists
   - Use: Read the source file, check exports
   - RED FLAG: Plan imports { foo } from but foo isn't exported
   - Report: "P0 - Invalid import: {import}"

4. **Dependency Verification**
   - Check package.json for any packages mentioned
   - RED FLAG: Plan uses library not in dependencies
   - Report: "P0 - Missing dependency: {package}"

5. **API Endpoint Verification**
   - If plan references API routes, verify they exist
   - Check edge functions in supabase/functions/
   - RED FLAG: Plan calls endpoint that doesn't exist
   - Report: "P0 - Hallucinated API endpoint: {endpoint}"

6. **Type Signature Verification**
   - Verify function parameters match actual definitions
   - RED FLAG: Plan calls function with wrong arguments
   - Report: "P1 - Incorrect function signature: {details}"

IMPORTANT: Every P0 finding must include EVIDENCE:
- The exact file/function referenced in the plan
- The search result showing it doesn't exist
- Or the actual signature if it exists but differs

For each issue found, report:
- Category: file | function | import | dependency | api | signature
- Severity: P0 (hallucination) | P1 (incorrect usage)
- What plan says: [exact reference from plan]
- Reality: [what actually exists or doesn't]
- Evidence: [search/read result proving the issue]
```

---

### Agent 4: risk-assessor

**Uses**: `general-purpose` agent

**Prompt template**:

```
You are a risk assessor for implementation plans. Identify what could go wrong and what's missing from a safety perspective.

PLAN TO VALIDATE:
{plan_content}

Assess these risk categories:

1. **Security Risks**
   - SQL injection: Raw user input in queries?
   - XSS: User content rendered without sanitization?
   - Auth bypass: Endpoints without authentication?
   - Data leakage: Queries without organization_id filter?
   - PII exposure: Personal data in logs or localStorage?

2. **Data Integrity Risks**
   - Mutations without validation?
   - Updates without concurrency handling?
   - Deletes without soft-delete or undo?
   - Cascade effects not considered?

3. **Error Handling Gaps**
   - Network failures not handled?
   - Partial success scenarios?
   - User-facing error messages not specified?
   - Retry logic for transient failures?

4. **Rollback & Recovery**
   - What happens if deployment fails mid-way?
   - Can changes be reversed?
   - Data migration rollback plan?
   - Feature flag for gradual rollout?

5. **Performance Risks**
   - N+1 query patterns?
   - Large data sets without pagination?
   - Missing indexes for new queries?
   - Memory leaks in components?

6. **Integration Risks**
   - Breaking changes to existing APIs?
   - Downstream dependencies affected?
   - Third-party service rate limits?
   - Event ordering issues?

For each risk identified, report:
- Category: security | data-integrity | error-handling | rollback | performance | integration
- Severity: P0 (critical risk) | P1 (high risk) | P2 (moderate risk)
- Risk description: What could go wrong
- Likelihood: How likely is this to happen
- Impact: What's the consequence
- Mitigation: What the plan should include
```

---

### Agent 5: scope-discipline-checker

**Uses**: `general-purpose` agent

**Prompt template**:

```
You are a scope discipline validator. Check if the plan stays focused on the original request and avoids common AI over-engineering patterns.

PLAN TO VALIDATE:
{plan_content}

ORIGINAL USER REQUEST:
{user_request}

Check for these anti-patterns:

1. **Scope Creep**
   - Does the plan add features not requested?
   - "Add a button" becoming "refactor entire component system"
   - Unrelated "improvements" bundled with the request
   - Report each addition beyond original scope

2. **Over-Engineering**
   - Factory patterns for simple object creation
   - Abstract classes for single implementations
   - Configuration systems for hardcoded values
   - Generic solutions for specific problems
   - Custom hooks wrapping single useState

3. **Unnecessary Abstractions**
   - Helper functions used only once
   - Utility classes for single operations
   - Wrapper functions that add no value
   - Type guards for impossible states

4. **Premature Optimization**
   - useMemo/useCallback without profiling evidence
   - Caching for rarely-accessed data
   - Performance optimizations before measuring
   - Complex state management for simple state

5. **"Helpful" Additions**
   - Error handling beyond requirements
   - Logging that wasn't requested
   - Documentation files not asked for
   - Test coverage beyond the feature

6. **Context Drift**
   - Later steps contradict earlier constraints
   - Original requirements diluted as plan grows
   - Scope gradually expanding through steps

For each issue found, report:
- Category: scope-creep | over-engineering | abstraction | premature-opt | additions | drift
- Severity: P2 (unnecessary complexity) | P3 (suggestion)
- What plan proposes: [the over-engineered part]
- Simpler alternative: [what would be sufficient]
- Original request check: Does original request require this?
```

---

## Output Format

Each agent returns findings in this format:

```typescript
interface PlanFinding {
  agent: "completeness" | "pattern" | "feasibility" | "risk" | "scope";
  category: string;
  severity: "P0" | "P1" | "P2" | "P3";
  location: string; // Which part of the plan
  description: string; // What's wrong
  impact: string; // Why it matters
  suggestion: string; // How to fix
  evidence?: string; // For feasibility checks
}
```

## After Completion

Wait for all 5 agents to complete, then collect their findings.
Proceed to Phase 3 with all findings for devil's advocate challenge.

```

### references/phase-3-devil-advocate.md

```markdown
# Phase 3: Devil's Advocate Validation

## Goal

Challenge every finding from Phase 2 to eliminate false positives and ensure findings are contextually valid.

## Why Devil's Advocate?

AI plan validators tend to:

- Flag patterns that are acceptable in context
- Miss that some "violations" are intentional trade-offs
- Over-report for simple scripts or one-off tasks
- Apply rules rigidly without considering pragmatic exceptions

The devil's advocate agent counteracts these biases.

## Agent: devil-advocate

**Uses**: `general-purpose` agent

**Prompt template**:

````
You are a devil's advocate challenging plan validation findings. Your job is to DEFEND the plan and identify false positives in the validator findings.

PLAN BEING VALIDATED:
{plan_content}

ORIGINAL USER REQUEST:
{user_request}

ALL FINDINGS FROM PHASE 2:
{all_findings}

For EACH finding, challenge it with these questions:

1. **Context Check**
   - Is this a quick script or prototype? Lower standards acceptable.
   - Is this internal/admin tooling? Different quality bar.
   - Is this a one-time migration? Over-engineering would be worse.
   - Does the scope justify the "correct" pattern?

2. **Trade-off Analysis**
   - Would the "fix" make the plan more complex than necessary?
   - Is the current approach simpler despite violating a pattern?
   - Is strict pattern adherence overkill for this task?
   - Would fixing this add significant scope?

3. **False Positive Detection**
   - Is the validator misunderstanding the plan's intent?
   - Is this actually following a different valid pattern?
   - Is the "violation" actually the pragmatic choice here?
   - Does the finding apply to this specific use case?

4. **Severity Calibration**
   - Is P0 (plan will fail) actually accurate?
   - Is the impact assessment overstated?
   - Should this be downgraded or dismissed entirely?
   - Is there missing context that changes severity?

5. **Missing Counterarguments**
   - What's the argument FOR the plan's approach?
   - Are there valid reasons to do it this way?
   - What would a senior developer say in defense?

## Output Format

For each finding, return:

```typescript
interface ValidatedFinding {
  originalFinding: PlanFinding;
  status: 'CONFIRMED' | 'DOWNGRADED' | 'DISMISSED' | 'UPGRADED';
  reasoning: string;         // Why this status was assigned
  adjustedSeverity?: 'P0' | 'P1' | 'P2' | 'P3';  // If changed
  defenseArgument?: string;  // The case FOR the plan's approach
}
```

## Status Definitions

| Status     | Meaning                                | When to Use                               |
| ---------- | -------------------------------------- | ----------------------------------------- |
| CONFIRMED  | Finding stands, plan should be revised | Clear violation with real impact          |
| DOWNGRADED | Less severe than reported              | Valid concern but overstated severity     |
| DISMISSED  | False positive, not actually a problem | Context makes the approach acceptable     |
| UPGRADED   | More severe than reported              | Validator underestimated the impact       |

## Challenge Guidelines

Be aggressive in defending the plan:

- **Default to skepticism** about findings, not acceptance
- **Consider the full context** before confirming
- **Question whether pattern adherence** is worth the complexity
- **Remember**: Simpler working code > "correct" complex code
- **Ask**: Would a pragmatic senior developer flag this?

## Example Challenges

**Finding**: "P1 - Plan uses useState + useEffect instead of TanStack Query"
**Challenge**: Is this a simple one-time fetch on mount? Is the data never refetched or cached? If yes, useState/useEffect is simpler and appropriate.
**Possible outcome**: DISMISSED - "One-time fetch on mount, no caching needed, simpler approach is correct"

**Finding**: "P2 - Plan creates helper function used only once"
**Challenge**: Does extracting the function improve readability? Is the logic complex enough to benefit from a name? If yes, this is good practice not over-engineering.
**Possible outcome**: DISMISSED - "Named function improves readability for complex logic"

**Finding**: "P0 - Hallucinated file path: src/services/accounts.ts"
**Challenge**: Does the plan also include creating this file? Is this a new file being proposed?
**Possible outcome**: DISMISSED - "Plan step 2 creates this file" OR CONFIRMED - "File referenced but never created"

**Finding**: "P1 - Missing org_id filter on query"
**Challenge**: Is this an admin-only function? Does the RLS policy already enforce org isolation?
**Possible outcome**: DOWNGRADED to P2 - "RLS policy provides isolation, explicit filter is defense-in-depth"

**Finding**: "P2 - Over-engineering: Factory pattern for single object"
**Challenge**: Is this pattern used elsewhere in the codebase? Would it enable future extensibility that's explicitly planned?
**Possible outcome**: DISMISSED - "Matches existing pattern in codebase" OR CONFIRMED - "No similar patterns exist, one-off abstraction"

## After Completion

Return the validated findings list:
- **DISMISSED**: Remove from final report
- **DOWNGRADED**: Adjust severity in report
- **CONFIRMED**: Keep as-is in report
- **UPGRADED**: Increase severity in report

Proceed to Phase 4 for report generation and user decision.
````

```

### templates/plan-assessment.md

```markdown
# Plan Assessment Report

**Generated**: {TIMESTAMP}
**Plan**: {PLAN_FILE_OR_LOCATION}
**Steps**: {STEP_COUNT} implementation steps
**Files to modify**: {FILE_COUNT}

---

## Summary

| Severity | Count | Status      |
| -------- | ----- | ----------- |
| P0       | {P0}  | {P0_STATUS} |
| P1       | {P1}  | {P1_STATUS} |
| P2       | {P2}  | {P2_STATUS} |
| P3       | {P3}  | {P3_STATUS} |

**Validation Stats**: {TOTAL_FINDINGS} findings reviewed, {DISMISSED_COUNT} dismissed by devil's advocate

**Recommendation**: {RECOMMENDATION}

---

## Critical Issues (P0) - Plan Will Fail

{IF_NO_P0}
None! No critical issues that would cause plan failure.
{ENDIF}

{FOR_EACH_P0}

### {NUMBER}. {TITLE}

**Category**: {CATEGORY}
**Location in plan**: {PLAN_LOCATION}
**Validator**: {VALIDATOR_AGENT}

**Issue**: {DESCRIPTION}

**Evidence**: {EVIDENCE}

**Impact**: {IMPACT}

**Required fix**: {SUGGESTION}

---

{END_FOR_EACH}

## Major Issues (P1) - Pattern Violations

{IF_NO_P1}
None! Plan follows established patterns.
{ENDIF}

{FOR_EACH_P1}

### {NUMBER}. {TITLE}

**Category**: {CATEGORY}
**Location in plan**: {PLAN_LOCATION}

**Issue**: {DESCRIPTION}

**Why this matters**: {IMPACT}

**Recommended change**: {SUGGESTION}

{IF_HAS_PATTERN_REF}
**Pattern reference**: `{PATTERN_FILE}`
{ENDIF}

---

{END_FOR_EACH}

## Moderate Issues (P2) - Could Be Better

{IF_NO_P2}
None! Plan is well-structured.
{ENDIF}

{FOR_EACH_P2}

- **{TITLE}** ({PLAN_LOCATION}): {DESCRIPTION}
  - Suggestion: {SUGGESTION}

{END_FOR_EACH}

---

## Suggestions (P3) - Optional Improvements

{IF_NO_P3}
None! No additional suggestions.
{ENDIF}

{FOR_EACH_P3}

- {DESCRIPTION} - {SUGGESTION}

{END_FOR_EACH}

---

## Devil's Advocate Summary

The devil's advocate challenged all {TOTAL_FINDINGS} findings:

### Dismissed (False Positives)

{IF_NO_DISMISSED}
All findings were confirmed as valid concerns.
{ENDIF}

{FOR_EACH_DISMISSED}

- ~~{ORIGINAL_FINDING}~~ - **Dismissed**: {DISMISSAL_REASON}

{END_FOR_EACH}

### Adjusted Findings

{IF_NO_ADJUSTED}
No findings were adjusted.
{ENDIF}

{FOR_EACH_ADJUSTED}

- {FINDING}: {ORIGINAL_SEVERITY} → {NEW_SEVERITY} - {ADJUSTMENT_REASON}

{END_FOR_EACH}

---

## Findings by Validator

| Validator            | P0      | P1      | P2      | P3      | Dismissed      |
| -------------------- | ------- | ------- | ------- | ------- | -------------- |
| completeness-checker | {C_P0}  | {C_P1}  | {C_P2}  | {C_P3}  | {C_DISMISSED}  |
| pattern-compliance   | {PC_P0} | {PC_P1} | {PC_P2} | {PC_P3} | {PC_DISMISSED} |
| feasibility-checker  | {F_P0}  | {F_P1}  | {F_P2}  | {F_P3}  | {F_DISMISSED}  |
| risk-assessor        | {R_P0}  | {R_P1}  | {R_P2}  | {R_P3}  | {R_DISMISSED}  |
| scope-discipline     | {S_P0}  | {S_P1}  | {S_P2}  | {S_P3}  | {S_DISMISSED}  |

---

## Recommendations

### Before Proceeding

{IF_HAS_P0}
**STOP**: Fix these P0 issues before implementing:

{FOR_EACH_P0_REC}

- [ ] {RECOMMENDATION}

{END_FOR_EACH}
{ENDIF}

{IF_HAS_P1}
**Strongly recommended** to address these P1 issues:

{FOR_EACH_P1_REC}

- [ ] {RECOMMENDATION}

{END_FOR_EACH}
{ENDIF}

### Optional Improvements

{FOR_EACH_P2_REC}

- [ ] Consider: {RECOMMENDATION}

{END_FOR_EACH}

---

## Plan Quality Assessment

{IF_P0_COUNT_GT_0}
**Status**: BLOCKED - Plan has {P0_COUNT} critical issues that must be fixed

The plan contains hallucinations, missing dependencies, or fatal errors that would cause implementation to fail. Revise the plan to address P0 issues before proceeding.
{ENDIF}

{IF_P0_COUNT_EQ_0_AND_P1_GT_2}
**Status**: NEEDS REVISION - Plan has significant pattern violations

While the plan is technically feasible, it deviates from established patterns in {P1_COUNT} places. Revising the plan to follow patterns will result in more maintainable code.
{ENDIF}

{IF_P0_COUNT_EQ_0_AND_P1_LTE_2}
**Status**: READY TO IMPLEMENT - Plan is solid

{IF_P1_COUNT_GT_0}
Minor pattern deviations noted but plan is fundamentally sound. Consider addressing P1 issues during implementation.
{ENDIF}

{IF_P1_COUNT_EQ_0}
Plan follows established patterns and addresses all requirements. Ready for implementation.
{ENDIF}
{ENDIF}

---

## Next Steps

Based on this assessment, choose an action:

1. **Revise plan** - Update plan to address P0/P1 findings
2. **Proceed as-is** - Accept findings and implement (P0 findings make this risky)
3. **Start over** - Request a completely new plan approach

---

**Report generated by check-your-plan skill v1.1.0**

```
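
The status branches in the template above ({IF_P0_COUNT_GT_0} and friends) reduce to a small decision function; this sketch encodes the same thresholds (the function name is an assumption):

```typescript
// Hypothetical sketch of the template's plan-quality decision:
// any P0 -> BLOCKED; more than two P1s -> NEEDS REVISION; otherwise READY.
type PlanStatus = "BLOCKED" | "NEEDS REVISION" | "READY TO IMPLEMENT";

function planStatus(p0Count: number, p1Count: number): PlanStatus {
  if (p0Count > 0) return "BLOCKED";
  if (p1Count > 2) return "NEEDS REVISION";
  return "READY TO IMPLEMENT";
}
```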
