/em:challenge — Pre-Mortem Plan Analysis
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Stars: 5,888
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: B (75.1)
Install command: npx @skill-hub/cli install alirezarezvani-claude-skills-challenge
Repository: alirezarezvani/claude-skills
Skill path: .gemini/skills/challenge
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: alirezarezvani.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install challenge into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/alirezarezvani/claude-skills before adding challenge to shared team environments
- Use challenge for development workflows
Works across
Claude Code, Codex CLI, Gemini CLI, OpenCode
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: "challenge"
description: "/em:challenge — Pre-Mortem Plan Analysis"
---

# /em:challenge — Pre-Mortem Plan Analysis

**Command:** `/em:challenge <plan>`

Systematically finds weaknesses in any plan before reality does. Not to kill the plan — to make it survive contact with reality.

---

## The Core Idea

Most plans fail for predictable reasons. Not bad luck — bad assumptions. Overestimated demand. Underestimated complexity. Dependencies nobody questioned. Timing that made sense in a spreadsheet but not in the real world.

The pre-mortem technique: **imagine it's 12 months from now and this plan failed spectacularly. Now work backwards. Why?**

That's not pessimism. It's how you build something that doesn't collapse.

---

## When to Run a Challenge

- Before committing significant resources to a plan
- Before presenting to the board or investors
- When you notice you're only hearing positive feedback about the plan
- When the plan requires multiple external dependencies to align
- When there's pressure to move fast and "figure it out later"
- When you feel excited about the plan (excitement is a signal to scrutinize harder)

---

## The Challenge Framework

### Step 1: Extract Core Assumptions

Before you can test a plan, you need to surface everything it assumes to be true. For each section of the plan, ask:

- What has to be true for this to work?
- What are we assuming about customer behavior?
- What are we assuming about competitor response?
- What are we assuming about our own execution capability?
- What external factors does this depend on?
**Common assumption categories:**

- **Market assumptions** — size, growth rate, customer willingness to pay, buying cycle
- **Execution assumptions** — team capacity, velocity, no major hires needed
- **Customer assumptions** — they have the problem, they know they have it, they'll pay to solve it
- **Competitive assumptions** — incumbents won't respond, no new entrant, moat holds
- **Financial assumptions** — burn rate, revenue timing, CAC, LTV ratios
- **Dependency assumptions** — partner will deliver, API won't change, regulations won't shift

### Step 2: Rate Each Assumption

For every assumption extracted, rate it on two dimensions:

**Confidence level (how sure are you this is true):**

- **High** — verified with data, customer conversations, market research
- **Medium** — directionally right but not validated
- **Low** — plausible but untested
- **Unknown** — we simply don't know

**Impact if wrong (what happens if this assumption fails):**

- **Critical** — plan fails entirely
- **High** — major delay or cost overrun
- **Medium** — significant rework required
- **Low** — manageable adjustment

### Step 3: Map Vulnerabilities

The matrix of Low/Unknown confidence × Critical/High impact = your highest-risk assumptions.

**Vulnerability = Low confidence + High impact**

These are not problems to ignore. They're the bets you're making. The question is: are you making them consciously?

### Step 4: Find the Dependency Chain

Many plans fail not because any single assumption is wrong, but because multiple assumptions have to be right simultaneously. Map the chain:

- Does assumption B depend on assumption A being true first?
- If the first thing goes wrong, how many downstream things break?
- What's the critical path? What has zero slack?

### Step 5: Test the Reversibility

For each critical vulnerability: if this assumption turns out to be wrong at month 3, what do you do?

- Can you pivot?
- Can you cut scope?
- Is money already spent?
- Are commitments already made?
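Steps 2 and 3 amount to a simple filter over rated assumptions. The sketch below is purely illustrative (the `Assumption` dataclass, the example plan entries, and the numeric scales are hypothetical, not part of the skill); it shows how "low/unknown confidence plus critical/high impact" picks out the vulnerability map:

```python
from dataclasses import dataclass

# Ordinal scales for the two rating dimensions (Step 2).
# The numeric values are an illustrative choice, not defined by the skill.
CONFIDENCE = {"high": 3, "medium": 2, "low": 1, "unknown": 0}
IMPACT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

@dataclass
class Assumption:
    text: str
    confidence: str  # high / medium / low / unknown
    impact: str      # critical / high / medium / low

def is_vulnerability(a: Assumption) -> bool:
    """Step 3: low or unknown confidence combined with critical or high impact."""
    return CONFIDENCE[a.confidence] <= 1 and IMPACT[a.impact] >= 3

# Hypothetical extracted assumptions for some plan.
plan = [
    Assumption("Customers will pay $50/mo", "low", "critical"),
    Assumption("Team velocity holds at 10 pts/sprint", "medium", "high"),
    Assumption("Partner API stays stable", "unknown", "high"),
]

vulnerabilities = [a for a in plan if is_vulnerability(a)]
for a in vulnerabilities:
    print(f"VULNERABLE: {a.text} ({a.confidence} confidence, {a.impact} impact)")
```

Here two of the three assumptions land in the vulnerability map; the medium-confidence velocity estimate does not, even though its impact is high — it is a risk to monitor, not a bet being made blind.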
The less reversible, the more rigorously you need to validate before committing.

---

## Output Format

**Challenge Report: [Plan Name]**

```
CORE ASSUMPTIONS (extracted)
1. [Assumption] — Confidence: [H/M/L/?] — Impact if wrong: [Critical/High/Medium/Low]
2. ...

VULNERABILITY MAP
Critical risks (act before proceeding):
• [#N] [Assumption] — WHY it might be wrong — WHAT breaks if it is
High risks (validate before scaling):
• ...

DEPENDENCY CHAIN
[Assumption A] → depends on → [Assumption B] → which enables → [Assumption C]
Weakest link: [X] — if this breaks, [Y] and [Z] also fail

REVERSIBILITY ASSESSMENT
• Reversible bets: [list]
• Irreversible commitments: [list — treat with extreme care]

KILL SWITCHES
What would have to be true at [30/60/90 days] to continue vs. kill/pivot?
• Continue if: ...
• Kill/pivot if: ...

HARDENING ACTIONS
1. [Specific validation to do before proceeding]
2. [Alternative approach to consider]
3. [Contingency to build into the plan]
```

---

## Challenge Patterns by Plan Type

### Product Roadmap

- Are we building what customers will pay for, or what they said they wanted?
- Does the velocity estimate account for real team capacity (not theoretical)?
- What happens if the anchor feature takes 3× longer than estimated?
- Who owns decisions when requirements conflict?

### Go-to-Market Plan

- What's the actual ICP conversion rate, not the hoped-for one?
- How many touches to close, and do you have the sales capacity for that?
- What happens if the first 10 deals take 3 months instead of 1?
- Is "land and expand" a real motion or a hope?

### Hiring Plan

- What happens if the key hire takes 4 months to find, not 6 weeks?
- Is the plan dependent on retaining specific people who might leave?
- Does the plan account for ramp time (usually 3–6 months before full productivity)?
- What's the burn impact if headcount leads revenue by 6 months?

### Fundraising Plan

- What's your fallback if the lead investor passes?
- Have you modeled the timeline if it takes 6 months, not 3?
- What's your runway at current burn if the round closes at the low end?
- What assumptions break if you raise 50% of the target amount?

---

## The Hardest Questions

These are the ones people skip:

- "What's the bear case, not the base case?"
- "If this exact plan was run by a team we don't trust, would it work?"
- "What are we not saying out loud because it's uncomfortable?"
- "Who has incentives to make this plan sound better than it is?"
- "What would an enemy of this plan attack first?"

---

## Deliverable

The output of `/em:challenge` is not permission to stop. It's a vulnerability map. Now you can make conscious decisions: validate the risky assumptions, hedge the critical ones, or accept the bets you're making knowingly.

Unknown risks are dangerous. Known risks are manageable.
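Step 4's dependency-chain question ("if the first thing goes wrong, how many downstream things break?") is a graph reachability problem. A minimal sketch, where the example chain and the `downstream` helper are hypothetical illustrations rather than part of the skill:

```python
# Map each assumption to the assumptions that depend on it.
# This tiny three-node chain is a made-up example.
DEPENDS_ON_ME = {
    "A: market exists": ["B: customers convert"],
    "B: customers convert": ["C: revenue by Q3"],
    "C: revenue by Q3": [],
}

def downstream(assumption: str) -> set:
    """Everything that breaks if this assumption turns out to be false."""
    broken = set()
    stack = list(DEPENDS_ON_ME.get(assumption, []))
    while stack:
        node = stack.pop()
        if node not in broken:
            broken.add(node)
            stack.extend(DEPENDS_ON_ME.get(node, []))
    return broken

# Weakest link: the assumption whose failure breaks the most downstream items.
weakest = max(DEPENDS_ON_ME, key=lambda a: len(downstream(a)))
print(weakest, "->", sorted(downstream(weakest)))
```

In this toy chain, the earliest assumption is the weakest link: if the market doesn't exist, both downstream assumptions fail with it, which is exactly the "Weakest link" line the challenge report asks for.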