pathfinding
This skill should be used when requirements are unclear, brainstorming ideas, or when "pathfind", "brainstorm", "figure out", "clarify requirements", or "work through" are mentioned.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install outfitter-dev-agents-pathfinding
Repository
Skill path: baselayer/skills/pathfinding
Best for
Primary workflow: Research & Ops.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: outfitter-dev.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install pathfinding into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/outfitter-dev/agents before adding pathfinding to shared team environments
- Use pathfinding for productivity workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: pathfinding
version: 2.0.0
description: This skill should be used when requirements are unclear, brainstorming ideas, or when "pathfind", "brainstorm", "figure out", "clarify requirements", or "work through" are mentioned.
---
# Pathfinding
Adaptive Q&A: unclear requirements → clear path.
<when_to_use>
- Ambiguous/incomplete requirements
- Complex features needing exploration
- Greenfield projects with open questions
- Collaborative brainstorming or problem solving
NOT for: time-critical bugs, well-defined tasks, obvious questions
</when_to_use>
<confidence>
| Bar | Lvl | % | Name | Action |
|-----|-----|---|------|--------|
| `░░░░░` | 0 | 0–19 | Prepping | Gather foundational context |
| `▓░░░░` | 1 | 20–39 | Scouting | Ask broad questions |
| `▓▓░░░` | 2 | 40–59 | Exploring | Ask focusing questions |
| `▓▓▓░░` | 3 | 60–74 | Charting | Risky to proceed; gaps remain |
| `▓▓▓▓░` | 4 | 75–89 | Mapped | Viable; push toward 5 |
| `▓▓▓▓▓` | 5 | 90–100 | Ready | Deliver |
Start honest. Clear request → level 4–5. Vague → level 0–2.
At level 4: "Can proceed, but 1–2 more questions would reach full confidence. Continue or deliver now?"
Below level 5: include `△ Caveats` section.
</confidence>
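The level table above amounts to a lookup from an internal percentage to a level, bar, and name. A minimal sketch of that mapping (the function name is illustrative; percentages and names come from the table):

```python
# Map an internal confidence percentage (0-100) to its bar and name.
LEVELS = [
    (0, 19, 0, "Prepping"),
    (20, 39, 1, "Scouting"),
    (40, 59, 2, "Exploring"),
    (60, 74, 3, "Charting"),
    (75, 89, 4, "Mapped"),
    (90, 100, 5, "Ready"),
]

def confidence(percent: int) -> str:
    for low, high, level, name in LEVELS:
        if low <= percent <= high:
            # Filled blocks count the level; empty blocks pad to five.
            bar = "▓" * level + "░" * (5 - level)
            return f"{bar} {name}"
    raise ValueError("percent must be between 0 and 100")
```

For example, `confidence(82)` yields `▓▓▓▓░ Mapped`, matching the row for level 4.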
<phases>
Track with TodoWrite. Phases advance only, never regress.
| Phase | Trigger | activeForm |
|-------|---------|------------|
| Prep | level 0–1 | "Prepping" |
| Explore | level 2–3 | "Exploring" |
| Clarify | level 4 | "Clarifying" |
| Deliver | level 5 | "Delivering" |
TodoWrite format — each phase gets context-specific title:
```text
- Prep { domain } requirements
- Explore { approach } options
- Clarify { key unknowns, 3-4 words }
- Deliver { artifact type }
```
Situational (insert before Deliver when triggered):
- Resolve Conflicts → `◆ Caution` or `◆◆ Hazard` pushback
- Validate Assumptions → high-risk assumptions before delivery
Workflow:
- Start: Create phase matching initial confidence `in_progress`
- Transition: Mark current `completed`, add next `in_progress`
- High start (4+): Skip directly to `Clarify` or `Deliver`
- Early delivery: Skip to `Deliver` + `△ Caveats`
</phases>
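The advance-only rule can be sketched as a small state holder (a hypothetical illustration; in practice TodoWrite holds the real state):

```python
# Phases advance with confidence but never regress.
PHASES = ["Prep", "Explore", "Clarify", "Deliver"]

def phase_for_level(level: int) -> int:
    # Level 0-1 -> Prep, 2-3 -> Explore, 4 -> Clarify, 5 -> Deliver.
    return [0, 0, 1, 1, 2, 3][level]

class PhaseTracker:
    def __init__(self, starting_level: int):
        # High starts (4+) begin directly in Clarify or Deliver.
        self.index = phase_for_level(starting_level)

    def update(self, level: int) -> str:
        # max() enforces no regression: a confidence drop
        # (e.g. 4 -> 3) leaves the current phase in place.
        self.index = max(self.index, phase_for_level(level))
        return PHASES[self.index]
```

Starting at level 4 puts the tracker in Clarify; a later drop to 3 keeps it there, while reaching 5 advances it to Deliver.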
<gather>
Calibrate first — user may have already provided context (docs, prior conversation, pointed you at files). If enough context exists, skip to level 3–4. Don't re-ask what's already clear.
If gaps remain, explore focus areas (pick what's relevant):
- Purpose: What problem? Why now?
- Constraints: Time, tech, team, dependencies
- Success: How will we know it works?
- Scope: What's in, what's out?
When multiple approaches exist:
- Propose 2–3 options with trade-offs
- Lead with recommendation ★ and reasoning
- Let user pick, combine, or redirect
Principles:
- YAGNI — cut what's not needed
- DRY — don't duplicate effort or logic
- Simplest thing — prefer boring solutions
</gather>
<questions>
Use `EnterPlanMode` for each question — enables keyboard navigation of options.
Structure:
- Prose above tool: context, reasoning, ★ recommendation if clear lean
- Inside tool: options only (concise, scannable)
At level 0 — start with session intent:
- Quick pulse check vs deep dive?
- Exploring possibilities or solving a specific problem?
- What does "done" look like?
Levels 1–4 — focus on substance:
- 2–4 options per question, plus a final "Something else" option
- Inline `[★]` on recommended option + *italicized rationale*
- User replies: number, modifications, or combos
</questions>
<workflow>
Loop: Answer → Restate → Update Confidence → Next action
After each answer emit:
- Confidence: {BAR} {NAME}
- Assumptions: { if material }
- Unknowns: { what we can clarify; note unknowables when relevant }
- Decisions: { what's locked in }
- Concerns: { what feels off + why }
Next action by level:
- 0–2: Ask clarifying questions
- 3: Summarize (3 bullets max), fork toward 5
- 4: Offer choice: refine or proceed
- 5: Deliver
</workflow>
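The next-action routing in this loop can be sketched as a simple dispatch on the current level (an illustration of the mapping above, not part of the skill itself):

```python
def next_action(level: int) -> str:
    # Route the session's next move by current confidence level.
    if level <= 2:
        return "ask clarifying questions"
    if level == 3:
        return "summarize (3 bullets max), fork toward 5"
    if level == 4:
        return "offer choice: refine or proceed"
    return "deliver"
```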
<issues>
When answer reveals a concern mid-stream:
- Pause before next question
- Surface with `△` + brief description
- Ask: clarify now, note for later, or proceed with assumption?
Example: "△ This assumes the API supports batch operations — clarify now, note for later, or proceed?"
If user proceeds despite significant gap → escalate to `pushback` protocol.
</issues>
<pushback>
Escalate when choice conflicts with goals/constraints/best practices:
- `◇ Alternative`: Minor misalignment. Present option + reasoning.
- `◆ Caution`: Clear conflict. Recommend alternative, explain risks, ask to proceed. Triggers Resolve Conflicts.
- `◆◆ Hazard`: High failure risk. Require mitigation or explicit override. Triggers Resolve Conflicts.
Override: Accept "Proceed anyway: {REASON}" → log in next reflection → mark Resolve Conflicts complete.
</pushback>
<skeptic>
Integrate skeptic agent for complexity sanity checks:
**Recommend** (offer choice):
- Level 5 reached with △ Caveats > 2
- Red flag language in decisions: "might need later", "more flexible", "best practice"
```text
Before finalizing — you have {N} caveats. Want to run skeptic for a sanity check?
[AskUserQuestion]
1. Yes, quick check [★] — I'll challenge complexity interactively
2. Yes, deep analysis — launch skeptic agent in background
3. No, proceed — deliver as-is
```
**Auto-invoke** (no choice):
- Level 4+ with 3+ unknowns persisting across 2+ question cycles
- ◆◆ Hazard escalation triggered during session
When auto-invoking:
```text
[Auto-invoking skeptic — {REASON}]
```
Launch with Task tool:
- subagent_type: "outfitter:skeptic"
- prompt: Include current decisions, unknowns, and caveats
- run_in_background: false (wait for findings before delivery)
After skeptic returns:
- Present findings to user
- If verdict is `block` → add Resolve Conflicts phase
- If verdict is `caution` → offer choice to address or acknowledge
- If verdict is `proceed` → continue to delivery
</skeptic>
<completion>
Level 5: Produce artifact immediately (doc, plan, code, outline). If none specified, suggest one.
After delivering, ask where to persist (if applicable):
```text
[EnterPlanMode]
1. { discovered path } [★] — { source: `CLAUDE.md` preference | existing directory | convention }
2. Create issue — { Linear/GitHub/Beads based on project context }
3. ADR — { if architectural decision }
4. Don't persist — keep in conversation only
5. Something else — different location or format
```
Discovery order for option 1:
1. `CLAUDE.md` or project instructions with explicit plan storage preference
2. Existing `.agents/plans/` directory
3. Existing `docs/plans/` directory
4. Fall back to `.agents/plans/` if nothing found
Always suggest filename based on topic. Match existing conventions if present.
Mark Deliver `completed` after artifact is delivered (persistence is optional follow-up).
Below 5: Append `△ Caveats`:
- Open questions + context
- Assumed decisions + defaults
- Known concerns + impact
- Deferred items + revisit timing
</completion>
<rules>
ALWAYS:
- TodoWrite phase matching initial confidence at start
- `EnterPlanMode` for each question (keyboard nav)
- Prose above tool for context + ★ recommendation
- One question at a time, wait for response
- Restate + update confidence before next move
- Update todos at level 4, level 5 thresholds
- Apply pushback protocol on conflicts
- Check skeptic triggers at level 4+ (unknowns, caveats, red flags)
NEVER:
- Proceed from 0–3 without clarifying questions
- Hide uncertainty below level 5
- Stack questions or bury decisions in paragraphs
- Put recommendation inside plan tool (keep in prose)
- Skip reflection after answer
- Regress phases
- Ignore skeptic auto-invoke triggers
</rules>
<references>
- [confidence.md](references/confidence.md) — confidence deep dive
- [questions.md](references/questions.md) — question crafting
- [examples/](examples/) — session examples
- [FORMATTING.md](../../shared/rules/FORMATTING.md) — formatting conventions
- skeptic agent (outfitter:skeptic) — complexity sanity checks
</references>
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### references/confidence.md
```markdown
# Confidence
Confidence reflects certainty that you can deliver the requested outcome with the available information.
## Philosophy
Balance two goals:
1. **Gather enough** to deliver quality results
2. **Avoid over-questioning** that frustrates user
Consider:
- **Clarity**: How well-defined is the ask?
- **Risk**: What happens if assumptions are wrong?
- **Complexity**: How many moving parts?
- **Ambiguity**: How many valid interpretations?
## Level Overview
| Bar | Level | Name | Internal % |
| --------- | ----- | ------------ | ---------- |
| `░░░░░` | 0 | **Prepping** | 0–19% |
| `▓░░░░` | 1 | **Scouting** | 20–39% |
| `▓▓░░░` | 2 | **Exploring** | 40–59% |
| `▓▓▓░░` | 3 | **Charting** | 60–74% |
| `▓▓▓▓░` | 4 | **Mapped** | 75–89% |
| `▓▓▓▓▓` | 5 | **Ready** | 90–100% |
## Phase Transitions
Confidence levels trigger phase transitions. Phases always advance, never regress.
### Phase-Confidence Mapping
| Level | Phase | activeForm |
|-------|-------|------------|
| 0–1 | Prep | "Prepping" |
| 2–3 | Explore | "Exploring" |
| 4 | Clarify | "Clarifying" |
| 5 | Deliver | "Delivering" |
### Rules
1. **No regression**: If confidence drops (4 → 3), stay in current phase
2. **Skip when starting high**: Level 5 start → go directly to Deliver
3. **Phase independence**: Confidence can fluctuate within a phase
4. **Early delivery**: User can request delivery at any phase → add `△ Caveats`
### Edge Cases
**High start**: Clear requirements → Start at Ready, go directly to Deliver
**Confidence drop**: Reach Mapped (4), enter Clarify, then realize gap (drops to 3) → Stay in Clarify, ask targeted questions
**Rapid ascent**: Start at Exploring (2) → one answer jumps to Mapped (4) → next to Ready (5) → transition through phases quickly
### Level 0: Prepping `░░░░░`
**Phase**: Prep
**When**: Request completely unclear, no domain context, pure guessing
**Ask**: Scope, constraints, goals, background
**Example**: "Make it better" with no context about what "it" is.
### Level 1: Scouting `▓░░░░`
**Phase**: Prep
**When**: Vague direction, domain clear but specifics aren't
**Ask**: What system? How big? What's in place?
**Example**: "Improvements to the dashboard" — which kind?
### Level 2: Exploring `▓▓░░░`
**Phase**: Explore
**When**: General area understood, lack critical details, multiple approaches possible
**Ask**: Which approach? What about X? What matters most? Speed vs quality?
**Example**: "Authentication" — method, scale, existing system unknown.
### Level 3: Charting `▓▓▓░░`
**Phase**: Explore
**When**: Reasonable understanding, could deliver with notable assumptions
**Do**:
1. Summarize (3 bullets max)
2. Ask 2–3 targeted questions toward level 4–5
3. If user proceeds early → add `△ Caveats`
**Example**: OAuth login — general approach known, need providers + fallback strategy.
### Level 4: Mapped `▓▓▓▓░`
**Phase**: Clarify
**When**: Solid understanding, few clarifications would reach Ready, low risk
**Do**: Offer choice — "Can proceed, but 1–2 more questions would reach full confidence. Continue or deliver now?"
**Example**: New API endpoint — data model understood, need error handling approach.
### Level 5: Ready `▓▓▓▓▓`
**Phase**: Deliver
**When**: Clear understanding, no major assumptions, minimal risk
**Do**: Produce artifact immediately, succinct next steps, no more questions unless something emerges
**Example**: "Add logout button to header" — clear, specific, low-risk.
## Special Cases
### Starting Confidence
Start honest. Don't artificially start low if the request is clear.
- **Clear request** → level 4–5
- **Vague request** → level 0–2
### Delivering Below Level 5
User wants quick delivery at lower confidence:
1. Confirm they want to proceed
2. Add `△ Caveats` section
3. List assumptions, concerns, unknowns
### Calibration
- Deliver at 5, goes well → calibrated
- Deliver at 5, miss the mark → overconfident
- Stay at 0–2 too long → underconfident
## Tuning
Percentage boundaries can adjust based on risk tolerance:
- **Higher risk tolerance** → shift boundaries down
- **Lower risk tolerance** → shift boundaries up
```
### references/questions.md
```markdown
# Question Format
## Anatomy of a Good Question
**Components**:
1. **Q{N}**: Question number (for tracking)
2. **Question**: Clear, specific, focused on one decision
3. **Why it matters**: One sentence explaining impact
4. **Options**: 2–4 meaningful choices
5. **Nuance**: Brief context for each option
6. **★ Recommendation** (optional): Your lean with reasoning
## Delivery via EnterPlanMode
Use `EnterPlanMode` for each question — enables keyboard navigation.
**Structure**:
- **Prose above tool**: context, reasoning, ★ recommendation
- **Inside tool**: options only (concise, scannable)
Don't bury recommendations inside the tool — keep them visible in prose.
## Crafting Options
### Option Count Guidelines
**2 options**: Use when choices are binary or you want to keep it simple
- Good: "Web app or mobile app?"
- Avoid: Forcing false dichotomy when more options exist
**3 options**: Sweet spot for most questions
- Good: Covers main approaches plus one alternative
- Avoid: Making options too similar
**4 options**: Use when you need a combination or "other"
- Good: Three distinct approaches + a hybrid option
- Avoid: Analysis paralysis with too many choices
### Option Quality
**Good options**:
- Mutually exclusive (can pick only one)
- Collectively exhaustive (covers reasonable space)
- Clearly differentiated (not subtle variations)
- Actionable (leads to concrete next steps)
**Bad options**:
- Overlapping: "Option 1: Use React. Option 2: Use modern framework."
- Too similar: "Option 1: 100ms timeout. Option 2: 150ms timeout."
- Vague: "Option 1: Do it the normal way."
- Open-ended: "Option 1: Whatever you think is best."
## Why It Matters
The one-sentence explanation serves multiple purposes:
1. **Context**: Helps user understand why you're asking
2. **Priority**: Shows this isn't arbitrary
3. **Decision framing**: Clarifies what depends on this choice
4. **Respect**: Demonstrates you're not just asking for the sake of asking
**Good examples**:
- "Why it matters — determines database schema design"
- "Why it matters — affects performance characteristics and scaling strategy"
- "Why it matters — impacts user experience for first-time visitors"
**Weak examples**:
- "Why it matters — I need to know"
- "Why it matters — this is important"
- "Why it matters — because"
## Adding Nuance
Each option should include helpful context:
**Good nuance**:
- Trade-offs: "Faster to implement but less flexible long-term"
- Implications: "Requires HTTPS and external dependency"
- Prerequisites: "Need existing user database"
- Typical use case: "Best for high-traffic applications"
**Weak nuance**:
- Restating the obvious: "Uses OAuth" (when option says OAuth)
- Generic statements: "Good option"
- No information: Just the option name with no context
## Recommendations (★)
Use recommendations when:
- You have genuine expertise or insight
- One option clearly fits better for typical cases
- User seems uncertain or asks for guidance
**Don't recommend when**:
- Purely user preference (e.g., color scheme)
- Not enough context yet
- All options equally valid
**Good**: `1. React [★] — mature ecosystem *best starting point for most teams*`
**Weak**:
- ★ I like this one
- ★ Most popular
- Recommendation buried in prose above options
## User Replies
Number is a shorthand, not a constraint:
- `2` → selects option 2
- `2, but with caching` → selection + modification
- `2 and 3` → combo
- `What's the difference?` → clarification request
All valid.
## Adaptive Cadence
**Baseline** (~80% of questions):
- Clear question + one-sentence "why"
- 2–4 options with brief nuance
- Inline `[★]` on recommended option
- Optional: `[★] { expanded reasoning }` in prose above if helpful
**Expand when**:
- High ambiguity or risk
- User uncertain or asks for detail
- Technical complexity needs explanation
**Simplify when**:
- Straightforward question
- User shows expertise
- Question 6+ in session
- User wants to move faster
```
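The question anatomy described in `questions.md` can be sketched as a small data shape (field and class names are illustrative, not part of the skill's API):

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    label: str                 # e.g. "React"
    nuance: str                # one-line trade-off or implication
    recommended: bool = False  # renders as the inline [★] marker

@dataclass
class Question:
    number: int                # Q{N}, for tracking
    text: str                  # one decision per question
    why_it_matters: str        # one-sentence impact statement
    options: list[Option] = field(default_factory=list)

    def validate(self) -> None:
        # "2-4 meaningful choices" per the guidelines above.
        if not 2 <= len(self.options) <= 4:
            raise ValueError("use 2-4 options per question")
```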