
tracking

A graph-based issue tracker designed for AI agents to manage multi-session work. It maintains task state and dependencies across conversation compactions, providing persistent memory for complex projects. Integrates with git for version control and supports automated review workflows.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars
23
Hot score
88
Updated
March 20, 2026
Overall rating
A (8.0)
Composite score
6.0
Best-practice grade
N/A

Install command

npx @skill-hub/cli install reinamaccredy-maestro-tracking
Tags: task-management, workflow-automation, persistent-memory, multi-session, dependency-tracking

Repository

ReinaMacCredy/maestro

Skill path: skills/tracking



Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI.

Target audience: AI developers and teams working on complex, multi-session projects that require persistent task tracking across conversation boundaries.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: ReinaMacCredy.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install tracking into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/ReinaMacCredy/maestro before adding tracking to shared team environments
  • Use tracking to persist task state and dependencies across multi-session work

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: tracking
description: >
  Tracks complex, multi-session work using the Beads issue tracker and dependency graphs, and provides
  persistent memory that survives conversation compaction. Use when work spans multiple sessions, has
  complex dependencies, or needs persistent context across compaction cycles. Trigger with phrases like
  "create task for", "what's ready to work on", "show task", "track this work", "what's blocking", or
  "update status". MUST load maestro-core skill first for routing.
metadata:
  version: "2.2.0"
---

## Prerequisites

- **Load maestro-core first** - [maestro-core](../maestro-core/SKILL.md) for routing table and fallback policies
- Routing and fallback policies are defined in [AGENTS.md](../../AGENTS.md).

# Tracking - Persistent Memory for AI Agents

Graph-based issue tracker that survives conversation compaction. Provides persistent memory for multi-session work with complex dependencies.

## Entry Points

| Trigger | Reference | Action |
|---------|-----------|--------|
| `bd`, `beads` | `references/workflow.md` | Core CLI operations |
| `fb`, `file-beads` | `references/FILE_BEADS.md` | File beads from plan → auto-orchestration |
| `rb`, `review-beads` | `references/REVIEW_BEADS.md` | Review filed beads |

## Quick Decision

**bd vs TodoWrite**:
- "Will I need this in 2 weeks?" → **YES** = bd
- "Could history get compacted?" → **YES** = bd
- "Has blockers/dependencies?" → **YES** = bd
- "Done this session?" → **YES** = TodoWrite

**Rule**: If resuming in 2 weeks would be hard without bd, use bd.
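The checklist above reduces to a single OR over three questions. A minimal runnable sketch (the function name is ours, not part of bd):

```shell
#!/bin/sh
# Decision heuristic from the checklist above. bd_or_todowrite is
# illustrative; bd itself has no such command.
bd_or_todowrite() {
  # $1 = needed in 2 weeks? $2 = compaction risk? $3 = has blockers/deps?
  if [ "$1" = yes ] || [ "$2" = yes ] || [ "$3" = yes ]; then
    echo bd          # persistent memory, survives compaction
  else
    echo TodoWrite   # ephemeral, single-session checklist
  fi
}

bd_or_todowrite yes no no   # multi-week work -> bd
bd_or_todowrite no no no    # done this session -> TodoWrite
```

Any single "yes" is enough to justify bd, which matches the "when in doubt, use bd" guidance.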

## Essential Commands

| Command | Purpose |
|---------|---------|
| `bd ready` | Show tasks ready to work on |
| `bd create "Title" -p 1` | Create new task |
| `bd show <id>` | View task details |
| `bd update <id> --status in_progress` | Start working |
| `bd update <id> --notes "Progress"` | Add progress notes |
| `bd close <id> --reason completed` | Complete task |
| `bd dep add <child> <parent>` | Add dependency |
| `bd sync` | Sync with git remote |
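To show how the creation and dependency commands compose, here is a hedged sketch; `bd` is stubbed as a shell function so the sequence runs anywhere, and the ids `bd-10`/`bd-11` are illustrative (real ids come from `bd create` output):

```shell
#!/bin/sh
# Stub bd so the command sequence is visible without the real CLI installed.
bd() { echo "bd $*"; }

bd create "Ship auth feature" -p 1       # parent task, priority 1
bd create "Write JWT middleware" -p 1    # child task
bd dep add bd-11 bd-10                   # child bd-11 now blocked on bd-10
bd ready                                 # blocked children stay hidden until parents close
```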

## Session Protocol

1. **Start**: `bd ready` → pick highest priority → `bd show <id>` → update to `in_progress`
2. **Work**: Add notes frequently (critical for compaction survival)
3. **End**: Close finished work → `bd sync` → `git push`
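Spelled out as commands, one session might look like the following sketch (`bd` is again stubbed so the call order runs anywhere; the id `bd-7` and the note text are illustrative):

```shell
#!/bin/sh
# bd stubbed as an echo so the protocol's call order is visible without the CLI.
bd() { echo "bd $*"; }

# 1. Start: list ready work, inspect the chosen task, claim it
bd ready
bd show bd-7
bd update bd-7 --status in_progress
# 2. Work: add notes frequently; this is what survives compaction
bd update bd-7 --notes "IN PROGRESS: refresh endpoint wired; NEXT: rate limiting"
# 3. End: close finished work and sync bead state to the git remote
bd close bd-7 --reason completed
bd sync
```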

## Reference Files

| Category | Files |
|----------|-------|
| **Workflows** | `workflow.md`, `WORKFLOWS.md`, `FILE_BEADS.md`, `REVIEW_BEADS.md` |
| **CLI** | `CLI_REFERENCE.md`, `DEPENDENCIES.md`, `LABELS.md` |
| **Integration** | `conductor-integration.md`, `VILLAGE.md`, `GIT_INTEGRATION.md` |
| **Operations** | `AGENTS.md`, `RESUMABILITY.md`, `TROUBLESHOOTING.md` |

## Anti-Patterns

- ❌ Using TodoWrite for multi-session work
- ❌ Forgetting to add notes (loses context on compaction)
- ❌ Not running `bd sync` before ending session
- ❌ Creating beads for trivial single-session tasks

## Related

- [maestro-core](../maestro-core/SKILL.md) - Workflow router and skill hierarchy
- [conductor](../conductor/SKILL.md) - Automated beads operations via facade
- [orchestrator](../orchestrator/SKILL.md) - Multi-agent parallel execution


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### ../maestro-core/SKILL.md

```markdown
---
name: maestro-core
description: Use when any Maestro skill loads - provides skill hierarchy, HALT/DEGRADE policies, and trigger routing rules for orchestration decisions
---

# Maestro Core - Workflow Router

Central hub for Maestro workflow skills. Routes triggers, defines hierarchy, and handles fallbacks.

## Skill Hierarchy

```
conductor (1) > orchestrator (2) > designing (3) > tracking (4) > specialized (5)
```

Higher rank wins on conflicts.

## Ownership Matrix

| Skill | Owns | Primary Triggers |
|-------|------|------------------|
| conductor | Implementation + autonomous | `ci`, `ca`, `/conductor-implement`, `/conductor-autonomous` |
| orchestrator | Parallel execution | `co`, `/conductor-orchestrate` |
| designing | Phases 1-10 (design → track creation) | `ds`, `cn`, `/conductor-newtrack`, `/conductor-design` |
| tracking | Task/bead management | `bd *`, `fb`, `rb` |
| handoff | Session cycling | `ho`, `/conductor-finish`, `/conductor-handoff` |
| creating-skills | Skill authoring | "create skill", "write skill" |

## Workflow Chain

```
ds/cn → design.md → /conductor-newtrack → spec.md + plan.md → fb → tracking → ci/co/ca → implementation
```

## Routing Table

**CRITICAL:** After loading `maestro-core`, you MUST explicitly load the target skill via `skill(name="...")` before proceeding.

See **[routing-table.md](references/routing-table.md)** for complete trigger → skill mappings, phrase triggers, and conditional routing logic.

### Quick Triggers

| Trigger | Skill |
|---------|-------|
| `ds`, `cn` | designing |
| `ci` | conductor |
| `ca` | conductor |
| `co` | orchestrator |
| `fb`, `rb`, `bd *` | tracking |
| `ho` | handoff |

### Routing Flow

```
1. User triggers command (e.g., `ci`)
2. Load maestro-core → get routing table
3. Look up trigger → find target skill
4. MUST call skill tool to load target skill
5. Follow loaded skill instructions
```

## Fallback Policies

| Condition | Action | Message |
|-----------|--------|---------|
| `bd` unavailable | HALT | `❌ Cannot proceed: bd CLI required` |
| `conductor/` missing | DEGRADE | `⚠️ Standalone mode - limited features` |
| Agent Mail unavailable | HALT | `❌ Cannot proceed: Agent Mail required for coordination` |

## Quick Reference

| Concern | Reference |
|---------|-----------|
| Complete workflow | [workflow-chain.md](references/workflow-chain.md) |
| All routing rules | [routing-table.md](references/routing-table.md) |
| Terms and concepts | [glossary.md](references/glossary.md) |
| CLI toolboxes | [toolboxes.md](references/toolboxes.md) |

## Related Skills

- [designing](../designing/SKILL.md) - Double Diamond design sessions (phases 1-10)
- [conductor](../conductor/SKILL.md) - Implementation execution
- [orchestrator](../orchestrator/SKILL.md) - Multi-agent parallel execution
- [tracking](../tracking/SKILL.md) - Issue tracking and dependency graphs
- [handoff](../handoff/SKILL.md) - Session cycling and context preservation
- [creating-skills](../creating-skills/SKILL.md) - Skill authoring and best practices
- [sharing-skills](../sharing-skills/SKILL.md) - Contributing skills upstream
- [using-git-worktrees](../using-git-worktrees/SKILL.md) - Isolated workspaces

```

### ../../AGENTS.md

```markdown
# AGENTS.md - Maestro Plugin

Workflow skills plugin: Conductor, Designing, Tracking, Orchestrator.

## Project Detection

| Condition | Result |
|-----------|--------|
| `conductor/` exists | Use Conductor workflow |
| `.beads/` exists | Use Beads tracking |
| Neither | Standalone mode |

## Decision Trees

### bd vs TodoWrite

```
bd available?
├─ YES → Use bd CLI
└─ NO  → HALT (do not use TodoWrite as fallback)
```

### Execution Mode

Always FULL mode via orchestrator. Even single tasks spawn 1 worker for consistency.

## Commands Quick Reference

### Planning

| Trigger | Action |
|---------|--------|
| `ds` | Design session (Double Diamond) |
| `/conductor-setup` | Initialize project context |
| `/conductor-newtrack` | Create spec + plan + beads from design |

### Execution

| Trigger | Action |
|---------|--------|
| `bd ready --json` | Find available work |
| `/conductor-implement <track>` | Execute track with TDD |
| `/conductor-implement --no-tdd` | Execute without TDD |
| `ca`, `/conductor-autonomous` | Autonomous execution (Ralph loop) |
| `tdd` | Enter TDD mode |
| `finish branch` | Finalize and merge/PR |

### Beads

| Command | Action |
|---------|--------|
| `fb` | File beads from plan |
| `rb` | Review beads |
| `bd status` | Show ready + in_progress |
| `bd show <id>` | Read task context |
| `bd update <id> --status in_progress` | Claim task |
| `bd close <id> --reason <completed\|skipped\|blocked>` | Close task |
| `bd sync` | Sync to git |

### Handoffs

| Command | Action |
|---------|--------|
| `/conductor-handoff` | Save session context |
| `/conductor-handoff resume` | Load session context |

### Maintenance

| Trigger | Action |
|---------|--------|
| `/conductor-revise` | Update spec/plan mid-work |
| `/conductor-finish` | Complete track, extract learnings |
| `/conductor-status` | Display progress overview |

## Session Protocol

### First Message

1. Check `conductor/handoffs/` for recent handoffs (< 7 days)
2. If found: `📋 Prior context: [track] (Xh ago)`
3. Skip if: "fresh start", no `conductor/`, or handoffs > 7 days

### Preflight Triggers

| Command | Preflight |
|---------|-----------|
| `/conductor-implement` | ✅ Yes |
| `/conductor-orchestrate` | ✅ Yes |
| `ds` | ❌ Skip |
| `bd ready/show/list` | ❌ Skip |

### Session Start

```bash
bd ready --json                      # Find work
bd show <id>                         # Read context
bd update <id> --status in_progress  # Claim
```

### During Session

- Heartbeat every 5 min (automatic)
- TDD checkpoints tracked by default
- Idle > 30min → prompt for handoff

### Session End

```bash
bd update <id> --notes "COMPLETED: X. NEXT: Y"
bd close <id> --reason completed
bd sync
```

### Session Identity

- Format: `{BaseAgent}-{timestamp}` (internal)
- Registered on `/conductor-implement` or `/conductor-orchestrate`
- Stale threshold: 10 min → takeover prompt

### Ralph (Autonomous Mode)

| Phase | Action |
|-------|--------|
| Start | `ca` sets `ralph.active = true`, invokes ralph.sh |
| During | Ralph iterates through stories, updates passes status |
| End | `ralph.active = false`, `workflow.state = DONE` |

**Exclusive Lock:** `ci`/`co` commands blocked while `ralph.active` is true.

**Gotchas:**
- `ralph.active` lock prevents concurrent `ci`/`co` execution
- `progress.txt` is in track directory, not project root
- Ralph reads/writes `metadata.json.ralph.stories` directly

## Fallback Policy

| Condition | Action |
|-----------|--------|
| `bd` unavailable | HALT |
| `conductor/` missing | DEGRADE (standalone) |
| Agent Mail unavailable | HALT |

## Skill Discipline

**RULE:** Load [maestro-core](skills/maestro-core/SKILL.md) FIRST before any workflow skill for routing table and fallback policies.

**RULE:** Check skills BEFORE ANY RESPONSE. 1% chance = invoke Skill tool.

### Red Flags (Rationalizing)

| Thought | Reality |
|---------|---------|
| "Just a simple question" | Questions are tasks. Check. |
| "Need more context first" | Skill check BEFORE clarifying. |
| "Let me explore first" | Skills tell HOW to explore. |
| "Doesn't need formal skill" | If skill exists, use it. |
| "I remember this skill" | Skills evolve. Re-read. |
| "Skill is overkill" | Simple → complex. Use it. |

### Skill Priority

1. **Process skills** (`ds`, `/conductor-design`) → determine approach
2. **Implementation skills** (`frontend-design`, `mcp-builder`) → guide execution

### Skill Types

| Type | Behavior |
|------|----------|
| Rigid (TDD) | Follow exactly |
| Flexible (patterns) | Adapt to context |

## Directory Structure

```
conductor/
├── product.md, tech-stack.md, workflow.md  # Context
├── CODEMAPS/                               # Architecture
├── handoffs/                               # Session context
└── tracks/<id>/                            # Per-track
    ├── design.md, spec.md, plan.md
    └── metadata.json
```

## Build/Test

```bash
cat .claude-plugin/plugin.json | jq .   # Validate manifest
```

## Code Style

- Skills: Markdown + YAML frontmatter (`name`, `description` required)
- Directories: kebab-case
- SKILL.md name must match directory

## Versioning

| Type | Method |
|------|--------|
| Plugin | CI auto-bump (`feat:` minor, `fix:` patch, `feat!:` major while 0.x) |
| Skill | Manual frontmatter update |
| Skip CI | `[skip ci]` in commit |

**Pre-1.0 Semantics**: While version is 0.x.x, breaking changes (`feat!:` or `release:major`) bump MINOR not MAJOR. This prevents accidental 1.0.0 release. See [CHANGELOG-legacy.md](CHANGELOG-legacy.md) for pre-0.5.0 history.

## Critical Rules

- Use `--json` with `bd` for structured output
- Use `--robot-*` with `bv` (bare `bv` hangs)
- Never write production code without failing test first
- Always commit `.beads/` with code changes

## Detailed References

| Topic | Path |
|-------|------|
| Beads workflow | [skills/tracking/references/workflow-integration.md](skills/tracking/references/workflow-integration.md) |
| Handoff system | [skills/conductor/references/workflows/handoff.md](skills/conductor/references/workflows/handoff.md) |
| Agent coordination | [skills/orchestrator/references/agent-coordination.md](skills/orchestrator/references/agent-coordination.md) |
| Router | [skills/orchestrator/references/router.md](skills/orchestrator/references/router.md) |
| Beads integration | [skills/conductor/references/beads-integration.md](skills/conductor/references/beads-integration.md) |
| TDD checkpoints | [skills/conductor/references/tdd-checkpoints-beads.md](skills/conductor/references/tdd-checkpoints-beads.md) |
| Idle detection | [skills/conductor/references/workflows/handoff.md](skills/conductor/references/workflows/handoff.md) |

```

### ../conductor/SKILL.md

```markdown
---
name: conductor
description: Implementation execution for context-driven development. Trigger with ci, /conductor-implement, or /conductor-* commands. Use when executing tracks with specs/plans. For design phases, see designing skill. For session handoffs, see handoff skill.
---

# Conductor: Implementation Execution

Execute tracks with TDD and parallel routing.

## Entry Points

| Trigger | Action | Reference |
|---------|--------|-----------|
| `/conductor-setup` | Initialize project context | [workflows/setup.md](references/workflows/setup.md) |
| `/conductor-implement` | Execute track (auto-routes if parallel) | [workflows/implement.md](references/workflows/implement.md) |
| `ca`, `/conductor-autonomous` | **Run ralph.sh directly** (no Task/sub-agents) | [workflows/autonomous.md](references/workflows/autonomous.md) |
| `/conductor-status` | Display progress overview | [structure.md](references/structure.md) |
| `/conductor-revise` | Update spec/plan mid-work | [workflows.md#revisions](references/workflows.md#revisions) |

## Related Skills (Not Owned by Conductor)

| For... | Use Skill | Triggers |
|--------|-----------|----------|
| Design phases (1-8) | [designing](../designing/SKILL.md) | `ds`, `cn`, `/conductor-design`, `/conductor-newtrack` |
| Session handoffs | [handoff](../handoff/SKILL.md) | `ho`, `/conductor-finish`, `/conductor-handoff` |

## Quick Reference

| Phase | Purpose | Output | Skill |
|-------|---------|--------|-------|
| Requirements | Understand problem | design.md | designing |
| Plan | Create spec + plan | spec.md + plan.md | designing |
| **Implement** | Build with TDD | Code + tests | **conductor** |
| **Autonomous** | Ralph loop execution | Auto-verified stories | **conductor** |
| Reflect | Verify before shipping | LEARNINGS.md | handoff |

## Core Principles

- **Load core first** - Load [maestro-core](../maestro-core/SKILL.md) for routing table and fallback policies
- **TDD by default** - RED → GREEN → REFACTOR (use `--no-tdd` to disable)
- **Beads integration** - Zero manual `bd` commands in happy path
- **Parallel routing** - `## Track Assignments` in plan.md triggers orchestrator
- **Validation gates** - Automatic checks at each phase transition

## Directory Structure

```
conductor/
├── product.md, tech-stack.md, workflow.md  # Project context
├── code_styleguides/                       # Language-specific style rules
├── CODEMAPS/                               # Architecture docs
├── handoffs/                               # Session context
├── spikes/                                 # Research spikes (pl output)
└── tracks/<track_id>/                      # Per-track work
    ├── design.md, spec.md, plan.md         # Planning artifacts
    └── metadata.json                       # State tracking (includes planning state)
```

See [structure.md](references/structure.md) for full details.

## Beads Integration

All execution routes through orchestrator with Agent Mail coordination:
- Workers claim beads via `bd update --status in_progress`
- Workers close beads via `bd close --reason completed|skipped|blocked`
- File reservations via `file_reservation_paths`
- Communication via `send_message`/`fetch_inbox`

See [beads/integration.md](references/beads/integration.md) for all 13 integration points.

## `/conductor-implement` Auto-Routing

1. Read `metadata.json` - check `orchestrated` flag
2. Read `plan.md` - check for `## Track Assignments`
3. Check `beads.fileScopes` - file-scope based grouping (see [execution/file-scope-extractor](references/execution/file-scope-extractor.md))
4. If parallel detected (≥2 non-overlapping groups) → Load [orchestrator skill](../orchestrator/SKILL.md)
5. Else → Sequential execution with TDD

### File Scope Detection

File scopes determine parallel routing (see [execution/parallel-grouping](references/execution/parallel-grouping.md)):
- Tasks touching same files → sequential (same track)
- Tasks touching different files → parallel (separate tracks)

## Anti-Patterns

- ❌ Manual `bd` commands when workflow commands exist
- ❌ Ignoring validation gate failures
- ❌ Using conductor for design (use [designing](../designing/SKILL.md) instead)
- ❌ Using conductor for handoffs (use [handoff](../handoff/SKILL.md) instead)

## Related

- [designing](../designing/SKILL.md) - Double Diamond design + track creation
- [handoff](../handoff/SKILL.md) - Session cycling and finish workflow
- [tracking](../tracking/SKILL.md) - Issue tracking (beads)
- [orchestrator](../orchestrator/SKILL.md) - Parallel execution
- [maestro-core](../maestro-core/SKILL.md) - Routing policies

```

### ../orchestrator/SKILL.md

```markdown
---
name: orchestrator
description: Multi-agent parallel execution with autonomous workers. Use when plan.md has Track Assignments section or user triggers /conductor-orchestrate, "run parallel", "spawn workers". MUST load maestro-core skill first for routing.
---

# Orchestrator - Multi-Agent Parallel Execution

> **Spawn autonomous workers to execute tracks in parallel using Agent Mail coordination.**

## Agent Mail: CLI Primary, MCP Fallback

This skill uses a **lazy-load pattern** for Agent Mail:

| Priority | Tool | When Available |
|----------|------|----------------|
| **Primary** | `bun toolboxes/agent-mail/agent-mail.js` | Always (via Bash) |
| **Fallback** | MCP tools (via `mcp.json`) | When skill loaded + MCP server running |

**Detection flow:**
```
1. Try CLI: bun toolboxes/agent-mail/agent-mail.js health-check
   ↓ success? → Use CLI for all Agent Mail operations
   ↓ fails?
2. Fallback: MCP tools (lazy-loaded via skills/orchestrator/mcp.json)
```

**CLI benefits:** Zero token cost until used, no MCP server dependency.

## Core Principles

- **Load core first** - Load [maestro-core](../maestro-core/SKILL.md) for routing table and fallback policies
- **CLI first** - Use `bun toolboxes/agent-mail/agent-mail.js` CLI before falling back to MCP tools
- **Pre-register workers** before spawning (Agent Mail validates recipients)
- **Workers own their beads** - can `bd claim/close` directly (unlike sequential mode)
- **File reservations prevent conflicts** - reserve before edit, release on complete
- **Summary before exit** - all workers MUST send completion message
- **TDD by default** - workers follow RED → GREEN → REFACTOR cycle (use `--no-tdd` to disable)

## When to Use

| Trigger | Condition |
|---------|-----------| 
| Auto-routed | `/conductor-implement` when plan has Track Assignments |
| File-scope | `/conductor-implement` when ≥2 non-overlapping file groups detected |
| Direct | `/conductor-orchestrate` or `co` |
| Phrase | "run parallel", "spawn workers", "dispatch agents" |
| **See also** | `ca` for [autonomous execution](../conductor/references/workflows/autonomous.md) |

## Auto-Trigger Behavior

Parallel execution starts **automatically** when detected - no confirmation needed:

```
📊 Parallel execution detected:
- Track A: 2 tasks (src/api/)
- Track B: 2 tasks (lib/)
- Track C: 1 task (schemas/)

⚡ Spawning workers...
```

## Quick Reference

| Action | Tool |
|--------|------|
| Parse plan.md | `Read("conductor/tracks/<id>/plan.md")` |
| Initialize | `bun toolboxes/agent-mail/agent-mail.js macro-start-session` |
| Spawn workers | `Task()` for each track |
| Monitor | `bun toolboxes/agent-mail/agent-mail.js fetch-inbox` |
| Resolve blockers | `bun toolboxes/agent-mail/agent-mail.js reply-message` |
| Complete | `bun toolboxes/agent-mail/agent-mail.js send-message`, `bd close epic` |
| Track threads | `bun toolboxes/agent-mail/agent-mail.js summarize-thread` |
| Auto-routing | Auto-detect parallel via `metadata.json.beads` |

## 8-Phase Orchestrator Protocol

0. **Preflight** - Session identity, detect active sessions
1. **Read Plan** - Parse Track Assignments from plan.md
2. **Validate** - Health check Agent Mail CLI (HALT if unavailable)
3. **Initialize** - ensure_project, register orchestrator + all workers
4. **Spawn Workers** - Task() for each track (parallel)
5. **Monitor + Verify** - fetch_inbox, verify worker summaries
   - Workers use track threads (`TRACK_THREAD`) for bead-to-bead context
6. **Resolve** - reply_message for blockers
7. **Complete** - Send summary, close epic, `rb` review

See [references/workflow.md](references/workflow.md) for full protocol.

## Worker 4-Step Protocol

All workers MUST follow this exact sequence:

```
STEP 1: INITIALIZE - bun toolboxes/agent-mail/agent-mail.js macro-start-session
STEP 2: EXECUTE    - claim beads, do work, close beads
STEP 3: REPORT     - bun toolboxes/agent-mail/agent-mail.js send-message
STEP 4: CLEANUP    - bun toolboxes/agent-mail/agent-mail.js release-file-reservations
```

| Step | Tool | Required |
|------|------|----------|
| 1 | `bun toolboxes/agent-mail/agent-mail.js macro-start-session` | ✅ FIRST |
| 2 | `bd update`, `bd close` | ✅ |
| 3 | `bun toolboxes/agent-mail/agent-mail.js send-message` | ✅ LAST |
| 4 | `bun toolboxes/agent-mail/agent-mail.js release-file-reservations` | ✅ |

**Critical rules:**
- ❌ Never start work before `macro-start-session`
- ❌ Never return without `send-message` to orchestrator
- ❌ Never touch files outside assigned scope

See [references/worker-prompt.md](references/worker-prompt.md) for full template.

## Agent Routing

| Intent Keywords | Agent Type | File Reservation |
|-----------------|------------|------------------|
| `research`, `find`, `locate` | Research | None (read-only) |
| `review`, `check`, `audit` | Review | None (read-only) |
| `plan`, `design`, `architect` | Planning | `conductor/tracks/**` |
| `implement`, `build`, `create` | Execution | Task-specific scope |
| `fix`, `debug`, `investigate` | Debug | None (read-only) |

See [references/intent-routing.md](references/intent-routing.md) for mappings.

## Anti-Patterns

| ❌ Don't | ✅ Do |
|----------|-------|
| Spawn workers without pre-registration | Register all workers BEFORE spawning |
| Skip completion summary | Always send_message before exit |
| Ignore file reservation conflicts | Wait or resolve before proceeding |
| Use orchestration for simple tasks | Use sequential `/conductor-implement` |

## Lazy References

Load references only when needed:

| Phase | Trigger Condition | Reference |
|-------|-------------------|-----------|
| Always | On skill load | SKILL.md (this file) |
| Phase 3 (Initialize) | Setting up Agent Mail, project registration | [agent-mail.md](references/agent-mail.md) |
| Phase 4 (Spawn) | Before dispatching worker agents | [worker-prompt.md](references/worker-prompt.md) |
| Phase 6 (Handle Issues) | Cross-track dependencies, blocker resolution | [agent-coordination.md](references/agent-coordination.md) |

### All References

| Topic | File |
|-------|------|
| Full workflow | [workflow.md](references/workflow.md) |
| Architecture | [architecture.md](references/architecture.md) |
| Coordination modes | [coordination-modes.md](references/coordination-modes.md) |
| Agent Mail protocol | [agent-mail.md](references/agent-mail.md) |
| Agent Mail CLI | [agent-mail-cli.md](references/agent-mail-cli.md) |
| Worker prompt template | [worker-prompt.md](references/worker-prompt.md) |
| Preflight/session brain | [preflight.md](references/preflight.md) |
| Intent routing | [intent-routing.md](references/intent-routing.md) |
| Summary format | [summary-protocol.md](references/summary-protocol.md) |
| Auto-routing | [auto-routing.md](references/auto-routing.md) |
| Track threads | [track-threads.md](references/track-threads.md) |

## Related

- [maestro-core](../maestro-core/SKILL.md) - Routing and fallback policies
- [conductor](../conductor/SKILL.md) - Track management, `/conductor-implement`
- [tracking](../tracking/SKILL.md) - Issue tracking, `bd` commands

```

### references/workflow.md

```markdown

# Beads

## Overview

bd is a graph-based issue tracker for persistent memory across sessions. Use for multi-session work with complex dependencies; use TodoWrite for simple single-session tasks.

**Parallel Execution**: When using the orchestrator skill, multiple agents coordinate through Agent Mail for file reservations and messaging. Single-agent mode remains the default.

## When to Use bd vs TodoWrite

### Use bd when:
- **Multi-session work** - Tasks spanning multiple compaction cycles or days
- **Complex dependencies** - Work with blockers, prerequisites, or hierarchical structure
- **Knowledge work** - Strategic documents, research, or tasks with fuzzy boundaries
- **Side quests** - Exploratory work that might pause the main task
- **Project memory** - Need to resume work after weeks away with full context

### Use TodoWrite when:
- **Single-session tasks** - Work that completes within current session
- **Linear execution** - Straightforward step-by-step tasks with no branching
- **Immediate context** - All information already in conversation
- **Simple tracking** - Just need a checklist to show progress

**Key insight**: If resuming work after 2 weeks would be difficult without bd, use bd. If the work can be picked up from a markdown skim, TodoWrite is sufficient.

### Test Yourself: bd or TodoWrite?

Ask these questions to decide:

**Choose bd if:**
- ❓ "Will I need this context in 2 weeks?" → Yes = bd
- ❓ "Could conversation history get compacted?" → Yes = bd
- ❓ "Does this have blockers/dependencies?" → Yes = bd
- ❓ "Is this fuzzy/exploratory work?" → Yes = bd

**Choose TodoWrite if:**
- ❓ "Will this be done in this session?" → Yes = TodoWrite
- ❓ "Is this just a task list for me right now?" → Yes = TodoWrite
- ❓ "Is this linear with no branching?" → Yes = TodoWrite

**When in doubt**: Use bd. Better to have persistent memory you don't need than to lose context you needed.

**For detailed decision criteria and examples, read:** [references/BOUNDARIES.md](./BOUNDARIES.md)

## Surviving Compaction Events

**Critical**: Compaction events delete conversation history but preserve beads. After compaction, bd state is your only persistent memory.

**What survives compaction:**
- All bead data (issues, notes, dependencies, status)
- Complete work history and context

**What doesn't survive:**
- Conversation history
- TodoWrite lists
- Recent discussion context

**Writing notes for post-compaction recovery:**

Write notes as if explaining to a future agent with zero conversation context:

**Pattern:**
```markdown
notes field format:
- COMPLETED: Specific deliverables ("implemented JWT refresh endpoint + rate limiting")
- IN PROGRESS: Current state + next immediate step ("testing password reset flow, need user input on email template")
- BLOCKERS: What's preventing progress
- KEY DECISIONS: Important context or user guidance
```

**After compaction:** `bd show <issue-id>` reconstructs full context from notes field.
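A minimal runnable sketch of checkpointing notes in that format (`bd` is stubbed here, and the issue id `bd-42` is illustrative):

```shell
#!/bin/sh
# bd stubbed so the sketch runs without the CLI; only the notes format matters.
bd() { echo "bd $*"; }

notes="COMPLETED: JWT refresh endpoint + rate limiting
IN PROGRESS: password reset flow; next: wire email template
BLOCKERS: waiting on reset-token expiry decision
KEY DECISIONS: RS256 over HS256 to allow key rotation"

bd update bd-42 --notes "$notes"
```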

### Notes Quality Self-Check

Before checkpointing (especially pre-compaction), verify your notes pass these tests:

❓ **Future-me test**: "Could I resume this work in 2 weeks with zero conversation history?"
- [ ] What was completed? (Specific deliverables, not "made progress")
- [ ] What's in progress? (Current state + immediate next step)
- [ ] What's blocked? (Specific blockers with context)
- [ ] What decisions were made? (Why, not just what)

❓ **Stranger test**: "Could another developer understand this without asking me?"
- [ ] Technical choices explained (not just stated)
- [ ] Trade-offs documented (why this approach vs alternatives)
- [ ] User input captured (decisions that came from discussion)

**Good note example:**
```
COMPLETED: JWT auth with RS256 (1hr access, 7d refresh tokens)
KEY DECISION: RS256 over HS256 per security review - enables key rotation
IN PROGRESS: Password reset flow - email service working, need rate limiting
BLOCKERS: Waiting on user decision: reset token expiry (15min vs 1hr trade-off)
NEXT: Implement rate limiting (5 attempts/15min) once expiry decided
```

**Bad note example:**
```
Working on auth. Made some progress. More to do.
```

**For complete compaction recovery workflow, read:** [references/WORKFLOWS.md](./WORKFLOWS.md#compaction-survival)

## Session Start Protocol

**bd is available when:**
- Project has a `.beads/` directory (project-local database), OR
- `~/.beads/` exists (global fallback database for any directory)

**At session start, always check for bd availability and run ready check.**

### Session Start Checklist

Copy this checklist when starting any session where bd is available:

```
Session Start:
- [ ] Run bd ready --json to see available work
- [ ] Run bd list --status in_progress --json for active work
- [ ] If in_progress exists: bd show <issue-id> to read notes
- [ ] Report context to user: "X items ready: [summary]"
- [ ] If using global ~/.beads, mention this in report
- [ ] If nothing ready: bd blocked --json to check blockers
```

**Pattern**: Always check both `bd ready` AND `bd list --status in_progress`. Read the notes field first to understand where the previous session left off.
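The checklist above can be condensed into one start-of-session script. This is a minimal sketch: `session_start` is an illustrative name, and the JSON shapes (arrays of objects with `id` fields) are assumptions to verify against your bd version.

```shell
# Sketch of the session-start checks as a single routine.
session_start() {
  local ready_count in_progress id
  # How many items are ready to work on?
  ready_count=$(bd ready --json | jq 'length')
  echo "${ready_count} items ready to work on"
  # Read notes for anything already in progress.
  in_progress=$(bd list --status in_progress --json | jq -r '.[].id')
  for id in $in_progress; do
    bd show "$id"   # notes field tells you where the last session stopped
  done
  # Nothing ready? Check what's blocking.
  if [ "$ready_count" -eq 0 ]; then
    bd blocked --json
  fi
}
```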

**Report format**:
- "I can see X items ready to work on: [summary]"
- "Issue Y is in_progress. Last session: [summary from notes]. Next: [from notes]. Should I continue with that?"

This establishes immediate shared context about available and active work without requiring user prompting.

**For the detailed collaborative handoff process, read:** [references/WORKFLOWS.md](./references/WORKFLOWS.md#session-handoff)

**Note**: bd auto-discovers the database:
- Uses `.beads/*.db` in current project if exists
- Falls back to `~/.beads/default.db` otherwise
- No configuration needed
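The discovery order can be mirrored in a small helper when you want to report which database a session will use. This sketches the documented rules, not bd's actual implementation; the function name is illustrative.

```shell
# Sketch of bd's documented database discovery order.
active_beads_db() {
  local local_db
  local_db=$(ls .beads/*.db 2>/dev/null | head -n1)
  if [ -n "$local_db" ]; then
    echo "$local_db"                  # project-local database
  else
    echo "$HOME/.beads/default.db"    # global fallback
  fi
}
```

Useful for the session-start report line "If using global ~/.beads, mention this in report".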

### When No Work is Ready

If `bd ready` returns empty but issues exist:

```bash
bd blocked --json
```

Report blockers and suggest next steps.

---

## Progress Checkpointing

Update bd notes at these checkpoints (don't wait for session end):

**Critical triggers:**
- **Context running low** - User says "running out of context" / "approaching compaction" / "close to token limit"
- **Token budget > 70%** - Proactively checkpoint when approaching limits
- **Major milestone reached** - Completed significant piece of work
- **Hit a blocker** - Can't proceed, need to capture what was tried
- **Task transition** - Switching issues or about to close this one
- **Before user input** - About to ask decision that might change direction

**Proactive monitoring during session:**
- At 70% token usage: "We're at 70% token usage - good time to checkpoint bd notes?"
- At 85% token usage: "Approaching token limit (85%) - checkpointing current state to bd"
- At 90% token usage: Automatically checkpoint without asking

**Current token usage**: Check `<system-warning>Token usage:` messages to monitor proactively.

**Checkpoint checklist:**

```
Progress Checkpoint:
- [ ] Update notes with COMPLETED/IN_PROGRESS/NEXT format
- [ ] Document KEY DECISIONS or BLOCKERS since last update
- [ ] Mark current status (in_progress/blocked/closed)
- [ ] If discovered new work: create issues with discovered-from
- [ ] Verify notes are self-explanatory for post-compaction resume
```

**Most important**: When user says "running out of context" OR when you see >70% token usage - checkpoint immediately, even if mid-task.

**Test yourself**: "If compaction happened right now, could future-me resume from these notes?"

---

## Degradation Signals

Monitor for signs of context degradation during work loops. When detected, trigger context compression to preserve session quality.

### Signal Types

| Signal | Definition | Threshold |
|--------|------------|-----------|
| `tool_repeat` | Same tool called on same target | Per-tool (see below) |
| `backtrack` | Revisiting a completed task | 1 occurrence |
| `quality_drop` | Test failures increase OR new lint errors appear | 1 occurrence |
| `contradiction` | Output conflicts with documented Decisions | 1 occurrence |

### Per-Tool Thresholds

| Tool | Threshold | Rationale |
|------|-----------|-----------|
| `file_write` / `edit_file` | 3 | Repeated edits suggest confusion |
| `bash` / `Bash` | 3 | Repeated commands suggest trial-and-error |
| `search` / `Grep` / `finder` | 5 | Searching is naturally iterative |
| `file_read` / `Read` | 10 | Reading is cheap, often needed |

### Trigger Rule

If 2+ signals fire within a task, trigger context compression.

This prevents single false positives from causing unnecessary compression while catching genuine degradation.
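The bookkeeping behind the per-tool thresholds and the trigger rule might look like this. It is an illustrative sketch: the tool names and thresholds come from the tables above, everything else (function names, data structures) is hypothetical.

```shell
# Sketch: count per-tool repeats and apply the 2+ signal trigger rule.
declare -A CALLS SIGNALS
declare -A THRESHOLD=([file_write]=3 [bash]=3 [search]=5 [file_read]=10)

record_tool_call() {                 # usage: record_tool_call <tool> <target>
  local key="$1:$2"
  CALLS[$key]=$(( ${CALLS[$key]:-0} + 1 ))
  # Same tool on same target at or past its threshold fires tool_repeat.
  if [ "${CALLS[$key]}" -ge "${THRESHOLD[$1]:-3}" ]; then
    SIGNALS[tool_repeat]=1
  fi
}

record_signal() { SIGNALS["$1"]=1; }   # backtrack, quality_drop, contradiction

should_compress() {                    # trigger when 2+ distinct signals fired
  [ "${#SIGNALS[@]}" -ge 2 ]
}
```

Each signal type counts once, so three repeated edits plus one backtrack triggers compression, but five searches alone do not.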

### Evaluation Timing

Evaluate degradation signals:
- **After each task completion** (implement.md Phase 3, step 6)
- **At token budget thresholds** (70%, 85%, 90%)
- **Before major phase transitions**

### Compression Action

When triggered:
1. Checkpoint current state to bd notes
2. Summarize completed work
3. Archive verbose context (tool outputs, intermediate states)
4. Reload essential context only (Intent, Decisions, Current State)

### Integration

This section extends Progress Checkpointing with degradation detection.

> **Cross-skill references:**
> - Load the [conductor](../../conductor/SKILL.md) skill for Phase 3 evaluation hooks
> - Load the [designing](../../designing/SKILL.md) skill for RECALL/REMEMBER integration

---

### Database Selection

bd automatically selects the appropriate database:
- **Project-local** (`.beads/` in project): Used for project-specific work
- **Global fallback** (`~/.beads/`): Used when no project-local database exists

**Use case for global database**: Cross-project tracking, personal task management, knowledge work that doesn't belong to a specific project.

**When to use --db flag explicitly:**
- Accessing a specific database outside current directory
- Working with multiple databases (e.g., project database + reference database)
- Example: `bd --db /path/to/reference/terms.db list`

**Database discovery rules:**
- bd looks for `.beads/*.db` in current working directory
- If not found, uses `~/.beads/default.db`
- Shell cwd can reset between commands - use absolute paths with --db when operating on non-local databases

**For complete session start workflows, read:** [references/WORKFLOWS.md](./references/WORKFLOWS.md#session-start)

## Core Operations

All bd commands support `--json` flag for structured output when needed for programmatic parsing.

### Essential Operations

**Check ready work:**
```bash
bd ready
bd ready --json              # For structured output
bd ready --priority 0        # Filter by priority
bd ready --assignee alice    # Filter by assignee
```

**Create new issue:**

**IMPORTANT**: Always quote title and description arguments with double quotes, especially when containing spaces or special characters.

```bash
bd create "Fix login bug"
bd create "Add OAuth" -p 0 -t feature
bd create "Write tests" -d "Unit tests for auth module" --assignee alice
bd create "Research caching" --design "Evaluate Redis vs Memcached"

# Examples with special characters (requires quoting):
bd create "Fix: auth doesn't handle edge cases" -p 1
bd create "Refactor auth module" -d "Split auth.go into separate files (handlers, middleware, utils)"
```

**Update issue status:**
```bash
bd update issue-123 --status in_progress
bd update issue-123 --priority 0
bd update issue-123 --assignee bob
bd update issue-123 --design "Decided to use Redis for persistence support"
```

**Close completed work:**
```bash
bd close issue-123
bd close issue-123 --reason "Implemented in PR #42"
bd close issue-1 issue-2 issue-3 --reason "Bulk close related work"
```

### Task Completion (Multi-Agent)

In multi-agent mode, use `done` instead of `bd close`:

```bash
done id="bd-42" msg="Implemented JWT refresh with rate limiting"
```

**What `done` does:**
- Closes the issue (like `bd close`)
- Releases all your file reservations
- Broadcasts completion message to team
- Makes blocked tasks ready

**Best practice**: One task per session. Complete with `done` before switching.

### Auto-Archive Plans on Close

When closing an issue that has a `**Source Plan:**` reference:

1. **Extract source plan path** from the issue description (e.g., `conductor/tracks/<id>/plan.md`)
2. **Query related issues by filename** (not full path, to avoid path format mismatches):
   ```bash
   # Extract just the filename for matching (handles absolute/relative path differences)
   PLAN_FILENAME=$(basename "<path>")
   bd list --json | jq --arg fn "$PLAN_FILENAME" '.[] | select(.description | test("Source Plan:.*" + $fn; "i"))'
   ```
3. **If all are closed** → archive the plan to the archive directory:
   ```bash
   # For plans in conductor/tracks/
   mkdir -p conductor/archive
   # Use timestamp suffix if destination already exists (re-archiving)
   DEST="conductor/archive/<id>"
   if [[ -d "$DEST" ]]; then
     DEST="conductor/archive/<id>-$(date +%Y%m%d%H%M%S)"
   fi
   mv conductor/tracks/<id>/ "$DEST/"
   ```

**Archive naming**: archived track directories keep their `<id>` (timestamp-suffixed only when the destination already exists); standalone plan files use a `YYYY-MM-DD-<original-name>.md` date prefix.

**Only archive when ALL issues from that plan are closed** — if any remain open, skip archiving.

**Show issue details:**
```bash
bd show issue-123
bd show issue-123 --json
```

**List issues:**
```bash
bd list
bd list --status open
bd list --priority 0
bd list --type bug
bd list --assignee alice
```

**For the complete CLI reference with all flags and examples, read:** [references/CLI_REFERENCE.md](./references/CLI_REFERENCE.md)

## Field Usage Reference

Quick guide for when and how to use each bd field:

| Field | Purpose | When to Set | Update Frequency |
|-------|---------|-------------|------------------|
| **description** | Immutable problem statement | At creation | Never (fixed forever) |
| **design** | Initial approach, architecture, decisions | During planning | Rarely (only if approach changes) |
| **acceptance-criteria** | Concrete deliverables checklist (`- [ ]` syntax) | When design is clear | Mark `- [x]` as items complete |
| **notes** | Session handoff (COMPLETED/IN_PROGRESS/NEXT) | During work | At session end, major milestones |
| **status** | Workflow state (open→in_progress→closed) | As work progresses | When changing phases |
| **priority** | Urgency level (0=highest, 3=lowest) | At creation | Adjust if priorities shift |

**Key pattern**: The notes field is your "read me first" at session start. See [references/WORKFLOWS.md](./references/WORKFLOWS.md#session-handoff) for session handoff details.
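Putting the table together, one issue's lifecycle might touch each field once. This is a sketch: the issue id is hypothetical, the flags are those used elsewhere in this document, and the wrapper function exists only so the commands read as one unit.

```shell
# Illustrative lifecycle touching each bd field once.
issue_lifecycle() {
  # description: immutable problem statement, set at creation
  bd create "Add rate limiting" -d "Login endpoint allows unlimited attempts" -p 1
  # design: HOW to build (may change later)
  bd update issue-200 --design "Token bucket, 5 attempts per 15 minutes"
  # acceptance-criteria: WHAT success looks like (stays stable)
  bd update issue-200 --acceptance "- [ ] 6th attempt within window returns 429"
  # status: workflow state
  bd update issue-200 --status in_progress
  # notes: session handoff
  bd update issue-200 --notes "IN PROGRESS: bucket logic done, wiring middleware"
  bd close issue-200 --reason "Rate limiting live, acceptance criteria met"
}
```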

---

## Issue Lifecycle Workflow

### 1. Discovery Phase (Proactive Issue Creation)

**During exploration or implementation, proactively file issues for:**
- Bugs or problems discovered
- Potential improvements noticed
- Follow-up work identified
- Technical debt encountered
- Questions requiring research

**Pattern:**
```bash
# When encountering new work during a task:
bd create "Found: auth doesn't handle profile permissions"
bd dep add current-task-id new-issue-id --type discovered-from

# Continue with original task - issue persists for later
```

**Key benefit**: Capture context immediately instead of losing it when conversation ends.

### 2. Execution Phase (Status Maintenance)

**Mark issues in_progress when starting work:**
```bash
bd update issue-123 --status in_progress
```

**Update throughout work:**
```bash
# Add design notes as implementation progresses
bd update issue-123 --design "Using JWT with RS256 algorithm"

# Update acceptance criteria if requirements clarify ($'...' turns \n into real newlines)
bd update issue-123 --acceptance $'- JWT validation works\n- Tests pass\n- Error handling returns 401'
```

**Close when complete:**
```bash
bd close issue-123 --reason "Implemented JWT validation with tests passing"
```

**Important**: Closed issues remain in the database - they're not deleted, just marked complete for project history.

### 3. Planning Phase (Dependency Graphs)

For complex multi-step work, structure issues with dependencies before starting:

**Create parent epic:**
```bash
bd create "Implement user authentication" -t epic -d "OAuth integration with JWT tokens"
```

**Create subtasks:**
```bash
bd create "Set up OAuth credentials" -t task
bd create "Implement authorization flow" -t task
bd create "Add token refresh" -t task
```

**Link with dependencies:**
```bash
# parent-child for epic structure
bd dep add auth-epic auth-setup --type parent-child
bd dep add auth-epic auth-flow --type parent-child

# blocks for ordering
bd dep add auth-setup auth-flow
```

**For detailed dependency patterns and types, read:** [references/DEPENDENCIES.md](./references/DEPENDENCIES.md)

## Dependency Types Reference

bd supports four dependency types:

1. **blocks** - Hard blocker (issue A blocks issue B from starting)
2. **related** - Soft link (issues are related but not blocking)
3. **parent-child** - Hierarchical (epic/subtask relationship)
4. **discovered-from** - Provenance (issue B discovered while working on A)
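One `bd dep add` per type, with illustrative issue ids; directions follow the conventions used elsewhere in this document (first argument blocks the second, and `discovered-from` runs from the current task to the new issue). The wrapper function exists only so the commands read as one unit.

```shell
# Sketch: one example per dependency type (issue ids illustrative).
link_examples() {
  bd dep add auth-setup auth-flow --type blocks          # setup must finish before flow
  bd dep add auth-epic auth-setup --type parent-child    # epic -> subtask
  bd dep add auth-flow auth-docs --type related          # soft, non-blocking link
  bd dep add auth-flow perf-fix --type discovered-from   # perf-fix found during auth-flow
}
```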

**For a complete guide on when to use each type, with examples and patterns, read:** [references/DEPENDENCIES.md](./references/DEPENDENCIES.md)

## Integration with TodoWrite

**Both tools complement each other at different timescales:**

### Temporal Layering Pattern

**TodoWrite** (short-term working memory - this hour):
- Tactical execution: "Review Section 3", "Expand Q&A answers"
- Marked completed as you go
- Present/future tense ("Review", "Expand", "Create")
- Ephemeral: Disappears when session ends

**Beads** (long-term episodic memory - this week/month):
- Strategic objectives: "Continue work on strategic planning document"
- Key decisions and outcomes in notes field
- Past tense in notes ("COMPLETED", "Discovered", "Blocked by")
- Persistent: Survives compaction and session boundaries

### The Handoff Pattern

1. **Session start**: Read bead → Create TodoWrite items for immediate actions
2. **During work**: Mark TodoWrite items completed as you go
3. **Reach milestone**: Update bead notes with outcomes + context
4. **Session end**: TodoWrite disappears, bead survives with enriched notes

**After compaction**: TodoWrite is gone forever, but bead notes reconstruct what happened.

### Example: TodoWrite tracks execution, Beads capture meaning

**TodoWrite:**
```
[completed] Implement login endpoint
[in_progress] Add password hashing with bcrypt
[pending] Create session middleware
```

**Corresponding bead notes:**
```
bd update issue-123 --notes "COMPLETED: Login endpoint with bcrypt password
hashing (12 rounds). KEY DECISION: Using JWT tokens (not sessions) for stateless
auth - simplifies horizontal scaling. IN PROGRESS: Session middleware implementation.
NEXT: Need user input on token expiry time (1hr vs 24hr trade-off)."
```

**Don't duplicate**: TodoWrite tracks execution, Beads captures meaning and context.

**For patterns on transitioning between tools mid-session, read:** [references/BOUNDARIES.md](./references/BOUNDARIES.md#integration-patterns)

## Common Patterns

### Pattern 1: Knowledge Work Session

**Scenario**: User asks "Help me write a proposal for expanding the analytics platform"

**What you see**:
```bash
$ bd ready
# Returns: bd-42 "Research analytics platform expansion proposal" (in_progress)

$ bd show bd-42
Notes: "COMPLETED: Reviewed current stack (Mixpanel, Amplitude)
IN PROGRESS: Drafting cost-benefit analysis section
NEXT: Need user input on budget constraints before finalizing recommendations"
```

**What you do**:
1. Read notes to understand current state
2. Create TodoWrite for immediate work:
   ```
   - [ ] Draft cost-benefit analysis
   - [ ] Ask user about budget constraints
   - [ ] Finalize recommendations
   ```
3. Work on tasks, mark TodoWrite items completed
4. At milestone, update bd notes:
   ```bash
   bd update bd-42 --notes "COMPLETED: Cost-benefit analysis drafted.
   KEY DECISION: User confirmed \$50k budget cap - ruled out enterprise options.
   IN PROGRESS: Finalizing recommendations (Posthog + custom ETL).
   NEXT: Get user review of draft before closing issue."
   ```

**Outcome**: TodoWrite disappears at session end, but bd notes preserve context for next session.

### Pattern 2: Side Quest Handling

During main task, discover a problem:
1. Create issue: `bd create "Found: inventory system needs refactoring"`
2. Link using discovered-from: `bd dep add main-task new-issue --type discovered-from`
3. Assess: blocker or can defer?
4. If blocker: `bd update main-task --status blocked`, work on new issue
5. If deferrable: note in issue, continue main task
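The blocker branch of this pattern, as one sequence (ids illustrative; the wrapper function exists only so the commands read as one unit):

```shell
# Sketch: side quest that turns out to be a blocker (step 4 above).
handle_side_quest() {
  bd create "Found: inventory system needs refactoring"
  bd dep add main-task new-issue --type discovered-from   # record provenance
  bd update main-task --status blocked                    # pause the main task
  bd update new-issue --status in_progress                # switch to the blocker
}
```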

### Pattern 3: Multi-Session Project Resume

Starting work after time away:
1. Run `bd ready` to see available work
2. Run `bd blocked` to understand what's stuck
3. Run `bd list --status closed --limit 10` to see recent completions
4. Run `bd show issue-id` on issue to work on
5. Update status and begin work

**For complete workflow walkthroughs with checklists, read:** [references/WORKFLOWS.md](./references/WORKFLOWS.md)

## Issue Creation

**Quick guidelines:**
- Ask user first for knowledge work with fuzzy boundaries
- Create directly for clear bugs, technical debt, or discovered work
- Use clear titles, sufficient context in descriptions
- Design field: HOW to build (can change during implementation)
- Acceptance criteria: WHAT success looks like (should remain stable)

### Issue Creation Checklist

Copy when creating new issues:

```
Creating Issue:
- [ ] Title: Clear, specific, action-oriented
- [ ] Description: Problem statement (WHY this matters) - immutable
- [ ] Design: HOW to build (can change during work)
- [ ] Acceptance: WHAT success looks like (stays stable)
- [ ] Priority: 0=critical, 1=high, 2=normal, 3=low
- [ ] Type: bug/feature/task/epic/chore
```

**Self-check for acceptance criteria:**

❓ "If I changed the implementation approach, would these criteria still apply?"
- → **Yes** = Good criteria (outcome-focused)
- → **No** = Move to design field (implementation-focused)

**Example:**
- ✅ Acceptance: "User tokens persist across sessions and refresh automatically"
- ❌ Wrong: "Use JWT tokens with 1-hour expiry" (that's design, not acceptance)

**For detailed guidance on when to ask vs create, issue quality, resumability patterns, and design vs acceptance criteria, read:** [references/ISSUE_CREATION.md](./references/ISSUE_CREATION.md)

## Alternative Use Cases

bd is primarily for work tracking, but it can also serve as a queryable database for static reference data (glossaries, terminology) with some adaptations.

**For guidance on using bd for reference databases and static data, read:** [references/STATIC_DATA.md](./references/STATIC_DATA.md)

## Statistics and Monitoring

**Check project health:**
```bash
bd stats
bd stats --json
```

Returns: total issues, open, in_progress, closed, blocked, ready, avg lead time

**Find blocked work:**
```bash
bd blocked
bd blocked --json
```

Use stats to:
- Report progress to user
- Identify bottlenecks
- Understand project velocity

## Advanced Features

### Issue Types

```bash
bd create "Title" -t task        # Standard work item (default)
bd create "Title" -t bug         # Defect or problem
bd create "Title" -t feature     # New functionality
bd create "Title" -t epic        # Large work with subtasks
bd create "Title" -t chore       # Maintenance or cleanup
```

### Priority Levels

```bash
bd create "Title" -p 0    # Highest priority (critical)
bd create "Title" -p 1    # High priority
bd create "Title" -p 2    # Normal priority (default)
bd create "Title" -p 3    # Low priority
```

### Bulk Operations

```bash
# Close multiple issues at once
bd close issue-1 issue-2 issue-3 --reason "Completed in sprint 5"

# Create multiple issues from markdown file
bd create --file issues.md
```

### Dependency Visualization

```bash
# Show full dependency tree for an issue
bd dep tree issue-123

# Check for circular dependencies
bd dep cycles
```

### Built-in Help

```bash
# Quick start guide (comprehensive built-in reference)
bd quickstart

# Command-specific help
bd create --help
bd dep --help
```

## JSON Output

All bd commands support `--json` flag for structured output:

```bash
bd ready --json
bd show issue-123 --json
bd list --status open --json
bd stats --json
```

Use JSON output when you need to parse results programmatically or extract specific fields.
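For example (a sketch: the exact JSON shapes are assumptions, so verify field names against `--json` output from your bd version; the wrapper function exists only so the pipelines read as one unit):

```shell
# Sketch: extract specific fields from bd's JSON output.
extract_fields() {
  # One "id<TAB>title" line per ready issue.
  bd ready --json | jq -r '.[] | "\(.id)\t\(.title)"'
  # Count open issues at priority 0.
  bd list --status open --json | jq '[.[] | select(.priority == 0)] | length'
}
```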

## Troubleshooting

**If bd command not found:**
- Check installation: `bd version`
- Verify PATH includes bd binary location

**If issues seem lost:**
- Use `bd list` to see all issues
- Filter by status: `bd list --status closed`
- Closed issues remain in the database permanently

**If bd show can't find issue by name:**
- `bd show` requires issue IDs, not issue titles
- Workaround: `bd list | grep -i "search term"` to find ID first
- Then: `bd show issue-id` with the discovered ID
- For glossaries/reference databases where names matter more than IDs, consider using markdown format alongside the database

**If dependencies seem wrong:**
- Use `bd show issue-id` to see full dependency tree
- Use `bd dep tree issue-id` for visualization
- Dependencies are directional: `bd dep add from-id to-id` means from-id blocks to-id
- See [references/DEPENDENCIES.md](./references/DEPENDENCIES.md#common-mistakes)

**If database seems out of sync:**
- bd auto-syncs JSONL after each operation (5s debounce)
- bd auto-imports JSONL when newer than DB (after git pull)
- Manual operations: `bd export`, `bd import`

## Conductor Integration

When using Beads with Conductor workflows, the integration is automated through a facade layer. This section documents when manual `bd` commands are still appropriate and the constraints when working in multi-agent mode.

### Automated vs Manual bd Commands

**Conductor handles automatically:**

| Action | Conductor Command | Beads Action |
|--------|-------------------|--------------|
| Session init | `/conductor-implement` | Mode detection, session state |
| Claim task | `/conductor-implement` | `bd update --status in_progress` |
| TDD checkpoints | Default (use `--no-tdd` to disable) | Notes update at each phase |
| Close task | Task completion | `bd close --reason <reason>` |
| Sync | Session end | `bd sync` |
| Track init | `/conductor-newtrack` | Create epic + issues |

**Manual bd is appropriate for:**

| Scenario | Command | Why Manual |
|----------|---------|------------|
| Create side quest | `bd create` | Discovered work outside plan |
| Check blockers | `bd blocked --json` | Diagnostic, not workflow |
| View dependencies | `bd dep tree <id>` | Debugging |
| Ad-hoc notes | `bd update --notes` | Context capture |
| Recovery | `bd sync` | After crash or error |

### Conductor-Managed Session

When Conductor is managing a session:

1. **Don't manually claim tasks** - Use `/conductor-implement` which handles preflight, claim, and session state
2. **Don't manually close tasks** - Conductor closes with proper reason and notes format
3. **Don't skip sync** - Conductor syncs at session end with retry logic

### Parallel Execution with Orchestrator

For independent tasks that can be worked simultaneously, use `/conductor-implement` which auto-routes to the orchestrator skill when parallel execution is beneficial. See the [orchestrator skill](../../orchestrator/SKILL.md) for details.

---

## Troubleshooting Workflows

Detailed information organized by topic:

| Reference | Read When |
|-----------|-----------|
| [references/BOUNDARIES.md](./references/BOUNDARIES.md) | Need detailed decision criteria for bd vs TodoWrite, or integration patterns |
| [references/CLI_REFERENCE.md](./references/CLI_REFERENCE.md) | Need complete command reference, flag details, or examples |
| [references/WORKFLOWS.md](./references/WORKFLOWS.md) | Need step-by-step workflows with checklists for common scenarios |
| [references/DEPENDENCIES.md](./references/DEPENDENCIES.md) | Need deep understanding of dependency types or relationship patterns |
| [references/ISSUE_CREATION.md](./references/ISSUE_CREATION.md) | Need guidance on when to ask vs create issues, issue quality, or design vs acceptance criteria |
| [references/STATIC_DATA.md](./references/STATIC_DATA.md) | Want to use bd for reference databases, glossaries, or static data instead of work tracking |

```

### references/FILE_BEADS.md

```markdown
# File Beads Epics and Issues from Plan

Convert a plan into Beads epics and issues using **parallel subagents** for speed, with checkpointing for resume capability.

## Phase 0: Initialize & Check for Resume

### 0.1 Determine Track Context

Review the plan context: `$ARGUMENTS`

If no plan provided, check:
- Recent `/conductor-design` output in current context
- `conductor/tracks/` for design.md, spec.md, and plan.md files

**Extract track ID** from plan path (e.g., `conductor/tracks/auth_20251223/plan.md` → `auth_20251223`).
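The extraction is just the plan's parent directory name (a minimal sketch; the function name is illustrative):

```shell
# Sketch: derive the track id from a plan path.
# conductor/tracks/auth_20251223/plan.md -> auth_20251223
track_id_from_plan() {
  basename "$(dirname "$1")"
}
```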

### 0.2 Check for Existing Progress

Check `metadata.json.beads` section in track directory:

```bash
jq '.beads' conductor/tracks/<track_id>/metadata.json 2>/dev/null
```

**If beads section exists:**
1. Read `status` field
2. **If `status: "complete"`:** Announce "Beads already filed for this track. Use `--force` to re-file." HALT.
3. **If `status: "in_progress"` or `"failed"`:** 
   - Read `resumeFrom` field to identify checkpoint (if present)
   - Read `epics` array to get already-created epics
   - Announce: "Resuming from checkpoint"
   - Skip to appropriate phase
4. **If `status: "pending"`:**
   - Fresh start, proceed to Phase 1

### 0.3 Validate Required State

**MANDATORY:** Verify metadata.json exists and has required sections.

```bash
TRACK_PATH="conductor/tracks/<track_id>"

# Check required files
MISSING=""
[ ! -f "$TRACK_PATH/metadata.json" ] && MISSING="$MISSING metadata.json"
[ ! -f "$TRACK_PATH/plan.md" ] && MISSING="$MISSING plan.md"

if [ -n "$MISSING" ]; then
  echo "❌ Cannot file beads. Missing:$MISSING"
  echo ""
  echo "Fix: Run /conductor-newtrack <track_id> first"
  exit 1
fi

# Validate JSON
if ! jq empty "$TRACK_PATH/metadata.json" 2>/dev/null; then
  echo "❌ Corrupted: metadata.json"
  exit 1
fi

# Ensure beads section exists
if ! jq -e '.beads' "$TRACK_PATH/metadata.json" > /dev/null 2>&1; then
  echo "ℹ️ Adding beads section to metadata.json"
  jq '.beads = {"status": "pending", "epicId": null, "epics": [], "issues": [], "planTasks": {}, "beadToTask": {}, "crossTrackDeps": [], "reviewStatus": null, "reviewedAt": null}' \
    "$TRACK_PATH/metadata.json" > "$TRACK_PATH/metadata.json.tmp.$$" && mv "$TRACK_PATH/metadata.json.tmp.$$" "$TRACK_PATH/metadata.json"
fi

echo "✓ State validated"
```

**If metadata.json missing or corrupted:** HALT. Do NOT auto-create.

### 0.4 Update Progress to In-Progress

Update `metadata.json.beads` to mark start:

```bash
jq --arg time "<current-timestamp>" \
   '.beads.status = "in_progress" | .beads.startedAt = $time' \
   "conductor/tracks/<track_id>/metadata.json" > "conductor/tracks/<track_id>/metadata.json.tmp.$$" && mv "conductor/tracks/<track_id>/metadata.json.tmp.$$" "conductor/tracks/<track_id>/metadata.json"
```

## Phase 1: Analyze Plan

**Identify for each epic:**

| Field | Description |
|-------|-------------|
| Epic title | Clear workstream name |
| Child tasks | Individual issues under this epic |
| Intra-epic deps | Dependencies within the epic |
| Cross-epic hints | Tasks that depend on other epics (by name, not ID) |
| Priority | 0-3 scale (0 = highest) |

**Update checkpoint:** Set `resumeFrom: "phase2"`.

## Phase 2: Create Epics First (Sequential)

Create all epics FIRST to get stable IDs before parallelizing child issues.

**For each epic:**

```bash
bd create "Epic: Authentication" -t epic -p 1 --json
# Returns: {"id": "bd-1", ...}
```

**After EACH epic creation, update metadata.json.beads:**

```json
{
  "beads": {
    "status": "in_progress",
    "epicId": "bd-1",
    "epics": [
      {
        "id": "bd-1",
        "title": "Epic: Authentication",
        "status": "created",
        "createdAt": "<timestamp>",
        "reviewed": false
      },
      {
        "id": "bd-2",
        "title": "Epic: Database Layer",
        "status": "created",
        "createdAt": "<timestamp>",
        "reviewed": false
      }
    ],
    ...
  }
}
```

This ensures that resuming after an interruption won't create duplicate epics.

**Update checkpoint:** Set `resumeFrom: "phase3"`.

## Phase 3: Parallel Dispatch for Child Issues (BATCHED)

Dispatch subagents in **batches of 5** to avoid rate limits.

### 3.1 Batch Calculation

```
If 12 epics:
  Batch 1: epics 1-5 (parallel)
  Batch 2: epics 6-10 (parallel)
  Batch 3: epics 11-12 (parallel)
```
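The batch split can be sketched in bash (batch size 5, per the dispatch rule; the function name is illustrative):

```shell
# Sketch: split an epic id list into batches of 5 for parallel dispatch.
batch_epics() {
  local -a epics=("$@")
  local -a batch
  local i
  for ((i = 0; i < ${#epics[@]}; i += 5)); do
    batch=("${epics[@]:i:5}")             # next slice of up to 5 epics
    echo "Batch $(( i / 5 + 1 )): ${batch[*]}"
  done
}
```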

### 3.2 Subagent Prompt Template

```markdown
Fill Epic: "<EPIC_TITLE>" (ID: <EPIC_ID>)

## Your Task
Create all child issues for this epic. The epic already exists.

## Epic Context
<PASTE_EPIC_SECTION_FROM_PLAN>

## Steps

1. For each task, create an issue with parent dependency:
   ```bash
   bd create "<task title>" -t <type> -p <priority> --deps <EPIC_ID> --json
   ```
   
   Include in each issue:
   - Clear action-oriented title
   - Acceptance criteria
   - Technical notes if relevant

2. Link intra-epic dependencies:
   ```bash
   bd dep add <child> <blocker> --type blocks --json
   ```

## Return Format

**CRITICAL: Validate your JSON before returning.**

Check that your response:
- Contains ONLY the JSON object (no markdown, no explanation)
- Has all required fields: epicId, epicTitle, issues, crossEpicDeps
- Each issue has: id, title, deps (array)
- crossEpicDeps items have: issueId, needsLinkTo

Return ONLY this JSON:
```json
{
  "epicId": "<EPIC_ID>",
  "epicTitle": "<title>",
  "issues": [
    {"id": "<issue-id>", "title": "...", "deps": ["<dep-id>"]}
  ],
  "crossEpicDeps": [
    {"issueId": "<issue-id>", "needsLinkTo": "<epic or task name>"}
  ]
}
```
```

### 3.3 Batch Dispatch

> **IMPORTANT:** You MUST actually invoke the Task tool. Do not just describe or write about dispatching — execute it.

**For each batch:**

```
// Dispatch batch (max 5 at once)
Task(description: "Fill Epic: Authentication (bd-1)", prompt: <above template>)
Task(description: "Fill Epic: Database Layer (bd-2)", prompt: <above template>)
Task(description: "Fill Epic: API Endpoints (bd-3)", prompt: <above template>)
// ... up to 5 parallel
```

**Wait for batch to complete.**

### 3.4 Handle Subagent Results

For each subagent result:

1. **Parse JSON response**
2. **If parse fails:**
   - **Retry once** with hint: "Your previous response was not valid JSON. Error: <parse-error>. Please return ONLY the JSON object."
   - **If retry fails:** Log warning, handle this epic in main agent context (fallback)
3. **If parse succeeds:**
   - Update progress file with issues
   - Record cross-epic dependencies for Phase 4

**Update metadata.json.beads after each batch:**

```json
{
  "beads": {
    "status": "in_progress",
    "epics": [...],
    "issues": ["bd-4", "bd-5", "bd-6", ...],
    ...
  }
}
```

### 3.5 Proceed to Next Batch

Repeat 3.3-3.4 until all batches complete.

**Update checkpoint:** Set `resumeFrom: "phase4"`.

## Phase 4: Collect & Link Cross-Epic Dependencies

When ALL subagents return:

1. Parse JSON results from each subagent
2. Build ID lookup table:
   ```
   "Authentication" → bd-1
   "Setup user table" → bd-5
   "Create auth middleware" → bd-8
   ```

3. Resolve cross-epic dependencies:
   ```bash
   bd dep add <from> <to> --type blocks --json
   ```

4. **Detect cross-track dependencies:**
   - If a task references another conductor track (e.g., "depends on api_20251223")
   - Record in `crossTrackDeps` array:
     ```json
     {"from": "bd-3", "to": "api_20251223:bd-7"}
     ```
   - Attempt to update both tracks' progress files

**Update checkpoint:** Set `resumeFrom: "phase5"`.

## Phase 5: Verify & Summarize

Run verification:

```bash
bd list --json
bd ready --json
```

Check:
- All epics have child issues
- No dependency cycles
- Some issues are ready (unblocked)
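These checks can be scripted. This is a sketch: it assumes `bd dep cycles` exits non-zero when a cycle exists (verify against your bd version), and it covers the ready and cycle checks only; the epic-children check still needs a per-epic pass.

```shell
# Sketch: post-filing verification (ready work exists, no cycles).
verify_filing() {
  local ready
  ready=$(bd ready --json | jq 'length')
  if [ "$ready" -eq 0 ]; then
    echo "WARN: no issues are ready - check dependencies"
    return 1
  fi
  bd dep cycles || { echo "WARN: dependency cycle detected"; return 1; }
  echo "OK: ${ready} issues ready"
}
```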

### Update Final metadata.json.beads

```json
{
  "beads": {
    "status": "complete",
    "startedAt": "<start-timestamp>",
    "epicId": "<primary-epic-id>",
    "epics": [
      {
        "id": "bd-1",
        "title": "Epic: Authentication",
        "status": "complete",
        "createdAt": "<timestamp>",
        "reviewed": false
      }
    ],
    "issues": ["bd-4", "bd-5", "bd-6", ...],
    "planTasks": {"1.1.1": "bd-4", "1.1.2": "bd-5", ...},
    "beadToTask": {"bd-4": "1.1.1", "bd-5": "1.1.2", ...},
    "crossTrackDeps": [
      {"from": "bd-3", "to": "api_20251223:bd-7"}
    ],
    "reviewStatus": null,
    "reviewedAt": null
  }
}
```

### Summary Format

Present to user:

```
## Filed Beads Summary

**Epics created:** 3
**Issues created:** 12

| Epic | Issues | Ready |
|------|--------|-------|
| Authentication | 4 | 2 |
| Database Layer | 5 | 1 |
| API Endpoints | 3 | 0 (blocked) |

**Start with:** bd-5 (Setup user table), bd-8 (Init auth config)

**Cross-epic deps linked:** 2
```

### After Completion

After parallel agents finish filing beads:

1. Summarize what was created (epic ID, issue count)
2. Update workflow state in metadata.json:
   ```bash
   jq --arg timestamp "<current-timestamp>" \
      '.workflow.state = "FILED" | .workflow.history += [{"state": "FILED", "at": $timestamp, "command": "fb"}]' \
      "conductor/tracks/<track_id>/metadata.json" > "conductor/tracks/<track_id>/metadata.json.tmp.$$" && mv "conductor/tracks/<track_id>/metadata.json.tmp.$$" "conductor/tracks/<track_id>/metadata.json"
   ```
3. Display completion with suggestion:
   ```
   ┌─────────────────────────────────────────┐
   │ ✓ Beads filed                           │
   │                                         │
   │ Epics: N                                │
   │ Issues: N                               │
   │                                         │
   │ → Next: rb (review beads)               │
   └─────────────────────────────────────────┘
   ```

## Phase 6: Auto-Orchestration

> **Automatic worker dispatch after beads are filed.**

After Phase 5 completes successfully, automatically trigger orchestration.

### 6.0 Idempotency Check

```bash
# Check if already orchestrated
orchestrated=$(jq -r '.beads.orchestrated // false' "conductor/tracks/<track_id>/metadata.json")
if [ "$orchestrated" = "true" ]; then
  echo "ℹ️ Already orchestrated - skipping Phase 6"
  # Skip to summary
fi
```

### 6.1 Query Dependency Graph

```bash
bv --robot-triage --graph-root <epic-id> --json
```

Output structure:
```json
{
  "quick_ref": { "open_count": 5, "blocked_count": 2, "ready_count": 3 },
  "beads": [
    { "id": "bd-1", "ready": true, "blocked_by": [] },
    { "id": "bd-2", "ready": false, "blocked_by": ["bd-1"] }
  ]
}
```

### 6.2 Generate Track Assignments

**Algorithm:**
1. Group ready beads (no blockers) into parallel tracks
2. Assign beads with blockers to the track of their primary blocker
3. Respect the `max_workers` limit (default: 3)
4. If the number of tracks exceeds the limit, merge the two smallest tracks

**Output format:**

| Track | Beads | Depends On |
|-------|-------|------------|
| 1 | bd-1, bd-2 | - |
| 2 | bd-3 | - |
| 3 | bd-4, bd-5 | bd-2, bd-3 |
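
The assignment algorithm can be sketched as below. Bead entries follow the `bv --robot-triage` output shape from 6.1; the single-pass blocker assignment and greedy merge are a simplification (a bead whose primary blocker is itself blocked may need a second pass):

```python
def assign_tracks(beads, max_workers=3):
    tracks = []
    owner = {}  # bead id -> track index
    # 1. Each ready bead seeds its own parallel track.
    for b in beads:
        if b["ready"]:
            owner[b["id"]] = len(tracks)
            tracks.append([b["id"]])
    # 2. Blocked beads join the track of their primary (first) blocker.
    for b in beads:
        if not b["ready"] and b["blocked_by"]:
            primary = b["blocked_by"][0]
            if primary in owner:
                idx = owner[primary]
                tracks[idx].append(b["id"])
                owner[b["id"]] = idx
    # 3-4. Merge the two smallest tracks until within max_workers.
    while len(tracks) > max_workers:
        tracks.sort(key=len)
        tracks = [tracks[0] + tracks[1]] + tracks[2:]
    return tracks
```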

### 6.3 Mark Orchestrated

```bash
TRACK_PATH="conductor/tracks/<track_id>"
tmp_file="$(mktemp "${TRACK_PATH}/metadata.json.tmp.XXXXXX")"

jq '.beads.orchestrated = true | .beads.orchestratedAt = (now | todate)' \
  "${TRACK_PATH}/metadata.json" > "$tmp_file" && \
  mv "$tmp_file" "${TRACK_PATH}/metadata.json"
```

### 6.4 Dispatch Workers

Call orchestrator with auto-generated Track Assignments.

See [auto-orchestrate.md](auto-orchestrate.md) for full protocol.

### 6.5 Agent Mail Requirement

If Agent Mail MCP unavailable:

```text
❌ Cannot proceed: Agent Mail required for parallel execution
```

HALT auto-orchestration. User must fix Agent Mail availability before proceeding with parallel dispatch.

## Phase 7: Final Review

After all workers complete, spawn `rb` sub-agent:

```python
Task(
  description="Final review: rb for epic <epic-id>",
  prompt="""
Run rb to review all completed beads for epic <epic-id>.

## Your Task
1. Verify all beads are properly closed
2. Check for any orphaned work or missing implementations
3. Validate acceptance criteria from spec.md
4. Report any issues or concerns

## Expected Output
Summary of review findings and overall quality assessment.
""",
)
```

This is a synchronous wait - the main agent blocks until rb completes, then collects the review findings.

Present completion summary after rb finishes.

## Priority Guide

| Priority | Use For |
|----------|---------|
| 0 | Critical path blockers, security |
| 1 | Core functionality, high value |
| 2 | Standard work (default) |
| 3 | Nice-to-haves, polish |
| 4 | Backlog, future |

## Error Recovery

| Error | Recovery |
|-------|----------|
| Subagent returns invalid JSON | Retry once with hint, then fall back to the main agent |
| Batch interrupted | Resume from `resumeFrom` field in progress file |
| Epic creation fails | Log error, continue with remaining epics |
| Dependency cycle detected | Log warning, skip that dependency |
| Cross-track reference fails | Log in progress file, suggest manual linking |

## Why This Approach?

- **Epics first (sequential)** — Prevents ID collisions; epic IDs are stable before parallelization
- **Child issues (parallel, batched)** — Each subagent works on a different epic, no conflicts, rate-limit safe
- **Checkpointing** — Resume after any interruption without duplicate work
- **JSON validation** — Catches subagent formatting issues early
- **Context hygiene** — `bd create` output stays in subagent contexts
- **Reliable linking** — Cross-epic dependencies resolve correctly since all epic IDs known upfront

```

### references/REVIEW_BEADS.md

```markdown

# Review and Refine Beads Issues

Review, proofread, and polish filed Beads epics and issues using **parallel subagents** for speed.

## Phase 0: Pre-Check & Track Discovery

### 0.1 Track Integrity Validation

Before any operations, validate track integrity per `skills/conductor/validation/track-checks.md`:

1. **Validate JSON files:** All state files must parse correctly (HALT on corruption)
2. **track_id validation:** Auto-fix mismatches in state files (directory name is source of truth)
3. **File existence matrix:** Verify track has valid file combination

**If validation fails:** HALT and report the issue.

### 0.2 Scan for Track Context

If `$ARGUMENTS` contains a track_id:
- Look for `.fb-progress.json` at `conductor/tracks/<track_id>/.fb-progress.json`

If no track_id provided, scan all tracks:
```bash
find conductor/tracks -name ".fb-progress.json" -type f 2>/dev/null
```

**If multiple tracks found:**
```
Found beads in multiple tracks:
1. auth_20251223 (5 epics, filed 2h ago)
2. api_20251223 (3 epics, filed 1d ago)

Which track to review? [1/2/all]
```

### 0.3 Check Track Directory Exists

Before reading progress file, verify the track directory exists:

```bash
test -d conductor/tracks/<track_id>
```

**If track directory missing (Edge Case 3: Deleted Track):**
```
⚠️ Track '<track_id>' deleted. Reviewing beads without track context.
```
- Continue without track context
- Use `bd list -t epic --json` to find epics directly
- Skip progress file operations

### 0.4 Check Progress File Status

Read `.fb-progress.json` from track directory:

```bash
cat conductor/tracks/<track_id>/.fb-progress.json
```

**Status checks:**

| Status | Behavior |
|--------|----------|
| `status: "in_progress"` | Warn: "Beads filing incomplete. Wait for fb to finish or resume with `fb <track_id>`." Consider halting. |
| `status: "failed"` | Warn: "Previous fb run failed. Resume with `fb <track_id>` before reviewing." |
| `status: "complete"` | Proceed with review |
| File not found | Warn: "No beads found for this track. Run `fb` first." HALT. |

**If proceeding:** Extract epic IDs from progress file for focused review.
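
The status table can be expressed as a small gate function. The messages mirror the table; the `(action, payload)` tuple return is an illustrative convention, not part of the rb protocol:

```python
def gate_review(progress):
    # progress is the parsed .fb-progress.json, or None if the file is missing.
    if progress is None:
        return ("halt", "No beads found for this track. Run `fb` first.")
    status = progress.get("status")
    if status == "in_progress":
        return ("warn", "Beads filing incomplete. Wait for fb to finish "
                        "or resume with `fb <track_id>`.")
    if status == "failed":
        return ("warn", "Previous fb run failed. Resume with `fb <track_id>` "
                        "before reviewing.")
    # status == "complete": extract epic IDs for focused review.
    return ("proceed", [e["id"] for e in progress.get("epics", [])])
```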

### 0.5 Sync Stale Progress (Edge Case 4)

Before proceeding, compare progress file against current beads state:

```bash
bd list -t epic --json
```

**Compare:**
- Epic `updated_at` (from beads) vs progress file `lastVerified` timestamp
- If epic `updated_at` > progress `lastVerified` → needs re-sync
- If epic in progress file but missing from `bd list` → remove from progress

**If stale (auto-correct with diff):**
```
ℹ️ Progress file synced with beads:
  - Removed: bd-3 (Epic: Auth) — no longer exists
  - Removed: bd-4 (Epic: API) — no longer exists
  Continuing with 3 epics.
```

Update progress file with corrected epic list and new `lastVerified` timestamp.
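
The comparison can be sketched as follows. It assumes ISO-8601 timestamps (which compare correctly as strings) and that `current_epics` is the `bd list -t epic --json` output keyed by epic ID; field names follow the examples in this document:

```python
def sync_progress(progress, current_epics):
    last_verified = progress.get("lastVerified", "")
    kept, removed, stale = [], [], []
    for epic in progress["epics"]:
        live = current_epics.get(epic["id"])
        if live is None:
            removed.append(epic["id"])   # epic deleted since filing
            continue
        if live.get("updated_at", "") > last_verified:
            stale.append(epic["id"])     # changed after last verification
        kept.append(epic)
    progress["epics"] = kept             # auto-correct the epic list
    return removed, stale
```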

### 0.6 Initialize Review State

Create/update `.fb-progress.json` with review tracking:

```json
{
  "trackId": "<track_id>",
  "status": "complete",
  "reviewStatus": "in_progress",
  "reviewStartedAt": "<timestamp>",
  "reviewThreadId": "<current-thread-id>",
  ...
}
```

## Phase 1: Load & Distribute

Get all epics from track or arguments:

**If track context available:**
```bash
# Get epic IDs from progress file
jq '.epics[].id' conductor/tracks/<track_id>/.fb-progress.json
```

**Otherwise:**
```bash
bd list -t epic --json
```

If specific IDs were provided (`$ARGUMENTS`), focus on those. Otherwise, review all epics.

**For each epic, gather its child issues:**
```bash
bd show <epic-id> --json
```

## Phase 2: Parallel Epic Reviews

Dispatch **ALL subagents in parallel** — each reviews one epic and its children.

### Subagent Prompt Template

```markdown
Review Epic: "<EPIC_TITLE>" (ID: <EPIC_ID>)

## Your Task
Review and refine this epic and all its child issues.

## Issues to Review
<LIST_OF_ISSUE_IDS_AND_TITLES>

## Review Checklist

For EACH issue, verify:

### Clarity
- [ ] Title is action-oriented and specific
- [ ] Description is clear and unambiguous
- [ ] A developer unfamiliar with the codebase could understand it
- [ ] No jargon without explanation

### Completeness
- [ ] Acceptance criteria are defined and testable
- [ ] Technical implementation hints provided where helpful
- [ ] Relevant file paths or modules mentioned
- [ ] Edge cases and error handling considered

### Dependencies
- [ ] All blocking dependencies are linked
- [ ] Dependencies are minimal (not over-constrained)

### Scope
- [ ] Issue is appropriately sized (not too large)
- [ ] No duplicate or overlapping issues

### Priority
- [ ] Priority reflects actual importance

## Common Fixes

1. **Vague titles**: "Fix bug" → "Fix null pointer in UserService.getProfile"
2. **Missing context**: Add relevant file paths, function names
3. **Implicit knowledge**: Make assumptions explicit
4. **Missing acceptance criteria**: Add "Done when..." statements

## Update Commands

```bash
bd update <id> --title "Improved title" --json
bd update <id> --description "New description" --json
bd update <id> --acceptance "Acceptance criteria" --json
bd update <id> --priority <new-priority> --json
```

## After Review - Mark as Reviewed

After reviewing each issue, add the "reviewed" label:
```bash
bd update <id> --label reviewed --json
```

## Return Format

**CRITICAL: Validate your JSON before returning.**

Return ONLY this JSON (no other text):
```json
{
  "epicId": "<EPIC_ID>",
  "epicTitle": "<title>",
  "issuesReviewed": 5,
  "issuesUpdated": 3,
  "changes": [
    {"id": "<issue-id>", "change": "Clarified title"},
    {"id": "<issue-id>", "change": "Added acceptance criteria"}
  ],
  "concerns": [
    {"id": "<issue-id>", "issue": "Needs user input on scope"}
  ],
  "crossEpicIssues": [
    {"id": "<issue-id>", "issue": "May conflict with Epic Y task Z"}
  ]
}
```
```

### Parallel Dispatch Example

> **IMPORTANT:** You MUST actually invoke the Task tool. Do not just describe or write about dispatching — execute it.

```
// Dispatch ALL at once
Task(description: "Review Epic: Authentication (bd-1)", prompt: <above template>)
Task(description: "Review Epic: Database Layer (bd-2)", prompt: <above template>)
Task(description: "Review Epic: API Endpoints (bd-3)", prompt: <above template>)
```

**All subagents run in parallel** — no waiting between dispatches.

## Phase 3: Cross-Epic Validation

When ALL subagents return, perform cross-epic validation:

### 3.1 Check for Cycles

```bash
bd dep cycles --json
```

If cycles found, fix them:
```bash
bd dep remove <from> <to> --json
```
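
`bd dep cycles` performs the detection; for reference, an equivalent check over a plain list of `(from, to)` edges looks like this (an illustrative DFS, not bd's actual implementation):

```python
from collections import defaultdict

def has_cycle(edges):
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    state = {}  # node -> 1 (in progress) or 2 (done)
    def visit(n):
        if state.get(n) == 1:   # back edge: cycle found
            return True
        if state.get(n) == 2:   # already fully explored
            return False
        state[n] = 1
        if any(visit(m) for m in graph[n]):
            return True
        state[n] = 2
        return False
    return any(visit(n) for n in list(graph))
```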

### 3.2 Validate Cross-Epic Links

Review `crossEpicIssues` from all subagents:
- Check if flagged conflicts are real
- Verify cross-epic dependencies make sense
- Add missing cross-epic deps if needed:
  ```bash
  bd dep add <from> <to> --type blocks --json
  ```

### 3.3 Check for Orphans (Edge Case 2)

```bash
bd list --json
```

Verify:
- All issues belong to an epic (have parent dep)
- No dangling references to deleted issues

**If orphan beads found (warn and include):**
```
⚠️ Found 1 epic without track origin. Including in review.
  (This may indicate fb didn't complete properly)
```
- Do NOT skip orphan beads
- Include them in review anyway
- Flag in summary as potential issue
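
A sketch of the orphan scan, assuming each issue exposes its parent link as a `dependencies` entry with `type: "parent-child"` — these field names are assumptions about the `bd list --json` schema, not documented:

```python
def find_orphans(issues, epic_ids):
    orphans = []
    for issue in issues:
        if issue["id"] in epic_ids:
            continue  # epics themselves have no parent
        parents = [d.get("target") for d in issue.get("dependencies", [])
                   if d.get("type") == "parent-child"]
        if not any(p in epic_ids for p in parents):
            orphans.append(issue["id"])  # warn and include in review anyway
    return orphans
```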

### 3.4 Verify Critical Path

```bash
bd ready --json
```

Check:
- Some issues are ready (unblocked)
- Critical path items have correct priorities
- Parallelization opportunities preserved

### 3.5 Update Progress File

Update `.fb-progress.json` with review state:

```json
{
  "trackId": "<track_id>",
  "status": "complete",
  "reviewStatus": "complete",
  "lastVerified": "<timestamp>",
  
  "epics": [
    {
      "id": "bd-1",
      "title": "Epic: Authentication",
      "status": "complete",
      "createdAt": "<timestamp>",
      "reviewed": true,
      "reviewedAt": "<timestamp>"
    }
  ],
  ...
}
```

## Phase 4: Summary & Handoff

### 4.1 Collect All Concerns

Aggregate `concerns` from all subagent results into a single list:

```
**Needs User Input:**
- bd-12: Scope unclear - should this include admin users?
- bd-18: Technical approach - use Redis or in-memory cache?
- bd-25: Priority unclear - is this blocking release?
```

**Do not pause mid-review for user input.** Complete the full review first, then present all concerns together.
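
Aggregation is mechanical given the subagent return format defined in Phase 2:

```python
def collect_concerns(results):
    # results: the parsed JSON returned by each review subagent.
    lines = ["**Needs User Input:**"]
    for r in results:
        for c in r.get("concerns", []):
            lines.append(f"- {c['id']}: {c['issue']}")
    return "\n".join(lines)
```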

### 4.2 Summary Format

Present combined results:

```
## Review Summary

**Epics reviewed:** 3
**Issues reviewed:** 15
**Issues updated:** 8

| Epic | Reviewed | Updated | Concerns |
|------|----------|---------|----------|
| Authentication | 5 | 3 | 1 |
| Database Layer | 6 | 4 | 0 |
| API Endpoints | 4 | 1 | 2 |

**Cross-epic validation:**
- Cycles: 0
- Orphans: 0
- Cross-deps verified: 4

**Needs User Input (3):**
- bd-12: Scope unclear - should this include admin users?
- bd-18: Technical approach - use Redis or in-memory cache?
- bd-25: Priority unclear - is this blocking release?

**Ready for implementation:** 6 issues
```

### 4.3 HANDOFF Block (REQUIRED)

**You MUST output this block — do not skip:**

1. Update workflow state in metadata.json:
   ```bash
   jq --arg timestamp "<current-timestamp>" \
      '.workflow.state = "REVIEWED" | .workflow.history += [{"state": "REVIEWED", "at": $timestamp, "command": "rb"}]' \
      "conductor/tracks/<track_id>/metadata.json" > "conductor/tracks/<track_id>/metadata.json.tmp.$$" && mv "conductor/tracks/<track_id>/metadata.json.tmp.$$" "conductor/tracks/<track_id>/metadata.json"
   ```

2. Display completion with suggestion:
   ```
   ┌─────────────────────────────────────────┐
   │ ✓ Beads reviewed                        │
   │                                         │
   │ Epics reviewed: N                       │
   │ Issues updated: N                       │
   │ Ready to start: N epics in parallel     │
   │                                         │
   │ → Next: Start epic {first-epic-id}      │
   │   Or: bd ready (show all ready tasks)   │
   │   Alt: /conductor-implement <track_id>  │
   └─────────────────────────────────────────┘
   ```

3. Include handoff info:
   ```markdown
   ## HANDOFF

   **Command:** `Start epic <first-epic-id>`
   **Epics:** <count> epics reviewed
   **Ready issues:** <count>
   **First task:** <first-ready-issue-id> - <title>

   Copy the command above to start a new session.
   ```

### 4.4 Completion Message

Say: **"Issues reviewed. Run `/conductor-implement` to start execution."**

## Priority Guide

| Priority | Use For |
|----------|---------|
| 0 | Critical path blockers, security |
| 1 | Core functionality, high value |
| 2 | Standard work (default) |
| 3 | Nice-to-haves, polish |
| 4 | Backlog, future |

## Error Recovery

| Error | Recovery |
|-------|----------|
| Progress file not found | Suggest running `fb` first |
| fb still in progress | Wait or suggest resuming fb |
| Multiple tracks found | Prompt user to select |
| Subagent returns invalid JSON | Retry once, then handle in main agent |
| Epic has no child issues | Flag as concern, continue |
| Track directory deleted | Warn, review beads without track context |
| Orphan beads found | Warn, include in review anyway |
| Stale progress file | Auto-sync with beads, show diff of changes |

## Why This Approach?

- **Track-aware** — Discovers track context from progress files
- **Pre-flight checks** — Ensures beads are filed before review
- **Parallel reviews** — Each epic reviewed simultaneously
- **Cross-epic validation** — Catches inter-epic issues after parallel phase
- **Two-layer tracking** — Progress file + beads labels for audit trail
- **Deferred concerns** — Collects all user-input items, presents at end
- **No conflicts** — Each subagent updates different epic's issues
- **Fast execution** — All epics reviewed at once
- **Comprehensive** — Both intra-epic and cross-epic quality checks

```
