
cross-ref

Cross-reference GitHub PRs and issues to find duplicates and missing links. Spawns parallel Sonnet subagents to semantically analyze the last N PRs and issues, finding PRs that solve the same problem (duplicates) and issues resolved by open PRs but not yet linked. Groups findings into thematic clusters, scores them by actionability, and offers rate-limited commenting or bulk actions (close, label). Use this skill when the user wants to find duplicate PRs, link issues to PRs, clean up a repo's cross-references, or audit PR/issue relationships. Also useful when the user says things like "find related PRs", "which PRs fix this issue", "are there duplicate PRs", "link issues and PRs", or "audit cross-references".

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars: 3,084
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: C (62.8)

Install command

npx @skill-hub/cli install openclaw-skills-cross-ref

Repository

openclaw/skills

Skill path: skills/glucksberg/cross-ref

Open repository

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install cross-ref into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding cross-ref to shared team environments
  • Use cross-ref for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: cross-ref
description: >
  Cross-reference GitHub PRs and issues to find duplicates and missing links.
  Spawns parallel Sonnet subagents to semantically analyze the last N PRs and issues,
  finding PRs that solve the same problem (duplicates) and issues resolved by open PRs
  but not yet linked. Groups findings into thematic clusters, scores them by actionability,
  and offers rate-limited commenting or bulk actions (close, label). Use this skill when
  the user wants to find duplicate PRs, link issues to PRs, clean up a repo's cross-references,
  or audit PR/issue relationships. Also useful when the user says things like
  "find related PRs", "which PRs fix this issue", "are there duplicate PRs",
  "link issues and PRs", or "audit cross-references".
---

# Cross-Ref: PR & Issue Linker

You find hidden connections between PRs and issues that humans miss at scale.
The core loop is: **fetch → analyze in parallel → cluster → verify → report → act**.

Before doing anything, read `references/principles.md`. Those rules override
everything in this file when there's a conflict.

## Overview

Repos accumulate duplicate PRs and orphaned issue→PR links over time. Manual
cross-referencing doesn't scale past a few dozen items. This skill uses parallel
Sonnet subagents to analyze up to 1000 PRs and 1000 issues simultaneously,
finding two kinds of links:

1. **Duplicate PRs** — PRs that address the same bug or feature (even with
   different approaches or wording)
2. **Issue→PR links** — Open issues that already have a PR solving them but
   no explicit "fixes #N" reference

Results are grouped into **thematic clusters**, scored by **actionability**,
and presented with available **actions** (comment, close, label) — not just
as a flat list of pairs.

## Configuration

The user provides these at invocation time (ask if not given):

| Parameter | Default | Description |
|-----------|---------|-------------|
| `repo` | *(ask)* | GitHub `owner/repo` to analyze |
| `pr_count` | 1000 | How many recent PRs to scan |
| `issue_count` | 1000 | How many recent issues to scan |
| `pr_state` | `all` | PR state filter: `open`, `closed`, `all` |
| `issue_state` | `open` | Issue state filter: `open`, `closed`, `all` |
| `batch_size` | 50 | PRs per subagent batch |
| `confidence_threshold` | `medium` | Minimum confidence to include in report: `low`, `medium`, `high` |
| `mode` | `plan` | `plan` = report only (default, always start here). `execute` = act on findings. |

**Default mode is `plan`** (dry-run). The skill always starts by generating
the report. The user must explicitly choose to execute actions after reviewing
the findings. This matters because actions can't be undone.

## Workflow

### Phase 1: Data Collection

Fetch PR and issue metadata from the GitHub API. This phase is deterministic
and uses the shell script — no AI needed.

```bash
scripts/fetch-data.sh <owner/repo> <workspace_dir> [pr_count] [issue_count] [pr_state] [issue_state]
```

This produces:
- `workspace/prs.json` — Full PR metadata
- `workspace/issues.json` — Full issue metadata (PRs filtered out)
- `workspace/existing-refs.json` — Pre-extracted explicit cross-references
- `workspace/pr-index.txt` — Compact one-line-per-PR index
- `workspace/issue-index.txt` — Compact one-line-per-issue index

The existing references map captures what's *already* linked (via "fixes #N",
"closes #N", etc.) so subagents can focus on what's *missing*.

### Phase 2: Parallel Analysis (Sonnet Subagents)

This is where the intelligence happens. Split PRs into batches and spawn
parallel Sonnet subagents. Each subagent receives:

- Its batch of PRs (full metadata from prs.json, ~50 PRs)
- The **complete** issue index (compact, ~60KB)
- The **complete** PR index (compact, ~60KB) — for duplicate detection
- The existing references map (so it skips already-linked items)

**Spawn subagents using the Task tool:**

```
For each batch B of {batch_size} PRs:
  Task(
    subagent_type="general-purpose",
    model="sonnet",
    prompt=<see below>
  )
```

**Subagent prompt template:**

**Important**: When building each subagent prompt, paste the FULL contents of
`references/principles.md` into the "Decision Principles" section below.
Do not summarize or condense — include the complete text. This ensures
subagents always use the latest principles without drift.

```
You are a cross-reference analyst for a GitHub repository. Your job is to find
connections between PRs and issues that aren't explicitly linked yet.

## Decision Principles (these override everything else)

{paste full contents of references/principles.md here}

## Your Batch
You are analyzing PRs {start_num} through {end_num} of {total_prs}.

## PR Details (your batch)
{full PR metadata for this batch from prs.json}

## Complete Issue Index
{issue-index.txt content}

## Complete PR Index
{pr-index.txt content}

## Already Known References
{existing-refs.json content}

## Your Task

Find TWO types of connections:

### 1. Issue→PR Links
For each PR in your batch, determine if it resolves any issue in the index.
Evidence must include at least one of:
- Same error message or failure path described in both
- PR modifies the component/module that the issue describes as broken
- PR body explicitly references the problem the issue describes (even without #N)

Title similarity alone is NOT sufficient. Skip any links that already exist
in the known references.

### 2. Duplicate PRs
For each PR in your batch, check if any OTHER PR in the full PR index
addresses the same problem. Evidence must include at least one of:
- Both modify the same files for the same reason
- Both fix the same error/behavior (even with different approaches)
- One is a resubmission or continuation of the other (same branch, similar body)

Same area of code is NOT enough — the PRs must address the same specific problem.

### 3. Flagging Uncertainty

If you encounter a pair where the evidence is ambiguous — you can see a
plausible connection but can't confirm it from the available data — mark it
with `"status": "manual_review_required"` instead of guessing a confidence
level. Include what's missing (e.g., "need to see full diff to confirm
file overlap").

### Output Format
Return ONLY a JSON array. No other text.

[
  {
    "type": "issue_link",
    "pr": 5678,
    "pr_author": "@username",
    "issue": 1234,
    "confidence": "high|medium|low",
    "status": "confirmed|manual_review_required",
    "root_cause": "One sentence: what shared problem connects these",
    "evidence": "Specific: same error message, same file, same component, etc.",
    "missing_evidence": null or "What would be needed to confirm this"
  },
  {
    "type": "duplicate_pr",
    "pr_a": 5678,
    "pr_b": 5679,
    "pr_a_author": "@username_a",
    "pr_b_author": "@username_b",
    "confidence": "high|medium|low",
    "status": "confirmed|manual_review_required",
    "root_cause": "One sentence: what shared problem connects these",
    "evidence": "Specific: same files modified, same branch, resubmission, etc.",
    "missing_evidence": null or "What would be needed to confirm this"
  }
]
```

**Parallelism**: Spawn ALL batch subagents simultaneously. With batch_size=50
and 1000 PRs, that's 20 parallel subagents. This is the power of the skill —
what would take hours sequentially completes in minutes.

### Phase 3: Merge, Deduplicate & Cluster

After all subagents return:

1. **Collect** all JSON results into a single array
2. **Deduplicate** duplicate_pr entries (A→B and B→A are the same link)
3. **Merge confidence** — if two subagents found the same link, take the
   higher confidence and merge both evidence strings
4. **Filter** by `confidence_threshold`
5. **Build clusters** — group related findings into thematic clusters (see below)
6. **Score clusters** by actionability (see below)
7. **Sort** clusters by score (highest first)

Save to `workspace/results-unverified.json`.
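The merge and confidence rules in steps 1-3 can be sketched as follows. This is a minimal illustration; `merge_findings` and `CONF_RANK` are hypothetical names, not part of the skill's scripts:

```python
# Hypothetical sketch of the merge step: A->B and B->A duplicate_pr entries
# collapse into one finding; repeated findings keep the higher confidence
# and both evidence strings.
CONF_RANK = {"low": 0, "medium": 1, "high": 2}

def merge_findings(findings):
    merged = {}
    for f in findings:
        if f["type"] == "duplicate_pr":
            # order-free key so (A, B) and (B, A) hash identically
            key = ("duplicate_pr",) + tuple(sorted((f["pr_a"], f["pr_b"])))
        else:
            key = ("issue_link", f["pr"], f["issue"])
        prev = merged.get(key)
        if prev is None:
            merged[key] = dict(f)
        else:
            # two subagents found the same link: keep the higher confidence
            if CONF_RANK[f["confidence"]] > CONF_RANK[prev["confidence"]]:
                prev["confidence"] = f["confidence"]
            # merge both evidence strings
            prev["evidence"] = prev["evidence"] + " | " + f["evidence"]
    return list(merged.values())
```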

#### Clustering Algorithm

Instead of reporting isolated pairs, group connected findings into clusters.
Two findings belong to the same cluster if they share any PR or issue number.

Example: If you find `PR#100 ↔ PR#101` (duplicate) and `PR#100 ↔ Issue#50`
(link), these form a single cluster: **"Cluster: Issue#50 + PR#100 + PR#101"**.

Cluster structure:
```json
{
  "cluster_id": 1,
  "theme": "Onboard token mismatch — OPENCLAW_GATEWAY_TOKEN ignored",
  "items": ["PR#22662", "PR#22658", "Issue#22638"],
  "findings": [ ...individual findings in this cluster... ],
  "score": 8.5,
  "cluster_status": "actionable|needs_review|manual_review_required",
  "suggested_actions": [ ...see Phase 4b... ]
}
```

The `theme` is a one-line summary that describes what this cluster is about
— the shared root cause or feature area. Generate it from the `root_cause`
fields of the cluster's findings.
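The shared-number rule is transitive, so clustering amounts to connected components over item identifiers. A minimal union-find sketch (all names are illustrative, not the skill's actual code):

```python
def build_clusters(findings):
    # Union-find over item identifiers like "PR#100" / "Issue#50":
    # two findings land in the same cluster iff they share any item.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    def items_of(f):
        if f["type"] == "duplicate_pr":
            return [f"PR#{f['pr_a']}", f"PR#{f['pr_b']}"]
        return [f"PR#{f['pr']}", f"Issue#{f['issue']}"]

    for f in findings:
        a, b = items_of(f)
        union(a, b)

    # group findings by the root of their first item
    clusters = {}
    for f in findings:
        root = find(items_of(f)[0])
        c = clusters.setdefault(root, {"items": set(), "findings": []})
        c["items"].update(items_of(f))
        c["findings"].append(f)
    return list(clusters.values())
```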

#### Actionability Scoring

Each cluster gets a score based on these signals (clamp result to 0-10):

| Signal | Points | Why it matters |
|--------|--------|----------------|
| All items open | +3 | Can still be acted on |
| At least one high-confidence finding | +2 | Strong evidence |
| Multiple findings in cluster | +1 | More connections = more value |
| Issue has >5 reactions/comments | +1 | High community interest |
| PR is not draft | +1 | Ready for review |
| Cluster has a clear canonical PR | +1 | Easy to pick a winner |
| Any `manual_review_required` | -2 | Needs human judgment |
| All items closed | -3 | Low urgency |

Clusters scoring 7+ are **actionable** (green in report).
Clusters scoring 4-6 **need review** (yellow).
Clusters scoring 0-3 are **low priority** (gray).
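The scoring table reduces to a straightforward sum with a final clamp. A sketch, assuming the listed signals have been precomputed on each cluster dict (field names are illustrative):

```python
def score_cluster(cluster):
    # Each branch mirrors one row of the signal table above.
    s = 0
    if cluster["all_open"]:                  s += 3  # can still be acted on
    if cluster["has_high_confidence"]:       s += 2  # strong evidence
    if len(cluster["findings"]) > 1:         s += 1  # multiple connections
    if cluster["max_issue_engagement"] > 5:  s += 1  # community interest
    if cluster["has_non_draft_pr"]:          s += 1  # ready for review
    if cluster["has_canonical_pr"]:          s += 1  # easy to pick a winner
    if cluster["has_manual_review"]:         s -= 2  # needs human judgment
    if cluster["all_closed"]:                s -= 3  # low urgency
    return max(0, min(10, s))                # clamp to 0-10
```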

### Phase 3b: Evidence Verification

The batch subagents work from truncated bodies (500 chars) and compact indexes.
That's good enough for discovery but not for final decisions. This phase takes
the candidates and verifies them against deeper data.

Spawn a single verification subagent (Sonnet) that:

1. Reads `workspace/results-unverified.json`
2. For each high/medium candidate, fetches deeper evidence via `gh`:
   - **Duplicate PRs**: `gh pr diff {id} --name-only` for both PRs to confirm
     they actually touch the same files. If the file lists don't overlap at all,
     downgrade to `low` or remove.
   - **Issue→PR links**: `gh issue view {id} --json body,comments` to read the
     full issue body (not truncated) and check if any commenter already noted
     the connection.
   - **For both**: `gh pr view {id} --json body` to read the full PR body
     when the truncated version was ambiguous.
3. For `manual_review_required` items: attempt to resolve with deeper data.
   If still ambiguous after deep check, keep the flag — it goes to the user.
4. Upgrades, downgrades, or removes candidates based on the deeper evidence.
5. Recalculates cluster scores after confidence changes.
6. Writes the verified results to `workspace/results.json`.

**Verification subagent prompt:**

```
You are an evidence verification agent. You received candidate cross-references
between GitHub PRs and issues from a discovery pass. Your job is to verify or
reject each candidate using deeper data.

## Principles
- A candidate stays only if deeper evidence confirms the connection.
- If file diffs don't overlap for duplicate PRs, downgrade or remove.
- If the full issue body reveals the problem is actually different, remove.
- If someone already commented the link, exclude the candidate from results entirely.
- You may upgrade "medium" to "high" if deeper evidence is strong.
- For "manual_review_required" items: try to resolve with the deeper data.
  If you can confirm or deny, update status to "confirmed" with the new
  confidence. If still ambiguous, keep "manual_review_required".
- Add a "verified_evidence" field with what you found in the deep check.

## Candidates to verify
{contents of results-unverified.json}

## Commands available
Run these via bash to fetch deeper data:
- gh pr diff {number} --name-only --repo {owner/repo}
- gh pr view {number} --json body --repo {owner/repo}
- gh issue view {number} --json body,comments --repo {owner/repo}

## Output
Write verified results to {workspace}/results.json as a JSON array.
Same structure as input, but with:
- Updated confidence levels and status fields
- Added "verified_evidence" field
- Removed any candidates that didn't survive verification
- Added "verification_note" for anything noteworthy
```

This phase catches false positives that slipped through the discovery phase.
The batch subagents are optimized for recall (find everything plausible); the
verifier is optimized for precision (keep only what's real).

**Skip this phase** if the total candidate count is under 5 — the cost of
verification outweighs the benefit for small result sets.

### Phase 4: Generate Report

Present the report to the user organized by clusters, not flat pairs.

**Report structure:**

```markdown
# Cross-Reference Report: {owner}/{repo}

**Scanned**: {N} PRs, {M} issues
**Found**: {X} clusters containing {Y} findings
**Already linked**: {Z} existing references (skipped)
**Mode**: plan (review only — no actions taken)

## Clusters (sorted by actionability score)

### Cluster 1: Onboard token mismatch (Score: 8.5 🟢)
**Theme**: OPENCLAW_GATEWAY_TOKEN env var ignored during onboard setup
**Items**: PR#22662 (@aiworks451), PR#22658 (@otherdev), Issue#22638
**Status**: Actionable

| Finding | Type | Confidence | Root Cause |
|---------|------|------------|------------|
| PR#22662 ↔ PR#22658 | duplicate_pr | high | Both fix token mismatch in onboard wizard |
| PR#22658 → Issue#22638 | issue_link | high | PR explicitly closes the issue |

**Suggested actions** (choose per cluster):
- 💬 Comment on PR#22662 noting PR#22658 covers the same fix more broadly
- 🏷️ Label PR#22662 as `duplicate`
- ❌ Close PR#22662 as duplicate of PR#22658

---

### Cluster 2: i18n Portuguese translations (Score: 6.0 🟡)
**Theme**: Competing pt-BR translation implementations
**Items**: PR#22637 (@dev1), PR#22628 (@dev2)
**Status**: Needs review — different approaches, human must choose

| Finding | Type | Confidence | Root Cause |
|---------|------|------------|------------|
| PR#22637 ↔ PR#22628 | duplicate_pr | medium | Same feature, different implementations |

**Suggested actions**:
- 💬 Comment linking the two PRs for coordination
- ⚠️ Manual review required: different i18n architectures, maintainer must decide

---

### ⚠️ Items Requiring Manual Review

These findings had ambiguous evidence that couldn't be resolved automatically:

| Finding | Reason | What's Missing |
|---------|--------|----------------|
| PR#1234 ↔ Issue#5678 | Keyword overlap but no shared error path | Need to check if PR touches the auth module |

## Summary
- **Actionable clusters**: {count} (score 7+, ready for bulk action)
- **Needs review**: {count} (score 4-6, human judgment needed)
- **Manual review required**: {count} (ambiguous, flagged for human)
- **Next step**: Choose actions per cluster, then select a commenting/action strategy.
```

### Phase 4b: Suggested Actions Per Cluster

For each cluster, suggest appropriate actions based on confidence and item states.

**For duplicate PRs (high confidence, both open):**
1. 💬 **Comment** — link the PRs so authors can coordinate
2. 🏷️ **Label** — add `duplicate` label to the weaker PR
3. ❌ **Close** — close the weaker PR as duplicate (only if very clear)

**For duplicate PRs (one open, one closed):**
1. 💬 **Comment** — note the connection for context (lower priority)

**For issue→PR links (high confidence):**
1. 💬 **Comment on issue** — note that a PR addresses this
2. 🏷️ **Label issue** — add `has-pr` or similar

**For `manual_review_required` items:**
1. ⚠️ **Flag for human** — present in a separate section, no automated action

**Action rules:**
- Never suggest closing without high confidence + verification
- Never suggest labeling without at least medium confidence
- Always suggest commenting as the minimum action (it's the safest)
- For clusters with mixed confidence, suggest the action matching the
  lowest-confidence finding (conservative)

### Phase 5: Interactive Action Strategy

After presenting the report, ask the user how they want to proceed.
Read `references/commenting-strategy.md` for rate-limiting details.

**Present action choices per cluster:**

For each actionable cluster, let the user pick:
- **Comment only** — just link the items
- **Comment + label** — link and add labels
- **Comment + close** — link and close duplicates (high confidence only)
- **Skip** — do nothing for this cluster
- **Manual** — I'll handle this one myself

Then present the timing strategy. Read `references/commenting-strategy.md` for
the full tier definitions, rate calculations, and daily budget math. Present
the user with the strategy table from that file, populated with the actual
counts from the report. If total actions exceed the daily budget, show the
multi-day plan as described in commenting-strategy.md.

Always offer **Dry Run** (report only, no actions) as the default choice.
Also offer **Skip** — save the report but don't act at all.

### Phase 6: Execute Actions

If the user chooses to act, build `workspace/approved-comments.json` and
execute with rate limiting via the shell script.

**approved-comments.json schema** (array of objects):
```json
[
  {
    "target_number": 1234,
    "type": "issue_link|duplicate_pr",
    "body": "The full comment text to post",
    "cluster_id": 1,
    "finding_index": 0
  }
]
```
- `target_number` — the issue or PR number to comment on (used by post-comments.sh)
- `type` — finding type, used for logging only
- `body` — the complete comment text
- `cluster_id` and `finding_index` — traceability back to the report

```bash
scripts/post-comments.sh <owner/repo> <workspace_dir> [jitter_min] [jitter_max] [daily_max]
```

**For label and close actions**, execute them inline (not via the script)
since they don't need the same rate limiting as comments:
```bash
# Label (works for both issues and PRs — GitHub treats PRs as issues for labels)
gh issue edit {number} --add-label duplicate --repo {owner/repo}
# Close PR as duplicate (use heredoc for safe body passing)
gh pr close {number} --comment "$(cat <<'EOF'
Closing in favor of #{canonical_pr_number} by @{canonical_author}, which covers the same change ({root_cause_sentence}).

Thanks for the contribution, @{closed_pr_author} — your work helped confirm this was worth fixing.

_If this closure is wrong, reopen and let me know._
EOF
)" --repo {owner/repo}
```

**Always execute in this order within a cluster:**
1. Post comments first (so the context exists before close/label)
2. Add labels
3. Close (only after comment is posted)

**Comment style**: Comments should feel like they're from a helpful maintainer,
not a bot. Vary the opener and closer for each comment to avoid sounding
repetitive. Always mention the PR author by name.

**Comment templates** (vary the opener each time):

Openers (rotate through these, never use the same one twice in a row):
- "Heads up — this might be related."
- "Worth a look:"
- "Noticed a possible connection here."
- "This could be relevant to what you're working on."

For issue→PR links (comment on the issue):
```
{opener}

PR #{pr_number} by @{author} ({pr_title}) appears to address this issue.

{root_cause_sentence}

_If this doesn't look right, let me know and I'll correct the link._
```

For duplicate PRs (comment on the newer PR):
```
{opener}

PR #{other_pr_number} by @{other_author} ({other_pr_title}) seems to address
the same problem.

{root_cause_sentence}

Both approaches have merit — might be worth coordinating.

_If these aren't actually related, let me know and I'll correct this._
```

Every comment includes a correction path because wrong links erode trust.

Save progress to `workspace/comment-progress.json` for resume support.

## Error Handling

- **API rate limit hit**: Pause, show remaining reset time, save progress.
- **Subagent returns invalid JSON**: Log the error, skip that batch, warn user.
  Don't retry — the batch results are lost but other batches continue.
- **PR/issue not found (deleted)**: Skip silently, note in report.
- **Network error during commenting**: Save progress immediately, offer resume.
- **Subagent returns empty results**: Normal — not every batch has links.
- **Close/label fails**: Log the error, continue with remaining actions.
  Never retry a close — the user should investigate manually.

## Workspace Structure

```
cross-ref-workspace/
├── prs.json                  # Raw PR metadata
├── issues.json               # Raw issue metadata
├── pr-index.txt              # Compact PR index (one line per PR)
├── issue-index.txt           # Compact issue index (one line per issue)
├── existing-refs.json        # Pre-extracted explicit references
├── batches/
│   ├── batch-01-results.json # Subagent results per batch
│   ├── batch-02-results.json
│   └── ...
├── results-unverified.json   # Raw merged findings (before verification)
├── results.json              # Verified findings with clusters
├── report.md                 # Human-readable report
├── approved-comments.json    # Comments approved for posting
├── comment-progress.json     # Commenting progress tracker
└── pending-comments.json     # Links not yet commented (if day limit hit)
```

## Resume Support

If a previous run exists in the workspace:
- **Phase 1-3**: Skip if `results.json` exists and user confirms
- **Phase 4**: Skip if `report.md` exists and user confirms
- **Phase 5-6**: Resume from `comment-progress.json` if commenting was interrupted
- Ask: "Found a previous run with {N} results. Resume commenting or start fresh?"

## Tips for Operators

- Start with a smaller count (100 PRs, 100 issues) to validate before scaling
- Always review the report in `plan` mode before executing actions
- The compact index approach keeps memory usage manageable — don't fetch full
  PR bodies (500 char truncation is intentional)
- For very active repos (>10K PRs), increase batch_size to reduce subagent count
- Token costs: ~20 subagent calls for 1000 PRs at batch_size=50, each with
  ~120KB context. Plan accordingly.
- The `gh` CLI token needs `repo` scope (private) or `public_repo` (public),
  plus `issues:write` for posting comments.


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### scripts/fetch-data.sh

```bash
#!/usr/bin/env bash
# fetch-data.sh — Fetch PR and issue metadata from GitHub API
#
# Usage: ./fetch-data.sh <owner/repo> <workspace_dir> [pr_count] [issue_count] [pr_state] [issue_state]
#
# Fetches PR and issue metadata, builds compact indexes, and extracts
# existing cross-references. All output goes to workspace_dir.

set -euo pipefail

REPO="${1:?Usage: fetch-data.sh <owner/repo> <workspace_dir> [pr_count] [issue_count] [pr_state] [issue_state]}"
WORKSPACE="${2:?Usage: fetch-data.sh <owner/repo> <workspace_dir> [pr_count] [issue_count] [pr_state] [issue_state]}"
PR_COUNT="${3:-1000}"
ISSUE_COUNT="${4:-1000}"
PR_STATE="${5:-all}"
ISSUE_STATE="${6:-open}"

# ─── Input validation (B2: prevent API path traversal) ───────────────────
if ! [[ "$REPO" =~ ^[a-zA-Z0-9._-]+/[a-zA-Z0-9._-]+$ ]]; then
  echo "Error: Invalid repo format. Expected 'owner/repo', got: $REPO" >&2
  exit 1
fi

mkdir -p "$WORKSPACE/batches"

echo "=== Cross-Ref Data Fetch ==="
echo "Repo: $REPO"
echo "Workspace: $WORKSPACE"
echo "PRs: $PR_COUNT ($PR_STATE)"
echo "Issues: $ISSUE_COUNT ($ISSUE_STATE)"
echo ""

# ─── Fetch PRs ───────────────────────────────────────────────────────────
echo "Fetching PRs..."
PR_FILE="$WORKSPACE/prs.json"
echo "[" > "$PR_FILE"
FETCHED=0
PAGE=1
FIRST=true

while [ "$FETCHED" -lt "$PR_COUNT" ]; do
  REMAINING=$((PR_COUNT - FETCHED))
  PER_PAGE=100
  if [ "$REMAINING" -lt "$PER_PAGE" ]; then
    PER_PAGE=$REMAINING
  fi

  # B4: capture stderr instead of suppressing it
  if ! RESULT=$(gh api "repos/$REPO/pulls?state=$PR_STATE&per_page=$PER_PAGE&page=$PAGE&sort=created&direction=desc" \
    --jq '[.[] | {
      number,
      title,
      body: ((.body // "")[0:500]),
      labels: [.labels[].name],
      state,
      draft,
      created_at,
      head_ref: .head.ref,
      author: .user.login
    }]' 2>"$WORKSPACE/fetch-stderr.log"); then
    echo "Error: gh api failed fetching PRs (page $PAGE)." >&2
    cat "$WORKSPACE/fetch-stderr.log" >&2
    exit 1
  fi

  COUNT=$(echo "$RESULT" | jq 'length')
  # IMPORTANT: must break here, otherwise the paste below produces corrupt JSON
  if [ "$COUNT" -eq 0 ]; then
    break
  fi

  # Append items (strip outer brackets, add commas)
  if [ "$FIRST" = true ]; then
    FIRST=false
  else
    echo "," >> "$PR_FILE"
  fi
  echo "$RESULT" | jq -c '.[]' | paste -sd ',' >> "$PR_FILE"

  FETCHED=$((FETCHED + COUNT))
  PAGE=$((PAGE + 1))
  echo "  PRs fetched: $FETCHED / $PR_COUNT (API pages: $((PAGE - 1)))"
done

echo "]" >> "$PR_FILE"

# B1+B3: Use env var to pass path safely; catch specific exceptions
CROSS_REF_TARGET_FILE="$PR_FILE" python3 << 'PYEOF'
import json
import os
import sys

target_file = os.environ["CROSS_REF_TARGET_FILE"]
with open(target_file) as f:
    content = f.read()
try:
    data = json.loads(content)
except json.JSONDecodeError as e:
    print(f"Warning: JSON parse error, attempting repair: {e}", file=sys.stderr)
    content = content.replace('}\n{', '},{').replace('}\r\n{', '},{')
    try:
        data = json.loads(content)
    except json.JSONDecodeError as e2:
        print(f"Error: JSON repair failed: {e2}", file=sys.stderr)
        sys.exit(1)
# Deduplicate by PR number
seen = set()
unique = []
for item in data:
    if item['number'] not in seen:
        seen.add(item['number'])
        unique.append(item)
data = unique
with open(target_file, 'w') as f:
    json.dump(data, f, indent=2)
print(f'PRs saved: {len(data)}')
PYEOF

# ─── Fetch Issues (excluding PRs) ────────────────────────────────────────
echo ""
echo "Fetching issues..."
ISSUE_FILE="$WORKSPACE/issues.json"
echo "[" > "$ISSUE_FILE"
FETCHED=0
PAGE=1
FIRST=true

while [ "$FETCHED" -lt "$ISSUE_COUNT" ]; do
  # C2: always fetch max page size since PRs are filtered client-side
  PER_PAGE=100

  # W4: single API call — fetch raw, then filter client-side to detect page exhaustion
  if ! RAW_RESULT=$(gh api "repos/$REPO/issues?state=$ISSUE_STATE&per_page=$PER_PAGE&page=$PAGE&sort=created&direction=desc" \
    2>"$WORKSPACE/fetch-stderr.log"); then
    echo "Error: gh api failed fetching issues (page $PAGE)." >&2
    cat "$WORKSPACE/fetch-stderr.log" >&2
    exit 1
  fi

  RAW_COUNT=$(echo "$RAW_RESULT" | jq 'length')
  if [ "$RAW_COUNT" -eq 0 ]; then
    break
  fi

  RESULT=$(echo "$RAW_RESULT" | jq '[.[] | select(.pull_request == null) | {
    number,
    title,
    body: ((.body // "")[0:500]),
    labels: [.labels[].name],
    state,
    comments,
    reactions: .reactions.total_count,
    created_at,
    author: .user.login
  }]')
  COUNT=$(echo "$RESULT" | jq 'length')

  if [ "$COUNT" -gt 0 ]; then
    if [ "$FIRST" = true ]; then
      FIRST=false
    else
      echo "," >> "$ISSUE_FILE"
    fi
    echo "$RESULT" | jq -c '.[]' | paste -sd ',' >> "$ISSUE_FILE"
    FETCHED=$((FETCHED + COUNT))
  fi

  PAGE=$((PAGE + 1))
  # C7: show both counts for transparency
  echo "  Issues fetched: $FETCHED / $ISSUE_COUNT (API pages consumed: $((PAGE - 1)))"
done

echo "]" >> "$ISSUE_FILE"

# B1+B3: safe env var + specific exception handling
# Also limit to requested count (C2: we fetch full pages, may overshoot)
CROSS_REF_TARGET_FILE="$ISSUE_FILE" CROSS_REF_MAX_COUNT="$ISSUE_COUNT" python3 << 'PYEOF'
import json
import os
import sys

target_file = os.environ["CROSS_REF_TARGET_FILE"]
max_count = int(os.environ.get("CROSS_REF_MAX_COUNT", "0"))
with open(target_file) as f:
    content = f.read()
try:
    data = json.loads(content)
except json.JSONDecodeError as e:
    print(f"Warning: JSON parse error, attempting repair: {e}", file=sys.stderr)
    content = content.replace('}\n{', '},{').replace('}\r\n{', '},{')
    try:
        data = json.loads(content)
    except json.JSONDecodeError as e2:
        print(f"Error: JSON repair failed: {e2}", file=sys.stderr)
        sys.exit(1)
# Deduplicate by issue number (pagination can return duplicates)
seen = set()
unique = []
for item in data:
    if item['number'] not in seen:
        seen.add(item['number'])
        unique.append(item)
data = unique
# Limit to requested count (C2: full pages may overshoot)
if max_count > 0 and len(data) > max_count:
    data = data[:max_count]
with open(target_file, 'w') as f:
    json.dump(data, f, indent=2)
print(f'Issues saved: {len(data)}')
PYEOF

# ─── Build Compact Indexes ─────────────────────────────────────────────
echo ""
echo "Building compact indexes..."

CROSS_REF_WORKSPACE="$WORKSPACE" python3 << 'PYEOF'
import json
import re
import os

workspace = os.environ["CROSS_REF_WORKSPACE"]

with open(f"{workspace}/prs.json") as f:
    prs = json.load(f)
with open(f"{workspace}/issues.json") as f:
    issues = json.load(f)

# ── Extract existing references ──
ref_pattern = re.compile(r'#(\d+)')
fix_pattern = re.compile(r'(?:fix(?:es)?|close[sd]?|resolve[sd]?)\s+#(\d+)', re.IGNORECASE)

pr_to_issues = {}
issue_to_prs = {}
explicit_fixes = {}

issue_nums = {i["number"] for i in issues}

for pr in prs:
    num = pr["number"]
    body = pr.get("body", "") or ""
    title = pr.get("title", "") or ""
    text = f"{title} {body}"

    # Explicit fixes
    fixes = [int(m) for m in fix_pattern.findall(text)]
    if fixes:
        explicit_fixes[str(num)] = fixes
        pr_to_issues.setdefault(str(num), []).extend(fixes)
        for issue_num in fixes:
            issue_to_prs.setdefault(str(issue_num), []).append(num)

    # General references
    refs = [int(m) for m in ref_pattern.findall(text) if int(m) != num]
    issue_refs = [r for r in refs if r in issue_nums and r not in fixes]
    if issue_refs:
        pr_to_issues.setdefault(str(num), []).extend(issue_refs)
        for ir in issue_refs:
            issue_to_prs.setdefault(str(ir), []).append(num)

existing_refs = {
    "pr_to_issues": pr_to_issues,
    "issue_to_prs": issue_to_prs,
    "explicit_fixes": explicit_fixes
}

with open(f"{workspace}/existing-refs.json", "w") as f:
    json.dump(existing_refs, f, indent=2)

# ── Build compact issue index ──
with open(f"{workspace}/issue-index.txt", "w") as f:
    for issue in issues:
        labels = ",".join(issue.get("labels", []))
        label_str = f" [{labels}]" if labels else ""
        title = issue["title"].replace('"', "'")[:80]
        refs = issue_to_prs.get(str(issue["number"]), [])
        ref_str = ",".join(f"PR#{r}" for r in refs) if refs else "none"
        author = issue.get("author", "unknown")
        reactions = issue.get("reactions", 0) or 0
        comments = issue.get("comments", 0) or 0
        engagement = f" 💬{comments}👍{reactions}" if (comments or reactions) else ""
        f.write(f'#{issue["number"]} @{author}{label_str}{engagement} "{title}" linked:{ref_str}\n')

# ── Build compact PR index ──
with open(f"{workspace}/pr-index.txt", "w") as f:
    for pr in prs:
        labels = ",".join(pr.get("labels", []))
        label_str = f" [{labels}]" if labels else ""
        title = pr["title"].replace('"', "'")[:80]
        state = pr.get("state", "unknown").upper()
        draft = " DRAFT" if pr.get("draft") else ""
        fixes = explicit_fixes.get(str(pr["number"]), [])
        fix_str = ",".join(f"#{n}" for n in fixes) if fixes else "none"
        author = pr.get("author", "unknown")
        f.write(f'PR#{pr["number"]} @{author}{label_str} {state}{draft} "{title}" fixes:{fix_str}\n')

print(f"Indexes built: {len(issues)} issues, {len(prs)} PRs")
print(f"Existing refs: {len(explicit_fixes)} PRs with explicit fixes, "
      f"{len(issue_to_prs)} issues with PR references")

PYEOF

echo ""
echo "=== Fetch complete ==="
echo "Files:"
ls -lh "$WORKSPACE"/*.json "$WORKSPACE"/*.txt 2>/dev/null

```
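For a standalone view of the reference classification the index builder performs, here is a minimal sketch. The text and numbers are illustrative; the closing-keyword pattern covers GitHub's full keyword list (fix/fixes/fixed, close/closes/closed, resolve/resolves/resolved):

```python
import re

# Closing keywords create explicit "fixes" links; any other #N is a plain mention.
fix_pattern = re.compile(r'(?:fix(?:e[sd])?|close[sd]?|resolve[sd]?)\s+#(\d+)', re.IGNORECASE)
ref_pattern = re.compile(r'#(\d+)')

text = "Fixed #12 by retrying the handshake. Related to #34."
fixes = [int(m) for m in fix_pattern.findall(text)]
refs = [int(m) for m in ref_pattern.findall(text) if int(m) not in fixes]
print(fixes, refs)  # [12] [34]
```

A fix match is excluded from the general references, so each number lands in exactly one bucket.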

### references/principles.md

```markdown
# Cross-Ref Principles

Hard constraints for every analysis run. If a step conflicts with these, the
principle wins.

## Evidence over narrative

- Never classify duplicates from title similarity alone. Inspect body content,
  error messages, and failure paths.
- A shared keyword doesn't make two items related. Two items are related when
  they share a root cause or fix the same broken behavior.

## Confidence requires specifics

- **High**: Shared error path, same component, overlapping fix. You can explain
  exactly what makes these the same problem.
- **Medium**: Same area of the codebase, similar symptoms, but different
  reproduction or approach. Plausibly related.
- **Low**: Keyword overlap or label match only. Might be coincidence.
- If you can't articulate the shared root cause in one sentence, it's not
  "high" confidence.

## Credit preservation

- Always mention the PR author when commenting about a link.
- Never frame a duplicate as wasted work. Both contributions matter.
- Use language like "related" and "addresses the same area", not "copied"
  or "redundant".

## Conservative defaults

- When uncertain, classify as "related" not "duplicate".
- Better to miss a link than to create a false one.
- An unposted comment costs nothing. A wrong comment erodes trust.

## Reversibility

- Comments can't be undone. Every comment should include a path to
  correct the link if it's wrong ("if this doesn't look right, let me know").

```

### scripts/post-comments.sh

```bash
#!/usr/bin/env bash
# post-comments.sh — Post cross-reference comments with jitter-based rate limiting
#
# Usage: ./post-comments.sh <owner/repo> <workspace_dir> [jitter_min] [jitter_max] [daily_max]
#
# Reads approved-comments.json from workspace and posts them with organic-looking
# rate limiting: random jitter between comments, breathing pauses after sustained
# activity, and exponential backoff on rate-limit errors.
#
# Saves progress to comment-progress.json for resume support.
#
# NOTE on resume safety (C4): If the script crashes between posting a comment
# and saving progress, the same comment may be re-posted on resume. GitHub does
# not deduplicate comments. Progress is saved immediately after each successful
# post to minimize this window.

set -euo pipefail

REPO="${1:?Usage: post-comments.sh <owner/repo> <workspace_dir> [jitter_min] [jitter_max] [daily_max]}"
WORKSPACE="${2:?Usage: post-comments.sh <owner/repo> <workspace_dir> [jitter_min] [jitter_max] [daily_max]}"
JITTER_MIN="${3:-75}"
JITTER_MAX="${4:-135}"
DAILY_MAX="${5:-60}"

# Input validation
if ! [[ "$REPO" =~ ^[a-zA-Z0-9._-]+/[a-zA-Z0-9._-]+$ ]]; then
  echo "Error: Invalid repo format. Expected 'owner/repo', got: $REPO" >&2
  exit 1
fi
if [ "$JITTER_MIN" -gt "$JITTER_MAX" ]; then
  echo "Error: jitter_min ($JITTER_MIN) must be <= jitter_max ($JITTER_MAX)" >&2
  exit 1
fi

PROGRESS_FILE="$WORKSPACE/comment-progress.json"
COMMENTS_FILE="$WORKSPACE/approved-comments.json"

if [ ! -f "$COMMENTS_FILE" ]; then
  echo "Error: $COMMENTS_FILE not found. Run the analysis first." >&2
  exit 1
fi

TOTAL=$(jq 'length' "$COMMENTS_FILE")
echo "=== Cross-Ref Comment Poster ==="
echo "Repo: $REPO"
echo "Total comments: $TOTAL"
echo "Rate: 1 comment per ${JITTER_MIN}-${JITTER_MAX}s (jitter)"
echo "Daily max: $DAILY_MAX"
echo ""

# ─── Resume support ──────────────────────────────────────────────────────
START_INDEX=0
DAY_COUNT=0
TODAY=$(date -u +%Y-%m-%d)

if [ -f "$PROGRESS_FILE" ]; then
  PREV_DAY=$(jq -r '.day_start_utc // ""' "$PROGRESS_FILE")
  if [ "$PREV_DAY" = "$TODAY" ]; then
    DAY_COUNT=$(jq '.day_count // 0' "$PROGRESS_FILE")
    echo "Resuming: $DAY_COUNT comments already posted today"
  else
    echo "New day — resetting daily counter"
    DAY_COUNT=0
  fi
  START_INDEX=$(jq '.completed // 0' "$PROGRESS_FILE")
  echo "Starting from index: $START_INDEX"

  # Grace period on resume after rate limit
  if jq -e '.error' "$PROGRESS_FILE" > /dev/null 2>&1; then
    echo "Previous run was rate-limited. Starting with 5-minute grace period..."
    sleep 300
  fi
fi

if [ "$DAY_COUNT" -ge "$DAILY_MAX" ]; then
  echo "Daily limit reached ($DAILY_MAX). Try again tomorrow."
  echo "Remaining: $((TOTAL - START_INDEX)) comments"
  exit 0
fi

# ─── Helper: save progress ───────────────────────────────────────────────
save_progress() {
  local completed="$1"
  local remaining="$2"
  local error="${3:-}"

  local args=(
    --argjson total "$TOTAL"
    --argjson completed "$completed"
    --argjson remaining "$remaining"
    --argjson day_count "$DAY_COUNT"
    --arg day_start_utc "$TODAY"
    --arg last_commented_at "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  )
  local tmp="${PROGRESS_FILE}.tmp"
  if [ -n "$error" ]; then
    args+=(--arg error "$error")
    jq -n "${args[@]}" '{total_planned: $total, completed: $completed, remaining: $remaining,
      day_count: $day_count, day_start_utc: $day_start_utc,
      last_commented_at: $last_commented_at, error: $error}' > "$tmp"
  else
    jq -n "${args[@]}" '{total_planned: $total, completed: $completed, remaining: $remaining,
      day_count: $day_count, day_start_utc: $day_start_utc,
      last_commented_at: $last_commented_at}' > "$tmp"
  fi
  mv "$tmp" "$PROGRESS_FILE"
}

# ─── Helper: jitter sleep ────────────────────────────────────────────────
jitter_sleep() {
  local min_secs="${1:-$JITTER_MIN}"
  local max_secs="${2:-$JITTER_MAX}"
  local range=$((max_secs - min_secs + 1))
  local sleep_time=$((RANDOM % range + min_secs))
  echo "  ⏳ Waiting ${sleep_time}s (jitter: ${min_secs}-${max_secs}s)..."
  sleep "$sleep_time"
}

# ─── Helper: exponential backoff ─────────────────────────────────────────
backoff_sleep() {
  local attempt="$1"
  # Progression: attempt 2→2min, 3→4min, 4→8min, 5→16min (capped at 30min)
  local base=120   # 2 minutes
  local cap=1800   # 30 minutes
  local wait=$((base * (2 ** (attempt - 2))))
  if [ "$wait" -gt "$cap" ]; then wait=$cap; fi
  echo "  ⏳ Backoff: ${wait}s (~$((wait / 60))min) before retry $attempt of 5..."
  sleep "$wait"
}

# ─── Helper: breathing pause ─────────────────────────────────────────────
breathing_pause() {
  local pause_secs=$((RANDOM % 421 + 480))  # 480-900s (8-15 min)
  echo ""
  echo "  🫁 Breathing pause: ${pause_secs}s (~$((pause_secs / 60)) min) after $SESSION_COUNT comments this session"
  sleep "$pause_secs"
  echo "  🫁 Resuming..."
  echo ""
}

# ─── Post comments ───────────────────────────────────────────────────────
SESSION_COUNT=0
SKIP_COUNT=0
NEXT_BREATHING=$((RANDOM % 21 + 30))  # 30-50 comments before first pause

for ((i=START_INDEX; i<TOTAL; i++)); do
  if [ "$DAY_COUNT" -ge "$DAILY_MAX" ]; then
    echo ""
    echo "Daily limit reached ($DAILY_MAX comments)."
    echo "Remaining: $((TOTAL - i)) comments"
    echo "Resume tomorrow to continue."
    save_progress "$i" "$((TOTAL - i))"
    exit 0
  fi

  # Extract comment details
  ISSUE_NUM=$(jq -r ".[$i].target_number" "$COMMENTS_FILE")
  BODY=$(jq -r ".[$i].body" "$COMMENTS_FILE")
  TYPE=$(jq -r ".[$i].type" "$COMMENTS_FILE")

  # C3: validate fields before posting
  if [ -z "$ISSUE_NUM" ] || [ "$ISSUE_NUM" = "null" ]; then
    echo "  Skipping index $i: missing target_number"
    SKIP_COUNT=$((SKIP_COUNT + 1))
    continue
  fi
  if [ -z "$BODY" ] || [ "$BODY" = "null" ]; then
    echo "  Skipping index $i: empty body"
    SKIP_COUNT=$((SKIP_COUNT + 1))
    continue
  fi
  if [ "${#BODY}" -gt 65536 ]; then
    echo "  Skipping index $i: body exceeds GitHub 65536 char limit (${#BODY} chars)"
    SKIP_COUNT=$((SKIP_COUNT + 1))
    continue
  fi

  echo "[$((i + 1))/$TOTAL] Commenting on #$ISSUE_NUM ($TYPE)..."

  # Post the comment with exponential backoff on rate-limit failure
  POST_SUCCESS=false
  for attempt in 1 2 3 4 5; do
    # Backoff before retry attempts (not before the first try)
    if [ "$attempt" -gt 1 ]; then
      backoff_sleep "$attempt"
    fi

    if gh api "repos/$REPO/issues/$ISSUE_NUM/comments" -f body="$BODY" > /dev/null 2>"$WORKSPACE/post-stderr.log"; then
      if [ "$attempt" -eq 1 ]; then
        echo "  ✓ Posted"
      else
        echo "  ✓ Posted (attempt $attempt)"
      fi
      POST_SUCCESS=true
      break
    else
      STDERR_CONTENT=$(cat "$WORKSPACE/post-stderr.log")
      echo "  ✗ Failed (attempt $attempt of 5)" >&2
      echo "  $STDERR_CONTENT" >&2
      # Only retry on rate-limit errors; skip permanently-failing comments
      if ! echo "$STDERR_CONTENT" | grep -qiE "rate.limit|secondary|abuse|429|403"; then
        echo "  ⤳ Non-retriable error, skipping #$ISSUE_NUM"
        SKIP_COUNT=$((SKIP_COUNT + 1))
        continue 2  # skip straight to the next comment in the outer loop
      fi
    fi
  done

  if [ "$POST_SUCCESS" = false ]; then
    # Reaching here means all 5 attempts hit rate limits
    echo "  ✗ All 5 attempts failed. Saving progress and stopping."
    save_progress "$i" "$((TOTAL - i))" "Rate limited on #$ISSUE_NUM after 5 attempts"
    exit 1
  fi

  DAY_COUNT=$((DAY_COUNT + 1))
  SESSION_COUNT=$((SESSION_COUNT + 1))

  # Save progress immediately after each successful post
  save_progress "$((i + 1))" "$((TOTAL - i - 1))"

  # Breathing pause after sustained commenting
  if [ "$SESSION_COUNT" -ge "$NEXT_BREATHING" ]; then
    if [ "$((i + 1))" -lt "$TOTAL" ] && [ "$DAY_COUNT" -lt "$DAILY_MAX" ]; then
      breathing_pause
      NEXT_BREATHING=$((SESSION_COUNT + RANDOM % 21 + 30))  # re-randomize
    fi
  fi

  # Jitter sleep between comments (skip after last comment)
  if [ "$((i + 1))" -lt "$TOTAL" ] && [ "$DAY_COUNT" -lt "$DAILY_MAX" ]; then
    jitter_sleep
  fi
done

echo ""
echo "=== Done ==="
echo "Total entries: $TOTAL"
echo "Posted: $SESSION_COUNT"
if [ "$SKIP_COUNT" -gt 0 ]; then
  echo "Skipped: $SKIP_COUNT (invalid or non-retriable)"
fi

```

### references/commenting-strategy.md

```markdown
# Commenting Strategy & Rate Limits

## GitHub API Rate Limits

GitHub enforces several rate limits relevant to commenting:

### REST API Limits
- **Authenticated requests**: 5,000 per hour
- **Comment creation**: Subject to secondary rate limits (abuse detection)
- **Secondary rate limit**: GitHub documents a cap of roughly 80 content-generating
  requests per minute, but throttling or 403s are often observed earlier

### Practical Observed Limits
- GitHub's abuse detection triggers on fixed-interval patterns
- The 403 "secondary rate limit" response does not always include a Retry-After header
- **Never use fixed intervals** — jitter-based timing is mandatory

## Anti-Abuse Approach

The posting script (`post-comments.sh`) uses three mechanisms to avoid abuse
detection while maintaining throughput:

### Jitter-Based Intervals
Every comment is followed by a random pause (default 75-135s). The base target
is ~90s between comments, but ±45s of jitter makes the pattern look organic
rather than automated. Fixed-rate patterns (e.g., exactly 120s between comments)
are exactly what GitHub's abuse detection targets.

### Breathing Pauses
After every 30-50 comments (randomized), the script takes a longer pause of
8-15 minutes. This simulates a human taking a break. Without this, sustained
commenting for >1 hour often triggers abuse detection even within rate limits.
The interval is re-randomized after each pause.

### Exponential Backoff
On API failure (403/429), the script retries with exponential backoff:
2min → 4min → 8min → 16min (capped at 30min). After 5 failed attempts, it saves
progress and exits. On resume after a rate-limit exit, there is a 5-minute
grace period before the first comment.
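The progression can be rendered numerically; this is a minimal Python sketch of the script's formula, not part of the skill itself:

```python
# Mirrors backoff_sleep in post-comments.sh: 120s base, doubling per retry, 30min cap.
BASE = 120   # 2 minutes
CAP = 1800   # 30 minutes

def backoff(attempt):
    """Seconds to wait before retry `attempt` (the script calls this for attempts 2-5)."""
    return min(BASE * 2 ** (attempt - 2), CAP)

print([backoff(a) for a in range(2, 6)])  # [120, 240, 480, 960]
```

Within 5 attempts the cap is never reached; a hypothetical attempt 6 (480s doubled twice) would be the first to hit 1800s.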

## Commenting Tiers

### Tier 1: Conservative (Recommended)
```
Rate:     1 comment per 75-135s (jitter, ~40/hour)
Daily:    60 comments max
Duration: ~90 min for 60 comments
Trigger:  High confidence only
```

Best for first-time runs and repos you don't own. Stays well below abuse
detection. If you have 300 high-confidence links, this takes 5 days.

### Tier 2: Moderate
```
Rate:     1 comment per 75-135s (jitter, ~40/hour)
Daily:    60 comments max
Duration: ~90 min for 60 comments
Trigger:  High + medium confidence
```

Same rate but includes more links. For repos where you're confident in
the analysis quality.

### Tier 3: Dry Run
```
Rate:     —
Daily:    —
Comments: 0
Duration: Instant
```

Generates the approved-comments.json but posts nothing. Always the default
mode — user must explicitly opt into posting.

## Daily Budget Calculation

```
daily_budget = 60          # configurable via arg 5
avg_jitter   = 90          # midpoint of 75-135s default
comments_per_hour ≈ 40     # 3600 / 90

# For N total comments:
days_needed = ceil(N / daily_budget)

# Time per day (approximate, not counting breathing pauses):
time_for_daily_budget ≈ daily_budget * avg_jitter / 60  # ~90 minutes
# With breathing pauses (~10 min every ~40 comments):
time_with_pauses ≈ 90 + 10 = ~100 minutes for 60 comments
```
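The budget math above can be run directly; a minimal sketch (breathing pauses ignored, as noted):

```python
import math

daily_budget = 60   # configurable via arg 5
avg_jitter = 90     # seconds, midpoint of the 75-135s default

def plan(total_comments):
    """Return (days_needed, minutes_for_the_busiest_day)."""
    days = math.ceil(total_comments / daily_budget)
    minutes = min(total_comments, daily_budget) * avg_jitter / 60
    return days, minutes

print(plan(300))  # (5, 90.0) — 300 links take 5 days at ~90 min/day
```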

## Scheduling Across Multiple Days

When total comments exceed the daily budget:

```
Day 1: Comments 1-60    (links #1 through #60)
Day 2: Comments 61-120  (links #61 through #120)
Day 3: Comments 121-180 (links #121 through #180)
...
```

The skill saves progress to `comment-progress.json` after each comment.
On resume, it reads the progress file and continues from where it stopped.

**Important**: The daily counter resets based on UTC midnight. The skill
tracks `day_start_utc` in the progress file and resets `day_count` when
a new UTC day begins.
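The resume logic can be sketched in a few lines of Python (the real script does this in bash/jq; field names match `comment-progress.json` as described above):

```python
import datetime

def resume_state(progress):
    """Return (start_index, day_count) from a parsed comment-progress.json dict."""
    today = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d")
    start_index = progress.get("completed", 0)
    # The daily counter only carries over within the same UTC day
    day_count = progress.get("day_count", 0) if progress.get("day_start_utc") == today else 0
    return start_index, day_count

print(resume_state({"completed": 40, "day_start_utc": "2000-01-01", "day_count": 12}))
# → (40, 0): a new UTC day resets the daily counter but keeps the index
```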

## Comment Format

Comment templates are defined in SKILL.md Phase 6. This file only covers
rate limiting strategy. See SKILL.md for the canonical comment format,
opener rotation, and tone guidelines.

## User Presentation

When presenting the strategy to the user, show a table:

```
┌─────────────┬───────────────────┬──────────┬───────────┬──────────────┐
│ Strategy    │ Rate              │ Daily    │ Comments  │ Est. Time    │
│             │                   │ Max      │ to Post   │              │
├─────────────┼───────────────────┼──────────┼───────────┼──────────────┤
│ Conservative│ 1/75-135s (jitter)│ 60       │ {high}    │ {calc}       │
│ Moderate    │ 1/75-135s (jitter)│ 60       │ {h+m}     │ {calc}       │
│ Dry Run     │ —                 │ —        │ 0         │ Instant      │
│ Manual Pick │ 1/75-135s (jitter)│ 60       │ {custom}  │ {calc}       │
└─────────────┴───────────────────┴──────────┴───────────┴──────────────┘
```

Then let the user choose. If total > daily max, show the multi-day plan.

```



---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### README.md

```markdown
# cross-ref

A Claude Code skill that finds hidden connections between GitHub PRs and issues.

## The problem

Active repos accumulate blind spots. A developer opens a PR to fix a timeout bug — not knowing someone else opened a nearly identical PR two weeks ago. An issue sits open for months while a merged PR quietly solved it, just without the `fixes #N` tag. Maintainers can't manually cross-reference hundreds of PRs and issues. The connections are there, buried in titles, bodies, error messages, and branch names. They just need someone to look.

## What cross-ref does

It reads your repo's PRs and issues, spawns parallel Sonnet subagents to analyze them semantically, and surfaces two things:

1. **Duplicate PRs** — PRs solving the same problem, even with different code or wording
2. **Missing issue links** — Open issues that already have a PR addressing them, just never explicitly linked

Results come back as **thematic clusters** scored by actionability, not a flat list of pairs. A cluster might group `Issue #400 + PR #412 + PR #419` under the theme "auth token rotation fails silently" — showing you the full picture, not isolated matches.

## How it works

```
fetch → analyze in parallel → cluster → verify → report → act
```

**Phase 1**: Shell scripts fetch PR/issue metadata from the GitHub API and build compact indexes.

**Phase 2**: PRs are split into batches. Each batch goes to a parallel Sonnet subagent alongside the full issue index and PR index. Subagents look for evidence-based connections — shared error messages, same component, overlapping fixes. Title similarity alone doesn't count.

**Phase 3**: Results are merged, deduplicated, clustered by shared items (union-find), and scored 0-10 on actionability (open state, confidence, community interest, draft status, clarity of canonical PR).

**Phase 3b**: A verification agent fetches deeper data (`gh pr diff`, full issue bodies, comment threads) to confirm or reject each candidate. Discovery optimizes for recall; verification optimizes for precision.

**Phase 4**: A structured report groups findings by cluster, sorted by score. Each cluster shows its theme, the items involved, evidence, and suggested actions.

**Phase 5-6**: If you choose to act, the skill builds a comment queue and posts with organic-looking rate limiting — jittered intervals, breathing pauses, exponential backoff. Everything is designed to not trigger GitHub's abuse detection.
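The union-find clustering in Phase 3 can be sketched as follows; the function and item names are illustrative, not the skill's actual code:

```python
def cluster(pairs):
    """Group findings that share an item into connected clusters (union-find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in pairs:
        union(a, b)

    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())

found = cluster([("PR#412", "Issue#400"), ("PR#419", "Issue#400")])
print([sorted(c) for c in found])  # [['Issue#400', 'PR#412', 'PR#419']]
```

Two findings that share any item end up in one cluster, which is how `Issue #400 + PR #412 + PR #419` become a single theme.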

## Design principles

The skill follows five hard rules (documented in `references/principles.md`):

- **Evidence over narrative** — Never classify duplicates from title similarity alone
- **Confidence requires specifics** — If you can't explain the shared root cause in one sentence, it's not "high"
- **Credit preservation** — Never frame a duplicate as wasted work
- **Conservative defaults** — Better to miss a link than create a false one
- **Reversibility** — Every comment includes a correction path

## Rate limiting

The posting script avoids fixed-interval patterns (which is what GitHub's abuse detection targets). Instead:

- **Jitter**: Random 75-135s between comments (~40/hour)
- **Breathing pauses**: 8-15 min break every 30-50 comments
- **Exponential backoff**: 2→4→8→16min on failures, with rate-limit vs permanent error differentiation
- **Daily budget**: 60 comments/day cap with UTC-midnight reset and resume support

## Installation

```bash
clawhub install cross-ref
```

Or clone this repo and symlink to your Claude Code skills directory:

```bash
git clone https://github.com/Glucksberg/cross-ref.git
ln -s "$(pwd)/cross-ref" ~/.claude/skills/cross-ref
```

## Requirements

- Claude Code with Sonnet model access (for subagents)
- `gh` CLI authenticated with `repo` scope
- `jq` for JSON processing

## Usage

### Quick start

In Claude Code, just ask in natural language:

```
Run cross-ref on openclaw/openclaw. Use 1400 PRs (open only) and 1400 issues.
```

Or more casually:

```
Find duplicate PRs and missing issue links in owner/repo
```

The skill activates automatically when Claude detects the intent.

### What happens step by step

**Phase 1 — Data fetch** (~2-3 min for 1000 PRs). The skill runs `fetch-data.sh` to pull PR/issue metadata from the GitHub API and build compact indexes.

**Phase 2 — Parallel analysis** (~3-5 min). Spawns N parallel Sonnet subagents (1000 PRs / 50 per batch = 20 subagents). Each one analyzes its batch of PRs against the full issue and PR indexes, looking for evidence-based connections.
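The batch split is plain arithmetic; a hedged sketch (the list and names are stand-ins for the fetched PR data):

```python
def batches(pr_numbers, batch_size=50):
    """Split PRs into fixed-size batches, one per Sonnet subagent."""
    return [pr_numbers[i:i + batch_size] for i in range(0, len(pr_numbers), batch_size)]

print(len(batches(list(range(1, 1001)))))  # 20 subagents for 1000 PRs
```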

**Phase 3 — Cluster & score**. Merges all subagent results, deduplicates, groups connected findings into thematic clusters, and scores each cluster 0-10 on actionability.

**Phase 3b — Verification**. A single verification agent fetches deeper data (PR diffs, full issue bodies, comment threads) to confirm or reject each candidate. False positives get caught here.

**Phase 4 — Report**. You get a structured report organized by clusters, sorted by score. Each cluster shows its theme, items involved, evidence, and suggested actions.

### You decide what to do

The skill **always starts in `plan` mode** (dry-run). Nothing is posted until you explicitly say so. After reviewing the report, you're presented with a strategy table:

```
┌─────────────┬───────────────────┬──────────┬───────────┬──────────────┐
│ Strategy    │ Rate              │ Daily    │ Comments  │ Est. Time    │
│             │                   │ Max      │ to Post   │              │
├─────────────┼───────────────────┼──────────┼───────────┼──────────────┤
│ Conservative│ 1/75-135s (jitter)│ 60       │ {high}    │ ~90 min      │
│ Moderate    │ 1/75-135s (jitter)│ 60       │ {h+m}     │ ~90 min      │
│ Dry Run     │ —                 │ —        │ 0         │ Instant      │
│ Manual Pick │ 1/75-135s (jitter)│ 60       │ {custom}  │ depends      │
└─────────────┴───────────────────┴──────────┴───────────┴──────────────┘
```

For each cluster you choose:
- **Comment only** — link the items with a helpful comment
- **Comment + label** — link and add labels (e.g., `duplicate`)
- **Comment + close** — link and close duplicate PRs (high confidence only)
- **Skip** — ignore this cluster
- **Manual** — handle it yourself

### Resume support

The script saves progress to `comment-progress.json` after every comment. If it stops mid-run (daily limit, rate limit, Ctrl+C), it resumes from exactly where it left off next time. The daily counter resets at UTC midnight.

### Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `pr_count` | 1000 | How many recent PRs to fetch |
| `issue_count` | 1000 | How many recent issues to fetch |
| `pr_state` | `all` | PR state filter: `open`, `closed`, `all` |
| `issue_state` | `open` | Issue state filter: `open`, `closed`, `all` |
| `batch_size` | 50 | PRs per subagent batch |
| `confidence_threshold` | `medium` | Minimum to include in report: `low`, `medium`, `high` |
| `mode` | `plan` | `plan` = report only (always start here). `execute` = act on findings. |

### Token cost estimate

For a rough idea of what a run costs (Sonnet subagents + Opus orchestrator):

| Scope | Subagents | Sonnet input | Sonnet output | Time |
|-------|-----------|-------------|---------------|------|
| 1000 PRs + 1000 issues | 20 | ~600K tokens | ~20K tokens | ~5-8 min |
| 1400 PRs + 1400 issues | 28 | ~840K tokens | ~28K tokens | ~7-10 min |
| 4000 PRs + 4000 issues | 80 | ~2.4M tokens | ~80K tokens | ~15-20 min |

Start small (100/100) to validate results before scaling up.

## File structure

```
cross-ref/
├── SKILL.md                     # Main skill definition (phases, prompts, templates)
├── ROADMAP.md                   # Completed features + improvement backlog
├── README.md                    # This file
├── references/
│   ├── principles.md            # Hard decision rules for subagents
│   └── commenting-strategy.md   # Rate-limit tiers and daily budget math
└── scripts/
    ├── fetch-data.sh            # GitHub API data fetcher with pagination
    ├── post-comments.sh         # Rate-limited comment poster with jitter/backoff
    └── test-helpers.sh          # Smoke tests for helper functions
```

## License

MIT

```

### _meta.json

```json
{
  "owner": "glucksberg",
  "slug": "cross-ref",
  "displayName": "cross-ref",
  "latest": {
    "version": "1.1.0",
    "publishedAt": 1771692458498,
    "commit": "https://github.com/openclaw/skills/commit/1fa9826ee9160bda8d0298963d5f3563913d76b2"
  },
  "history": []
}

```

### scripts/test-helpers.sh

```bash
#!/usr/bin/env bash
# test-helpers.sh — Smoke tests for post-comments.sh helper functions
#
# Validates jitter ranges, backoff progression, breathing pause ranges,
# and input validation. No external dependencies, no mocks.
#
# Usage: ./test-helpers.sh

set -euo pipefail

PASS=0
FAIL=0

assert_eq() {
  local label="$1" expected="$2" actual="$3"
  if [ "$expected" = "$actual" ]; then
    echo "  ✓ $label"
    PASS=$((PASS + 1))
  else
    echo "  ✗ $label: expected '$expected', got '$actual'"
    FAIL=$((FAIL + 1))
  fi
}

assert_range() {
  local label="$1" min="$2" max="$3" actual="$4"
  if [ "$actual" -ge "$min" ] && [ "$actual" -le "$max" ]; then
    echo "  ✓ $label ($actual in $min-$max)"
    PASS=$((PASS + 1))
  else
    echo "  ✗ $label: $actual not in range $min-$max"
    FAIL=$((FAIL + 1))
  fi
}

# ─── Source helpers by extracting them from post-comments.sh ─────────────
# We override sleep to be a no-op and set required globals
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
sleep() { :; }  # no-op

# Globals needed by the helpers
JITTER_MIN=75
JITTER_MAX=135
SESSION_COUNT=42

# Extract and eval just the helper functions
eval "$(sed -n '/^jitter_sleep()/,/^}/p' "$SCRIPT_DIR/post-comments.sh")"
eval "$(sed -n '/^backoff_sleep()/,/^}/p' "$SCRIPT_DIR/post-comments.sh")"
eval "$(sed -n '/^breathing_pause()/,/^}/p' "$SCRIPT_DIR/post-comments.sh")"

echo "=== Jitter Range ==="
for i in $(seq 1 20); do
  range=$((JITTER_MAX - JITTER_MIN + 1))
  val=$((RANDOM % range + JITTER_MIN))
  assert_range "sample $i" "$JITTER_MIN" "$JITTER_MAX" "$val"
done

echo ""
echo "=== Backoff Progression ==="
# backoff_sleep is called with attempt 2-5 (attempt 1 has no backoff)
for attempt in 2 3 4 5; do
  base=120
  cap=1800
  wait=$((base * (2 ** (attempt - 2))))
  if [ "$wait" -gt "$cap" ]; then wait=$cap; fi

  case $attempt in
    2) expected=120 ;;
    3) expected=240 ;;
    4) expected=480 ;;
    5) expected=960 ;;
  esac
  assert_eq "attempt $attempt → ${expected}s" "$expected" "$wait"
done

# Verify cap works at hypothetical attempt 8
wait=$((120 * (2 ** (8 - 2))))
if [ "$wait" -gt 1800 ]; then wait=1800; fi
assert_eq "attempt 8 capped at 1800s" "1800" "$wait"

echo ""
echo "=== Backoff Total Budget ==="
total=$((120 + 240 + 480 + 960))
assert_eq "total backoff (attempts 2-5) = 1800s (30min)" "1800" "$total"

echo ""
echo "=== Breathing Pause Range ==="
for i in $(seq 1 10); do
  pause=$((RANDOM % 421 + 480))
  assert_range "pause $i" 480 900 "$pause"
done

echo ""
echo "=== Breathing Interval Range ==="
for i in $(seq 1 10); do
  interval=$((RANDOM % 21 + 30))
  assert_range "interval $i" 30 50 "$interval"
done

echo ""
echo "=== Input Validation ==="

# jitter_min > jitter_max should fail post-comments.sh input validation (exit 1)
if bash "$SCRIPT_DIR/post-comments.sh" owner/repo /tmp 200 100 > /dev/null 2>&1; then
  assert_eq "jitter_min > jitter_max rejected" "rejected" "accepted"
else
  assert_eq "jitter_min > jitter_max rejected" "rejected" "rejected"
fi

# Valid repo format
if [[ "owner/repo" =~ ^[a-zA-Z0-9._-]+/[a-zA-Z0-9._-]+$ ]]; then
  assert_eq "accept 'owner/repo'" "accepted" "accepted"
else
  assert_eq "accept 'owner/repo'" "accepted" "rejected"
fi

# Invalid repo format (path traversal)
if [[ "../etc/passwd" =~ ^[a-zA-Z0-9._-]+/[a-zA-Z0-9._-]+$ ]]; then
  assert_eq "reject '../etc/passwd'" "rejected" "accepted"
else
  assert_eq "reject '../etc/passwd'" "rejected" "rejected"
fi

if [[ "owner/repo/extra" =~ ^[a-zA-Z0-9._-]+/[a-zA-Z0-9._-]+$ ]]; then
  assert_eq "reject 'owner/repo/extra'" "rejected" "accepted"
else
  assert_eq "reject 'owner/repo/extra'" "rejected" "rejected"
fi

echo ""
echo "=== Syntax Check ==="
if bash -n "$SCRIPT_DIR/post-comments.sh" 2>/dev/null; then
  assert_eq "post-comments.sh syntax" "ok" "ok"
else
  assert_eq "post-comments.sh syntax" "ok" "fail"
fi
if bash -n "$SCRIPT_DIR/fetch-data.sh" 2>/dev/null; then
  assert_eq "fetch-data.sh syntax" "ok" "ok"
else
  assert_eq "fetch-data.sh syntax" "ok" "fail"
fi

echo ""
echo "==============================="
echo "Passed: $PASS"
echo "Failed: $FAIL"
echo "==============================="

if [ "$FAIL" -gt 0 ]; then
  exit 1
fi

```

cross-ref | SkillHub