
reprompter

Transform messy prompts into well-structured, effective prompts — single or multi-agent. Use when: "reprompt", "reprompt this", "clean up this prompt", "structure my prompt", rough text needing XML tags and best practices, "reprompter teams", "repromptception", "run with quality", "smart run", "smart agents", multi-agent tasks, audits, parallel work, anything going to agent teams. Don't use when: simple Q&A, pure chat, immediate execution-only tasks. See "Don't Use When" section for details. Outputs: Structured XML/Markdown prompt, quality score (before/after), optional team brief + per-agent sub-prompts, agent team output files. Success criteria: Single mode quality score ≥ 7/10; Repromptception per-agent prompt quality score 8+/10; all required sections present, actionable and specific.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars: 3,131
Hot score: 99
Updated: March 20, 2026
Overall rating: C (0.0)
Composite score: 0.0
Best-practice grade: B (75.6)

Install command

npx @skill-hub/cli install openclaw-skills-reprompter

Repository

openclaw/skills

Skill path: skills/aytuncyildizli/reprompter

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This remains a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install reprompter into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding reprompter to shared team environments
  • Use reprompter for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: reprompter
description: |
  Transform messy prompts into well-structured, effective prompts — single or multi-agent.
  Use when: "reprompt", "reprompt this", "clean up this prompt", "structure my prompt", rough text needing XML tags and best practices, "reprompter teams", "repromptception", "run with quality", "smart run", "smart agents", multi-agent tasks, audits, parallel work, anything going to agent teams.
  Don't use when: simple Q&A, pure chat, immediate execution-only tasks. See "Don't Use When" section for details.
  Outputs: Structured XML/Markdown prompt, quality score (before/after), optional team brief + per-agent sub-prompts, agent team output files.
  Success criteria: Single mode quality score ≥ 7/10; Repromptception per-agent prompt quality score 8+/10; all required sections present, actionable and specific.
compatibility: |
  Single mode works on all Claude surfaces (Claude.ai, Claude Code, API).
  Repromptception mode requires Claude Code with tmux and CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1.
metadata:
  author: AytuncYildizli
  version: 7.0.0
---

# RePrompter v7.0

> **Your prompt sucks. Let's fix that.** Single prompts or full agent teams — one skill, two modes.

---

## Two Modes

| Mode | Trigger | What happens |
|------|---------|-------------|
| **Single** | "reprompt this", "clean up this prompt" | Interview → structured prompt → score |
| **Repromptception** | "reprompter teams", "repromptception", "run with quality", "smart run", "smart agents" | Plan team → reprompt each agent → tmux Agent Teams → evaluate → retry |

Auto-detection: if task mentions 2+ systems, "audit", or "parallel" → ask: "This looks like a multi-agent task. Want to use Repromptception mode?"

Definition — **2+ systems** means at least two distinct technical domains that can be worked independently. Examples: frontend + backend, API + database, mobile app + backend, infrastructure + application code, security audit + cost audit.
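The auto-detection rule above can be sketched as a simple keyword heuristic. This is illustrative only — the domain keyword map is an assumption for demonstration, not the skill's actual detection logic:

```python
# Illustrative heuristic for the multi-agent auto-detection rule.
# The keyword-to-domain map below is an assumed example, not the real list.
DOMAINS = {
    "frontend": {"frontend", "react", "ui"},
    "backend": {"backend", "api", "server"},
    "database": {"database", "db", "prisma", "sql"},
    "mobile": {"mobile", "ios", "android"},
    "infra": {"infrastructure", "deploy", "terraform"},
}

def suggests_multi_agent(task: str) -> bool:
    words = set(task.lower().split())
    hit_domains = sum(1 for kws in DOMAINS.values() if words & kws)
    # Trigger on 2+ distinct systems, or explicit audit/parallel wording.
    return hit_domains >= 2 or bool(words & {"audit", "parallel"})
```

A match does not force Repromptception — per the rule above, the skill asks the user before switching modes.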

## Don't Use When

- User wants a simple direct answer (no prompt generation needed)
- User wants casual chat/conversation
- Task is immediate execution-only with no reprompting step
- Scope does not involve prompt design, structure, or orchestration

> Clarification: RePrompter **does** support code-related tasks (feature, bugfix, API, refactor) by generating better prompts. It does **not** directly apply code changes in Single mode. Direct code execution belongs to coding-agent unless Repromptception execution mode is explicitly requested.

---

## Mode 1: Single Prompt

### Process

1. **Receive raw input**
2. **Input guard** — if input is empty, a single word with no verb, or clearly not a task → ask the user to describe what they want to accomplish
   - Reject examples: "hi", "thanks", "lol", "what's up", "good morning", random emoji-only input
   - Accept examples: "fix login bug", "write API tests", "improve this prompt"
3. **Quick Mode gate** — under 20 words, single action, no complexity indicators → generate immediately
4. **Smart Interview** — use `AskUserQuestion` with clickable options (2-5 questions max)
5. **Generate + Score** — apply template, show before/after quality metrics

### ⚠️ MUST GENERATE AFTER INTERVIEW

After interview completes, IMMEDIATELY:
1. Select template based on task type
2. Generate the full polished prompt
3. Show quality score (before/after table)
4. Ask if user wants to execute or copy

```
❌ WRONG: Ask interview questions → stop
✅ RIGHT: Ask interview questions → generate prompt → show score → offer to execute
```

### Interview Questions

Ask via `AskUserQuestion`. **Max 5 questions total.**

**Standard questions** (priority order — drop lower ones if task-specific questions are needed):
1. Task type: Build Feature / Fix Bug / Refactor / Write Tests / API Work / UI / Security / Docs / Content / Research / Multi-Agent
   - If user selects **Multi-Agent** while currently in **Single mode**, immediately transition to **Repromptception Phase 1 (Team Plan)** and confirm team execution mode (Parallel vs Sequential).
2. Execution mode: Single Agent / Team (Parallel) / Team (Sequential) / Let RePrompter decide
3. Motivation: User-facing / Internal tooling / Bug fix / Exploration / Skip *(drop first if space needed)*
4. Output format: XML Tags / Markdown / Plain Text / JSON *(drop first if space needed)*

**Task-specific questions** (MANDATORY for compound prompts — replace lower-priority standard questions):
- Extract keywords from prompt → generate relevant follow-up options
- Example: prompt mentions "telegram" → ask about alert type, interactivity, delivery
- **Vague prompt fallback:** if input has no extractable keywords (e.g., "make it better"), ask open-ended: "What are you working on?" and "What's the goal?" before proceeding

### Auto-Detect Complexity

| Signal | Suggested mode |
|--------|---------------|
| 2+ distinct systems (e.g., frontend + backend, API + DB, mobile + backend) | Team (Parallel) |
| Pipeline (fetch → transform → deploy) | Team (Sequential) |
| Single file/component | Single Agent |
| "audit", "review", "analyze" across areas | Team (Parallel) |

### Quick Mode

Enable when ALL true:
- < 20 words (excluding code blocks)
- Exactly 1 action verb from: add, fix, remove, rename, move, delete, update, create, implement, write, change, configure, test, run
- Single target (one file, component, or identifier)
- No conjunctions (and, or, plus, also)
- No vague modifiers (better, improved, some, maybe, kind of)

**Force interview if ANY present:** compound tasks ("and", "plus"), state management ("track", "sync"), vague modifiers ("better", "improved"), integration work ("connect", "combine", "sync"), broad scope nouns after any action verb, ambiguous pronouns ("it", "this", "that" without clear referent).
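The gate above can be sketched as a predicate. The verb list comes from the criteria; word counting, blocker detection, and target counting are simplified assumptions:

```python
import re

# Verb list from the Quick Mode criteria above.
ACTION_VERBS = {"add", "fix", "remove", "rename", "move", "delete", "update",
                "create", "implement", "write", "change", "configure", "test", "run"}
# Simplified blocker set: conjunctions, vague modifiers, state/integration signals.
BLOCKERS = {"and", "or", "plus", "also",
            "better", "improved", "some", "maybe",
            "track", "sync", "connect", "combine"}

def quick_mode_eligible(prompt: str) -> bool:
    # Strip fenced code blocks before counting words, per the < 20 words rule.
    text = re.sub(r"```.*?```", "", prompt, flags=re.DOTALL)
    words = text.lower().split()
    if len(words) >= 20:
        return False
    if any(w in BLOCKERS for w in words):
        return False
    # Exactly one action verb from the allowed list.
    return sum(1 for w in words if w in ACTION_VERBS) == 1
```

A real gate would also check for a single target and ambiguous pronouns; this sketch covers only the countable criteria.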

### Task Types & Templates

Detect task type from input. Each type has a dedicated template in `docs/references/`:

| Type | Template | Use when |
|------|----------|----------|
| Feature | `feature-template.md` | New functionality (default fallback) |
| Bugfix | `bugfix-template.md` | Debug + fix |
| Refactor | `refactor-template.md` | Structural cleanup |
| Testing | `testing-template.md` | Test writing |
| API | `api-template.md` | Endpoint/API work |
| UI | `ui-template.md` | UI components |
| Security | `security-template.md` | Security audit/hardening |
| Docs | `docs-template.md` | Documentation |
| Content | `content-template.md` | Blog posts, articles, marketing copy |
| Research | `research-template.md` | Analysis/exploration |
| Multi-Agent | `swarm-template.md` | Multi-agent coordination |
| Team Brief | `team-brief-template.md` | Team orchestration brief |

**Priority** (most specific wins): api > security > ui > testing > bugfix > refactor > content > docs > research > feature. For multi-agent tasks, use `swarm-template` for the team brief and the type-specific template for each agent's sub-prompt.
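The priority rule above amounts to a first-match walk over an ordered type list. The keyword map here is hypothetical — the skill detects types from context, not a fixed word list:

```python
# Priority order from the skill: most specific type wins.
PRIORITY = ["api", "security", "ui", "testing", "bugfix", "refactor",
            "content", "docs", "research", "feature"]

# Hypothetical per-type keywords, for illustration only.
TYPE_KEYWORDS = {
    "api": {"endpoint", "api", "rest"},
    "security": {"security", "vulnerability", "audit"},
    "ui": {"component", "button", "ui"},
    "testing": {"test", "tests", "coverage"},
    "bugfix": {"bug", "fix", "error", "crash"},
    "refactor": {"refactor", "cleanup"},
}

def pick_template(prompt: str) -> str:
    words = set(prompt.lower().split())
    for task_type in PRIORITY:  # first hit in priority order wins
        if words & TYPE_KEYWORDS.get(task_type, set()):
            return f"{task_type}-template.md"
    return "feature-template.md"  # default fallback
```

Note how "add endpoint tests" resolves to the API template, not testing — api outranks testing in the priority list.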

**How it works:** Read the matching template from `docs/references/{type}-template.md`, then fill it with task-specific context. Templates are NOT loaded into context by default — only read on demand when generating a prompt. If the template file is not found, fall back to the Base XML Structure below.

> To add a new task type: create `docs/references/{type}-template.md` following the XML structure below, then add it to the table above.

### Base XML Structure

All templates follow this core structure (8 required tags). Use as fallback if no specific template matches:

Exception: `team-brief-template.md` uses Markdown format for orchestration briefs. This is intentional — see template header for rationale.

```xml
<role>{Expert role matching task type and domain}</role>

<context>
- Working environment, frameworks, tools
- Available resources, current state
</context>

<task>{Clear, unambiguous single-sentence task}</task>

<motivation>{Why this matters — priority, impact}</motivation>

<requirements>
- {Specific, measurable requirement 1}
- {At least 3-5 requirements}
</requirements>

<constraints>
- {What NOT to do}
- {Boundaries and limits}
</constraints>

<output_format>{Expected format, structure, length}</output_format>

<success_criteria>
- {Testable condition 1}
- {Measurable outcome 2}
</success_criteria>
```

### Project Context Detection

Auto-detect tech stack from current working directory ONLY:
- Scan `package.json`, `tsconfig.json`, `prisma/schema.prisma`, etc.
- Session-scoped — different directory = fresh context
- Opt out with "no context", "generic", or "manual context"
- Never scan parent directories or carry context between sessions

---

## Mode 2: Repromptception (Agent Teams)

### TL;DR

```
Raw task in → quality output out. Every agent gets a reprompted prompt.

Phase 1: Score raw prompt, plan team, define roles (YOU do this, ~30s)
Phase 2: Write XML-structured prompt per agent (YOU do this, ~2min)
Phase 3: Launch tmux Agent Teams (AUTOMATED)
Phase 4: Read results, score, retry if needed (YOU do this)
```

**Key insight:** The reprompt phase costs ZERO extra tokens — YOU write the prompts, not another AI.

### Phase 1: Team Plan (~30 seconds)

1. **Score raw prompt** (1-10): Clarity, Specificity, Structure, Constraints, Decomposition
   - Phase 1 uses 5 quick-assessment dimensions. The full 6-dimension scoring (adding Verifiability) is used in Phase 4 evaluation.
2. **Pick mode:** parallel (independent agents) or sequential (pipeline with dependencies)
3. **Define team:** 2-5 agents max, each owns ONE domain, no overlap
4. **Write team brief** to `/tmp/rpt-brief-{taskname}.md` (use unique tasknames to avoid collisions between concurrent runs)
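One way to guarantee unique tasknames is to slugify the task and append a timestamp. The naming scheme below is an assumption, not the skill's prescribed convention:

```python
import re
import time

def unique_taskname(task: str) -> str:
    # Slugify the task, then append a second-resolution timestamp so
    # concurrent runs get distinct /tmp/rpt-* file paths.
    slug = re.sub(r"[^a-z0-9]+", "-", task.lower()).strip("-")[:32]
    return f"{slug}-{int(time.time())}"
```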

### Phase 2: Repromptception (~2 minutes)

For EACH agent:
1. Pick the best-matching template from `docs/references/` (or use base XML structure)
2. Read it, then apply these **per-agent adaptations**:

- `<role>`: Specific expert title for THIS agent's domain
- `<context>`: Add exact file paths (verified with `ls`), what OTHER agents handle (boundary awareness)
- `<requirements>`: At least 5 specific, independently verifiable requirements
- `<constraints>`: Scope boundary with other agents, read-only vs write, file/directory boundaries
- `<output_format>`: Exact path `/tmp/rpt-{taskname}-{agent-domain}.md`, required sections
- `<success_criteria>`: Minimum N findings, file:line references, no hallucinated paths

**Score each prompt — target 8+/10.** If under 8, add more context/constraints.

Write all to `/tmp/rpt-agent-prompts-{taskname}.md`

### Phase 3: Execute (tmux Agent Teams)

```bash
# 1. Start Claude Code with Agent Teams
tmux new-session -d -s {session} "cd /path/to/workdir && CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 claude --model opus"
# placeholders:
# - {session}: unique tmux session name (example: rpt-auth-audit)
# - /path/to/workdir: absolute repository path for the target project (example: /tmp/reprompter-check)

# 2. Wait for startup
sleep 12

# 3. Send prompt — MUST use -l (literal), Enter SEPARATE
# IMPORTANT: Include POLLING RULES to prevent lead TaskList loop bug
tmux send-keys -t {session} -l 'Create an agent team with N teammates. CRITICAL: Use model opus for ALL tasks.

POLLING RULES — YOU MUST FOLLOW THESE:
- After sending tasks, poll TaskList at most 10 times
- If ALL tasks show "done" status, IMMEDIATELY stop polling
- After 3 consecutive TaskList calls showing the same status, STOP polling regardless
- Once you stop polling: read the output files, then write synthesis
- DO NOT call TaskList more than 20 times total under any circumstances

Teammate 1 (ROLE): TASK. Write output to /tmp/rpt-{taskname}-{domain}.md. ... After all complete, synthesize into /tmp/rpt-{taskname}-final.md'
sleep 0.5
tmux send-keys -t {session} Enter

# 4. Monitor (poll every 15-30s)
tmux capture-pane -t {session} -p -S -100

# 5. Verify outputs
ls -la /tmp/rpt-{taskname}-*.md

# 6. Cleanup
tmux kill-session -t {session}
```

#### Critical tmux Rules

⚠️ **WARNING: Default teammate model is HAIKU unless explicitly overridden. Always set `--model opus` in both CLI launch command and team prompt.**

| Rule | Why |
|------|-----|
| Always `send-keys -l` (literal flag) | Without it, special chars break |
| Enter sent SEPARATELY | Combined fails for multiline |
| sleep 0.5 between text and Enter | Buffer processing time |
| sleep 12 after session start | Claude Code init time |
| `--model opus` in CLI AND prompt | Default teammate = HAIKU |
| Each agent writes own file | Prevents file conflicts |
| Unique taskname per run | Prevents collisions between concurrent sessions |

### Phase 4: Evaluate + Retry

1. Read each agent's report
2. Score against success criteria from Phase 2:
   - 8+/10 → ACCEPT
   - 4-7/10 → RETRY with delta prompt (tell them what's missing)
   - < 4/10 → RETRY with full rewrite
   
   **Accept checklist** (use alongside score — all must pass):
   - [ ] All required output sections present
   - [ ] Requirements from Phase 2 independently verifiable
   - [ ] No hallucinated file paths or line numbers
   - [ ] Scope boundaries respected (no overlap with other agents)
3. Max 2 retries (3 total attempts)
4. Deliver final report to user
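The decision logic above reduces to a small function. The "deliver-with-caveats" outcome for an exhausted retry budget is an assumption (the skill just says deliver the final report):

```python
def phase4_action(score: float, retries_used: int, max_retries: int = 2) -> str:
    """Decide what to do with an agent's output in Phase 4."""
    if score >= 8:
        return "accept"
    if retries_used >= max_retries:  # max 2 retries = 3 total attempts
        return "deliver-with-caveats"
    # Below 8: delta prompt for partial results, full rewrite when very weak.
    return "retry-delta" if score >= 4 else "retry-rewrite"
```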

**Delta prompt pattern:**
```
Previous attempt scored 5/10.
✅ Good: Sections 1-3 complete
❌ Missing: Section 4 empty, line references wrong
This retry: Focus on gaps. Verify all line numbers.
```

### Expected Cost & Time

| Team size | Time | Cost |
|-----------|------|------|
| 2 agents | ~5-8 min | ~$1-2 |
| 3 agents | ~8-12 min | ~$2-3 |
| 4 agents | ~10-15 min | ~$2-4 |

Estimates cover Phase 3 (execution) only. Add ~3 minutes for Phases 1-2 and ~5-8 minutes per retry. Each agent uses ~25-70% of their 200K token context window.

### Fallback: sessions_spawn (OpenClaw only)

When tmux/Claude Code is unavailable but running inside OpenClaw:
```
sessions_spawn(task: "<per-agent prompt>", model: "opus", label: "rpt-{role}")
```
Note: `sessions_spawn` is an OpenClaw-specific tool. Not available in standalone Claude Code.

**No tmux or OpenClaw?** Run agents sequentially: execute each agent's prompt one at a time in the same Claude Code session. Slower but works everywhere.

---

## Quality Scoring

**Always show before/after metrics:**

| Dimension | Weight | Criteria |
|-----------|--------|----------|
| Clarity | 20% | Task unambiguous? |
| Specificity | 20% | Requirements concrete? |
| Structure | 15% | Proper sections, logical flow? |
| Constraints | 15% | Boundaries defined? |
| Verifiability | 15% | Success measurable? |
| Decomposition | 15% | Work split cleanly? (Score 10 if task is correctly atomic) |

```markdown
| Dimension | Before | After | Change |
|-----------|--------|-------|--------|
| Clarity | 3/10 | 9/10 | +200% |
| Specificity | 2/10 | 8/10 | +300% |
| Structure | 1/10 | 10/10 | +900% |
| Constraints | 0/10 | 7/10 | new |
| Verifiability | 2/10 | 8/10 | +300% |
| Decomposition | 0/10 | 8/10 | new |
| **Overall** | **1.45/10** | **8.35/10** | **+476%** |
```
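The **Overall** row is the weighted sum of the dimension scores using the weights from the table above. A sketch that reproduces the example's numbers:

```python
# Weights from the Quality Scoring table (sum to 1.0).
WEIGHTS = {"clarity": 0.20, "specificity": 0.20, "structure": 0.15,
           "constraints": 0.15, "verifiability": 0.15, "decomposition": 0.15}

def overall(scores: dict) -> float:
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 2)

before = {"clarity": 3, "specificity": 2, "structure": 1,
          "constraints": 0, "verifiability": 2, "decomposition": 0}
after = {"clarity": 9, "specificity": 8, "structure": 10,
         "constraints": 7, "verifiability": 8, "decomposition": 8}

overall(before)  # 1.45 — matches the example table
overall(after)   # 8.35 — matches the example table
```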

> **Bias note:** Scores are self-assessed. Treat as directional indicators, not absolutes.

---

## Closed-Loop Quality (v6.0+)

For both modes, RePrompter supports post-execution evaluation:

1. **IMPROVE** — Score raw → generate structured prompt
2. **EXECUTE** — **Repromptception mode only**: route to agent(s), collect output. **Single mode does not execute code/commands; it only generates prompts.**
3. **EVALUATE** — Score output/prompt against success criteria (0-10)
4. **RETRY** — Thresholds: Single mode retry if score < 7; Repromptception retry if score < 8. Max 2 retries.

---

## Advanced Features

### Reasoning-Friendly Prompting (Claude 4.x)
Prompts should be less prescriptive about HOW. Focus on WHAT — clear task, requirements, constraints, success criteria. Let the model's own reasoning handle execution strategy.

**Example:** Instead of "Step 1: read the file, Step 2: extract the function" → "Extract the authentication logic from auth.ts into a reusable middleware. Requirements: ..."

### Response Prefilling (API only)
Prefill assistant response start to enforce format:
- `{` → forces JSON output
- `## Analysis` → skips preamble, starts with content
- `| Column |` → forces table format

### Context Engineering
Generated prompts should COMPLEMENT runtime context (CLAUDE.md, skills, MCP tools), not duplicate it. Before generating:
1. Check what context is already loaded (project files, skills, MCP servers)
2. Reference existing context: "Using the project structure from CLAUDE.md..."
3. Add ONLY what's missing — avoid restating what the model already knows

### Token Budget
Keep generated prompts under ~2K tokens for single mode, ~1K per agent for Repromptception. Longer prompts waste context window without improving quality. If a prompt exceeds budget, split into phases or move detail into constraints.
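A rough budget check can use the common ~4 characters per token rule of thumb — an approximation, not an exact tokenizer:

```python
def over_budget(prompt: str, mode: str = "single") -> bool:
    # Rough estimate: ~4 characters per token (rule of thumb, not exact).
    est_tokens = len(prompt) / 4
    # ~2K tokens for single mode, ~1K per agent for Repromptception.
    budget = 2000 if mode == "single" else 1000
    return est_tokens > budget
```

For precise counts, use an actual tokenizer; this sketch is only good enough to flag prompts that are clearly too long.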

### Uncertainty Handling
Always include explicit permission for the model to express uncertainty rather than fabricate:
- Add to constraints: "If unsure about any requirement, ask for clarification rather than assuming"
- For research tasks: "Clearly label confidence levels (high/medium/low) for each finding"
- For code tasks: "Flag any assumptions about the codebase with TODO comments"

---

## Settings (for Repromptception mode)

> Note: `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` is an experimental flag that may change in future Claude Code versions. Check [Claude Code docs](https://docs.anthropic.com/en/docs/claude-code) for current status.

In `~/.claude/settings.json`:
```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  },
  "preferences": {
    "teammateMode": "tmux",
    "model": "opus"
  }
}
```

| Setting | Values | Effect |
|---------|--------|--------|
| `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` | `"1"` | Enables agent team spawning |
| `teammateMode` | `"tmux"` / `"default"` | `tmux`: each teammate gets a visible split pane. `default`: teammates run in background |
| `model` | `"opus"` / `"sonnet"` | Teammates default to Haiku. Always set `model: opus` explicitly in your prompt — do not rely on runtime defaults. |

---

## Proven Results

### Single Prompt (v6.0)
Rough crypto dashboard prompt: **1.6/10 → 9.0/10** (+462%)

### Repromptception E2E (v6.1)
3 Opus agents, sequential pipeline (PromptAnalyzer → PromptEngineer → QualityAuditor):

| Metric | Value |
|--------|-------|
| Original score | 2.15/10 |
| After Repromptception | **9.15/10** (+326%) |
| Quality audit | PASS (99.1%) |
| Weaknesses found → fixed | 24/24 (100%) |
| Cost | $1.39 |
| Time | ~8 minutes |

### Repromptception vs Raw Agent Teams (v7.0)
Same audit task, 4 Opus agents:

| Metric | Raw | Repromptception | Delta |
|--------|-----|----------------|-------|
| CRITICAL findings | 7 | 14 | +100% |
| Total findings | ~40 | 104 | +160% |
| Cost savings identified | $377/mo | $490/mo | +30% |
| Token bloat found | 45K | 113K | +151% |
| Cross-validated findings | 0 | 5 | — |

---

## Tips

- **More context = fewer questions** — mention tech stack, files
- **"expand"** — if Quick Mode gave too simple a result, re-run with full interview
- **"quick"** — skip interview for simple tasks
- **"no context"** — skip auto-detection
- Context is per-project — switching directories = fresh detection

---

## Test Scenarios

See [TESTING.md](TESTING.md) for 13 verification scenarios + anti-pattern examples.

---

## Appendix: Extended XML Tags

Templates may add domain-specific tags beyond the 8 required base tags. Always include all base tags first.

| Extended Tag | Used In | Purpose |
|-------------|---------|---------|
| `<symptoms>` | bugfix | What the user sees, error messages |
| `<investigation_steps>` | bugfix | Systematic debugging steps |
| `<endpoints>` | api | Endpoint specifications |
| `<component_spec>` | ui | Component props, states, layout |
| `<agents>` | swarm | Agent role definitions |
| `<task_decomposition>` | swarm | Work split per agent |
| `<coordination>` | swarm | Inter-agent handoff rules |
| `<research_questions>` | research | Specific questions to answer |
| `<methodology>` | research | Research approach and methods |
| `<reasoning>` | research | Reasoning notes space (non-sensitive, concise) |
| `<current_state>` | refactor | Before state of the code |
| `<target_state>` | refactor | Desired after state |
| `<coverage_requirements>` | testing | What needs test coverage |
| `<threat_model>` | security | Threat landscape and vectors |
| `<structure>` | docs | Document organization |
| `<reference>` | docs | Source material to reference |


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### TESTING.md

```markdown
# RePrompter Test Scenarios

Verification scenarios for the RePrompter skill. Run these manually to validate behavior after changes.

---

## Scenario 1: Quick Mode - Simple Input

**Input:** "add a loading spinner"
**Expected:** Quick Mode activates, generates prompt immediately without interview.
**Verify:** No AskUserQuestion call, output includes `<role>`, `<task>`, `<requirements>`.

## Scenario 2: Quick Mode - Complex Rejection

**Input:** "update dashboard tracking and configure alerts"
**Expected:** Quick Mode is REJECTED (compound task + integration/state signals: "and", "tracking", "configure", "alerts").
**Verify:** Full interview runs. AskUserQuestion is called with at least task type + execution mode.

## Scenario 3: Full Interview Flow

**Input:** "we need some kind of authentication thing, maybe oauth"
**Expected:** Full interview with AskUserQuestion. All required high-priority questions asked (lower-priority questions may be dropped when replaced by task-specific mandatory questions).
**Verify:**
- Task Type question appears
- Execution Mode question appears
- Motivation question appears
- Generated prompt includes all XML sections
- Quality score is shown (before/after)

## Scenario 4: Team Mode

**Input:** "build a real-time chat system with websockets, database, and React frontend"
**Expected:** Team mode detected or offered. Team brief generated with 2-5 agent roles.
**Verify:**
- Execution Mode question offers team options
- If team selected: team brief is generated at `/tmp/rpt-brief-*.md`
- Per-agent sub-prompts are generated (one per agent)
- Each sub-prompt is scoped to that agent's responsibility

## Scenario 5: Context Detection

**Setup:** Run from a directory with `package.json` (Next.js), `tsconfig.json`, `prisma/schema.prisma`.
**Input:** "add user profile page"
**Expected:** Auto-detects tech stack and includes in context.
**Verify:**
- Context mentions Next.js, TypeScript, Prisma
- Source transparency: "Auto-detected from: [pwd]"
- No parent directory scanning

## Scenario 6: No Project Fallback

**Setup:** Run from home directory (`~`) or empty directory.
**Input:** "create a REST API"
**Expected:** No auto-detection. Generic context used or user asked for tech stack.
**Verify:**
- Message: "No project context detected"
- No framework assumptions in generated prompt

## Scenario 7: Opt-Out

**Setup:** Run from a project directory with config files.
**Input:** "reprompt no context - add a button"
**Expected:** Auto-detection skipped despite project files existing.
**Verify:**
- No tech stack in context
- Generic prompt generated
- Opt-out keyword detected ("no context")

## Scenario 8: Closed-Loop Quality (v6.0+)

**Input:** "reprompter run with quality - audit the auth module"
**Expected:** Full loop: improve prompt -> execute -> evaluate -> retry if needed.
**Verify:**
- Prompt is generated and scored
- Execution happens (single agent or team)
- Output is evaluated against success criteria
- If Repromptception score < 8, retry with delta prompt (Single mode threshold remains < 7 for prompt quality)
- Max 2 retries observed

## Scenario 9: Edge Cases

### 9a: Empty Input
**Input:** "" (empty)
**Expected:** Ask user to provide a prompt. Do not generate.

### 9b: Non-English Input
**Input:** "ajouter un bouton de connexion" (French)
**Expected:** Detect language, generate prompt in French.

### 9c: Code Block Input
**Input:** "fix this: ```js\nconst x = undefined.foo\n```"
**Expected:** Treat code as context, extract intent ("fix undefined access"), generate debugging prompt.

### 9d: Very Long Input (500+ words)
**Input:** [paste a 600-word requirements document]
**Expected:** Summarize key points, confirm with user, flag as complex, run full interview.

### 9e: Conflicting Choices
**Scenario:** User selects "Fix Bug" as task type but "Team (Parallel)" as execution mode.
**Expected:** Ask clarifying follow-up: "You chose Bug Fix but also Team Parallel - is this a multi-service bug?"

---

## Scenario 10: Repromptception E2E

**Input:** "reprompter teams - audit the auth module for security issues and test coverage gaps"
**Expected:** Full Repromptception pipeline (Phase 1-4).
**Verify:**
- Phase 1: Team brief written to `/tmp/rpt-brief-*.md` with 2-3 agents
- Phase 2: Per-agent XML prompts written to `/tmp/rpt-agent-prompts-*.md`, each scored 8+/10
- Phase 3: tmux session created, agents execute in parallel
- Phase 4: Results evaluated, each agent output has required sections
- Final synthesis delivered to user

## Scenario 11: Delta Retry

**Setup:** Manually create a partial output file that would score 5/10 (missing sections).
**Input:** Trigger Phase 4 evaluation on the partial output.
**Expected:** Retry triggered with delta prompt specifying exact gaps.
**Verify:**
- Delta prompt lists ✅ good sections and ❌ missing sections
- Retry uses the same agent role and constraints
- Max 2 retries enforced (3 total attempts)

## Scenario 12: Template Loading

**Input:** "reprompt - fix the login timeout bug" (should load bugfix-template)
**Expected:** bugfix-template.md read from `docs/references/`, not base XML.
**Verify:**
- Template file actually read (not just base structure used)
- Bug-specific sections present (symptoms, investigation steps)
- If template file deleted, falls back to Base XML Structure gracefully

## Scenario 13: Concurrent Sessions

**Setup:** Start two Repromptception runs simultaneously with different tasknames.
**Expected:** No file collisions between runs.
**Verify:**
- Each run uses unique taskname in file paths
- Output files don't overwrite each other
- Both sessions complete independently

---

## Anti-Patterns (Should NOT Happen)

| Anti-Pattern | Why It's Wrong |
|-------------|----------------|
| Stop after interview without generating | Step 4 (generation) is required |
| Quick Mode on compound prompts | Complexity keywords should force interview |
| Cross-project context leakage | Session isolation must be enforced |
| Generate in English for non-English input | Should match input language |
| Skip task-specific questions for complex prompts | Domain-specific questions are mandatory |

```



---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### README.md

```markdown
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="assets/logo-dark.svg">
  <source media="(prefers-color-scheme: light)" srcset="assets/logo.svg">
  <img alt="RePrompter" src="assets/logo.svg" width="440">
</picture>

<br/>

**Your prompt sucks. Let's fix that.**

[![Version](https://img.shields.io/badge/version-7.0.0-0969da)](https://github.com/aytuncyildizli/reprompter/releases)
[![License](https://img.shields.io/github/license/aytuncyildizli/reprompter?color=2da44e)](LICENSE)
[![Stars](https://img.shields.io/github/stars/aytuncyildizli/reprompter?style=flat&color=f0883e)](https://github.com/aytuncyildizli/reprompter/stargazers)
[![Issues](https://img.shields.io/github/issues/aytuncyildizli/reprompter?color=da3633)](https://github.com/aytuncyildizli/reprompter/issues)
![Claude Code](https://img.shields.io/badge/Claude%20Code-primary-111111)
![OpenClaw](https://img.shields.io/badge/OpenClaw-supported-7c3aed)
![LLM](https://img.shields.io/badge/Prompt%20Mode-Any%20Structured%20LLM-0ea5e9)

---

RePrompter interviews you, figures out what you actually want, and writes the prompt you were too lazy to write yourself. **v7 merges single-prompt and team orchestration into one skill** — it detects complexity, picks execution mode, and scores everything.

Compatibility:
- **Single prompt-improvement mode:** Claude Code, OpenClaw, or any structured-prompt LLM
- **Repromptception team orchestration mode:** Claude Code / OpenClaw (tmux Agent Teams + orchestration flow)

<br/>

## The Problem

You type this:

```
uhh build a crypto dashboard, maybe coingecko data, add caching, test it too, don't break existing api
```

That's a **1.6/10** prompt. The LLM will guess scope, skip constraints, hallucinate requirements, and produce something you'll rewrite anyway.

## What RePrompter Does

It turns that into a **9.0/10** prompt in ~15 seconds. No prompt engineering skills required:

<br/>
<p align="center">
  <img src="assets/demo.gif" alt="RePrompter demo — rough prompt to structured output in 15 seconds" width="720">
</p>
<br/>

---

## How It Works

```
You type rough prompt
        ↓
  Quick Mode gate
        │
  Simple task? ──→ Generate immediately
        │
  Complex task? ──→ Interactive interview (clickable options)
        │                    │
        │            Complexity detection
        │            Execution mode selection
        │            Template matching
        │                    │
        ↓                    ↓
  Structured prompt ← Quality scored (before vs after)
        │
  Single agent? ──→ One polished prompt
        │
  Multi-agent? ──→ Team brief + per-agent sub-prompts
```

### Quick Mode
Simple, single-action prompts skip the interview entirely. No latency tax for `"fix the typo in header.tsx"`.

### Interactive Interview
For anything non-trivial, RePrompter asks **structured, clickable questions** — not generic fluff. If you mention "tracking", it asks tracking questions. If you mention "signals", it asks signal delivery questions.

<details>
<summary><strong>Example interview (actual shape)</strong></summary>

```json
{
  "questions": [
    {
      "header": "Task Type",
      "question": "What type of task is this?",
      "options": [
        {"label": "Build Feature", "description": "Create new functionality"},
        {"label": "Fix Bug", "description": "Debug and resolve an issue"},
        {"label": "Refactor", "description": "Improve existing code structure"},
        {"label": "Multi-Agent/Swarm", "description": "Coordinate multiple agents"}
      ]
    },
    {
      "header": "Execution Mode",
      "question": "How should this be executed?",
      "options": [
        {"label": "Single Agent", "description": "One agent handles everything"},
        {"label": "Team (Parallel)", "description": "Split into specialized agents"},
        {"label": "Team (Sequential)", "description": "Pipeline handoffs"},
        {"label": "Let Reprompter decide", "description": "Auto-detect from complexity"}
      ]
    }
  ]
}
```

</details>

### Auto Complexity Detection

| Task signal | Suggested mode |
|---|---|
| Frontend + backend + tests | Team (Parallel) |
| Fetch → transform → deploy | Team (Sequential) |
| Single file/component | Single Agent |
| Research + implementation | Team (Parallel) |

---

## Before / After

### Input
```
uhh build a crypto dashboard, maybe coingecko data, add caching, test it too, don't break existing api
```

### Output
```xml
<role>
Senior full-stack engineer specializing in real-time dashboard systems,
REST API integration, and cache-aware data pipelines.
</role>

<context>
- Frontend + backend + tests required (cross-layer scope)
- Data source: CoinGecko API (free tier, 30 calls/min rate limit)
- Existing API behavior must be preserved — consumers depend on current response schema
- Execution mode: Team (Parallel) due to independent frontend/backend/test workstreams
</context>

<motivation>
User-facing feature with reliability requirements. Users need real-time crypto
prices without page refresh. Prioritize correctness, cache resilience, and
non-breaking backward compatibility for existing API consumers.
</motivation>

<task>
Deliver a real-time crypto dashboard backed by a cache-aware CoinGecko proxy API,
with unit tests for both API and UI, without breaking existing API contracts.
</task>

<requirements>
- Build dashboard UI with loading, error, empty, and stale-data states
- Implement backend CoinGecko proxy with JSON schema validation + configurable cache TTL
- Preserve backward compatibility for all existing API consumers
- Add deterministic unit tests for frontend rendering states and backend edge cases
- Cache must serve stale data on upstream failure (stale-while-revalidate pattern)
</requirements>

<constraints>
- No direct client-side calls to CoinGecko (all traffic through proxy)
- No breaking changes to existing API response fields or status codes
- Mock all external network boundaries in tests — zero real HTTP calls
- Rate limit CoinGecko calls to stay within free tier (30/min)
</constraints>

<output_format>
- Backend: /api/prices endpoint returning { prices: [...], cached: bool, updatedAt: ISO }
- Frontend: React component with 5s auto-refresh interval
- Tests: Vitest suite with ≥80% branch coverage
</output_format>

<success_criteria>
- Dashboard auto-updates every 5s and shows "stale" indicator when cache is old
- Proxy returns normalized data within 200ms (cache hit) / 2s (cache miss)
- Existing API integration tests still pass with zero modifications
- New unit tests cover: success, upstream error, cache hit, cache miss, rate limit paths
</success_criteria>
```

### Quality Jump

| Dimension | Before | After | Delta |
|---|---:|---:|---:|
| Clarity | 3/10 | 9/10 | +200% |
| Specificity | 2/10 | 9/10 | +350% |
| Structure | 1/10 | 10/10 | +900% |
| Constraints | 0/10 | 8/10 | new |
| Verifiability | 1/10 | 9/10 | +800% |
| Decomposition | 2/10 | 9/10 | +350% |
| **Overall** | **1.6/10** | **9.0/10** | **+462%** |

---

## Team Mode

This is where RePrompter stops being "prompt cleanup" and becomes **orchestration**.

When auto-detection finds multiple systems (UI + API + tests), it generates:
1. A **team coordination brief** with handoff rules
2. **Per-agent sub-prompts** with scoped responsibilities

<details>
<summary><strong>📋 Team Brief (generated artifact)</strong></summary>

```markdown
# Reprompter Team Brief

- Execution Mode: Team (Parallel)
- Overall Task: Real-time crypto dashboard with cache-aware backend and full unit coverage

## Agent Roles
1. Frontend Agent — dashboard UI, polling, loading/error/stale states
2. Backend Agent — CoinGecko proxy API, schema validation, cache strategy
3. Tests Agent — deterministic unit tests for frontend + backend behavior

## Coordination Rules
- Backend publishes API contract to /tmp/api-contract.md first
- Frontend consumes contract without shape drift
- Tests use shared DTO definitions from backend contract
- Each agent writes to own output file (no conflicts)
- Integration checkpoint: lead reads all 3 outputs before final merge
```

</details>

<details>
<summary><strong>🎨 Frontend Agent — full Repromptception prompt</strong></summary>

```xml
<role>
Senior frontend engineer specializing in real-time React dashboards
with WebSocket/polling patterns and graceful degradation.
</role>

<context>
- Framework: Next.js 14 with App Router (detected from package.json)
- Backend agent is building /api/prices endpoint (see /tmp/api-contract.md)
- No direct CoinGecko calls from client — all data via backend proxy
- Other agents handle backend (Agent 2) and tests (Agent 3)
</context>

<task>
Implement the dashboard UI component for real-time crypto price display
with 5-second auto-refresh, loading/error/stale states, and responsive layout.
</task>

<requirements>
- Auto-refresh every 5 seconds via polling (not WebSocket)
- Show loading skeleton on initial fetch
- Show error state with retry button on fetch failure
- Show "stale" indicator when data is older than 30 seconds
- Display: coin name, price, 24h change (green/red), sparkline
- Responsive: mobile-first, 1-column on mobile, grid on desktop
</requirements>

<constraints>
- Do NOT call CoinGecko directly — only use /api/prices
- Do NOT modify any existing pages or components
- Use existing design system tokens (colors, spacing, fonts)
- Keep component tree shallow (max 3 levels deep)
</constraints>

<output_format>
Write complete implementation to /tmp/rpt-frontend.md including:
- Component code (React/TSX)
- Custom hook for polling logic
- CSS/Tailwind styles
- Type definitions
</output_format>

<success_criteria>
- All 4 states render correctly (loading, data, error, stale)
- No CoinGecko imports in any frontend file
- Component renders within 100ms (no heavy computation in render)
- Lighthouse accessibility score ≥ 90
</success_criteria>
```

</details>

<details>
<summary><strong>⚙️ Backend Agent — full Repromptception prompt</strong></summary>

```xml
<role>
Senior backend engineer specializing in API integration,
resilient caching patterns, and rate-limit-aware proxy design.
</role>

<context>
- Next.js 14 API routes (App Router, /app/api/)
- CoinGecko free tier: 30 calls/min rate limit
- Existing /api/ routes must not break — consumers depend on current schema
- Frontend agent (Agent 1) will consume /api/prices
- Tests agent (Agent 3) will test this endpoint
</context>

<task>
Build a cache-aware /api/prices endpoint that proxies CoinGecko,
validates responses, and serves stale data on upstream failure.
</task>

<requirements>
- GET /api/prices returns { prices: CoinPrice[], cached: boolean, updatedAt: string }
- In-memory cache with configurable TTL (default 10s)
- Stale-while-revalidate: serve cached data when CoinGecko is down
- JSON schema validation on CoinGecko response before caching
- Rate limiter: max 25 calls/min to CoinGecko (5 call buffer)
- Publish API contract to /tmp/api-contract.md for other agents
</requirements>

<constraints>
- Do NOT modify existing API routes or their response schemas
- Do NOT expose CoinGecko API key to frontend
- Do NOT use external cache (Redis) — in-memory only for now
- Error responses must follow existing API error format
</constraints>

<output_format>
Write complete implementation to /tmp/rpt-backend.md including:
- API route handler code
- Cache module with TTL logic
- Rate limiter module
- Type definitions + API contract
</output_format>

<success_criteria>
- Cache hit returns in < 50ms
- Upstream failure returns last cached data (not 500)
- Rate limiter prevents > 25 calls/min to CoinGecko
- Zero breaking changes to existing routes (verified by existing tests)
</success_criteria>
```

</details>
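The cache behavior the backend prompt specifies — configurable TTL, and serving stale data when the upstream fails — can be sketched in a few lines. This is an illustrative pattern, not the skill's or any generated agent's actual implementation:

```python
import time

class SWRCache:
    """In-memory stale-while-revalidate cache with a configurable TTL."""
    def __init__(self, fetch, ttl=10.0):
        self.fetch = fetch          # upstream call, e.g. the CoinGecko request
        self.ttl = ttl
        self.value = None
        self.updated_at = 0.0

    def get(self):
        fresh = (time.time() - self.updated_at) < self.ttl
        if fresh and self.value is not None:
            return self.value, True            # cache hit
        try:
            self.value = self.fetch()          # cache miss: refresh
            self.updated_at = time.time()
            return self.value, False
        except Exception:
            if self.value is not None:
                return self.value, True        # upstream down: serve stale
            raise                              # nothing cached yet

# Usage: a flaky upstream still serves the last good value.
calls = {"n": 0}
def upstream():
    calls["n"] += 1
    if calls["n"] > 1:
        raise RuntimeError("upstream down")
    return {"btc": 67000}

cache = SWRCache(upstream, ttl=0)   # ttl=0 forces a refresh attempt each call
print(cache.get())  # ({'btc': 67000}, False)  first fetch
print(cache.get())  # ({'btc': 67000}, True)   upstream fails, stale served
```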

<details>
<summary><strong>🧪 Tests Agent — full Repromptception prompt</strong></summary>

```xml
<role>
Senior test engineer specializing in deterministic unit tests,
API boundary mocking, and React component testing with Vitest.
</role>

<context>
- Test framework: Vitest + React Testing Library (from vitest.config.ts)
- Frontend agent (Agent 1) builds dashboard component
- Backend agent (Agent 2) builds /api/prices endpoint
- Read their outputs from /tmp/rpt-frontend.md and /tmp/rpt-backend.md
- All external HTTP calls must be mocked — zero real network in tests
</context>

<task>
Create comprehensive unit tests for both the frontend dashboard component
and the backend /api/prices endpoint, covering all edge cases.
</task>

<requirements>
- Backend tests: success, upstream error, cache hit, cache miss, rate limit, schema validation failure
- Frontend tests: loading state, data render, error state + retry, stale indicator, auto-refresh
- Minimum 15 test cases total (8 backend + 7 frontend)
- Each test must be deterministic — no timers, no real HTTP, no flaky assertions
- Mock CoinGecko responses with realistic fixtures
- Test cache TTL expiry with fake timers (vi.useFakeTimers)
</requirements>

<constraints>
- Do NOT make real HTTP calls to any external service
- Do NOT modify existing test files or test utilities
- Use vi.mock() for fetch/HTTP, vi.useFakeTimers() for time-dependent logic
- Each test must complete in < 100ms
</constraints>

<output_format>
Write complete test suite to /tmp/rpt-tests.md including:
- Backend test file (*.test.ts)
- Frontend test file (*.test.tsx)
- Mock fixtures (CoinGecko response shapes)
- Coverage expectations
</output_format>

<success_criteria>
- All 15+ tests pass deterministically
- ≥ 80% branch coverage on both frontend and backend
- Zero network calls in test execution
- Tests run in < 2 seconds total
</success_criteria>
```

</details>

---

## Installation

### Claude Code

```bash
mkdir -p skills/reprompter
curl -sL https://github.com/aytuncyildizli/reprompter/archive/main.tar.gz | \
  tar xz --strip-components=1 -C skills/reprompter
```

Claude Code auto-discovers `skills/reprompter/SKILL.md`.

### OpenClaw

```bash
# Copy to your OpenClaw workspace
cp -R reprompter /path/to/workspace/skills/reprompter
```

### Any Structured-Prompt LLM

Use `SKILL.md` as the behavior spec. Templates are in `docs/references/`.

> Note: Non-Claude runtimes are supported for **prompt-improvement mode**. Repromptception orchestration features (tmux Agent Teams/session tools) are Claude Code/OpenClaw specific.

---

## Quick Start

After installing, say one of the trigger phrases:

```
reprompt this: build a REST API with auth and rate limiting
```

```
reprompter teams - audit the auth module for security and test coverage
```

- **Single mode** triggers: "reprompt", "reprompt this", "clean up this prompt", "structure my prompt"
- **Team mode** triggers: "reprompter teams", "repromptception", "run with quality", "smart run", "smart agents"

RePrompter will interview you (2-5 questions), generate a structured XML prompt, and show a before/after quality score.

---

## Quality Dimensions

Every transformation is scored on six weighted dimensions:

| Dimension | Weight | What it checks |
|---|---:|---|
| Clarity | 20% | Is the task unambiguous? |
| Specificity | 20% | Are requirements concrete and scoped? |
| Structure | 15% | Is prompt structure complete and logical? |
| Constraints | 15% | Are boundaries explicit? |
| Verifiability | 15% | Can output be validated objectively? |
| Decomposition | 15% | Is work split cleanly (steps or agents)? |

**Overall score** = weighted average. Most rough prompts score 1–3. RePrompter typically outputs 8–9+.
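The weighted average can be reproduced directly from the tables in this README (the function name here is illustrative, not part of the skill):

```python
# Weights from the Quality Dimensions table (sum to 1.0).
WEIGHTS = {
    "clarity": 0.20,
    "specificity": 0.20,
    "structure": 0.15,
    "constraints": 0.15,
    "verifiability": 0.15,
    "decomposition": 0.15,
}

def overall_score(scores: dict) -> float:
    """Weighted average of per-dimension scores (each 0-10)."""
    return round(sum(WEIGHTS[d] * s for d, s in scores.items()), 1)

# Per-dimension scores for the crypto-dashboard prompt (Before/After section):
before = {"clarity": 3, "specificity": 2, "structure": 1,
          "constraints": 0, "verifiability": 1, "decomposition": 2}
after = {"clarity": 9, "specificity": 9, "structure": 10,
         "constraints": 8, "verifiability": 9, "decomposition": 9}

print(overall_score(before))  # 1.6
print(overall_score(after))   # 9.0
```

These reproduce the 1.6 → 9.0 overall jump shown in the Quality Jump table.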

---

## Templates

| Template | Use case |
|---|---|
| `feature-template` | New functionality |
| `bugfix-template` | Debug + fix |
| `refactor-template` | Structural cleanup |
| `testing-template` | Unit/integration test tasks |
| `api-template` | Endpoint/API work |
| `ui-template` | UI component implementation |
| `security-template` | Security hardening/audit tasks |
| `docs-template` | Technical docs |
| `content-template` | Blog posts, articles, marketing copy |
| `research-template` | Analysis / option exploration |
| `swarm-template` | Multi-agent coordination |
| `team-brief-template` | Team orchestration brief |

> Templates live in `docs/references/` and are read on demand (not loaded into context). Team brief is generated during Repromptception Phase 1.

---

## v7.0 — Unified Skill + Repromptception 🧠

**v7.0 merges `reprompter` + `reprompter-teams` into a single skill with two modes.** No more separate skills — one SKILL.md handles both single prompts and full agent team orchestration.

Most agent orchestration tools polish the overall task prompt, then hand each agent a vague one-line sub-task. RePrompter runs a full RePrompter pass on every agent's sub-prompt:

```
Raw task
    ↓
Layer 1: Team Plan — roles, coordination, brief
    ↓
Layer 2: Repromptception — each agent's sub-task gets its own
         full RePrompter pass (score, improve, add constraints,
         success criteria, output format)
    ↓
Execute — every agent starts with an 8+/10 prompt
    ↓
Evaluate — score output against success criteria
    ↓
Retry (if needed) — delta prompts targeting specific gaps
```
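The Execute → Evaluate → Retry tail of this pipeline can be sketched as a small loop. The `execute`, `evaluate`, and delta-prompt functions below are stand-ins — the real skill scores output against the generated success criteria and caps the budget at two retries:

```python
def run_with_retries(execute, evaluate, make_delta_prompt, prompt, max_retries=2):
    """Execute -> Evaluate -> Retry with delta prompts targeting specific gaps."""
    output = execute(prompt)
    for _ in range(max_retries):
        ok, gaps = evaluate(output)       # score output vs success criteria
        if ok:
            return output
        prompt = make_delta_prompt(gaps)  # delta prompt: only the failing gaps
        output = execute(prompt)
    return output                         # best effort once retry budget is spent

# Toy usage: the first run misses a criterion, the delta run fixes it.
attempts = []
execute = lambda p: attempts.append(p) or ("ok" in p)
evaluate = lambda out: (out, [] if out else ["missing success criterion"])
delta = lambda gaps: "retry: ok, fix " + gaps[0]

result = run_with_retries(execute, evaluate, delta, "first attempt")
print(result)          # True
print(len(attempts))   # 2
```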

**Before Repromptception:** Raw task given to 4 agents:
> "audit my system for security, cost waste, config issues, and memory bloat"
>
> That's a **2.5/10** prompt. Each agent gets a vague one-liner and has to guess scope, output format, and success criteria.

**After Repromptception:** Each agent gets a structured XML prompt (all 4 shown below).

The team lead launches all 4 agents in parallel. Each writes to its own `/tmp/` file. No scope overlap.

<details open>
<summary><strong>🔒 Agent 1: SecurityAuditor (score: 2.0 → 8.9)</strong></summary>

```xml
<role>
Senior application security engineer specializing in Python web applications,
OWASP Top 10, and credential hygiene in git-tracked repositories.
</role>

<context>
- Codebase: Python 3.11, psycopg2, urllib3, FastAPI. DB: Neon Postgres + SQLite.
- 76 Python files across scripts/whatsapp-memory/, scripts/finance/, scripts/norget/
- Known issue: .gitignore was recently expanded but credentials may exist in git history
- Other agents: TokenCostAuditor (#2), ConfigAuditor (#3), MemoryBloatAuditor (#4)
- YOUR scope: source code security ONLY
</context>

<task>Audit all Python source files for security vulnerabilities, hardcoded credentials,
injection risks, and unsafe patterns.</task>

<requirements>
- SQL injection: parameterized queries vs string formatting in all DB calls
- Hardcoded secrets: API keys, OAuth tokens, passwords in source code
- SSRF: URL construction in urllib/requests — user input in URLs
- Subprocess calls: shell=True, unsanitized arguments
- Minimum 8 findings across at least 3 severity levels
</requirements>

<constraints>
- Source code ONLY — do not audit .env, memory/, or config (other agents do that)
- READ-ONLY: report only, do not modify files
- Verify every file:line reference before reporting
</constraints>

<output_format>
/tmp/rpc2-audit-security.md — findings table with severity, file:line, fix suggestion
</output_format>

<success_criteria>
- ≥8 findings, every one with exact file:line, ≥1 CRITICAL + 2 HIGH, concrete fixes
</success_criteria>
```

</details>

<details>
<summary><strong>💸 Agent 2: TokenCostAuditor (score: 2.2 → 9.0)</strong></summary>

```xml
<role>
Cost optimization engineer specializing in LLM API usage analysis,
cron job efficiency, and AI session token consumption patterns.
</role>

<context>
- 52 cron jobs in .openclaw/cron/jobs.json (each spawns isolated AI session)
- 3 gateways: Mahmut (port 18789), Ziggy (18795), ZeroClaw (18790)
- Model: Claude Opus 4.6 ($15/M input, $75/M output)
- Known waste: some crons use full AI sessions to run simple bash scripts
- Other agents: SecurityAuditor (#1), ConfigAuditor (#3), MemoryBloatAuditor (#4)
</context>

<task>Analyze all cron jobs for token waste, calculate monthly costs per job,
identify redundancies, and propose a tiered savings plan.</task>

<requirements>
- Calculate cost per job: frequency × avg tokens × model pricing
- Identify jobs that can be converted from AI sessions to pure bash/launchd
- Find duplicate jobs across gateways
- Group savings into tiers: immediate ($0 effort), this week, this month
- Total monthly spend and achievable target
</requirements>

<constraints>
- Analyze cron jobs ONLY — do not audit source code or memory files
- Use real pricing ($15/M input, $75/M output for Opus 4.6)
- Do not disable or modify any jobs — report recommendations only
</constraints>

<output_format>
/tmp/rpc2-audit-tokens.md — cost table per job, savings tiers, total reduction
</output_format>

<success_criteria>
- Every job has estimated monthly cost, ≥$200/mo in identified savings, tiered action plan
</success_criteria>
```

</details>
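The per-job cost formula in the TokenCostAuditor prompt (frequency × avg tokens × model pricing) is simple arithmetic. A sketch with illustrative numbers — only the $15/$75 per-million pricing comes from the prompt; the token counts are made up:

```python
# Opus 4.6 pricing from the prompt above (USD per token).
INPUT_PRICE = 15 / 1_000_000
OUTPUT_PRICE = 75 / 1_000_000

def monthly_cost(runs_per_day, input_tokens, output_tokens, days=30):
    """Estimated monthly cost of one cron job spawning an AI session per run."""
    per_run = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
    return runs_per_day * days * per_run

# Illustrative job: hourly session consuming 20K input / 2K output tokens per run.
cost = monthly_cost(runs_per_day=24, input_tokens=20_000, output_tokens=2_000)
print(f"${cost:.2f}/mo")  # $324.00/mo
```

Numbers like this make the "convert simple crons to pure bash" tier easy to justify: a bash script doing the same work costs $0 in tokens.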

<details>
<summary><strong>⚙️ Agent 3: ConfigSettingsAuditor (score: 1.8 → 8.9)</strong></summary>

```xml
<role>
DevSecOps engineer specializing in configuration security, secrets management,
.gitignore hygiene, and mechanical enforcement of safety rules.
</role>

<context>
- OpenClaw config: openclaw.json + .openclaw/ directory
- Claude Code settings: ~/.claude/settings.json (deny list, env vars)
- Safety rules in SOUL.md (8 hard rules — but are they mechanically enforced?)
- .gitignore recently expanded but may still miss sensitive paths
- Other agents: SecurityAuditor (#1), TokenCostAuditor (#2), MemoryBloatAuditor (#4)
</context>

<task>Audit all configuration files for security gaps, missing enforcement of safety rules,
credential exposure risks, and .gitignore completeness.</task>

<requirements>
- Check .gitignore covers: .env, memory/, secrets/, logs/, *.sqlite, PII files
- Check settings.json deny list enforces SOUL.md rules (kill commands, tweet posting)
- Check for credentials in config files, entity files, memory summaries
- Check gateway config for unnecessary permissions or exposed endpoints
- Verify each SOUL.md hard rule has mechanical enforcement (not just prompt compliance)
</requirements>

<constraints>
- Config and settings ONLY — do not audit Python source code or cron jobs
- Do not modify any config files — report gaps only
- Check both Mahmut and ZeroClaw configs if accessible
</constraints>

<output_format>
/tmp/rpc2-audit-config.md — gap analysis table, SOUL.md enforcement matrix, remediation steps
</output_format>

<success_criteria>
- Every SOUL.md rule checked for mechanical enforcement, ≥10 findings, prioritized P0/P1/P2
</success_criteria>
```

</details>

<details>
<summary><strong>🧠 Agent 4: MemoryBloatAuditor (score: 2.0 → 8.7)</strong></summary>

```xml
<role>
Systems optimization engineer specializing in context window management,
memory file deduplication, and token budget analysis for LLM-powered assistants.
</role>

<context>
- Memory files: MEMORY.md, memory/*.md, memory/entities/*.md, memory/summaries/*.json
- Entity files auto-generated by PARA synthesis (daily cron)
- Known issue: openclaw-setup.md is 29K words (38K tokens) — largest single file
- Context window: 1M tokens (Opus 4.6), but bloat reduces useful conversation space
- Other agents: SecurityAuditor (#1), TokenCostAuditor (#2), ConfigAuditor (#3)
</context>

<task>Analyze all memory and entity files for bloat, duplication, misfiled facts,
and stale content. Quantify token savings from cleanup.</task>

<requirements>
- Measure total tokens loaded per session (all injected files)
- Identify duplicate content across: MEMORY.md ↔ entity files ↔ SESSION_STATE.md
- Find misfiled entity facts (e.g., unrelated content in wrong entity file)
- Identify stale/completed TODOs, old audit reports, obsolete sections
- Calculate token savings per cleanup action
</requirements>

<constraints>
- Memory and entity files ONLY — do not audit source code, config, or cron jobs
- Do not delete or modify files — report recommendations with exact paths + line numbers
- Count tokens using ~4 chars per token approximation
</constraints>

<output_format>
/tmp/rpc2-audit-bloat.md — bloat inventory table, per-file token count, cleanup actions with savings
</output_format>

<success_criteria>
- Total token count for all injected files, ≥50K tokens in identified savings, specific line ranges to remove
</success_criteria>
```

</details>

**4-phase loop:** Team Plan → Repromptception → Execute → Evaluate+Retry

Trigger words: `"reprompter teams"`, `"repromptception"`, `"run with quality"`, `"smart run"`, `"smart agents"`

Normal single-prompt usage is unchanged — Repromptception only activates for team/multi-agent tasks.

### Proven Results

**E2E test** — 3 Opus agents, sequential pipeline:

| Metric | Value |
|--------|-------|
| Original prompt score | 2.15 / 10 |
| After Repromptception | **9.15 / 10** |
| Delta | **+7.00 points (+326%)** |
| Quality audit | **PASS (99.1%)** |
| Weaknesses found → fixed | 24 → 24 (100%) |
| Cost | $1.39 |
| Time | ~8 minutes |

**Repromptception vs Raw Agent Teams** — same audit, 4 Opus agents:

| Metric | Raw | Repromptception | Delta |
|--------|-----|----------------|-------|
| CRITICAL findings | 7 | 14 | **+100%** |
| Total findings | ~40 | 104 | **+160%** |
| Cost savings found | $377/mo | $490/mo | **+30%** |
| Token bloat found | 45K | 113K | **+151%** |
| Cross-validated findings | 0 | 5 | — |

Methodology: scores come from parallel audit runs with identical task prompts.

The pipeline runs via **Claude Code Agent Teams** with `teammateMode: "tmux"` for real-time split-pane monitoring. All orchestration docs are now in SKILL.md (TEAMS.md removed in v7).

---

## Other Features

- **Extended thinking** — Favors outcome clarity over rigid step scripting
- **Response prefilling** — Suggests `{` prefills for JSON-first API workflows
- **Context engineering** — Prompts complement runtime context, don't duplicate it
- **Token budget** — Keeps prompts compact (~2K single mode, ~1-2K per agent)
- **Uncertainty handling** — Explicit permission to ask, not fabricate
- **Motivation capture** — Maps "why this matters" into `<motivation>` so priority survives execution
- **Closed-loop quality** — Execute → Evaluate → Retry (Repromptception mode only — Single mode generates prompts, does not execute; max 2 retries, delta prompts)

---

## Contributing

Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

- 🐛 [Report a bug](https://github.com/aytuncyildizli/reprompter/issues/new?template=bug_report.md)
- 💡 [Request a feature](https://github.com/aytuncyildizli/reprompter/issues/new?template=feature_request.md)
- 📝 Submit a template PR

---

## License

MIT — see [LICENSE](LICENSE).

---

## Star History

<p align="center">
  <a href="https://www.star-history.com/#AytuncYildizli/reprompter&Date">
    <img src="https://api.star-history.com/svg?repos=AytuncYildizli/reprompter&type=Date" alt="Star History Chart" width="600">
  </a>
</p>

---

<p align="center">
  <sub>If RePrompter saved you from writing another messy prompt, consider giving it a ⭐</sub>
</p>

```

### _meta.json

```json
{
  "owner": "aytuncyildizli",
  "slug": "reprompter",
  "displayName": "RePrompter",
  "latest": {
    "version": "7.0.0",
    "publishedAt": 1771191889061,
    "commit": "https://github.com/openclaw/skills/commit/4b3c37df4b74257ceed0e9b6cdf0d17879461340"
  },
  "history": []
}

```

### assets/demo.svg

```svg
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 900 520" fill="none">
  <style>
    @keyframes fadeInLeft { from { opacity: 0; transform: translateX(-20px); } to { opacity: 1; transform: translateX(0); } }
    @keyframes fadeInRight { from { opacity: 0; transform: translateX(20px); } to { opacity: 1; transform: translateX(0); } }
    @keyframes arrowPulse { 0%, 100% { opacity: 0.4; } 50% { opacity: 1; } }
    @keyframes typewriter { from { width: 0; } to { width: 100%; } }
    @keyframes scoreCount { from { opacity: 0; transform: translateY(8px); } to { opacity: 1; transform: translateY(0); } }
    .left-panel { animation: fadeInLeft 0.6s ease-out; }
    .right-panel { animation: fadeInRight 0.6s ease-out 0.3s both; }
    .arrow { animation: arrowPulse 2s ease-in-out infinite; }
    .score-row:nth-child(1) { animation: scoreCount 0.4s ease-out 0.8s both; }
    .score-row:nth-child(2) { animation: scoreCount 0.4s ease-out 1.0s both; }
    .score-row:nth-child(3) { animation: scoreCount 0.4s ease-out 1.2s both; }
    .mono { font-family: 'SF Mono', 'Fira Code', 'Cascadia Code', Consolas, monospace; }
    .sans { font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; }
  </style>
  
  <!-- Background -->
  <rect width="900" height="520" rx="16" fill="#0d1117"/>
  
  <!-- ===== LEFT: Before ===== -->
  <g class="left-panel">
    <!-- Terminal window -->
    <rect x="24" y="24" width="390" height="340" rx="10" fill="#161b22" stroke="#30363d" stroke-width="1"/>
    <!-- Title bar -->
    <circle cx="44" cy="44" r="5" fill="#ff5f57"/>
    <circle cx="60" cy="44" r="5" fill="#febc2e"/>
    <circle cx="76" cy="44" r="5" fill="#28c840"/>
    <text x="200" y="48" class="mono" font-size="11" fill="#484f58" text-anchor="middle">rough-prompt.txt</text>
    
    <!-- Label -->
    <rect x="32" y="62" width="60" height="20" rx="4" fill="#da3633" opacity="0.15"/>
    <text x="62" y="76" class="mono" font-size="10" fill="#f85149" text-anchor="middle" font-weight="600">BEFORE</text>
    
    <!-- Messy prompt text -->
    <text x="40" y="106" class="mono" font-size="12" fill="#8b949e">
      <tspan x="40" dy="0">uhh build a crypto dashboard,</tspan>
      <tspan x="40" dy="20">maybe coingecko data, add</tspan>
      <tspan x="40" dy="20">caching, test it too, don't</tspan>
      <tspan x="40" dy="20">break existing api</tspan>
    </text>
    
    <!-- Score bars (low) -->
    <g transform="translate(40, 200)">
      <text x="0" y="12" class="mono" font-size="10" fill="#484f58">Clarity</text>
      <rect x="90" y="2" width="120" height="12" rx="3" fill="#21262d"/>
      <rect x="90" y="2" width="36" height="12" rx="3" fill="#da3633"/>
      <text x="218" y="12" class="mono" font-size="10" fill="#f85149">3/10</text>
      
      <text x="0" y="34" class="mono" font-size="10" fill="#484f58">Specificity</text>
      <rect x="90" y="24" width="120" height="12" rx="3" fill="#21262d"/>
      <rect x="90" y="24" width="24" height="12" rx="3" fill="#da3633"/>
      <text x="218" y="34" class="mono" font-size="10" fill="#f85149">2/10</text>
      
      <text x="0" y="56" class="mono" font-size="10" fill="#484f58">Structure</text>
      <rect x="90" y="46" width="120" height="12" rx="3" fill="#21262d"/>
      <rect x="90" y="46" width="12" height="12" rx="3" fill="#da3633"/>
      <text x="218" y="56" class="mono" font-size="10" fill="#f85149">1/10</text>
      
      <text x="0" y="78" class="mono" font-size="10" fill="#484f58">Constraints</text>
      <rect x="90" y="68" width="120" height="12" rx="3" fill="#21262d"/>
      <rect x="90" y="68" width="0" height="12" rx="3" fill="#da3633"/>
      <text x="218" y="78" class="mono" font-size="10" fill="#f85149">0/10</text>
      
      <text x="0" y="100" class="mono" font-size="10" fill="#484f58">Overall</text>
      <rect x="90" y="90" width="120" height="14" rx="3" fill="#21262d"/>
      <rect x="90" y="90" width="19" height="14" rx="3" fill="#da3633"/>
      <text x="218" y="102" class="mono" font-size="11" fill="#f85149" font-weight="700">1.6</text>
    </g>
  </g>
  
  <!-- ===== CENTER: Arrow ===== -->
  <g class="arrow" transform="translate(424, 170)">
    <rect x="0" y="0" width="52" height="36" rx="18" fill="#0969da" opacity="0.1"/>
    <path d="M12 18 L34 18" stroke="url(#arrowGrad)" stroke-width="3" stroke-linecap="round"/>
    <path d="M28 11 L36 18 L28 25" stroke="url(#arrowGrad)" stroke-width="3" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
  </g>
  <defs>
    <linearGradient id="arrowGrad" x1="0%" y1="0%" x2="100%" y2="0%">
      <stop offset="0%" style="stop-color:#0969da"/>
      <stop offset="100%" style="stop-color:#a371f7"/>
    </linearGradient>
  </defs>
  
  <!-- ===== RIGHT: After ===== -->
  <g class="right-panel">
    <rect x="486" y="24" width="390" height="340" rx="10" fill="#161b22" stroke="#30363d" stroke-width="1"/>
    <circle cx="506" cy="44" r="5" fill="#ff5f57"/>
    <circle cx="522" cy="44" r="5" fill="#febc2e"/>
    <circle cx="538" cy="44" r="5" fill="#28c840"/>
    <text x="680" y="48" class="mono" font-size="11" fill="#484f58" text-anchor="middle">reprompter-output.xml</text>
    
    <rect x="494" y="62" width="52" height="20" rx="4" fill="#238636" opacity="0.15"/>
    <text x="520" y="76" class="mono" font-size="10" fill="#3fb950" text-anchor="middle" font-weight="600">AFTER</text>
    
    <!-- Structured output -->
    <text x="502" y="100" class="mono" font-size="11">
      <tspan fill="#7c3aed">&lt;role&gt;</tspan>
    </text>
    <text x="514" y="116" class="mono" font-size="10.5" fill="#e6edf3">Senior full-stack engineer</text>
    <text x="502" y="132" class="mono" font-size="11"><tspan fill="#7c3aed">&lt;/role&gt;</tspan></text>
    
    <text x="502" y="156" class="mono" font-size="11"><tspan fill="#0969da">&lt;task&gt;</tspan></text>
    <text x="514" y="172" class="mono" font-size="10.5" fill="#e6edf3">Deliver real-time crypto</text>
    <text x="514" y="186" class="mono" font-size="10.5" fill="#e6edf3">dashboard with cache-aware</text>
    <text x="514" y="200" class="mono" font-size="10.5" fill="#e6edf3">CoinGecko proxy API...</text>
    <text x="502" y="216" class="mono" font-size="11"><tspan fill="#0969da">&lt;/task&gt;</tspan></text>
    
    <text x="502" y="240" class="mono" font-size="11"><tspan fill="#3fb950">&lt;constraints&gt;</tspan></text>
    <text x="514" y="256" class="mono" font-size="10.5" fill="#8b949e">No breaking API changes</text>
    <text x="514" y="270" class="mono" font-size="10.5" fill="#8b949e">Mock external boundaries</text>
    <text x="502" y="286" class="mono" font-size="11"><tspan fill="#3fb950">&lt;/constraints&gt;</tspan></text>
    
    <!-- Score bars (high) -->
    <g transform="translate(502, 300)">
      <text x="0" y="12" class="mono" font-size="10" fill="#484f58">Overall</text>
      <rect x="90" y="2" width="120" height="14" rx="3" fill="#21262d"/>
      <rect x="90" y="2" width="108" height="14" rx="3" fill="#238636"/>
      <text x="218" y="14" class="mono" font-size="11" fill="#3fb950" font-weight="700">9.0</text>
    </g>
  </g>
  
  <!-- ===== BOTTOM: Stats bar ===== -->
  <g transform="translate(0, 390)">
    <rect x="24" y="0" width="852" height="106" rx="10" fill="#161b22" stroke="#30363d" stroke-width="1"/>
    
    <!-- Stat boxes -->
    <g transform="translate(60, 20)">
      <text x="0" y="16" class="sans" font-size="32" fill="#58a6ff" font-weight="700">+462%</text>
      <text x="0" y="36" class="mono" font-size="11" fill="#484f58">quality improvement</text>
    </g>
    
    <line x1="240" y1="12" x2="240" y2="94" stroke="#21262d" stroke-width="1"/>
    
    <g transform="translate(280, 20)">
      <text x="0" y="16" class="sans" font-size="32" fill="#a371f7" font-weight="700">6</text>
      <text x="0" y="36" class="mono" font-size="11" fill="#484f58">quality dimensions</text>
    </g>
    
    <line x1="430" y1="12" x2="430" y2="94" stroke="#21262d" stroke-width="1"/>
    
    <g transform="translate(470, 20)">
      <text x="0" y="16" class="sans" font-size="32" fill="#3fb950" font-weight="700">11</text>
      <text x="0" y="36" class="mono" font-size="11" fill="#484f58">prompt templates</text>
    </g>
    
    <line x1="620" y1="12" x2="620" y2="94" stroke="#21262d" stroke-width="1"/>
    
    <g transform="translate(660, 20)">
      <text x="0" y="16" class="sans" font-size="32" fill="#f0883e" font-weight="700">Team</text>
      <text x="0" y="36" class="mono" font-size="11" fill="#484f58">multi-agent support</text>
    </g>

    <!-- Subtle tagline -->
    <text x="220" y="76" class="mono" font-size="10" fill="#30363d">Your prompt sucks. Let's fix that. → Interview → Score → Ship it.</text>
  </g>
</svg>
```

### assets/logo-dark.svg

```svg
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 520 120" fill="none">
  <defs>
    <linearGradient id="grad1" x1="0%" y1="0%" x2="100%" y2="100%">
      <stop offset="0%" style="stop-color:#58a6ff;stop-opacity:1" />
      <stop offset="100%" style="stop-color:#a371f7;stop-opacity:1" />
    </linearGradient>
  </defs>
  
  <!-- Icon: Terminal bracket with transform arrow -->
  <g transform="translate(10, 10)">
    <rect x="0" y="0" width="100" height="100" rx="12" fill="#0d1117" stroke="url(#grad1)" stroke-width="2.5"/>
    <circle cx="18" cy="16" r="4.5" fill="#ff5f57"/>
    <circle cx="32" cy="16" r="4.5" fill="#febc2e"/>
    <circle cx="46" cy="16" r="4.5" fill="#28c840"/>
    <rect x="14" y="36" width="30" height="3" rx="1.5" fill="#484f58" opacity="0.5"/>
    <rect x="14" y="44" width="22" height="3" rx="1.5" fill="#484f58" opacity="0.4"/>
    <rect x="14" y="52" width="28" height="3" rx="1.5" fill="#484f58" opacity="0.3"/>
    <path d="M48 46 L62 46" stroke="url(#grad1)" stroke-width="2.5" stroke-linecap="round"/>
    <path d="M57 41 L63 46 L57 51" stroke="url(#grad1)" stroke-width="2.5" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
    <rect x="68" y="36" width="20" height="3" rx="1.5" fill="#58a6ff"/>
    <rect x="68" y="44" width="16" height="3" rx="1.5" fill="#a371f7"/>
    <rect x="68" y="52" width="18" height="3" rx="1.5" fill="#58a6ff"/>
    <text x="14" y="78" font-family="monospace" font-size="14" fill="#484f58" opacity="0.5">~$</text>
    <text x="55" y="78" font-family="monospace" font-size="14" fill="url(#grad1)">&lt;/&gt;</text>
  </g>
  
  <text x="128" y="52" font-family="-apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif" font-size="42" font-weight="700" fill="#f0f6fc">
    Re<tspan fill="url(#grad1)">Prompter</tspan>
  </text>
  <text x="128" y="82" font-family="-apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif" font-size="15" fill="#8b949e" font-weight="400">
    Your prompt sucks. Let's fix that.
  </text>
</svg>
```

### assets/logo.svg

```svg
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 520 120" fill="none">
  <defs>
    <linearGradient id="grad1" x1="0%" y1="0%" x2="100%" y2="100%">
      <stop offset="0%" style="stop-color:#0969da;stop-opacity:1" />
      <stop offset="100%" style="stop-color:#7c3aed;stop-opacity:1" />
    </linearGradient>
  </defs>
  
  <!-- Icon: Terminal bracket with transform arrow -->
  <g transform="translate(10, 10)">
    <!-- Terminal window frame -->
    <rect x="0" y="0" width="100" height="100" rx="12" fill="#1a1b26" stroke="url(#grad1)" stroke-width="2.5"/>
    
    <!-- Terminal dots -->
    <circle cx="18" cy="16" r="4.5" fill="#ff5f57"/>
    <circle cx="32" cy="16" r="4.5" fill="#febc2e"/>
    <circle cx="46" cy="16" r="4.5" fill="#28c840"/>
    
    <!-- Messy input lines (left side, faded) -->
    <rect x="14" y="36" width="30" height="3" rx="1.5" fill="#565869" opacity="0.5"/>
    <rect x="14" y="44" width="22" height="3" rx="1.5" fill="#565869" opacity="0.4"/>
    <rect x="14" y="52" width="28" height="3" rx="1.5" fill="#565869" opacity="0.3"/>
    
    <!-- Transform arrow -->
    <path d="M48 46 L62 46" stroke="url(#grad1)" stroke-width="2.5" stroke-linecap="round"/>
    <path d="M57 41 L63 46 L57 51" stroke="url(#grad1)" stroke-width="2.5" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
    
    <!-- Clean output lines (right side, bright) -->
    <rect x="68" y="36" width="20" height="3" rx="1.5" fill="#0969da"/>
    <rect x="68" y="44" width="16" height="3" rx="1.5" fill="#7c3aed"/>
    <rect x="68" y="52" width="18" height="3" rx="1.5" fill="#0969da"/>
    
    <!-- XML bracket hints -->
    <text x="14" y="78" font-family="monospace" font-size="14" fill="#565869" opacity="0.5">~$</text>
    <text x="55" y="78" font-family="monospace" font-size="14" fill="url(#grad1)">&lt;/&gt;</text>
  </g>
  
  <!-- Text: RePrompter — dark text for light backgrounds -->
  <text x="128" y="52" font-family="-apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif" font-size="42" font-weight="700" fill="#1f2328">
    Re<tspan fill="url(#grad1)">Prompter</tspan>
  </text>
  
  <!-- Tagline -->
  <text x="128" y="82" font-family="-apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif" font-size="15" fill="#636c76" font-weight="400">
    Your prompt sucks. Let's fix that.
  </text>
</svg>
```

### assets/social-preview.svg

```svg
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1280 640" fill="none">
  <defs>
    <linearGradient id="bg" x1="0%" y1="0%" x2="100%" y2="100%">
      <stop offset="0%" style="stop-color:#0d1117"/>
      <stop offset="100%" style="stop-color:#161b22"/>
    </linearGradient>
    <linearGradient id="accent" x1="0%" y1="0%" x2="100%" y2="0%">
      <stop offset="0%" style="stop-color:#0969da"/>
      <stop offset="100%" style="stop-color:#a371f7"/>
    </linearGradient>
  </defs>
  
  <rect width="1280" height="640" fill="url(#bg)"/>
  
  <!-- Subtle grid pattern -->
  <g opacity="0.03">
    <line x1="0" y1="80" x2="1280" y2="80" stroke="#fff" stroke-width="1"/>
    <line x1="0" y1="160" x2="1280" y2="160" stroke="#fff" stroke-width="1"/>
    <line x1="0" y1="240" x2="1280" y2="240" stroke="#fff" stroke-width="1"/>
    <line x1="0" y1="320" x2="1280" y2="320" stroke="#fff" stroke-width="1"/>
    <line x1="0" y1="400" x2="1280" y2="400" stroke="#fff" stroke-width="1"/>
    <line x1="0" y1="480" x2="1280" y2="480" stroke="#fff" stroke-width="1"/>
    <line x1="0" y1="560" x2="1280" y2="560" stroke="#fff" stroke-width="1"/>
  </g>
  
  <!-- Terminal icon (large) -->
  <g transform="translate(540, 120)">
    <rect x="0" y="0" width="200" height="160" rx="20" fill="#161b22" stroke="url(#accent)" stroke-width="3"/>
    <circle cx="28" cy="24" r="7" fill="#ff5f57"/>
    <circle cx="50" cy="24" r="7" fill="#febc2e"/>
    <circle cx="72" cy="24" r="7" fill="#28c840"/>
    
    <!-- Messy → Clean -->
    <rect x="24" y="56" width="50" height="5" rx="2.5" fill="#484f58" opacity="0.5"/>
    <rect x="24" y="68" width="38" height="5" rx="2.5" fill="#484f58" opacity="0.4"/>
    <rect x="24" y="80" width="44" height="5" rx="2.5" fill="#484f58" opacity="0.3"/>
    
    <path d="M88 70 L112 70" stroke="url(#accent)" stroke-width="4" stroke-linecap="round"/>
    <path d="M105 60 L115 70 L105 80" stroke="url(#accent)" stroke-width="4" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
    
    <rect x="126" y="56" width="50" height="5" rx="2.5" fill="#58a6ff"/>
    <rect x="126" y="68" width="38" height="5" rx="2.5" fill="#a371f7"/>
    <rect x="126" y="80" width="44" height="5" rx="2.5" fill="#58a6ff"/>
    
    <text x="24" y="124" font-family="monospace" font-size="22" fill="#484f58">~$</text>
    <text x="110" y="124" font-family="monospace" font-size="22" fill="url(#accent)">&lt;/&gt;</text>
  </g>
  
  <!-- Title -->
  <text x="640" y="350" font-family="-apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif" font-size="72" font-weight="800" fill="#f0f6fc" text-anchor="middle">
    Re<tspan fill="url(#accent)">Prompter</tspan>
  </text>
  
  <!-- Tagline -->
  <text x="640" y="400" font-family="-apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif" font-size="26" fill="#e6edf3" text-anchor="middle" font-weight="500">
    Your prompt sucks. Let's fix that.
  </text>
  
  <!-- Stats -->
  <g transform="translate(0, 460)">
    <text x="280" y="30" font-family="monospace" font-size="18" fill="#58a6ff" text-anchor="middle" font-weight="700">+462% quality</text>
    <text x="280" y="52" font-family="monospace" font-size="13" fill="#484f58" text-anchor="middle">avg improvement</text>
    
    <text x="520" y="30" font-family="monospace" font-size="18" fill="#a371f7" text-anchor="middle" font-weight="700">6 dimensions</text>
    <text x="520" y="52" font-family="monospace" font-size="13" fill="#484f58" text-anchor="middle">quality scoring</text>
    
    <text x="760" y="30" font-family="monospace" font-size="18" fill="#3fb950" text-anchor="middle" font-weight="700">11 templates</text>
    <text x="760" y="52" font-family="monospace" font-size="13" fill="#484f58" text-anchor="middle">built-in</text>
    
    <text x="1000" y="30" font-family="monospace" font-size="18" fill="#f0883e" text-anchor="middle" font-weight="700">team mode</text>
    <text x="1000" y="52" font-family="monospace" font-size="13" fill="#484f58" text-anchor="middle">multi-agent</text>
  </g>
  
  <!-- Bottom accent line -->
  <rect x="0" y="634" width="1280" height="6" fill="url(#accent)"/>
</svg>
```

### scripts/create-past-releases.sh

```bash
#!/usr/bin/env bash
# Create GitHub Releases for all versions in CHANGELOG.md
# Run once after merging. Requires: gh CLI with repo write access.
# Usage: ./scripts/create-past-releases.sh [--dry-run]
#
# NOTE: Tags must already exist pointing to the correct commits.
# This script will NOT create tags — it only creates GitHub Releases.
# Create tags manually first:
#   git tag v5.0.0 <commit-hash>
#   git tag v6.0.0 <commit-hash>
#   git push origin --tags

set -euo pipefail

DRY_RUN=false
[[ "${1:-}" == "--dry-run" ]] && DRY_RUN=true

CHANGELOG="CHANGELOG.md"
VERSIONS=$(grep -E -o '^## v[0-9]+\.[0-9]+\.[0-9]+' "$CHANGELOG" | sed 's/^## v//')
FIRST_VERSION=$(echo "$VERSIONS" | head -n 1)

for VERSION in $VERSIONS; do
  TAG="v${VERSION}"

  # Safety: skip if tag doesn't exist (don't create tags on HEAD)
  if ! git rev-parse "$TAG" >/dev/null 2>&1; then
    echo "⚠️  Tag $TAG not found — skipping (create it manually on the correct commit first)"
    continue
  fi

  # Extract notes using awk with -v to avoid escaping issues
  NOTES_FILE=$(mktemp)
  awk -v ver="$VERSION" '
    $0 ~ "^## v" ver { found=1; next }
    /^## v[0-9]/ { if(found) exit }
    found { print }
  ' "$CHANGELOG" > "$NOTES_FILE"

  if [ ! -s "$NOTES_FILE" ]; then
    echo "Release ${TAG}" > "$NOTES_FILE"
  fi

  FLAGS=""
  [ "$VERSION" = "$FIRST_VERSION" ] && FLAGS="--latest"

  if $DRY_RUN; then
    echo "=== $TAG $FLAGS ==="
    head -n 3 "$NOTES_FILE"
    echo "---"
  else
    echo "Creating release $TAG..."
    gh release create "$TAG" --title "$TAG" --notes-file "$NOTES_FILE" $FLAGS 2>&1 || echo "  ⚠️  Skipped (may already exist)"
  fi

  rm -f "$NOTES_FILE"
done
echo "Done!"

```
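The awk section-extraction in the script above can be exercised in isolation. The mini changelog below (`/tmp/demo-changelog.md`) is an invented example for illustration, not a file from the repo:

```shell
# Hypothetical two-entry CHANGELOG to demonstrate the extraction.
cat > /tmp/demo-changelog.md <<'EOF'
## v6.0.0
- Added team mode
## v5.0.0
- Initial release
EOF

# Same awk program as in create-past-releases.sh: skip the matching
# "## v<ver>" header, print lines until the next version header.
notes=$(awk -v ver="6.0.0" '
  $0 ~ "^## v" ver { found=1; next }
  /^## v[0-9]/ { if(found) exit }
  found { print }
' /tmp/demo-changelog.md)

echo "$notes"   # -> - Added team mode
```

Passing the version via `awk -v` (rather than interpolating it into the program text) is what the header comment means by "avoid escaping issues": the shell never has to quote the version string inside the awk source.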

### scripts/package-skill.sh

```bash
#!/bin/bash
# Package reprompter skill for Claude.ai upload
# Excludes repo-level files per Anthropic's Skills Guide:
# "Don't include README.md inside your skill folder"

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_DIR="$(dirname "$SCRIPT_DIR")"
OUT="$REPO_DIR/reprompter-skill.zip"

cd "$REPO_DIR"

rm -f "$OUT"

zip -r "$OUT" . \
  -x ".git/*" \
  -x ".github/*" \
  -x "README.md" \
  -x "CONTRIBUTING.md" \
  -x "CHANGELOG.md" \
  -x "TESTING.md" \
  -x "LICENSE" \
  -x ".gitignore" \
  -x "assets/demo.*" \
  -x "assets/social-preview.*" \
  -x "scripts/create-past-releases.sh" \
  -x "scripts/package-skill.sh" \
  -x "reprompter-skill.zip"

echo "✅ Packaged to: $OUT"
echo "Contents:"
# Trim listing header/footer; avoid `head -n -2`, which BSD/macOS head rejects
unzip -l "$OUT" | tail -n +4 | sed -e '$d' | sed -e '$d'

```

### scripts/validate-templates.sh

```bash
#!/usr/bin/env bash
set -euo pipefail

REQUIRED_TAGS=(
  role
  context
  task
  motivation
  requirements
  constraints
  output_format
  success_criteria
)

TEMPLATE_DIR="docs/references"
EXCEPTION_TEMPLATE="team-brief-template.md"

if [[ ! -d "$TEMPLATE_DIR" ]]; then
  echo "ERROR: Template directory not found: $TEMPLATE_DIR"
  exit 1
fi

echo "Validating templates in $TEMPLATE_DIR"
echo "Required tags: ${REQUIRED_TAGS[*]}"
echo "Skipping explicit Markdown exception: $EXCEPTION_TEMPLATE"
echo

shopt -s nullglob
templates=("$TEMPLATE_DIR"/*.md)
shopt -u nullglob

if [[ ${#templates[@]} -eq 0 ]]; then
  echo "ERROR: No templates found in $TEMPLATE_DIR"
  exit 1
fi

checked=0
passed=0
failed=0

for template in "${templates[@]}"; do
  name="$(basename "$template")"

  if [[ "$name" == "$EXCEPTION_TEMPLATE" ]]; then
    echo "SKIP  $name (Markdown exception)"
    continue
  fi

  ((checked+=1))
  missing=()

  for tag in "${REQUIRED_TAGS[@]}"; do
    if ! grep -qi "<${tag}>" "$template"; then
      missing+=("$tag")
    fi
  done

  if [[ ${#missing[@]} -eq 0 ]]; then
    echo "PASS  $name"
    ((passed+=1))
  else
    echo "FAIL  $name"
    echo "      Missing tags: ${missing[*]}"
    ((failed+=1))
  fi
done

skipped=$(( ${#templates[@]} - checked ))

echo
if [[ "$failed" -eq 0 ]]; then
  echo "All $passed templates passed validation (checked: $checked, skipped: $skipped)."
  exit 0
else
  echo "$failed template(s) failed validation (checked: $checked, passed: $passed, skipped: $skipped)."
  exit 1
fi

```
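The per-template check in the loop above reduces to one case-insensitive `grep` per required tag. A standalone sketch against a hypothetical two-tag template (`/tmp/demo-template.md` is invented for illustration):

```shell
# Minimal template containing only <role> and <task>.
cat > /tmp/demo-template.md <<'EOF'
<role>Senior engineer</role>
<task>Do the thing</task>
EOF

# Collect every required tag the template is missing, as the validator does.
missing=""
for tag in role task constraints; do
  grep -qi "<${tag}>" /tmp/demo-template.md || missing="$missing $tag"
done

echo "missing:${missing}"   # -> missing: constraints
```

Because `grep -qi` matches case-insensitively, a template using `<ROLE>` would still pass; tighten to plain `grep -q` if tags must be lowercase.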
