
workflow-creator

Create workflow-* skills by composing existing skills into end-to-end chains. Turns a user idea into a workflow_spec.md SSOT (via workflow-brainstorm), discovers available skills locally + from skills.sh, and generates a new workflow-<slug>/ skill package. Use when you want to design a new workflow, chain multiple skills into a flow, or turn scattered atomic skills into a resumable plan-then-confirm workflow.

Packaged view

This page reorganizes the original catalog entry to surface fit, installability, and workflow context first. The original raw source appears below.

Stars: 322
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.3)
Composite score: 4.3
Best-practice grade: F (37.6)

Install command

npx @skill-hub/cli install heyvhuang-ship-faster-workflow-creator

Repository

Heyvhuang/ship-faster

Skill path: skills/workflow-creator

Open repository

Best for

Primary workflow: Design Product.

Technical facets: Full Stack, Designer.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: Heyvhuang.

This is a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install workflow-creator into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/Heyvhuang/ship-faster before adding workflow-creator to shared team environments
  • Use workflow-creator for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: workflow-creator
description: "Create workflow-* skills by composing existing skills into end-to-end chains. Turns a user idea into a workflow_spec.md SSOT (via workflow-brainstorm), discovers available skills locally + from skills.sh, and generates a new workflow-<slug>/ skill package. Use when you want to design a new workflow, chain multiple skills into a flow, or turn scattered atomic skills into a resumable plan-then-confirm workflow."
---

# Workflow Creator

Create a new `workflow-<slug>/` skill package that chains existing skills with Ship Faster-standard artifacts.

## Hard rules (do not skip)

- **Compose skills, don't copy them**: a workflow's job is orchestration. Do not paste long best-practice content from other skills into the workflow. Instead: map each step to a single existing skill and point to it.
- **One step = one skill**: every step in `workflow_spec.md` must map to exactly one skill (or a tiny, verifiable manual action).
- **Missing required skill = stop**: do not approximate a missing skill by rewriting its logic. When a required skill is missing locally, you must look up candidates on `https://skills.sh/`, suggest 2-3 options + install commands, and wait for the user.
- **Artifact-first, resumable**: state lives in `proposal.md`, `tasks.md`, `context.json` under `run_dir/`.
- **Plan-then-confirm execution**: generated workflows must write a plan first, then ask the user to confirm execution; the confirmation must be recorded under `tasks.md -> ## Approvals`.

## Defaults (Ship Faster standard)

- **Artifact store**: `runs/` by default, OpenSpec when `openspec/project.md` exists (unless overridden in `context.json`).
- **Required run files**: `proposal.md`, `tasks.md`, `context.json`.
- **Execution model**: plan first, then ask the user to confirm execution; on confirmation, record approval in `tasks.md`.
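
The artifact contract above can be sketched in a few lines of Python. This is an illustrative helper, not part of the skill: the file seeds and the `record_approval` function are assumptions, and the sketch simply appends approvals to the end of `tasks.md` (where `## Approvals` is the last section).

```python
from datetime import datetime, timezone
from pathlib import Path

REQUIRED_FILES = ("proposal.md", "tasks.md", "context.json")

def init_run(run_dir: Path) -> None:
    """Create the required run files if they do not exist yet."""
    run_dir.mkdir(parents=True, exist_ok=True)
    defaults = {
        "proposal.md": "# Proposal\n",
        "tasks.md": "## Checklist\n\n## Approvals\n",
        "context.json": "{}\n",
    }
    for name in REQUIRED_FILES:
        path = run_dir / name
        if not path.exists():
            path.write_text(defaults[name], encoding="utf-8")

def record_approval(run_dir: Path, scope: str) -> None:
    """Append a timestamped approval record to tasks.md after user confirmation."""
    tasks = run_dir / "tasks.md"
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    text = tasks.read_text(encoding="utf-8")
    tasks.write_text(text + f"\n- {stamp} approved: {scope}\n", encoding="utf-8")
```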

## Inputs (paths only)

- Optional: `repo_root` (skills repository root)
- Optional: `workflow_spec_path` (if the user already wrote a spec)

## Outputs

- New or updated workflow skill folder:
  - `workflow-<slug>/SKILL.md`
  - `workflow-<slug>/references/workflow-spec.md` (SSOT)

## References

- Prompt pack (verbatim prompts): [references/prompt-pack.md](references/prompt-pack.md)
- Workflow spec template (SSOT): [references/workflow-spec-template.md](references/workflow-spec-template.md)
- Example generated workflow skill: [references/example-workflow-skill.md](references/example-workflow-skill.md)
- Example workflow spec (valid): [references/example-workflow-spec.md](references/example-workflow-spec.md)

## Process

### 0) Resolve skills root (deterministic)

Resolve `skills_root` using this priority:

1. If user provides `repo_root`, use it.
2. Else search upward from the current working directory for:
   - `<dir>/skills/manifest.json` (monorepo layout) -> `skills_root = <dir>/skills/`
   - `<dir>/manifest.json` (skills-repo layout) -> `skills_root = <dir>/`

### 1) Create or load `workflow_spec.md` (SSOT)

If `workflow_spec_path` is not provided:

1. Call `workflow-brainstorm` first to converge on:
   - core goal (1 sentence)
   - acceptance criteria (3-7 bullets)
   - non-goals (1-5 bullets)
   - constraints (risk preference, timeline)
   - 5-10 real trigger phrases the user would say

2. **Skill Discovery (REQUIRED)** - before finalizing required_skills:

   a) **Local skills scan**: list all potentially relevant skills under `skills_root` (for globally installed skills this is typically `~/.claude/skills/`):
      ```bash
      ls -1 ~/.claude/skills/ | grep -E "(tool-|review-|workflow-)"
      ```
      Identify which local skills could serve steps in the workflow.

   b) **skills.sh lookup (MANDATORY)**: Fetch the leaderboard from `https://skills.sh/` and identify relevant skills:
      - Search for keywords related to the workflow goal
      - Note top skills by install count in relevant categories
      - Identify 3-5 potentially useful external skills

   c) **Present findings to user**:
      ```
      ## Skill Discovery Results
      
      ### Local Skills (available)
      | Skill | Relevance | Notes |
      |-------|-----------|-------|
      | tool-xxx | HIGH | ... |
      | review-yyy | MEDIUM | ... |
      
      ### External Skills (skills.sh)
      | Skill | Installs | Source | Relevance |
      |-------|----------|--------|-----------|
      | skill-name | 10K | owner/repo | ... |
      
      **Want to inspect any external skills before deciding?** (list numbers or "none")
      ```

   d) **If user wants to inspect**: fetch skill details from skills.sh page, show SKILL.md content, then ask again.

   e) **User confirms final skill selection**: only then proceed to write spec.

3. Write a `workflow_spec.md` using the template in `references/workflow-spec-template.md`.

### 2) Validate spec (deterministic)

Run:

```bash
python3 scripts/validate_workflow_spec.py /path/to/workflow_spec.md
```

Fix any validation errors before generating.

### 3) Resolve skill dependencies

1. Read `required_skills` / `optional_skills` from the spec.
2. Check which skills exist locally under `skills_root`.
3. If any required skill is missing:
   - Stop.
   - Use the prompt in `references/prompt-pack.md` to do a skills.sh lookup.
   - Suggest 2-3 candidates (links + why) and provide install command suggestions, but do not auto-install.
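
The local existence check in steps 1-2 can be sketched as follows. The `SKILL.md`-presence heuristic is an assumption; adapt it to your `skills_root` layout.

```python
from pathlib import Path

def check_required_skills(skills_root: Path, required: list[str]) -> list[str]:
    """Return the required skills that have no local SKILL.md under skills_root."""
    return [s for s in required if not (skills_root / s / "SKILL.md").is_file()]
```

If the returned list is non-empty, stop and run the skills.sh lookup prompt instead of approximating the missing skills.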

### 4) Generate/update `workflow-<slug>/`

1. Create `workflow-<slug>/` under `skills_root` if missing.
2. Write SSOT:
   - `workflow-<slug>/references/workflow-spec.md` (copy from the validated spec).
3. Generate/update `workflow-<slug>/SKILL.md`:
   - Frontmatter `name` must match directory name (`workflow-<slug>`).
   - Frontmatter `description` must embed the spec `triggers` (routing fuel).
   - Include Ship Faster artifact backend selection rules (`runs/` vs OpenSpec).
   - Include the plan-then-confirm execution policy:
     - Plan stage writes checklist to `tasks.md`.
     - Ask user to confirm start.
     - On confirmation, append an approval record under `tasks.md -> ## Approvals` (timestamp + scope) and start execution.
   - Map each chain step to one skill.
   - Keep the workflow concise: link to other skills for deep details.
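
The frontmatter requirements above can be sketched as a minimal generator. The helper name and the skeleton body are illustrative assumptions, but the LF-only write mirrors what the validator in step 5 enforces.

```python
from pathlib import Path

def write_workflow_skill_md(skill_dir: Path, description: str, triggers: list[str]) -> Path:
    """Write a minimal SKILL.md whose frontmatter name matches the directory
    name and whose description embeds the spec triggers (routing fuel)."""
    name = skill_dir.name  # must be workflow-<slug>
    desc = f"{description} Triggers: {', '.join(triggers)}."
    md = (
        "---\n"
        f"name: {name}\n"
        f'description: "{desc}"\n'
        "---\n\n"
        f"# {name}\n\n"
        "- Spec (SSOT): references/workflow-spec.md\n"
    )
    skill_dir.mkdir(parents=True, exist_ok=True)
    path = skill_dir / "SKILL.md"
    # newline="\n" forces LF line endings, which the validator requires.
    with open(path, "w", encoding="utf-8", newline="\n") as f:
        f.write(md)
    return path
```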

### 5) Basic validation (required)

Run:

```bash
python3 scripts/validate_skill_md.py /path/to/workflow-<slug>
```

If it fails, fix frontmatter/name/line endings until it passes.

Note: The repo also includes `skill-creator/scripts/quick_validate.py`, but it may require extra Python deps. Use the validator in this skill as the default.


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/prompt-pack.md

```markdown
# Prompt Pack: skill-workflow-creator

Use these prompts verbatim to reduce drift and missed steps.

## Prompt 1: Convert a vague idea into `workflow_spec.md` (SSOT)

```
You are creating a new workflow-* skill. Your output must be a `workflow_spec.md` that will become the SSOT.

Rules:
- Ask exactly ONE question at a time (prefer multiple-choice).
- Do NOT start generating a workflow folder until the spec is confirmed.
- MUST run Skill Discovery (Prompt 1.5) before finalizing required_skills.

Goal:
- Produce a complete `workflow_spec.md` using `references/workflow-spec-template.md`.

Must capture:
- slug (kebab-case)
- 5-10 triggers (real user phrases)
- required_skills + optional_skills (after Skill Discovery)
- Goal & Non-goals
- Skill Chain (each step maps to exactly ONE skill)
- Verification & Stop Rules

Stop when:
- the user has not confirmed the spec.
```

## Prompt 1.5: Skill Discovery (MANDATORY before finalizing spec)

```
Goal: Before writing workflow_spec.md, discover ALL potentially useful skills (local + external).

This step is REQUIRED, not optional. Do not skip even if you think you know all relevant skills.

## Step A: Scan Local Skills

List all skills under skills_root that might be relevant:
- ls ~/.claude/skills/ | grep -E "(tool-|review-|workflow-)"
- For each potentially relevant skill, note: name, purpose, relevance to this workflow

## Step B: Search skills.sh (MANDATORY)

Fetch https://skills.sh/ and identify relevant skills:
1. Look at the leaderboard for high-install skills in relevant categories
2. Search for keywords related to the workflow goal
3. Note 3-5 external skills that could be useful

Prefer reputable sources:
- vercel-labs/agent-skills (React, Next.js, Web design)
- anthropics/skills (general purpose)
- expo/skills (React Native)
- supabase/agent-skills (database)
- stripe/ai (payments)

## Step C: Present Findings to User

Format:
```md
## Skill Discovery Results

### Local Skills (already available)
| Skill | Relevance | Could serve step |
|-------|-----------|------------------|
| tool-xxx | HIGH | Step 2: ... |
| review-yyy | MEDIUM | Step 4: ... |

### External Skills (from skills.sh)
| Skill | Installs | Source | Relevance | Could serve step |
|-------|----------|--------|-----------|------------------|
| vercel-react-best-practices | 37K | vercel-labs/agent-skills | HIGH | Step 3: perf review |
| audit-website | 2K | squirrelscan/skills | MEDIUM | Step 2: full audit |

**Want to inspect any external skills before deciding?**
Enter skill numbers (e.g., "1, 3") or "none" to continue with local only.
```

## Step D: If User Wants to Inspect

For each requested skill:
1. Fetch the skill page from skills.sh (e.g., https://skills.sh/owner/repo/skill-name)
2. Show the SKILL.md content
3. After showing all requested, ask: "Add any of these to required_skills? (list numbers or 'none')"

## Step E: Confirm Final Selection

Present final skill selection:
```md
## Final Skill Selection

**Required skills:**
- [local] tool-design-style-selector
- [local] review-quality
- [NEW] vercel-react-best-practices (install: npx skills add vercel-labs/agent-skills --skill vercel-react-best-practices)

**Optional skills:**
- [local] tool-systematic-debugging

Confirm this selection? (y/n/adjust)
```

Only proceed to write workflow_spec.md after user confirms.
```

## Prompt 2: Validate the spec (deterministic)

```
Run:
python3 ~/.claude/skills/skill-workflow-creator/scripts/validate_workflow_spec.py "<workflow_spec_path>"

If validation fails:
- Do not continue.
- Fix the spec until it passes.
```

## Prompt 3: Local dependency check (required)

```
Given `workflow_spec.md`, list required_skills and optional_skills.

Check whether each required skill exists locally under the skills_root.

If any required skill is missing:
- Stop and run the skills.sh lookup prompt.
```

## Prompt 4: skills.sh lookup (only when a required skill is missing)

```
Goal: suggest 2-3 repos from https://skills.sh/ that likely contain the missing capability.

Inputs:
- missing_skills: <list>
- what capability is needed: <1 sentence>

Do:
1) Open https://skills.sh/ (leaderboard) and identify relevant skills.
2) Prefer reputable sources first (examples):
   - vercel-labs/agent-skills
   - anthropics/skills
   - expo/skills
3) For each suggestion, provide:
   - link to the skill page on skills.sh (if available)
   - repo to install
   - install command (do NOT run it):
     npx skills add <owner/repo>
   - why it matches the missing capability

Stop and ask the user to install before continuing.
```

## Prompt 5: Generate the workflow skill (compose, don't copy)

```
You are generating `workflow-<slug>/SKILL.md` from `workflow_spec.md`.

Hard rules:
- Workflows orchestrate skills; do not paste long best-practice content from other skills.
- For each step: reference the relevant skill and specify input/output artifacts (paths only).
- Execution model: plan-then-confirm.
- Missing required skill: stop (do not approximate).

SKILL.md must include:
- YAML frontmatter: name=workflow-<slug>, description that embeds triggers.
- Link to SSOT: references/workflow-spec.md
- Run_dir backend rules (runs vs OpenSpec) and required artifacts (proposal/tasks/context).
- Process:
  - Initialize run
  - Plan (write tasks.md checklist + verification)
  - Ask user to confirm
  - On confirm: write approval record to tasks.md -> ## Approvals
  - Execute by calling the step skills in order
  - Verify and persist evidence
```

## Prompt 6: Validate generated workflow skill (deterministic)

```
Run:
python3 ~/.claude/skills/skill-workflow-creator/scripts/validate_skill_md.py "<workflow_dir>"

If it fails:
- Fix frontmatter/name/line endings until it passes.
```

```

### references/workflow-spec-template.md

```markdown
---
slug: "<kebab-case-without-workflow-prefix>"
title: "<Human readable title>"
description: "<1-2 sentences: what this workflow achieves, for who>"

triggers:
  - "<real user phrase that should trigger this workflow>"
  - "<keyword cluster>"
  - "<...5-10 total>"

artifact_store: "auto"   # auto|runs|openspec
execution: "plan-then-confirm"

skills_sh_lookup: true
required_skills:
  - "workflow-brainstorm"
optional_skills: []

inputs:
  - name: "repo_root"
    kind: "path"
    required: true
    notes: "Project root"
  - name: "run_dir"
    kind: "path"
    required: false
    notes: "If provided, use it; otherwise resolve per artifact_store"

outputs:
  required:
    - "proposal.md"
    - "tasks.md"
    - "context.json"
  optional:
    - "design.md"
    - "evidence/"
    - "logs/"
---

# Workflow Spec: <title>

## Goal & Non-goals

### Goal

- <What "done" means, in 1 sentence>
- <3-7 acceptance criteria bullets>

### Non-goals

- <1-5 bullets: what we explicitly will not do>

### Constraints (optional)

- Timeline:
- Risk preference:
- Stack constraints:
- External access constraints (prod, billing, secrets):

## Skill Chain

Write steps as a chain. Each step must map to ONE skill (or a tiny, verifiable manual action).

### Step 0: Initialize run (Ship Faster artifact contract)

- Purpose: create/resume `proposal.md`, `tasks.md`, `context.json`
- Notes: OpenSpec auto-detect; never rely on chat history for resume

### Step 1: Brainstorm (when goal is vague)

- Skill: `workflow-brainstorm`
- Inputs (paths only): `repo_root`, `run_dir`
- Writes: `evidence/YYYY-MM-DD-<topic>-design.md`
- Outcome: acceptance criteria + non-goals + constraints

### Step 2: Plan the chain into `tasks.md`

- Skill: <your planning skill or "this workflow itself">
- Writes: `tasks.md` checklist + `## Approvals` placeholder

### Step 3+: Execution steps (each mapped to a skill)

For each step:

- Skill: `<skill-name>`
- Inputs (paths only):
- Output artifacts (paths only):
- Confirmation points (if any):
- Failure handling / fallback:

## Verification & Stop Rules

### Verification (minimum)

- <commands to run, or checks to perform>
- <where to write evidence: `evidence/...` and index it in `tasks.md`>

### Stop rules (hard)

- If verification fails: stop and run `tool-systematic-debugging` before more edits
- If a required skill is missing: stop and suggest candidates (skills.sh), do not improvise
- If an action is high-risk (data loss, billing, prod deploy): write an approval item in `tasks.md` and wait

### Confirm-to-execute policy (required)

- Default behavior: write plan first, then ask user "Start execution?"
- If user confirms in chat: begin execution and append an approval record under `tasks.md -> ## Approvals` (timestamp + scope)

```

### references/example-workflow-skill.md

```markdown
# Example: Generated Workflow Skill

This is an example `workflow-<slug>/SKILL.md` generated from a workflow spec.

```md
---
name: workflow-seo-schema-fix
description: "Plan-then-confirm workflow to audit and improve SEO + structured data for a web project. Use when the user asks for an SEO audit, technical SEO fixes, schema markup/JSON-LD, rich snippets, structured data, or indexing issues. Triggers: SEO audit, technical SEO, schema markup, JSON-LD, rich snippets."
---

# Workflow: SEO + Schema Fix (Plan -> Confirm -> Execute)

## Core principles

- Pass paths only (never paste large content into chat).
- Artifact-first and resumable: `proposal.md`, `tasks.md`, `context.json` are the resume surface.
- Confirmation points: plan first; for any high-risk action, record approval in `tasks.md`.

## Inputs (paths only)

- `repo_root`: project root (default ".")
- Optional: `run_dir` (if the user already has an active run)

## Outputs (written under `run_dir/`)

- Required: `proposal.md`, `tasks.md`, `context.json`
- Optional: `design.md`, `evidence/`, `logs/`

## Spec (SSOT)

- Read first: `references/workflow-spec.md`

## Run directory backend (Ship Faster standard)

Resolve the active `run_dir` deterministically:

1) If `context.json` sets `artifact_store: runs|openspec`, follow it.
2) Else if `openspec/project.md` exists under `repo_root`, use `openspec/changes/<change-id>/`.
3) Else use `runs/seo-schema-fix/active/<run_id>/`.

## Required skills

- `workflow-brainstorm` (spec clarification when goal is vague)
- `review-seo-audit` (diagnose)
- `tool-schema-markup` (implement structured data)
- `review-quality` (final quality/verdict)
- Optional fallback: `tool-systematic-debugging` (when build/tests fail)

If a required skill is missing locally:

- Stop.
- Suggest 2-3 candidates from `https://skills.sh/` (do not install automatically).

## Process

### 0) Initialize run (required)

- Create/resume `proposal.md`, `tasks.md`, `context.json`.
- In `tasks.md`, ensure sections exist:
  - `## Checklist`
  - `## Verification`
  - `## Approvals`
  - `## Evidence index`

### 1) Load spec and write plan (required)

- Open: `references/workflow-spec.md`
- Populate `proposal.md` and `tasks.md` with an executable checklist + verification.

### 2) Confirm -> execute (required)

- Ask the user: "Start execution?"
- When the user confirms in chat:
  - Append an approval record to `tasks.md -> ## Approvals` with timestamp + scope
  - Execute the checklist in small batches with verification evidence

### 3) Stop rules (hard)

- If verification fails: stop and run `tool-systematic-debugging` before more edits.
- If any action is high-risk (prod deploy, data loss, billing): write an explicit approval item in `tasks.md` and wait.
```

```

### references/example-workflow-spec.md

```markdown
---
slug: "seo-schema-fix"
title: "SEO + Schema Fix"
description: "Audit and improve SEO and schema markup for a web project."

triggers:
  - "seo audit"
  - "technical seo"
  - "fix schema markup"
  - "json-ld"
  - "rich snippets"

artifact_store: "auto"
execution: "plan-then-confirm"

skills_sh_lookup: true
required_skills:
  - "workflow-brainstorm"
  - "review-seo-audit"
  - "tool-schema-markup"
  - "review-quality"
optional_skills:
  - "tool-systematic-debugging"
---

# Workflow Spec: SEO + Schema Fix

## Goal & Non-goals

### Goal

- Produce an SEO audit report and a schema markup plan.
- Implement the schema markup changes safely with verification evidence.

### Non-goals

- Redesign the UI.
- Migrate frameworks (e.g., to Next.js) unless explicitly required.

## Skill Chain

### Step 0: Initialize run

- Create/resume `proposal.md`, `tasks.md`, `context.json`.

### Step 1: Diagnose

- Skill: `review-seo-audit`
- Output: `evidence/seo-audit.md`

### Step 2: Implement schema

- Skill: `tool-schema-markup`
- Output: code changes + `evidence/schema-changes.md`

### Step 3: Final quality pass

- Skill: `review-quality`
- Output: `evidence/review-quality.md`

## Verification & Stop Rules

### Verification

- Run project build/typecheck/tests per repo conventions.
- Record outcomes under `evidence/` and index them in `tasks.md`.

### Stop rules

- If verification fails: stop and run `tool-systematic-debugging` before more edits.
- If a required skill is missing: stop and suggest candidates (skills.sh), do not improvise.
- If an action is high-risk (prod deploy, data loss, billing): write an approval item in `tasks.md` and wait.

```

### scripts/validate_workflow_spec.py

```python
#!/usr/bin/env python3

import argparse
import re
from pathlib import Path


REQUIRED_KEYS = {
    "slug",
    "title",
    "description",
    "triggers",
    "artifact_store",
    "execution",
    "skills_sh_lookup",
    "required_skills",
}

ALLOWED_ARTIFACT_STORES = {"auto", "runs", "openspec"}


def _read_text_lf(path: Path) -> str:
    raw = path.read_bytes()
    if b"\r" in raw:
        raise ValueError("workflow_spec.md must use LF line endings (found CRLF/CR)")
    return raw.decode("utf-8")


def _extract_frontmatter(md: str) -> tuple[list[str], str]:
    if not md.startswith("---\n"):
        raise ValueError("Missing YAML frontmatter (expected starting ---)")

    lines = md.splitlines(keepends=False)
    # Find closing frontmatter fence.
    end_idx = None
    for i in range(1, len(lines)):
        if lines[i].strip() == "---":
            end_idx = i
            break
    if end_idx is None:
        raise ValueError("Invalid frontmatter block (missing closing ---)")

    fm_lines = lines[1:end_idx]
    body = "\n".join(lines[end_idx + 1 :]) + ("\n" if md.endswith("\n") else "")
    return fm_lines, body


def _strip_inline_comment(value: str) -> str:
    # Good-enough inline comment stripping for typical skill specs.
    # If the value starts with a quote, do not strip.
    v = value.strip()
    if not v or v[0] in ('"', "'"):
        return v
    return v.split("#", 1)[0].rstrip()


def _unquote(s: str) -> str:
    s = s.strip()
    if len(s) >= 2 and s[0] == s[-1] and s[0] in ('"', "'"):
        return s[1:-1]
    return s


def _parse_inline_list(value: str) -> list[str] | None:
    v = _strip_inline_comment(value).strip()
    if not (v.startswith("[") and v.endswith("]")):
        return None
    inner = v[1:-1].strip()
    if not inner:
        return []
    parts = [p.strip() for p in inner.split(",")]
    return [_unquote(p) for p in parts if p]


def _parse_frontmatter(fm_lines: list[str]) -> dict:
    out: dict = {}
    i = 0
    while i < len(fm_lines):
        line = fm_lines[i]
        i += 1

        if not line.strip() or line.lstrip().startswith("#"):
            continue

        # Top-level key: value
        m = re.match(r"^([A-Za-z0-9_-]+):\s*(.*)$", line)
        if not m:
            continue

        key = m.group(1)
        raw_value = m.group(2)

        # Inline list support: optional_skills: []
        inline_list = _parse_inline_list(raw_value)
        if inline_list is not None:
            out[key] = inline_list
            continue

        v = _strip_inline_comment(raw_value).strip()
        if v:
            if v in {"true", "false"}:
                out[key] = v == "true"
            else:
                out[key] = _unquote(v)
            continue

        # Block value (we only parse string lists for keys we care about)
        block_lines: list[str] = []
        while i < len(fm_lines):
            nxt = fm_lines[i]
            if re.match(r"^([A-Za-z0-9_-]+):\s*", nxt):
                break
            block_lines.append(nxt)
            i += 1

        items: list[str] = []
        for b in block_lines:
            m_item = re.match(r"^\s*-\s*(.*)$", b)
            if not m_item:
                continue
            item = _strip_inline_comment(m_item.group(1)).strip()
            if item:
                items.append(_unquote(item))

        out[key] = items

    return out


def _is_kebab(s: str) -> bool:
    if not isinstance(s, str):
        return False
    s = s.strip()
    if not s:
        return False
    if s.startswith("-") or s.endswith("-"):
        return False
    if "--" in s:
        return False
    return bool(re.match(r"^[a-z0-9]+(?:-[a-z0-9]+)*$", s))


def validate_workflow_spec(path: Path) -> list[str]:
    errors: list[str] = []
    md = _read_text_lf(path)
    fm_lines, body = _extract_frontmatter(md)
    fm = _parse_frontmatter(fm_lines)

    missing = sorted(REQUIRED_KEYS - set(fm.keys()))
    if missing:
        errors.append(f"Missing required frontmatter keys: {', '.join(missing)}")

    slug = fm.get("slug")
    if not isinstance(slug, str) or not _is_kebab(slug):
        errors.append("frontmatter.slug must be kebab-case (lowercase letters/digits/hyphens)")

    title = fm.get("title")
    if not isinstance(title, str) or not title.strip():
        errors.append("frontmatter.title must be a non-empty string")

    description = fm.get("description")
    if not isinstance(description, str) or not description.strip():
        errors.append("frontmatter.description must be a non-empty string")

    triggers = fm.get("triggers")
    if not isinstance(triggers, list) or not triggers:
        errors.append("frontmatter.triggers must be a non-empty list of strings")
    else:
        clean = [t for t in triggers if isinstance(t, str) and t.strip()]
        if len(clean) != len(triggers):
            errors.append("frontmatter.triggers items must be non-empty strings")
        if len(clean) < 5 or len(clean) > 10:
            errors.append("frontmatter.triggers must contain 5-10 items")

    artifact_store = fm.get("artifact_store")
    if not isinstance(artifact_store, str) or artifact_store.strip() not in ALLOWED_ARTIFACT_STORES:
        allowed = ", ".join(sorted(ALLOWED_ARTIFACT_STORES))
        errors.append(f"frontmatter.artifact_store must be one of: {allowed}")

    execution = fm.get("execution")
    if execution != "plan-then-confirm":
        errors.append('frontmatter.execution must be "plan-then-confirm"')

    skills_sh_lookup = fm.get("skills_sh_lookup")
    if not isinstance(skills_sh_lookup, bool):
        errors.append("frontmatter.skills_sh_lookup must be a boolean")

    required_skills = fm.get("required_skills")
    if not isinstance(required_skills, list) or not required_skills:
        errors.append("frontmatter.required_skills must be a non-empty list of strings")
    else:
        bad = [x for x in required_skills if not isinstance(x, str) or not x.strip()]
        if bad:
            errors.append("frontmatter.required_skills items must be non-empty strings")

    optional_skills = fm.get("optional_skills")
    if optional_skills is not None:
        if not isinstance(optional_skills, list):
            errors.append("frontmatter.optional_skills must be a list of strings")
        else:
            bad = [x for x in optional_skills if not isinstance(x, str) or not x.strip()]
            if bad:
                errors.append("frontmatter.optional_skills items must be non-empty strings")

    required_headings = [
        "## Goal & Non-goals",
        "## Skill Chain",
        "## Verification & Stop Rules",
    ]
    for h in required_headings:
        if h not in body:
            errors.append(f"Missing required section heading: {h}")

    return errors


def main() -> int:
    parser = argparse.ArgumentParser(description="Validate a workflow_spec.md against the workflow-creator contract")
    parser.add_argument("spec_path", help="Path to workflow_spec.md")
    args = parser.parse_args()

    spec_path = Path(args.spec_path).resolve()
    if not spec_path.exists():
        print(f"ERROR: file not found: {spec_path}")
        return 1
    if not spec_path.is_file():
        print(f"ERROR: not a file: {spec_path}")
        return 1

    try:
        errors = validate_workflow_spec(spec_path)
    except Exception as e:
        print(f"ERROR: {e}")
        return 1

    if errors:
        for e in errors:
            print(f"ERROR: {e}")
        return 1

    print("OK: workflow_spec.md is valid")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

```

### scripts/validate_skill_md.py

```python
#!/usr/bin/env python3

import argparse
import re
from pathlib import Path


def _read_text_lf(path: Path) -> str:
    raw = path.read_bytes()
    if b"\r" in raw:
        raise ValueError("SKILL.md must use LF line endings (found CRLF/CR)")
    return raw.decode("utf-8")


def _extract_frontmatter(md: str) -> dict:
    if not md.startswith("---\n"):
        raise ValueError("Missing YAML frontmatter (expected starting ---)")
    lines = md.splitlines(keepends=False)
    end_idx = None
    for i in range(1, len(lines)):
        if lines[i].strip() == "---":
            end_idx = i
            break
    if end_idx is None:
        raise ValueError("Invalid frontmatter block (missing closing ---)")

    fm_lines = lines[1:end_idx]
    out: dict = {}
    for line in fm_lines:
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        m = re.match(r"^([A-Za-z0-9_-]+):\s*(.*)$", line)
        if not m:
            continue
        key = m.group(1)
        value = m.group(2).strip()
        # Minimal scalar parsing only.
        if len(value) >= 2 and value[0] == value[-1] and value[0] in ('"', "'"):
            value = value[1:-1]
        out[key] = value
    return out


def _is_kebab(s: str) -> bool:
    if not isinstance(s, str):
        return False
    s = s.strip()
    if not s:
        return False
    if s.startswith("-") or s.endswith("-"):
        return False
    if "--" in s:
        return False
    return bool(re.match(r"^[a-z0-9]+(?:-[a-z0-9]+)*$", s))


def validate_skill(path: Path) -> list[str]:
    errors: list[str] = []
    if path.is_dir():
        skill_dir = path
        skill_md = path / "SKILL.md"
    else:
        skill_md = path
        skill_dir = path.parent

    if not skill_md.exists():
        return [f"SKILL.md not found: {skill_md}"]

    md = _read_text_lf(skill_md)
    fm = _extract_frontmatter(md)

    name = fm.get("name")
    if not isinstance(name, str) or not name.strip():
        errors.append("frontmatter.name must be a non-empty string")
    else:
        name = name.strip()
        if not _is_kebab(name):
            errors.append("frontmatter.name must be kebab-case (lowercase letters/digits/hyphens)")
        if len(name) > 64:
            errors.append("frontmatter.name must be <= 64 characters")
        if name != skill_dir.name:
            errors.append(f"frontmatter.name '{name}' must match directory name '{skill_dir.name}'")

    description = fm.get("description")
    if not isinstance(description, str) or not description.strip():
        errors.append("frontmatter.description must be a non-empty string")
    else:
        if len(description.strip()) > 1024:
            errors.append("frontmatter.description must be <= 1024 characters")

    return errors


def main() -> int:
    parser = argparse.ArgumentParser(description="Validate a skill folder's SKILL.md (no external deps)")
    parser.add_argument("path", help="Path to skill directory or SKILL.md")
    args = parser.parse_args()

    p = Path(args.path).resolve()
    if not p.exists():
        print(f"ERROR: not found: {p}")
        return 1

    try:
        errors = validate_skill(p)
    except Exception as e:
        print(f"ERROR: {e}")
        return 1

    if errors:
        for e in errors:
            print(f"ERROR: {e}")
        return 1

    print("OK: skill SKILL.md is valid")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

```