
writing-job-descriptions

Write outcome-based, high-signal job descriptions and role scorecards that attract the right candidates and filter the wrong ones. Use for job description, job posting, job ad, role scorecard, hiring brief. Category: Hiring & Teams.

Packaged view

This page reorganizes the original catalog entry to foreground fit, installability, and workflow context. The original raw source appears below.

Stars: 35
Hot score: 90
Updated: March 20, 2026
Overall rating: C (4.3)
Composite score: 4.3
Best-practice grade: A (92.0)

Install command

npx @skill-hub/cli install liqiongyu-lenny-skills-plus-writing-job-descriptions

Repository

liqiongyu/lenny_skills_plus

Skill path: skills/writing-job-descriptions


Open repository

Best for

Primary workflow: Write Technical Docs.

Technical facets: Full Stack, Tech Writer.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: liqiongyu.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install writing-job-descriptions into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/liqiongyu/lenny_skills_plus before adding writing-job-descriptions to shared team environments
  • Use writing-job-descriptions for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 1.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: "writing-job-descriptions"
description: "Write outcome-based, high-signal job descriptions and role scorecards that attract the right candidates and filter the wrong ones. Use for job description, job posting, job ad, role scorecard, hiring brief. Category: Hiring & Teams."
---

# Writing Job Descriptions (Outcome-Based)

## Scope

**Covers**
- Turning a vague “we should hire X” into a clear **role outcome + success scorecard**
- Defining **competency spikes** (major/minor) instead of a generic laundry list
- Writing a **high-signal job description** that is honest about context (pace, constraints, trade-offs)
- Building a lightweight **iteration loop** to improve the JD after real candidate conversations

**When to use**
- “Write a job description / job posting for …”
- “Create a role scorecard / success profile for a new hire.”
- “Make this JD more high-signal (it’s generic and attracting everyone).”
- “Rewrite our JD around outcomes instead of responsibilities.”

**When NOT to use**
- You haven’t decided whether to hire versus restructure, contract out, or automate (do org planning first)
- You need a full interview loop / evaluation rubric / hiring process design (separate workstream)
- You need legal/HR review for compliance wording (this skill is not legal advice)

## Inputs

**Minimum required**
- Role title + level + function (e.g., “Senior Product Designer”, “Staff Backend Engineer”)
- Team/context (what you build; who the role reports to; key partners)
- Why hire now + the “progress” this role must create
- Success definition: 3–6 outcomes for **12 months after start**
- Working model + constraints (remote/hybrid, time zones, travel, on-call, pace)

**Missing-info strategy**
- Ask up to 5 questions from [references/INTAKE.md](references/INTAKE.md).
- If answers aren’t available, proceed with explicit assumptions and offer 2 versions: **conservative/inclusive** and **high-intensity/polarizing** (if appropriate).

## Outputs (deliverables)

Produce a **Job Description Pack** in Markdown (in-chat; or as files if requested):

1) **Context snapshot**
2) **Role scorecard:** success at 12 months (+ optional 30/60/90)
3) **Competency spike map:** majors/minors + “evidence of strength”
4) **Job description draft (public):** outcome-based, high-signal
5) **Filters:** who will thrive / who should not apply (honest, non-discriminatory)
6) **Iteration plan + version log:** what to test and how to update after candidate conversations
7) **Risks / Open questions / Next steps** (always included)

Templates: [references/TEMPLATES.md](references/TEMPLATES.md)  
Expanded guidance: [references/WORKFLOW.md](references/WORKFLOW.md)

## Workflow (7 steps)

### 1) Intake + constraints (don’t start writing yet)
- **Inputs:** user request; [references/INTAKE.md](references/INTAKE.md).
- **Actions:** Clarify role, level, “why now”, constraints (location, pace, comp bands if available), and what “good” looks like. Identify what you can/can’t say publicly.
- **Outputs:** Context snapshot + assumptions/unknowns list.
- **Checks:** You can state in one sentence: “We are hiring X to achieve Y by Z.”

### 2) Define “success 12 months later” (scorecard)
- **Inputs:** business goals, current pains, manager expectations.
- **Actions:** Write 3–6 outcomes that would make you “clink champagne” in 12 months. Add measurable indicators where possible.
- **Outputs:** Role scorecard (12-month success).
- **Checks:** Outcomes describe business impact and shipped/owned artifacts, not just activities.

### 3) Decide the competency spikes (major/minor)
- **Inputs:** role scorecard.
- **Actions:** Choose 1 **major** spike and 1–2 **minor** spikes. Define what “strong” looks like and how to recognize it (work samples, narratives, portfolio, shipped systems).
- **Outputs:** Competency spike map.
- **Checks:** Spikes explain why a generalist won’t work; each spike ties to at least one 12-month outcome.

### 4) Translate outcomes into responsibilities (progress over laundry lists)
- **Inputs:** scorecard + spikes.
- **Actions:** Convert outcomes into 6–10 responsibilities phrased as progress (“Own X end-to-end”, “Reduce Y from A→B”) rather than “attend meetings”. Remove arbitrary requirements.
- **Outputs:** Responsibilities section draft.
- **Checks:** Every responsibility maps to at least one outcome; anything that doesn’t map is cut or re-justified.

### 5) Add the “truth” section (high-signal + filtering)
- **Inputs:** team reality: pace, constraints, trade-offs.
- **Actions:** Write a candid “How we work / What’s hard here” section and a “Who will thrive / Who won’t” filter. Use polarizing clarity without illegal/discriminatory language.
- **Outputs:** Context truth + filters.
- **Checks:** A candidate can self-select in/out; claims are honest and specific (not hype).

### 6) Draft the public job description (clean, inclusive, skimmable)
- **Inputs:** templates; company/role basics.
- **Actions:** Assemble a complete JD using [references/TEMPLATES.md](references/TEMPLATES.md). Keep requirements minimal; separate must-haves vs nice-to-haves; avoid jargon and bias.
- **Outputs:** JD draft (public).
- **Checks:** In 90 seconds, a qualified candidate can answer: “What will I accomplish? Why here? What do I need to be great at?”

### 7) Iterate + quality gate + finalize pack
- **Inputs:** JD draft; any candidate feedback; hiring manager review.
- **Actions:** Propose what to test (which section is failing: attract vs filter). Create an iteration log. Run [references/CHECKLISTS.md](references/CHECKLISTS.md) and score with [references/RUBRIC.md](references/RUBRIC.md). Add Risks/Open questions/Next steps.
- **Outputs:** Final Job Description Pack.
- **Checks:** The pack is internally aligned and externally high-signal; unknowns are explicit; iteration triggers are defined.

## Quality gate (required)
- Use [references/CHECKLISTS.md](references/CHECKLISTS.md) and [references/RUBRIC.md](references/RUBRIC.md).
- Always include: **Risks**, **Open questions**, **Next steps**.

## Examples

**Example 1 (Startup, high-pace):** “Write a job description for a founding Product Designer for a seed-stage B2B AI tool. We need someone who can ship end-to-end in ambiguity. Include success at 12 months and a candid ‘what’s hard here’ section.”  
Expected: clear 12-month outcomes, a design-major spike, honest pace/constraints, and filters that self-select the wrong candidates out.

**Example 2 (Scale-up, specialized spike):** “Create a role scorecard + job posting for a Staff Backend Engineer owning reliability for a high-traffic API. Emphasize systems thinking and incident ownership.”  
Expected: outcome-based responsibilities tied to reliability outcomes, plus a clear major spike (operational excellence) and measurable success criteria.

**Boundary example:** “Write a JD for a ‘rockstar generalist’ to ‘do whatever is needed’ (no outcomes).”  
Response: refuse to invent a laundry list; run intake, define 12-month outcomes and spikes first, then draft.


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/INTAKE.md

```markdown
# Intake (Writing Job Descriptions)

Ask **up to 5 questions at a time**. If answers aren’t available, proceed with explicit assumptions and label unknowns.

## Minimum intake (pick the best 5)
1) **What role are we hiring and at what level?** (title, level, function)
2) **Why hire now?** What’s the business problem/opportunity and what “progress” must this hire create?
3) **What does success look like 12 months after start?** List 3–6 outcomes (with any measurable indicators).
4) **What should this person major in (1) and minor in (1–2)?** (the “spikes” you most need)
5) **What’s the real operating context?** (pace, ambiguity, constraints, team size, manager style, collaboration model)

## Helpful details (as needed)
- Role scope boundaries: what’s explicitly **out of scope**?
- Location/remote requirements, time zones, travel, on-call expectations (if any)
- Compensation range/band (if available and appropriate to share publicly)
- What can/can’t be said publicly? (stealth, sensitive roadmap, customer names)
- What’s currently wrong with your JD? (too generic, wrong candidates, low volume, misalignment, etc.)
- Do you want 1 draft or 2 variants (conservative/inclusive vs more polarizing)?

## If the user can’t answer
Proceed with:
- A **draft scorecard** with 3–5 placeholder outcomes (explicitly labeled assumptions)
- A default spike structure: **1 major + 2 minors** (based on role title)
- Two JD variants:
  - **Variant A:** inclusive + broadly appealing (still outcome-based)
  - **Variant B:** more candid/polarizing about constraints and pace (only if appropriate)


```

### references/TEMPLATES.md

```markdown
# Templates (Copy/Paste)

Use these templates to produce the **Job Description Pack**.

## 0) Context snapshot
- Company / product:
- Team:
- Role title + level:
- Reports to:
- Why hire now:
- 12-month success headline (1 sentence):
- Working model (remote/hybrid/on-site) + location/time zone:
- Constraints (travel/on-call/security/clearance/etc.):
- What can’t be said publicly:
- Assumptions / unknowns:

## 1) Role scorecard (12 months later)

### Role mission (1 sentence)

### Success at 12 months (3–6 outcomes)
1) Outcome:
   - Evidence/artifacts:
   - Metrics/indicators (if any):
2) …

### Key stakeholders + interfaces
- …

### Scope boundaries (in / out)
**In scope**
- …

**Out of scope**
- …

### Optional: 30/60/90 sketch
- 30 days:
- 60 days:
- 90 days:

## 2) Competency spike map (major/minor)

Choose 1 major and 1–2 minors.

| Spike | Major/Minor | What “strong” looks like | Evidence to look for | Common anti-signals |
|---|---|---|---|---|
|  |  |  |  |  |

## 3) Job description (public) template

### Title

### About <Company>
(1–3 sentences: what you do, for whom, why it matters)

### Why this role exists
(the “progress” the role creates; 2–4 bullets)

### Success in 12 months
- …

### What you’ll do (responsibilities)
- …

### What we’re looking for (spikes)
**You should major in**
- …

**You should minor in**
- …

**Nice-to-haves**
- …

### How we work (truths + constraints)
- Pace/cadence:
- Ambiguity:
- Decision-making:
- Cross-functional:
- On-call/travel/time zones (if any):

### Who will thrive here
- …

### Who should not apply
- …

### Logistics
- Location / remote policy:
- Compensation range (optional/if allowed):
- Benefits (optional):

### Hiring process (optional)
- Step 1:
- Step 2:
- Step 3:

### Equal opportunity / accommodations (placeholder)
Include your standard EEO statement and accommodations language, reviewed by HR/legal as needed.

## 4) Iteration plan + version log

### What we’ll test
- Attract the right candidates: (which section + how we’ll know)
- Filter out the wrong candidates: (which section + how we’ll know)
- Reduce misalignment: (which section + how we’ll know)

### Iteration triggers
- Review after: (e.g., 20 applicants or 10 screens)
- Owners:
- Decision cadence:

### Version log
| Version | Date | Change | Why (evidence) | Result to watch |
|---|---|---|---|---|
| v0 |  |  |  |  |


```
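The iteration triggers in the template above ("review after 20 applicants or 10 screens") amount to a simple either/or threshold check. As a minimal sketch (the function name and default thresholds are illustrative, taken from the template's example numbers, not part of the skill itself):

```python
def should_review_jd(applicants: int, screens: int,
                     applicant_threshold: int = 20,
                     screen_threshold: int = 10) -> bool:
    """Return True once either iteration trigger from the template is hit.

    Defaults mirror the template's example: review after 20 applicants
    or 10 screens, whichever comes first.
    """
    return applicants >= applicant_threshold or screens >= screen_threshold
```

The "or" is deliberate: either signal alone is enough evidence to revisit the JD, so the earlier trigger wins.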

### references/WORKFLOW.md

```markdown
# Expanded Workflow (Writing Job Descriptions)

Use this when you need more guidance than `SKILL.md` provides.

## Principles (what makes a JD “high-signal”)
1) **Outcome-first:** start with what changes in 12 months, not a list of tasks.
2) **Spikes, not soup:** define the 1–3 capabilities that truly matter (major/minor).
3) **Progress over bureaucracy:** responsibilities should describe progress and ownership, not meeting attendance.
4) **Honest filtering:** tell the truth about constraints (pace, ambiguity, rigor) so candidates can self-select.
5) **Iterate like a product:** a JD is a living document; improve it based on candidate conversations.

## Suggested workflow (7 steps)

### Step 1 — Intake and “public vs private”
Goal: avoid writing a perfect JD for the wrong role.
- Decide what can be published vs internal-only.
- Capture constraints that affect candidate fit: time zones, travel, on-call, in-office days, pace, ambiguity.

### Step 2 — Create the role scorecard (12 months later)
Goal: make the role measurable and screenable.
- Write 3–6 outcomes that describe business impact and shipped/owned artifacts.
- Prefer “verbs + objects + impact”: “Reduce X from A→B”, “Ship Y that enables Z”.
- Add 2–5 leading indicators (optional) and a 30/60/90 sketch (optional).

### Step 3 — Define competency spikes (major/minor)
Goal: avoid “want everything” hiring.
- Pick 1 major spike and 1–2 minor spikes.
- For each spike: define what “strong” looks like (examples) and what “weak” looks like (anti-signals).
- Add “evidence” hints (portfolio, narratives, shipped systems, references).

### Step 4 — Convert outcomes to responsibilities
Goal: remove arbitrary lists.
- For each 12-month outcome, add 1–2 responsibilities that describe ownership.
- Cap at 6–10 responsibilities total.
- Move “nice-to-have” tasks into a short “You might also…” section (optional).

### Step 5 — Add truth + filters (polarizing clarity, safely)
Goal: the JD should repel the wrong candidates.
- Create a “How we work” section that is candid and specific (cadences, ambiguity, decision-making, pace).
- Add “Who will thrive” and “Who should not apply” sections focused on **work preferences and role fit**, not identity traits.

Examples of safe, specific truth statements:
- “You’ll often work from problem statements, not detailed specs.”
- “We ship weekly and expect you to own outcomes, not just tasks.”
- “This role involves being on-call for incidents ~1 week/month.” (if true)

Avoid:
- “Young”, “native”, “digital native”, “rockstar”, “must work long hours”, or any protected-class proxy language.

### Step 6 — Draft the public JD (skimmable)
Goal: a qualified candidate should understand the role in <2 minutes.
- Use clear headings and bullets.
- Separate **must-haves** vs **nice-to-haves**.
- Keep requirements tight; reduce “years of experience” unless truly necessary.
- If sharing compensation is possible/required, include a range and note how leveling works.

### Step 7 — Iterate (treat it like a product)
Goal: improve the JD based on real signal.
- Add an iteration trigger: “review after 10–15 screens or 20 applicants”.
- Track what’s failing:
  - Low volume → opening paragraphs and distribution, not more requirements
  - Wrong fit → spikes, filters, and “what’s hard here”
  - Misaligned expectations → 12-month success section and constraints
- Keep a version log with what changed and why.


```

### references/CHECKLISTS.md

```markdown
# Checklists

Use these to validate the Job Description Pack before finalizing.

## A) Role clarity checklist
- [ ] Role title, level, reporting line, and team context are explicit
- [ ] “Why hire now” and the role’s purpose are stated in plain language
- [ ] Public vs internal-only info is clearly separated (no accidental leakage)

## B) 12-month outcomes checklist (scorecard)
- [ ] Includes 3–6 outcomes for **12 months after start**
- [ ] Outcomes describe business impact and owned artifacts (not generic activities)
- [ ] At least 1–2 measurable indicators exist (or are clearly labeled “TBD”)
- [ ] Scope boundaries are explicit (in/out)

## C) Spike specificity checklist (major/minor)
- [ ] Exactly 1 **major** spike and 1–2 **minor** spikes are defined
- [ ] Each spike ties to at least one 12-month outcome
- [ ] “What strong looks like” is concrete (examples), not buzzwords
- [ ] Must-haves vs nice-to-haves are separated (no “must have everything”)

## D) High-signal + filtering checklist
- [ ] Responsibilities are phrased as ownership/progress, not meeting lists
- [ ] “How we work / what’s hard here” is candid and specific
- [ ] “Who will thrive / who should not apply” enables self-selection
- [ ] Filters are about role fit (work preferences, environment), not identity traits

## E) Inclusivity + compliance checklist (non-legal)
- [ ] Avoids biased or exclusionary language (e.g., “rockstar”, “native”, protected-class proxies)
- [ ] Avoids unverifiable hype (“best-in-class”, “once-in-a-lifetime”) unless substantiated
- [ ] Includes (or flags for addition) an EEO / accommodations statement consistent with company policy
- [ ] Location/remote and any required constraints (travel/on-call/clearance) are stated truthfully

## F) Iteration readiness checklist
- [ ] Includes an iteration trigger (review after N applicants/screens)
- [ ] Version log exists with what will change and what signal will drive the change
- [ ] Includes **Risks**, **Open questions**, **Next steps**


```

### references/RUBRIC.md

```markdown
# Rubric (1–5)

Score the Job Description Pack. A strong pack is typically **≥ 22/30** with no category below 3.

## 1) Outcome clarity (1–5)
- **1:** No 12-month outcomes; mostly tasks and traits.
- **3:** Outcomes exist but are vague or not tied to business impact.
- **5:** Outcomes are specific, measurable where possible, and clearly tied to business impact + owned artifacts.

## 2) Spike definition (1–5)
- **1:** Laundry list of skills; no prioritization.
- **3:** Some prioritization but unclear what “strong” looks like.
- **5:** 1 major + 1–2 minors are crisp, evidence-backed, and directly linked to outcomes.

## 3) Signal & differentiation (1–5)
- **1:** Generic JD could apply anywhere.
- **3:** Some company context, but role still feels template-y.
- **5:** Clear “why here / why now”, concrete constraints, and specific challenges that attract the right candidates.

## 4) Honesty & filtering (1–5)
- **1:** Oversells or hides constraints; no filters.
- **3:** Some truth statements but weak self-selection.
- **5:** Candid, specific “what’s hard” + “who will thrive/won’t” that helps candidates self-select without bias.

## 5) Inclusivity & compliance hygiene (1–5)
- **1:** Includes biased language or ambiguous requirements; missing basic guardrails.
- **3:** Mostly clean but still has jargon/bias risk or unclear constraints.
- **5:** Clear, inclusive language; constraints stated truthfully; EEO/accommodations placeholder included; avoids protected-class proxies.

## 6) Iteration readiness (1–5)
- **1:** Treated as a one-and-done document.
- **3:** Mentions iteration but no triggers or tracking.
- **5:** Has iteration triggers, owners, and a version log tied to candidate signal.


```
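The rubric's pass rule (total ≥ 22/30 with no category below 3) can be sketched as a small scoring check. The category keys and function name below are illustrative; only the thresholds and the six categories come from the rubric itself:

```python
# The six rubric categories, as slugs (names are an illustrative encoding
# of the headings in RUBRIC.md).
CATEGORIES = [
    "outcome_clarity",
    "spike_definition",
    "signal_differentiation",
    "honesty_filtering",
    "inclusivity_compliance",
    "iteration_readiness",
]

def passes_quality_gate(scores: dict) -> bool:
    """Return True if the pack scores >= 22/30 with no category below 3."""
    if set(scores) != set(CATEGORIES):
        raise ValueError("score every rubric category exactly once")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each category score must be between 1 and 5")
    # Both conditions must hold: a high total cannot compensate for a
    # single weak category, per the rubric's "no category below 3" rule.
    return sum(scores.values()) >= 22 and min(scores.values()) >= 3
```

Note that a pack scoring 5 in five categories but 2 in one fails the gate despite a 27/30 total, which is exactly the rubric's intent.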
