running-design-reviews
Run high-signal design reviews (design critique / design crit / design feedback) by producing a Design Review Pack: review brief + requested feedback, agenda + facilitation script, feedback log prioritized by Value→Ease→Delight, decision record, and follow-up plan.
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source is reproduced below.
Install command
npx @skill-hub/cli install liqiongyu-lenny-skills-plus-running-design-reviews
Repository
Skill path: skills/running-design-reviews
Best for
Primary workflow: Design Product.
Technical facets: Full Stack, Designer.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: liqiongyu.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install running-design-reviews into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/liqiongyu/lenny_skills_plus before adding running-design-reviews to shared team environments
- Use running-design-reviews to structure design critique and feedback workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: "running-design-reviews"
description: "Run high-signal design reviews (design critique / design crit / design feedback) by producing a Design Review Pack: review brief + requested feedback, agenda + facilitation script, feedback log prioritized by Value→Ease→Delight, decision record, and follow-up plan."
---

# Running Design Reviews

## Scope

**Covers**

- Planning a design review with a clear decision and requested feedback type(s)
- Running a live demo–centered critique (or async review when needed)
- Capturing feedback without “design-by-committee”
- Synthesizing feedback using **Value → Ease of Use → Delight** prioritization
- Recording decisions, tradeoffs, and follow-ups so the review changes the work

**When to use**

- “Prepare and run a design critique for this Figma prototype.”
- “We need a structured design review agenda and feedback log.”
- “Help us review this flow and decide what to change before we ship.”
- “Turn messy comments into prioritized feedback + next steps.”

**When NOT to use**

- You don’t have a defined problem, target user, or goal yet (use `problem-definition` first).
- You need build-ready interaction specs / acceptance criteria (use `writing-specs-designs`).
- You need evidence from users rather than expert critique (use `usability-testing`).
- You’re doing launch planning, comms, rollout/rollback (use `shipping-products`).

## Inputs

**Minimum required**

- Design artifact(s): link(s) or screenshots (e.g., Figma/prototype) + what parts are in scope
- The decision needed (what will change after the review)
- Target user + job-to-be-done (1–2 sentences)
- Success criteria (1–3) and constraints (time, platform, accessibility, tech)
- Review format + logistics: live vs async, time box, attendees/roles

**Missing-info strategy**

- Ask up to **5** questions from [references/INTAKE.md](references/INTAKE.md), then proceed.
- If answers aren’t available, make explicit assumptions and clearly label them.
- Do not request secrets or credentials.

## Outputs (deliverables)

Produce a **Design Review Pack** in Markdown (in-chat by default; write to files if requested), in this order:

1) **Design review brief / pre-read** (context, decision, requested feedback, links)
2) **Agenda + facilitation script** (timed, prompts, roles)
3) **Feedback log** (captured + categorized + prioritized)
4) **Decision record** (decisions, tradeoffs, owners, due dates)
5) **Follow-up message + next review plan** (what changed, what’s next)
6) **Risks / Open questions / Next steps** (always included)

Templates: [references/TEMPLATES.md](references/TEMPLATES.md)

## Workflow (7 steps)

### 1) Classify the review and lock the decision

- **Inputs:** Request + artifact(s) + constraints.
- **Actions:** Identify the review type (concept / flow / content / visual polish / ship-readiness). Write the decision statement (“After this review we will decide ___”).
- **Outputs:** Review type + decision statement + scope boundary (in/out).
- **Checks:** Everyone can answer: “What will change after this review?”

### 2) Set the requested feedback (and what NOT to comment on)

- **Inputs:** Decision statement + stage of design.
- **Actions:** Specify 1–3 feedback questions (e.g., “Is the value proposition clear?”, “Where does the flow break?”, “What edge cases are missing?”). Explicitly defer aesthetics/minutiae until Value/Ease are validated.
- **Outputs:** Requested feedback list + “out of scope” feedback.
- **Checks:** Feedback questions map directly to the decision.

### 3) Assign roles (incl. a sponsor) and prepare a live demo

- **Inputs:** Attendees list + timeline/risk.
- **Actions:** Assign: **Presenter**, **Facilitator**, **Note-taker**, and a **Sponsor/DRI** (senior owner who focuses on “why” + core concept). Decide whether leadership must review all user-facing screens before ship (for high-craft products).
- **Outputs:** Roles list + demo plan (what will be shown, in what order).
- **Checks:** Decision rights are clear; the review is anchored in a live demo, not a slide deck.

### 4) Produce the pre-read (context first, then artifacts)

- **Inputs:** [references/TEMPLATES.md](references/TEMPLATES.md) (brief template) + project context.
- **Actions:** Write a 1–2 page brief: problem → user → success criteria → constraints → options considered → risks/tradeoffs → open questions → links.
- **Outputs:** Shareable pre-read + “how to review” instructions.
- **Checks:** A reviewer can give useful feedback asynchronously without a live context dump.

### 5) Run the review (big picture → Value → Ease → Delight)

- **Inputs:** Agenda + demo + notes/feedback log.
- **Actions:** Start with goals/feelings (“What’s bothering us overall?”), then evaluate:
  1) **Value:** is it solving the right problem?
  2) **Ease:** can users do it without friction?
  3) **Delight:** polish, aesthetics, extra joy (only after 1–2)
  Capture feedback as **observations + impact + suggestion**, not opinions.
- **Outputs:** Filled feedback log with categories and severities.
- **Checks:** The review does not get stuck in minutiae before Value/Ease are resolved.

### 6) Synthesize + prioritize feedback into a change plan

- **Inputs:** Feedback log.
- **Actions:** Deduplicate comments; resolve conflicts by returning to goals and constraints; prioritize by user impact and risk. Convert top items into explicit changes with owners and due dates.
- **Outputs:** Prioritized change list + updated feedback log status/owners.
- **Checks:** Top 3 issues are clear; each has a proposed action and owner.

### 7) Decide, document tradeoffs, and close the loop

- **Inputs:** Proposed change plan + remaining open questions.
- **Actions:** Record decisions and rationale; list tradeoffs and risks; define what must be re-reviewed. Send a follow-up summary and schedule the next review or ship gate.
- **Outputs:** Decision record + follow-up message + Risks/Open questions/Next steps.
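Steps 5–6 reduce to a simple ordering rule: rank feedback by category (Value before Ease before Delight), then by severity. A minimal sketch in Python, where the `FeedbackItem` fields are illustrative stand-ins for the feedback-log columns (the names are assumptions, not part of the skill):

```python
from dataclasses import dataclass

# Hypothetical representation of one feedback-log row (field names are illustrative).
@dataclass
class FeedbackItem:
    area: str         # Area/screen
    observation: str  # Observation (not opinion)
    category: str     # "Value" | "Ease" | "Delight"
    severity: str     # "P0" | "P1" | "P2"

# Lower rank = handled first, enforcing the Value → Ease → Delight hierarchy.
CATEGORY_RANK = {"Value": 0, "Ease": 1, "Delight": 2}
SEVERITY_RANK = {"P0": 0, "P1": 1, "P2": 2}

def prioritize(log: list[FeedbackItem]) -> list[FeedbackItem]:
    """Order feedback by category hierarchy first, then by severity."""
    return sorted(log, key=lambda i: (CATEGORY_RANK[i.category], SEVERITY_RANK[i.severity]))

items = [
    FeedbackItem("Checkout", "CTA label ambiguous", "Ease", "P1"),
    FeedbackItem("Pricing", "Plan comparison missing", "Value", "P0"),
    FeedbackItem("Header", "Icon style inconsistent", "Delight", "P2"),
]
for item in prioritize(items):
    print(item.category, item.severity, item.area)
# Value P0 Pricing
# Ease P1 Checkout
# Delight P2 Header
```

The point of the tuple sort key is that a P0 Delight issue still sorts below a P2 Value issue, matching the rule that polish is deferred until Value/Ease are resolved.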
- **Checks:** Decisions and action items are captured in writing; no critical decision is left implicit.

## Quality gate (required)

- Use [references/CHECKLISTS.md](references/CHECKLISTS.md) and score with [references/RUBRIC.md](references/RUBRIC.md).
- Always include: **Risks**, **Open questions**, **Next steps**.

## Examples

See [references/EXAMPLES.md](references/EXAMPLES.md).

## Reference files

- [references/INTAKE.md](references/INTAKE.md)
- [references/WORKFLOW.md](references/WORKFLOW.md)
- [references/TEMPLATES.md](references/TEMPLATES.md)
- [references/CHECKLISTS.md](references/CHECKLISTS.md)
- [references/RUBRIC.md](references/RUBRIC.md)
- [references/SOURCE_SUMMARY.md](references/SOURCE_SUMMARY.md)
- [references/EXAMPLES.md](references/EXAMPLES.md)

---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/INTAKE.md

```markdown
# Intake (Question Bank)

Ask up to **5** questions at a time. If information is missing, proceed with explicit assumptions and label them.

## If you can only answer 5, answer these

1) What is the **decision** we need from this review (what changes after)?
2) What is **in scope** (which flow/screens/states) and what is out of scope?
3) Who is the **target user** and what job are they trying to get done?
4) What stage is this design in: concept / flow / polish / ship-ready?
5) What kind of feedback do you want: Value / Ease / Delight (or which risks/tradeoffs)?

## Context

- What problem are we solving and why now?
- What are the 1–3 success criteria? Any guardrails (a11y, performance, legal, brand)?
- What constraints matter most (timeline, platform, tech limitations, content readiness)?

## Artifacts

- Link(s) to design artifact(s) (Figma/prototype) or share screenshots.
- What should reviewers look at first (start state / happy path / key edge case)?
- Which states are included: loading/empty/error/permissions/offline?
- Any known unknowns / areas of uncertainty?
## Review mechanics

- Live review or async? Time box (30/45/60 min)?
- Who’s attending and what are their roles?
- Who is the **Sponsor/DRI** (senior owner for “why” + core concept)?
- Who will take notes and own follow-up?

## Requested feedback (make it explicit)

- What 1–3 questions should reviewers answer?
  - “Is the value proposition clear?”
  - “Where does the flow break or feel confusing?”
  - “What edge cases are missing?”
  - “What is the biggest risk/tradeoff?”
- What should reviewers **not** focus on yet (e.g., visual polish, microcopy)?

## Quality bar / ship gate

- Is this a “must-review all screens” product area (high craft / founder-led quality bar)?
- Do we need a final “screen-by-screen” pass before shipping?
```

### references/TEMPLATES.md

```markdown
# Templates

Use these templates to produce a **Design Review Pack**. Output in-chat by default, or write to files if requested.

## 1) Design review brief / pre-read

**Project / feature:**
**Owner (presenter):**
**Sponsor/DRI:**
**Review type:** (concept / flow / content / polish / ship-readiness)
**Decision needed:** (what will change after this review?)
**Requested feedback (1–3 questions):**
**Out of scope for this review:**

### Context

**Target user + JTBD:**
**Problem + why now:**
**Success criteria (1–3):**
**Constraints/guardrails:** (timeline, platform, a11y, tech, content, legal)

### What we’re reviewing

**In scope:** (flows/screens/states)
**Known unknowns:**
**Options considered:** (if any)

### Risks / tradeoffs to focus on

-

### Artifacts / links

- Prototype / Figma:
- Screenshots (if needed):
- Related doc (PRD/spec):

### How to review (instructions)

- Start with big-picture: what feels off overall?
- Then prioritize feedback by **Value → Ease → Delight**.
- Log feedback using the table template below.
## 2) Agenda + facilitation script (45 min default)

**Roles:** Facilitator / Sponsor / Presenter / Note-taker

1) **Priming (5 min)**
   - Goal + decision needed
   - Requested feedback questions
   - What is out of scope today
2) **Live demo (15 min)**
   - Presenter walkthrough (happy path + top edge case)
   - Sponsor interrupts only for “why” / concept clarity
3) **Feedback capture (15 min)**
   - Round 1: Value
   - Round 2: Ease
   - Round 3: Delight (only if time)
4) **Synthesis + decisions (10 min)**
   - Top 3 issues
   - Decisions made today vs open questions
   - Owners + due dates

Facilitation prompts:

- “What problem is this solving, in one sentence?”
- “Where does the flow break or feel confusing?”
- “What’s the riskiest assumption here?”
- “If we ship this, what might we regret?”

## 3) Feedback log (copy/paste table)

| ID | Area/screen | Observation | Impact on user/business | Category (Value/Ease/Delight) | Severity (P0/P1/P2) | Suggested change | Owner | Due | Status |
|---:|-------------|-------------|--------------------------|-------------------------------|---------------------|------------------|-------|-----|--------|
| 1  |             |             |                          |                               |                     |                  |       |     |        |

## 4) Decision record (copy/paste)

**Decision(s) made:**
-

**Rationale (why):**

**Tradeoffs accepted:**
-

**Not doing (explicit):**
-

**Owners + due dates:**
-

## 5) Follow-up message (copy/paste)

Subject: Design review outcomes — <project>

**What we reviewed:**
**Decisions:** (bullets)
**Top feedback (prioritized):** (Value → Ease → Delight)
**Action items:** (owner + due date)
**Open questions:**
**Next review / ship gate:** (date/time + what will be re-reviewed)

## 6) Risks / Open questions / Next steps

**Risks:**
**Open questions:**
**Next steps (1–3):**
```

### references/CHECKLISTS.md

```markdown
# Checklists (Quality Gate)

Use these before finalizing the Design Review Pack.

## A) Scope + decision

- [ ] The review type is explicit (concept / flow / content / polish / ship-readiness).
- [ ] The decision needed is explicit (“After this review we will decide ___”).
- [ ] In-scope vs out-of-scope is clear.
- [ ] Target user + JTBD is stated in 1–2 sentences.
- [ ] Success criteria + constraints/guardrails are listed.

## B) Requested feedback (structure)

- [ ] The presenter asked for 1–3 specific feedback questions.
- [ ] Reviewers were told what NOT to focus on yet (avoid premature minutiae).
- [ ] Feedback is prioritized by **Value → Ease → Delight**.

## C) Roles + mechanics

- [ ] Facilitator, Presenter, Note-taker, and Sponsor/DRI are named.
- [ ] Time box and agenda are set.
- [ ] Review is anchored in a live demo (or an async walkthrough) rather than slides.

## D) Feedback capture quality

- [ ] Feedback is recorded as observation + impact (not just preferences).
- [ ] Conflicting feedback is reconciled via goals/constraints (not consensus).
- [ ] The top edge cases/states were considered (loading/empty/error/permissions).

## E) Outcomes + follow-through

- [ ] Top 3 issues are identified and prioritized.
- [ ] Action items have owners + due dates.
- [ ] Decisions and tradeoffs are documented.
- [ ] A follow-up message is ready to send.

## F) Finalization

- [ ] Risks, open questions, and next steps are included at the end.
- [ ] No secrets/credentials were requested or recorded.
```

### references/RUBRIC.md

```markdown
# Rubric (Score 1–5 per category)

Use this to score the Design Review Pack. Target: **≥24/30**.
## 1) Decision clarity

1 = No decision; “get feedback” only
3 = Decision exists but still fuzzy
5 = Clear decision statement + explicit in/out scope

## 2) Preparation quality

1 = No pre-read; unclear context
3 = Some context; artifacts exist but are hard to review
5 = Brief is reviewable async; links/states/constraints are clear

## 3) Feedback quality (structure + hierarchy)

1 = Preference wars / minutiae-first
3 = Some structure but inconsistent
5 = Feedback captured as observation+impact and prioritized Value→Ease→Delight

## 4) Synthesis + prioritization

1 = Long list; no priorities
3 = Priorities exist but not tied to impact/risk
5 = Top issues + severities + rationale are clear; conflicts resolved via goals/constraints

## 5) Decisions + tradeoffs captured

1 = Decisions left implicit
3 = Some decisions captured; tradeoffs missing
5 = Decision record includes rationale + tradeoffs + owners/dates

## 6) Follow-through

1 = No owners/dates; no next review plan
3 = Owners exist; weak next step clarity
5 = Action plan is executable; follow-up message + next gate scheduled

## Interpretation

- **24–30:** High-quality review; proceed.
- **18–23:** Acceptable but needs tightening (usually decision clarity or prioritization).
- **≤17:** Re-run with better prep and clearer requested feedback.
```

### references/EXAMPLES.md

```markdown
# Examples

## Example 1 (Flow review, cross-functional)

**Prompt:** “Use `running-design-reviews`. We have a new web onboarding flow (Figma link). Decision: choose between Flow A and Flow B to ship next sprint. Target user: first-time admin setting up a team. Constraints: must be accessible (WCAG AA), limited eng bandwidth. Run a 45-minute live review and output a Design Review Pack.”

**Expected output:** Brief with decision + requested feedback; timed agenda; feedback log categorized by Value/Ease/Delight; decision record with rationale and owners; follow-up message + next review gate.
## Example 2 (Ship-readiness review)

**Prompt:** “Run a ship-readiness design review for 12 screens in our checkout flow. We need a final pass on edge cases, error states, and microcopy before release. Output a structured checklist, a feedback log, and a decision record.”

**Expected output:** Pre-read that calls out in-scope screens/states; agenda oriented around edge cases; feedback log with severities; a clear go/no-go recommendation with risks, open questions, and next steps.

## Boundary example (when NOT to use)

**Prompt:** “Can you give me general feedback on this Dribbble shot?” (no user, no goal, no decision)

**Expected response:** Ask for the decision and target user + success criteria. If it’s purely aesthetic inspiration with no product context, decline the Design Review Pack and recommend a different critique format (or propose a lightweight visual critique template only after goals are set).
```

### references/WORKFLOW.md

```markdown
# Workflow Notes (How to Run the Review)

Use this file as supporting detail for `SKILL.md`.

## Roles (minimum)

- **Presenter:** walks through the artifact and decisions needed.
- **Facilitator:** keeps time, enforces feedback order, prevents design-by-committee.
- **Sponsor/DRI:** senior owner who focuses on “why”, core concept quality, and final calls.
- **Note-taker:** captures feedback in the log and drafts the follow-up.

## Review types (pick one)

- **Concept review:** Is this direction worth pursuing? (Value-dominant)
- **Flow review:** Does the interaction/system make sense end-to-end? (Ease-dominant)
- **Content review:** Clarity, comprehension, tone, information hierarchy (Value/Ease)
- **Visual polish review:** Craft, aesthetics, consistency (Delight-dominant, but only after Value/Ease)
- **Ship-readiness review:** Edge cases, states, regressions, accessibility (high rigor)

## The feedback hierarchy (enforce this order)

1) **Value:** Does this solve the right problem in a way users will want?
2) **Ease:** Can users do it without confusion, excessive effort, or dead ends?
3) **Delight:** Does it feel great? Is the craft/polish appropriate?

When debates devolve into preferences, pull back to: goal → user → constraint → evidence.

## Agenda templates (timeboxed)

### 30 minutes (tight review)

1) Context + requested feedback (3 min)
2) Live demo (10 min)
3) Feedback capture (12 min)
4) Synthesis + decisions + next steps (5 min)

### 45 minutes (default)

1) Context + requested feedback (5 min)
2) Live demo (15 min)
3) Feedback capture (15 min)
4) Synthesis + decisions + next steps (10 min)

### 60 minutes (complex flow)

1) Context + requested feedback (5 min)
2) Live demo (20 min)
3) Feedback capture (20 min)
4) Synthesis + decisions + next steps (15 min)

## How to prevent “design-by-committee”

- Ask reviewers to state **observations** and **impact** before solutions.
- Require the presenter to restate feedback in their own words before accepting it.
- Timebox solutioning; park deeper explorations as follow-up tasks.
- If feedback conflicts, the Sponsor/DRI decides based on goals and constraints.

## Async review variant (when you can’t meet live)

1) Share the pre-read + a short video walkthrough (optional).
2) Request feedback via 1–3 explicit questions.
3) Require feedback to be logged into the same table (Value/Ease/Delight).
4) Synthesize and circulate a decision record + change plan.
```

### references/SOURCE_SUMMARY.md

```markdown
# Source Summary

Source: `sources/refound/raw/running-design-reviews/SKILL.md`

The provided source skill contained “Top Insights” from multiple design/product leaders and a reference to a deeper file (`references/guest-insights.md`) that was not present in the raw source folder. This skill pack therefore operationalizes the insights that were present and adds missing execution layers: contracts, workflow, templates, and quality gates.
## Preserved insights → operational rules

### 1) Prioritize feedback by Value → Ease → Delight (Julie Zhuo)

- **Rule:** Do not spend time on aesthetics/minutiae until the design’s value and ease-of-use are validated.
- **Checks:** Feedback log categories are used; “Delight” is deferred if Value/Ease is unresolved.
- **Artifacts:** Feedback log + agenda structure that enforces the hierarchy.

### 2) Assign a Sponsor/DRI and use live demos (Karri Saarinen)

- **Rule:** Every major project has a sponsor/DRI who focuses on “why” and the core concept, and reviews happen via live demos when possible.
- **Checks:** Roles are named; agenda includes a demo walkthrough.
- **Artifacts:** Brief lists sponsor; agenda + facilitation script.

### 3) Maintain a high quality bar with senior review for shipped screens (Dmitry Zlokazov)

- **Rule:** For high-craft products/areas, require a ship gate where a senior owner reviews all user-facing screens and key states.
- **Checks:** Ship-readiness review type is available; outcomes include a next gate.
- **Artifacts:** Decision record + follow-up plan including final review gate.

### 4) Structure feedback requests around risks/tradeoffs (Geoff Charles)

- **Rule:** Presenters must ask for specific types of feedback and explicitly surface risks/tradeoffs.
- **Checks:** Brief includes requested feedback questions and risks/tradeoffs.
- **Artifacts:** Design review brief template.

### 5) Start big-picture before minutiae (Jessica Hische)

- **Rule:** Begin critiques with overall goals/feelings (“what’s bothering me?”) before pixel-level comments.
- **Checks:** Facilitation prompts start with big-picture and move to details.
- **Artifacts:** Agenda script + checklist.
```
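The rubric in `references/RUBRIC.md` (six categories scored 1–5 each, target ≥24/30) can be tallied mechanically. A minimal sketch, assuming scores are collected as a dict; the function name and the exact band strings are illustrative, while the thresholds (24, 18) come from the rubric's Interpretation section:

```python
# The six rubric categories from references/RUBRIC.md.
RUBRIC_CATEGORIES = [
    "Decision clarity", "Preparation quality", "Feedback quality",
    "Synthesis + prioritization", "Decisions + tradeoffs captured", "Follow-through",
]

def score_pack(scores: dict[str, int]) -> tuple[int, str]:
    """Sum the six 1-5 scores and map the total onto the rubric's bands."""
    missing = [c for c in RUBRIC_CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"Missing rubric categories: {missing}")
    total = sum(scores[c] for c in RUBRIC_CATEGORIES)
    if total >= 24:
        band = "High-quality review; proceed."
    elif total >= 18:
        band = "Acceptable but needs tightening."
    else:
        band = "Re-run with better prep and clearer requested feedback."
    return total, band

total, band = score_pack({
    "Decision clarity": 4, "Preparation quality": 4, "Feedback quality": 5,
    "Synthesis + prioritization": 4, "Decisions + tradeoffs captured": 4, "Follow-through": 4,
})
print(total, "-", band)  # 25 - High-quality review; proceed.
```

A total is only meaningful when all six categories are scored, which is why the sketch fails loudly on missing categories rather than defaulting them to zero.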