product-taste-intuition
Build stronger product taste + intuition as a PM by running a Taste Calibration Sprint (benchmark set, product critique notes, intuition→hypothesis log, validation plan, practice loop). Use for “product taste”, “product sense”, “intuition”, “calibrate taste”.
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first; the original raw source appears below.
Install command
npx @skill-hub/cli install liqiongyu-lenny-skills-plus-product-taste-intuition
Repository
Skill path: skills/product-taste-intuition
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: liqiongyu.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install product-taste-intuition into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/liqiongyu/lenny_skills_plus before adding product-taste-intuition to shared team environments
- Use product-taste-intuition for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: "product-taste-intuition"
description: "Build stronger product taste + intuition as a PM by running a Taste Calibration Sprint (benchmark set, product critique notes, intuition→hypothesis log, validation plan, practice loop). Use for “product taste”, “product sense”, “intuition”, “calibrate taste”."
---

# Product Taste & Intuition

## Scope

**Covers**

- Developing **product taste** (what “good” looks like) through deliberate exposure, observation, and critique
- Using intuition as a **hypothesis generator** (turning “gut feel” into testable hypotheses)
- Building a repeatable **practice loop** (exposure hours → analysis → validation → updated taste rules)

**When to use**

- “Help me improve my product taste / product sense.”
- “Calibrate what ‘good onboarding’ looks like for our product category.”
- “Turn my intuition about this flow into testable hypotheses.”
- “Create a structured way to study great products and extract patterns.”

**When NOT to use**

- You need to decide *what to build* (use `problem-definition`, `prioritizing-roadmap`, or `defining-product-vision`).
- You need user evidence first (use `conducting-user-interviews` or `usability-testing`).
- You want aesthetic critique only (this is product experience: value, UX, clarity, trust, speed—not just visuals).
- You can’t name any target user, use case, or the “taste domain” you want to improve (we’ll narrow first).

## Inputs

**Minimum required**

- Taste domain to improve (pick 1): onboarding, activation, navigation/IA, editor/workflow, pricing/packaging UX, notifications, retention loops, trust/safety, performance/latency feel, copy/voice
- Target user + top job-to-be-done for that domain
- 3–10 benchmark products/experiences to study (or “unknown—please propose”)
- Time box (e.g., 60–120 min sprint; or a 2–4 week practice plan)
- Constraints (platform, geography, accessibility, compliance, brand voice, etc.)

**Missing-info strategy**

- Ask up to 5 questions from [references/INTAKE.md](references/INTAKE.md).
- If inputs remain missing, proceed with explicit assumptions and provide 2 scope options (narrow vs broad).

## Outputs (deliverables)

Produce a **Taste Calibration Pack** (in-chat Markdown; or as files if requested):

1) **Taste Calibration Brief** (domain, target user/job, what “good” means, constraints)
2) **Benchmark Set** (5–10 products) + “why these” + what to study
3) **Product Study Notes** (1 page per benchmark) using a consistent critique template
4) **Taste Rules + Anti-Patterns** (do/don’t rules derived from evidence)
5) **Intuition → Hypothesis Log** (testable hypotheses + predicted signals)
6) **Validation Plan** (qual + quant checks; smallest viable tests)
7) **Practice Plan** (2–4 weeks: exposure hours + weekly synthesis cadence)
8) **Risks / Open questions / Next steps** (always included)

Templates: [references/TEMPLATES.md](references/TEMPLATES.md)

## Workflow (8 steps)

### 1) Intake + pick the taste domain (narrow the problem)

- **Inputs:** User context; [references/INTAKE.md](references/INTAKE.md).
- **Actions:** Choose 1 taste domain and 1 “moment” (e.g., first-run onboarding). Define target user + job + constraints. Set time box.
- **Outputs:** Taste Calibration Brief (draft).
- **Checks:** A stakeholder can answer: “What specific experience are we calibrating taste for?”

### 2) Define “good taste” as decision criteria (not vibes)

- **Inputs:** Domain + user/job.
- **Actions:** Draft 6–10 criteria (e.g., clarity, time-to-value, trust, agency, error recovery, perceived speed, cognitive load). Add explicit tradeoffs (what you’ll sacrifice).
- **Outputs:** Criteria list + tradeoffs section in the brief.
- **Checks:** Criteria are observable in-product (you can point to UI/behavior), not generic adjectives.

### 3) Build the benchmark set (exposure hours, curated)

- **Inputs:** Known benchmarks (or none).
- **Actions:** Select 5–10 exemplars (direct, adjacent, and at least 1 “gold standard”). For each: what you’re studying and why it’s relevant.
- **Outputs:** Benchmark Set table.
- **Checks:** Set includes at least 2 “outside the category” references to avoid local maxima.

### 4) Study like a voracious user (structured observation)

- **Inputs:** Benchmarks; critique template.
- **Actions:** Use each product as the target user. Capture micro-moments: friction, delight, confusion, trust breaks. Record “what happened” before “why it’s good/bad”.
- **Outputs:** Product Study Notes (draft).
- **Checks:** Each benchmark note includes at least 3 concrete moments with screenshots/quotes if available (or precise descriptions).

### 5) Synthesize: turn observations into taste rules + anti-patterns

- **Inputs:** Study notes across benchmarks.
- **Actions:** Cluster patterns. Convert into rules: **DO/DO NOT**, plus rationale and where it applies. Add anti-patterns that create “AI slop” (generic, incoherent, misaligned experiences).
- **Outputs:** Taste Rules + Anti-Patterns.
- **Checks:** Each rule is backed by ≥ 2 observations from different benchmarks (or explicitly marked “hypothesis”).

### 6) Intuition as hypothesis generator (make it testable)

- **Inputs:** Rules + your gut reactions.
- **Actions:** Write intuition statements (“It feels off because…”) and convert into testable hypotheses with predicted signals and counter-signals.
- **Outputs:** Intuition → Hypothesis Log.
- **Checks:** Each hypothesis has a clear falsification condition (“If X doesn’t change after Y, we were wrong.”).

### 7) Validate with smallest viable checks (qual + quant)

- **Inputs:** Hypothesis log; available data/research access.
- **Actions:** Choose the lightest validation per hypothesis: usability task, intercept prompt, session replay review, funnel slice, A/B smoke test, copy test, etc. Define success metrics and sample.
- **Outputs:** Validation Plan with owners/cadence if known.
- **Checks:** Validation steps are feasible within the stated time box and don’t require sensitive data.

### 8) Create a practice loop + quality gate + finalize

- **Inputs:** Draft pack.
- **Actions:** Build a 2–4 week practice plan (exposure hours schedule + weekly synthesis). Run [references/CHECKLISTS.md](references/CHECKLISTS.md) and score with [references/RUBRIC.md](references/RUBRIC.md). Add Risks/Open questions/Next steps.
- **Outputs:** Final Taste Calibration Pack.
- **Checks:** A reader can follow the practice plan without additional context; assumptions are explicit.

## Quality gate (required)

- Use [references/CHECKLISTS.md](references/CHECKLISTS.md) and [references/RUBRIC.md](references/RUBRIC.md).
- Always include: **Risks**, **Open questions**, **Next steps**.

## Examples

**Example 1 (Onboarding):** “Calibrate our onboarding taste vs best-in-class. Target users are first-time PMs. Time box: 90 minutes. Output a Taste Calibration Pack.” Expected: benchmark set, critique notes, taste rules, hypotheses, and a lightweight validation plan.

**Example 2 (B2B workflow UX):** “My gut says our ‘create project’ flow feels slow and confusing. Turn that into testable hypotheses and a validation plan.” Expected: intuition→hypothesis log with falsification conditions and smallest viable checks.

**Boundary example:** “Tell me what good taste is in general.” Response: require a specific domain + target user/job; otherwise produce a menu of domain options and propose a narrow starting point.

---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/INTAKE.md

```markdown
# Intake (question bank)

Ask up to **5** questions, then proceed with explicit assumptions.

## A) Goal + scope

1) What **taste domain** are we calibrating? (pick 1) onboarding, activation, navigation/IA, editor/workflow, pricing/packaging UX, notifications, retention loops, trust/safety, performance/latency feel, copy/voice
2) Who is the **target user** and what is their top **job-to-be-done** in this domain?
3) What decision will this work inform? (e.g., redesign direction, critique standard, experiment backlog, PM skill-building)
4) What’s the **time box**? (single 60–120 min sprint vs 2–4 week practice plan)

## B) Constraints + context

5) Platform + constraints: web/mobile, accessibility level, compliance, brand voice, markets/geos, technical limitations
6) Where are you starting from? (existing flow, new concept, competitor catch-up, quality bar reset)

## C) Benchmarks + references

7) Any known benchmark products you admire (or dislike) for this domain?
8) Should we include “outside-category” references? (default: yes, 2–3)

## D) Evidence + validation access

9) What evidence is available today? (analytics, funnel, session replays, support tickets, interviews)
10) What validation methods are feasible this week? (usability calls, prototype tests, A/B, copy test, dogfooding)

## E) Output preferences

11) Should the output be **in-chat Markdown** or written as **files**? If files, what folder path?
12) Any preferences for tone/voice (e.g., direct, brand-aligned), or a rubric style you already use?
```

### references/TEMPLATES.md

```markdown
# Templates

Use these templates to produce the Taste Calibration Pack.

## 1) Taste Calibration Brief (template)

### Context

- **Taste domain:** (e.g., onboarding)
- **Target user + job:** (who / what they’re trying to do)
- **Decision this informs:** (redesign direction, quality bar, experiment backlog, etc.)
- **Time box:** (e.g., 90 minutes; or 4-week practice plan)
- **Constraints:** (platform, accessibility, compliance, brand voice, technical constraints)

### “Good” decision criteria (6–10)

- Criteria list (observable, not vibes)

### Tradeoffs / non-goals

- What you will intentionally sacrifice or deprioritize

## 2) Benchmark Set (table)

| Benchmark | Why it’s relevant | What to study | Notes / links |
|---|---|---|---|
| | | | |

## 3) Product Study Notes (1 page per benchmark)

### Benchmark

- Name + platform + user persona used

### Quick summary

- What this product does well in this domain (1–2 sentences)

### Moments (capture “what happened”)

| Moment | What I did | What happened | Emotion / friction | Why it might work (hypothesis) |
|---|---|---|---|---|
| | | | | |

### Pattern candidates

- Potential rules (draft): DO / DO NOT

### Copy/pasteable artifacts

- Key microcopy, interaction details, defaults, empty states, error handling, etc.

## 4) Taste Rules + Anti-Patterns

### Taste rules (DO / DO NOT)

| Rule | Rationale | Evidence (benchmarks/moments) | Applies to | Exceptions |
|---|---|---|---|---|
| DO: | | | | |
| DO NOT: | | | | |

### Anti-patterns (“slop filters”)

| Anti-pattern | How it shows up | Why it’s harmful | Replacement rule |
|---|---|---|---|
| | | | |

## 5) Intuition → Hypothesis Log

| Intuition statement (“it feels…”) | Hypothesis (testable) | Predicted signal | Counter-signal | Smallest viable test |
|---|---|---|---|---|
| | | | | |

## 6) Validation Plan (smallest viable checks)

### Hypotheses to validate (top 3–7)

- Link to rows in the hypothesis log

### Tests

| Hypothesis | Method | Sample | Success metric | Decision rule | Owner | When |
|---|---|---:|---|---|---|---|
| | | | | | | |

### Instrumentation / tracking notes

- Events, funnels, qualitative tags, logging plan

## 7) Practice Plan (2–4 weeks)

### Cadence

- **Exposure hours:** (# sessions/week × minutes/session)
- **Weekly synthesis:** (time + output)
- **Peer calibration:** (optional: 1 session/week)

### Weekly plan

| Week | Focus | Benchmarks | Output |
|---:|---|---|---|
| 1 | Baseline + criteria | | Taste brief + benchmark set |
| 2 | Deep studies | | Study notes + rule drafts |
| 3 | Hypotheses + tests | | Hypothesis log + validation |
| 4 | Consolidate | | Final rules + next steps |
```

### references/CHECKLISTS.md

```markdown
# Checklists

Use these before finalizing the Taste Calibration Pack.

## 1) Pack completeness (must-have)

- [ ] A single, explicit **taste domain** is chosen (not “product quality” in general).
- [ ] Target **user + job** are stated.
- [ ] “Good” criteria are **observable** and include **tradeoffs / non-goals**.
- [ ] Benchmark set includes **5–10** items and at least **2 outside-category** references (or justification for not doing so).
- [ ] Study notes include **concrete moments** (what happened) before interpretation.
- [ ] Taste rules are written as **DO / DO NOT** and reference supporting evidence.
- [ ] Hypotheses are **testable** with predicted signals + counter-signals.
- [ ] Validation plan uses the **smallest viable tests** that fit the time box.
- [ ] A 2–4 week **practice cadence** is specified (even if the immediate output is a 90-minute sprint).
- [ ] Includes **Risks / Open questions / Next steps**.

## 2) “Taste rule” quality

- [ ] Rule is specific enough to apply to a UI decision this week.
- [ ] Rule describes *when* it applies and includes exceptions.
- [ ] Rule is not merely “make it simpler / better / clearer” without a mechanism.
- [ ] Rule has ≥ 2 evidence points or is explicitly marked “hypothesis”.

## 3) Hypothesis quality

- [ ] Hypothesis is falsifiable (clear fail condition).
- [ ] The predicted signal is measurable (qual tags or quant metrics).
- [ ] The smallest viable test does not require sensitive or unavailable data.

## 4) Validation plan realism

- [ ] Tests fit the time box and staffing realities.
- [ ] There is a decision rule (what you’ll do if it passes/fails).
- [ ] You’re not over-indexing on one method (mix qual + quant where feasible).
```

### references/RUBRIC.md

```markdown
# Rubric (1–5)

Score the Taste Calibration Pack. A “ship-ready” pack is typically **≥ 24/35** with no category below 3.

## 1) Domain focus (1–5)

- **1:** Vague (“improve product taste”) with no specific domain or moment.
- **3:** Domain is named; scope is mostly clear.
- **5:** Domain + moment + target user/job are crisp and clearly bounded.

## 2) Criteria + tradeoffs (1–5)

- **1:** Generic adjectives; no tradeoffs.
- **3:** Criteria are mostly observable; tradeoffs exist but are thin.
- **5:** Criteria are concrete, prioritized, and include explicit tradeoffs/non-goals.

## 3) Benchmark set quality (1–5)

- **1:** Random list; no rationale; no outside-category references.
- **3:** Reasonable set with some rationale.
- **5:** Curated set with clear “what to study” per benchmark and diversity across categories.

## 4) Observation depth (1–5)

- **1:** Opinions without moments.
- **3:** Some concrete moments captured; inconsistent structure.
- **5:** Consistent, moment-based notes across benchmarks with precise details and pattern candidates.

## 5) Rules + anti-patterns (1–5)

- **1:** Rules are vague or ungrounded.
- **3:** Rules exist but lack applicability guidance or evidence.
- **5:** DO/DO NOT rules are evidence-backed, scoped, and immediately reusable in product work.

## 6) Hypotheses + validation (1–5)

- **1:** Intuition remains subjective; no falsification.
- **3:** Some hypotheses and tests; decision rules missing.
- **5:** Hypotheses are testable with predicted/counter-signals and realistic smallest-viable validation steps.

## 7) Practice loop (1–5)

- **1:** One-off critique with no habit plan.
- **3:** A cadence is described but not operational.
- **5:** A 2–4 week plan with exposure hours + weekly synthesis + optional peer calibration is ready to execute.
```
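The rubric's ship-ready gate ("≥ 24/35 with no category below 3") is a simple arithmetic check. A minimal sketch, assuming seven categories scored 1–5; the function name and category keys are illustrative, not part of the skill's files:

```python
def is_ship_ready(scores: dict[str, int]) -> bool:
    """Apply the rubric gate: total score >= 24 out of 35, no category below 3."""
    if len(scores) != 7 or any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("expected seven category scores, each in the range 1-5")
    return sum(scores.values()) >= 24 and min(scores.values()) >= 3

# Hypothetical pack scored against the seven rubric categories above.
example = {
    "domain_focus": 4,
    "criteria_tradeoffs": 3,
    "benchmark_set": 4,
    "observation_depth": 3,
    "rules_anti_patterns": 4,
    "hypotheses_validation": 3,
    "practice_loop": 3,
}
print(is_ship_ready(example))  # total 24, minimum 3 -> True
```

Note that both conditions matter: a pack totaling 32 still fails the gate if any single category scores a 2.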