designing-growth-loops
Design growth loops (viral/referral/acquisition loops, flywheels) and produce a Growth Loop Design Pack (loop map, loop scorecard, channel fit + paid-loop feasibility, experiment backlog, measurement plan). Use for growth teams creating new growth loops or innovating beyond incremental optimization.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install liqiongyu-lenny-skills-plus-designing-growth-loops
Repository
Skill path: skills/designing-growth-loops
Best for
Primary workflow: Grow & Distribute.
Technical facets: Full Stack, Designer.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: liqiongyu.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install designing-growth-loops into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/liqiongyu/lenny_skills_plus before adding designing-growth-loops to shared team environments
- Use designing-growth-loops in growth-planning and distribution workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: "designing-growth-loops"
description: "Design growth loops (viral/referral/acquisition loops, flywheels) and produce a Growth Loop Design Pack (loop map, loop scorecard, channel fit + paid-loop feasibility, experiment backlog, measurement plan). Use for growth teams creating new growth loops or innovating beyond incremental optimization."
---

# Designing Growth Loops

## Scope

**Covers**

- Turning a growth goal into a **loop-based growth model** (micro loops + macro loops)
- Designing and documenting loops: **viral/referral**, **content/UGC**, **SEO**, **partner/integration**, **sales-assisted**, and **paid acquisition loops**
- Choosing channels using a **Customer × Business × Medium** fit check
- Validating paid loops with **unit economics (LTV, CAC, payback)** gating
- Producing an actionable loop plan: loop map → scorecard → experiments → measurement

**When to use**

- “Design a growth loop / viral loop / referral loop”
- “Create a growth flywheel for <product>”
- “Map our micro + macro growth loops and prioritize which to build”
- “We need new growth loops (not just optimize ads/onboarding)”
- “Decide whether a paid acquisition loop is viable”

**When NOT to use**

- You haven’t clarified the ICP/problem or value proposition (use `problem-definition`).
- You’re still establishing PMF and need a PMF signal set (use `measuring-product-market-fit`).
- You only need an experiment list/prioritization, not loop design (use `prioritizing-roadmap`).
- You’re making a one-way-door launch decision (use `shipping-products` / `running-decision-processes`).
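The unit-economics gating mentioned above can be sketched as a simple directional check. This is a minimal illustration, not part of the skill itself; the field names, the 3:1 LTV:CAC threshold, and the sample figures are assumptions for the example.

```python
# Hedged sketch of a paid-loop gate: LTV x gross margin must support CAC,
# and payback must land inside the target window. Thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class PaidLoopInputs:
    ltv: float                   # gross lifetime value per customer
    gross_margin: float          # fraction, 0..1
    cac: float                   # current blended CAC
    monthly_margin: float        # contribution margin per customer per month
    payback_target_months: int   # e.g., 6 or 12


def paid_loop_gate(x: PaidLoopInputs, min_ltv_cac_ratio: float = 3.0) -> dict:
    """Return a directional verdict, not a financial model."""
    ltv_margin = x.ltv * x.gross_margin
    ratio = ltv_margin / x.cac if x.cac else float("inf")
    payback_months = x.cac / x.monthly_margin if x.monthly_margin else float("inf")
    viable = ratio >= min_ltv_cac_ratio and payback_months <= x.payback_target_months
    return {
        "ltv_cac_ratio": round(ratio, 2),
        "payback_months": round(payback_months, 1),
        "verdict": "viable" if viable else "not yet viable",
    }


# Example: $1,200 LTV, 80% margin, $300 CAC, $60/month margin, 6-month target
print(paid_loop_gate(PaidLoopInputs(1200, 0.8, 300, 60, 6)))
# → {'ltv_cac_ratio': 3.2, 'payback_months': 5.0, 'verdict': 'viable'}
```

If the gate fails, the skill’s guidance is to list prerequisites (retention, monetization, measurement) rather than scale spend.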
## Inputs

**Minimum required**

- Product + target user/ICP (and 1–2 key segments)
- Current stage (pre-PMF / early PMF / growth / mature) and current primary growth channel(s)
- The core value moment (what users do when they “get value”)
- A baseline snapshot of the growth system (best available): acquisition sources, conversion funnel, retention/engagement, referrals/sharing
- Constraints: budget, timebox, brand/safety, platform policy, legal/privacy, engineering capacity
- For paid loops: rough unit economics (LTV, gross margin, CAC/payback targets) or a proxy

**Missing-info strategy**

- Ask up to 5 questions from [references/INTAKE.md](references/INTAKE.md), then proceed.
- If data is missing, proceed with explicit assumptions and label confidence.
- Do not request secrets or PII; prefer aggregated metrics or redacted excerpts.

## Outputs (deliverables)

Produce a **Growth Loop Design Pack** (Markdown in-chat; or as files if requested) containing:

1) **Context snapshot** (goal, ICP/segments, constraints, timebox)
2) **Loop inventory + baseline** (current loops and where the system currently gets growth)
3) **Loop map (qualitative model)** (micro loops + macro loop; how loops connect)
4) **Loop candidates + mechanism library** (platform/channel mechanisms; ethical/policy-compliant)
5) **Loop scorecard + selection** (top 1–2 loops to build/scale; optimize vs innovate recommendation)
6) **Measurement plan** (loop KPIs, leading indicators, required instrumentation)
7) **Experiment backlog + 30/60/90 plan** (tests, sequencing, dependencies, owners if known)
8) **Risks / Open questions / Next steps** (always included)

Templates and checklists:

- [references/TEMPLATES.md](references/TEMPLATES.md)
- [references/CHECKLISTS.md](references/CHECKLISTS.md)
- [references/RUBRIC.md](references/RUBRIC.md)
- Expanded guidance: [references/WORKFLOW.md](references/WORKFLOW.md)

## Workflow (7 steps)

### 1) Intake + growth goal framing

- **Inputs:** User context; [references/INTAKE.md](references/INTAKE.md).
- **Actions:** Clarify the growth goal (what metric, by when), the target segment(s), and constraints (budget, brand, platform rules, capacity). Decide whether the priority is **innovation** (new loop) vs **optimization** (existing loop).
- **Outputs:** Context snapshot + “decision this work informs.”
- **Checks:** A stakeholder can answer: “Which metric changes by when, and what will we do differently if it doesn’t?”

### 2) Baseline the current growth system (loops + funnel)

- **Inputs:** Current acquisition sources, funnel, retention, referral/share, unit economics (if any).
- **Actions:** Inventory existing loops (even if weak). Identify the **core value moment** and the “loop output” that could feed back (invites, content, word-of-mouth, spend, integrations).
- **Outputs:** Loop inventory + baseline table.
- **Checks:** Baseline includes at least one number for each: acquisition volume, activation rate, retention/engagement proxy.

### 3) Generate loop candidates (micro + macro)

- **Inputs:** Baseline + constraints.
- **Actions:** Create 6–10 loop hypotheses across categories (viral/referral, content/UGC, SEO, partner/integration, sales, paid). For each, specify: input → action → output → feedback. Include at least one “bigger bet” loop if in a fast-moving category.
- **Outputs:** Loop candidates list + draft mechanism library.
- **Checks:** Each candidate has a plausible “self-reinforcing” feedback path and a likely cycle time.

### 4) Model loops qualitatively (shared understanding)

- **Inputs:** Loop candidates; stakeholder context.
- **Actions:** Produce a qualitative **loop map**: micro loops connected into a macro loop. Document assumptions, bottlenecks, and where you expect compounding.
- **Outputs:** Loop map (diagram or table) + bottleneck hypotheses.
- **Checks:** Someone unfamiliar with the product can explain “how we grow” in 60 seconds using the map.
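The qualitative loop model in steps 3–4 can be kept honest by representing each micro loop as a plain record, so the input → action → output → feedback structure (plus cycle time) is explicit. A minimal sketch; the loop names, fields, and the two example loops are illustrative assumptions, not prescribed by the skill.

```python
# Hedged sketch: micro loops as records. The check at the bottom mirrors the
# Step 3 rule: no feedback path or cycle time means it's a tactic, not a loop.
from dataclasses import dataclass


@dataclass
class MicroLoop:
    name: str
    input: str       # what starts the loop
    action: str      # what users/teams do
    output: str      # what gets produced (invites, content, spend, integrations)
    feedback: str    # how the output becomes new input
    cycle_days: int  # estimated time for one loop turn

# Illustrative candidates (hypothetical product)
loops = [
    MicroLoop("Referral", "new activated user", "invites a teammate",
              "invite sent", "accepted invite -> new user", cycle_days=7),
    MicroLoop("UGC/SEO", "new project created", "publishes a public page",
              "indexed page", "search traffic -> new signups", cycle_days=45),
]

for loop in loops:
    # Every candidate must name a feedback path and a cycle time.
    assert loop.feedback and loop.cycle_days > 0, f"{loop.name} is a tactic, not a loop"
```

Keeping candidates in this shape also makes the later scorecard step mechanical: each record already carries the cycle-time estimate the scorecard asks for.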
### 5) Quantify + prioritize (scorecard + gates)

- **Inputs:** Qual loop map; best-available metrics.
- **Actions:** Estimate loop throughput with simple math (conversion × frequency × invites/content × acceptance). Score loops using a scorecard (impact, confidence, effort, cycle time). Apply gates:
  - **Paid loops:** only proceed if LTV/margin supports CAC/payback targets.
  - **Channel fit:** ensure Customer × Business × Medium alignment.
- **Outputs:** Loop scorecard + top 1–2 loop picks + innovate/optimize split recommendation.
- **Checks:** Each chosen loop has (a) a measurable KPI, (b) a first experiment, and (c) a reason it wins vs alternatives.

### 6) Design the measurement plan (metrics + instrumentation)

- **Inputs:** Selected loop(s).
- **Actions:** Define loop KPIs and leading indicators; specify required events/properties and dashboards. Identify instrumentation gaps and the minimum tracking needed to learn.
- **Outputs:** Measurement + instrumentation plan.
- **Checks:** Every experiment metric is traceable to an event definition and a data source.

### 7) Build the experiment plan + quality gate

- **Inputs:** Draft pack; [references/CHECKLISTS.md](references/CHECKLISTS.md) and [references/RUBRIC.md](references/RUBRIC.md).
- **Actions:** Create an experiment backlog and 30/60/90 plan (sequencing, dependencies, owners if known). Run the checklist and score with the rubric. Always include **Risks / Open questions / Next steps**.
- **Outputs:** Final Growth Loop Design Pack.
- **Checks:** Next 2 weeks of work are unblocked and measurable; risks include policy/ethics considerations.

## Quality gate (required)

- Use [references/CHECKLISTS.md](references/CHECKLISTS.md) and [references/RUBRIC.md](references/RUBRIC.md).
- Always include: **Risks**, **Open questions**, **Next steps**.

## Examples

**Example 1 (B2B SaaS, partner/integration loop):** “Use `designing-growth-loops`. Product: AI onboarding assistant for mid-market HR teams. Goal: +30% WAU in 90 days. Channels today: outbound + partnerships. Output: a Growth Loop Design Pack with an integration/partner loop and a referral loop, including metrics and a 30/60/90 experiment plan.”

**Example 2 (B2C, viral/content loop):** “We’re building a mobile photo editor for creators. Goal: grow from 20k to 60k MAU in 8 weeks. Output a loop map, a mechanism library for Instagram/TikTok sharing, and a prioritized experiment backlog.”

**Boundary example (not a loop problem):** “Write copy for our landing page headline.” Response: this is primarily copywriting/positioning, not loop design; clarify the goal and use `copywriting` or a messaging skill instead.

---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/INTAKE.md

```markdown
# Intake (Question Bank)

Ask up to 5 questions at a time. If answers are missing, proceed with explicit assumptions and label confidence.

## A) Goal + constraints (required)
1) What is the growth goal (metric + target + date)? (e.g., WAU, revenue, activated users)
2) Which segment(s) matter most right now (ICP, persona, geo, plan tier)?
3) What are the key constraints: budget, team capacity, brand risk, platform policies, legal/privacy, timebox?
4) Are we optimizing an existing loop, or do we need a new loop? (If unclear, default to a balanced plan.)

## B) Product + value moment (required)
5) What is the “core value moment” (the action that means the user got value)?
6) What is the primary user job-to-be-done / use case?
7) What are the top reasons users churn or don’t activate (if known)?

## C) Baseline growth system (best available)
8) What currently drives acquisition? (top 3 sources + rough volumes)
9) What does the funnel look like? (visit → signup → activation → retained)
10) What retention/engagement proxy do you have (e.g., D7/D30, WAU/MAU, repeat purchase)?
11) Do users share/invite today? If yes, how often and through what mechanism?
## D) Unit economics (needed for paid loops)
12) Rough LTV (or ARPA × gross margin × expected lifetime) and CAC today (if any)?
13) Any payback constraints? (e.g., 3 months, 12 months)
14) Gross margin and variable costs that affect paid viability?

## E) Channels + distribution context
15) Which channels have you tried and what happened?
16) Where does your audience already spend attention? (communities, platforms, search, partners)
17) Any channel-specific strengths/weaknesses? (e.g., strong content, strong product virality, strong sales motion)

## F) Measurement + tooling
18) What analytics tooling exists today (events, attribution, dashboards)?
19) Any known instrumentation gaps that make measurement hard?

## Data handling rules
- Do not request secrets (API keys, tokens).
- If the user data includes PII, ask for aggregated metrics or redacted excerpts.
```

### references/TEMPLATES.md

```markdown
# Templates (Copy/Paste)

Use these templates to produce a **Growth Loop Design Pack**. If writing files, keep everything under a user-specified folder (e.g., `docs/growth-loops/`).
---

## 1) Context snapshot

**Product:**
**Stage:** pre-PMF / early PMF / growth / mature
**Target segment(s):**
**Growth goal (metric + target + date):**
**Decision this informs:**
**Current primary channels:**
**Constraints (budget/capacity/brand/policy/legal/privacy):**
**Core value moment (definition):**

---

## 2) Loop inventory + baseline

### Baseline (best available)

| Area | Metric / definition | Current | Source | Notes / confidence |
|---|---|---:|---|---|
| Acquisition volume | | | | |
| Activation rate | | | | |
| Retention/engagement proxy | | | | |
| Referral/share | | | | |
| Unit economics (if known) | LTV, margin, CAC, payback | | | |

### Existing loops (if any)

| Loop name | Type | Input → Action → Output → Feedback | Strength today (H/M/L) | Evidence |
|---|---|---|---|---|
| | | | | |

---

## 3) Loop hypothesis card (for candidates)

**Loop name:**
**Type:** viral/referral / content/UGC / SEO / partner/integration / sales / paid
**Target segment:**
**Core mechanism (1 sentence):**

### Loop model
- **Input:** (what starts the loop?)
- **Action:** (what users/teams do)
- **Output:** (what gets produced: invites, content, spend, integrations)
- **Feedback:** (how output creates new input)
- **Cycle time:** (how long for one loop turn?)
### Preconditions
- Product prerequisites:
- Operational prerequisites:
- Risk/policy constraints:

### Success definition
- **Loop KPI:**
- **Leading indicators (early):**
- **Lagging indicator (business):**

---

## 4) Loop map (qualitative model)

### Micro loops → macro loop table

| Micro loop | Input | Action | Output | Feedback path | Primary bottleneck |
|---|---|---|---|---|---|
| | | | | | |

### Optional Mermaid diagram (if supported)

```mermaid
flowchart LR
  A[Acquisition input] --> B[Activation/value moment]
  B --> C[Loop output: invites/content/$$]
  C --> A
```

---

## 5) Channel fit triad (Customer × Business × Medium)

| Candidate channel | Customer need/context | Business goal fit | Medium strength match (audio/video/text/interactive) | Verdict (Go/No-go) | Notes |
|---|---|---|---|---|---|
| | | | | | |

---

## 6) Paid loop feasibility gate

Use this section if recommending paid acquisition.

**Known/estimated:**
- LTV (gross):
- Gross margin %:
- Target CAC:
- Payback target (months):

**Gate checks (directional):**
- LTV × margin supports target CAC (and payback window is realistic)
- Retention is strong enough that CAC won’t be “wasted”
- Attribution and conversion tracking are good enough to learn

**Verdict:** viable / not yet viable / needs more data
**If not yet viable, prerequisites:** (e.g., retention, monetization, onboarding, measurement)

---

## 7) Loop scorecard (pick top 1–2)

| Loop candidate | Impact | Confidence | Effort | Cycle time | Risk | Notes | Total |
|---|---:|---:|---:|---:|---:|---|---:|
| | | | | | | | |

Scoring tip: use 1–5 per column, define what “5” means for your context.
---

## 8) Experiment backlog

| Priority | Experiment | Loop | Hypothesis (mechanism) | Metric (leading) | Metric (lagging) | Effort | Timebox | Dependencies |
|---:|---|---|---|---|---|---|---|---|
| 1 | | | | | | | | |

---

## 9) Measurement + instrumentation plan

### Metrics

| Loop | KPI (headline) | Leading indicators | Data source | Notes |
|---|---|---|---|---|
| | | | | |

### Required events/properties

| Event name | When it fires | Properties | Used for |
|---|---|---|---|
| | | | |

---

## 10) 30/60/90 plan

### Next 30 days (de-risk + first tests)
-

### 60 days (iterate toward a working loop)
-

### 90 days (scale winners, cut losers)
-

---

## 11) Risks / Open questions / Next steps

**Risks**
-

**Open questions**
-

**Next steps**
-
```

### references/CHECKLISTS.md

```markdown
# Checklists (Quality Gate)

Use this checklist before finalizing a Growth Loop Design Pack.

## A) Scope + decision clarity
- [ ] The growth goal is explicit (metric + target + date).
- [ ] Target segment(s) are defined (not “everyone”).
- [ ] Constraints are stated (budget, capacity, brand risk, platform policies, legal/privacy).
- [ ] The decision this work informs is explicit (which loop(s) we invest in; whether paid is viable; innovate vs optimize split).

## B) Baseline is grounded
- [ ] Baseline includes at least one metric for acquisition, activation, and retention/engagement.
- [ ] Existing loops (if any) are documented with evidence.
- [ ] The core value moment is defined and consistent with the product’s value proposition.

## C) Loop model quality (qualitative)
- [ ] Micro loops are specified as input → action → output → feedback.
- [ ] A macro loop connects the micro loops (shared understanding of “how we grow”).
- [ ] Cycle time is estimated per loop (how long for one turn).
- [ ] Bottlenecks are named (what limits compounding today).

## D) Prioritization + gating
- [ ] Loop scorecard is present and picks top 1–2 loops (with rationale).
- [ ] Channel fit triad is applied (Customer × Business × Medium).
- [ ] Paid loop feasibility is gated by unit economics and measurement feasibility (if applicable).

## E) Measurement + learning plan
- [ ] Each selected loop has a KPI + leading indicators.
- [ ] Instrumentation requirements are specified (events/properties) or gaps are called out.
- [ ] Experiments have hypotheses, metrics, and timeboxes.

## F) Safety + ethics
- [ ] Mechanisms are policy-compliant and avoid manipulative/dark-pattern tactics.
- [ ] Risks include brand/trust and platform policy considerations.

## G) Required closing section
- [ ] The pack includes **Risks**, **Open questions**, and **Next steps**.
```

### references/RUBRIC.md

```markdown
# Rubric (Score 1–5)

Score the Growth Loop Design Pack across dimensions. Use scores to decide whether to execute as-is or iterate.

## 1) Decision usefulness
1 = No decision; mostly brainstorming
3 = Decision is named but implications are fuzzy
5 = Clear decision + clear “what we’ll do differently” based on outcomes

## 2) Loop clarity (micro + macro)
1 = Tactics list; no feedback paths
3 = Some loops defined but weak connections/bottlenecks
5 = Clear micro loops + coherent macro loop; bottlenecks explicit; explainable in 60 seconds

## 3) Evidence grounding
1 = No baseline metrics; no assumptions labeled
3 = Baseline exists but is incomplete or confidence unclear
5 = Baseline + assumptions + confidence are explicit; claims tied to evidence or labeled hypotheses

## 4) Prioritization quality
1 = No scorecard; “pick everything”
3 = Scorecard exists but selection rationale is thin
5 = Scorecard drives a clear top 1–2 selection with trade-offs and rationale

## 5) Unit economics + paid gating (if applicable)
1 = Recommends paid without LTV/CAC/payback reasoning
3 = Mentions unit economics but no clear gate
5 = Paid loop viability is explicitly gated; prerequisites listed if not viable

## 6) Measurement + instrumentation readiness
1 = Metrics vague; no instrumentation plan
3 = Metrics defined but data sources/gaps unclear
5 = KPI + leading indicators; required events/properties; gaps + fixes are explicit

## 7) Actionability
1 = Vague recommendations (“go viral”)
3 = Some experiments but weak sequencing/dependencies
5 = Prioritized backlog + 30/60/90 plan; next 2 weeks unblocked

## 8) Safety + ethics
1 = Encourages manipulative tactics / ignores policy risk
3 = Mentions safety but weak application
5 = Explicit policy/ethics constraints; risks and mitigations are concrete

### Interpretation (suggested)
- **30–40:** Strong; execute and review cadence weekly.
- **22–29:** Good directional; tighten weak areas before big bets.
- **<22:** Rework inputs/baseline/loop definitions; too speculative.
```

### references/WORKFLOW.md

```markdown
# Workflow (Expanded)

This file expands the steps from `../SKILL.md` with additional guidance, heuristics, and common pitfalls.

## Step 1 — Intake + goal framing

Anchor on a decision (what you’ll do differently), not curiosity. Common decisions:
- Invest in **new loops** vs optimize existing channels
- Choose the **top 1–2 loops** for the next quarter
- Decide whether **paid acquisition** is viable (unit economics gate)

Pitfall: “We need growth” without a metric/timebox yields a generic brainstorming doc.

## Step 2 — Baseline the current growth system

Even if you’re “pre-loops,” you still have a system:
- a channel that brings users in
- a value moment
- some retention (even if poor)

Baseline should include:
- Acquisition: where users come from + rough volume
- Activation: what % hits the value moment
- Retention: a proxy (D7/D30, WAU/MAU, repeat purchase)
- Referral/share: any organic sharing/invites today

Pitfall: designing loops without knowing where the real bottleneck is (activation vs retention vs distribution).

## Step 3 — Generate loop candidates (micro + macro)

Treat loop design as hypothesis generation.
Loop categories (choose what fits your product):
- Viral/referral loop (invites, sharing, collaboration)
- Content/UGC loop (users create content that acquires users)
- SEO loop (content → search traffic → more content)
- Partner/integration loop (distribution via ecosystems)
- Sales loop (champions → expansion → internal referrals)
- Paid loop (spend → acquisition → revenue → reinvest)

Heuristic: in fast-moving categories, allocate meaningful effort to “bigger bet” loop innovation, not just incremental tuning.

Pitfall: listing tactics (“do TikTok”) without specifying the feedback mechanism (how outputs become future inputs).

## Step 4 — Model loops qualitatively (shared understanding)

You want a model that is:
- simple enough to explain
- explicit about assumptions
- clear about bottlenecks

Represent loops using:
- a table (input → action → output → feedback)
- and optionally a diagram (e.g., Mermaid) if your environment supports it

Pitfall: “funnel thinking” only—loops require a feedback path and cycle time.

## Step 5 — Quantify + prioritize (scorecard + gates)

Use best-available numbers to approximate throughput. Simple loop math (example):

`new_users_per_week ≈ active_users × share_rate × invite_accept_rate`

Prioritization dimensions:
- Impact (if it works, how big)
- Confidence (evidence, precedent, prerequisites)
- Effort (engineering, content ops, sales ops)
- Cycle time (how fast you’ll learn / compound)
- Risk (policy, brand, trust)

Gates:
- **Paid loop gate:** if LTV/margins can’t support CAC/payback, do not recommend scaling paid; recommend fixing retention/monetization first.
- **Channel fit triad:** customer need × business goal × medium strength must align (don’t force audio if your product needs visuals).

Pitfall: choosing a loop that looks exciting but has a 6–12 month cycle time when you need learning in 2–4 weeks.

## Step 6 — Measurement plan (metrics + instrumentation)

A loop is only real if it’s measurable.
Define:
- Loop KPI (one headline measure per loop)
- Leading indicators (early signals)
- Required instrumentation (events + properties)
- Attribution assumptions (what you can/can’t infer)

Pitfall: shipping experiments without event definitions; you can’t learn and the loop “dies in analytics.”

## Step 7 — Experiment plan + quality gate

Translate loops into a practical plan:
- 30 days: de-risk prerequisites + first tests
- 60 days: iterate toward a working version
- 90 days: scale the winners + cut losers

Quality gate reminders:
- Run [CHECKLISTS.md](CHECKLISTS.md)
- Score with [RUBRIC.md](RUBRIC.md)
- Always include **Risks / Open questions / Next steps**
```
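The simple loop math from Step 5 of the workflow file can be sketched and compounded over time. A minimal illustration; the 20% share rate, 25% acceptance rate, and 90% weekly retention below are assumptions for the example, not benchmarks.

```python
# Hedged sketch of the Step 5 loop math:
#   new_users_per_week ≈ active_users × share_rate × invite_accept_rate
# compounded week over week with a retention assumption.

def loop_throughput(active_users: float, share_rate: float,
                    accept_rate: float) -> float:
    """Approximate new users generated by one loop turn (here: one week)."""
    return active_users * share_rate * accept_rate


def project_weeks(active_users: float, share_rate: float, accept_rate: float,
                  retention: float, weeks: int) -> list[float]:
    """Each week: retained users plus the loop's output feed the next turn."""
    history = [active_users]
    for _ in range(weeks):
        new = loop_throughput(history[-1], share_rate, accept_rate)
        history.append(history[-1] * retention + new)
    return history


# Illustrative inputs: 10k actives, 20% share, 25% acceptance, 90% weekly retention
trajectory = project_weeks(10_000, 0.20, 0.25, 0.90, weeks=4)
print([round(x) for x in trajectory])
# → [10000, 9500, 9025, 8574, 8145]
```

Note the shape of the result: with share × acceptance = 0.05 per week against 10% weekly churn, the loop output does not cover churn and the base shrinks. That is the point of the Step 5 gates: a loop only compounds when its per-turn output exceeds what churn removes.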