
skill-from-masters

Help users create high-quality skills by discovering and incorporating proven methodologies from domain experts. Use this skill BEFORE skill-creator when users want to create a new skill - it enhances skill-creator by first identifying expert frameworks and best practices to incorporate. Triggers on requests like "help me create a skill for X" or "I want to make a skill that does Y". This skill guides methodology selection, then hands off to skill-creator for the actual skill generation.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 1,313
Hot score: 99
Updated: March 20, 2026
Overall rating: C (5.1)
Composite score: 5.1
Best-practice grade: C (62.8)

Install command

npx @skill-hub/cli install gbsoss-skill-from-masters-skill-from-masters

Repository

GBSOSS/skill-from-masters

Skill path: skill-from-masters


Open repository

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: GBSOSS.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install skill-from-masters into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/GBSOSS/skill-from-masters before adding skill-from-masters to shared team environments
  • Use skill-from-masters for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: skill-from-masters
description: Help users create high-quality skills by discovering and incorporating proven methodologies from domain experts. Use this skill BEFORE skill-creator when users want to create a new skill - it enhances skill-creator by first identifying expert frameworks and best practices to incorporate. Triggers on requests like "help me create a skill for X" or "I want to make a skill that does Y". This skill guides methodology selection, then hands off to skill-creator for the actual skill generation.
---

# Skill From Masters

Create skills that embody the wisdom of domain masters. This skill helps users discover and incorporate proven methodologies from recognized experts before generating a skill.

## Core Philosophy

Most professional domains have outstanding practitioners who have codified their methods through books, talks, interviews, and frameworks. A skill built on these proven methodologies is far more valuable than one created from scratch.

The goal is not just "good enough" — it's reaching the highest level of human expertise in that domain.

## Critical Requirements for Non-Technical Skills

**Technical skills have standard answers.** Writing code, debugging, or configuring systems — these have relatively objective quality bars.

**Non-technical skills vary dramatically in quality.** Skills involving decision-making, communication, persuasion, or judgment can range from mediocre to world-class. The difference comes from incorporating deep expertise.

For non-technical skills (writing, sales, hiring, product decisions, etc.), follow these requirements:

### 1. **Narrow, Specific Task Definition** ⚠️ CRITICAL
- The task must be **extremely specific and well-defined**
- ❌ BAD: "Write a sales email" (too broad)
- ✅ GOOD: "Write a B2B cold outreach email to enterprise CTOs"
- ✅ GOOD: "Write a project status report email to executive stakeholders"
- Different contexts require completely different skills
- If the user's request is too broad, help them narrow it down through questions

### 2. **Model Selection: Opus Required** 🎯 MANDATORY
- Non-technical skills MUST use **Claude Opus** (claude-opus-4-5)
- DO NOT use Sonnet, Haiku, or any other model
- Opus has the reasoning depth needed for nuanced, judgment-based tasks
- The quality difference is substantial for these domains

### 3. **Methodology Research: Clear & Reliable Conclusions** 🔍 ESSENTIAL
- Continue searching and communicating until you reach **very clear, reliable conclusions**
- Don't stop at surface-level research
- Sources to exhaust:
  - The model's own training knowledge
  - Web search for current best practices
  - Golden examples from top practitioners
  - Counter-examples and common mistakes
- Keep iterating until the methodology is crystal clear and well-validated

### 4. **Consider Plan Mode for Complex Tasks** 🎯 RECOMMENDED
- For complex or multi-faceted skills, prefer thinking through the approach first
- Better to think more before acting
- Use plan mode to structure the methodology research and synthesis

### 5. **Test Broadly, Then Iterate** ✅ REQUIRED
- Have the agent think through **extensive test scenarios**
- Test across diverse contexts, edge cases, and failure modes
- Review test results and optimize before finalizing
- Quality emerges from iteration, not first drafts

## Workflow

**Before Starting: Consider Plan Mode** 🎯

For complex or high-stakes skill creation (especially non-technical skills), consider using **plan mode**:
- Allows more upfront thinking before taking action
- Helps structure the methodology research systematically
- Reduces the risk of missing important considerations
- Better for skills involving judgment, persuasion, or complex decision-making

To use plan mode, the user can invoke it explicitly, or you can suggest: "This is a complex skill involving [decision-making/communication/etc]. Would you like me to use plan mode to think through the methodology research more carefully?"

### Step 1: Understand and Narrow the Skill Intent

**CRITICAL FOR NON-TECHNICAL SKILLS:** Ensure the task is narrow and specific enough.

Most users will start with a broad request. Your job is to help them narrow it down systematically until the task is specific enough that methodology and quality criteria are unambiguous.

#### The 5-Layer Narrowing Framework

Use this systematic approach to guide users from broad to specific:

---

#### **Layer 1: Domain Identification**

Identify the core domain(s) of the broad task.

**Template questions:**
```
"[Broad task] can mean different things depending on the domain. Which of these is closest to what you need?
- [Domain A - description]
- [Domain B - description]
- [Domain C - description]
- Something else?"
```

**Example:**
- User: "I want a skill for making decisions"
- You: "Decision-making varies greatly by domain. Which is closest?
  - Product decisions (features, roadmap, prioritization)?
  - Hiring decisions (candidate evaluation, role definition)?
  - Investment decisions (startups, projects, resource allocation)?
  - Strategic business decisions (market entry, partnerships)?
  - Something else?"

---

#### **Layer 2: Context Constraints (5W1H)**

Once you know the domain, use structured questions to narrow the context:

**Template questions:**
```
- **WHO**: Who will use this skill? (Role, seniority, expertise level)
- **WHAT**: What specific output or decision does it produce?
- **WHERE**: What organizational context? (Startup, enterprise, remote, industry)
- **WHEN**: At what stage/timing is it used? (Launch, crisis, quarterly, daily)
- **WHY**: What's the primary goal/outcome? (Alignment, approval, action, learning)
- **HOW**: What constraints exist? (Time limits, process requirements, stakeholders)
```

**Example:**
- User: "Product decisions"
- You: "Let me understand the context better:
  - WHO: Are you a PM, founder, or executive making these decisions?
  - WHAT: Is this about choosing features, prioritization, or go/no-go launches?
  - WHERE: Startup (limited resources) or enterprise (many stakeholders)?
  - WHEN: Daily sprint planning or quarterly roadmap decisions?
  - WHY: Main goal - team alignment, exec approval, or customer validation?"

---

#### **Layer 3: Comparative Narrowing**

Present 2-3 similar but distinct scenarios. Force a choice or clarification.

**Template:**
```
"Within [narrowed domain], I want to make sure we're targeting the right scenario. Which is closest?

A. [Specific scenario A - with context]
B. [Specific scenario B - with context]
C. [Specific scenario C - with context]
D. None of these / Combination

Which matches best?"
```

**Example:**
- User: "Feature prioritization for a PM at a B2B SaaS startup"
- You: "Got it. Let me narrow further:

  A. **Weekly sprint planning**: Prioritizing stories within a committed roadmap
  B. **Quarterly roadmap planning**: Deciding which themes/initiatives for next quarter
  C. **Reactive prioritization**: Customer escalation or urgent bug vs planned work
  D. **Resource allocation**: Multiple products/teams competing for engineering time

  Which scenario is most common for you, or do you need to handle multiple?"

---

#### **Layer 4: Boundary Validation (Via Negativa)**

Confirm what the skill does NOT include. This clarifies the edges.

**Template:**
```
"To make sure we're aligned, let me confirm the boundaries. This skill:
- ✅ INCLUDES: [Specific scenario X]
- ✅ INCLUDES: [Specific scenario Y]
- ❌ EXCLUDES: [Related but different scenario A]
- ❌ EXCLUDES: [Related but different scenario B]
- ❌ EXCLUDES: [Related but different scenario C]

Does this match your needs?"
```

**Example:**
- User confirms: "Quarterly roadmap planning for a B2B SaaS PM"
- You: "Perfect. To confirm scope, this skill:
  - ✅ INCLUDES: Quarterly planning with engineering constraints
  - ✅ INCLUDES: Balancing customer requests, tech debt, and strategic bets
  - ❌ EXCLUDES: Daily sprint story prioritization
  - ❌ EXCLUDES: Crisis/reactive priority decisions
  - ❌ EXCLUDES: Multi-product portfolio prioritization (VP-level)

  Is this right?"

---

#### **Layer 5: Concrete Case Anchoring**

Ask for a real, specific example from their experience. This locks in the scope.

**Template:**
```
"Can you describe a specific recent situation where you needed this skill? Include:
- What was the situation/context?
- What inputs did you have?
- What decision/output did you need to produce?
- What made it difficult or time-consuming?
- What would an ideal outcome have looked like?"
```

**Example:**
- User: "Yes, the scope is right"
- You: "Great! One last thing to anchor our methodology research. Can you describe a recent quarterly planning session? For example:
  - How many features/initiatives were you choosing between?
  - What stakeholders were involved?
  - What made it hard? (Conflicting data? Stakeholder disagreement? Technical uncertainty?)
  - How long did the process take?
  - What would have made it easier or better?"

---

#### Stop Condition: Is It Narrow Enough?

**Stop narrowing when you can answer YES to all:**

1. ✅ **Unique methodology**: Would experts in this specific scenario have unique advice (not generic)?
2. ✅ **Clear quality bar**: Could someone judge if the output is "excellent" vs "mediocre"?
3. ✅ **Specific constraints**: Are there context-specific rules, tradeoffs, or failure modes?
4. ✅ **Concrete example**: Has the user described a real scenario where they'd use this?
5. ✅ **Excludes alternatives**: Is it clear what related tasks this does NOT cover?

**If ANY answer is NO, keep narrowing.**
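As a minimal illustration, the all-or-nothing nature of this check can be sketched in Python (the condition names below are paraphrases of the list above, not identifiers used anywhere else):

```python
# Sketch of the stop condition: narrowing is done only when every
# one of the five checks passes; any NO means keep narrowing.
STOP_CONDITIONS = [
    "unique_methodology",    # experts would give scenario-specific advice
    "clear_quality_bar",     # "excellent" vs "mediocre" is judgeable
    "specific_constraints",  # context-specific rules/tradeoffs exist
    "concrete_example",      # user described a real scenario
    "excludes_alternatives", # related tasks are explicitly out of scope
]

def narrow_enough(answers: dict[str, bool]) -> bool:
    """Return True only if ALL five stop conditions are satisfied."""
    return all(answers.get(c, False) for c in STOP_CONDITIONS)

# Example: one failing condition is enough to continue narrowing.
answers = {c: True for c in STOP_CONDITIONS}
answers["concrete_example"] = False
print(narrow_enough(answers))  # False: keep narrowing
```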

---

#### Common Mistakes: Still Too Broad

Even after narrowing, watch for these signs the scope is still too broad:

❌ **Too broad:**
- "Write better emails" → Includes too many email types
- "Make product decisions" → Covers too many decision types
- "Create marketing content" → Content types vary wildly
- "Improve team communication" → Communication contexts differ greatly

✅ **Narrow enough:**
- "Write B2B cold outreach emails to enterprise CTOs"
- "Quarterly roadmap prioritization for B2B SaaS PMs with 3-5 eng team"
- "Create LinkedIn thought leadership posts for technical founders"
- "Run effective incident postmortems for distributed systems teams"

**Rule of thumb:** If you can describe the skill in one sentence with specific role, context, and output type, you're probably narrow enough.

---

#### Quick Reference: Narrowing Question Flow

```
Broad Request
    ↓
Layer 1: "Which domain?" → [Pick one]
    ↓
Layer 2: "5W1H context?" → [Answer constraints]
    ↓
Layer 3: "Which specific scenario?" → [Choose from 2-3 options]
    ↓
Layer 4: "What's excluded?" → [Confirm boundaries]
    ↓
Layer 5: "Give me a real example" → [Describe concrete case]
    ↓
Check Stop Condition → [All 5 YES?]
    ↓
✅ Narrow enough → Proceed to Step 2
❌ Still broad → Continue narrowing
```

### Step 2: Identify Skill Type

**CRITICAL:** Different skill types require fundamentally different methodologies and quality criteria.

Consult `references/skill-taxonomy.md` for the full taxonomy. The core types are:

| Type | Core Operation | Key Question |
|---|---|---|
| **Summary** | Compress | Need comprehensive coverage? |
| **Insight** | Extract | Need to find what really matters? |
| **Generation** | Create | Need new content created? |
| **Decision** | Choose | Need to make a choice? |
| **Evaluation** | Judge | Need quality judgment? |
| **Diagnosis** | Trace | Need to find root cause? |
| **Persuasion** | Bridge | Need to change someone's mind? |
| **Planning** | Decompose | Need a roadmap? |
| **Research** | Discover | Need knowledge gathered? |
| **Facilitation** | Elicit | Need to extract info from others? |
| **Transformation** | Map | Need format conversion? |

**How to Identify:**

Ask the user: "Based on what you described, this sounds like a **[Type]** skill—the goal is to [core operation]. Is that right?"

**Common Confusions to Clarify:**
- Summary vs Insight: "Do you need comprehensive coverage, or just the key signals that matter?"
- Decision vs Evaluation: "Do you need to make a choice, or judge the quality of something?"
- Research vs Insight: "Do you need to gather information, or interpret what it means?"

**Why This Matters:**

Each type has different:
- **Methodology sources** to draw from
- **Quality criteria** to evaluate output
- **Output format** conventions

Document the identified type before proceeding.

### Step 3: Identify Relevant Domains

Map the skill to one or more methodology domains. A single skill may span multiple domains.

Example mappings:
- "Sales email skill" → Sales, Writing, Persuasion
- "User interview skill" → User Research, Interviewing, Product Discovery
- "Presentation skill" → Storytelling, Visual Design, Persuasion
- "Code review skill" → Software Engineering, Feedback, Communication

### Step 4: Surface Expert Methodologies (Until Crystal Clear)

**GOAL:** Don't stop until you have **very clear, reliable conclusions** about the best methodology.

**Layer 1: Local Database**
Consult `references/methodology-database.md` for known frameworks.

**Layer 2: Web Search for Experts**
Search the web to discover additional experts and methodologies:
- Search: "[domain] best practices expert"
- Search: "[domain] framework methodology"
- Search: "[domain] master practitioner"

**Layer 3: Deep Dive on Selected Experts**
For promising experts, search for their original content:
- Search: "[expert name] methodology interview"
- Search: "[expert name] [domain] transcript"
- Search: "[expert name] framework explained"

Fetch and read primary sources when available (articles, talk transcripts, blog posts).
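The Layer 2 and Layer 3 query templates above can be expanded mechanically. A hypothetical sketch (the `build_queries` helper is illustrative, not part of any tool):

```python
# Sketch: expand the Step 4 search templates for a domain and its experts.
# The template strings mirror the query patterns listed above.
DOMAIN_QUERIES = [
    "{domain} best practices expert",
    "{domain} framework methodology",
    "{domain} master practitioner",
]
EXPERT_QUERIES = [
    "{expert} methodology interview",
    "{expert} {domain} transcript",
    "{expert} framework explained",
]

def build_queries(domain: str, experts: list[str]) -> list[str]:
    """Combine domain-level and expert-level query templates."""
    queries = [t.format(domain=domain) for t in DOMAIN_QUERIES]
    for expert in experts:
        # str.format ignores unused keyword arguments, so one call covers
        # templates with and without the {domain} placeholder.
        queries += [t.format(expert=expert, domain=domain) for t in EXPERT_QUERIES]
    return queries
```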

**Layer 4: Keep Iterating Until Clear** ⚠️ NEW
- Don't stop at the first search results
- If methodologies seem unclear or conflicting, dig deeper
- Look for:
  - Model's own knowledge (you have extensive training data)
  - Current web best practices
  - Golden examples from practitioners
  - Anti-patterns and common mistakes
- **Continue the research loop** until you can confidently say: "This is the proven way to do this"

For each relevant domain, present:
- Key experts and their core contributions
- Specific frameworks, principles, or processes
- Source materials (books, talks, interviews)
- **Confidence level** in the methodology (keep searching if low)

### Step 5: Find Golden Examples

Before finalizing methodology selection, search for exemplary outputs:
- Search: "best [output type] examples"
- Search: "[output type] template [top company]"
- Search: "award winning [output type]"

Understanding what excellence looks like helps define the quality bar.

### Step 6: Collaborative Selection

Present the methodologies to the user and discuss:
- Which frameworks resonate with their goals?
- Are there conflicts between methodologies to resolve?
- Should they combine multiple approaches?
- Any specific principles they want to emphasize or exclude?

Guide the user to select 1-3 primary methodologies that will form the skill's foundation.

### Step 7: Extract Actionable Principles

For each selected methodology, search for and distill:

**The Why (Core Principles)**
- Search: "[methodology] core principles"
- Search: "why [methodology] works"

**The How (Concrete Process)**
- Search: "[methodology] step by step"
- Search: "[methodology] implementation guide"

**The What (Quality Criteria)**
- Search: "[methodology] checklist"
- Search: "[methodology] evaluation criteria"

**The Pitfalls (Common Mistakes)**
- Search: "[domain] common mistakes"
- Search: "[methodology] pitfalls avoid"

Fetch primary sources to get exact wording and nuance, not just summaries.

### Step 8: Cross-Validate

Compare insights across multiple sources:
- What principles appear consistently? (high confidence)
- Where do experts disagree? (flag for user)
- What's unique to each approach? (differentiation)

Synthesize a coherent framework that takes the best from each source.
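One way to picture the cross-validation step, using expert names that appear later in this document as stand-ins (the principle sets here are illustrative, not research results):

```python
from collections import Counter

# Sketch of Step 8: tally which principles recur across expert sources.
sources = {
    "Cagan":  {"problem-first", "outcome over output"},
    "Minto":  {"lead with conclusion", "MECE structure"},
    "Amazon": {"problem-first", "lead with conclusion", "working backwards"},
}

counts = Counter(p for principles in sources.values() for p in principles)
consensus = [p for p, n in counts.items() if n >= 2]  # high confidence
unique = [p for p, n in counts.items() if n == 1]     # differentiation / flag for user
```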

### Step 9: Design Test Scenarios (Before Generation)

**CRITICAL:** Before generating the skill, design comprehensive test scenarios.

Work with the user to identify:

**Diverse Test Cases:**
- Typical scenarios (the common case)
- Edge cases (unusual but valid situations)
- Boundary conditions (where the methodology might break down)
- Failure modes (what could go wrong)

**Context Variations:**
- Different user expertise levels
- Different organizational contexts (startup vs enterprise)
- Different constraints (time, resources, stakeholder complexity)
- Cultural or industry differences

**Quality Validation:**
- What does "excellent" output look like?
- What are the most common mistakes to avoid?
- How will we know if the skill is working?

Document these test scenarios — they'll be used after generation to validate and iterate.

### Step 10: Generate the Skill

With methodologies confirmed and test scenarios designed, **invoke the `skill-creator` skill** to generate the final skill with proper format.

**HOW TO INVOKE:**
```
Use the Skill tool with: skill: "skill-creator:skill-creator"
```

This ensures:
- Proper YAML frontmatter (name, description)
- Correct directory structure
- Validation before packaging
- Imperative writing style (not second person)

**For non-technical skills, CRITICAL:**
- Add `model: opus` in the YAML frontmatter
- This ensures the skill uses Claude Opus, not a weaker model
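For example, the frontmatter of a generated non-technical skill might look like this (the name and description are placeholders drawn from an earlier example in this document):

```yaml
---
name: quarterly-roadmap-prioritization
description: Quarterly roadmap prioritization for B2B SaaS PMs. Use when deciding which themes and initiatives to commit to for the next quarter.
model: opus
---
```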

The generated skill should:

1. Credit the methodology sources in a comment (documenting provenance)
2. Translate expert wisdom into actionable instructions
3. Include concrete examples derived from golden examples found
4. Capture quality criteria as explicit checkpoints
5. Include "don't do this" anti-patterns from pitfall research
6. Match the quality bar of the best human practitioners
7. **Include the test scenarios** as part of the skill's self-validation

### Step 11: Test, Review, and Iterate

**Don't stop at first generation.** Quality emerges through iteration.

1. **Run Test Scenarios**: Apply the skill to each test case designed in Step 9
2. **Evaluate Results**: Compare outputs against quality criteria
3. **Identify Gaps**: Where did the skill fall short?
4. **Refine Methodology**: Do we need additional expert guidance?
5. **Regenerate**: Update the skill based on learnings
6. **Repeat**: Until the skill consistently produces excellent results

Involve the user in this evaluation — they know their domain and can spot nuances.

## Quality Checklist

Before finalizing, verify:

**Scope & Clarity:**
- [ ] Is the task definition narrow and specific enough? (Not too broad)
- [ ] Would someone reading the skill know exactly when to use it?

**Methodology Depth:**
- [ ] Did we search beyond the local database?
- [ ] Did we find primary sources, not just summaries?
- [ ] Did we reach **very clear, reliable conclusions** about the methodology?
- [ ] Did we find golden examples of the output?
- [ ] Did we identify common mistakes to avoid?
- [ ] Did we cross-validate across multiple experts?

**Generation Quality:**
- [ ] For non-technical skills: Did we use **Opus model** for generation?
- [ ] Does the skill encode specific, actionable steps (not vague principles)?
- [ ] Are test scenarios included in the skill?

**Testing & Iteration:**
- [ ] Did we design comprehensive test scenarios beforehand?
- [ ] Did we test the skill across diverse contexts and edge cases?
- [ ] Did we iterate based on test results?
- [ ] Does the skill consistently produce excellent outputs?

**Plan Mode (if applicable):**
- [ ] For complex skills: Did we consider using plan mode?
- [ ] Did we structure our thinking before acting?

## Example Interaction: Full Narrowing Process

This example shows the complete 5-layer narrowing in action.

---

**User**: I want to create a skill for writing product requirement documents.

**Assistant**: Great! PRDs are an important skill. Let me help narrow this down using a systematic approach.

**[Layer 1: Domain Identification]**

PRDs can serve different purposes depending on the audience and goal. Which is closest to what you need?
- **Engineering-focused specs**: Detailed feature requirements for implementation
- **Stakeholder alignment docs**: High-level documents to get buy-in from executives/leadership
- **Design briefs**: Product requirements focused on user experience and design direction
- **External vendor specs**: Technical requirements for third-party development

Which type of PRD do you most often need to write?

**User**: Engineering-focused specs.

**Assistant**: Perfect. Now let me understand the context better.

**[Layer 2: Context Constraints (5W1H)]**

- **WHO**: What's your role? (PM, Senior PM, Product Lead, or Founder?)
- **WHERE**: What type of company? (Startup, mid-size, or enterprise? B2B or B2C?)
- **WHEN**: How often do you write these? (Every sprint? Per quarter? Per major initiative?)
- **WHY**: Primary goal - is it to get eng buy-in, ensure nothing is missed, or facilitate estimation?
- **HOW**: Any constraints - do you follow a specific template? Any required sections?

**User**: I'm a Senior PM at a B2B SaaS company (100 people, growing fast). I write PRDs for major features (usually 3-5 per quarter). Main goal is to ensure engineers have everything they need to start building without constant back-and-forth. No required template, but we try to keep docs under 2 pages.

**Assistant**: Excellent context. Let me narrow further.

**[Layer 3: Comparative Narrowing]**

Within "engineering PRDs for major features," which scenario is most common?

A. **Net-new features**: Building something completely new (new product area, new capability)
B. **Major enhancements**: Significant upgrades to existing features (new workflow, architecture change)
C. **Cross-system features**: Features requiring coordination across multiple teams/services
D. **Customer-driven builds**: Custom features for enterprise customers with tight timelines

Which is most frequent, or do you need to handle multiple types?

**User**: Mostly A and B. Sometimes C, but rarely D.

**Assistant**: Got it. Let me confirm the boundaries.

**[Layer 4: Boundary Validation]**

So this skill should:
- ✅ INCLUDE: Net-new features and major enhancements for engineering teams
- ✅ INCLUDE: B2B SaaS context with 3-10 engineer scope typically
- ✅ INCLUDE: Cross-team coordination when needed
- ❌ EXCLUDE: Small story-level specs (those go directly in Jira/tickets)
- ❌ EXCLUDE: Customer-specific custom builds with tight deadlines
- ❌ EXCLUDE: Executive-facing strategy documents or roadmap presentations

Does this match what you need?

**User**: Yes, exactly!

**Assistant**: Perfect. One final question to anchor the methodology research.

**[Layer 5: Concrete Case Anchoring]**

Can you describe a recent PRD you wrote that was either particularly challenging or time-consuming? For example:
- What feature were you speccing?
- What made it hard to write the PRD?
- How long did it take you?
- Did engineers come back with lots of questions, or did it work well?

**User**: Last month I wrote a PRD for an API rate limiting feature. Took me about 6 hours spread over 3 days. The hard part was explaining the business logic (what counts toward limits, how to handle bursts) clearly enough that engineers understood the intent but could still design the technical solution. We did 2 rounds of clarification before they felt ready to estimate. Would love to get that down to 3 hours and one round of Q&A.

**Assistant**: Perfect! Now I have a crystal-clear picture.

**📌 Final Skill Definition:**

"Create engineering-focused PRDs for B2B SaaS senior PMs writing specs for net-new features and major enhancements. PRDs should be 1-2 pages, clearly communicate business logic and user needs without over-specifying technical implementation, and enable engineering teams to estimate and start building with minimal back-and-forth."

This passes all 5 stop conditions:
1. ✅ Unique methodology - Yes (balancing business logic vs technical freedom)
2. ✅ Clear quality bar - Yes (minimize eng questions, enable estimation)
3. ✅ Specific constraints - Yes (B2B SaaS, 1-2 pages, major features)
4. ✅ Concrete example - Yes (API rate limiting feature, 6→3 hours)
5. ✅ Excludes alternatives - Yes (not small stories, not exec docs)

Ready to move to Step 2: Methodology Research. Since this is a non-technical skill involving communication and judgment, I'd recommend using **plan mode** to think through the methodology research carefully. Should I proceed in plan mode?

**User**: Yes, go ahead.

**Assistant**: [Enters plan mode and structures the research approach]

Let me search systematically for the best methodologies on engineering-focused PRD writing.

[Searches: "PRD best practices expert", "product requirements document methodology for engineers", "best PRD examples B2B SaaS"]

I found several strong frameworks:

**Product Management**:
- Marty Cagan (SVPG): Emphasizes problem-first PRDs, separating problem space from solution space
- Teresa Torres: Continuous discovery, outcome-focused requirements

**Business Writing**:
- Barbara Minto: Pyramid Principle - lead with conclusion, MECE structure
- Amazon: Working backwards from press release, 6-pager narrative format

**Golden Examples Found**:
- Intercom's public PRD template
- Linear's product spec format

**Common Mistakes** (from my search):
- Jumping to solutions before defining the problem
- Writing for engineers instead of for alignment
- Missing success metrics

Let me dig deeper into primary sources to get very clear conclusions...

[Continues searching until methodology is crystal clear]

Which of these approaches resonates? Or should I search deeper into any of them?

**User**: I like Amazon's approach combined with Cagan's problem-first thinking. Can you find more details on Amazon's method?

**Assistant**: [Searches: "Amazon 6-pager template", "Amazon working backwards methodology", "Amazon PR FAQ process"]

[Fetches relevant articles and synthesizes key principles]

Here's what I found from primary sources...

[After thorough methodology research]

**Assistant**: Before I generate the skill, let me design test scenarios. Let's think through diverse cases:

**Test Scenarios:**
1. Simple feature: Adding a filter to existing dashboard
2. Complex feature: New AI-powered recommendation engine
3. Edge case: Feature requiring significant backend architecture changes
4. Failure mode: Vague problem statement that needs refinement

Does this cover the range of PRDs you typically write? Any other scenarios we should test?

**User**: Also add: Feature that needs cross-team coordination.

**Assistant**: Great addition. Now I'll invoke **skill-creator** to generate the skill with proper format.

[Invokes: Skill tool with skill: "skill-creator:skill-creator"]

The skill-creator will guide me through:
1. Creating proper YAML frontmatter (name, description, model: opus)
2. Writing in imperative form
3. Structuring the skill directory
4. Validating the output

[After skill-creator completes]

I've generated the skill. Let me test it against our scenarios...

[Reviews results, identifies gaps, iterates]

[Continues Step 11: refinement based on test results]


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/skill-taxonomy.md

```markdown
# Skill Taxonomy: Non-Technical Skill Categories

> This document defines a taxonomy for non-technical skills based on their **core cognitive operation**. Different skill types require different methodologies, quality criteria, and generation approaches.

## Why Taxonomy Matters

Non-technical skills are NOT all the same. A "writing skill" and an "analysis skill" require fundamentally different:
- **Methodologies** to draw from
- **Quality criteria** to evaluate output
- **Prompting strategies** to generate effectively

Misclassifying a skill leads to mediocre results. For example:
- Treating an **Insight** task as **Summary** → Gets comprehensive but shallow output
- Treating a **Decision** task as **Research** → Gets information but no commitment

---

## The Core Taxonomy

### Overview Table

| Type | Core Operation | Input → Output |
|---|---|---|
| Summary | Compress | Many signals → Fewer, preserving coverage |
| Insight | Extract | Many signals → Few KEY signals that explain WHY |
| Generation | Create | Constraints → New content |
| Decision | Choose | Options + criteria → Selection + rationale |
| Evaluation | Judge | Artifact → Quality score + gaps |
| Diagnosis | Trace | Symptoms → Root cause + fix |
| Persuasion | Bridge | My goal → Their action |
| Planning | Decompose | Goal → Path with milestones |
| Research | Discover | Questions → Structured answers |
| Facilitation | Elicit | Hidden knowledge → Surfaced knowledge |
| Transformation | Map | Format A → Format B |

---

## Detailed Definitions

### 1. Summary

**Core Operation**: Compress

**Essence**: Reduce information volume while preserving coverage. The goal is completeness in fewer words.

**Transformation**: Many → Fewer (equal weight compression)

**Example Output**:
```
"The candidate has experience in: backend development (5 years),
microservices (3 projects), cloud platforms (AWS, GCP), and
some AI/ML exposure. Education includes CS degree from..."
```

**Quality Criteria**:
- Completeness: Did it cover all important aspects?
- Accuracy: Is the compression faithful to the original?
- Structure: Is it well-organized and scannable?

**Methodology Sources**:
- Barbara Minto (Pyramid Principle)
- MECE frameworks
- Information architecture

**When to Use**: When the user needs a comprehensive overview, not a judgment.

---

### 2. 洞察类 (Insight)

**Core Operation**: Extract the exceptional

**Essence**: Find the few signals that actually matter. Filter out noise to reveal meaning.

**Transformation**: Many → Few (finding what's decisive)

**Analogy**:
- Summary = Panoramic photo (everything visible, no focus)
- Insight = Finding the focal point (background fades)

**Example Output**:
```
"The KEY issue with this candidate: microservices experience
is real but language mismatch is significant. The AI claim
is likely superficial—no quantified results, vague language."
```

**Quality Criteria**:
- Depth: Does it answer WHY, not just WHAT?
- Prioritization: Are insights ranked by importance?
- Actionability: Can you act on this insight?

**Methodology Sources**:
- NN/g Data → Findings → Insights framework
- Signal detection theory
- Topgrading (A-player identification)

**When to Use**: When the user needs to understand what really matters, not everything.

---

### 3. 生成类 (Generation)

**Core Operation**: Create under constraints

**Essence**: Produce new content that didn't exist, while meeting requirements.

**Transformation**: Constraints/Requirements → New artifact

**Example Tasks**:
- Write a cold outreach email
- Draft a PRD
- Create a presentation

**Quality Criteria**:
- Fit: Does it meet all constraints?
- Effectiveness: Will it achieve its goal?
- Style: Is tone/voice appropriate for audience?

**Methodology Sources**:
- Domain-specific writing frameworks
- Copywriting principles (if persuasive)
- Genre conventions

**When to Use**: When the user needs content created, not analyzed.

---

### 4. 决策类 (Decision)

**Core Operation**: Choose and commit

**Essence**: Weigh trade-offs between incommensurable factors and make a choice.

**Transformation**: Options + Criteria → Selection + Rationale

**Key Challenge**: Factors often can't be directly compared (speed vs quality, cost vs risk).

**Example Output**:
```
"Recommendation: Go with Option B.

Rationale: While Option A is cheaper, Option B's
time-to-market advantage outweighs the 20% cost increase
given current competitive pressure."
```

**Quality Criteria**:
- Clarity: Is the recommendation unambiguous?
- Rationale: Is the reasoning explicit and logical?
- Trade-off acknowledgment: Are downsides stated?

**Methodology Sources**:
- MCDA (Multi-Criteria Decision Analysis)
- Decision matrices
- Bezos "one-way vs two-way door" framework

**When to Use**: When the user needs a choice, not more options.

---

### 5. 评估类 (Evaluation)

**Core Operation**: Judge against standards

**Essence**: Compare an artifact to ideal standards and identify gaps.

**Transformation**: Artifact → Quality judgment + Gap analysis

**Example Tasks**:
- Code review
- Proposal evaluation
- Performance assessment

**Quality Criteria**:
- Objectivity: Based on clear standards, not preference
- Specificity: Exact issues identified, not vague complaints
- Constructiveness: Actionable improvements suggested

**Methodology Sources**:
- Domain-specific quality frameworks
- Rubrics and evaluation criteria
- Best practice checklists

**When to Use**: When the user needs quality judgment, not creation.

---

### 6. 诊断类 (Diagnosis)

**Core Operation**: Trace back to root cause

**Essence**: Reason backward from symptoms to underlying causes.

**Transformation**: Symptoms/Problems → Root cause + Fix

**Key Challenge**: Not stopping at surface-level causes.

**Example Output**:
```
"The build is failing not because of the syntax error (that's
the symptom), but because the dependency update changed the
API signature. Fix: Pin dependency to v2.3.x or update all
call sites."
```

**Quality Criteria**:
- Depth: Did it find the TRUE root cause?
- Completeness: Are all contributing factors identified?
- Actionability: Is the fix clear and executable?

**Methodology Sources**:
- 5 Whys
- Fishbone diagrams
- Systems thinking

**When to Use**: When something is broken and needs fixing.

---

### 7. 说服类 (Persuasion)

**Core Operation**: Bridge worldviews

**Essence**: Connect your goal to their action by understanding their mental model.

**Transformation**: My goal + Their worldview → Message that moves them

**Key Challenge**: Understanding what they already believe, fear, and want.

**Example Tasks**:
- Sales pitch
- Stakeholder buy-in
- Negotiation

**Quality Criteria**:
- Audience fit: Does it speak to THEIR concerns?
- Credibility: Is it believable?
- Call to action: Is next step clear?

**Methodology Sources**:
- Cialdini (Influence)
- SPIN Selling
- Aristotle's Rhetoric (Ethos, Pathos, Logos)

**When to Use**: When the user needs to change someone's mind or behavior.

---

### 8. 规划类 (Planning)

**Core Operation**: Decompose into steps

**Essence**: Break a goal into a sequence of achievable milestones.

**Transformation**: Goal → Path with milestones + Dependencies

**Key Challenge**: Choosing the right level of detail and handling uncertainty.

**Example Output**:
```
"Phase 1 (Week 1-2): Research & Design
  - Define API contract
  - Design database schema
Phase 2 (Week 3-4): Implementation
  - Core CRUD operations
  - Authentication integration
..."
```

**Quality Criteria**:
- Completeness: Are all necessary steps included?
- Sequencing: Are dependencies correct?
- Granularity: Right level of detail for the audience?

**Methodology Sources**:
- Work breakdown structures
- Agile planning
- Critical path analysis

**When to Use**: When the user needs a roadmap, not just a goal.

---

### 9. 调研类 (Research)

**Core Operation**: Discover and structure

**Essence**: Explore unknown territory and return with organized knowledge.

**Transformation**: Questions → Structured answers with sources

**Key Challenge**: Knowing when you have "enough" and avoiding confirmation bias.

**Example Tasks**:
- Market research
- Competitive analysis
- Technology evaluation

**Quality Criteria**:
- Coverage: Were enough sources consulted?
- Objectivity: Is it balanced, not cherry-picked?
- Structure: Is knowledge organized usefully?

**Methodology Sources**:
- Research methodology
- Source evaluation frameworks
- Synthesis techniques

**When to Use**: When the user needs knowledge gathered, not generated.

---

### 10. 引导类 (Facilitation)

**Core Operation**: Elicit through questions

**Essence**: Help others surface their own knowledge through skilled questioning.

**Transformation**: Hidden/tacit knowledge → Explicit, articulated knowledge

**Key Challenge**: Not leading too much (bias) or too little (missing key info).

**Example Tasks**:
- User interviews
- Requirements elicitation
- Coaching conversations

**Quality Criteria**:
- Depth: Did it surface non-obvious information?
- Neutrality: Avoided leading questions?
- Completeness: Covered all important areas?

**Methodology Sources**:
- Mom Test (Rob Fitzpatrick)
- Motivational Interviewing
- SPIN Selling (for discovery)

**When to Use**: When the user needs to extract information from others.

---

### 11. 转化类 (Transformation)

**Core Operation**: Map between representations

**Essence**: Convert from one format/perspective to another while preserving meaning.

**Transformation**: Format A → Format B (isomorphic mapping)

**Key Challenge**: Knowing what to preserve vs. adapt.

**Example Tasks**:
- Technical → Business translation
- Meeting → Action items
- Long-form → Executive summary

**Quality Criteria**:
- Fidelity: Is essential meaning preserved?
- Fit: Is output appropriate for target format?
- Clarity: Is it clear in the new format?

**Methodology Sources**:
- Translation theory
- Information design
- Audience adaptation

**When to Use**: When content exists but needs reformatting or re-framing.

---

## How to Use This Taxonomy

### Step 1: Identify the Skill Type

When a user requests a skill, first determine its type:

| User says... | Likely type |
|---|---|
| "Help me write..." | Generation |
| "Help me understand..." | Insight or Summary |
| "Help me decide..." | Decision |
| "Help me evaluate..." | Evaluation |
| "Help me figure out why..." | Diagnosis |
| "Help me convince..." | Persuasion |
| "Help me plan..." | Planning |
| "Help me research..." | Research |
| "Help me interview..." | Facilitation |
| "Help me convert..." | Transformation |
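
The phrase table above amounts to a first-pass keyword classifier. As a rough sketch (the cue phrases and type names come from the table; the ordering trick and fallback string are assumptions), it might look like:

```python
# Hypothetical first-pass classifier based on the phrase table above.
# Longer cues are checked first so "figure out why" wins over generic verbs.
CUES = [
    ("figure out why", "Diagnosis"),
    ("write", "Generation"),
    ("understand", "Insight or Summary"),
    ("decide", "Decision"),
    ("evaluate", "Evaluation"),
    ("convince", "Persuasion"),
    ("plan", "Planning"),
    ("research", "Research"),
    ("interview", "Facilitation"),
    ("convert", "Transformation"),
]

def guess_skill_type(request: str) -> str:
    """Return a first-guess skill type, to be confirmed with the user in Step 2."""
    text = request.lower()
    for cue, skill_type in CUES:
        if cue in text:
            return skill_type
    return "Unknown - ask the user"
```

The guess is deliberately cheap; Step 2 (validating with the user) is what catches the Summary-vs-Insight style ambiguities a keyword match cannot resolve.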

### Step 2: Validate with User

Confirm the type with the user:

```
"It sounds like you need an INSIGHT-type skill (finding what really
matters) rather than a SUMMARY-type skill (comprehensive overview).
Is that right?"
```

### Step 3: Apply Type-Specific Generation

Use appropriate:
- Methodology sources for that type
- Quality criteria for that type
- Output format conventions for that type

---

## Common Confusions

| Often confused | How to distinguish |
|---|---|
| Summary vs Insight | Summary = complete coverage; Insight = key signals only |
| Decision vs Evaluation | Decision = make a choice; Evaluation = judge quality |
| Research vs Insight | Research = gather info; Insight = interpret meaning |
| Generation vs Transformation | Generation = create new; Transformation = convert existing |
| Diagnosis vs Evaluation | Diagnosis = find root cause; Evaluation = judge against standard |

---

## Version History

- v1.0 (2026-01-21): Initial taxonomy with 11 types

```

### references/methodology-database.md

```markdown
# Methodology Database

A curated collection of proven methodologies from domain experts. Organized by domain for quick reference.

---

## Writing & Communication

### Business Writing
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Barbara Minto | Pyramid Principle | Lead with conclusion, MECE structure, SCQ (Situation-Complication-Question) | Book: The Pyramid Principle |
| William Zinsser | Simplicity First | Cut clutter, use active voice, be human | Book: On Writing Well |
| Amazon | 6-Page Memo | Narrative structure, no PowerPoint, silent reading | Internal practice, Bezos letters |

### Persuasion & Copywriting
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Eugene Schwartz | 5 Levels of Awareness | Match copy to reader's awareness stage | Book: Breakthrough Advertising |
| David Ogilvy | Big Idea | Headlines carry 80% of value, research first | Book: Ogilvy on Advertising |
| Gary Halbert | AIDA + Proof | Attention, Interest, Desire, Action + stack proof | The Gary Halbert Letter |

### Storytelling
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Nancy Duarte | Sparkline | Alternate between what-is and what-could-be | Book: Resonate |
| Pixar | Story Spine | Once upon a time... Every day... Until one day... | Pixar internal, Kenn Adams |
| Joseph Campbell | Hero's Journey | Universal story structure across cultures | Book: The Hero with a Thousand Faces |

---

## Product & Design

### Product Management
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Marty Cagan | Empowered Teams | Problem space vs solution space, continuous discovery | Books: Inspired, Empowered |
| Teresa Torres | Continuous Discovery | Weekly customer touchpoints, opportunity solution trees | Book: Continuous Discovery Habits |
| Gibson Biddle | DHM Model | Delight customers, Hard to copy, Margin enhancing | Talks, essays |
| Shreyas Doshi | LNO Framework | Leverage, Neutral, Overhead task classification | Twitter threads, talks |

### User Research
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Rob Fitzpatrick | The Mom Test | Don't ask opinions, ask about past behavior | Book: The Mom Test |
| Steve Portigal | Interviewing Users | Rapport, open questions, comfortable silence | Book: Interviewing Users |
| Indi Young | Mental Models | Extract user thought patterns from behavior | Book: Mental Models |
| Clayton Christensen | Jobs to be Done | What job is the user hiring the product for? | Book: Competing Against Luck |

### Design
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Don Norman | Human-Centered Design | Affordances, signifiers, feedback, constraints | Book: The Design of Everyday Things |
| Dieter Rams | 10 Principles | Good design is innovative, useful, honest, unobtrusive... | Vitsoe documentation |
| Edward Tufte | Data-Ink Ratio | Maximize data, minimize non-data ink | Book: The Visual Display of Quantitative Information |
| Jake Knapp | Design Sprint | 5-day process from problem to tested prototype | Book: Sprint |

---

## Sales & Marketing

### Sales
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Neil Rackham | SPIN Selling | Situation, Problem, Implication, Need-payoff questions | Book: SPIN Selling |
| Matthew Dixon | Challenger Sale | Teach, Tailor, Take Control - don't just build relationships | Book: The Challenger Sale |
| Jeb Blount | Fanatical Prospecting | Consistent daily prospecting activity | Book: Fanatical Prospecting |
| MEDDIC | Enterprise Qualification | Metrics, Economic Buyer, Decision Criteria, Decision Process, Identify Pain, Champion | PTC/Parametric origin |

### Marketing
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Seth Godin | Permission Marketing | Earn attention, don't interrupt | Book: Permission Marketing |
| April Dunford | Obviously Awesome | Positioning = competitive alternatives + unique attributes + value | Book: Obviously Awesome |
| Steve Jobs | Values-Based Marketing | Marketing is about values, not product specs | "Think Different" internal talk, 1997 |

### Pricing
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Patrick Campbell | Value-Based Pricing | Price to value delivered, segment by willingness to pay | ProfitWell research |
| Van Westendorp | PSM Model | 4 questions to find acceptable price range | Price Sensitivity Meter |
| Hermann Simon | Pricing Power | Pricing is the most powerful profit lever | Book: Confessions of the Pricing Man |

---

## Leadership & Management

### Hiring
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Laszlo Bock | Structured Interviews | Same questions, rubric scoring, diverse panels | Book: Work Rules! |
| Geoff Smart | A Method | Scorecard, Source, Select, Sell | Book: Who |
| Lou Adler | Performance-Based Hiring | Define performance outcomes, not skills lists | Book: Hire With Your Head |
| Steve Jobs | A Players | A players hire A players, B players hire C players | Multiple interviews |

### Feedback & Performance
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Kim Scott | Radical Candor | Care personally + Challenge directly | Book: Radical Candor |
| Ray Dalio | Radical Transparency | Believability-weighted decision making | Book: Principles |
| Netflix | Keeper Test | Would you fight to keep this person? | Netflix Culture Deck |
| Andy Grove | 1:1 Meetings | Subordinate's meeting, manager listens | Book: High Output Management |

### Decision Making
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Jeff Bezos | Type 1/Type 2 | Reversible vs irreversible decisions | Shareholder letters |
| Charlie Munger | Mental Models | Multidisciplinary thinking, inversion | Poor Charlie's Almanack |
| Annie Duke | Thinking in Bets | Separate decision quality from outcome quality | Book: Thinking in Bets |
| Richard Rumelt | Good Strategy | Diagnosis, Guiding Policy, Coherent Actions | Book: Good Strategy Bad Strategy |

### Meetings
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Patrick Lencioni | 4 Meeting Types | Daily check-in, weekly tactical, monthly strategic, quarterly offsite | Book: Death by Meeting |
| Amazon | Silent Memo Reading | 6-page memo, silent reading at start | Bezos practice |
| Basecamp | Async First | Default to async, meetings are last resort | Book: It Doesn't Have to Be Crazy at Work |

---

## Engineering & Technology

### Software Development
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Martin Fowler | Refactoring | Small behavior-preserving transformations | Book: Refactoring |
| Robert Martin | Clean Code | Readable code, single responsibility, meaningful names | Book: Clean Code |
| Kent Beck | TDD | Red-Green-Refactor cycle | Book: Test-Driven Development: By Example |
| John Ousterhout | Deep Modules | Modules should hide complexity behind simple interfaces | Book: Philosophy of Software Design |

### System Design
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Werner Vogels | Distributed Systems | Everything fails all the time, design for failure | AWS re:Invent talks |
| Martin Kleppmann | Data-Intensive Apps | Reliability, scalability, maintainability | Book: Designing Data-Intensive Applications |
| John Carmack | Focus & Simplicity | Deep work, simple solutions, first principles | Talks, Lex Fridman podcast |

### Engineering Culture
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Elon Musk | Deletion First | Delete, simplify, accelerate, automate (in that order) | Interviews, employee accounts |
| Google | Blameless Postmortems | Focus on systems, not individuals | SRE Book |
| Keith Rabois | Barrels & Ammunition | Unblock barrels (leaders who can ship), ammunition scales | Stanford CS183 |

---

## Startup & Entrepreneurship

### Lean Startup
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Eric Ries | Build-Measure-Learn | Minimum viable products, validated learning | Book: The Lean Startup |
| Ash Maurya | Running Lean | Systematic customer development process | Book: Running Lean |
| Steve Blank | Customer Development | Get out of the building | Book: The Four Steps to the Epiphany |

### Scaling
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Paul Graham | Do Things That Don't Scale | Manual effort to find what works | Essay |
| Reid Hoffman | Blitzscaling | Speed over efficiency in winner-take-all markets | Book: Blitzscaling |
| Gino Wickman | EOS | Vision, Traction, Healthy - simple operating system | Book: Traction |
| YC | Default Alive | Revenue growth vs burn rate | Paul Graham essay |

---

## Personal Effectiveness

### Productivity
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Cal Newport | Deep Work | Focused blocks, eliminate shallow work | Book: Deep Work |
| David Allen | GTD | Capture everything, next actions, contexts | Book: Getting Things Done |
| James Clear | Atomic Habits | 1% better daily, habit stacking, environment design | Book: Atomic Habits |

### Thinking
| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Edward de Bono | Six Thinking Hats | Parallel thinking, separate modes | Book: Six Thinking Hats |
| Nassim Taleb | Antifragile | Build systems that gain from disorder | Book: Antifragile |
| Daniel Kahneman | System 1/System 2 | Fast intuition vs slow deliberation | Book: Thinking, Fast and Slow |

---

## Negotiation

| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Chris Voss | Tactical Empathy | Labeling, mirroring, calibrated questions | Book: Never Split the Difference |
| Fisher/Ury | Principled Negotiation | Interests not positions, BATNA | Book: Getting to Yes |
| Herb Cohen | Everything is Negotiable | Information, time, power as key levers | Book: You Can Negotiate Anything |

---

## Finance & Investing

| Expert | Framework | Core Idea | Source |
|--------|-----------|-----------|--------|
| Warren Buffett | Value Investing | Margin of safety, circle of competence, long-term | Shareholder letters |
| Howard Marks | Second-Level Thinking | What's the consensus missing? | Book: The Most Important Thing |
| Michael Porter | Five Forces | Industry structure determines profitability | HBR articles, books |

---

## Oral Tradition (No Books, But Documented)

These experts primarily share through talks, interviews, and social media:

| Expert | Domain | Key Ideas | Where to Find |
|--------|--------|-----------|---------------|
| Jensen Huang | Leadership | No 1:1s, 40 direct reports, "I don't give feedback, I give perspective" | Stanford talks, interviews |
| Patrick Collison | Startups | Move fast meaningfully, high hiring bar | Podcasts, Stripe blog |
| Tobi Lütke | Culture | Trust battery, context over control | Twitter, podcasts |
| Naval Ravikant | Wealth/Life | Specific knowledge, leverage, productize yourself | Twitter, "How to Get Rich" |
| Brian Chesky | Product | Founder mode, no traditional PMs, review every design | Interviews, Lenny's Podcast |
| Shishir Mehrotra | Product | Rituals, bundling theory | Coda blog, Lenny's Podcast |

---

## How to Use This Database

1. **Identify domains**: Map your skill to 1-3 relevant domains above
2. **Select experts**: Choose 1-3 experts whose philosophy matches your goals
3. **Go to sources**: Use the source column to find primary material
4. **Extract principles**: Distill actionable rules from their frameworks
5. **Resolve conflicts**: If methodologies conflict, choose based on context or create synthesis

## Expanding the Database

If a domain isn't covered:
1. Web search for "[domain] best practices expert"
2. Look for: books with 1000+ reviews, popular conference talks, influential practitioners
3. Identify their core framework or methodology
4. Add to the appropriate section

```
