SkillHub Club · Ship Full Stack · Full Stack
knowledge-transfer
Imported from https://github.com/yaleh/meta-cc.
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source lives below.
Stars
16
Hot score
86
Updated
March 20, 2026
Overall rating
C (3.5)
Composite score
3.5
Best-practice grade
F (19.6)
Install command
npx @skill-hub/cli install yaleh-meta-cc-knowledge-transfer
Repository
yaleh/meta-cc
Skill path: .claude/skills/knowledge-transfer
Open repository
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: yaleh.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install knowledge-transfer into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/yaleh/meta-cc before adding knowledge-transfer to shared team environments
- Use knowledge-transfer for development workflows
Works across
Claude Code · Codex CLI · Gemini CLI · OpenCode
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: Knowledge Transfer
description: Progressive learning methodology for structured onboarding using time-boxed learning paths (Day-1, Week-1, Month-1), validation checkpoints, and scaffolding principles. Use when onboarding new contributors, reducing ramp-up time from weeks to days, creating self-service learning paths, systematizing ad-hoc knowledge sharing, or building institutional knowledge preservation. Provides 3 learning path templates (Day-1: 4-8h setup→contribution, Week-1: 20-40h architecture→feature, Month-1: 40-160h expertise→mentoring), progressive disclosure pattern, validation checkpoint principle, module mastery best practice. Validated with 3-8x onboarding speedup (structured vs. unstructured), 95%+ transferability to any software project (Go, Rust, Python, TypeScript). Learning theory principles applied: progressive disclosure, scaffolding, validation checkpoints, time-boxing.
allowed-tools: Read, Write, Edit, Grep, Glob
---
# Knowledge Transfer
**Reduce onboarding time by 3-8x with structured learning paths.**
> Progressive disclosure, scaffolding, and validation checkpoints transform weeks of confusion into days of productive learning.
---
## When to Use This Skill
Use this skill when:
- **Onboarding contributors**: New developers joining the project
- **Slow ramp-up**: Weeks to first meaningful contribution
- **Ad-hoc knowledge sharing**: Unstructured, mentor-dependent learning
- **Scaling teams**: Can't rely on 1-on-1 mentoring
- **Knowledge preservation**: Institutional knowledge at risk
- **Clear learning paths**: Need structured Day-1, Week-1, Month-1 plans
**Don't use when**:
- ❌ Single contributor projects (no onboarding needed)
- ❌ Onboarding already optimal (<1 week to productivity)
- ❌ Non-software projects without adaptation
- ❌ No time to create learning paths (requires a 4-8h investment)
---
## Quick Start (30 minutes)
### Step 1: Assess Current Onboarding (10 min)
**Questions to answer**:
- How long does it take for new contributors to make their first meaningful contribution?
- What documentation exists? (README, architecture docs, development guides)
- What do contributors struggle with most? (setup, architecture, workflows)
**Baseline**: Unstructured onboarding typically takes 4-12 weeks to productivity.
### Step 2: Create Day-1 Learning Path (15 min)
**Structure**:
1. **Environment Setup** (1-2h): Installation, build, test
2. **Project Understanding** (1-2h): Purpose, structure, core concepts
3. **Code Navigation** (1-2h): Find files, search code, read docs
4. **First Contribution** (1-2h): Trivial fix (typo, comment)
**Validation**: PR submitted, tests passing, CI green
### Step 3: Plan Week-1 and Month-1 Paths (5 min)
**Week-1 Focus**: Architecture understanding, module mastery, meaningful contribution (20-40h)
**Month-1 Focus**: Domain expertise, significant feature, code ownership, mentoring (40-160h)
---
## Three Learning Path Templates
### 1. Day-1 Learning Path (4-8 hours)
**Purpose**: Get contributor from zero to first contribution in one day
**Four Sections**:
**Section 1: Environment Setup** (1-2h)
- Prerequisites documented (Go 1.21+, git, make)
- Step-by-step installation instructions
- Build verification (`make all`)
- Test suite execution (`make test`)
- **Validation**: Can build and test successfully
**Section 2: Project Understanding** (1-2h)
- Project purpose and value proposition
- Repository structure overview (cmd/, internal/, docs/)
- Core concepts (3-5 key ideas)
- User personas and use cases
- **Validation**: Can explain project purpose in 2-3 sentences
**Section 3: Code Navigation** (1-2h)
- File finding strategies (grep, find, IDE navigation)
- Code search techniques (function definitions, usage sites)
- Documentation navigation (README, docs/, code comments)
- Development workflows (TDD, git flow)
- **Validation**: Can find specific function in codebase within 2 minutes
**Section 4: First Contribution** (1-2h)
- Good first issues identified (typo fixes, comment improvements)
- Contribution process (fork, branch, PR)
- Code review expectations
- CI/CD validation
- **Validation**: PR submitted with tests passing
**Success Criteria**:
- ✅ Environment working (built, tested)
- ✅ Basic understanding (can explain purpose)
- ✅ Code navigation skills (can find files/functions)
- ✅ First PR submitted (trivial contribution)
**Transferability**: 80% (environment setup is project-specific)
---
### 2. Week-1 Learning Path (20-40 hours)
**Purpose**: Deep architecture understanding and first meaningful contribution
**Four Sections**:
**Section 1: Architecture Deep Dive** (5-10h)
- System design overview (components, data flow)
- Integration points (APIs, databases, external services)
- Design patterns used (MVC, dependency injection)
- Architectural decisions (ADRs)
- **Validation**: Can draw architecture diagram, explain data flow
**Section 2: Module Mastery** (8-15h)
- Core modules identified (3-5 critical modules)
- Dependency-ordered learning (foundational → higher-level)
- Module APIs and interfaces
- Integration between modules
- **Best Practice**: Study modules in dependency order
- **Validation**: Can explain each module's purpose and key functions
**Section 3: Development Workflows** (3-5h)
- TDD workflow (write tests first)
- Debugging techniques (debugger, logging)
- Git workflows (feature branches, rebasing)
- Code review process (standards, checklist)
- **Validation**: Can follow TDD cycle, submit quality PR
**Section 4: Meaningful Contribution** (4-10h)
- "Good first issue" selection (small feature, bug fix)
- Feature implementation (with tests)
- Code review iteration
- Feature merged
- **Validation**: Feature merged, code review feedback incorporated
**Success Criteria**:
- ✅ Architecture understanding (can explain design)
- ✅ Module mastery (know 3-5 core modules)
- ✅ Development workflows (TDD, git, code review)
- ✅ Meaningful contribution (feature merged)
**Transferability**: 75% (module names and architecture are project-specific)
---
### 3. Month-1 Learning Path (40-160 hours)
**Purpose**: Build deep expertise, deliver significant feature, enable mentoring
**Four Sections**:
**Section 1: Domain Selection & Deep Dive** (10-40h)
- Domain areas identified (e.g., Parser, Analyzer, Query, MCP, CLI)
- Domain selection (choose based on interest and project need)
- Deep dive resources (docs, code, architecture)
- Domain patterns and anti-patterns
- **Validation**: Deep dive deliverable (design doc, refactoring proposal)
**Section 2: Significant Feature Development** (15-60h)
- Feature definition (200+ lines, multi-module, complex logic)
- Design document creation
- Implementation with comprehensive tests
- Performance considerations
- **Validation**: Significant feature merged (200+ lines)
**Section 3: Code Ownership & Expertise** (10-40h)
- Reviewer role for domain
- Issue triaging and assignment
- Architecture improvement proposals
- Performance optimization
- **Validation**: Reviewed 3+ PRs, triaged 5+ issues
**Section 4: Community & Mentoring** (5-20h)
- Mentoring new contributors (guide through first PR)
- Documentation improvements (based on learning experience)
- Knowledge sharing (internal presentations, blog posts)
- Community engagement (discussions, issue responses)
- **Validation**: Mentored 1+ contributor, improved documentation
**Success Criteria**:
- ✅ Deep domain expertise (go-to expert in one area)
- ✅ Significant feature delivered (200+ lines, merged)
- ✅ Code ownership (reviewer, triager)
- ✅ Mentoring capability (guided new contributor)
**Transferability**: 85% (domain specialization framework is universal)
---
## Learning Theory Principles
### 1. Progressive Disclosure ✅
**Definition**: Reveal complexity gradually to avoid overwhelming learners
**Application**:
- Day-1: Basic setup and understanding (minimal complexity)
- Week-1: Architecture and module mastery (medium complexity)
- Month-1: Expertise and mentoring (high complexity)
**Evidence**: Each path builds on previous, complexity increases systematically
---
### 2. Scaffolding ✅
**Definition**: Provide support that reduces over time as learner gains independence
**Application**:
- Day-1: Highly guided (step-by-step instructions, explicit prerequisites)
- Week-1: Semi-guided (structured sections, some autonomy)
- Month-1: Mostly independent (domain selection choice, self-directed deep dives)
**Evidence**: Support level decreases across paths (guided → semi-independent → independent)
---
### 3. Validation Checkpoints ✅
**Principle**: "Every learning stage needs clear, actionable validation criteria that enable self-assessment without external dependency"
**Rationale**:
- Self-directed learning requires confidence in progress
- External validation doesn't scale (maintainer bottleneck)
- Clear checkpoints prevent confusion and false confidence
**Implementation**:
- Checklists with specific items (not vague "understand X")
- Success criteria with measurable outcomes (PR merged, tests passing)
- Self-assessment questions (can you explain Y? can you implement Z?)
**Universality**: 95%+ (applies to any learning context)
---
### 4. Time-Boxing ✅
**Definition**: Realistic time estimates help learners plan and avoid frustration
**Application**:
- Day-1: 4-8 hours (clear boundary)
- Week-1: 20-40 hours (flexible but bounded)
- Month-1: 40-160 hours (wide range for depth variation)
**Evidence**: All paths have explicit time estimates with min-max ranges
---
## Module Mastery Best Practice
**Context**: Week-1 contributor learning complex codebase with multiple interconnected modules
**Problem**: Without structure, contributors randomly jump between modules, missing critical dependencies
**Solution**: Architecture-first, sequential module deep dives
**Approach**:
1. **Architecture Overview First**: Understand system design before diving into modules
2. **Dependency-Ordered Sequence**: Study modules in dependency order (foundational → higher-level)
3. **Deliberate Practice**: Build small examples after each module to validate understanding
4. **Integration Understanding**: After individual modules, understand how they interact
**Example** (meta-cc):
- Architecture: Two-layer (CLI + MCP), 3 core packages (parser, analyzer, query)
- Sequence: Parser (foundation) → Analyzer (uses parser) → Query (uses both)
- Practice: Write small programs using each module's API (see the sketch below)
- Integration: Understand MCP server coordination of all 3 modules
**Transferability**: 80% (applies to modular architectures)
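Below is a self-contained Go sketch of that practice step: toy stand-ins for the three layers, built in dependency order. The types and function names are illustrative only, not meta-cc's actual APIs.
```go
package main

import "fmt"

// Layer 1 (parser): turn raw lines into structured entries.
type Entry struct{ Kind string }

func parse(lines []string) []Entry {
	entries := make([]Entry, 0, len(lines))
	for _, l := range lines {
		entries = append(entries, Entry{Kind: l})
	}
	return entries
}

// Layer 2 (analyzer): consume parser output.
func countErrors(entries []Entry) int {
	n := 0
	for _, e := range entries {
		if e.Kind == "error" {
			n++
		}
	}
	return n
}

// Layer 3 (query): compose both lower layers.
func errorRate(lines []string) float64 {
	entries := parse(lines)
	if len(entries) == 0 {
		return 0
	}
	return float64(countErrors(entries)) / float64(len(entries))
}

func main() {
	fmt.Println(errorRate([]string{"ok", "error", "ok", "error"})) // 0.5
}
```
Rebuilding a toy version of each layer before reading the real code validates your mental model of the dependency order cheaply.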
---
## Proven Results
**Validated in bootstrap-011 (meta-cc project)**:
- ✅ Meta layer: V_meta = 0.877 (CONVERGED)
- ✅ 3 learning path templates complete (Day-1, Week-1, Month-1)
- ✅ 6 knowledge artifacts created (3 templates, 1 pattern, 1 principle, 1 best practice)
- ✅ Duration: 4 iterations, ~8 hours
- ✅ 3-8x onboarding speedup demonstrated (structured vs. unstructured)
**Onboarding Time Comparison**:
- Traditional unstructured: 4-12 weeks to productivity
- Structured methodology: 1.5-5 weeks to same outcome
- **Speedup**: 3-8x faster ✅
**Transferability Validation**:
- Go projects: 95-97% transferable
- Rust projects: 90-95% transferable (6-8h adaptation)
- Python projects: 85-90% transferable (8-10h adaptation)
- TypeScript projects: 80-85% transferable (10-12h adaptation)
- **Overall**: 95%+ transferable ✅
---
## Complete Onboarding Lifecycle
**Total Time**: 64-208 hours (1.5-5 weeks @ 40h/week)
**Day-1 (4-8 hours)**:
- Environment setup → Project understanding → Code navigation → First contribution
- **Outcome**: PR submitted, tests passing
**Week-1 (20-40 hours)** (requires Day-1 completion):
- Architecture deep dive → Module mastery → Development workflows → Meaningful contribution
- **Outcome**: Feature merged, architecture understanding validated
**Month-1 (40-160 hours)** (requires Week-1 completion):
- Domain deep dive → Significant feature → Code ownership → Mentoring
- **Outcome**: Domain expert status, significant feature merged, mentored contributor
**Progressive Complexity**: Simple → Medium → Complex
**Progressive Independence**: Guided → Semi-independent → Independent
**Progressive Impact**: Trivial fix → Small feature → Significant feature
---
## Common Anti-Patterns
❌ **Information overload**: Dumping all knowledge on Day-1 (overwhelms the learner)
❌ **No validation**: Missing self-assessment checkpoints (learner uncertain of progress)
❌ **Vague success criteria**: "Understand architecture" (not measurable)
❌ **No time estimates**: Undefined time commitment (causes frustration)
❌ **Dependency violations**: Teaching advanced concepts before fundamentals
❌ **External validation dependency**: Requiring mentor approval for every step (doesn't scale)
---
## Templates and Examples
### Templates
- [Day-1 Learning Path Template](templates/day1-learning-path-template.md) - First-day onboarding
- [Week-1 Learning Path Template](templates/week1-learning-path-template.md) - First-week architecture and modules
- [Month-1 Learning Path Template](templates/month1-learning-path-template.md) - First-month expertise building
### Examples
- [Progressive Learning Path Pattern](examples/progressive-learning-path-pattern.md) - Time-boxed learning structure
- [Validation Checkpoint Principle](examples/validation-checkpoint-principle.md) - Self-assessment criteria
- [Module Mastery Onboarding](examples/module-mastery-best-practice.md) - Architecture-first learning
---
## Related Skills
**Parent framework**:
- [methodology-bootstrapping](../methodology-bootstrapping/SKILL.md) - Core OCA cycle
**Complementary domains**:
- [cross-cutting-concerns](../cross-cutting-concerns/SKILL.md) - Pattern extraction for learning materials
- [technical-debt-management](../technical-debt-management/SKILL.md) - Documentation debt prioritization
---
## References
**Core methodology**:
- [Progressive Learning Path](reference/progressive-learning-path.md) - Full pattern documentation
- [Validation Checkpoints](reference/validation-checkpoints.md) - Self-assessment guide
- [Module Mastery](reference/module-mastery.md) - Dependency-ordered learning
- [Learning Theory](reference/learning-theory.md) - Principles and evidence
**Quick guides**:
- [Creating Day-1 Path](reference/create-day1-path.md) - 15-minute guide
- [Adaptation Guide](reference/adaptation-guide.md) - Transfer to other projects
---
**Status**: ✅ Production-ready | Validated in meta-cc | 3-8x speedup | 95%+ transferable
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### examples/progressive-learning-path-pattern.md
```markdown
# Progressive Learning Path Pattern
Start simple → add complexity gradually → master edge cases.
Example: Basic tests → table-driven → fixtures → mocking.
```
### examples/validation-checkpoint-principle.md
```markdown
# Validation Checkpoint Principle
Test understanding at key milestones (30%, 70%, 100%).
Example: After each BAIME iteration, validate learnings.
```
### examples/module-mastery-best-practice.md
```markdown
# Module Mastery Best Practice Example
Learn one module completely before moving to next.
Example: Master error classification before recovery patterns.
**Result**: Deeper understanding, faster overall progress.
```
### ../methodology-bootstrapping/SKILL.md
```markdown
---
name: Methodology Bootstrapping
description: Apply Bootstrapped AI Methodology Engineering (BAIME) to develop project-specific methodologies through systematic Observe-Codify-Automate cycles with dual-layer value functions (instance quality + methodology quality). Use when creating testing strategies, CI/CD pipelines, error handling patterns, observability systems, or any reusable development methodology. Provides structured framework with convergence criteria, agent coordination, and empirical validation. Validated in 8 experiments with 100% success rate, 4.9 avg iterations, 10-50x speedup vs ad-hoc. Works for testing, CI/CD, error recovery, dependency management, documentation systems, knowledge transfer, technical debt, cross-cutting concerns.
allowed-tools: Read, Grep, Glob, Edit, Write, Bash
---
# Methodology Bootstrapping
**Apply Bootstrapped AI Methodology Engineering (BAIME) to systematically develop and validate software engineering methodologies through observation, codification, and automation.**
> The best methodologies are not designed but evolved through systematic observation, codification, and automation of successful practices.
---
## What is BAIME?
**BAIME (Bootstrapped AI Methodology Engineering)** is a unified framework that integrates three complementary methodologies optimized for LLM-based development:
1. **OCA Cycle** (Observe-Codify-Automate) - Core iterative framework
2. **Empirical Validation** - Scientific method and data-driven decisions
3. **Value Optimization** - Dual-layer value functions for quantitative evaluation
This skill provides the complete BAIME framework for systematic methodology development. The methodology is especially powerful when combined with AI agents (like Claude Code) that can execute the OCA cycle, coordinate specialized agents, and calculate value functions automatically.
**Key Innovation**: BAIME treats methodology development like software development, with empirical observation, automated testing, continuous iteration, and quantitative metrics.
---
## When to Use This Skill
Use this skill when you need to:
- **Create systematic methodologies** for testing, CI/CD, error handling, observability, etc.
- **Validate methodologies empirically** with data-driven evidence
- **Evolve practices iteratively** using the OCA (Observe-Codify-Automate) cycle
- **Measure methodology quality** with dual-layer value functions
- **Achieve rapid convergence** (typically 3-7 iterations, 6-15 hours)
- **Create transferable methodologies** (70-95% reusable across projects)
**Don't use this skill for**:
- ❌ One-time ad-hoc tasks without reusability goals
- ❌ Trivial processes (<100 lines of code/docs)
- ❌ When established industry standards fully solve your problem
---
## Quick Start with BAIME (10 minutes)
### 1. Define Your Domain
Choose what methodology you want to develop using BAIME:
- Testing strategy (15x speedup example)
- CI/CD pipeline (2.5-3.5x speedup example)
- Error recovery patterns (80% error reduction example)
- Observability system (23-46x speedup example)
- Dependency management (6x speedup example)
- Documentation system (47% token cost reduction example)
- Knowledge transfer (3-8x speedup example)
- Technical debt management
- Cross-cutting concerns
### 2. Establish Baseline
Measure current state:
```bash
# Example: Testing domain
- Current coverage: 65%
- Test quality: Ad-hoc
- No systematic approach
- Bug rate: Baseline
# Example: CI/CD domain
- Build time: 5 minutes
- No quality gates
- Manual releases
```
### 3. Set Dual Goals
Define both layers:
- **Instance goal** (domain-specific): "Reach 80% test coverage"
- **Meta goal** (methodology): "Create reusable testing strategy with 85%+ transferability"
### 4. Start Iteration 0
Follow the OCA cycle (see [reference/observe-codify-automate.md](reference/observe-codify-automate.md))
---
## Specialized Subagents
BAIME provides two specialized Claude Code subagents to streamline experiment execution:
### iteration-prompt-designer
**When to use**: At experiment start, to create comprehensive ITERATION-PROMPTS.md
**What it does**:
- Designs iteration templates tailored to your domain
- Incorporates modular Meta-Agent architecture
- Provides domain-specific guidance for each iteration
- Creates structured prompts for baseline and subsequent iterations
**How to invoke**:
```
Use the Task tool with subagent_type="iteration-prompt-designer"
Example:
"Design ITERATION-PROMPTS.md for refactoring methodology experiment"
```
**Benefits**:
- ✅ Comprehensive iteration prompts (saves 2-3 hours setup time)
- ✅ Domain-specific value function design
- ✅ Proper baseline iteration structure
- ✅ Evidence-driven evolution guidance
---
### iteration-executor
**When to use**: For each iteration execution (Iteration 0, 1, 2, ...)
**What it does**:
- Executes iteration through lifecycle phases (Observe → Codify → Automate → Evaluate)
- Coordinates Meta-Agent capabilities and agent invocations
- Tracks state transitions (M_{n-1} → M_n, A_{n-1} → A_n, s_{n-1} → s_n)
- Calculates dual-layer value functions (V_instance, V_meta) systematically
- Evaluates convergence criteria rigorously
- Generates complete iteration documentation
**How to invoke**:
```
Use the Task tool with subagent_type="iteration-executor"
Example:
"Execute Iteration 2 of testing methodology experiment using iteration-executor"
```
**Benefits**:
- ✅ Consistent iteration structure across experiments
- ✅ Systematic value calculation (reduces bias, improves honesty)
- ✅ Proper convergence evaluation (prevents premature convergence)
- ✅ Complete artifact generation (data, knowledge, reflections)
- ✅ Reduced iteration time (structured execution vs ad-hoc)
**Important**: iteration-executor reads capability files fresh each iteration (no caching) to ensure latest guidance is applied.
---
### knowledge-extractor
**When to use**: After experiment converges, to extract and transform knowledge into reusable artifacts
**What it does**:
- Extracts patterns, principles, templates from converged BAIME experiment
- Transforms experiment artifacts into production-ready Claude Code skills
- Creates knowledge base entries (patterns/*.md, principles/*.md)
- Validates output quality with structured criteria (V_instance ≥ 0.85)
- Achieves 195x speedup (2 min vs 390 min manual extraction)
- Produces distributable, reusable artifacts for the community
**How to invoke**:
```
Use the Task tool with subagent_type="knowledge-extractor"
Example:
"Extract knowledge from Bootstrap-004 refactoring experiment and create code-refactoring skill using knowledge-extractor"
```
**Benefits**:
- ✅ Systematic knowledge preservation (vs ad-hoc documentation)
- ✅ Reusable Claude Code skills (ready for distribution)
- ✅ Quality validation (95% content equivalence to hand-crafted)
- ✅ Fast extraction (2-5 min, 195x speedup)
- ✅ Knowledge base population (patterns, principles, templates)
- ✅ Automated artifact generation (43% workflow automation with 4 tools)
**Lifecycle position**: Post-Convergence phase
```
Experiment Design → iteration-prompt-designer → ITERATION-PROMPTS.md
        ↓
Iterate → iteration-executor (x N) → iteration-0..N.md
        ↓
Converge → Create results.md
        ↓
Extract → knowledge-extractor → .claude/skills/ + knowledge/
        ↓
Distribute → Claude Code users
```
**Validated performance** (Bootstrap-005):
- Speedup: 195x (390 min → 2 min)
- Quality: V_instance = 0.87, 95% content equivalence
- Reliability: 100% success across 3 experiments
- Automation: 43% of workflow (6/14 steps)
---
## Core Framework
### The OCA Cycle
```
Observe → Codify → Automate
   ↑                   │
   └────── Evolve ─────┘
```
**Observe**: Collect empirical data about current practices
- Use meta-cc MCP tools to analyze session history
- Git analysis for commit patterns
- Code metrics (coverage, complexity)
- Access pattern tracking
- Error rate monitoring
**Codify**: Extract patterns and document methodologies
- Pattern recognition from data
- Hypothesis formation
- Documentation as markdown
- Validation with real scenarios
**Automate**: Convert methodologies to automated checks
- Detection: Identify when pattern applies
- Validation: Check compliance
- Enforcement: CI/CD gates
- Suggestion: Automated fix recommendations
**Evolve**: Apply methodology to itself for continuous improvement
- Use tools on development process
- Discover meta-patterns
- Optimize methodology
**Detailed guide**: [reference/observe-codify-automate.md](reference/observe-codify-automate.md)
### Dual-Layer Value Functions
Every iteration calculates two scores:
**V_instance(s)**: Domain-specific task quality
- Example (testing): coverage × quality × stability × performance
- Example (CI/CD): speed × reliability × automation × observability
- Target: ≥0.80
**V_meta(s)**: Methodology transferability quality
- Components: completeness × effectiveness × reusability × validation
- Completeness: Is methodology fully documented?
- Effectiveness: What speedup does it provide?
- Reusability: What % transferable across projects?
- Validation: Is it empirically validated?
- Target: ≥0.80
**Detailed guide**: [reference/dual-value-functions.md](reference/dual-value-functions.md)
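A minimal Go sketch of the dual-layer calculation, assuming each layer is the plain product of its components as the formulas above suggest (real experiments may weight components differently; the component values here are made up for illustration):
```go
package main

import "fmt"

// product multiplies component scores in [0,1]; both layers above are
// written as products of their components.
func product(components ...float64) float64 {
	v := 1.0
	for _, c := range components {
		v *= c
	}
	return v
}

func main() {
	// V_instance (testing example): coverage x quality x stability x performance.
	vInstance := product(0.95, 0.92, 0.96, 0.95)

	// V_meta: completeness x effectiveness x reusability x validation.
	vMeta := product(0.93, 0.95, 0.95, 0.96)

	fmt.Printf("V_instance=%.3f V_meta=%.3f (target: both >= 0.80)\n", vInstance, vMeta)
}
```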
### Convergence Criteria
Methodology complete when:
1. ✅ **System stable**: Agent set unchanged for 2+ iterations
2. ✅ **Dual threshold**: V_instance ≥ 0.80 AND V_meta ≥ 0.80
3. ✅ **Objectives complete**: All planned work finished
4. ✅ **Diminishing returns**: ΔV < 0.02 for 2+ iterations
**Alternative patterns**:
- **Meta-Focused Convergence**: V_meta ≥ 0.80, V_instance ≥ 0.55 (when methodology is primary goal)
- **Practical Convergence**: Combined quality exceeds metrics, justified partial criteria
**Detailed guide**: [reference/convergence-criteria.md](reference/convergence-criteria.md)
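Treating the four criteria as jointly required (the alternative patterns above relax this), the check is mechanical enough to sketch in Go; the field names are illustrative, not from meta-cc:
```go
package main

import "fmt"

// Iteration captures the state needed for a convergence check.
type Iteration struct {
	VInstance, VMeta float64
	AgentSetChanged  bool // did A_n differ from A_{n-1}?
	ObjectivesDone   bool
}

// converged applies the four criteria to the two most recent iterations.
// A fuller version would track deltas over 2+ iterations, not just one.
func converged(prev, curr Iteration) bool {
	stable := !prev.AgentSetChanged && !curr.AgentSetChanged
	thresholds := curr.VInstance >= 0.80 && curr.VMeta >= 0.80
	diminishing := (curr.VInstance-prev.VInstance) < 0.02 &&
		(curr.VMeta-prev.VMeta) < 0.02
	return stable && thresholds && curr.ObjectivesDone && diminishing
}

func main() {
	prev := Iteration{VInstance: 0.80, VMeta: 0.82, ObjectivesDone: true}
	curr := Iteration{VInstance: 0.81, VMeta: 0.83, ObjectivesDone: true}
	fmt.Println(converged(prev, curr)) // true
}
```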
---
## Iteration Documentation Structure
Every BAIME iteration must produce a comprehensive iteration report following a standardized 10-section structure. This ensures consistent quality, complete knowledge capture, and reproducible methodology development.
### Required Sections
**See complete example**: [examples/iteration-documentation-example.md](examples/iteration-documentation-example.md)
**Use blank template**: [examples/iteration-structure-template.md](examples/iteration-structure-template.md)
1. **Executive Summary** (2-3 paragraphs)
- Iteration focus and objectives
- Key achievements
- Key learnings
- Value scores (V_instance, V_meta)
2. **Pre-Execution Context**
- Previous state: M_{n-1}, A_{n-1}, s_{n-1}
- Previous values: V_instance(s_{n-1}), V_meta(s_{n-1}) with component breakdowns
- Primary objectives for this iteration
3. **Work Executed** (organized by BAIME phases)
- **Phase 1: OBSERVE** - Data collection, measurements, gap identification
- **Phase 2: CODIFY** - Pattern extraction, documentation, knowledge creation
- **Phase 3: AUTOMATE** - Tool creation, script development, enforcement
- **Phase 4: EVALUATE** - Metric calculation, value assessment
4. **Value Calculations** (detailed, evidence-based)
- **V_instance(s_n)** with component breakdowns
- Each component score with concrete evidence
- Formula application with arithmetic
- Final score calculation
- Change from previous iteration (ΔV)
- **V_meta(s_n)** with rubric assessments
- Completeness score (checklist-based, with evidence)
- Effectiveness score (speedup, quality gains, with evidence)
- Reusability score (transferability estimate, with evidence)
- Final score calculation
- Change from previous iteration (ΔV)
5. **Gap Analysis**
- **Instance layer gaps** (what's needed to reach V_instance ≥ 0.80)
- Prioritized list with estimated effort
- **Meta layer gaps** (what's needed to reach V_meta ≥ 0.80)
- Prioritized list with estimated effort
- Estimated work remaining
6. **Convergence Check** (systematic criteria evaluation)
- **Dual threshold**: V_instance ≥ 0.80 AND V_meta ≥ 0.80
- **System stability**: M_n == M_{n-1} AND A_n == A_{n-1}
- **Objectives completeness**: All planned work finished
- **Diminishing returns**: ΔV < 0.02 for 2+ iterations
- **Convergence decision**: YES/NO with detailed rationale
7. **Evolution Decisions** (evidence-driven)
- **Agent sufficiency analysis** (A_n vs A_{n-1})
- Each agent's performance assessment
- Decision: evolution needed or not
- Rationale with evidence
- **Meta-Agent sufficiency analysis** (M_n vs M_{n-1})
- Each capability's effectiveness assessment
- Decision: evolution needed or not
- Rationale with evidence
8. **Artifacts Created**
- Data files (coverage reports, metrics, measurements)
- Knowledge files (patterns, principles, methodology documents)
- Code changes (implementation, tests, tools)
- Other deliverables
9. **Reflections**
- **What worked well** (successes to repeat)
- **What didn't work** (failures to avoid)
- **Learnings** (insights from this iteration)
- **Insights for methodology** (meta-level learnings)
10. **Conclusion**
- Iteration summary
- Key metrics and improvements
- Critical decisions made
- Next steps
- Confidence assessment
### File Naming Convention
```
iterations/iteration-N.md
```
Where N = 0, 1, 2, 3, ... (starting from 0 for baseline)
### Documentation Quality Standards
**Evidence-based scores**:
- Every value component score must have concrete evidence
- Avoid vague assessments ("seems good" ❌, "72.3% coverage, +5% from baseline" ✅)
- Show arithmetic for all calculations
**Honest assessment**:
- Low scores early are expected and acceptable (baseline V_meta often 0.15-0.25)
- Don't inflate scores to meet targets
- Document gaps explicitly
- Acknowledge when objectives are not met
**Complete coverage**:
- All 10 sections must be present
- Don't skip reflections (valuable for meta-learning)
- Don't skip gap analysis (critical for planning)
- Don't skip convergence check (prevents premature convergence)
### Tools for Iteration Documentation
**Recommended workflow**:
1. Copy [examples/iteration-structure-template.md](examples/iteration-structure-template.md) to `iterations/iteration-N.md`
2. Invoke `iteration-executor` subagent to execute iteration with structured documentation
3. Review [examples/iteration-documentation-example.md](examples/iteration-documentation-example.md) for quality reference
**Automated generation**: Use `iteration-executor` subagent to ensure consistent structure and systematic value calculation.
---
## Three-Layer Architecture
**BAIME** integrates three complementary methodologies into a unified framework:
**Layer 1: Core Framework (OCA Cycle)**
- Observe → Codify → Automate → Evolve
- Three-tuple output: (O, A_n, M_n)
- Self-referential feedback loop
- Agent coordination
**Layer 2: Scientific Foundation (Empirical Methodology)**
- Empirical observation tools
- Data-driven pattern extraction
- Hypothesis testing
- Scientific validation
**Layer 3: Quantitative Evaluation (Value Optimization)**
- Dual-layer value functions (V_instance + V_meta)
- Convergence mathematics
- Agent as gradient, Meta-Agent as Hessian
- Optimization perspective
**Why "BAIME"?** The framework bootstraps itself: methodologies developed using BAIME can be applied to improve BAIME itself. This self-referential property, combined with AI-agent coordination, makes it uniquely suited for LLM-based development tools.
**Detailed guide**: [reference/three-layer-architecture.md](reference/three-layer-architecture.md)
---
## Proven Results
**Validated in 8 experiments**:
- ✅ 100% success rate (8/8 converged)
- Average: 4.9 iterations, 9.1 hours
- V_instance average: 0.784 (range: 0.585-0.92)
- V_meta average: 0.840 (range: 0.83-0.877)
- Transferability: 70-95%+
- Speedup: 3-46x vs ad-hoc
**Example applications**:
- **Testing strategy**: 15x speedup, 75%→86% coverage ([examples/testing-methodology.md](examples/testing-methodology.md))
- **CI/CD pipeline**: 2.5-3.5x speedup, 91.7% pattern validation ([examples/ci-cd-optimization.md](examples/ci-cd-optimization.md))
- **Error recovery**: 80% error reduction, 85% transferability
- **Observability**: 23-46x speedup, 90-95% transferability
- **Dependency health**: 6x speedup (9h→1.5h), 88% transferability
- **Knowledge transfer**: 3-8x onboarding speedup, 95%+ transferability
- **Documentation**: 47% token cost reduction, 85% transferability
- **Technical debt**: SQALE quantification, 85% transferability
---
## Usage Templates
### Experiment Template
Use [templates/experiment-template.md](templates/experiment-template.md) to structure your methodology development:
- README.md structure
- Iteration prompts
- Knowledge extraction format
- Results documentation
### Iteration Prompt Template
Use [templates/iteration-prompts-template.md](templates/iteration-prompts-template.md) to guide each iteration:
- Iteration N objectives
- OCA cycle execution steps
- Value calculation rubrics
- Convergence checks
**Automated generation**: Use `iteration-prompt-designer` subagent to create domain-specific iteration prompts.
### Iteration Documentation Template
**Structure template**: [examples/iteration-structure-template.md](examples/iteration-structure-template.md)
- 10-section standardized structure
- Blank template ready to copy and fill
- Includes all required components
**Complete example**: [examples/iteration-documentation-example.md](examples/iteration-documentation-example.md)
- Real iteration from test strategy experiment
- Shows proper value calculations with evidence
- Demonstrates honest assessment and gap analysis
- Illustrates quality reflections and insights
**Automated execution**: Use `iteration-executor` subagent to ensure consistent structure and systematic value calculation.
**Quality standards**:
- Evidence-based scoring (concrete data, not vague assessments)
- Honest evaluation (low scores acceptable, inflation harmful)
- Complete coverage (all 10 sections required)
- Arithmetic shown (all value calculations with steps)
---
## Common Pitfalls
❌ **Don't**:
- Use only one methodology layer in isolation (except quick prototyping)
- Predetermine agent evolution path (let specialization emerge from data)
- Force convergence at target iteration count (trust the criteria)
- Inflate value metrics to meet targets (honest assessment critical)
- Skip empirical validation (data-driven decisions only)
✅ **Do**:
- Start with OCA cycle, add evaluation and validation
- Let agent specialization emerge from domain needs
- Trust the convergence criteria (system knows when done)
- Calculate V(s) honestly based on actual state
- Complete all analysis thoroughly before codifying
### Iteration Documentation Pitfalls
❌ **Don't**:
- Skip iteration documentation (every iteration needs iteration-N.md)
- Calculate V-scores without component breakdowns and evidence
- Use vague assessments ("seems good", "probably 0.7")
- Omit gap analysis or convergence checks
- Document only successes (failures provide valuable learnings)
- Assume convergence without systematic criteria evaluation
- Inflate scores to meet targets (honesty is critical)
- Skip reflections section (meta-learning opportunity)
✅ **Do**:
- Use `iteration-executor` subagent for consistent structure
- Provide concrete evidence for each value component
- Show arithmetic for all calculations
- Document both instance and meta layer gaps explicitly
- Include reflections (what worked, didn't work, learnings, insights)
- Be honest about scores (baseline V_meta of 0.20 is normal and acceptable)
- Follow the 10-section structure for every iteration
- Reference iteration documentation example for quality standards
---
## Related Skills
**Acceleration techniques** (achieve 3-4 iteration convergence):
- [rapid-convergence](../rapid-convergence/SKILL.md) - Fast convergence patterns
- [retrospective-validation](../retrospective-validation/SKILL.md) - Historical data validation
- [baseline-quality-assessment](../baseline-quality-assessment/SKILL.md) - Strong iteration 0
**Supporting skills**:
- [agent-prompt-evolution](../agent-prompt-evolution/SKILL.md) - Track agent specialization
**Domain applications** (ready-to-use methodologies):
- [testing-strategy](../testing-strategy/SKILL.md) - TDD, coverage-driven, fixtures
- [error-recovery](../error-recovery/SKILL.md) - Error taxonomy, recovery patterns
- [ci-cd-optimization](../ci-cd-optimization/SKILL.md) - Quality gates, automation
- [observability-instrumentation](../observability-instrumentation/SKILL.md) - Logging, metrics, tracing
- [dependency-health](../dependency-health/SKILL.md) - Security, freshness, compliance
- [knowledge-transfer](../knowledge-transfer/SKILL.md) - Onboarding, learning paths
- [technical-debt-management](../technical-debt-management/SKILL.md) - SQALE, prioritization
- [cross-cutting-concerns](../cross-cutting-concerns/SKILL.md) - Pattern extraction, enforcement
---
## References
**Core documentation**:
- [Overview](reference/overview.md) - Architecture and philosophy
- [OCA Cycle](reference/observe-codify-automate.md) - Detailed process
- [Value Functions](reference/dual-value-functions.md) - Evaluation framework
- [Convergence Criteria](reference/convergence-criteria.md) - When to stop
- [Three-Layer Architecture](reference/three-layer-architecture.md) - Framework layers
**Quick start**:
- [Quick Start Guide](reference/quick-start-guide.md) - Step-by-step tutorial
**Examples**:
- [Testing Methodology](examples/testing-methodology.md) - Complete walkthrough
- [CI/CD Optimization](examples/ci-cd-optimization.md) - Pipeline example
- [Error Recovery](examples/error-recovery.md) - Error handling example
**Templates**:
- [Experiment Template](templates/experiment-template.md) - Structure your experiment
- [Iteration Prompts](templates/iteration-prompts-template.md) - Guide each iteration
---
**Status**: ✅ Production-ready | BAIME Framework | 8 experiments | 100% success rate | 95% transferable
**Terminology**: This skill implements the **Bootstrapped AI Methodology Engineering (BAIME)** framework. Use "BAIME" when referring to this methodology in documentation, research, or when asking Claude Code for assistance with methodology development.
```
### ../cross-cutting-concerns/SKILL.md
```markdown
---
name: Cross-Cutting Concerns
description: Systematic methodology for standardizing cross-cutting concerns (error handling, logging, configuration) through pattern extraction, convention definition, automated enforcement, and CI integration. Use when codebase has inconsistent error handling, ad-hoc logging, scattered configuration, need automated compliance enforcement, or preparing for team scaling. Provides 5 universal principles (detect before standardize, prioritize by value, infrastructure enables scale, context is king, automate enforcement), file tier prioritization framework (ROI-based classification), pattern extraction workflow, convention selection process, linter development guide. Validated with 60-75% faster error diagnosis (rich context), 16.7x ROI for high-value files, 80-90% transferability across languages (Go, Python, JavaScript, Rust). Three concerns addressed: error handling (sentinel errors, context preservation, wrapping), logging (structured logging, log levels), configuration (centralized config, validation, environment variables).
allowed-tools: Read, Write, Edit, Bash, Grep, Glob
---
# Cross-Cutting Concerns
**Transform inconsistent patterns into standardized, enforceable conventions with automated compliance.**
> Detect before standardize. Prioritize by value. Build infrastructure first. Enrich with context. Automate enforcement.
---
## When to Use This Skill
Use this skill when:
- **Inconsistent patterns**: Error handling, logging, or configuration varies across the codebase
- **Pattern extraction needed**: Want to standardize existing practices
- **Manual review doesn't scale**: Need automated compliance detection
- **Prioritization unclear**: Many files need work, unclear where to start
- **Prevention needed**: Want to prevent non-compliant code from merging
- **Team scaling**: Multiple developers need consistent patterns
**Don't use when**:
- ❌ Patterns already consistent and enforced with linters/CI
- ❌ Codebase very small (<1K LOC, minimal benefit)
- ❌ No refactoring capacity (detection without action is wasteful)
- ❌ Tools unavailable (need static analysis capabilities)
---
## Quick Start (30 minutes)
### Step 1: Pattern Inventory (15 min)
**For error handling**:
```bash
# Count error creation patterns
grep -r "fmt.Errorf\|errors.New" . --include="*.go" | wc -l
grep -r "raise.*Error\|Exception" . --include="*.py" | wc -l
grep -r "throw new Error\|Error(" . --include="*.js" | wc -l
# Identify inconsistencies
# - Bare errors vs wrapped errors
# - Custom error types vs generic
# - Context preservation patterns
```
**For logging**:
```bash
# Count logging approaches
grep -r "log\.\|slog\.\|logrus\." . --include="*.go" | wc -l
grep -r "logging\.\|logger\." . --include="*.py" | wc -l
grep -r "console\.\|logger\." . --include="*.js" | wc -l
# Identify inconsistencies
# - Multiple logging libraries
# - Structured vs unstructured
# - Log level usage
```
**For configuration**:
```bash
# Count configuration access patterns
grep -r "os.Getenv\|viper\.\|env:" . --include="*.go" | wc -l
grep -r "os.environ\|config\." . --include="*.py" | wc -l
grep -r "process.env\|config\." . --include="*.js" | wc -l
# Identify inconsistencies
# - Direct env access vs centralized config
# - Missing validation
# - No defaults
```
### Step 2: Prioritize by File Tier (10 min)
**Tier 1 (ROI > 10x)**: User-facing APIs, public interfaces, error infrastructure
**Tier 2 (ROI 5-10x)**: Internal services, CLI commands, data processors
**Tier 3 (ROI < 5x)**: Test utilities, stubs, deprecated code
**Decision**: Standardize Tier 1 fully, Tier 2 selectively, defer Tier 3
### Step 3: Define Initial Conventions (5 min)
**Error Handling**:
- Standard: Sentinel errors + wrapping (Go: %w, Python: from, JS: cause)
- Context: Operation + Resource + Error Type + Guidance
**Logging**:
- Standard: Structured logging (Go: log/slog, Python: logging, JS: winston)
- Levels: DEBUG, INFO, WARN, ERROR with clear usage guidelines
**Configuration**:
- Standard: Centralized Config struct with validation
- Source: Environment variables (12-Factor App pattern)
---
## Five Universal Principles
### 1. Detect Before Standardize
**Pattern**: Automate identification of non-compliant code
**Why**: Manual inspection doesn't scale, misses edge cases
**Implementation**:
1. Create linter/static analyzer for your conventions
2. Run on full codebase to quantify scope
3. Categorize violations by severity and user impact
4. Generate compliance report
**Examples by Language**:
- **Go**: `scripts/lint-errors.sh` detects bare `fmt.Errorf`, missing `%w`
- **Python**: pylint rule for bare `raise Exception()`, missing `from` clause
- **JavaScript**: ESLint rule for `throw new Error()` without context
- **Rust**: clippy rule for unwrap() without context
**Validation**: Enables data-driven prioritization (know scope before starting)
---
### 2. Prioritize by Value
**Pattern**: High-value files first, low-value files later (or never)
**Why**: ROI diminishes after 85-90% coverage, focus maximizes impact
**File Tier Classification**:
**Tier 1 (ROI > 10x)**:
- User-facing APIs
- Public interfaces
- Error infrastructure (sentinel definitions, enrichment functions)
- **Impact**: User experience, external API quality
**Tier 2 (ROI 5-10x)**:
- Internal services
- CLI commands
- Data processors
- **Impact**: Developer experience, debugging efficiency
**Tier 3 (ROI < 5x)**:
- Test utilities
- Stubs/mocks
- Deprecated code
- **Impact**: Minimal, defer or skip
**Decision Rule**: Standardize Tier 1 fully (100%), Tier 2 selectively (50-80%), defer Tier 3 (0-20%)
**Validated Data** (meta-cc):
- Tier 1 (capabilities.go): 16.7x ROI, 25.5% value gain
- Tier 2 (internal utilities): 8.3x ROI, 6% value gain
- Tier 3 (stubs): 3x ROI, 1% value gain (skipped)
---
### 3. Infrastructure Enables Scale
**Pattern**: Build foundational components before standardizing call sites
**Why**: 1000 call sites depend on 10 sentinel errors → build the sentinels first
**Infrastructure Components**:
1. **Sentinel errors/exceptions**: Define reusable error types
2. **Error enrichment functions**: Add context consistently
3. **Linter/analyzer**: Detect non-compliant code
4. **CI integration**: Enforce standards automatically
**Example Sequence** (Go):
```
1. Create internal/errors/errors.go with sentinels (3 hours)
2. Integrate linter into Makefile (10 minutes)
3. Standardize 53 call sites (5 hours total)
4. Add GitHub Actions workflow (10 minutes)
ROI: Infrastructure (3.3 hours) enables 53 sites (5 hours) + ongoing enforcement (infinite ROI)
```
**Example Sequence** (Python):
```
1. Create errors.py with custom exception classes (2 hours)
2. Create pylint plugin for enforcement (1 hour)
3. Standardize call sites (4 hours)
4. Add tox integration (10 minutes)
```
**Principle**: Invest in infrastructure early for multiplicative returns
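As a concrete anchor for step 1, here is a minimal Go sketch of what such an infrastructure file could look like; the sentinel names follow the conventions listed later in this skill, but the actual meta-cc file may differ:
```go
// Package errors defines the sentinel errors and enrichment helper that
// call sites depend on; build this before standardizing call sites.
package errors

import (
	"errors"
	"fmt"
)

// Sentinels: one reusable error per failure category.
var (
	ErrFileIO         = errors.New("file I/O error")
	ErrNetworkFailure = errors.New("network failure")
	ErrParseError     = errors.New("parse error")
	ErrNotFound       = errors.New("not found")
)

// Wrap adds operation and resource context while keeping the sentinel
// in the chain, so callers can still match it with errors.Is.
func Wrap(sentinel error, op, resource string, err error) error {
	return fmt.Errorf("%s '%s': %v: %w", op, resource, err, sentinel)
}
```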
---
### 4. Context Is King
**Pattern**: Enrich errors with operation context, resource IDs, actionable guidance
**Why**: 60-75% faster diagnosis with rich context (validated in Bootstrap-013)
**Context Layers**:
1. **Operation**: What was being attempted?
2. **Resource**: Which file/URL/record failed?
3. **Error Type**: What category of failure?
4. **Guidance**: What should user/developer do?
**Examples by Language**:
**Go** (Before/After):
```go
// Before: Poor context
return fmt.Errorf("failed to load: %v", err)
// After: Rich context
return fmt.Errorf("failed to load capability '%s' from source '%s': %w",
name, source, ErrFileIO)
```
**Python** (Before/After):
```python
# Before: Poor context
raise Exception(f"failed to load: {err}")
# After: Rich context
raise FileNotFoundError(
    f"failed to load capability '{name}' from source '{source}': {err}"
) from err
# (use a custom exception class if you need structured fields like name/source)
```
**JavaScript** (Before/After):
```javascript
// Before: Poor context
throw new Error(`failed to load: ${err}`);
// After: Rich context
throw new FileLoadError(
`failed to load capability '${name}' from source '${source}': ${err}`,
{ name, source, cause: err }
);
```
**Rust** (Before/After):
```rust
// Before: Poor context
Err(err)?
// After: Rich context
Err(err).context(format!(
"failed to load capability '{}' from source '{}'", name, source))?
```
**Impact**: Error diagnosis time reduced by 60-75% (from minutes to seconds)
---
### 5. Automate Enforcement
**Pattern**: CI blocks non-compliant code, prevents regression
**Why**: Manual review doesn't scale, humans forget conventions
**Implementation** (language-agnostic):
1. Integrate linter into build system (Makefile, package.json, Cargo.toml)
2. Add CI workflow (GitHub Actions, GitLab CI, CircleCI)
3. Run on every push/PR
4. Block merge if violations found
5. Provide clear error messages with fix guidance
**Example CI Setup** (GitHub Actions):
```yaml
name: Lint Cross-Cutting Concerns
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run error handling linter
        run: make lint-errors  # a non-zero exit fails the job and blocks the merge
```
**Validated Data** (meta-cc):
- CI setup time: 20 minutes
- Ongoing maintenance: 0 hours (fully automated)
- Regression rate: 0% (100% enforcement)
- False positive rate: 0% (accurate linter)
---
## File Tier Prioritization Framework
### ROI Calculation
**Formula**:
```
For each file:
1. User Impact: high (10) / medium (5) / low (1)
2. Error Sites (N): Count of patterns to standardize
3. Time Investment (T): Estimated hours to refactor
4. Value Gain (ΔV): Expected improvement (0-100%)
5. ROI = (ΔV × Project Horizon) / T
Project Horizon: Expected lifespan (e.g., 2 years = 24 months)
```
**Example Calculation** (capabilities.go, meta-cc):
```
User Impact: High (10) - Affects capability loading
Error Sites: 8 sites
Time Investment: 0.5 hours
Value Gain: 25.5% (from 0.233 to 0.488)
Project Horizon: 24 months
ROI = (0.255 × 24) / 0.5 = 12.24 (round to 12x)
Classification: Tier 1 (ROI > 10x)
```
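The same arithmetic as a small Go sketch, with tier cutoffs taken from the decision matrix below:
```go
package main

import "fmt"

// roi implements ROI = (deltaV x horizonMonths) / hours from the formula above.
func roi(deltaV, horizonMonths, hours float64) float64 {
	return (deltaV * horizonMonths) / hours
}

// tier classifies a file by the decision matrix below.
func tier(r float64) int {
	switch {
	case r > 10:
		return 1
	case r >= 5:
		return 2
	default:
		return 3
	}
}

func main() {
	r := roi(0.255, 24, 0.5) // the capabilities.go example above
	fmt.Printf("ROI=%.2fx tier=%d\n", r, tier(r)) // ROI=12.24x tier=1
}
```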
### Tier Decision Matrix
| Tier | ROI Range | Strategy | Coverage Target |
|------|-----------|----------|-----------------|
| Tier 1 | >10x | Standardize fully | 100% |
| Tier 2 | 5-10x | Selective standardization | 50-80% |
| Tier 3 | <5x | Defer or skip | 0-20% |
**Meta-cc Results**:
- 1 Tier 1 file (capabilities.go): 100% standardized
- 5 Tier 2 files: 60% standardized (strategic selection)
- 10+ Tier 3 files: 0% standardized (deferred)
---
## Pattern Extraction Workflow
### Phase 1: Observe (Iterations 0-1)
**Objective**: Catalog existing patterns and measure consistency
**Steps**:
1. **Pattern Inventory**:
- Count patterns by type (error handling, logging, config)
- Identify variations (fmt.Errorf vs errors.New, log vs slog)
- Calculate consistency percentage
2. **Baseline Metrics**:
- Total occurrences per pattern
- Consistency ratio (dominant pattern / total)
- Coverage gaps (files without patterns)
3. **Gap Analysis**:
- What's missing? (sentinel errors, structured logging, config validation)
- What's inconsistent? (multiple approaches in same concern)
- What's priority? (user-facing vs internal)
**Output**: Pattern inventory, baseline metrics, gap analysis
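For the consistency-ratio metric, a tiny Go sketch, assuming you already have per-pattern counts (e.g. from the grep inventory in the Quick Start):
```go
package main

import "fmt"

// consistency returns the dominant pattern and its share of all
// occurrences, e.g. fmt.Errorf vs errors.New counts.
func consistency(counts map[string]int) (dominant string, ratio float64) {
	total := 0
	for name, c := range counts {
		total += c
		if c > counts[dominant] {
			dominant = name
		}
	}
	if total == 0 {
		return "", 0
	}
	return dominant, float64(counts[dominant]) / float64(total)
}

func main() {
	counts := map[string]int{"fmt.Errorf": 140, "errors.New": 60} // from grep | wc -l
	name, r := consistency(counts)
	fmt.Printf("dominant=%s consistency=%.0f%%\n", name, r*100) // 70%
}
```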
---
### Phase 2: Codify (Iterations 2-4)
**Objective**: Define conventions and create enforcement tools
**Steps**:
1. **Convention Selection**:
- Choose standard library or tool per concern
- Document usage guidelines (when to use each pattern)
- Define anti-patterns (what to avoid)
2. **Infrastructure Creation**:
- Create sentinel errors/exceptions
- Create enrichment utilities
- Create configuration struct with validation
3. **Linter Development**:
- Detect non-compliant patterns
- Provide fix suggestions
- Generate compliance reports
**Output**: Conventions document, infrastructure code, linter script
---
### Phase 3: Automate (Iterations 5-6)
**Objective**: Enforce conventions and prevent regressions
**Steps**:
1. **Standardize High-Value Files** (Tier 1):
- Apply conventions systematically
- Test thoroughly (no behavior changes)
- Measure value improvement
2. **CI Integration**:
- Add linter to Makefile/build system
- Create GitHub Actions workflow
- Configure blocking on violations
3. **Documentation**:
- Update contributing guidelines
- Add examples to README
- Document migration process for remaining files
**Output**: Standardized Tier 1 files, CI enforcement, documentation
---
## Convention Selection Process
### Error Handling Conventions
**Decision Tree**:
```
1. Does language have built-in error wrapping?
Go 1.13+: Use fmt.Errorf with %w
Python 3+: Use raise ... from err
JavaScript: Use Error.cause (Node 16.9+)
Rust: Use thiserror + anyhow
2. Define sentinel errors:
- ErrFileIO, ErrNetworkFailure, ErrParseError, ErrNotFound, etc.
- Use custom error types for domain-specific errors
3. Context enrichment template:
Operation + Resource + Error Type + Guidance
```
**13 Best Practices** (Go example, adapt to language):
1. Use sentinel errors for common failures
2. Wrap errors with `%w` for Is/As support
3. Add operation context (what was attempted)
4. Include resource IDs (file paths, URLs, record IDs)
5. Preserve error chain (don't break wrapping)
6. Don't log and return (caller decides)
7. Provide actionable guidance in user-facing errors
8. Use custom error types for domain logic
9. Validate error paths in tests
10. Document error contract in godoc/docstrings
11. Use errors.Is for sentinel matching
12. Use errors.As for type extraction
13. Avoid panic (except unrecoverable programmer errors)
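A compact Go sketch exercising practices 1-4 and 11 together (sentinel, `%w` wrapping, operation and resource context, `errors.Is` matching); the function and capability names are illustrative:
```go
package main

import (
	"errors"
	"fmt"
)

// Practice 1: a sentinel error for a common failure category.
var ErrNotFound = errors.New("not found")

// Practices 2-4: wrap with %w and include operation + resource context.
func loadCapability(name string) error {
	return fmt.Errorf("load capability '%s': %w", name, ErrNotFound)
}

func main() {
	err := loadCapability("meta-errors")
	fmt.Println(errors.Is(err, ErrNotFound)) // practice 11: prints true
}
```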
---
### Logging Conventions
**Decision Tree**:
```
1. Choose structured logging library:
Go: log/slog (standard library, performant)
Python: logging (standard library)
JavaScript: winston or pino
Rust: tracing or log
2. Define log levels:
- DEBUG: Detailed diagnostic (dev only)
- INFO: General informational (default)
- WARN: Unexpected but handled
- ERROR: Requires intervention
3. Structured logging format:
logger.Info("operation complete",
"resource", resourceID,
"duration_ms", duration.Milliseconds())
```
**13 Best Practices** (Go log/slog example):
1. Use structured logging (key-value pairs)
2. Configure log level via environment variable
3. Use contextual logger (logger.With for request context)
4. Include operation name in every log
5. Add resource IDs for traceability
6. Use DEBUG for diagnostic details
7. Use INFO for business events
8. Use WARN for recoverable issues
9. Use ERROR for failures requiring action
10. Don't log sensitive data (passwords, tokens)
11. Use consistent key names (user_id not userId/userID)
12. Output to stderr (stdout for application output)
13. Include timestamps and source location
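A short log/slog sketch touching practices 1-5, 12, and 13; the `APP_LOG_LEVEL` variable name is illustrative:
```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Practice 2: level from an environment variable.
	level := slog.LevelInfo
	if os.Getenv("APP_LOG_LEVEL") == "DEBUG" {
		level = slog.LevelDebug
	}

	// Practices 12-13: stderr output with timestamps and source location.
	logger := slog.New(slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{
		Level:     level,
		AddSource: true,
	}))

	// Practice 3: contextual logger carries request-scoped keys.
	reqLog := logger.With("request_id", "r-42")

	// Practices 1, 4, 5: structured pairs, operation name, resource ID.
	reqLog.Info("capability load complete", "operation", "load", "capability", "meta-errors")
}
```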
---
### Configuration Conventions
**Decision Tree**:
```
1. Choose configuration approach:
- 12-Factor App: Environment variables (recommended)
- Config files: YAML/TOML (if complex config needed)
- Hybrid: Env vars with file override
2. Create centralized Config struct:
- All configuration in one place
- Validation on load
- Sensible defaults
- Clear documentation
3. Environment variable naming:
PREFIX_COMPONENT_SETTING (e.g., APP_DB_HOST)
```
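A minimal sketch of the centralized Config struct described above, covering load-once, validation, defaults, and fail-fast; the `APP_*` variable names are illustrative:
```go
package main

import (
	"fmt"
	"log"
	"os"
)

// Config centralizes all settings; load once at startup, validate, fail fast.
type Config struct {
	DBHost   string
	LogLevel string
}

func getenvDefault(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func loadConfig() (Config, error) {
	cfg := Config{
		DBHost:   os.Getenv("APP_DB_HOST"),
		LogLevel: getenvDefault("APP_LOG_LEVEL", "INFO"), // sensible default
	}
	if cfg.DBHost == "" { // validate required fields on load
		return Config{}, fmt.Errorf("APP_DB_HOST is required")
	}
	return cfg, nil
}

func main() {
	cfg, err := loadConfig()
	if err != nil {
		log.Fatal(err) // fail fast
	}
	fmt.Printf("db=%s level=%s\n", cfg.DBHost, cfg.LogLevel)
}
```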
**14 Best Practices** (Go example):
1. Centralize config in single struct
2. Load config once at startup
3. Validate all required fields
4. Provide sensible defaults
5. Use environment variables for deployment differences
6. Use config files for complex/nested config
7. Never hardcode secrets (use env vars or secret management)
8. Document all config options (README or godoc)
9. Use consistent naming (PREFIX_COMPONENT_SETTING)
10. Parse and validate early (fail fast)
11. Make config immutable after load
12. Support config reload for long-running services (optional)
13. Log effective config on startup (mask secrets)
14. Provide example config file (.env.example)
---
## Proven Results
**Validated in bootstrap-013 (meta-cc project)**:
- ✅ Error handling: 70% baseline consistency → 90% standardized (Tier 1 files)
- ✅ Logging: 0.7% baseline coverage → 90% adoption (MCP server, capabilities)
- ✅ Configuration: 40% baseline consistency → 80% centralized
- ✅ ROI: 16.7x for Tier 1 files (capabilities.go), 8.3x for Tier 2
- ✅ Diagnosis speed: 60-75% faster with rich error context
- ✅ CI enforcement: 0% regression rate, 20-minute setup
**Transferability Validation**:
- Go: 90% (native implementation)
- Python: 80-85% (exception classes, logging module)
- JavaScript: 75-80% (Error.cause, winston)
- Rust: 85-90% (thiserror, anyhow, tracing)
- **Overall**: 80-90% transferable ✅
**Universal Components** (language-agnostic):
- 5 principles (100% universal)
- File tier prioritization (100% universal)
- ROI calculation framework (100% universal)
- Pattern extraction workflow (95% universal, tooling varies)
- Context enrichment structure (100% universal)
---
## Common Anti-Patterns
❌ **Pattern Sprawl**: Multiple error handling approaches in the same codebase (consistency loss)
❌ **Standardize Everything**: Wasting effort on Tier 3 files (low ROI)
❌ **No Infrastructure**: Standardizing call sites before creating sentinels (rework needed)
❌ **Poor Context**: Generic errors without operation/resource info (slow diagnosis)
❌ **Manual Enforcement**: Relying on code review instead of CI (regression risk)
❌ **Premature Optimization**: Building a complex linter before understanding patterns (over-engineering)
---
## Templates and Examples
### Templates
- [Sentinel Errors Template](templates/sentinel-errors-template.md) - Define reusable error types by language
- [Linter Script Template](templates/linter-script-template.sh) - Detect non-compliant patterns
- [Structured Logging Template](templates/structured-logging-template.md) - log/slog, winston, etc.
- [Config Struct Template](templates/config-struct-template.md) - Centralized configuration with validation
### Examples
- [Error Handling Standardization](examples/error-handling-walkthrough.md) - Full workflow from inventory to enforcement
- [File Tier Prioritization](examples/file-tier-calculation.md) - ROI calculation with real meta-cc data
- [CI Integration Guide](examples/ci-integration-example.md) - GitHub Actions linter workflow
---
## Related Skills
**Parent framework**:
- [methodology-bootstrapping](../methodology-bootstrapping/SKILL.md) - Core OCA cycle
**Complementary domains**:
- [error-recovery](../error-recovery/SKILL.md) - Error handling patterns align
- [observability-instrumentation](../observability-instrumentation/SKILL.md) - Logging and metrics
- [technical-debt-management](../technical-debt-management/SKILL.md) - Pattern inconsistency is architectural debt
---
## References
**Core methodology**:
- [Cross-Cutting Concerns Methodology](reference/cross-cutting-concerns-methodology.md) - Complete methodology guide
- [5 Universal Principles](reference/universal-principles.md) - Language-agnostic principles
- [File Tier Prioritization](reference/file-tier-prioritization.md) - ROI framework
- [Pattern Extraction](reference/pattern-extraction-workflow.md) - Observe-Codify-Automate process
**Best practices by concern**:
- [Error Handling Best Practices](reference/error-handling-best-practices.md) - 13 practices with language examples
- [Logging Best Practices](reference/logging-best-practices.md) - 13 practices for structured logging
- [Configuration Best Practices](reference/configuration-best-practices.md) - 14 practices for centralized config
**Language-specific guides**:
- [Go Adaptation](reference/go-adaptation.md) - log/slog, fmt.Errorf %w, os.Getenv
- [Python Adaptation](reference/python-adaptation.md) - logging, raise...from, os.environ
- [JavaScript Adaptation](reference/javascript-adaptation.md) - winston, Error.cause, process.env
- [Rust Adaptation](reference/rust-adaptation.md) - tracing, anyhow, thiserror
---
**Status**: ✅ Production-ready | Validated in meta-cc | 60-75% faster diagnosis | 80-90% transferable
```
### ../technical-debt-management/SKILL.md
```markdown
---
name: Technical Debt Management
description: Systematic technical debt quantification and management using SQALE methodology with value-effort prioritization, phased paydown roadmaps, and prevention strategies. Use when technical debt unmeasured or subjective, need objective prioritization, planning refactoring work, establishing debt prevention practices, or tracking debt trends over time. Provides 6 methodology components (measurement with SQALE index, categorization with code smell taxonomy, prioritization with value-effort matrix, phased paydown roadmap, trend tracking system, prevention guidelines), 3 patterns (SQALE-based quantification, code smell taxonomy mapping, value-effort prioritization), 3 principles (high-value low-effort first, SQALE provides objective baseline, complexity drives maintainability debt). Validated with 4.5x speedup vs manual approach, 85% transferability across languages (Go, Python, JavaScript, Java, Rust), SQALE industry-standard methodology.
allowed-tools: Read, Write, Edit, Bash, Grep, Glob
---
# Technical Debt Management
**Transform subjective debt assessment into objective, data-driven paydown strategy with 4.5x speedup.**
> Measure what matters. Prioritize by value. Pay down strategically. Prevent proactively.
---
## When to Use This Skill
Use this skill when:
- 📊 **Unmeasured debt**: Technical debt unknown or subjectively assessed
- 🎯 **Need prioritization**: Many debt items, unclear which to tackle first
- 📋 **Planning refactoring**: Need objective justification and ROI analysis
- 🚨 **Debt accumulation**: Debt growing but no tracking system
- 🛡️ **Prevention lacking**: Reactive debt management, no proactive practices
- 📈 **Objective reporting**: Stakeholders need quantified debt metrics
**Don't use when**:
- ❌ Debt already well-quantified with SQALE or similar methodology
- ❌ Codebase very small (<1K LOC, minimal debt accumulation)
- ❌ No refactoring capacity (debt measurement without action is wasteful)
- ❌ Tools unavailable (need complexity, coverage, duplication analysis tools)
---
## Quick Start (30 minutes)
### Step 1: Calculate SQALE Index (15 min)
**SQALE Formula**:
```
Development Cost = LOC / 30 (30 LOC/hour productivity)
Technical Debt = Remediation Cost (hours)
TD Ratio = Technical Debt / Development Cost × 100%
```
**SQALE Ratings**:
- A (Excellent): ≤5% TD ratio
- B (Good): 6-10%
- C (Moderate): 11-20%
- D (Poor): 21-50%
- E (Critical): >50%
**Example** (meta-cc):
```
LOC: 12,759
Development Cost: 425.3 hours
Technical Debt: 66.0 hours
TD Ratio: 15.52% (Rating: C - Moderate)
```
### Step 2: Categorize Debt (10 min)
**SQALE Code Smell Taxonomy**:
1. **Bloaters**: Long methods, large classes (complexity debt)
2. **Change Preventers**: Shotgun surgery, divergent change (flexibility debt)
3. **Reliability Issues**: Test coverage gaps, error handling (quality debt)
4. **Couplers**: Feature envy, inappropriate intimacy (coupling debt)
5. **Dispensables**: Duplicate code, dead code (maintainability debt)
**Example Breakdown**:
- Complexity: 54.5 hours (82.6%)
- Coverage: 10.0 hours (15.2%)
- Duplication: 1.0 hours (1.5%)
### Step 3: Prioritize with Value-Effort Matrix (5 min)
**Four Quadrants**:
```
High Value, Low Effort → Quick Wins (do first)
High Value, High Effort → Strategic (plan carefully)
Low Value, Low Effort → Opportunistic (do when convenient)
Low Value, High Effort → Avoid (skip unless critical)
```
**Quick Wins Example**:
- Fix error capitalization (0.5 hours)
- Increase test coverage for small module (2.0 hours)
---
## Six Methodology Components
### 1. Measurement Framework (SQALE)
**Objective**: Quantify technical debt objectively using industry-standard SQALE methodology
**Three Calculations**:
**A. Development Cost**:
```
Development Cost = LOC / Productivity
Productivity = 30 LOC/hour (SQALE standard)
```
**B. Remediation Cost** (Complexity Example):
```
Graduated Thresholds:
- Low complexity (≤10): 0 hours
- Medium complexity (11-15): 0.5 hours per function
- High complexity (16-25): 1.0 hours per function
- Very high (26-50): 2.0 hours per function
- Extreme (>50): 4.0 hours per function
```
**C. Technical Debt Ratio**:
```
TD Ratio = (Total Remediation Cost / Development Cost) × 100%
SQALE Rating = Map TD Ratio to A-E scale
```
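A minimal Go sketch of the three calculations, directly encoding the graduated thresholds and formulas above (function names are illustrative):
```go
package sqale

// RemediationHours maps a function's cyclomatic complexity to remediation
// cost using the graduated thresholds above.
func RemediationHours(complexity int) float64 {
	switch {
	case complexity <= 10:
		return 0 // low
	case complexity <= 15:
		return 0.5 // medium
	case complexity <= 25:
		return 1.0 // high
	case complexity <= 50:
		return 2.0 // very high
	default:
		return 4.0 // extreme
	}
}

// TDRatio returns (total remediation cost / development cost) × 100,
// where development cost = LOC / 30 (SQALE standard productivity).
func TDRatio(loc int, remediationHours float64) float64 {
	devCost := float64(loc) / 30.0
	return remediationHours / devCost * 100.0
}

// Rating maps a TD ratio to the SQALE A-E scale.
func Rating(tdRatio float64) string {
	switch {
	case tdRatio <= 5:
		return "A"
	case tdRatio <= 10:
		return "B"
	case tdRatio <= 20:
		return "C"
	case tdRatio <= 50:
		return "D"
	default:
		return "E"
	}
}
```
With the meta-cc figures, `TDRatio(12759, 66.0)` returns ≈15.52% and `Rating` maps it to C, matching the Quick Start example.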
**Tools**:
- Go: gocyclo, gocov, golangci-lint
- Python: radon, pylint, pytest-cov
- JavaScript: eslint, jscpd, nyc
- Java: PMD, JaCoCo, CheckStyle
- Rust: cargo-geiger, clippy
**Output**: SQALE Index Report (total debt, TD ratio, rating, breakdown by category)
**Transferability**: 100% (SQALE formulas language-agnostic)
---
### 2. Categorization Framework (Code Smells)
**Objective**: Map metrics to SQALE code smell taxonomy for prioritization
**Five SQALE Categories**:
**1. Bloaters** (Complexity Debt):
- Long methods (cyclomatic complexity >10)
- Large classes (>500 LOC)
- Long parameter lists (>5 parameters)
- **Remediation**: Extract method, split class, introduce parameter object
**2. Change Preventers** (Flexibility Debt):
- Shotgun surgery (change requires touching multiple files)
- Divergent change (class changes for multiple reasons)
- **Remediation**: Consolidate logic, introduce abstraction layer
**3. Reliability Issues** (Quality Debt):
- Test coverage gaps (<80% target)
- Missing error handling
- **Remediation**: Add tests, implement error handling
**4. Couplers** (Coupling Debt):
- Feature envy (method uses data from another class more than own)
- Inappropriate intimacy (high coupling between modules)
- **Remediation**: Move method, reduce coupling
**5. Dispensables** (Maintainability Debt):
- Duplicate code (>3% duplication ratio)
- Dead code (unreachable functions)
- **Remediation**: Extract common code, remove dead code
**Output**: Code Smell Report (smell type, instances, files, remediation cost)
**Transferability**: 80-90% (OO smells apply to OO languages only, others universal)
---
### 3. Prioritization Framework (Value-Effort Matrix)
**Objective**: Rank debt items by ROI (business value / remediation effort)
**Business Value Assessment** (3 factors, summed):
1. **User Impact**: Does debt affect user experience? (0-10)
2. **Change Frequency**: How often is this code changed? (0-10)
3. **Error Risk**: Does debt cause bugs? (0-10)
**Total Value**: Sum of the 3 factors (0-30)
**Effort Estimation**:
- Use SQALE remediation cost model
- Factor in testing, code review, deployment time
**Value-Effort Quadrants**:
```
High Value
|
Quick | Strategic
Wins |
---------|------------- Effort
Opportun-| Avoid
istic |
|
Low Value
```
**Priority Ranking**:
1. Quick Wins (high value, low effort)
2. Strategic (high value, high effort) - plan carefully
3. Opportunistic (low value, low effort) - when convenient
4. Avoid (low value, high effort) - skip unless critical
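A minimal Go sketch of the quadrant assignment; the value cutoff (≥15 of 30) and effort cutoff (≤2 hours, echoing the quick-wins phase) are illustrative assumptions to calibrate per project:
```go
package debt

// Item is one debt item scored for prioritization.
type Item struct {
	Name        string
	Value       int     // 0-30: user impact + change frequency + error risk
	EffortHours float64 // SQALE remediation estimate
}

// Quadrant assigns an item to a value-effort quadrant using
// illustrative cutoffs (value >= 15, effort <= 2h).
func Quadrant(it Item) string {
	highValue := it.Value >= 15
	lowEffort := it.EffortHours <= 2.0
	switch {
	case highValue && lowEffort:
		return "Quick Win"
	case highValue:
		return "Strategic"
	case lowEffort:
		return "Opportunistic"
	default:
		return "Avoid"
	}
}
```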
**Output**: Prioritization Matrix (debt items ranked by quadrant)
**Transferability**: 95% (value-effort concept universal, specific values vary)
---
### 4. Paydown Framework (Phased Roadmap)
**Objective**: Create actionable, phased plan for debt reduction
**Four Phases**:
**Phase 1: Quick Wins** (0-2 hours)
- Highest ROI items
- Build momentum, demonstrate value
- Example: Fix lint issues, error capitalization
**Phase 2: Coverage Gaps** (2-12 hours)
- Test coverage improvements
- Prevent regressions, enable refactoring confidence
- Example: Add integration tests, increase coverage to ≥80%
**Phase 3: Strategic Complexity** (12-30 hours)
- High-value, high-effort refactoring
- Address architectural debt
- Example: Consolidate duplicated logic, refactor high-complexity functions
**Phase 4: Opportunistic** (as time allows)
- Low-priority items tackled when working nearby
- Example: Refactor during feature development in same area
**Expected Improvements** (calculate per phase):
```
Phase TD Reduction = Sum of remediation costs in phase
New TD Ratio = (Total Debt - Phase TD Reduction) / Development Cost × 100%
New SQALE Rating = Map new TD ratio to A-E scale
```
**Output**: Paydown Roadmap (4 phases, time estimates, expected TD ratio improvements)
**Transferability**: 100% (phased approach universal)
---
### 5. Tracking Framework (Trend Analysis)
**Objective**: Continuous debt monitoring with early warning alerts
**Five Tracking Components**:
**1. Automated Data Collection**:
- Weekly metrics collection (complexity, coverage, duplication)
- CI/CD integration (collect on every build)
**2. Baseline Storage**:
- Quarterly SQALE snapshots
- Historical comparison (track delta)
**3. Trend Tracking**:
- Time series: TD ratio, complexity, coverage, hotspots
- Identify trends (increasing, decreasing, stable)
**4. Visualization Dashboard**:
- TD ratio over time
- Debt by category (stacked area chart)
- Coverage trends
- Complexity heatmap
- Hotspot analysis (files with most debt)
**5. Alerting Rules**:
- TD ratio increase >5% in 1 month
- Coverage drop >5%
- New high-complexity functions (>25 complexity)
- Duplication spike >3%
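A minimal Go sketch of how these four rules might be evaluated against a stored baseline; the `Snapshot` fields and percentage-point deltas are one interpretation of the rules above:
```go
package tracking

// Snapshot holds one periodic metrics collection.
type Snapshot struct {
	TDRatio        float64 // percent
	Coverage       float64 // percent
	MaxComplexity  int
	DuplicationPct float64
}

// Alerts compares the current snapshot against a baseline (e.g., one
// month old) and returns the triggered alert messages.
func Alerts(baseline, current Snapshot) []string {
	var out []string
	if current.TDRatio-baseline.TDRatio > 5 {
		out = append(out, "TD ratio increased >5% since baseline")
	}
	if baseline.Coverage-current.Coverage > 5 {
		out = append(out, "coverage dropped >5% since baseline")
	}
	if current.MaxComplexity > 25 && baseline.MaxComplexity <= 25 {
		out = append(out, "new function(s) above complexity 25")
	}
	if current.DuplicationPct-baseline.DuplicationPct > 3 {
		out = append(out, "duplication spiked >3% since baseline")
	}
	return out
}
```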
**Expected Impact**:
- Visibility: Point-in-time → continuous trends
- Decision making: Reactive → data-driven proactive
- Early warning: Alert before debt spikes
**Output**: Tracking System Design (automation plan, dashboard mockups, alert rules)
**Transferability**: 95% (tracking concept universal, tools vary)
---
### 6. Prevention Framework (Proactive Practices)
**Objective**: Prevent new debt accumulation through gates and practices
**Six Prevention Strategies**:
**1. Pre-Commit Complexity Gates**:
```bash
# Reject commits with functions >15 complexity
gocyclo -over 15 .
```
**2. Test Coverage Requirements**:
- Overall: ≥80%
- New code: ≥90%
- CI/CD gate: Fail build if coverage drops
**3. Static Analysis Enforcement**:
- Zero tolerance for critical issues
- Warning threshold (fail if >10 warnings)
**4. Code Review Checklist** (6 debt prevention items):
- [ ] No functions >15 complexity
- [ ] Test coverage ≥90% for new code
- [ ] No duplicate code (DRY principle)
- [ ] Error handling complete
- [ ] No dead code
- [ ] Architecture consistency maintained
**5. Refactoring Time Budget**:
- Allocate 20% sprint capacity for refactoring
- Opportunistic paydown during feature work
**6. Architecture Review**:
- Quarterly health checks
- Identify architectural debt early
- Plan strategic refactoring
**Expected Impact**:
- TD accumulation: 2%/quarter → <0.5%/quarter
- ROI: 4 days saved per quarter (prevention time << paydown time)
**Output**: Prevention Guidelines (pre-commit hooks, CI/CD gates, code review checklist)
**Transferability**: 85% (specific thresholds vary, practices universal)
---
## Three Extracted Patterns
### Pattern 1: SQALE-Based Debt Quantification
**Problem**: Subjective debt assessment leads to inconsistent prioritization
**Solution**: Use SQALE methodology for objective, reproducible measurement
**Structure**:
1. Calculate development cost (LOC / 30)
2. Calculate remediation cost (graduated thresholds)
3. Calculate TD ratio (remediation / development × 100%)
4. Assign SQALE rating (A-E)
**Benefits**:
- Objective (same methodology, same results)
- Reproducible (industry standard)
- Comparable (across projects, over time)
**Transferability**: 90% (formulas universal, threshold calibration language-specific)
---
### Pattern 2: Code Smell Taxonomy Mapping
**Problem**: Metrics (complexity, duplication) don't directly translate to actionable insights
**Solution**: Map metrics to SQALE code smell taxonomy for clear remediation strategies
**Structure**:
```
Metric → Code Smell → Remediation Strategy
Complexity >10 → Long Method (Bloater) → Extract Method
Duplication >3% → Duplicate Code (Dispensable) → Extract Common Code
Coverage <80% → Test Gap (Reliability Issue) → Add Tests
```
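A minimal Go sketch of this mapping; thresholds mirror the table above, and the `Finding` type is illustrative:
```go
package smells

// Finding pairs a detected smell with its category and remediation.
type Finding struct {
	Smell       string
	Category    string
	Remediation string
}

// Classify applies the three metric-to-smell mappings above to one
// file or function's measurements.
func Classify(complexity int, duplicationPct, coveragePct float64) []Finding {
	var out []Finding
	if complexity > 10 {
		out = append(out, Finding{"Long Method", "Bloater", "Extract Method"})
	}
	if duplicationPct > 3 {
		out = append(out, Finding{"Duplicate Code", "Dispensable", "Extract Common Code"})
	}
	if coveragePct < 80 {
		out = append(out, Finding{"Test Gap", "Reliability Issue", "Add Tests"})
	}
	return out
}
```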
**Benefits**:
- Actionable (smell β remediation)
- Prioritizable (smell severity)
- Educational (developers learn smell patterns)
**Transferability**: 80% (OO smells require adaptation for non-OO languages)
---
### Pattern 3: Value-Effort Prioritization Matrix
**Problem**: Too many debt items, unclear which to tackle first
**Solution**: Rank by ROI using value-effort matrix
**Structure**:
1. Assess business value (user impact + change frequency + error risk)
2. Estimate remediation effort (SQALE model)
3. Plot on matrix (4 quadrants)
4. Prioritize: Quick Wins → Strategic → Opportunistic → Avoid
**Benefits**:
- ROI-driven (maximize value per hour)
- Transparent (stakeholders understand prioritization)
- Flexible (adjust value weights per project)
**Transferability**: 95% (concept universal, specific values vary)
---
## Three Principles
### Principle 1: Pay High-Value Low-Effort Debt First
**Statement**: "Maximize ROI by prioritizing high-value low-effort debt (quick wins) before tackling strategic debt"
**Rationale**:
- Build momentum (early wins)
- Demonstrate value (stakeholder buy-in)
- Free up capacity (small wins compound)
**Evidence**: Quick wins phase (0.5-2 hours) enables larger strategic work
**Application**: Always start paydown roadmap with quick wins
---
### Principle 2: SQALE Provides Objective Baseline
**Statement**: "Use SQALE methodology for objective, reproducible debt measurement to enable data-driven decisions"
**Rationale**:
- Subjective assessment varies by developer
- Objective measurement enables comparison (projects, time periods)
- Industry standard (validated across thousands of projects)
**Evidence**: 4.5x speedup vs manual approach, objective vs subjective
**Application**: Calculate SQALE index before any debt work
---
### Principle 3: Complexity Drives Maintainability Debt
**Statement**: "Complexity debt dominates technical debt (often 70-90%), focus refactoring on high-complexity functions"
**Rationale**:
- High complexity → hard to understand → slow changes → bugs
- Complexity compounds (high complexity attracts more complexity)
- Refactoring complexity has highest impact
**Evidence**: 82.6% of meta-cc debt from complexity (54.5/66 hours)
**Application**: Prioritize complexity reduction in paydown roadmaps
---
## Proven Results
**Validated in bootstrap-012 (meta-cc project)**:
- ✅ SQALE Index: 66 hours debt, 15.52% TD ratio, rating C (Moderate)
- ✅ Methodology: 6/6 components complete (measurement, categorization, prioritization, paydown, tracking, prevention)
- ✅ Convergence: V_instance = 0.805, V_meta = 0.855 (both >0.80)
- ✅ Duration: 4 iterations, ~7 hours
- ✅ Paydown roadmap: 31.5 hours → rating B (8.23%, -47.7% debt reduction)
**Effectiveness Validation**:
- Manual approach: 9 hours (ad-hoc review, subjective prioritization)
- Methodology approach: 2 hours (tool-based, SQALE calculation)
- **Speedup**: 4.5x ✅
- **Accuracy**: Subjective → Objective (SQALE standard)
- **Reproducibility**: Low → High
**Transferability Validation** (5 languages analyzed):
- Go: 90% transferable (native)
- Python: 85% (tools: radon, pylint, pytest-cov)
- JavaScript: 85% (tools: eslint, jscpd, nyc)
- Java: 90% (tools: PMD, JaCoCo, CheckStyle)
- Rust: 80% (tools: cargo-geiger, clippy, skip OO smells)
- **Overall**: 85% transferable ✅
**Universal Components** (13/16, 81%):
- SQALE formulas (100%)
- Prioritization matrix (100%)
- Paydown roadmap (100%)
- Code smell taxonomy (90%, OO smells excluded)
- Tracking approach (95%)
- Prevention practices (85%)
---
## Common Anti-Patterns
❌ **Measurement without action**: Calculating debt but not creating paydown plan
❌ **Strategic-only focus**: Skipping quick wins, tackling only big refactoring (low momentum)
❌ **No prevention**: Paying down debt without gates (debt re-accumulates)
❌ **Subjective prioritization**: "This code is bad" without quantified impact
❌ **Tool-free assessment**: Manual review instead of automated metrics (4.5x slower)
❌ **No tracking**: Point-in-time snapshot instead of continuous monitoring (reactive)
---
## Templates and Examples
### Templates
- [SQALE Index Report Template](templates/sqale-index-report-template.md) - Standard debt measurement report
- [Code Smell Categorization Template](templates/code-smell-categorization-template.md) - Map metrics to smells
- [Remediation Cost Breakdown Template](templates/remediation-cost-breakdown-template.md) - Estimate paydown effort
- [Transfer Guide Template](templates/transfer-guide-template.md) - Adapt methodology to new language
### Examples
- [SQALE Calculation Walkthrough](examples/sqale-calculation-example.md) - Step-by-step meta-cc example
- [Value-Effort Prioritization](examples/value-effort-matrix-example.md) - Prioritization matrix with real debt items
- [Phased Paydown Roadmap](examples/paydown-roadmap-example.md) - 4-phase plan with TD ratio improvements
---
## Related Skills
**Parent framework**:
- [methodology-bootstrapping](../methodology-bootstrapping/SKILL.md) - Core OCA cycle
**Complementary domains**:
- [testing-strategy](../testing-strategy/SKILL.md) - Coverage debt reduction
- [ci-cd-optimization](../ci-cd-optimization/SKILL.md) - Prevention gates
- [cross-cutting-concerns](../cross-cutting-concerns/SKILL.md) - Architectural debt patterns
---
## References
**Core methodology**:
- [SQALE Methodology](reference/sqale-methodology.md) - Complete SQALE guide
- [Code Smell Taxonomy](reference/code-smell-taxonomy.md) - SQALE categories with examples
- [Prioritization Framework](reference/prioritization-framework.md) - Value-effort matrix guide
- [Transfer Guide](reference/transfer-guide.md) - Language-specific adaptations
**Quick guides**:
- [15-Minute SQALE Analysis](reference/quick-sqale-analysis.md) - Fast debt measurement
- [Remediation Cost Estimation](reference/remediation-cost-guide.md) - Effort calculation
---
**Status**: ✅ Production-ready | Validated in meta-cc | 4.5x speedup | 85% transferable
```
### reference/progressive-learning-path.md
```markdown
# Progressive Learning Path Design
**Day 1**: Core concepts (30% coverage, 100% essentials)
**Week 1**: Common workflows (70% coverage)
**Month 1**: Complete mastery (100% coverage, edge cases)
**Source**: Knowledge Transfer Framework
```
### reference/validation-checkpoints.md
```markdown
# Validation Checkpoints
Check understanding at: 30%, 70%, 100% completion.
Methods: Self-test, practical application, peer review.
**Source**: Knowledge Transfer Framework
```
### reference/module-mastery.md
```markdown
# Module Mastery Approach
Complete depth-first learning of one module before moving to next.
**Criteria**: Can explain, apply, and adapt without reference.
**Source**: Knowledge Transfer Framework
```
### reference/learning-theory.md
```markdown
# Learning Theory for Knowledge Transfer
Progressive learning: crawl → walk → run
Module mastery: complete one before next
Validation checkpoints: verify understanding at milestones
**Source**: Knowledge Transfer Framework
```
### reference/create-day1-path.md
```markdown
# Creating Day 1 Learning Paths
Focus on: What they need to be productive immediately.
Include: Core concepts, most-used workflows, where to get help.
**Source**: Knowledge Transfer Framework
```
### reference/adaptation-guide.md
```markdown
# Adaptation Guide for New Contexts
Map concepts from source → target domain.
Identify analogies, differences, and edge cases.
**Source**: Knowledge Transfer Framework
```