moai-alfred-practices
Enterprise practical workflows, context engineering strategies, JIT (Just-In-Time) retrieval optimization, real-world execution examples, debugging patterns, and moai-adk workflow mastery; activates for workflow pattern learning, context optimization, debugging issue resolution, feature implementation end-to-end, and team knowledge transfer
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install ajbcoding-claude-skill-eval-moai-alfred-practices
Repository
Skill path: moai-adk-main/src/moai_adk/templates/.claude/skills/moai-alfred-practices
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: AJBcoding.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install moai-alfred-practices into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/AJBcoding/claude-skill-eval before adding moai-alfred-practices to shared team environments
- Use moai-alfred-practices for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: "moai-alfred-practices"
version: "4.0.0"
created: 2025-11-05
updated: 2025-11-12
status: stable
description: Enterprise practical workflows, context engineering strategies, JIT (Just-In-Time) retrieval optimization, real-world execution examples, debugging patterns, and moai-adk workflow mastery; activates for workflow pattern learning, context optimization, debugging issue resolution, feature implementation end-to-end, and team knowledge transfer
keywords: ['workflow-patterns', 'context-engineering', 'jit-retrieval', 'agent-usage', 'debugging-patterns', 'feature-implementation', 'practical-examples', 'moai-adk-mastery', 'knowledge-transfer', 'enterprise-workflows']
allowed-tools:
- Read
- Glob
- Bash
- WebFetch
- mcp__context7__resolve-library-id
- mcp__context7__get-library-docs
---
# Enterprise Practical Workflows & Context Engineering v4.0.0
## Skill Metadata
| Field | Value |
| ----- | ----- |
| **Skill Name** | moai-alfred-practices |
| **Version** | 4.0.0 Enterprise (2025-11-12) |
| **Focus** | Practical execution patterns, real-world scenarios |
| **Auto-load** | When workflow guidance or debugging help needed |
| **Included Patterns** | 15+ real-world scenarios |
| **Lines of Content** | 950+ with 20+ production examples |
| **Progressive Disclosure** | 3-level (quick-patterns, scenarios, advanced) |
---
## What It Does
Provides practical workflows, context engineering strategies, real-world execution examples, and debugging solutions for moai-adk. Covers JIT context management, efficient agent usage, SPEC→TDD→Sync execution, and common problem resolution.
---
## JIT (Just-In-Time) Context Strategy
### Principle: Load Only What's Needed Now
```
Traditional (overload):
Load entire codebase
→ Context window fills immediately
→ Limited reasoning capacity
→ Slow, inefficient
JIT (optimized):
Load core entry points
→ Identify specific function/module
→ Load only that section
→ Cache in thread context
→ Reuse for related tasks
→ Minimal context waste
```
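Outside any agent runtime, the same principle can be sketched in a few lines of plain Python (a minimal illustration; `JITContext` and the paths are invented for this example, not part of moai-adk):
```python
from pathlib import Path

class JITContext:
    """Load file contents on first request and reuse them afterward."""

    def __init__(self) -> None:
        self._cache: dict[str, str] = {}

    def load(self, path: str) -> str:
        # Read from disk only once; later calls are cache hits.
        if path not in self._cache:
            self._cache[path] = Path(path).read_text(encoding="utf-8")
        return self._cache[path]

ctx = JITContext()
entry = ctx.load("src/main.py")    # loaded now because the task needs it
again = ctx.load("src/main.py")    # reused from cache, no second read
```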
### Practice 1: Core Module Mapping
```bash
# 1. Get high-level structure
find src/ -type f -name "*.py" | wc -l
# Output: 145 files total
# 2. Identify entry points (only 3-5 files)
find src/ -name "__main__.py" -o -name "main.py" -o -name "run.py"
# 3. Load entry point + immediate dependencies only
#    (narrow the glob to the subtree the task touches; a project-wide
#     pattern like "src/**/*.py" would defeat JIT)
Glob("src/core/*.py")
# 4. Cache results in Task() context for reuse
Task(prompt="Task 1 using mapped modules")
Task(prompt="Task 2 reuses cached context")
```
### Practice 2: Dependency Tree Navigation
```
Project Root
├─ src/
│  ├─ __init__.py       ← Entry point #1
│  ├─ main.py           ← Entry point #2
│  ├─ core/
│  │  ├─ domain.py      ← Core models
│  │  ├─ repository.py  ← Data access
│  │  └─ service.py     ← Business logic
│  └─ api/
│     ├─ routes.py      ← API endpoints
│     └─ handlers.py    ← Request handlers
Load strategy:
1. Load main.py + __init__.py (entry points)
2. When modifying API → Load api/ subtree
3. When fixing business logic → Load core/service.py
4. Cache all loaded files in context
5. Share context between related tasks
```
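The "immediate dependencies" in that strategy can be computed mechanically. A stdlib-only sketch (the paths mirror the tree above; everything else is illustrative):
```python
import ast
from pathlib import Path

def direct_imports(path: str) -> set[str]:
    """Return the modules a file imports directly (its immediate dependencies)."""
    tree = ast.parse(Path(path).read_text(encoding="utf-8"))
    found: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return found

# Start from an entry point and load only what it references.
print(direct_imports("src/main.py"))  # e.g. {'core.service', 'api.routes'}
```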
### Practice 3: Context Reuse Across Tasks
```python
# Task 1: Understand module structure
analysis = Task(
    prompt="Map src/ directory structure, identify entry points, list dependencies"
)

# Task 2: Reuse the analysis for implementation
implementation = Task(
    prompt=f"""Using this structure:
{analysis}
Now implement feature X..."""
)

# Task 3: Reuse the analysis for testing
testing = Task(
    prompt=f"""Using this structure:
{analysis}
Write tests for feature X..."""
)

# Result: no re-mapping, efficient context reuse
```
---
## SPEC → TDD → Sync Execution Pattern
### Step 1: Create SPEC with `/alfred:1-plan`
```bash
/alfred:1-plan "Add user authentication with JWT"
# This creates:
# .moai/specs/SPEC-042/spec.md (full requirements)
# feature/SPEC-042 (git branch)
# Track with TodoWrite
```
### Step 2: Implement with `/alfred:2-run SPEC-042`
```
RED: Test agent writes failing tests
↓
GREEN: Implementer agent creates minimal code
↓
REFACTOR: Quality agent improves code
↓
Repeat TDD cycle for each feature component
↓
All tests passing, coverage ≥85%
```
### Step 3: Sync with `/alfred:3-sync auto SPEC-042`
```
Updates:
✓ Documentation
✓ Test coverage metrics
✓ Creates PR to develop
✓ Auto-validation of quality gates
```
---
## Debugging Pattern: Issue → Root Cause → Fix
### Step 1: Triage & Understand
```
Error message: "Cannot read property 'user_id' of undefined"
Questions:
- When does it occur? (always, intermittently, specific scenario)
- Which code path? (which endpoint/function)
- What's the state? (what data led to this)
- What changed recently? (revert to narrow down)
```
### Step 2: Isolate Root Cause
```python
# Method 1: Binary search
# Is it in API layer? → Yes
# Is it in route handler? → No
# Is it in service layer? → Yes
# Is it in this function? → Narrow down
# Method 2: Add logging
logger.debug(f"user_id = {user_id}") # Check where it becomes undefined
# Method 3: Test locally
# Reproduce with minimal example
# Add breakpoint in debugger
# Step through execution
```
### Step 3: Fix with Tests
```python
import pytest

# RED: Write a failing test
def test_handles_missing_user_id():
    """Should raise when user_id is missing."""
    with pytest.raises(ValueError):
        get_user(None)

# GREEN: Minimal fix
def get_user(user_id):
    if not user_id:
        raise ValueError("user_id required")
    return fetch_user(user_id)

# REFACTOR: Improve
def get_user(user_id: int) -> User:
    """Get user by ID.

    Args:
        user_id: User identifier

    Raises:
        ValueError: If user_id is None or invalid
    """
    if not user_id or user_id <= 0:
        raise ValueError(f"Invalid user_id: {user_id}")
    return fetch_user(user_id)
```
---
## 5 Real-World Scenarios
### Scenario 1: Feature Implementation (2-3 hours)
```
1. Create SPEC: /alfred:1-plan "Add user dashboard"
2. Clarify details: AskUserQuestion (which data to show?)
3. Implement: /alfred:2-run SPEC-XXX (TDD cycle)
4. Document: /alfred:3-sync auto SPEC-XXX
5. Result: Production-ready feature
```
### Scenario 2: Bug Investigation (1-2 hours)
```
1. Reproduce: Create minimal test case
2. Isolate: Narrow down affected code
3. Debug: Add logging, trace execution
4. Fix: TDD RED→GREEN→REFACTOR
5. Validate: Ensure tests pass, regression tests
```
### Scenario 3: Large Refactoring (4-8 hours)
```
1. Analyze: Map current code structure
2. Plan: Design new structure with trade-offs
3. Clone pattern: Create autonomous agents for parallel refactoring
4. Integrate: Verify all pieces work together
5. Test: Comprehensive test coverage
```
### Scenario 4: Performance Optimization (2-4 hours)
```
1. Profile: Identify bottleneck with profiler
2. Analyze: Understand performance characteristics
3. Design: Plan optimization approach
4. Implement: TDD RED→GREEN→REFACTOR
5. Validate: Benchmark before/after
```
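Step 1 of this scenario maps directly to Python's built-in profiler. A minimal sketch (`slow_endpoint` is a placeholder for whatever code path is under investigation):
```python
import cProfile
import pstats

def slow_endpoint() -> int:
    # Placeholder for the code path being profiled.
    return sum(i * i for i in range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
slow_endpoint()
profiler.disable()

# Show the 10 most expensive calls by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```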
### Scenario 5: Multi-Team Coordination (ongoing)
```
1. SPEC clarity: AskUserQuestion for ambiguous requirements
2. Agent routing: Delegate to specialist teams
3. Progress tracking: TodoWrite for coordination
4. Integration: Verify components work together
5. Documentation: Central SPEC as source of truth
```
---
## Context Budget Optimization
```
Typical project context:
- Config files: ~50 tokens
- .moai/ structure: ~100 tokens
- Entry points (3-5 files): ~500 tokens
- SPEC document: ~200 tokens
→ Total: ~850 tokens per session
Reusable context:
- Load once per session
- Share across 5-10 tasks
- Saves: 3,500-8,500 tokens per session
- Result: More reasoning capacity
```
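A budget like the one above can be sanity-checked from file sizes. A rough sketch using the common ~4-characters-per-token heuristic (actual tokenizer counts vary by model):
```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers differ per model

def estimate_tokens(paths: list[str]) -> int:
    """Approximate the context cost of loading the given files."""
    chars = sum(len(Path(p).read_text(encoding="utf-8")) for p in paths)
    return chars // CHARS_PER_TOKEN

entry_points = ["src/main.py", "src/__init__.py"]
print(f"~{estimate_tokens(entry_points)} tokens for entry points")
```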
---
## Best Practices
### DO
- ✅ Load entry points first (3-5 files)
- ✅ Identify dependencies before deep dive
- ✅ Reuse analyzed context across tasks
- ✅ Cache intermediate results in Task context
- ✅ Follow SPEC → TDD → Sync workflow
- ✅ Track progress with TodoWrite
- ✅ Ask for clarification (AskUserQuestion)
- ✅ Test before declaring done
### DON'T
- ❌ Load entire codebase at once
- ❌ Reanalyze same code multiple times
- ❌ Skip SPEC clarification (causes rework)
- ❌ Write code without tests
- ❌ Ignore error messages
- ❌ Assume context understanding
- ❌ Skip documentation updates
- ❌ Commit without running tests
---
## Related Skills
- `moai-alfred-agent-guide` (Agent orchestration patterns)
- `moai-alfred-clone-pattern` (Complex task delegation)
- `moai-essentials-debug` (Debugging techniques)
---
**For detailed workflow examples**: [reference.md](reference.md)
**For real-world scenarios**: [examples.md](examples.md)
**Last Updated**: 2025-11-12
**Status**: Production Ready (Enterprise v4.0.0)
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### reference.md
```markdown
# CLAUDE-PRACTICES.md
> MoAI-ADK Practical Workflows & Examples
---
## For Alfred: Why This Document Matters
When Alfred reads this document:
1. When performing actual tasks - "How specifically should I execute this?"
2. When context management is needed - "How can I use Explore efficiently?"
3. When solving problems - "How do I diagnose and resolve this error/problem?"
4. When onboarding new developers - "Learn MoAI-ADK workflows through practice"
Alfred's Decision Making:
- "What are the specific steps to perform this task?"
- "How can I collect the necessary context minimally?"
- "Where should I diagnose problems when they occur?"
After reading this document:
- Master JIT (Just-in-Time) context management strategies
- Learn how to use the Explore agent efficiently
- Master specific commands for SPEC → TDD → Sync execution
- Reference solutions for frequently occurring problems
---
→ Related Documents:
- [For rules verification, see CLAUDE-RULES.md](./CLAUDE-RULES.md#skill-invocation-rules)
- [For Agent selection, see CLAUDE-AGENTS-GUIDE.md](./CLAUDE-AGENTS-GUIDE.md#agent-selection-decision-tree)
---
## Context Engineering Strategy
### 1. JIT (Just-in-Time) Retrieval
- Pull only the context required for the immediate step.
- Prefer `Explore` over manual file hunting.
- Cache critical insights in the task thread for reuse.
#### Efficient Use of Explore
- Request call graphs or dependency maps when changing core modules.
- Fetch examples from similar features before implementing new ones.
- Ask for SPEC references or TAG metadata to anchor changes.
### 2. Layered Context Summaries
1. **High-level brief**: purpose, stakeholders, success criteria.
2. **Technical core**: entry points, domain models, shared utilities.
3. **Edge cases**: known bugs, performance constraints, SLAs.
### 3. Living Documentation Sync
- Align code, tests, and docs after each significant change.
- Use `/alfred:3-sync` to update Living Docs and TAG references.
- Record rationale for deviations from the SPEC.
---
## On-demand Agent Usage
### Debugging & Analysis
- Use `debug-helper` for error triage and hypothesis testing.
- Attach logs, stack traces, and reproduction steps.
- Ask for fix-forward vs rollback recommendations.
### TAG System Management
- Assign IDs as `<DOMAIN>-<###>` (e.g., `AUTH-003`); see the validation sketch after this list.
- Update HISTORY with every change.
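A minimal validator for that ID convention (a sketch only; it encodes `<DOMAIN>-<###>` as uppercase segments plus a three-digit suffix, which may be stricter or looser than what moai-adk actually enforces):
```python
import re

# Uppercase segments separated by hyphens, ending in a three-digit number,
# e.g. AUTH-003 or USER-DASHBOARD-001.
TAG_ID = re.compile(r"^[A-Z]+(?:-[A-Z]+)*-\d{3}$")

def is_valid_tag_id(tag: str) -> bool:
    return bool(TAG_ID.fullmatch(tag))

assert is_valid_tag_id("AUTH-003")
assert is_valid_tag_id("USER-DASHBOARD-001")
assert not is_valid_tag_id("auth-3")
```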
### Backup Management
- `/alfred:0-project` and `git-manager` create automatic safety snapshots (e.g., `.moai-backups/`) before risky actions.
- Manual `/alfred:9-checkpoint` commands have been deprecated; rely on Git branches or team-approved backup workflows when additional restore points are needed.
---
## Practical Workflow Examples
### Scenario 1: Implementing a New Feature (USER-DASHBOARD-001)
**Situation**: User requests "Add user dashboard"
**Alfred's Execution Sequence**:
1. **Detect Ambiguity → Execute AskUserQuestion**
```
Question 1: What is the data source?
Options: [REST API | GraphQL | Local State]
Question 2: What is the refresh cycle?
Options: [Real-time (WebSocket) | Every 10 seconds | Manual refresh]
Question 3: Is access restriction needed?
Options: [Admin only | Logged-in users | Public]
User Answers:
- Data source: REST API
- Refresh cycle: Manual refresh
- Access restriction: Logged-in users
```
2. **Write SPEC (Based on User Answers)**
```bash
/alfred:1-plan "User Dashboard Feature - Display user stats with manual refresh, authenticated access only"
```
**Output**: `.moai/specs/SPEC-USER-DASHBOARD-001/spec.md`
- YAML metadata: id, version: 0.0.1, status: draft
- EARS syntax requirements:
- "The system must display user statistics dashboard"
- "WHEN user clicks refresh button, THEN fetch latest data from REST API"
- "IF user not authenticated, THEN redirect to login page"
3. **TDD Implementation (RED → GREEN → REFACTOR)**
```bash
/alfred:2-run USER-DASHBOARD-001
```
**Alfred Internal Execution**:
- **implementation-planner** (Phase 1):
- Establish implementation strategy: React component + fetch API + auth guard
- Library selection: @tanstack/react-query (data fetching and caching; formerly published as react-query)
- **tdd-implementer** (Phase 2):
- **RED**: Write `tests/features/dashboard.test.tsx` (failing tests)
- **GREEN**: Implement `src/features/Dashboard.tsx` (tests pass)
- **REFACTOR**: Clean code, separate hooks, improve reusability
4. **Document Synchronization**
```bash
/alfred:3-sync
```
**Alfred Internal Execution**:
- Living Document update: README.md, CHANGELOG.md
- PR status change: Draft → Ready
**Final Outputs**:
- `.moai/specs/SPEC-USER-DASHBOARD-001/spec.md` (approved SPEC)
- `src/features/Dashboard.tsx` + `tests/features/dashboard.test.tsx`
- Updated README.md / CHANGELOG.md, PR moved to Ready
**Estimated Duration**: 30-45 minutes (SPEC 10min + TDD 20min + Sync 10min)
---
### Scenario 2: Bug Fix (BUG-AUTHENTICATION-TIMEOUT)
**Situation**: User reports "Authentication automatically disconnects after 5 minutes" bug
**Alfred's Execution Sequence**:
1. **Error Analysis (debug-helper)**
```bash
@agent-debug-helper "Authentication timeout after 5 minutes - expected 30 minutes"
```
**debug-helper Analysis Results**:
- Which function causes timeout? → `src/auth/token.ts:validateToken()`
- What is current timeout value? → `300000 ms` (5 minutes)
- What should the normal value be? → `1800000 ms` (30 minutes)
- Cause: JWT token expiration time incorrectly configured
2. **Write SPEC (For Bug Fix)**
```bash
/alfred:1-plan "Fix AUTH-TIMEOUT-001: JWT token expiration should be 30 minutes, not 5 minutes"
```
**Output**: `.moai/specs/SPEC-AUTH-TIMEOUT-001/spec.md`
- Bug description: Fix JWT expiration from 5min → 30min
- Root cause: `expiresIn` value error (change `300` → `1800`)
- Test case: Verify token validity for 30 minutes
3. **TDD Implementation (RED → GREEN → REFACTOR)**
```bash
/alfred:2-run AUTH-TIMEOUT-001
```
**Alfred Internal Execution**:
- **RED**: Add `tests/auth/token.test.ts`
```typescript
it('should keep token valid for 30 minutes', () => {
  const token = generateToken();
  const now = Date.now();
  const futureTime = now + 29 * 60 * 1000; // just under the 30-minute expiry
  expect(isTokenValid(token, futureTime)).toBe(true);
});
```
- **GREEN**: Modify `src/auth/token.ts`
```typescript
const JWT_EXPIRATION = 1800; // 30 minutes (was 300)
```
- **REFACTOR**: Constantize
```typescript
const JWT_EXPIRATION_MINUTES = 30;
const JWT_EXPIRATION = JWT_EXPIRATION_MINUTES * 60;
```
4. **Verification**
- **TRUST 5 Check**:
- Test First: ✅ New test case added
- Readable: ✅ ruff lint passed
- Unified: ✅ mypy type safety passed
- Secured: ✅ trivy security scan passed
- Trackable: ✅ TAG chain verified (see below)
- **TAG Chain Verification**:
```bash
rg '@(SPEC|TEST|CODE):AUTH-TIMEOUT-001' -n
```
**Final Outputs**:
- SPEC updated
- TEST added
- CODE modified (1 line)
**Estimated Duration**: 15-20 minutes (Analysis 5min + SPEC 5min + TDD 5min + Verification 5min)
---
### Scenario 3: Document Synchronization (Automatic)
**Situation**: Keep documents up to date after code modifications
**Alfred's Execution Sequence**:
1. **Check Changed Files**
```bash
git diff develop...HEAD
```
**Results**:
- `src/features/Dashboard.tsx` (modified)
- `src/api/dashboard.ts` (new)
- `tests/features/dashboard.test.tsx` (new)
2. **Living Document Verification**
```bash
/alfred:3-sync status
```
**doc-syncer Analysis**:
- README.md update needed: Add "User Dashboard" to Features section
- CHANGELOG.md creation needed: v0.4.2 release notes
3. **TAG Integrity Check**
```bash
rg '@(SPEC|TEST|CODE|DOC):' -n .moai/specs/ tests/ src/ docs/
```
**Results**:
- 🎉 No orphan TAGs detected
4. **PR Status Change (Draft → Ready)**
```bash
@agent-git-manager "Move PR #42 from Draft to Ready"
```
**git-manager Execution**:
- PR verification: All tests passed, coverage ≥85%
- PR label update: `draft` → `ready-for-review`
- Auto-assign reviewer: GOOS오라버니
- PR description update: Reflect CHANGELOG.md content
**Final Outputs**:
- README.md auto-updated (Features section)
- CHANGELOG.md auto-generated (v0.4.2 entry)
- TAG chain verification completed
- PR #42 status: Draft → Ready for Review
**Estimated Duration**: 5-10 minutes (automated)
---
### Scenario 4: Multilingual Workflow (Language Boundary Pattern)
**Situation**: Users request features in their preferred language. System responds in that language while keeping all internal operations in English.
**Generic Multilingual Pattern**:
```
User Input (any language): "[Feature request in user's language]"
Examples:
- "Add user authentication feature" (English)
- "사용자 인증 기능 추가해줘" (Korean)
- "ユーザー認証機能を追加してください" (Japanese)
- "Agregar funcionalidad de autenticación de usuarios" (Spanish)
↓ [Alfred's Internal Translation Layer]
Alfred Internal Prompt: "Create authentication feature SPEC with JWT strategy,
email+password login, 30-minute token expiry"
↓ [Invoke spec-builder with English prompt]
Task(subagent_type="spec-builder",
prompt="Create SPEC for user authentication with JWT tokens...")
↓ [spec-builder receives English prompt]
spec-builder Execution:
1. Detects SPEC authoring task → Skill("moai-foundation-specs") ✅ [100% match]
2. Applies EARS syntax → Skill("moai-foundation-ears") ✅ [100% match]
3. Returns structured English SPEC
↓ [Alfred receives English SPEC output]
SPEC Output:
{
"id": "AUTH-001",
"version": "0.0.1",
"requirements": [
"The system must provide JWT-based authentication",
"WHEN valid credentials provided, THEN issue JWT token with 30-minute expiry",
"IF user not authenticated, THEN redirect to login page"
]
}
↓ [Alfred's Response Translation Layer]
User Response (in user's language):
- English: "I've created the authentication SPEC with JWT strategy..."
- Korean: "인증 SPEC을 작성했습니다. JWT 전략으로..."
- Japanese: "認証SPECを作成しました。JWT戦略で..."
- Spanish: "He creado la especificación de autenticación. Con estrategia JWT..."
```
**Key Principles**:
| Aspect | Implementation |
|--------|-----------------|
| **User-Facing (External)** | User's configured language (flexible) |
| **Internal Operations (Layer 2)** | English only (Task prompts, Sub-agent communication) |
| **Skills & Code (Layer 3)** | English only (Skill descriptions, code comments) |
| **Translation Points** | User Input → English (entry), English → User Language (response) |
**Why This Works**:
- ✅ **Skills remain unchanged**: English-only Skills work reliably for ANY user language
- ✅ **Zero maintenance burden**: No need to translate 55 Skills into N languages
- ✅ **Infinite scalability**: Add Korean, Russian, Mandarin, Arabic without code changes
- ✅ **Consistent quality**: English prompts guarantee 100% Skill trigger matching
- ✅ **Industry standard**: Same pattern used by Netflix, Google, AWS (localized UI + English backend)
**Estimated Duration**: Same as English (no overhead from translation layer)
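Stripped to its skeleton, the boundary pattern is just translate-at-the-edges. An architectural sketch (`translate_to_english`, `translate_from_english`, and `run_agent_task` are hypothetical stand-ins, not moai-adk APIs):
```python
# Hypothetical stand-ins; a real system would call a translation service
# and the agent runtime here.
def translate_to_english(text: str, source_lang: str) -> str: ...
def translate_from_english(text: str, target_lang: str) -> str: ...
def run_agent_task(prompt: str) -> str: ...

def handle_request(user_text: str, user_lang: str) -> str:
    """Language boundary: translate at the edges, stay English inside."""
    english_prompt = translate_to_english(user_text, source_lang=user_lang)  # entry boundary
    english_result = run_agent_task(english_prompt)                          # English-only core
    return translate_from_english(english_result, target_lang=user_lang)    # exit boundary
```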
---
**Last Updated**: 2025-10-27
**Document Version**: v1.0.0
```
### examples.md
```markdown
# Practical Examples: Workflow Execution
## Example 1: JIT Context Retrieval
### Task: "Add email verification feature"
**Phase 1: High-level Brief**
```markdown
## Email Verification Feature
- Goal: User can verify email after signup
- Success: User receives email, clicks link, marked verified
- Stakeholders: User (receiver), Admin (monitoring)
```
**Phase 2: Technical Core**
```markdown
## Architecture
- Entry point: src/api/auth.py - POST /auth/signup
- Domain model: models/user.py - User.email_verified
- Email service: infra/email_service.py - send_verification_email()
```
**Phase 3: Edge Cases**
```markdown
## Known Gotchas
- Token expires in 24h
- Duplicate email prevents signup
- Test mode uses mock email service (doesn't send)
```
---
## Example 2: Feature Implementation Workflow
```bash
# Step 1: Create SPEC
/alfred:1-plan "Email Verification"
# Step 2: TDD RED phase
/alfred:2-run SPEC-AUTH-015
# Write tests: test_verify_email_valid_token, test_token_expired, test_duplicate_email
# RED: All 3 tests fail
# Step 3: TDD GREEN phase
# Implement: User.verify_email(token)
# GREEN: All 3 tests pass
# Step 4: TDD REFACTOR phase
# Improve: Extract token validation logic
# REFACTOR: Tests still pass, code cleaner
# Step 5: Sync
/alfred:3-sync
# Update README with email verification docs
# Update CHANGELOG with SPEC-AUTH-015 reference
```
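The three RED-phase tests named above might look like this in pytest (a sketch; the fixtures `user`, `valid_token`, `expired_token`, `signup` and both exception types are assumed names, not the project's actual API):
```python
import pytest

def test_verify_email_valid_token(user, valid_token):
    # Happy path: a fresh token flips the verified flag.
    user.verify_email(valid_token)
    assert user.email_verified is True

def test_token_expired(user, expired_token):
    # Tokens expire after 24h (see the edge cases in Example 1).
    with pytest.raises(TokenExpiredError):
        user.verify_email(expired_token)

def test_duplicate_email(signup):
    # A second signup with the same address must be rejected.
    signup(email="a@example.com")
    with pytest.raises(DuplicateEmailError):
        signup(email="a@example.com")
```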
---
## Example 3: Explore Agent for Large Codebase
### ❌ WRONG: Manual file hunting
```
User: "How is authentication currently implemented?"
Alfred:
grep -r "authenticate" src/
grep -r "login" src/
grep -r "jwt" src/
# … 20 files to read, context bloated
```
### ✅ CORRECT: Use Explore Agent
```
User: "How is authentication currently implemented?"
Alfred: Task(subagent_type="Explore", prompt="Find authentication flow including entry points, models, middleware")
Explore:
- Found: src/api/auth.py (login endpoint)
- Found: models/user.py (User model, password_hash)
- Found: middleware/auth.py (JWT validation)
- Found: test/test_auth.py (test patterns)
Result: Clear architecture summary without bloated context
```
---
## Example 4: Problem Diagnosis
### Scenario: Tests failing unexpectedly
```
Error: "test_email_verification failed - connection timeout"
Debugging Steps:
1. Check stack trace → Email service timeout
2. Skill("moai-essentials-debug") → "Is test mode configured?"
3. Diagnosis → Production email service called in tests
4. Fix → Add mock for test environment
5. Verify → Tests pass again
```
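Step 4's fix is typically a test-scoped mock. A pytest sketch (it patches the `send_verification_email()` helper mentioned in Example 1; the import path and signature are assumptions about the project layout):
```python
import pytest

@pytest.fixture(autouse=True)
def mock_email_service(monkeypatch):
    """Record outgoing mail in tests instead of calling the real service."""
    sent = []

    def fake_send(*args, **kwargs):
        sent.append((args, kwargs))  # capture instead of sending

    # Assumed import path -- adjust to wherever the helper actually lives.
    monkeypatch.setattr("infra.email_service.send_verification_email", fake_send)
    return sent
```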
---
## Example 5: Multi-step Workflow with Agents
```
User: "Implement search feature with 95%+ test coverage"
Alfred:
1. AskUserQuestion → Clarify search scope (users? products? all?)
2. Skill("moai-alfred-spec-metadata-extended") → Create SPEC-SEARCH-001
3. Skill("moai-foundation-trust") → Enforce 95% coverage target
4. Skill("moai-essentials-debug") → Handle search performance
5. Skill("moai-foundation-tags") → Validate TAG chain
6. Skill("moai-foundation-git") → Proper commit messages
Result: Complete feature with TRUST 5 + full traceability
```
---
Learn more in `reference.md` for complete workflow patterns and advanced scenarios.
```