SkillHub Club · Write Technical Docs · Full Stack · Tech Writer

skill-doc-generator

Auto-generates standardized README documentation from SKILL.md files, validates consistency (frontmatter, descriptions, terminology), and creates usage examples. Use when documenting individual skills, generating docs for multiple skills in a directory, or validating skill quality standards.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars: 30
Hot score: 89
Updated: March 20, 2026
Overall rating: C (composite score: 3.2)
Best-practice grade: B (73.6)

Install command

npx @skill-hub/cli install svenja-dev-claude-code-skills-skill-doc-generator

Repository

Svenja-dev/claude-code-skills

Skill path: skills/doc-generator/skill-doc-generator


Open repository

Best for

Primary workflow: Write Technical Docs.

Technical facets: Full Stack, Tech Writer.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: Svenja-dev.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install skill-doc-generator into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/Svenja-dev/claude-code-skills before adding skill-doc-generator to shared team environments
  • Use skill-doc-generator in skill-documentation and validation workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: skill-doc-generator
description: Auto-generates standardized README documentation from SKILL.md files, validates consistency (frontmatter, descriptions, terminology), and creates usage examples. Use when documenting individual skills, generating docs for multiple skills in a directory, or validating skill quality standards.
---

# Skill Documentation Generator

Auto-generate high-quality README documentation for skills with built-in consistency validation and example generation.

## Overview

This skill automates the creation of standardized README files for skills by analyzing SKILL.md files, extracting structure and examples, validating quality standards, and generating comprehensive documentation. It ensures consistency across skill documentation while providing actionable validation feedback.

## Workflow

### Single Skill Documentation

Generate documentation for one skill:

1. **Analyze the skill**:
   ```bash
   python scripts/analyze_skill.py <skill_directory>
   ```
   Extracts metadata, sections, code blocks, and resources.

2. **Validate consistency**:
   ```bash
   python scripts/validate_consistency.py <skill_directory> --verbose
   ```
   Checks frontmatter, description quality, and terminology.

3. **Generate README**:
   ```bash
   python scripts/generate_readme.py <skill_directory> [output_path]
   ```
   Creates README.md with validation results.
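   Under the hood, the analysis step first splits the YAML frontmatter from the markdown body. A minimal sketch of that split, mirroring the regex used in `scripts/analyze_skill.py`:

   ```python
   import re

   def split_frontmatter(content: str):
       """Split a SKILL.md string into (frontmatter_text, body)."""
       match = re.match(r'^---\s*\n(.*?)\n---\s*\n(.*)$', content, re.DOTALL)
       if not match:
           return '', content  # No frontmatter: treat the whole file as body
       return match.group(1), match.group(2)

   meta, body = split_frontmatter("---\nname: demo-skill\ndescription: Demo.\n---\n# Demo\nBody text.\n")
   # meta holds the raw YAML text; body holds the markdown that follows
   ```

   The real script then feeds the frontmatter text through `yaml.safe_load` to get a metadata dictionary.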

### Batch Documentation

Document multiple skills at once:

```bash
python scripts/document_directory.py <directory> [options]
```

**Options:**
- `--output <dir>`: Specify output directory
- `--no-recursive`: Don't search subdirectories
- `--no-index`: Skip index file generation
- `--no-validate`: Skip validation checks

**Example:**
```bash
# Document all user skills with validation
python scripts/document_directory.py /mnt/skills/user --output ./docs

# Quick pass without validation
python scripts/document_directory.py ./my-skills --no-validate
```

## Script Reference

### analyze_skill.py
Parses SKILL.md and extracts structured information.

**Usage**: `python scripts/analyze_skill.py <skill_directory>`

**Returns**:
- Metadata (name, description)
- Sections and structure
- Code blocks with language tags
- Referenced resources (scripts, references, assets)
- Statistics (line count, section count)

### validate_consistency.py
Validates skill quality against standards defined in references/consistency-rules.md.

**Usage**: `python scripts/validate_consistency.py <skill_directory> [--verbose]`

**Checks**:
- Frontmatter completeness and format
- Description quality (length, clarity, triggers)
- Structure appropriateness
- Terminology consistency
- Resource references
- Code example quality

**Severity Levels**:
- **ERROR**: Breaks functionality (missing required fields)
- **WARNING**: Quality issues (naming, unreferenced resources)
- **INFO**: Suggestions (style, optional improvements)
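The exit-code policy follows directly from these levels: only ERROR-level issues fail the run. An illustrative sketch of that triage (the full logic lives in `scripts/validate_consistency.py`):

```python
def triage(issues):
    """Group (severity, message) pairs and derive the exit code (0 = pass)."""
    grouped = {'ERROR': [], 'WARNING': [], 'INFO': []}
    for severity, message in issues:
        grouped[severity].append(message)
    # Only ERROR-level issues fail the run; warnings and info are advisory
    return grouped, (1 if grouped['ERROR'] else 0)

grouped, exit_code = triage([('WARNING', 'name should be lowercase'),
                             ('INFO', 'consider adding an Overview section')])
# exit_code is 0: warnings alone do not fail validation
```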

### generate_readme.py
Creates README.md from skill analysis.

**Usage**: `python scripts/generate_readme.py <skill_directory> [output_path]`

**Generates**:
- Title and description
- Overview from SKILL.md
- Trigger scenarios
- Structure statistics
- Bundled resource lists with links
- Key sections overview
- Usage examples (up to 3)
- Validation results (optional)

**Template**: See references/readme-template.md for structure.

### document_directory.py
Batch processes multiple skills in a directory.

**Usage**: `python scripts/document_directory.py <directory> [options]`

**Features**:
- Recursive skill discovery
- Parallel validation and documentation
- Index generation with categorization
- Summary statistics
- Error handling per skill
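Recursive skill discovery amounts to finding every directory that contains a SKILL.md. A minimal sketch using `pathlib` (the bundled script layers validation, index generation, and per-skill error handling on top):

```python
from pathlib import Path

def discover_skills(root: str, recursive: bool = True):
    """Return directories under root that contain a SKILL.md file."""
    pattern = '**/SKILL.md' if recursive else '*/SKILL.md'
    return sorted(p.parent for p in Path(root).glob(pattern))
```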

## Quality Standards

Validation enforces these standards:

### Frontmatter
- **name**: Lowercase with hyphens (e.g., `skill-name`)
- **description**: 50-500 chars; starts with a capital letter and includes "when" or "use" trigger phrases
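The name rule can be expressed as a single regex. A sketch of one way to write it (the bundled validator uses a looser lowercase-and-no-spaces check):

```python
import re

# Lowercase alphanumeric words joined by single hyphens, e.g. 'skill-doc-generator'
NAME_PATTERN = re.compile(r'^[a-z0-9]+(-[a-z0-9]+)*$')

def is_valid_name(name: str) -> bool:
    """True if name follows the lowercase-with-hyphens convention."""
    return bool(NAME_PATTERN.match(name))
```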

### Structure
- Body: 100+ chars minimum, <500 lines recommended
- Sections: Overview/workflow recommended
- Resources: All files referenced in SKILL.md

### Terminology
- Use imperative form: "Use" not "You should use"
- Capitalize "Claude" consistently
- Avoid vague terms: "various", "multiple"
- Active voice preferred

See references/consistency-rules.md and references/terminology-standards.md for complete standards.
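The terminology checks are plain case-insensitive substring scans. A minimal sketch of the vague-term check, mirroring the list used in `scripts/validate_consistency.py`:

```python
VAGUE_TERMS = ('various', 'multiple', 'different', 'some', 'general')

def find_vague_terms(text: str):
    """Return the vague terms that appear (case-insensitively) in text."""
    lowered = text.lower()
    return [term for term in VAGUE_TERMS if term in lowered]
```

Being a substring scan, it can flag words embedded in longer words; the validator reports these at INFO level only, so false positives cost nothing.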

## Reference Files

### readme-template.md
Standard README structure and best practices. Defines:
- Required sections
- Optional sections
- Formatting guidelines
- Link conventions

### consistency-rules.md
Detailed validation criteria. Covers:
- Frontmatter requirements
- Description quality metrics
- Structure guidelines
- Resource validation
- Error severity definitions

### terminology-standards.md
Standard vocabulary and style guide. Includes:
- Writing style (imperative form)
- Common terms and their usage
- Phrases to avoid
- Formatting conventions
- Consistency checklist

## Examples

### Example 1: Document a Single Skill
```bash
# Analyze
python scripts/analyze_skill.py ./my-skill

# Validate
python scripts/validate_consistency.py ./my-skill --verbose

# Generate README
python scripts/generate_readme.py ./my-skill
```

### Example 2: Batch Process with Index
```bash
# Document all skills in a directory
python scripts/document_directory.py /mnt/skills/user \
  --output ./documentation
```

### Example 3: Quick Validation Pass
```bash
# Just validate without generating docs
python scripts/validate_consistency.py ./my-skill
```

## Common Use Cases

**New skill creation**: Generate documentation as part of skill development
**Quality audits**: Validate existing skills against standards
**Documentation updates**: Regenerate READMEs after SKILL.md changes
**Batch operations**: Document entire skill libraries
**CI/CD integration**: Automated validation in deployment pipelines
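For CI/CD, a thin wrapper that discovers skills, validates each, and fails the build on any error is usually enough. A hedged sketch that stands in a required-field check for the full validator (a real pipeline would invoke `scripts/validate_consistency.py` per skill instead):

```python
import re
from pathlib import Path

REQUIRED_FIELDS = ('name', 'description')

def check_skill_file(skill_md: Path):
    """Return a list of error strings for one SKILL.md (empty = pass)."""
    content = skill_md.read_text(encoding='utf-8')
    match = re.match(r'^---\s*\n(.*?)\n---\s*\n', content, re.DOTALL)
    if not match:
        return [f"{skill_md}: missing frontmatter"]
    frontmatter = match.group(1)
    return [
        f"{skill_md}: missing required field '{field}'"
        for field in REQUIRED_FIELDS
        if not re.search(rf'^{field}\s*:', frontmatter, re.MULTILINE)
    ]

def ci_gate(root: str) -> int:
    """Validate every skill under root; return 1 if any skill has errors."""
    errors = []
    for skill_md in Path(root).glob('**/SKILL.md'):
        errors.extend(check_skill_file(skill_md))
    for err in errors:
        print(err)
    return 1 if errors else 0
```

The returned value can be used directly as the process exit code, so any ERROR-level finding fails the pipeline stage.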

## Tips

- Run validation before generating documentation to catch issues early
- Use `--verbose` flag to see INFO-level suggestions
- Reference files provide the "why" behind validation rules
- Generated READMEs include validation results for transparency
- Index files help navigate large skill collections


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### scripts/analyze_skill.py

```python
#!/usr/bin/env python3
"""
Analyzes a SKILL.md file and extracts metadata, structure, and resources.
"""

import yaml
import re
import sys
from pathlib import Path
from typing import Dict, List


def parse_frontmatter(content: str) -> tuple[Dict, str]:
    """Extract YAML frontmatter and remaining content."""
    frontmatter_pattern = r'^---\s*\n(.*?)\n---\s*\n(.*)$'
    match = re.match(frontmatter_pattern, content, re.DOTALL)
    
    if not match:
        return {}, content
    
    yaml_content = match.group(1)
    body = match.group(2)
    
    try:
        metadata = yaml.safe_load(yaml_content)
        return metadata or {}, body
    except yaml.YAMLError as e:
        print(f"Warning: Failed to parse YAML frontmatter: {e}", file=sys.stderr)
        return {}, content


def extract_sections(body: str) -> Dict[str, str]:
    """Extract major sections from markdown body."""
    sections = {}
    current_section = "introduction"
    current_content = []
    
    for line in body.split('\n'):
        if line.startswith('# '):
            if current_content:
                sections[current_section] = '\n'.join(current_content).strip()
            current_section = line[2:].strip().lower().replace(' ', '_')
            current_content = []
        elif line.startswith('## '):
            if current_content:
                sections[current_section] = '\n'.join(current_content).strip()
            current_section = line[3:].strip().lower().replace(' ', '_')
            current_content = []
        else:
            current_content.append(line)
    
    if current_content:
        sections[current_section] = '\n'.join(current_content).strip()
    
    return sections


def find_code_blocks(content: str) -> List[Dict[str, str]]:
    """Extract code blocks with their language tags."""
    code_blocks = []
    pattern = r'```(\w+)?\n(.*?)```'
    
    for match in re.finditer(pattern, content, re.DOTALL):
        language = match.group(1) or 'text'
        code = match.group(2).strip()
        code_blocks.append({
            'language': language,
            'code': code
        })
    
    return code_blocks


def find_references(body: str, skill_dir: Path) -> Dict[str, List[str]]:
    """Find references to bundled resources (scripts, references, assets)."""
    resources = {
        'scripts': [],
        'references': [],
        'assets': []
    }
    
    # Pattern for markdown links and file references
    link_pattern = r'\[(.*?)\]\((.*?)\)'
    
    for match in re.finditer(link_pattern, body):
        link_path = match.group(2)
        
        # Check if it's a relative path
        if not link_path.startswith('http'):
            for resource_type in resources.keys():
                if link_path.startswith(resource_type):
                    resources[resource_type].append(link_path)
    
    # Also check actual filesystem
    for resource_type in resources.keys():
        resource_dir = skill_dir / resource_type
        if resource_dir.exists():
            for file_path in resource_dir.rglob('*'):
                if file_path.is_file():
                    rel_path = file_path.relative_to(skill_dir)
                    path_str = str(rel_path)
                    if path_str not in resources[resource_type]:
                        resources[resource_type].append(path_str)
    
    return resources


def analyze_skill(skill_path: str) -> Dict:
    """
    Analyze a skill and return structured information.
    
    Args:
        skill_path: Path to skill directory or SKILL.md file
        
    Returns:
        Dictionary containing skill analysis
    """
    skill_path = Path(skill_path)
    
    # Handle both directory and file paths
    if skill_path.is_dir():
        skill_file = skill_path / 'SKILL.md'
        skill_dir = skill_path
    else:
        skill_file = skill_path
        skill_dir = skill_path.parent
    
    if not skill_file.exists():
        raise FileNotFoundError(f"SKILL.md not found at {skill_file}")
    
    content = skill_file.read_text(encoding='utf-8')
    metadata, body = parse_frontmatter(content)
    
    analysis = {
        'path': str(skill_dir),
        'metadata': metadata,
        'name': metadata.get('name', skill_dir.name),
        'description': metadata.get('description', ''),
        'sections': extract_sections(body),
        'code_blocks': find_code_blocks(body),
        'resources': find_references(body, skill_dir),
        'body_length': len(body),
        'line_count': len(body.split('\n'))
    }
    
    return analysis


if __name__ == '__main__':
    if len(sys.argv) != 2:
        print("Usage: python analyze_skill.py <skill_directory_or_SKILL.md>")
        sys.exit(1)
    
    skill_path = sys.argv[1]
    
    try:
        analysis = analyze_skill(skill_path)
        
        print(f"Skill Analysis: {analysis['name']}")
        print("=" * 60)
        print(f"Description: {analysis['description'][:100]}...")
        print(f"\nMetadata fields: {', '.join(analysis['metadata'].keys())}")
        print(f"Body length: {analysis['body_length']} chars, {analysis['line_count']} lines")
        print(f"\nSections found: {len(analysis['sections'])}")
        for section in analysis['sections'].keys():
            print(f"  - {section}")
        
        print(f"\nCode blocks: {len(analysis['code_blocks'])}")
        for i, block in enumerate(analysis['code_blocks'][:3], 1):
            print(f"  {i}. {block['language']} ({len(block['code'])} chars)")
        
        print("\nResources:")
        for resource_type, files in analysis['resources'].items():
            if files:
                print(f"  {resource_type}: {len(files)} file(s)")
                for f in files[:3]:
                    print(f"    - {f}")
        
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)

```

### scripts/validate_consistency.py

```python
#!/usr/bin/env python3
"""
Validates skill consistency: frontmatter format, description quality, and terminology.
"""

import sys
from pathlib import Path
from typing import List, Dict
from analyze_skill import analyze_skill


class ValidationIssue:
    """Represents a validation issue."""
    
    SEVERITY_ERROR = 'ERROR'
    SEVERITY_WARNING = 'WARNING'
    SEVERITY_INFO = 'INFO'
    
    def __init__(self, severity: str, category: str, message: str):
        self.severity = severity
        self.category = category
        self.message = message
    
    def __str__(self):
        return f"[{self.severity}] {self.category}: {self.message}"


class SkillValidator:
    """Validates skill structure and content."""
    
    def __init__(self):
        self.issues: List[ValidationIssue] = []
    
    def validate(self, skill_path: str) -> List[ValidationIssue]:
        """Run all validation checks on a skill."""
        self.issues = []
        
        try:
            analysis = analyze_skill(skill_path)
        except Exception as e:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_ERROR,
                'Parse Error',
                f"Failed to analyze skill: {e}"
            ))
            return self.issues
        
        self._validate_frontmatter(analysis)
        self._validate_description(analysis)
        self._validate_structure(analysis)
        self._validate_terminology(analysis)
        self._validate_resources(analysis)
        self._validate_examples(analysis)
        
        return self.issues
    
    def _validate_frontmatter(self, analysis: Dict):
        """Check required frontmatter fields and format."""
        metadata = analysis['metadata']
        
        # Required fields
        if 'name' not in metadata:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_ERROR,
                'Frontmatter',
                "Missing required field: 'name'"
            ))
        elif not metadata['name']:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_ERROR,
                'Frontmatter',
                "'name' field is empty"
            ))
        
        if 'description' not in metadata:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_ERROR,
                'Frontmatter',
                "Missing required field: 'description'"
            ))
        elif not metadata['description']:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_ERROR,
                'Frontmatter',
                "'description' field is empty"
            ))
        
        # Name format (should be lowercase with hyphens)
        if 'name' in metadata and metadata['name']:
            name = metadata['name']
            if not name.islower() or ' ' in name:
                self.issues.append(ValidationIssue(
                    ValidationIssue.SEVERITY_WARNING,
                    'Frontmatter',
                    f"Name '{name}' should be lowercase with hyphens (e.g., 'skill-name')"
                ))
    
    def _validate_description(self, analysis: Dict):
        """Check description quality and completeness."""
        description = analysis.get('description', '')
        
        if not description:
            return  # Already caught in frontmatter check
        
        # Length checks
        if len(description) < 50:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_WARNING,
                'Description',
                f"Description is very short ({len(description)} chars). Should be comprehensive and specific."
            ))
        
        if len(description) > 500:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_INFO,
                'Description',
                f"Description is quite long ({len(description)} chars). Consider if all content is essential for skill selection."
            ))
        
        # Content quality checks
        if not description[0].isupper():
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_WARNING,
                'Description',
                "Description should start with a capital letter"
            ))
        
        # Check for specificity
        vague_terms = ['various', 'multiple', 'different', 'some', 'general']
        for term in vague_terms:
            if term.lower() in description.lower():
                self.issues.append(ValidationIssue(
                    ValidationIssue.SEVERITY_INFO,
                    'Description',
                    f"Description contains vague term '{term}' - consider being more specific"
                ))
                break
        
        # Check for trigger phrases
        if 'when' not in description.lower() and 'use' not in description.lower():
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_INFO,
                'Description',
                "Consider adding trigger phrases ('when', 'use this when') to help with skill selection"
            ))
    
    def _validate_structure(self, analysis: Dict):
        """Check overall document structure."""
        sections = analysis.get('sections', {})
        body_length = analysis.get('body_length', 0)
        line_count = analysis.get('line_count', 0)
        
        # Check for empty body
        if body_length < 100:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_WARNING,
                'Structure',
                f"SKILL.md body is very short ({body_length} chars)"
            ))
        
        # Check for excessive length (suggests need for references/)
        if line_count > 500:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_WARNING,
                'Structure',
                f"SKILL.md is quite long ({line_count} lines). Consider moving detailed content to references/"
            ))
        
        # Check for common expected sections
        section_names = [s.lower() for s in sections.keys()]
        
        has_overview = any('overview' in s or 'about' in s for s in section_names)
        has_workflow = any('workflow' in s or 'usage' in s or 'how' in s for s in section_names)
        
        if not has_overview:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_INFO,
                'Structure',
                "Consider adding an 'Overview' section to introduce the skill"
            ))
        
        if not has_workflow:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_INFO,
                'Structure',
                "Consider adding a 'Workflow' or 'Usage' section to explain how to use the skill"
            ))
    
    def _validate_terminology(self, analysis: Dict):
        """Check for consistent terminology and style."""
        # Get all text content
        body = '\n'.join(analysis.get('sections', {}).values())
        
        # 'Claude' should always be capitalized
        words = (word.strip('.,:;()[]`"\'') for word in body.split())
        if any(word == 'claude' for word in words):
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_INFO,
                'Terminology',
                "Found lowercase 'claude' - capitalize 'Claude' consistently"
            ))
        
        # Check for imperative/infinitive form (per guidelines)
        non_imperative_starts = [
            'you should', 'you can', 'you must', 'you will',
            'we should', 'we can', 'we must', 'we will'
        ]
        
        for phrase in non_imperative_starts:
            if phrase.lower() in body.lower():
                self.issues.append(ValidationIssue(
                    ValidationIssue.SEVERITY_INFO,
                    'Terminology',
                    f"Found '{phrase}' - consider using imperative form (e.g., 'Use' instead of 'You should use')"
                ))
                break
    
    def _validate_resources(self, analysis: Dict):
        """Check bundled resources are properly referenced."""
        resources = analysis.get('resources', {})
        sections = analysis.get('sections', {})
        body = '\n'.join(sections.values())
        
        # Check if scripts exist but aren't mentioned
        scripts = resources.get('scripts', [])
        if scripts:
            for script in scripts:
                script_name = Path(script).name
                if script_name not in body:
                    self.issues.append(ValidationIssue(
                        ValidationIssue.SEVERITY_WARNING,
                        'Resources',
                        f"Script '{script}' exists but isn't referenced in SKILL.md"
                    ))
        
        # Check if references exist but aren't mentioned
        references = resources.get('references', [])
        if references:
            for ref in references:
                ref_name = Path(ref).name
                if ref_name not in body:
                    self.issues.append(ValidationIssue(
                        ValidationIssue.SEVERITY_WARNING,
                        'Resources',
                        f"Reference file '{ref}' exists but isn't mentioned in SKILL.md"
                    ))
    
    def _validate_examples(self, analysis: Dict):
        """Check for presence and quality of code examples."""
        code_blocks = analysis.get('code_blocks', [])
        
        if not code_blocks:
            self.issues.append(ValidationIssue(
                ValidationIssue.SEVERITY_INFO,
                'Examples',
                "No code examples found. Consider adding examples if they would help clarify usage."
            ))
        
        # Check for language tags on code blocks
        for i, block in enumerate(code_blocks, 1):
            if block['language'] == 'text':
                self.issues.append(ValidationIssue(
                    ValidationIssue.SEVERITY_INFO,
                    'Examples',
                    f"Code block {i} has no language tag. Consider adding one for syntax highlighting."
                ))


def validate_skill(skill_path: str, verbose: bool = False) -> tuple[List[ValidationIssue], bool]:
    """
    Validate a skill and return issues.
    
    Returns:
        Tuple of (issues, has_errors)
    """
    validator = SkillValidator()
    issues = validator.validate(skill_path)
    
    has_errors = any(issue.severity == ValidationIssue.SEVERITY_ERROR for issue in issues)
    
    return issues, has_errors


if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("Usage: python validate_consistency.py <skill_directory> [--verbose]")
        sys.exit(1)
    
    skill_path = sys.argv[1]
    verbose = '--verbose' in sys.argv
    
    issues, has_errors = validate_skill(skill_path, verbose)
    
    if not issues:
        print(f"✅ Skill validation passed with no issues!")
        sys.exit(0)
    
    # Group by severity
    errors = [i for i in issues if i.severity == ValidationIssue.SEVERITY_ERROR]
    warnings = [i for i in issues if i.severity == ValidationIssue.SEVERITY_WARNING]
    info = [i for i in issues if i.severity == ValidationIssue.SEVERITY_INFO]
    
    print(f"Validation Results for: {skill_path}")
    print("=" * 60)
    
    if errors:
        print(f"\n❌ ERRORS ({len(errors)}):")
        for issue in errors:
            print(f"  {issue}")
    
    if warnings:
        print(f"\n⚠️  WARNINGS ({len(warnings)}):")
        for issue in warnings:
            print(f"  {issue}")
    
    if info and verbose:
        print(f"\nℹ️  INFO ({len(info)}):")
        for issue in info:
            print(f"  {issue}")
    
    print(f"\nSummary: {len(errors)} errors, {len(warnings)} warnings, {len(info)} info")
    
    sys.exit(1 if has_errors else 0)

```

### scripts/generate_readme.py

```python
#!/usr/bin/env python3
"""
Generates a README.md file for a skill based on its analysis.
"""

import sys
from pathlib import Path
from typing import Dict, List
from analyze_skill import analyze_skill
from validate_consistency import validate_skill


def format_resource_list(resources: List[str], base_path: str = '') -> str:
    """Format a list of resources as markdown list items."""
    if not resources:
        return "_None_"
    
    lines = []
    for resource in sorted(resources):
        if base_path:
            # Resource paths already carry their directory prefix
            # (e.g. 'scripts/foo.py'), so avoid doubling base_path in the link
            if resource.startswith(f"{base_path}/"):
                target = resource
            else:
                target = f"{base_path}/{resource}"
            link = f"[`{resource}`]({target})"
        else:
            link = f"`{resource}`"
        lines.append(f"- {link}")
    
    return '\n'.join(lines)


def extract_key_examples(analysis: Dict) -> List[Dict]:
    """Extract up to three substantial code examples from the skill."""
    code_blocks = analysis.get('code_blocks', [])
    
    # Keep the first 3 non-trivial examples
    examples = []
    for block in code_blocks:
        if len(block['code']) > 20:  # Skip trivial examples
            examples.append(block)
        if len(examples) == 3:
            break
    
    return examples


def generate_usage_section(analysis: Dict) -> str:
    """Generate usage/trigger examples section."""
    name = analysis['name']
    description = analysis['description']
    
    # Try to extract trigger phrases from description
    triggers = []
    if 'when' in description.lower():
        # Extract phrases after "when"
        import re
        when_matches = re.finditer(r'when\s+([^.,;]+)', description, re.IGNORECASE)
        for match in when_matches:
            triggers.append(match.group(1).strip())
    
    usage = f"This skill is triggered when working with tasks related to {name}.\n\n"
    
    if triggers:
        usage += "**Common trigger scenarios:**\n"
        for trigger in triggers[:3]:
            usage += f"- {trigger}\n"
    else:
        usage += f"The skill activates based on: {description[:200]}...\n"
    
    return usage


def generate_readme(analysis: Dict, include_validation: bool = True) -> str:
    """
    Generate README.md content from skill analysis.
    
    Args:
        analysis: Skill analysis dictionary
        include_validation: Whether to include validation results
        
    Returns:
        README.md content as string
    """
    name = analysis['name']
    description = analysis['description']
    sections = analysis.get('sections', {})
    resources = analysis.get('resources', {})
    
    readme = []
    
    # Title and description
    readme.append(f"# {name}")
    readme.append("")
    readme.append(f"> {description}")
    readme.append("")
    
    # Overview (from first section or description)
    overview_section = sections.get('overview', sections.get('introduction', ''))
    if overview_section:
        readme.append("## Overview")
        readme.append("")
        # Take first few paragraphs
        paragraphs = overview_section.split('\n\n')[:2]
        readme.append('\n\n'.join(paragraphs))
        readme.append("")
    
    # When to use this skill
    readme.append("## When to Use This Skill")
    readme.append("")
    readme.append(generate_usage_section(analysis))
    readme.append("")
    
    # Structure
    readme.append("## Skill Structure")
    readme.append("")
    readme.append(f"- **Lines of documentation:** {analysis['line_count']}")
    readme.append(f"- **Sections:** {len(sections)}")
    readme.append(f"- **Code examples:** {len(analysis['code_blocks'])}")
    readme.append("")
    
    # Resources
    if any(resources.values()):
        readme.append("## Bundled Resources")
        readme.append("")
        
        if resources.get('scripts'):
            readme.append("### Scripts")
            readme.append("")
            readme.append(format_resource_list(resources['scripts'], 'scripts'))
            readme.append("")
        
        if resources.get('references'):
            readme.append("### Reference Documentation")
            readme.append("")
            readme.append(format_resource_list(resources['references'], 'references'))
            readme.append("")
        
        if resources.get('assets'):
            readme.append("### Assets")
            readme.append("")
            readme.append(format_resource_list(resources['assets'], 'assets'))
            readme.append("")
    
    # Key sections
    if len(sections) > 1:
        readme.append("## Key Sections")
        readme.append("")
        # List main sections (skip introduction/overview)
        main_sections = [s for s in sections.keys() 
                        if s not in ['introduction', 'overview', name.lower().replace('-', '_')]]
        
        for section in main_sections[:5]:  # Limit to top 5
            section_title = section.replace('_', ' ').title()
            readme.append(f"- **{section_title}**")
        readme.append("")
    
    # Usage examples
    examples = extract_key_examples(analysis)
    if examples:
        readme.append("## Usage Examples")
        readme.append("")
        for i, example in enumerate(examples, 1):
            readme.append(f"### Example {i}")
            readme.append("")
            readme.append(f"```{example['language']}")
            readme.append(example['code'][:300])  # Truncate long examples
            if len(example['code']) > 300:
                readme.append("...")
            readme.append("```")
            readme.append("")
    
    # Validation results (optional)
    if include_validation:
        try:
            issues, has_errors = validate_skill(analysis['path'])
            
            readme.append("## Quality Validation")
            readme.append("")
            
            if not issues:
                readme.append("✅ **All validation checks passed**")
            else:
                errors = [i for i in issues if i.severity == 'ERROR']
                warnings = [i for i in issues if i.severity == 'WARNING']
                
                if errors:
                    readme.append(f"❌ **{len(errors)} error(s) found**")
                if warnings:
                    readme.append(f"⚠️  **{len(warnings)} warning(s) found**")
                
                readme.append("")
                readme.append("<details>")
                readme.append("<summary>View validation details</summary>")
                readme.append("")
                for issue in issues[:10]:  # Limit output
                    readme.append(f"- `{issue.severity}` {issue.category}: {issue.message}")
                readme.append("")
                readme.append("</details>")
            
            readme.append("")
        except Exception:
            pass  # Skip the validation section if it fails
    
    # Footer
    readme.append("---")
    readme.append("")
    readme.append("_Documentation auto-generated from `SKILL.md`_")
    
    return '\n'.join(readme)


def save_readme(skill_path: str, readme_content: str, output_path: str = None) -> str:
    """
    Save README to file.
    
    Args:
        skill_path: Path to skill directory
        readme_content: README content
        output_path: Optional custom output path
        
    Returns:
        Path where README was saved
    """
    skill_path = Path(skill_path)
    
    if skill_path.is_file():
        skill_path = skill_path.parent
    
    if output_path:
        readme_path = Path(output_path)
    else:
        readme_path = skill_path / 'README.md'
    
    readme_path.write_text(readme_content, encoding='utf-8')
    return str(readme_path)


if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("Usage: python generate_readme.py <skill_directory> [output_path]")
        print("\nExamples:")
        print("  python generate_readme.py ./my-skill")
        print("  python generate_readme.py ./my-skill ./docs/MY_SKILL.md")
        sys.exit(1)
    
    skill_path = sys.argv[1]
    output_path = sys.argv[2] if len(sys.argv) > 2 else None
    
    try:
        print(f"Analyzing skill at: {skill_path}")
        analysis = analyze_skill(skill_path)
        
        print(f"Generating README for: {analysis['name']}")
        readme = generate_readme(analysis, include_validation=True)
        
        saved_path = save_readme(skill_path, readme, output_path)
        
        print(f"✅ README generated successfully: {saved_path}")
        print(f"   {len(readme.splitlines())} lines, {len(readme)} characters")
        
    except Exception as e:
        print(f"❌ Error: {e}", file=sys.stderr)
        sys.exit(1)

```
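The generator assembles the README as a list of lines joined once at the end, and truncates long usage examples to 300 characters with a trailing ellipsis. A minimal standalone sketch of that pattern (the `render_example` helper is illustrative and is not defined in the script itself):

```python
def render_example(code: str, language: str = "python", limit: int = 300) -> str:
    """Assemble a fenced code block the way generate_readme does:
    collect lines in a list, truncate long code at `limit` characters,
    mark the cut with an ellipsis, and join once at the end."""
    lines = [f"```{language}", code[:limit]]
    if len(code) > limit:
        lines.append("...")
    lines.append("```")
    return "\n".join(lines)
```

Joining once keeps the function O(n) in total output size, whereas repeated string concatenation inside the loop would be quadratic.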

### scripts/document_directory.py

```python
#!/usr/bin/env python3
"""
Generates documentation for all skills in a directory.
"""

import sys
import traceback
from pathlib import Path
from typing import List, Dict
from analyze_skill import analyze_skill
from generate_readme import generate_readme, save_readme
from validate_consistency import validate_skill


def find_skills(directory: str, recursive: bool = True) -> List[Path]:
    """
    Find all SKILL.md files in directory.
    
    Args:
        directory: Root directory to search
        recursive: Whether to search subdirectories
        
    Returns:
        List of paths to SKILL.md files
    """
    directory = Path(directory)
    
    if not directory.exists():
        raise FileNotFoundError(f"Directory not found: {directory}")
    
    skill_files = []
    
    if recursive:
        skill_files = list(directory.rglob('SKILL.md'))
    else:
        skill_files = list(directory.glob('*/SKILL.md'))
    
    return sorted(skill_files)


def create_index_file(skills: List[Dict], output_path: Path):
    """Generate an index/catalog of all skills."""
    lines = []
    
    lines.append("# Skills Documentation Index")
    lines.append("")
    lines.append(f"Documentation for {len(skills)} skills.")
    lines.append("")
    
    # Group by category if possible (based on path structure)
    categorized = {}
    
    for skill in skills:
        path = Path(skill['path'])
        # Try to extract category from path
        parts = path.parts
        if len(parts) > 1 and parts[-2] not in ['skill-doc-generator', '.']:
            category = parts[-2]
        else:
            category = 'Other'
        
        if category not in categorized:
            categorized[category] = []
        categorized[category].append(skill)
    
    # Output by category
    for category in sorted(categorized.keys()):
        lines.append(f"## {category.title()}")
        lines.append("")
        
        for skill in sorted(categorized[category], key=lambda x: x['name']):
            name = skill['name']
            desc = skill['description']
            if len(desc) > 100:
                desc = desc[:100] + "..."
            
            # Link to README if it exists
            skill_path = Path(skill['path'])
            readme_path = skill_path / 'README.md'
            
            if readme_path.exists():
                try:
                    rel_path = readme_path.relative_to(output_path.parent)
                    lines.append(f"### [{name}]({rel_path})")
                except (ValueError, AttributeError):
                    # relative_to fails when the README lives outside the
                    # index's tree; fall back to an unlinked heading
                    lines.append(f"### {name}")
            else:
                lines.append(f"### {name}")
            
            lines.append("")
            lines.append(f"{desc}")
            lines.append("")
            
            # Add quick stats
            lines.append(f"- **Lines:** {skill['line_count']}")
            lines.append(f"- **Resources:** {sum(len(v) for v in skill['resources'].values())} files")
            lines.append("")
    
    content = '\n'.join(lines)
    output_path.write_text(content, encoding='utf-8')
    return str(output_path)


def document_directory(
    directory: str,
    output_dir: str = None,
    recursive: bool = True,
    generate_index_file: bool = True,
    validate: bool = True
) -> Dict:
    """
    Document all skills in a directory.
    
    Args:
        directory: Directory containing skills
        output_dir: Optional output directory for documentation
        recursive: Whether to search subdirectories
        generate_index_file: Whether to create an index file
        validate: Whether to run validation
        
    Returns:
        Statistics dictionary
    """
    directory = Path(directory)
    
    if output_dir:
        output_dir = Path(output_dir)
        output_dir.mkdir(parents=True, exist_ok=True)
    
    print(f"Searching for skills in: {directory}")
    skill_files = find_skills(directory, recursive)
    
    if not skill_files:
        print("⚠️  No SKILL.md files found")
        return {'total': 0, 'successful': 0, 'failed': 0}
    
    print(f"Found {len(skill_files)} skill(s)")
    print("")
    
    stats = {
        'total': len(skill_files),
        'successful': 0,
        'failed': 0,
        'errors': 0,
        'warnings': 0
    }
    
    analyzed_skills = []
    
    for skill_file in skill_files:
        skill_dir = skill_file.parent
        skill_name = skill_dir.name
        
        try:
            print(f"Processing: {skill_name}...")
            
            # Analyze
            analysis = analyze_skill(skill_dir)
            analyzed_skills.append(analysis)
            
            # Validate if requested
            if validate:
                issues, _ = validate_skill(skill_dir)
                errors = [i for i in issues if i.severity == 'ERROR']
                warnings = [i for i in issues if i.severity == 'WARNING']
                
                stats['errors'] += len(errors)
                stats['warnings'] += len(warnings)
                
                if errors:
                    print(f"  ❌ {len(errors)} error(s)")
                elif warnings:
                    print(f"  ⚠️  {len(warnings)} warning(s)")
                else:
                    print(f"  ✅ Validated")
            
            # Generate README
            readme_content = generate_readme(analysis, include_validation=validate)
            
            # Save README
            if output_dir:
                # Save to output directory
                skill_output_dir = output_dir / skill_name
                skill_output_dir.mkdir(exist_ok=True)
                readme_path = skill_output_dir / 'README.md'
            else:
                # Save alongside SKILL.md
                readme_path = skill_dir / 'README.md'
            
            readme_path.write_text(readme_content, encoding='utf-8')
            print(f"  📄 README: {readme_path}")
            
            stats['successful'] += 1
            
        except Exception as e:
            print(f"  ❌ Failed: {e}")
            stats['failed'] += 1
        
        print("")
    
    # Generate index if requested
    if generate_index_file and analyzed_skills:
        index_path = output_dir / 'INDEX.md' if output_dir else directory / 'INDEX.md'
        create_index_file(analyzed_skills, index_path)
        print(f"📚 Index generated: {index_path}")
        print("")
    
    # Summary
    print("=" * 60)
    print("Summary:")
    print(f"  Total skills: {stats['total']}")
    print(f"  Successful: {stats['successful']}")
    print(f"  Failed: {stats['failed']}")
    if validate:
        print(f"  Total errors: {stats['errors']}")
        print(f"  Total warnings: {stats['warnings']}")
    
    return stats


if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("Usage: python document_directory.py <directory> [options]")
        print("\nOptions:")
        print("  --output <dir>     Output directory for documentation")
        print("  --no-recursive     Don't search subdirectories")
        print("  --no-index         Don't generate index file")
        print("  --no-validate      Skip validation checks")
        print("\nExamples:")
        print("  python document_directory.py /mnt/skills/user")
        print("  python document_directory.py ./skills --output ./docs")
        sys.exit(1)
    
    directory = sys.argv[1]
    
    # Parse options
    args = sys.argv[2:]
    output_dir = None
    recursive = '--no-recursive' not in args
    generate_index = '--no-index' not in args
    validate = '--no-validate' not in args
    
    if '--output' in args:
        idx = args.index('--output')
        if idx + 1 < len(args):
            output_dir = args[idx + 1]
    
    try:
        stats = document_directory(
            directory,
            output_dir=output_dir,
            recursive=recursive,
            generate_index_file=generate_index,
            validate=validate
        )
        
        sys.exit(0 if stats['failed'] == 0 else 1)
        
    except Exception as e:
        print(f"❌ Error: {e}", file=sys.stderr)
        traceback.print_exc()
        sys.exit(1)

```
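The `__main__` block parses flags by simple list membership rather than `argparse`: boolean flags are tested with `in`, and `--output` takes the following token as its value. A standalone sketch of that pattern (the `parse_options` helper is hypothetical, not part of the script):

```python
def parse_options(args: list) -> dict:
    """Mirror the script's membership-based flag parsing: `--no-*` flags
    disable a default-on behavior, and `--output <dir>` captures the
    token that follows it (ignored if no token follows)."""
    opts = {
        "recursive": "--no-recursive" not in args,
        "index": "--no-index" not in args,
        "validate": "--no-validate" not in args,
        "output": None,
    }
    if "--output" in args:
        idx = args.index("--output")
        if idx + 1 < len(args):
            opts["output"] = args[idx + 1]
    return opts
```

This is fine for a handful of flags, but it silently ignores unknown arguments and a trailing `--output` with no value; `argparse` would report both as errors.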
