
docs-pipeline-automation

Build repeatable data-to-Docs pipelines from Sheets and Drive sources. Use for automated status reports, template-based document assembly, and scheduled publishing workflows.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 3,074.

Hot score: 99.

Updated: March 19, 2026.

Overall rating: C (4.0).

Composite score: 4.0.

Best-practice grade: A (92.4).

Install command

npx @skill-hub/cli install openclaw-skills-docs-pipeline-automation

Repository

openclaw/skills

Skill path: skills/0x-professor/docs-pipeline-automation



Best for

Primary workflow: Write Technical Docs.

Technical facets: Full Stack, Data / AI, Tech Writer.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install docs-pipeline-automation into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding docs-pipeline-automation to shared team environments
  • Use docs-pipeline-automation to automate status reports, template-based document assembly, and scheduled publishing in development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode.

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: docs-pipeline-automation
description: Build repeatable data-to-Docs pipelines from Sheets and Drive sources. Use for automated status reports, template-based document assembly, and scheduled publishing workflows.
---

# Docs Pipeline Automation

## Overview

Create deterministic pipelines that transform Workspace data sources into generated Docs outputs.

## Workflow

1. Define pipeline name, sources, template, and destination.
2. Normalize source extraction and section mapping steps.
3. Build report assembly sequence and publish target.
4. Export implementation-ready pipeline artifact.
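
The four steps above can be sketched as a minimal, ordered spec. The field and step names here mirror the bundled `scripts/compose_docs_pipeline.py`; the helper itself and the example values are illustrative, not part of the skill.

```python
# Sketch: assemble an ordered pipeline spec from the four workflow steps.
# Field names follow scripts/compose_docs_pipeline.py; values are hypothetical.
from __future__ import annotations


def compose_spec(pipeline_name: str, sources: list[str],
                 template_doc: str, destination_doc: str) -> dict:
    """Assemble a deterministic pipeline specification."""
    step_names = ["extract_sources", "normalize_data",
                  "render_template", "publish_document"]
    return {
        "pipeline_name": pipeline_name,
        "sources": sources,
        "template_doc": template_doc,
        "destination_doc": destination_doc,
        "steps": [{"order": i, "name": name}
                  for i, name in enumerate(step_names, start=1)],
    }


spec = compose_spec("weekly-status", ["sheets://status-rows"],
                    "docs://template", "docs://report")
```

Because the step order is fixed, two runs with the same inputs produce the same artifact, which is what makes the pipeline repeatable.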

## Use Bundled Resources

- Run `scripts/compose_docs_pipeline.py` for deterministic pipeline output.
- Read `references/docs-pipeline-guide.md` for document assembly guidance.

## Guardrails

- Keep source mapping explicit and versioned.
- Include fallback behavior for missing sections.
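
The second guardrail can be sketched as a helper that substitutes an explicit placeholder when a mapped section has no data, rather than silently dropping it. The section map and placeholder text are hypothetical.

```python
# Sketch: fall back to a visible placeholder for missing sections
# instead of omitting them from the assembled document.
from __future__ import annotations

MISSING_PLACEHOLDER = "_No data available for this section._"


def render_sections(section_map: dict[str, str | None]) -> list[str]:
    """Render each mapped section, substituting a placeholder for missing ones."""
    lines: list[str] = []
    for section_id, body in section_map.items():
        lines.append(f"## {section_id}")
        lines.append(body if body else MISSING_PLACEHOLDER)
    return lines


rendered = render_sections({"summary": "All green.", "risks": None})
```

Keeping the placeholder visible in the published document makes a missing source obvious to readers instead of hiding it.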


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### scripts/compose_docs_pipeline.py

```python
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import csv
import json
from pathlib import Path

MAX_INPUT_BYTES = 1_048_576


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Compose a Docs pipeline specification.")
    parser.add_argument("--input", required=False, help="Path to JSON input.")
    parser.add_argument("--output", required=True, help="Path to output artifact.")
    parser.add_argument("--format", choices=["json", "md", "csv"], default="json")
    parser.add_argument("--dry-run", action="store_true", help="Run without side effects.")
    return parser.parse_args()


def load_payload(path: str | None, max_input_bytes: int = MAX_INPUT_BYTES) -> dict:
    if not path:
        return {}
    payload_path = Path(path)
    if not payload_path.exists():
        raise FileNotFoundError(f"Input file not found: {payload_path}")
    if payload_path.stat().st_size > max_input_bytes:
        raise ValueError(f"Input file exceeds {max_input_bytes} bytes: {payload_path}")
    return json.loads(payload_path.read_text(encoding="utf-8"))


def render(result: dict, output_path: Path, fmt: str) -> None:
    output_path.parent.mkdir(parents=True, exist_ok=True)

    if fmt == "json":
        output_path.write_text(json.dumps(result, indent=2), encoding="utf-8")
        return

    if fmt == "md":
        details = result["details"]
        lines = [
            f"# {result['summary']}",
            "",
            f"- status: {result['status']}",
            f"- pipeline_name: {details['pipeline_name']}",
            "",
            "## Sources",
        ]
        for src in details["sources"]:
            lines.append(f"- {src}")
        lines.extend(["", "## Steps"])
        for step in details["steps"]:
            lines.append(f"- {step['order']}. {step['name']}")
        output_path.write_text("\n".join(lines) + "\n", encoding="utf-8")
        return

    with output_path.open("w", newline="", encoding="utf-8") as handle:
        writer = csv.DictWriter(handle, fieldnames=["order", "name"])
        writer.writeheader()
        writer.writerows(result["details"]["steps"])


def main() -> int:
    args = parse_args()
    payload = load_payload(args.input)

    pipeline_name = str(payload.get("pipeline_name", "docs-pipeline"))
    sources = payload.get("sources", [])
    if not isinstance(sources, list):
        sources = []

    template_doc = str(payload.get("template_doc", "docs://template"))
    destination_doc = str(payload.get("destination_doc", "docs://destination"))

    steps = [
        {"order": 1, "name": "extract_sources"},
        {"order": 2, "name": "normalize_data"},
        {"order": 3, "name": "render_template"},
        {"order": 4, "name": "publish_document"},
    ]

    result = {
        "status": "ok" if sources else "warning",
        "summary": (
            f"Composed docs pipeline '{pipeline_name}'"
            if sources
            else f"No sources supplied for pipeline '{pipeline_name}'"
        ),
        "artifacts": [str(Path(args.output))],
        "details": {
            "pipeline_name": pipeline_name,
            "sources": [str(source) for source in sources],
            "template_doc": template_doc,
            "destination_doc": destination_doc,
            "steps": steps,
            "dry_run": args.dry_run,
        },
    }

    if args.dry_run:
        # Honor --dry-run: emit the composed spec without writing any files.
        print(json.dumps(result, indent=2))
        return 0

    render(result, Path(args.output), args.format)
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

```

### references/docs-pipeline-guide.md

```markdown
# Docs Pipeline Guide

## Required Inputs

- `pipeline_name`
- `sources`
- `template_doc`
- `destination_doc`

## Pipeline Stages

1. Source extraction
2. Data normalization
3. Template rendering
4. Document publishing

## Reliability Considerations

- Handle missing source blocks gracefully.
- Keep section IDs stable.
- Log source timestamps for traceability.

```
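
As a sketch, the guide's four required inputs can be checked before handing a payload to the bundled script. The key names match the script's payload fields; the validation helper and example payload are illustrative.

```python
# Sketch: report which of the guide's required inputs are missing
# from a payload before composing the pipeline.
from __future__ import annotations

REQUIRED_INPUTS = ("pipeline_name", "sources", "template_doc", "destination_doc")


def missing_inputs(payload: dict) -> list[str]:
    """Return the required keys absent from the payload, in guide order."""
    return [key for key in REQUIRED_INPUTS if key not in payload]


payload = {"pipeline_name": "weekly-status",
           "sources": ["sheets://status-rows"]}
gaps = missing_inputs(payload)
```

Running this check up front turns a half-specified pipeline into an actionable error list instead of a silently defaulted run.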



---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### _meta.json

```json
{
  "owner": "0x-professor",
  "slug": "docs-pipeline-automation",
  "displayName": "Docs Pipeline Automation",
  "latest": {
    "version": "0.1.0",
    "publishedAt": 1772136462712,
    "commit": "https://github.com/openclaw/skills/commit/786684a60a552061c215b2a4a4652b89ce91c342"
  },
  "history": []
}

```