
code-qc

Run a structured quality control audit on any codebase. Use when asked to QC, audit, review, or check code quality for a project. Supports Python, TypeScript, GDScript, and general projects. Produces a standardized report with PASS/WARN/FAIL verdict, covering tests, imports, type checking, static analysis, smoke tests, and documentation. Also use when asked to compare QC results over time.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars
3,070
Hot score
99
Updated
March 19, 2026
Overall rating
C (4.0)
Composite score
4.0
Best-practice grade
B (75.6)

Install command

npx @skill-hub/cli install openclaw-skills-code-qc

Repository

openclaw/skills

Skill path: skills/isonaei/code-qc


Open repository

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack, Testing.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install code-qc into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding code-qc to shared team environments
  • Run code-qc during development for a repeatable PASS/WARN/FAIL read on code quality

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: code-qc
description: Run a structured quality control audit on any codebase. Use when asked to QC, audit, review, or check code quality for a project. Supports Python, TypeScript, GDScript, and general projects. Produces a standardized report with PASS/WARN/FAIL verdict, covering tests, imports, type checking, static analysis, smoke tests, and documentation. Also use when asked to compare QC results over time.
---

# Code QC

Structured quality control audit for codebases. Delegates static analysis to proper tools (ruff, eslint, gdlint) and focuses on what AI adds: semantic understanding, cross-module consistency, and dynamic smoke test generation.

## Quick Start

1. Detect project type (read the profile for that language)
2. Load `.qc-config.yaml` if present (for custom thresholds/exclusions)
3. Run the 8-phase audit (or subset with `--quick`)
4. Generate report with verdict
5. Save baseline for future comparison
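
Step 1's project-type detection can be sketched with simple marker-file heuristics. A minimal sketch; the marker lists below are illustrative assumptions, not a fixed spec:

```python
from pathlib import Path

# Marker files that suggest each supported profile (illustrative, not exhaustive).
# Checked in order, so a repo with both pyproject.toml and package.json is
# treated as Python here.
MARKERS = {
    "python": ["pyproject.toml", "setup.py", "requirements.txt"],
    "typescript": ["tsconfig.json", "package.json"],
    "gdscript": ["project.godot"],
}

def detect_project_type(root: str) -> str:
    """Return the first matching language profile, or 'general'."""
    root_path = Path(root)
    for lang, markers in MARKERS.items():
        if any((root_path / m).is_file() for m in markers):
            return lang
    return "general"
```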

## Configuration (`.qc-config.yaml`)

Optional project-level config for monorepos and custom settings:

```yaml
# .qc-config.yaml
thresholds:
  test_failure_rate: 0.05    # >5% = FAIL, 0-5% = WARN, 0% = PASS
  lint_errors_max: 0         # Max lint errors before FAIL
  lint_warnings_max: 50      # Max warnings before WARN
  type_errors_max: 0         # Max type errors before FAIL (strict by default)

exclude:
  dirs: [vendor, third_party, generated]
  files: ["*_generated.py", "*.pb.go"]

changed_only: false          # Only check git-changed files (CI mode)
fail_fast: false             # Stop on first failure
quick_mode: false            # Only run Phase 1, 3, 3.5, 6

languages:
  python:
    min_coverage: 80
    ignore_rules: [T201]     # Allow print in this project
  typescript:
    strict_mode: true        # Require tsconfig strict: true
    ignore_rules: []         # eslint rules to ignore
  gdscript:
    godot_version: "4.2"
```
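
A minimal sketch of how such a config might be overlaid on the defaults above. Parsing the YAML itself (e.g. with PyYAML) is omitted; the `merge_config` helper is hypothetical, not part of the skill's scripts:

```python
# Defaults mirroring the thresholds documented above.
DEFAULTS = {
    "thresholds": {"test_failure_rate": 0.05, "lint_errors_max": 0,
                   "lint_warnings_max": 50, "type_errors_max": 0},
    "exclude": {"dirs": [], "files": []},
    "changed_only": False,
    "fail_fast": False,
    "quick_mode": False,
}

def merge_config(overrides: dict, defaults: dict = DEFAULTS) -> dict:
    """Recursively overlay user config on defaults; user values win."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(value, merged[key])
        else:
            merged[key] = value
    return merged
```

Because the merge is recursive, a `.qc-config.yaml` that sets only `thresholds.lint_warnings_max` still inherits the other default thresholds.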

## Execution Modes

| Mode | Phases Run | Use Case |
|------|------------|----------|
| Full (default) | All 8 phases | Thorough audit |
| `--quick` | 1, 3, 3.5, 6 | Fast sanity check |
| `--changed-only` | All, filtered | CI on pull requests |
| `--fail-fast` | All, stops early | Find first issue fast |
| `--fix` | 3 with autofix | Apply automatic fixes |

## Phase Overview

| # | Phase | What | Tools |
|---|-------|------|-------|
| 1 | Test Suite | Run existing tests + coverage | pytest --cov / jest --coverage |
| 2 | Import Integrity | Verify all modules load | `scripts/import_check.py` |
| 3 | Static Analysis | Lint with proper tools | ruff / eslint / gdlint |
| 3.5 | Type Checking | Static type verification | mypy / tsc --noEmit / (N/A for GDScript) |
| 4 | Smoke Tests | Verify business logic works | AI-generated per project |
| 5 | UI/Frontend | Verify UI components load | Framework-specific |
| 6 | File Consistency | Syntax + git state | `scripts/syntax_check.py` + git |
| 7 | Documentation | Docstrings + docs accuracy | `scripts/docstring_check.py` |

## Phase Details

### Phase 1: Test Suite

Run the project's test suite with coverage. Auto-detect the test runner:

```
pytest.ini / pyproject.toml [tool.pytest] → pytest --cov
package.json scripts.test → npm test (or npx vitest --coverage)
Cargo.toml → cargo test
project.godot → (GUT if present, else manual)
```

**Record:** total, passed, failed, errors, skipped, duration, coverage %.

**Verdict contribution:**
- No tests found → **SKIP** (not FAIL; project may be config-only)
- Failure rate = 0% → **PASS**
- Failure rate ≤ threshold (default 5%) → **WARN**
- Failure rate > threshold → **FAIL**

**Coverage reporting (Python):**
```bash
pytest --cov=<package> --cov-report=term-missing --cov-report=json
```

### Phase 2: Import Integrity (Python/GDScript)

**Python:** Run `scripts/import_check.py` against the project root.

**GDScript:** Verify scene/preload references are valid (see gdscript-profile.md).

#### Critical vs Optional Import Classification

Use these heuristics to classify import failures:

| Pattern | Classification | Rationale |
|---------|---------------|-----------|
| `__init__.py`, `main.py`, `app.py`, `cli.py` | **Critical** | Core entry points |
| Module in `src/`, `lib/`, or top-level package | **Critical** | Core functionality |
| `*_test.py`, `test_*.py`, `conftest.py` | **Optional** | Test infrastructure |
| Modules in `examples/`, `scripts/`, `tools/` | **Optional** | Auxiliary code |
| Import error mentions `cuml`, `triton`, `tensorrt` | **Optional** | Hardware-specific |
| Import error mentions missing system lib | **Optional** | Environment-specific |
| Dependency in `[project.optional-dependencies]` | **Optional** | Declared optional |

### Phase 3: Static Analysis

**Do NOT use grep.** Use the language's standard linter.

#### Standard Mode
```bash
# Python
ruff check --select E722,T201,B006,F401,F841,UP,I --statistics <project>

# TypeScript  
npx eslint . --format json

# GDScript
gdlint <project>
```

#### Fix Mode (`--fix`)
When `--fix` is specified, apply automatic corrections:

```bash
# Python — safe auto-fixes
ruff check --fix --select E,F,I,UP <project>
ruff format <project>

# TypeScript
npx eslint . --fix

# GDScript
gdformat <project>
```

**Important:** After `--fix`, re-run the check to report remaining issues that couldn't be auto-fixed.

### Phase 3.5: Type Checking (NEW)

Run static type analysis before proceeding to runtime checks.

**Python:**
```bash
mypy <package> --ignore-missing-imports --no-error-summary
# or if pyproject.toml has [tool.pyright]:
pyright <package>
```

**TypeScript:**
```bash
npx tsc --noEmit
```

**GDScript:** Godot 4 has built-in static typing but no standalone checker. Estimate type coverage manually:

```bash
# Find untyped declarations
grep -rn "var \w\+ =" --include="*.gd" .       # Untyped variables
grep -rn "func \w\+(" --include="*.gd" . | grep -v ":"  # Untyped functions
```

Use the `estimate_type_coverage()` function from `gdscript-profile.md` to calculate coverage per file:
```python
# From gdscript-profile.md
def estimate_type_coverage(gd_file: str) -> float:
    """Count typed vs untyped declarations."""
    # See full implementation in gdscript-profile.md
```

Also check for `@warning_ignore` annotations which may hide type issues.
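
For illustration, a simplified version of that estimator, operating on source text rather than a file path (a deliberate departure from the `gdscript-profile.md` signature) and counting `:=` inferred declarations as typed:

```python
import re

# Line-level heuristics for GDScript declarations (illustrative, not complete).
TYPED_VAR = re.compile(r"^\s*var\s+\w+\s*:")            # var health: int  /  var x := 1
UNTYPED_VAR = re.compile(r"^\s*var\s+\w+\s*=")          # var health = 10
TYPED_FUNC = re.compile(r"^\s*func\s+\w+\(.*\)\s*->")   # func heal(n: int) -> void:
UNTYPED_FUNC = re.compile(r"^\s*func\s+\w+\(.*\)\s*:")  # func heal(n):

def estimate_type_coverage(source: str) -> float:
    """Fraction of var/func declarations carrying type annotations."""
    typed = untyped = 0
    for line in source.splitlines():
        if TYPED_VAR.match(line) or TYPED_FUNC.match(line):
            typed += 1
        elif UNTYPED_VAR.match(line) or UNTYPED_FUNC.match(line):
            untyped += 1
    total = typed + untyped
    return typed / total if total else 1.0
```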

**Record:** Total errors, categorized by severity.

### Phase 4: Smoke Tests (Business Logic)

Test **backend/core functionality** — NOT UI components (that's Phase 5).

**API Discovery Heuristics:**

1. **Entry points:** Look for `main()`, `cli()`, `app`, `create_app()`, `__main__.py`
2. **Service layer:** Find classes/modules named `*Service`, `*Manager`, `*Handler`  
3. **Public API:** Check `__all__` exports in `__init__.py`
4. **FastAPI/Flask:** Find route decorators (`@app.get`, `@router.post`)
5. **CLI:** Find typer/click `@app.command()` decorators
6. **SDK:** Look for client classes, public methods without `_` prefix

**For each discovered API, generate a minimal test:**
```python
def smoke_test_user_service():
    """Test UserService basic CRUD."""
    from myproject.services.user import UserService
    svc = UserService(db=":memory:")
    user = svc.create(name="test")
    assert user.id is not None
    fetched = svc.get(user.id)
    assert fetched.name == "test"
    return "PASS"
```

**Guidelines:**
- Import + instantiate + call one method with minimal valid input
- Use in-memory/temp resources (`:memory:`, `tempdir`)
- Each test < 5 seconds
- Catch exceptions, report clearly

### Phase 5: UI/Frontend Verification

Test **UI components** separately from business logic.

| Framework | Test Method |
|-----------|-------------|
| **Gradio** | `from project.ui import create_ui` (no `launch()`) |
| **Streamlit** | `streamlit run app.py --headless` exits cleanly |
| **PyQt/PySide** | Set `QT_QPA_PLATFORM=offscreen`, import widget modules |
| **React** | `npm run build` succeeds |
| **Vue** | `npm run build` succeeds |
| **Godot** | Scene files parse without error, required scripts exist |
| **CLI** | `--help` on all subcommands returns 0 |

**Boundary:** Phase 4 tests "does the logic work?" Phase 5 tests "does the UI render?"
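
The CLI row, for instance, can be checked with a small subprocess wrapper. The `cli_help_ok` helper is an illustrative sketch, not part of the skill's scripts:

```python
import subprocess

def cli_help_ok(command: list[str], timeout: float = 10.0) -> bool:
    """Return True if `<command> --help` exits 0 within the timeout."""
    try:
        result = subprocess.run(
            command + ["--help"],
            capture_output=True,  # keep the UI check quiet
            timeout=timeout,
        )
    except (OSError, subprocess.TimeoutExpired):
        # Missing binary or a hang both count as failures.
        return False
    return result.returncode == 0
```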

### Phase 6: File Consistency

Run `scripts/syntax_check.py` — compiles all source files to verify no syntax errors.

> **Note:** Phase 2 (Import Integrity) tests *runtime* import behavior including initialization code. Phase 6 tests *static* syntax correctness. Both are needed: a file can have valid syntax but fail to import (e.g., missing dependency), or vice versa (syntax error in a module that's never imported).

Check git state:
```bash
git status --short      # Should be clean (or report uncommitted changes)
git diff --check        # No conflict markers
```

### Phase 7: Documentation

Run `scripts/docstring_check.py` (now checks `__init__.py` by default).

Also verify:
- README exists and is non-empty
- Key docs (CHANGELOG, CONTRIBUTING) exist if referenced
- No stale TODO markers in docs claiming completion
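
The checks above could be sketched as follows; the `check_docs` helper and the exact set of referenced docs are assumptions for illustration:

```python
import re
from pathlib import Path

def check_docs(root: str) -> list[str]:
    """Return a list of documentation warnings for the project root."""
    warnings = []
    root_path = Path(root)
    readmes = [p for p in root_path.iterdir()
               if p.is_file() and p.name.lower().startswith("readme")]
    if not readmes:
        warnings.append("no README found")
    elif all(p.stat().st_size == 0 for p in readmes):
        warnings.append("README is empty")
    else:
        text = readmes[0].read_text(errors="ignore")
        # Flag docs referenced from the README that don't exist on disk.
        for ref in ("CHANGELOG.md", "CONTRIBUTING.md"):
            if ref in text and not (root_path / ref).is_file():
                warnings.append(f"README references missing {ref}")
        if re.search(r"\bTODO\b", text):
            warnings.append("README contains TODO markers")
    return warnings
```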

## Verdict Logic

```
# Calculate test failure rate
failure_rate = test_failures / total_tests

# Default thresholds (override in .qc-config.yaml)
FAIL_THRESHOLD = 0.05  # 5% test failure rate
TYPE_ERRORS_MAX = 0    # Default: strict (any type error = FAIL)

# Verdict determination
if any([
    failure_rate > FAIL_THRESHOLD,
    critical_import_failure,
    type_check_errors > thresholds.type_errors_max,  # Configurable threshold
    lint_errors > thresholds.lint_errors_max,
]):
    verdict = "FAIL"
elif any([
    0 < failure_rate <= FAIL_THRESHOLD,
    optional_import_failures > 0,
    lint_warnings > thresholds.lint_warnings_max,
    missing_docstrings > 0,
    smoke_test_failures > 0,
]):
    verdict = "PASS WITH WARNINGS"
else:
    verdict = "PASS"
```

## Baseline Comparison

Save results to `.qc-baseline.json`:

```json
{
  "timestamp": "2026-02-15T15:00:00Z",
  "commit": "abc123",
  "verdict": "PASS WITH WARNINGS",
  "config": {
    "mode": "full",
    "thresholds": {"test_failure_rate": 0.05}
  },
  "phases": {
    "tests": {"total": 134, "passed": 134, "failed": 0, "coverage": 87.5},
    "imports": {"total": 50, "failed": 0, "optional_failed": 1, "critical_failed": 0},
    "types": {"errors": 0, "warnings": 5},
    "lint": {"errors": 0, "warnings": 12, "fixed": 8},
    "smoke": {"total": 14, "passed": 14},
    "docs": {"missing_docstrings": 3}
  }
}
```

On subsequent runs, report delta:
```
Tests:      134 → 140 (+6 ✅)
Coverage:   87% → 91% (+4% ✅)
Type errors: 0 → 0 (✅)
Lint warnings: 12 → 5 (-7 ✅)
```
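
One way to compute such a delta from two baseline payloads. A sketch: `compare_baselines` is not part of the skill's scripts, and it covers only a few key metrics:

```python
def format_delta(label: str, old: float, new: float, better: str = "up") -> str:
    """One comparison line: old -> new, signed delta, and an OK/regression marker."""
    delta = new - old
    improved = delta >= 0 if better == "up" else delta <= 0
    mark = "OK" if improved else "REGRESSION"
    sign = "+" if delta >= 0 else ""
    return f"{label}: {old:g} -> {new:g} ({sign}{delta:g} {mark})"

def compare_baselines(old: dict, new: dict) -> list[str]:
    """Compare two .qc-baseline.json payloads on a few key metrics."""
    return [
        format_delta("Tests passed", old["phases"]["tests"]["passed"],
                     new["phases"]["tests"]["passed"]),
        format_delta("Coverage", old["phases"]["tests"]["coverage"],
                     new["phases"]["tests"]["coverage"]),
        # For error counts, lower is better.
        format_delta("Type errors", old["phases"]["types"]["errors"],
                     new["phases"]["types"]["errors"], better="down"),
        format_delta("Lint warnings", old["phases"]["lint"]["warnings"],
                     new["phases"]["lint"]["warnings"], better="down"),
    ]
```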

## Report Output

Generate in 3 formats:
1. **Markdown** (`qc-report.md`) — full detailed report for humans
2. **JSON** (`.qc-baseline.json`) — machine-readable for CI/comparison
3. **Summary** (chat message) — 10-line digest for Discord/Slack

### Summary Format Example

```
📊 QC Report: my-project @ abc123
Verdict: ⚠️ PASS WITH WARNINGS

Tests:    134/134 passed (100%) | Coverage: 87%
Types:    0 errors
Lint:     0 errors, 12 warnings
Imports:  50/50 (1 optional failed)
Smoke:    14/14 passed

⚠️ Warnings:
- 3 missing docstrings
- 12 lint warnings (run with --fix)
```

## Language-Specific Profiles

Read the appropriate profile before running:
- **Python**: `references/python-profile.md`
- **TypeScript**: `references/typescript-profile.md`
- **GDScript**: `references/gdscript-profile.md`
- **General** (any language): `references/general-profile.md`


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### scripts/import_check.py

```python
#!/usr/bin/env python3
"""Check that all modules in a package can be imported.

Usage:
    python import_check.py <package_name> [--exclude dir1,dir2] [--json]

Example:
    python import_check.py castle --exclude aot,sam,dinov2,dinov3
"""
from __future__ import annotations

import argparse
import importlib
import json
import logging
import pkgutil
import re
import sys
import time
from dataclasses import dataclass, field
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from collections.abc import Iterator

# Configure logging to stderr (not stdout, which is for output)
logging.basicConfig(
    level=logging.INFO,
    format="%(message)s",
    stream=sys.stderr,
)
logger = logging.getLogger(__name__)

# Hardware/environment-specific libraries that are typically optional
OPTIONAL_DEPENDENCY_PATTERNS: list[str] = [
    r"cuml",
    r"triton",
    r"tensorrt",
    r"cuda",
    r"nccl",
    r"apex",
    r"flash_attn",
    r"xformers",
    r"bitsandbytes",
    r"deepspeed",
    r"horovod",
    r"mpi4py",
]

# Paths that indicate optional/auxiliary code
OPTIONAL_PATH_PATTERNS: list[str] = [
    r"examples?/",
    r"scripts?/",
    r"tools?/",
    r"benchmarks?/",
    r"demos?/",
    r"notebooks?/",
    r"_test\.py$",
    r"test_.*\.py$",
    r"conftest\.py$",
    r"tests?/",
]

# Paths that indicate critical/core code
CRITICAL_PATH_PATTERNS: list[str] = [
    r"__init__\.py$",
    r"__main__\.py$",
    r"main\.py$",
    r"app\.py$",
    r"cli\.py$",
    r"core/",
    r"src/",
    r"lib/",
]


@dataclass
class ImportFailure:
    """Record of a failed import."""

    module: str
    error: str
    error_type: str
    is_critical: bool = True

    def to_dict(self) -> dict:
        return {
            "module": self.module,
            "error": self.error,
            "type": self.error_type,
            "critical": self.is_critical,
        }


@dataclass
class ImportResults:
    """Results of an import check run."""

    total: int = 0
    passed: int = 0
    failed: list[ImportFailure] = field(default_factory=list)
    skipped: list[str] = field(default_factory=list)
    duration_s: float = 0.0

    @property
    def critical_failures(self) -> list[ImportFailure]:
        return [f for f in self.failed if f.is_critical]

    @property
    def optional_failures(self) -> list[ImportFailure]:
        return [f for f in self.failed if not f.is_critical]

    def to_dict(self) -> dict:
        return {
            "total": self.total,
            "passed": self.passed,
            "failed": [f.to_dict() for f in self.failed],
            "critical_failed": len(self.critical_failures),
            "optional_failed": len(self.optional_failures),
            "skipped": self.skipped,
            "duration_s": self.duration_s,
        }


def is_optional_failure(module_name: str, error_msg: str) -> bool:
    """Determine if an import failure is for an optional dependency.

    Uses heuristics based on:
    1. Module path patterns (tests, examples, scripts)
    2. Error message content (missing hardware libs)
    """
    # Check if module path suggests optional code
    for pattern in OPTIONAL_PATH_PATTERNS:
        if re.search(pattern, module_name):
            return True

    # Check if error mentions optional dependencies
    error_lower = error_msg.lower()
    for pattern in OPTIONAL_DEPENDENCY_PATTERNS:
        if re.search(pattern, error_lower):
            return True

    # Check for common environment-specific errors
    env_errors = [
        "no module named",
        "cannot import name",
        "shared object file",
        "library not loaded",
        "dll load failed",
    ]
    if any(err in error_lower for err in env_errors):
        # But only if it's not a core module
        for pattern in CRITICAL_PATH_PATTERNS:
            if re.search(pattern, module_name):
                return False
        return True

    return False


def walk_package_modules(
    package_name: str, exclude: list[str]
) -> Iterator[tuple[str, bool]]:
    """Yield (module_name, is_pkg) for all modules in a package."""
    try:
        pkg = importlib.import_module(package_name)
    except ImportError as e:
        logger.error(f"Cannot import base package {package_name}: {e}")
        return

    pkg_path = getattr(pkg, "__path__", None)
    if pkg_path is None:
        yield package_name, False
        return

    for _importer, modname, ispkg in pkgutil.walk_packages(
        pkg_path, prefix=f"{package_name}."
    ):
        # Check exclusions
        skip = False
        for ex in exclude:
            if f"{package_name}.{ex}." in modname or modname == f"{package_name}.{ex}":
                skip = True
                break
        if not skip:
            yield modname, ispkg


def check_imports(package_name: str, exclude: list[str] | None = None) -> ImportResults:
    """Check all modules in a package can be imported.

    Args:
        package_name: The package to check
        exclude: List of submodule names to skip

    Returns:
        ImportResults with pass/fail counts and details
    """
    exclude = exclude or []
    results = ImportResults()

    # Try to import the base package first
    try:
        importlib.import_module(package_name)
    except ImportError as e:
        results.failed.append(
            ImportFailure(
                module=package_name,
                error=str(e),
                error_type=type(e).__name__,
                is_critical=True,
            )
        )
        return results

    # Pass no exclusions to the walker so excluded modules reach the loop
    # below and are recorded in results.skipped rather than silently dropped.
    for modname, _ispkg in walk_package_modules(package_name, []):
        # Check if excluded
        if any(
            f"{package_name}.{ex}." in modname or modname == f"{package_name}.{ex}"
            for ex in exclude
        ):
            results.skipped.append(modname)
            continue

        results.total += 1
        try:
            importlib.import_module(modname)
            results.passed += 1
        except KeyboardInterrupt:
            # Always re-raise keyboard interrupt
            raise
        except SystemExit as e:
            # Treat SystemExit as a failure but don't exit
            error_msg = f"Module called sys.exit({e.code})"
            results.failed.append(
                ImportFailure(
                    module=modname,
                    error=error_msg,
                    error_type="SystemExit",
                    is_critical=not is_optional_failure(modname, error_msg),
                )
            )
        except (ImportError, ModuleNotFoundError) as e:
            # Specific import errors
            error_msg = str(e)
            results.failed.append(
                ImportFailure(
                    module=modname,
                    error=error_msg,
                    error_type=type(e).__name__,
                    is_critical=not is_optional_failure(modname, error_msg),
                )
            )
        except (AttributeError, TypeError, ValueError, RuntimeError) as e:
            # Common errors during import (e.g., missing config, bad initialization)
            error_msg = str(e)
            results.failed.append(
                ImportFailure(
                    module=modname,
                    error=error_msg,
                    error_type=type(e).__name__,
                    is_critical=not is_optional_failure(modname, error_msg),
                )
            )
        except OSError as e:
            # File/resource access errors
            error_msg = str(e)
            results.failed.append(
                ImportFailure(
                    module=modname,
                    error=error_msg,
                    error_type=type(e).__name__,
                    is_critical=False,  # Usually environment-specific
                )
            )
        except Exception as e:
            # Catch-all for any other exceptions (custom exceptions, etc.)
            # This prevents the entire check from crashing on unexpected errors
            error_msg = str(e)
            results.failed.append(
                ImportFailure(
                    module=modname,
                    error=error_msg,
                    error_type=type(e).__name__,
                    is_critical=not is_optional_failure(modname, error_msg),
                )
            )

    return results


def main() -> None:
    """CLI entry point."""
    parser = argparse.ArgumentParser(description="Import check for Python packages")
    parser.add_argument("package", help="Package name to check")
    parser.add_argument(
        "--exclude", default="", help="Comma-separated dirs to exclude"
    )
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    args = parser.parse_args()

    exclude = [x.strip() for x in args.exclude.split(",") if x.strip()]

    t0 = time.time()
    results = check_imports(args.package, exclude)
    results.duration_s = round(time.time() - t0, 2)

    if args.json:
        # JSON goes to stdout
        print(json.dumps(results.to_dict(), indent=2))
    else:
        # Human-readable output to stderr
        logger.info(f"Import Check: {args.package}")
        logger.info(f"  Total:   {results.total}")
        logger.info(f"  Passed:  {results.passed}")
        logger.info(f"  Failed:  {len(results.failed)}")
        logger.info(f"    Critical: {len(results.critical_failures)}")
        logger.info(f"    Optional: {len(results.optional_failures)}")
        logger.info(f"  Skipped: {len(results.skipped)}")

        if results.critical_failures:
            logger.info("\n❌ Critical Failures:")
            for f in results.critical_failures:
                logger.info(f"  {f.module}: {f.error}")

        if results.optional_failures:
            logger.info("\n⚠️  Optional Failures:")
            for f in results.optional_failures:
                logger.info(f"  {f.module}: {f.error}")

        logger.info(f"\nDuration: {results.duration_s}s")

    # Exit code: 1 if any critical failures
    sys.exit(1 if results.critical_failures else 0)


if __name__ == "__main__":
    main()

```

### scripts/syntax_check.py

```python
#!/usr/bin/env python3
"""Check all Python files for syntax errors using py_compile.

Usage:
    python syntax_check.py <directory> [--exclude dir1,dir2] [--json]

Example:
    python syntax_check.py src/ --exclude vendor,generated
"""
from __future__ import annotations

import argparse
import json
import logging
import os
import py_compile
import sys
from dataclasses import dataclass, field

# Configure logging to stderr
logging.basicConfig(
    level=logging.INFO,
    format="%(message)s",
    stream=sys.stderr,
)
logger = logging.getLogger(__name__)


@dataclass
class SyntaxIssue:
    """Record of a syntax error."""

    file: str
    error: str
    line: int | None = None
    column: int | None = None

    def to_dict(self) -> dict:
        result: dict = {"file": self.file, "error": self.error}
        if self.line is not None:
            result["line"] = self.line
        if self.column is not None:
            result["column"] = self.column
        return result


@dataclass
class SyntaxResults:
    """Results of a syntax check run."""

    total: int = 0
    passed: int = 0
    errors: list[SyntaxIssue] = field(default_factory=list)

    def to_dict(self) -> dict:
        return {
            "total": self.total,
            "passed": self.passed,
            "failed": len(self.errors),
            "errors": [e.to_dict() for e in self.errors],
        }


def check_syntax(root: str, exclude: list[str] | None = None) -> SyntaxResults:
    """Check all Python files in a directory for syntax errors.

    Args:
        root: Directory to check
        exclude: List of directory names to skip

    Returns:
        SyntaxResults with pass/fail counts and error details
    """
    exclude = exclude or []
    results = SyntaxResults()

    skip_dirs = exclude + ["__pycache__", ".git", ".venv", "venv", "node_modules"]

    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)

        # Skip excluded directories
        if any(ex in rel.split(os.sep) for ex in skip_dirs):
            continue

        # Filter dirnames to prevent descending into excluded dirs
        dirnames[:] = [d for d in dirnames if d not in skip_dirs]

        for fname in filenames:
            if not fname.endswith(".py"):
                continue

            filepath = os.path.join(dirpath, fname)
            results.total += 1

            try:
                py_compile.compile(filepath, doraise=True)
                results.passed += 1
            except py_compile.PyCompileError as e:
                # Extract line/column if available
                line = None
                column = None
                if hasattr(e, "exc_value") and e.exc_value:
                    exc = e.exc_value
                    if hasattr(exc, "lineno"):
                        line = exc.lineno
                    if hasattr(exc, "offset"):
                        column = exc.offset

                results.errors.append(
                    SyntaxIssue(
                        file=os.path.relpath(filepath, root),
                        error=str(e.msg) if hasattr(e, "msg") else str(e),
                        line=line,
                        column=column,
                    )
                )

    return results


def main() -> None:
    """CLI entry point."""
    parser = argparse.ArgumentParser(
        description="Syntax check for Python files",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  python syntax_check.py src/
  python syntax_check.py . --exclude vendor,migrations --json
        """,
    )
    parser.add_argument("directory", help="Root directory to check")
    parser.add_argument(
        "--exclude", default="", help="Comma-separated dirs to exclude"
    )
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    args = parser.parse_args()

    exclude = [x.strip() for x in args.exclude.split(",") if x.strip()]
    results = check_syntax(args.directory, exclude)

    if args.json:
        print(json.dumps(results.to_dict(), indent=2))
    else:
        logger.info(f"Syntax Check: {args.directory}")
        logger.info(f"  Total:  {results.total}")
        logger.info(f"  Passed: {results.passed}")
        logger.info(f"  Errors: {len(results.errors)}")

        if results.errors:
            logger.info("\n❌ Syntax Errors:")
            for e in results.errors:
                loc = f":{e.line}" if e.line else ""
                logger.info(f"  {e.file}{loc}: {e.error}")

    sys.exit(1 if results.errors else 0)


if __name__ == "__main__":
    main()

```

### scripts/docstring_check.py

```python
#!/usr/bin/env python3
"""Check for missing module, class, and function docstrings using AST.

Usage:
    python docstring_check.py <directory> [--exclude dir1,dir2] [--classes] [--functions]
    python docstring_check.py <directory> --skip-init  # Skip __init__.py files

Example:
    python docstring_check.py src/ --classes --functions --exclude tests,migrations
"""
from __future__ import annotations

import argparse
import ast
import json
import logging
import os
import sys
from dataclasses import dataclass, field

# Configure logging to stderr
logging.basicConfig(
    level=logging.INFO,
    format="%(message)s",
    stream=sys.stderr,
)
logger = logging.getLogger(__name__)


@dataclass
class DocstringResults:
    """Results of a docstring check run."""

    total_modules: int = 0
    missing_module_docstring: list[str] = field(default_factory=list)
    missing_class_docstring: list[str] = field(default_factory=list)
    missing_function_docstring: list[str] = field(default_factory=list)

    def to_dict(self) -> dict:
        return {
            "total_modules": self.total_modules,
            "missing_module_docstring": self.missing_module_docstring,
            "missing_class_docstring": self.missing_class_docstring,
            "missing_function_docstring": self.missing_function_docstring,
            "total_missing": (
                len(self.missing_module_docstring)
                + len(self.missing_class_docstring)
                + len(self.missing_function_docstring)
            ),
        }


def should_check_function(node: ast.FunctionDef | ast.AsyncFunctionDef) -> bool:
    """Determine if a function should have a docstring.

    Skip private/dunder methods and very short functions.
    """
    name = node.name

    # Skip private methods (single underscore)
    if name.startswith("_") and not name.startswith("__"):
        return False

    # Skip most dunder methods (they're well-known)
    if name.startswith("__") and name.endswith("__"):
        # But check __init__ if it has non-trivial logic
        if name == "__init__":
            # Check if body has more than just assignments
            non_assign_stmts = [
                stmt
                for stmt in node.body
                if not isinstance(stmt, (ast.Assign, ast.AnnAssign, ast.Expr, ast.Pass))
            ]
            return len(non_assign_stmts) > 0
        return False

    # Skip very short functions (1-2 lines, likely trivial)
    if len(node.body) <= 2:
        # Unless they have complex logic
        has_complex = any(
            isinstance(stmt, (ast.If, ast.For, ast.While, ast.Try, ast.With))
            for stmt in node.body
        )
        if not has_complex:
            return False

    return True


def check_docstrings(
    root: str,
    exclude: list[str] | None = None,
    check_classes: bool = False,
    check_functions: bool = False,
    skip_init: bool = False,
) -> DocstringResults:
    """Check for missing docstrings in Python files.

    Args:
        root: Directory to check
        exclude: List of directory names to skip
        check_classes: Also check class docstrings
        check_functions: Also check function docstrings
        skip_init: Skip __init__.py files

    Returns:
        DocstringResults with lists of missing docstrings
    """
    exclude = exclude or []
    results = DocstringResults()

    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)

        # Skip excluded directories
        skip_dirs = exclude + ["__pycache__", ".git", ".venv", "venv", "node_modules"]
        if any(ex in rel.split(os.sep) for ex in skip_dirs):
            continue

        # Filter out excluded dirs from dirnames to prevent descending
        dirnames[:] = [d for d in dirnames if d not in skip_dirs]

        for fname in filenames:
            if not fname.endswith(".py"):
                continue

            # Handle __init__.py based on flag
            if fname == "__init__.py" and skip_init:
                continue

            filepath = os.path.join(dirpath, fname)
            relpath = os.path.relpath(filepath, root)
            results.total_modules += 1

            try:
                with open(filepath, encoding="utf-8") as f:
                    source = f.read()
                tree = ast.parse(source)
            except SyntaxError as e:
                logger.warning(f"Syntax error in {relpath}: {e}")
                continue
            except UnicodeDecodeError as e:
                logger.warning(f"Encoding error in {relpath}: {e}")
                continue

            # Module docstring
            if not ast.get_docstring(tree):
                results.missing_module_docstring.append(relpath)

            # Walk AST for class and function definitions
            for node in ast.walk(tree):
                if check_classes and isinstance(node, ast.ClassDef):
                    if not ast.get_docstring(node):
                        results.missing_class_docstring.append(
                            f"{relpath}::{node.name}"
                        )

                if check_functions and isinstance(
                    node, (ast.FunctionDef, ast.AsyncFunctionDef)
                ):
                    if should_check_function(node) and not ast.get_docstring(node):
                        results.missing_function_docstring.append(
                            f"{relpath}::{node.name}"
                        )

    return results


def main() -> None:
    """CLI entry point."""
    parser = argparse.ArgumentParser(
        description="Docstring check for Python modules",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  python docstring_check.py src/
  python docstring_check.py . --classes --functions
  python docstring_check.py . --skip-init --exclude tests,migrations
        """,
    )
    parser.add_argument("directory", help="Root directory to check")
    parser.add_argument(
        "--exclude", default="", help="Comma-separated dirs to exclude"
    )
    parser.add_argument(
        "--classes", action="store_true", help="Also check class docstrings"
    )
    parser.add_argument(
        "--functions", action="store_true", help="Also check function docstrings"
    )
    parser.add_argument(
        "--skip-init",
        action="store_true",
        help="Skip __init__.py files (default: check them)",
    )
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    args = parser.parse_args()

    exclude = [x.strip() for x in args.exclude.split(",") if x.strip()]
    results = check_docstrings(
        args.directory,
        exclude,
        check_classes=args.classes,
        check_functions=args.functions,
        skip_init=args.skip_init,
    )

    if args.json:
        print(json.dumps(results.to_dict(), indent=2))
    else:
        logger.info(f"Docstring Check: {args.directory}")
        logger.info(f"  Modules checked: {results.total_modules}")
        logger.info(
            f"  Missing module docstring: {len(results.missing_module_docstring)}"
        )

        if results.missing_module_docstring:
            for m in results.missing_module_docstring[:10]:  # Limit output
                logger.info(f"    ⚠️  {m}")
            if len(results.missing_module_docstring) > 10:
                remaining = len(results.missing_module_docstring) - 10
                logger.info(f"    ... and {remaining} more")

        if args.classes:
            logger.info(
                f"  Missing class docstring: {len(results.missing_class_docstring)}"
            )
            for c in results.missing_class_docstring[:10]:
                logger.info(f"    ⚠️  {c}")
            if len(results.missing_class_docstring) > 10:
                remaining = len(results.missing_class_docstring) - 10
                logger.info(f"    ... and {remaining} more")

        if args.functions:
            logger.info(
                f"  Missing function docstring: {len(results.missing_function_docstring)}"
            )
            for f in results.missing_function_docstring[:10]:
                logger.info(f"    ⚠️  {f}")
            if len(results.missing_function_docstring) > 10:
                remaining = len(results.missing_function_docstring) - 10
                logger.info(f"    ... and {remaining} more")

    # Docstring missing is a warning, not an error
    sys.exit(0)


if __name__ == "__main__":
    main()

```

### references/python-profile.md

```markdown
# Python QC Profile

## Project Detection

Python project if any of these exist:
- `pyproject.toml`
- `setup.py` or `setup.cfg`
- `requirements.txt`
- `Pipfile`
- Top-level `*.py` files

## Virtual Environment Detection

Check for active/available virtual environment in order:

```bash
# 1. Already active
echo $VIRTUAL_ENV

# 2. Common venv directories
for dir in .venv venv env .env; do
    if [ -f "$dir/bin/activate" ] || [ -f "$dir/Scripts/activate" ]; then
        echo "Found: $dir"
    fi
done

# 3. Poetry
if [ -f "poetry.lock" ]; then
    poetry env info --path 2>/dev/null
fi

# 4. Conda
if [ -f "environment.yml" ] || [ -f "environment.yaml" ]; then
    echo "Conda environment file found"
    # Check if conda env exists
    conda env list | grep -q "$(basename "$PWD")"
fi

# 5. PDM
if [ -f "pdm.lock" ]; then
    pdm info --env 2>/dev/null
fi

# 6. Hatch
if [ -f "hatch.toml" ] || grep -q "\[tool.hatch\]" pyproject.toml 2>/dev/null; then
    hatch env find 2>/dev/null
fi

# 7. Project-specific activate script
if [ -f "activate.sh" ]; then
    echo "Found: activate.sh"
fi
```

**Activation for QC:**
```bash
# Auto-activate if found
if [ -f ".venv/bin/activate" ]; then
    source .venv/bin/activate
elif [ -f "venv/bin/activate" ]; then
    source venv/bin/activate
elif [ -f "poetry.lock" ]; then
    poetry run pytest  # prefer 'poetry run' over interactive 'poetry shell' in scripts
fi
```

## Test Runner Detection

Check in order:
1. `pyproject.toml` → `[tool.pytest.ini_options]` → `pytest`
2. `pytest.ini` → `pytest`
3. `setup.cfg` → `[tool:pytest]` → `pytest`
4. `tox.ini` → `tox`
5. `noxfile.py` → `nox`
6. Fallback: `python -m pytest tests/`
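
The detection order above can be sketched as a small helper (a sketch; `detect_test_runner` is a name chosen here, and the pyproject check is a plain substring match rather than a full TOML parse):

```python
import configparser
import os

def detect_test_runner(root: str) -> str:
    """Return the test command implied by the config files present in root."""
    def has(name: str) -> bool:
        return os.path.exists(os.path.join(root, name))

    if has("pyproject.toml"):
        with open(os.path.join(root, "pyproject.toml")) as f:
            # Substring check, not a TOML parse -- good enough for detection
            if "[tool.pytest.ini_options]" in f.read():
                return "pytest"
    if has("pytest.ini"):
        return "pytest"
    if has("setup.cfg"):
        cfg = configparser.ConfigParser()
        cfg.read(os.path.join(root, "setup.cfg"))
        if cfg.has_section("tool:pytest"):
            return "pytest"
    if has("tox.ini"):
        return "tox"
    if has("noxfile.py"):
        return "nox"
    return "python -m pytest tests/"
```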

## Phase 1: Test Suite with Coverage

### Basic Run
```bash
pytest -v --tb=short
```

### With Coverage
```bash
# Coverage for the main package
pytest --cov=<package_name> --cov-report=term-missing --cov-report=json --cov-fail-under=0

# Parse coverage JSON for reporting
python -c "import json; d=json.load(open('coverage.json')); print(f\"Coverage: {d['totals']['percent_covered']:.1f}%\")"
```

### Scientific Computing Projects

Large test suites with GPU/slow tests often use markers:

```bash
# Skip slow tests for quick QC
pytest -m "not slow"

# Skip GPU tests on CPU-only machines
pytest -m "not gpu"

# Run only unit tests (skip integration)
pytest -m "not integration"

# Common marker combinations
pytest -m "not (slow or gpu or integration)"
```

**Detect available markers:**
```bash
pytest --markers | grep -E "^@pytest\.mark\.(slow|gpu|integration|e2e)"
```

### No Tests Handling

If no tests found:
- Check for `tests/`, `test/`, `*_test.py`, `test_*.py`
- If directory exists but empty: **SKIP** with note "Test directory exists but no tests"
- If no test directory: **SKIP** with note "No test suite configured"
- Do NOT fail for missing tests (the project may be a library without tests)
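
The decision rules above amount to a tiny classifier (a sketch; `classify_test_presence` and its verdict strings are illustrative):

```python
import glob
import os

def classify_test_presence(root: str) -> tuple:
    """Return (verdict, note) following the discovery rules above."""
    for d in ("tests", "test"):
        test_dir = os.path.join(root, d)
        if not os.path.isdir(test_dir):
            continue
        # Match both test_*.py and *_test.py, recursively
        patterns = ("test_*.py", "*_test.py")
        files = [
            p for pat in patterns
            for p in glob.glob(os.path.join(test_dir, "**", pat), recursive=True)
        ]
        if files:
            return ("RUN", f"{len(files)} test file(s) in {d}/")
        return ("SKIP", "Test directory exists but no tests")
    return ("SKIP", "No test suite configured")
```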

## Phase 3: Static Analysis with ruff

Install if needed: `pip install ruff`

### Standard Check
```bash
ruff check --select E722,T201,B006,F401,F841,UP,I --statistics <project>
```

### Fix Mode
```bash
# Safe auto-fixes only
ruff check --fix --select E,F,I,UP <project>

# Also format
ruff format <project>
```

### Recommended Rule Set

| Rule | What it catches |
|------|----------------|
| `E722` | Bare `except:` without exception type |
| `T201` | `print()` statement found (should use logging) |
| `B006` | Mutable default argument (`def f(x=[])`) |
| `F401` | Unused import |
| `F841` | Unused local variable |
| `UP` | Pyupgrade: outdated syntax for target Python version |
| `I` | isort: import ordering |

### Severity Mapping

- `E722`, `B006` → WARNING (potential bugs)
- `T201` → WARNING (code hygiene)
- `F401`, `F841` → INFO (cleanup)
- Anything ruff calls ERROR → ERROR
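
The mapping can be kept as a plain lookup table (a sketch; `SEVERITY_BY_RULE` and `qc_severity` are names chosen here):

```python
# Map ruff rule codes to QC severities per the table above.
SEVERITY_BY_RULE = {
    "E722": "WARNING",  # bare except
    "B006": "WARNING",  # mutable default argument
    "T201": "WARNING",  # print statement
    "F401": "INFO",     # unused import
    "F841": "INFO",     # unused local variable
}

def qc_severity(rule_code: str) -> str:
    """Codes outside the table default to INFO; ruff-level errors stay ERROR upstream."""
    return SEVERITY_BY_RULE.get(rule_code, "INFO")
```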

### Strict Mode (Optional)

For stricter QC, add security and complexity checks:
```bash
ruff check --select E722,T201,B006,F401,F841,UP,I,S,C90,PT,RUF --statistics .
```

## Phase 3.5: Type Checking

### mypy
```bash
# Basic check
mypy <package> --ignore-missing-imports

# Strict mode
mypy <package> --strict --ignore-missing-imports

# With config from pyproject.toml
mypy <package>
```

### pyright (if configured)
```bash
# Check if pyright is configured
grep -q "tool.pyright" pyproject.toml && pyright <package>
```

### Type Coverage Estimate
```bash
# Count typed vs untyped function signatures
grep -rn "def " --include="*.py" | wc -l  # total
grep -rn "def .*(.*:.*) ->" --include="*.py" | wc -l  # typed (single-line signatures only)
```

## Import Check

Use `scripts/import_check.py`:

```bash
python scripts/import_check.py <package> --exclude vendor1,vendor2 --json
```

Common exclusions:
- Vendored dependencies: `aot`, `sam`, `dinov2`
- Test fixtures: `fixtures`, `mocks`
- Migration scripts: `migrations`, `alembic`
- Generated code: `generated`, `proto`

## Smoke Test Patterns

### Service Layer
```python
import tempfile
from package.service import create_thing, get_thing

def smoke_test_thing_service():
    with tempfile.TemporaryDirectory() as tmp:
        result = create_thing(tmp, "test")
        assert result is not None
        fetched = get_thing(tmp, "test")
        assert fetched is not None
    return "PASS"
```

### CLI (typer/click)
```python
from typer.testing import CliRunner
from package.cli import app

def smoke_test_cli():
    runner = CliRunner()
    result = runner.invoke(app, ["--help"])
    assert result.exit_code == 0
    return "PASS"
```

### Config Round-trip
```python
from package.config import Config

def smoke_test_config():
    cfg = Config()
    d = cfg.to_dict()
    cfg2 = Config.from_dict(d)
    assert cfg.field == cfg2.field
    return "PASS"
```

### FastAPI/Flask Endpoints
```python
from fastapi.testclient import TestClient
from package.app import app

def smoke_test_api():
    client = TestClient(app)
    # Health check
    r = client.get("/health")
    assert r.status_code == 200
    # API version
    r = client.get("/api/v1/version")
    assert r.status_code == 200
    return "PASS"
```

### Numerical/ML Backward Compatibility
```python
import numpy as np

def smoke_test_model_backward_compat():
    """Verify model produces same output as baseline."""
    from package.model import predict
    
    # Fixed input for reproducibility
    np.random.seed(42)
    test_input = np.random.randn(1, 10)
    
    result = predict(test_input)
    
    # Compare to saved baseline (update when intentionally changing)
    baseline = np.load("tests/baselines/model_output.npy")
    np.testing.assert_allclose(result, baseline, rtol=1e-5)
    return "PASS"
```

### Database/ORM
```python
import tempfile
from package.db import init_db, Session
from package.models import User

def smoke_test_database():
    with tempfile.NamedTemporaryFile(suffix=".db") as f:
        init_db(f"sqlite:///{f.name}")
        with Session() as session:
            user = User(name="test")
            session.add(user)
            session.commit()
            assert session.query(User).count() == 1
    return "PASS"
```

## UI Verification

### Gradio
```python
def smoke_test_gradio_ui():
    import os
    os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"
    from package.ui import create_ui
    
    demo = create_ui()
    assert demo is not None
    # Don't call launch()
    return "PASS"
```

### Streamlit
```bash
# Run in headless mode, should exit cleanly
timeout 10 streamlit run app.py --server.headless true 2>&1 | grep -qi "error" && echo "FAIL" || echo "PASS"
```

### PyQt/PySide
```python
import os
os.environ["QT_QPA_PLATFORM"] = "offscreen"

def smoke_test_qt_ui():
    from package.ui.main_window import MainWindow
    from PySide6.QtWidgets import QApplication
    
    app = QApplication([])
    window = MainWindow()
    assert window is not None
    # Don't show() or exec()
    return "PASS"
```

## Dependency Security

```bash
# pip-audit (recommended)
pip-audit --json

# safety (alternative)
safety check --json

# Check for known vulnerabilities
pip-audit --strict --desc on
```

```

### references/typescript-profile.md

```markdown
# TypeScript QC Profile

## Project Detection

TypeScript project if any of these exist:
- `tsconfig.json`
- `package.json` with TypeScript in dependencies
- `.ts` or `.tsx` files

## Monorepo Detection

Check for monorepo structure in order:

### pnpm Workspaces
```bash
if [ -f "pnpm-workspace.yaml" ]; then
    echo "pnpm monorepo detected"
    # List workspaces
    pnpm list --recursive --depth 0
fi
```

### npm/Yarn Workspaces
```bash
if grep -q '"workspaces"' package.json 2>/dev/null; then
    echo "npm/yarn workspaces detected"
    # List packages
    jq -r '.workspaces[]' package.json   # array form; object form uses .workspaces.packages[]
fi
```

### Nx
```bash
if [ -f "nx.json" ]; then
    echo "Nx monorepo detected"
    # List projects
    npx nx show projects
fi
```

### Turborepo
```bash
if [ -f "turbo.json" ]; then
    echo "Turborepo detected"
    # List packages
    ls -d packages/*/package.json apps/*/package.json 2>/dev/null
fi
```

### Lerna (legacy)
```bash
if [ -f "lerna.json" ]; then
    echo "Lerna monorepo detected"
    npx lerna list
fi
```

### Monorepo QC Strategy

For monorepos, run QC per package or use workspace-aware commands:

```bash
# pnpm - run in all packages
pnpm -r run lint
pnpm -r run test

# Nx - run affected only
npx nx affected --target=lint
npx nx affected --target=test

# Turborepo
npx turbo run lint test

# Changed packages only (CI mode)
pnpm -r --filter "...[origin/main]" run test
```

## Test Runner Detection

Check in order:
1. `vitest.config.ts` or `vitest` in devDependencies → `npx vitest`
2. `jest.config.ts/js` or `jest` in devDependencies → `npx jest`
3. `package.json` → `scripts.test` → use that command
4. Fallback: `npm test`
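
A minimal sketch of that detection order, assuming a readable `package.json` (`detect_ts_test_runner` is a hypothetical helper):

```python
import json
import os

def detect_ts_test_runner(root: str) -> str:
    """Apply the test-runner detection order above to a project directory."""
    pkg = {}
    pkg_path = os.path.join(root, "package.json")
    if os.path.exists(pkg_path):
        with open(pkg_path) as f:
            pkg = json.load(f)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}

    def has_config(base: str) -> bool:
        # Config files may use any of the common script extensions
        return any(
            os.path.exists(os.path.join(root, f"{base}.{ext}"))
            for ext in ("ts", "js", "mjs", "cjs")
        )

    if has_config("vitest.config") or "vitest" in deps:
        return "npx vitest"
    if has_config("jest.config") or "jest" in deps:
        return "npx jest"
    if pkg.get("scripts", {}).get("test"):
        return pkg["scripts"]["test"]
    return "npm test"
```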

### With Coverage
```bash
# Vitest
npx vitest run --coverage   # 'run' avoids watch mode

# Jest
npx jest --coverage --coverageReporters=json-summary

# Parse coverage
cat coverage/coverage-summary.json | jq '.total.lines.pct'
```

## Phase 3: Static Analysis with eslint

### Standard Check

Use project's eslint config if available:
```bash
npx eslint . --format json --output-file /tmp/eslint-report.json
```

If no eslint config exists:
```bash
# ESLint 8-style flags; ESLint 9 flat config uses --no-config-lookup instead
npx eslint . --no-eslintrc \
  --parser @typescript-eslint/parser \
  --plugin @typescript-eslint \
  --rule '{"no-unused-vars":"warn","no-console":"warn","no-debugger":"error"}'
```

### Fix Mode
```bash
npx eslint . --fix
```

### Monorepo eslint
```bash
# Root config for shared rules
npx eslint . --config .eslintrc.js

# Or per-package
pnpm -r run lint

# Nx
npx nx run-many --target=lint --all
```

### Key Rules to Check

| Rule | What it catches |
|------|-----------------|
| `no-unused-vars` | Unused variables/imports |
| `no-console` | console.log in production code |
| `no-debugger` | debugger statements |
| `@typescript-eslint/no-explicit-any` | Untyped `any` usage |
| `@typescript-eslint/no-non-null-assertion` | Unsafe `!` assertions |
| `@typescript-eslint/strict-boolean-expressions` | Truthy/falsy bugs |

## Phase 3.5: Type Checking

```bash
# Full type check without emit
npx tsc --noEmit

# Specific project in monorepo
npx tsc --project packages/core/tsconfig.json --noEmit

# All projects (Nx)
npx nx run-many --target=typecheck --all

# Parse errors
npx tsc --noEmit 2>&1 | grep -c "error TS"
```

### Strict Mode Verification

Check if strict mode is enabled:
```bash
grep -q '"strict": true' tsconfig.json && echo "Strict mode enabled" || echo "Warning: strict mode disabled"
```

## Smoke Test Patterns

### API Endpoint (Express/Fastify)
```typescript
import request from 'supertest';
import { app } from './app';

async function smokeTestApi() {
  const res = await request(app).get('/health');
  if (res.status !== 200) throw new Error('Health check failed');
  
  const apiRes = await request(app).get('/api/v1/version');
  if (apiRes.status !== 200) throw new Error('API version failed');
  
  return 'PASS';
}
```

### Module Import
```typescript
async function smokeTestImports() {
  const { MainService } = await import('./services/main');
  const service = new MainService();
  if (!service) throw new Error('MainService instantiation failed');
  return 'PASS';
}
```

### Database Connection
```typescript
import { createConnection } from './db';

async function smokeTestDatabase() {
  const conn = await createConnection(':memory:');
  await conn.query('SELECT 1');
  await conn.close();
  return 'PASS';
}
```

### React Component (without DOM)
```typescript
// JSX requires a .tsx file; createElement keeps this valid in plain .ts
import { createElement } from 'react';
import { renderToString } from 'react-dom/server';
import { App } from './App';

function smokeTestReactApp() {
  const html = renderToString(createElement(App));
  if (!html.includes('<div')) throw new Error('App render failed');
  return 'PASS';
}
```

### CLI Tool
```typescript
import { execSync } from 'child_process';

function smokeTestCli() {
  const output = execSync('npx my-cli --help', { encoding: 'utf8' });
  if (!output.includes('Usage:')) throw new Error('CLI help failed');
  return 'PASS';
}
```

## Build Check

```bash
# TypeScript compilation
npx tsc --noEmit

# Full build
npm run build

# Check for build output
ls -la dist/ build/ out/ 2>/dev/null

# Monorepo build
npx turbo run build
# or
npx nx run-many --target=build --all
```

Record: build success/failure, total errors, total warnings.

## Package Audit

```bash
# npm
npm audit --json

# pnpm
pnpm audit --json

# yarn
yarn audit --json
```

Report: total vulnerabilities by severity (critical/high/moderate/low).

## UI Verification

### React/Vue Build
```bash
# Should complete without error
npm run build

# Check for type errors in components
npx tsc --project tsconfig.json --noEmit
```

### Next.js
```bash
# Build and export
npx next build

# Check for static issues
npx next lint
```

### Storybook (if present)
```bash
# Build storybook (catches import/render errors)
npx storybook build --quiet
```

## Changed Files Only (CI Mode)

For pull request CI, only check changed files:

```bash
# Get changed TypeScript files
CHANGED=$(git diff --name-only origin/main...HEAD -- '*.ts' '*.tsx')

# Lint only changed files
echo "$CHANGED" | xargs -r npx eslint   # -r skips eslint when nothing changed

# Type check is always full project (TypeScript needs full context)
npx tsc --noEmit
```

```

### references/gdscript-profile.md

```markdown
# GDScript QC Profile

Quality control profile for Godot 4.x projects using GDScript.

## Project Detection

Godot project if any of these exist:
- `project.godot` (Godot project file)
- `*.tscn` files (scene files)
- `*.gd` files (GDScript files)

## Tool Setup

### gdlint / gdformat (gdtoolkit)

Install via pip:
```bash
pip install gdtoolkit
```

Verify installation:
```bash
gdlint --version
gdformat --version
```

### GUT (Godot Unit Testing)

Check for GUT addon:
```bash
# GUT is present if this directory exists
ls addons/gut/
```

## Phase 1: Test Suite

### With GUT

If `addons/gut/` exists, run tests via command line:

```bash
# Method 1: Using Godot headless
godot --headless -s addons/gut/gut_cmdln.gd -gdir=res://test -gexit

# Method 2: Using gut_cmdln.gd directly (Godot 4)
godot --headless --script addons/gut/gut_cmdln.gd \
  -gdir=res://test \
  -ginclude_subdirs \
  -gexit_on_success \
  -glog=2
```

**GUT output parsing:**
- Look for `Passed: X` and `Failed: Y` in output
- Exit code 0 = all passed
- Exit code 1 = failures
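
Those summary lines can be parsed with a short helper (a sketch; `parse_gut_output` is a name chosen here, and the patterns assume GUT's `Passed:`/`Failed:` lines appear verbatim):

```python
import re

def parse_gut_output(output: str) -> dict:
    """Extract pass/fail counts from GUT's summary lines."""
    result = {"passed": 0, "failed": 0}
    for key, pattern in (("passed", r"Passed:\s*(\d+)"),
                         ("failed", r"Failed:\s*(\d+)")):
        m = re.search(pattern, output)
        if m:
            result[key] = int(m.group(1))
    return result
```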

### Without GUT

If no test framework:
- Check if `test/` or `tests/` directory exists with `.gd` files
- If yes, mark as "Tests exist but no framework detected"
- If no, mark as **SKIP** (not FAIL)

## Phase 2: Scene/Resource Integrity

GDScript equivalent of import checking. Verify all references are valid.

### Preload/Load Validation

Find all `preload()` and `load()` calls:

```bash
grep -rn "preload\|load(" --include="*.gd" .
```

For each path found, verify the file exists:
```python
import re
import os

def check_gdscript_loads(project_root: str) -> dict:
    """Check all preload/load references are valid."""
    results = {"total": 0, "valid": 0, "broken": []}
    
    for root, dirs, files in os.walk(project_root):
        for f in files:
            if not f.endswith(".gd"):
                continue
            filepath = os.path.join(root, f)
            with open(filepath) as gd:
                for lineno, line in enumerate(gd, 1):
                    # Match preload("res://...") or load("res://...")
                    for match in re.finditer(r'(?:preload|load)\s*\(\s*["\']([^"\']+)["\']', line):
                        res_path = match.group(1)
                        results["total"] += 1
                        
                        if res_path.startswith("res://"):
                            actual_path = os.path.join(
                                project_root, 
                                res_path.replace("res://", "")
                            )
                            if os.path.exists(actual_path):
                                results["valid"] += 1
                            else:
                                results["broken"].append({
                                    "file": filepath,
                                    "line": lineno,
                                    "path": res_path
                                })
    return results
```

### Scene Reference Validation

Parse `.tscn` files for external resource references:

```bash
grep -h "ext_resource" *.tscn | grep "path="
```

Verify each referenced path exists.
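
The same check can be done in Python instead of grep (a sketch; `check_tscn_ext_resources` is a hypothetical helper that assumes the Godot 4 `[ext_resource ... path="res://..."]` header form):

```python
import os
import re

def check_tscn_ext_resources(tscn_path: str, project_root: str) -> list:
    """Return res:// paths referenced by a .tscn that do not exist on disk."""
    with open(tscn_path) as f:
        content = f.read()
    broken = []
    for match in re.finditer(r'\[ext_resource[^\]]*path="(res://[^"]+)"', content):
        res_path = match.group(1)
        # Map res:// onto the project root on disk
        actual = os.path.join(project_root, res_path[len("res://"):])
        if not os.path.exists(actual):
            broken.append(res_path)
    return broken
```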

### Script Attachment Validation

For each scene, verify attached scripts exist:

```python
def check_scene_scripts(tscn_path: str, project_root: str) -> list:
    """Check all script references in a scene file."""
    broken = []
    with open(tscn_path) as f:
        content = f.read()

    def missing(res_path: str) -> bool:
        rel = res_path.replace("res://", "")
        return not os.path.exists(os.path.join(project_root, rel))

    # Map ExtResource IDs to their declared paths (Godot 4 header form:
    # [ext_resource type="Script" path="res://..." id="1_abc"])
    ext_paths = {}
    for m in re.finditer(r'\[ext_resource[^\]]*path="(res://[^"]+)"[^\]]*id="?([^"\s\]]+)"?', content):
        ext_paths[m.group(2)] = m.group(1)

    # Resolve script = ExtResource("id") attachments through the map
    for match in re.finditer(r'script\s*=\s*ExtResource\s*\(\s*"([^"]+)"\s*\)', content):
        res_path = ext_paths.get(match.group(1))
        if res_path and missing(res_path):
            broken.append(res_path)

    # Direct script path references
    for match in re.finditer(r'path="(res://[^"]+\.gd)"', content):
        if missing(match.group(1)):
            broken.append(match.group(1))

    return broken
```

## Phase 3: Static Analysis (gdlint)

### Standard Check

```bash
gdlint . --exclude addons/
```

### Key Rules

| Rule | What it catches |
|------|-----------------|
| `function-name` | Non-snake_case function names |
| `class-name` | Non-PascalCase class names |
| `unused-argument` | Unused function arguments |
| `private-method-call` | Calling _private methods externally |
| `max-line-length` | Lines > 100 chars (configurable) |

### Configuration (.gdlintrc)

Create if not exists:
```ini
[gdlint]
max-line-length=120
excluded-directories=addons,build
```

### Fix Mode (gdformat)

```bash
# Check formatting issues
gdformat --check .

# Auto-fix formatting
gdformat .
```

## Phase 3.5: Type Checking

Godot 4 supports static typing but has no standalone type checker.

**Manual verification:**
1. Look for typed function signatures: `func foo(x: int) -> String:`
2. Check for `@warning_ignore` annotations (potential type issues)
3. Note untyped variables in critical code paths

**Heuristic for type coverage:**
```python
def estimate_type_coverage(gd_file: str) -> float:
    """Estimate percentage of typed declarations."""
    with open(gd_file) as f:
        content = f.read()
    
    # Count function definitions
    funcs = re.findall(r'func\s+\w+\s*\(', content)
    typed_funcs = re.findall(r'func\s+\w+\s*\([^)]*:\s*\w+', content)
    
    # Count variable declarations
    var_decls = re.findall(r'var\s+\w+', content)  # "vars" would shadow the builtin
    typed_vars = re.findall(r'var\s+\w+\s*:\s*\w+', content)
    
    total = len(funcs) + len(var_decls)
    typed = len(typed_funcs) + len(typed_vars)
    
    return typed / total if total > 0 else 1.0
```

## Phase 4: Smoke Tests (Business Logic)

### Autoload/Singleton Testing

Find autoloads in `project.godot`:
```ini
[autoload]
GameManager="*res://scripts/game_manager.gd"
SaveSystem="*res://scripts/save_system.gd"
```
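
Reading that section programmatically can be sketched with `configparser` (an illustration; `parse_autoloads` is a name chosen here, and the slice-to-first-`[` step works around the top-level preamble keys that `configparser` rejects):

```python
import configparser

def parse_autoloads(project_godot_text: str) -> dict:
    """Map autoload names to their res:// script paths from project.godot text."""
    # project.godot has top-level keys (e.g. config_version=5) before any
    # section, which configparser rejects; start at the first section header.
    start = project_godot_text.find("[")
    body = project_godot_text[start:] if start >= 0 else project_godot_text
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve the case of autoload names
    cfg.read_string(body)
    if not cfg.has_section("autoload"):
        return {}
    autoloads = {}
    for name, value in cfg.items("autoload"):
        # Values look like "*res://scripts/game_manager.gd"; the leading *
        # marks the singleton as enabled, and the quotes are literal.
        autoloads[name] = value.strip('"').lstrip("*")
    return autoloads
```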

For each autoload, verify it can be instantiated:
```gdscript
# test/test_autoloads.gd
extends GutTest

func test_game_manager_exists():
    assert_not_null(GameManager)

func test_save_system_save_load():
    var data = {"score": 100}
    SaveSystem.save_data(data)
    var loaded = SaveSystem.load_data()
    assert_eq(loaded["score"], 100)
```

### Core Class Testing

Identify core classes (non-UI nodes):
- Classes in `scripts/`, `src/`, `core/`
- Classes inheriting from `Resource`, `RefCounted`, `Object`
- Classes NOT inheriting from UI nodes (`Control`, `Button`, etc.)

## Phase 5: UI/Scene Verification

### Scene Loading Test

Verify all scenes can be instantiated:
```gdscript
# test/test_scenes.gd
extends GutTest

var scenes = [
    "res://scenes/main_menu.tscn",
    "res://scenes/game.tscn",
    "res://scenes/settings.tscn"
]

func test_scenes_load():
    for scene_path in scenes:
        var scene = load(scene_path)
        assert_not_null(scene, "Failed to load: " + scene_path)
        var instance = scene.instantiate()
        assert_not_null(instance)
        instance.queue_free()
```

### UI Node Verification

For each UI scene, check:
- All signal connections are valid
- All NodePath references resolve
- No missing fonts/themes

## Phase 6: File Consistency

### GDScript Syntax Check

Godot validates syntax on load. For CI without Godot:
```bash
# Basic syntax check using gdtoolkit
gdparse *.gd
```

### Resource Format Validation

Check `.tscn` and `.tres` files are valid:
```bash
# These should be readable text files (anything reported as binary is suspect)
file *.tscn *.tres | grep -v "text"
```

### Git State

Same as general profile:
```bash
git status --short
git diff --check
```

## Phase 7: Documentation

### Script Documentation

GDScript uses `##` for doc comments:
```gdscript
## Player character controller.
## Handles movement, jumping, and combat.
class_name Player
extends CharacterBody3D

## Current health points.
var health: int = 100

## Move the player in the given direction.
## [param direction]: Normalized movement vector.
## [param delta]: Frame delta time.
func move(direction: Vector3, delta: float) -> void:
    pass
```

Check for documentation:
```python
def check_gdscript_docs(gd_file: str) -> dict:
    """Check for documentation in GDScript file."""
    with open(gd_file) as f:
        lines = f.readlines()
    
    results = {"has_class_doc": False, "undocumented_funcs": []}
    
    # Check for class documentation (## at top before class_name)
    for i, line in enumerate(lines):
        if line.strip().startswith("##"):
            results["has_class_doc"] = True
            break
        if line.strip().startswith("class_name") or line.strip().startswith("extends"):
            break
    
    # Check function documentation
    for i, line in enumerate(lines):
        if re.match(r'\s*func\s+\w+', line):
            func_name = re.search(r'func\s+(\w+)', line).group(1)
            # Check if previous non-empty line is a doc comment
            for j in range(i-1, -1, -1):
                prev = lines[j].strip()
                if prev.startswith("##"):
                    break
                if prev:  # Non-empty, non-doc line
                    if not func_name.startswith("_"):  # Skip private
                        results["undocumented_funcs"].append(func_name)
                    break
    
    return results
```

## Common Issues Checklist

- [ ] Missing `class_name` for reusable classes
- [ ] Hardcoded paths instead of exports
- [ ] Using `get_node()` with magic strings vs `@onready` + NodePath
- [ ] Missing `@tool` annotation for editor scripts
- [ ] Signals not documented
- [ ] Large scripts (>500 lines) that should be split
- [ ] Using `await get_tree().process_frame` instead of proper signals

```

### references/general-profile.md

```markdown
# General QC Profile (Language-Agnostic)

Use this for cross-cutting checks that apply to any language, or when the project isn't Python, TypeScript, or GDScript.

## Git State

```bash
# Current commit (for baseline)
git log --oneline -1

# Working directory state
git status --short          # Should be empty (clean) or document changes

# Conflict markers (should never exist)
git diff --check            # Returns non-zero if conflict markers found

# Uncommitted changes summary
git diff --stat
```

## File Structure Verification

Verify expected project files exist:

### Required Files (WARN if missing)
- README.md or README
- License file (LICENSE, LICENSE.md, LICENSE.txt)

### Standard Files (INFO if missing)
- Test directory (`tests/`, `test/`, `__tests__/`)
- CI config (`.github/workflows/`, `.gitlab-ci.yml`, `.circleci/`)
- CHANGELOG (CHANGELOG.md, HISTORY.md, NEWS.md)
- CONTRIBUTING guide

### Configuration Files
- `.gitignore` — should exist
- `.editorconfig` — nice to have for consistency

## Documentation Freshness

Check if docs match actual state:

### Version Number Consistency
```bash
# Find version declarations
grep -rn "version" pyproject.toml package.json Cargo.toml 2>/dev/null
grep -rn "Version:" README.md 2>/dev/null

# These should match
```
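
A minimal consistency check, assuming version strings of the `version = "x.y.z"` form (`collect_versions` and `versions_consistent` are names chosen here):

```python
import json
import re

def collect_versions(files: dict) -> dict:
    """Extract a version string per file from {filename: content}."""
    versions = {}
    for name, text in files.items():
        if name == "package.json":
            v = json.loads(text).get("version")
            if v:
                versions[name] = v
        else:
            # pyproject.toml / Cargo.toml style: version = "1.2.3"
            m = re.search(r'^version\s*=\s*"([^"]+)"', text, re.MULTILINE)
            if m:
                versions[name] = m.group(1)
    return versions

def versions_consistent(files: dict) -> bool:
    """True when every file that declares a version agrees."""
    return len(set(collect_versions(files).values())) <= 1
```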

### Stale Markers
```bash
# Find TODO/FIXME that might be stale
grep -rn "TODO\|FIXME\|XXX\|HACK" --include="*.md" .

# Find "Coming soon" or "WIP" claims
grep -rni "coming soon\|work in progress\|wip\|not yet implemented" --include="*.md" .
```

### Status Badges
If README has badges, verify they're current:
- Build status badge links to actual CI
- Coverage badge URL is correct
- Version badge matches package version

## Dependency Security

Run language-appropriate security audit:

| Language | Tool | Command |
|----------|------|---------|
| Python | pip-audit | `pip-audit --json` |
| Python | safety | `safety check --json` |
| Node | npm | `npm audit --json` |
| Node | pnpm | `pnpm audit --json` |
| Rust | cargo-audit | `cargo audit --json` |
| Go | govulncheck | `govulncheck -json ./...` |
| Ruby | bundler-audit | `bundle audit check --format json` |

### Interpreting Results
- **Critical/High** → Must fix before release
- **Moderate** → Should fix, but not blocking
- **Low** → Document and track
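
Once a tool's JSON is normalized to a flat list of findings, that policy is a short function (a sketch; the `[{"severity": ...}]` input shape is an assumption, since each audit tool emits different JSON):

```python
from collections import Counter

def summarize_severities(findings: list) -> dict:
    """Count findings by severity and apply the release policy above."""
    counts = Counter(f.get("severity", "unknown").lower() for f in findings)
    # Critical/High block the release; Moderate/Low are tracked only
    blocking = counts.get("critical", 0) + counts.get("high", 0)
    return {"counts": dict(counts), "release_blocked": blocking > 0}
```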

## Code Size Analysis

### Line Count Summary
```bash
# Top 20 largest files
find . \( -name "*.py" -o -name "*.ts" -o -name "*.rs" -o -name "*.go" -o -name "*.gd" \) -print0 | \
  xargs -0 wc -l 2>/dev/null | grep -v " total$" | sort -n | tail -20
```

### Large File Detection
Files over 500 lines may need splitting:
```bash
find . \( -name "*.py" -o -name "*.ts" -o -name "*.gd" \) \
  -exec sh -c 'lines=$(wc -l < "$1"); [ "$lines" -gt 500 ] && echo "$lines $1"' _ {} \;
```

### Complexity Indicators
- Single file > 500 lines: Consider splitting
- Single function > 50 lines: Consider refactoring
- Deep nesting (>4 levels): Consider extraction
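
The 500-line heuristic can be automated portably in Python (a sketch; `find_large_files` is a name chosen here):

```python
import os

def find_large_files(root: str, threshold: int = 500,
                     exts: tuple = (".py", ".ts", ".gd")) -> list:
    """Return (line_count, path) for source files above the threshold, largest first."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fname in filenames:
            if not fname.endswith(exts):
                continue
            path = os.path.join(dirpath, fname)
            with open(path, errors="ignore") as f:
                lines = sum(1 for _ in f)
            if lines > threshold:
                hits.append((lines, path))
    return sorted(hits, reverse=True)
```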

## CI/CD Check

### CI Config Exists
```bash
ls -la .github/workflows/*.yml .gitlab-ci.yml .circleci/config.yml Jenkinsfile 2>/dev/null
```

### CI Should Include
- [ ] Tests run on every PR
- [ ] Linting runs on every PR
- [ ] Build step exists
- [ ] Security scanning (optional but recommended)
- [ ] Coverage reporting (optional)

### CI Health
If CI badges exist, verify:
- Recent runs are passing
- Runs complete in reasonable time (<10 min for most projects)

## Changed Files Only Mode

For CI on pull requests, only check changed files:

```bash
# Get changed files vs main branch
git diff --name-only origin/main...HEAD

# Get changed files (staged)
git diff --name-only --cached

# Get changed files (unstaged)
git diff --name-only
```

### Filter by Extension
```bash
# Python files only
git diff --name-only origin/main...HEAD -- '*.py'

# TypeScript files only
git diff --name-only origin/main...HEAD -- '*.ts' '*.tsx'

# GDScript files only
git diff --name-only origin/main...HEAD -- '*.gd'
```
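The same changed-files flow can be driven from Python. A sketch, assuming a git checkout where the base ref exists (`origin/main` as the default is an assumption):

```python
import subprocess

def changed_files(base="origin/main"):
    """Paths changed on this branch relative to the merge base with `base`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

def by_extension(paths, exts=(".py",)):
    """Keep only paths ending in one of the given suffixes."""
    return [p for p in paths if p.endswith(tuple(exts))]
```

For example, `by_extension(changed_files(), (".ts", ".tsx"))` selects only the TypeScript files to lint.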

## Environment Detection

### Container/VM
```bash
# Docker
[ -f /.dockerenv ] && echo "Running in Docker"

# Kubernetes
[ -n "$KUBERNETES_SERVICE_HOST" ] && echo "Running in Kubernetes"

# CI environments
[ -n "$CI" ] && echo "Running in CI"
[ -n "$GITHUB_ACTIONS" ] && echo "Running in GitHub Actions"
[ -n "$GITLAB_CI" ] && echo "Running in GitLab CI"
```
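The same checks in Python form, returning a flag map rather than printing (a sketch; `/.dockerenv` is the conventional Docker sentinel file):

```python
import os

def detect_environment(environ=None, docker_sentinel="/.dockerenv"):
    """Best-effort container/CI detection mirroring the shell checks above."""
    env = os.environ if environ is None else environ
    return {
        "docker": os.path.exists(docker_sentinel),
        "kubernetes": "KUBERNETES_SERVICE_HOST" in env,
        "ci": "CI" in env,
        "github_actions": "GITHUB_ACTIONS" in env,
        "gitlab_ci": "GITLAB_CI" in env,
    }
```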

### Required Tools
Check that required tools are available:
```bash
command -v ruff && ruff --version
command -v npx && npx eslint --version
command -v gdlint && gdlint --version
command -v mypy && mypy --version
```

## Cross-Platform Considerations

### Line Endings
```bash
# Check for mixed line endings
file * | grep -i "crlf\|dos"

# Git config should handle this
git config core.autocrlf
```

### Path Separators
- Code should use `pathlib.Path` (Python) or `path.join` (Node)
- Avoid hardcoded `/` or `\`
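On the Python side, `pathlib` composes paths with the correct separator for the host OS. A minimal sketch (the `config/settings.toml` layout is illustrative):

```python
from pathlib import Path

def config_path(root, name="settings.toml"):
    """Build a path without hardcoding '/' or '\\'."""
    return Path(root) / "config" / name
```

The same path object renders with backslashes on Windows and forward slashes elsewhere.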

### Shell Scripts
```bash
# Check shebang
head -1 scripts/*.sh 2>/dev/null

# Shellcheck (if available)
shellcheck scripts/*.sh 2>/dev/null
```

## Quick Mode Subset

For `--quick` mode, run only these checks:
1. Git state (clean working directory)
2. Syntax check (files parse without error)
3. Critical lint rules only (E722, B006)
4. README exists

Skip:
- Full test suite
- Import checking
- Smoke tests
- Documentation completeness
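One possible shape for a `--quick` runner, assuming `git` and `ruff` are on PATH (tool availability should be verified first, as under Environment Detection; the per-language syntax-parse step is omitted here):

```python
import subprocess
from pathlib import Path

def run_quick(root="."):
    """Run only the quick checks; returns a name -> passed map."""
    results = {}
    status = subprocess.run(["git", "status", "--porcelain"],
                            capture_output=True, text=True, cwd=root)
    results["git_clean"] = status.returncode == 0 and not status.stdout.strip()
    lint = subprocess.run(["ruff", "check", "--select", "E722,B006", "."],
                          capture_output=True, text=True, cwd=root)
    results["critical_lint"] = lint.returncode == 0
    results["readme"] = any(Path(root).glob("README*"))
    return results

def verdict(results):
    """Collapse the map into the report verdict."""
    return "PASS" if all(results.values()) else "FAIL"
```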

```



---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### _meta.json

```json
{
  "owner": "isonaei",
  "slug": "code-qc",
  "displayName": "Code QC",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1771374057944,
    "commit": "https://github.com/openclaw/skills/commit/8f3a6eab1fa7ac4a0a6b67adb333cc1d97924522"
  },
  "history": []
}

```

### references/ruff-rules.md

```markdown
# Ruff Rule Reference for QC

## Rule Selection Decision Tree

Use this flowchart to select the appropriate rule set for your project:

```
Is this a QC audit or development linting?
├── QC Audit (one-time check)
│   └── Use: STANDARD SET (catch issues without blocking)
│
└── Development/CI (ongoing)
    │
    ├── Is this a library/SDK?
    │   └── Use: STRICT SET (quality is paramount)
    │
    ├── Is this a CLI/script?
    │   ├── Allow print()? → Remove T201 from rules
    │   └── Use: STANDARD SET minus T201
    │
    ├── Is this a web application?
    │   └── Use: STANDARD + SECURITY (add S)
    │
    ├── Is this a data science/ML project?
    │   ├── Allow long lines? → Add --line-length 120
    │   └── Use: STANDARD SET (flexible)
    │
    └── Is this a legacy codebase?
        └── Use: MINIMAL SET (fix critical only)
```

## Rule Sets

### MINIMAL SET (Legacy/Quick Fix)
```bash
ruff check --select E722,B006 .
```
Only catches:
- Bare `except:` (potential bug masking)
- Mutable default arguments (actual bugs)

### STANDARD SET (QC Audit)
```bash
ruff check --select E722,T201,B006,F401,F841,UP,I --statistics .
```
Recommended for most projects. Catches common issues without being noisy.

### STRICT SET (Libraries/SDKs)
```bash
ruff check --select E722,T201,B006,F401,F841,UP,I,S,C90,PT,RUF,D --statistics .
```
Full strictness for public libraries.

### SECURITY SET (Web Applications)
```bash
ruff check --select E722,B006,F401,F841,UP,I,S --statistics .
```
Focus on security-relevant issues.

## Rule Explanations

### E722 — Bare except
```python
# Bad — catches KeyboardInterrupt, SystemExit, and hides bugs
try:
    risky()
except:
    pass

# Good — specific exceptions
try:
    risky()
except (ValueError, KeyError) as e:
    logger.error(f"Failed: {e}")

# Also Good — Exception base class (KeyboardInterrupt and SystemExit still propagate)
try:
    risky()
except Exception as e:
    logger.exception("Unexpected error")
```

**Why it matters:** Bare `except:` catches `KeyboardInterrupt` (Ctrl+C), `SystemExit` (sys.exit()), and other system signals. This makes debugging impossible and can prevent graceful shutdown.

### T201 — print() found
```python
# Bad — goes to stdout, no timestamps, no levels
print(f"Processing {item}")

# Good — proper logging
logger.info(f"Processing {item}")

# Also Good — explicit CLI output
import sys
sys.stderr.write(f"Processing {item}\n")
```

**When to ignore:** CLIs, scripts, notebooks. Add `# noqa: T201` or drop T201 from the rule set.

### B006 — Mutable default argument
```python
# Bad — shared mutable state across all calls
def process(items=[]):  # Same list for every call!
    items.append("new")
    return items

process()  # ['new']
process()  # ['new', 'new'] — Bug!

# Good — None sentinel
def process(items=None):
    if items is None:
        items = []
    items.append("new")
    return items
```

**Why it matters:** This is a genuine bug, not style. Default arguments are evaluated once at function definition, not each call.

### F401 — Unused import
```python
# Bad — clutters namespace, slows startup
import os  # never used

# Good — remove or use
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    import os  # Only for type hints
```

**When to ignore:** `__init__.py` re-exports, plugins that need imports for side effects.

### F841 — Unused variable
```python
# Bad — confusing, may indicate forgotten logic
result = expensive_call()  # result never used

# Good — explicit discard
_ = expensive_call()

# Or just
expensive_call()
```

### UP — Pyupgrade
Catches outdated syntax for your target Python version:
```python
# Python 3.9+: old
from typing import Dict, List
def f(x: Dict[str, int]) -> List[str]: ...
# Python 3.9+: new — built-in generics
def f(x: dict[str, int]) -> list[str]: ...

# Python 3.10+: old
Union[int, str]
# Python 3.10+: new
int | str
```

### I — isort
Consistent import ordering:
```python
# Bad — inconsistent grouping
from myproject.utils import helper
import os
from typing import Optional
import sys

# Good — grouped and sorted
import os
import sys
from typing import Optional

from myproject.utils import helper
```

## Extended Rules (Optional)

### S — Bandit Security
```bash
ruff check --select S .
```
- `S101` — assert used (stripped when Python runs with `-O`)
- `S105` — hardcoded password
- `S106` — hardcoded password in function arg
- `S107` — hardcoded password default
- `S108` — insecure temp file
- `S301` — pickle usage (insecure deserialization)
- `S608` — SQL injection
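In the same bad/good style as the rules above, a sketch of what S608 flags and the parameterized alternative (the table and column names are illustrative):

```python
import sqlite3

# Bad — S608: untrusted input interpolated into the SQL string
def find_user_unsafe(conn, name):
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

# Good — placeholder binding; the driver escapes the value
def find_user(conn, name):
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```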

### C90 — McCabe Complexity
```bash
ruff check --select C90 .
```
The complexity threshold is set in configuration rather than on the CLI (`max-complexity = 10` under `[tool.ruff.lint.mccabe]` in pyproject.toml).
- `C901` — function too complex (cyclomatic complexity above the configured threshold)

### D — Docstrings (pydocstyle)
```bash
ruff check --select D .
```
- `D100` — missing module docstring
- `D101` — missing class docstring
- `D102` — missing method docstring
- `D103` — missing function docstring

### PT — pytest
```bash
ruff check --select PT .
```
- `PT001` — `@pytest.fixture` decorator parentheses style
- `PT006` — wrong type for `@pytest.mark.parametrize` names
- `PT009` — use plain `assert` instead of unittest-style `assertEqual`/`assertTrue`

### RUF — Ruff-specific
```bash
ruff check --select RUF .
```
Ruff's own rules for common Python issues.

## Configuration

### pyproject.toml
```toml
[tool.ruff]
line-length = 100
target-version = "py311"

[tool.ruff.lint]
select = ["E722", "T201", "B006", "F401", "F841", "UP", "I"]
ignore = ["T201"]  # ignore wins over select — allow print in this project

[tool.ruff.lint.per-file-ignores]
"tests/*" = ["S101"]  # Allow assert in tests
"scripts/*" = ["T201"]  # Allow print in scripts
```

### CLI Override
```bash
# Ignore specific rules for this run
ruff check --ignore T201,F841 .

# Add extra rules
ruff check --select E722,T201,B006 --extend-select S .
```

## Fix Mode

### Safe Fixes Only
```bash
ruff check --fix .
```
Applies only fixes that are guaranteed safe (won't change behavior).

### Unsafe Fixes
```bash
ruff check --fix --unsafe-fixes .
```
Applies all fixes, including potentially behavior-changing ones. Review the diff carefully.

### Format
```bash
ruff format .
```
Black-compatible formatting. Run after `--fix`.

## Interpreting Output

```
$ ruff check --statistics .
Found 47 errors.
F401  [ ] 23  `...` imported but unused
T201  [ ] 15  `print` found
F841  [*]  5  Local variable `...` is assigned but never used
E722  [ ]  3  Do not use bare `except`
B006  [ ]  1  Do not use mutable data structures for argument defaults

[*] 5 fixable with `--fix`
```

- **Count by rule** — prioritize high-count rules
- **[*] Fixable** — can be auto-fixed
- **Statistics** — use `--statistics` for summary view

## CI Integration

```yaml
# GitHub Actions
- name: Lint with ruff
  run: |
    pip install ruff
    ruff check --output-format=github .
```

```yaml
# GitLab CI
lint:
  script:
    - pip install ruff
    - ruff check --output-format=gitlab .
```

```