testing

Use when running tests to validate implementations, collecting test evidence, or debugging failures. Load in TEST state. Covers unit tests (pytest/jest), API tests (curl), browser tests (Claude-in-Chrome), database verification. All results are code-verified, not LLM-judged.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 7
Hot score: 83
Updated: March 20, 2026
Overall rating: C2.3
Composite score: 2.3
Best-practice grade: N/A

Install command

npx @skill-hub/cli install ingpoc-skills-testing
Tags: testing, pytest, api, automation, debugging

Repository

ingpoc/SKILLS

Skill path: testing

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack, Backend, Testing.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: ingpoc.

This is a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install testing into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/ingpoc/SKILLS before adding testing to shared team environments
  • Load testing in the TEST state to produce code-verified results and collected evidence

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: testing
description: "Use when running tests to validate implementations, collecting test evidence, or debugging failures. Load in TEST state. Covers unit tests (pytest/jest), API tests (curl), browser tests (Claude-in-Chrome), database verification. All results are code-verified, not LLM-judged."
keywords: test, verify, pytest, jest, api, browser, evidence
---

# Testing

Comprehensive testing for TEST state.

## Instructions

1. Run unit tests: `scripts/run-unit-tests.sh`
2. Run API tests: `scripts/run-api-tests.sh`
3. Run browser tests (if UI): via Claude-in-Chrome MCP
4. Verify database (if data): `scripts/verify-database.sh`
5. Collect evidence: `scripts/collect-evidence.sh`
6. Report results (code verified, not judged)
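The sequence above can be sketched as a small driver function (a sketch, not part of the skill itself; the stage names are the scripts listed in the steps, and the "keep going on failure" policy is an assumption so that evidence collection still sees every stage):

```shell
# Hypothetical driver for the TEST state: run every stage even when an
# early one fails, so collect-evidence.sh sees a complete picture, then
# propagate overall failure through the exit status.
run_stages() {
    local failed=0 stage
    for stage in "$@"; do
        echo "--- $stage ---"
        "$stage" || failed=1   # record the failure but keep going
    done
    return $failed
}

# Usage (from the project root):
#   run_stages scripts/run-unit-tests.sh scripts/run-api-tests.sh \
#              scripts/collect-evidence.sh
```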

## Exit Criteria (Code Verified)

```bash
# All must return exit code 0
scripts/run-unit-tests.sh
scripts/run-api-tests.sh
[ -f "/tmp/test-evidence/results.json" ]
jq -e '.all_passed == true' /tmp/test-evidence/results.json  # -e: exit non-zero unless true
```

## References

| File | Load When |
|------|-----------|
| references/unit-testing.md | Writing/running unit tests |
| references/api-testing.md | Testing API endpoints |
| references/browser-testing.md | UI testing with Chrome |
| references/database-testing.md | Database verification |


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### scripts/run-unit-tests.sh

```bash
#!/bin/bash
# Run unit tests based on project config
# Exit: 0 = all pass, 1 = failures
# Config: .claude/config/project.json → test_command

EVIDENCE_DIR="/tmp/test-evidence"
mkdir -p "$EVIDENCE_DIR"

# ─────────────────────────────────────────────────────────────────
# Config helper (self-contained)
# ─────────────────────────────────────────────────────────────────
CONFIG="$PWD/.claude/config/project.json"
# Fall back to "$2" when the config file is missing OR the key is absent/empty
get_config() { local v; v=$(jq -r ".$1 // empty" "$CONFIG" 2>/dev/null); echo "${v:-$2}"; }

# ─────────────────────────────────────────────────────────────────
# Get test command (config → auto-detect → fallback)
# ─────────────────────────────────────────────────────────────────
TEST_CMD=$(get_config "test_command" "")

if [ -z "$TEST_CMD" ]; then
    # Auto-detect based on project files
    if [ -f "pytest.ini" ] || [ -f "pyproject.toml" ]; then
        TEST_CMD="pytest -q --tb=short"
    elif [ -f "Cargo.toml" ]; then
        TEST_CMD="cargo test"
    elif [ -f "go.mod" ]; then
        TEST_CMD="go test ./..."
    elif [ -f "package.json" ]; then
        TEST_CMD="npm test"
    else
        TEST_CMD="echo 'No test command configured in .claude/config/project.json'"
    fi
fi

# ─────────────────────────────────────────────────────────────────
# Run tests
# ─────────────────────────────────────────────────────────────────
echo "=== Running Unit Tests ==="
echo "Command: $TEST_CMD"

# Capture the test command's exit code, not tee's (PIPESTATUS[0] holds the
# status of the first command in the pipeline)
eval "$TEST_CMD" 2>&1 | tee "$EVIDENCE_DIR/test-output.log"
RESULT=${PIPESTATUS[0]}

# Save evidence
cat > "$EVIDENCE_DIR/unit-tests.json" << EOF
{
  "unit_tests": {
    "passed": $([ $RESULT -eq 0 ] && echo true || echo false),
    "exit_code": $RESULT,
    "command": "$TEST_CMD"
  }
}
EOF

exit $RESULT

```
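The config file the script reads might look like this (the key names `test_command`, `dev_server_port`, and `api_url` come from the script headers above; the values here are hypothetical examples):

```shell
# Hypothetical example: create .claude/config/project.json and read it back
# with the same jq pattern the scripts use (requires jq).
cd "$(mktemp -d)"
mkdir -p .claude/config
cat > .claude/config/project.json << 'EOF'
{
  "test_command": "pytest -q --tb=short",
  "dev_server_port": 8000,
  "api_url": "http://localhost:8000"
}
EOF

CONFIG="$PWD/.claude/config/project.json"
get_config() { local v; v=$(jq -r ".$1 // empty" "$CONFIG" 2>/dev/null); echo "${v:-$2}"; }

get_config "test_command" ""        # → pytest -q --tb=short
get_config "missing_key" "default"  # → default
```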

### scripts/run-api-tests.sh

```bash
#!/bin/bash
# Run API endpoint tests
# Exit: 0 = all pass, 1 = failures
# Config: .claude/config/project.json → api_url, dev_server_port

EVIDENCE_DIR="/tmp/test-evidence"
mkdir -p "$EVIDENCE_DIR"

# ─────────────────────────────────────────────────────────────────
# Config helper (self-contained)
# ─────────────────────────────────────────────────────────────────
CONFIG="$PWD/.claude/config/project.json"
# Fall back to "$2" when the config file is missing OR the key is absent/empty
get_config() { local v; v=$(jq -r ".$1 // empty" "$CONFIG" 2>/dev/null); echo "${v:-$2}"; }

# ─────────────────────────────────────────────────────────────────
# Get base URL from config
# ─────────────────────────────────────────────────────────────────
PORT=$(get_config "dev_server_port" "3000")
BASE_URL=$(get_config "api_url" "http://localhost:$PORT")

ERRORS=0
TESTS=0

echo "=== Running API Tests ==="
echo "Base URL: $BASE_URL"

# ─────────────────────────────────────────────────────────────────
# Test endpoint helper
# ─────────────────────────────────────────────────────────────────
test_endpoint() {
    local method=$1
    local endpoint=$2
    local expected_status=$3
    local description=$4

    ((TESTS++))
    STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X "$method" "$BASE_URL$endpoint" --max-time 5 2>/dev/null || echo "000")

    if [ "$STATUS" = "$expected_status" ]; then
        echo "✓ $description - $STATUS"
    else
        echo "✗ $description - Expected $expected_status, got $STATUS"
        ((ERRORS++))
    fi
}

# ─────────────────────────────────────────────────────────────────
# Run tests
# ─────────────────────────────────────────────────────────────────
test_endpoint "GET" "/health" "200" "Health check"
test_endpoint "GET" "/" "200" "Root endpoint"

# Save evidence
cat > "$EVIDENCE_DIR/api-tests.json" << EOF
{
  "api_tests": {
    "base_url": "$BASE_URL",
    "total": $TESTS,
    "passed": $((TESTS - ERRORS)),
    "failed": $ERRORS,
    "all_passed": $([ $ERRORS -eq 0 ] && echo true || echo false)
  }
}
EOF

[ $ERRORS -eq 0 ] && exit 0 || exit 1

```
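`test_endpoint` only checks the status code of GET/parameterless requests. A POST variant with a JSON body could follow the same curl conventions (a sketch, not part of the original script; the `/api/items` endpoint and payload are hypothetical):

```shell
# Hypothetical extension of test_endpoint: POST a JSON body and compare the
# returned status code, using the same curl flags as the original helper.
BASE_URL="${BASE_URL:-http://localhost:3000}"

test_post_endpoint() {
    local endpoint=$1 body=$2 expected_status=$3 description=$4
    local response status
    response=$(curl -s -w '\n%{http_code}' -X POST "$BASE_URL$endpoint" \
        -H 'Content-Type: application/json' -d "$body" \
        --max-time 5 2>/dev/null) || response=$'\n000'
    status=${response##*$'\n'}    # last line of the output is the status code
    if [ "$status" = "$expected_status" ]; then
        echo "✓ $description - $status"
    else
        echo "✗ $description - Expected $expected_status, got $status"
    fi
}

test_post_endpoint "/api/items" '{"name":"demo"}' "201" "Create item"
```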

### scripts/collect-evidence.sh

```bash
#!/bin/bash
# Collect all test evidence into summary
# Output: results.json with all test outcomes

EVIDENCE_DIR="/tmp/test-evidence"
mkdir -p "$EVIDENCE_DIR"

echo "=== Collecting Test Evidence ==="

# Initialize results
ALL_PASSED=true
RESULTS="{\"timestamp\": \"$(date -Iseconds)\", \"tests\": {}}"

# Collect unit test results
if [ -f "$EVIDENCE_DIR/unit-tests.json" ]; then
    UNIT=$(cat "$EVIDENCE_DIR/unit-tests.json")
    PASSED=$(echo "$UNIT" | jq '.unit_tests.passed')
    RESULTS=$(echo "$RESULTS" | jq --argjson unit "$UNIT" '.tests.unit = $unit')
    [ "$PASSED" = "false" ] && ALL_PASSED=false
fi

# Collect API test results
if [ -f "$EVIDENCE_DIR/api-tests.json" ]; then
    API=$(cat "$EVIDENCE_DIR/api-tests.json")
    PASSED=$(echo "$API" | jq '.api_tests.all_passed')  # .passed is a count, not a boolean
    RESULTS=$(echo "$RESULTS" | jq --argjson api "$API" '.tests.api = $api')
    [ "$PASSED" = "false" ] && ALL_PASSED=false
fi

# Collect browser test results
if [ -f "$EVIDENCE_DIR/browser-tests.json" ]; then
    BROWSER=$(cat "$EVIDENCE_DIR/browser-tests.json")
    PASSED=$(echo "$BROWSER" | jq '.browser_tests.passed')
    RESULTS=$(echo "$RESULTS" | jq --argjson browser "$BROWSER" '.tests.browser = $browser')
    [ "$PASSED" = "false" ] && ALL_PASSED=false
fi

# Set overall result
RESULTS=$(echo "$RESULTS" | jq --argjson ok "$ALL_PASSED" '.all_passed = $ok')

# Save final results
echo "$RESULTS" | jq '.' > "$EVIDENCE_DIR/results.json"

echo "Evidence collected to $EVIDENCE_DIR/results.json"
cat "$EVIDENCE_DIR/results.json"

[ "$ALL_PASSED" = "true" ] && exit 0 || exit 1

```
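Downstream tooling (for example a CI gate) can consume `results.json` through `jq -e`, which sets its exit code from the boolean result. A minimal sketch (the `check_results` helper is hypothetical, not part of the skill):

```shell
# Hypothetical CI gate helper: succeed only when a results.json file reports
# all_passed == true; jq -e exits non-zero for false/null results.
check_results() {
    jq -e '.all_passed == true' "$1" > /dev/null 2>&1
}

# Demo against freshly written summary files:
TMP=$(mktemp)
echo '{"all_passed": true, "tests": {}}' > "$TMP"
check_results "$TMP" && echo "gate open"     # → gate open

echo '{"all_passed": false}' > "$TMP"
check_results "$TMP" || echo "gate closed"   # → gate closed
rm -f "$TMP"
```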
