
cloudflare

Infrastructure operations for Cloudflare: Workers, KV, R2, D1, Hyperdrive, observability, builds, audit logs. Triggers: worker/KV/R2/D1/logs/build/deploy/audit. Three permission tiers: Diagnose (read-only), Change (write requires confirmation), Super Admin (isolated environment). Write operations follow read-first, confirm, execute, verify pattern. MCP is optional — works with Wrangler CLI/Dashboard too.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 322
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.3)
Composite score: 4.3
Best-practice grade: C (62.8)

Install command

npx @skill-hub/cli install heyvhuang-ship-faster-cloudflare

Repository

Heyvhuang/ship-faster

Skill path: skills/cloudflare



Best for

Primary workflow: Run DevOps.

Technical facets: Full Stack, DevOps, Integration.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: Heyvhuang.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install cloudflare into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/Heyvhuang/ship-faster before adding cloudflare to shared team environments
  • Use cloudflare for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: cloudflare
description: "Infrastructure operations for Cloudflare: Workers, KV, R2, D1, Hyperdrive, observability, builds, audit logs. Triggers: worker/KV/R2/D1/logs/build/deploy/audit. Three permission tiers: Diagnose (read-only), Change (write requires confirmation), Super Admin (isolated environment). Write operations follow read-first, confirm, execute, verify pattern. MCP is optional — works with Wrangler CLI/Dashboard too."
allowed-tools:
  - Read
  - Bash
  - WebFetch
---

# Cloudflare Infrastructure Operations

Manage Cloudflare services: Workers, KV, R2, D1, Hyperdrive, Observability, Builds, and Audit Logs.

> **MCP is optional.** This skill works with MCP (auto), Wrangler CLI, or Dashboard. See [BACKENDS.md](BACKENDS.md) for execution options.

## Permission Tiers

| Tier | Purpose | Scope | Risk Control |
|------|---------|-------|--------------|
| **Diagnose** | Read-only/query/troubleshoot | Observability, Builds, Audit | Default entry, no writes |
| **Change** | Create/modify/delete resources | KV, R2, D1, Hyperdrive | Requires confirmation + verification |
| **Super Admin** | Highest privileges | All + Container Sandbox | Only in isolated/test environments |

## Security Rules

### Read Operations
1. **Define scope first** — account / worker / resource ID
2. **No account set?** — List accounts first, then set active
3. **Evidence required** — Conclusions must have logs/screenshots/audit records

### Write Operations (Four-step Flow)
```
1. Plan: Read current state first (list/get)
2. Confirm: Output precise change (name/ID/impact), await user confirmation
3. Execute: create/delete/update
4. Verify: audit logs + observability confirm no new errors
```
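The gate above can be sketched as a small backend-agnostic wrapper. Everything in this sketch is illustrative: the callbacks stand in for whatever MCP, Wrangler, or Dashboard steps actually perform each stage.

```typescript
// Illustrative sketch of the plan → confirm → execute → verify gate.
// Every callback name is an assumption standing in for a real backend call.
type ChangePlan = { description: string };

async function gatedWrite<T>(steps: {
  readCurrentState: () => Promise<unknown>;        // 1. Plan: list/get first
  plan: ChangePlan;
  confirm: (plan: ChangePlan) => Promise<boolean>; // 2. Confirm with the user
  execute: () => Promise<T>;                       // 3. Execute the change
  verify: (result: T) => Promise<boolean>;         // 4. Verify via audit/observability
}): Promise<T> {
  await steps.readCurrentState();
  if (!(await steps.confirm(steps.plan))) {
    throw new Error(`Change rejected: ${steps.plan.description}`);
  }
  const result = await steps.execute();
  if (!(await steps.verify(result))) {
    throw new Error(`Verification failed after: ${steps.plan.description}`);
  }
  return result;
}
```

The point of the shape is that execution is unreachable without a confirmation result, and a failed verification surfaces as an error rather than a silent success.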

### Prohibited Actions
- ❌ Execute create/delete/update without confirmation
- ❌ Delete production resources (unless user explicitly says "delete production xxx")
- ❌ Use Super Admin privileges in non-isolated environments
- ❌ Use container sandbox as persistent environment

## Operation Categories

### Diagnose Tier (Read-only)

| Category | What You Can Do |
|----------|-----------------|
| **Observability** | Query worker logs/metrics, discover fields, explore values |
| **Builds** | List build history, get build details, view build logs |
| **Browser** | Fetch page HTML, convert to markdown, take screenshots |
| **Audit** | Pull change history by time range |
| **Workers** | List workers, get details, view source code |

### Change Tier (Write Operations)

| Resource | Operations |
|----------|------------|
| **KV** | List, get, create ⚠️, update ⚠️, delete ⚠️ |
| **R2** | List, get, create ⚠️, delete ⚠️ |
| **D1** | List, get, query, create ⚠️, delete ⚠️ |
| **Hyperdrive** | List, get, create ⚠️, edit ⚠️, delete ⚠️ |

⚠️ = Requires confirmation

### Super Admin Tier (Container Sandbox)

Temporary container for isolated tasks (~10 min lifecycle):
- Initialize, execute commands, read/write/delete files
- Use for: running tests, reproducing issues, parsing data
- NOT for: persistent state, production workloads

## Common Workflows

### Troubleshooting Flow
```
1. Clarify symptoms → worker name / time range / error type
2. Query observability to pull logs/metrics
3. If build-related → get build logs
4. If page-related → take screenshot to reproduce
5. Trace changes → pull audit logs
6. Summarize: root cause + evidence + fix recommendations
```

### Resource Management Flow
```
1. List accounts → set active account
2. List resources (KV / R2 / D1)
3. Plan changes → present to user
4. Execute after confirmation
5. Verify: audit logs + observability shows no errors
```

## Output Format

- **Language**: English
- **Structure**: Conclusion → Key data/evidence → Tool call summary → Next steps
- **Write operations**: Must clearly list operations and impact scope

Example:
```
✅ Investigation complete: worker `api-gateway` experienced 5xx spike between 18:00-18:30

Root cause: New code deployed threw TypeError when processing /v2/users
Evidence:
- Logs: 18:02 first occurrence of "Cannot read property 'id' of undefined"
- Audit: 18:00 user [email protected] deployed new version
- Metrics: error_rate jumped from 0.1% to 12%

Recommendation: Roll back to previous version, or fix /v2/users handler
```

## File-based Pipeline

When integrating into multi-step workflows:

```
runs/<workflow>/active/<run_id>/
├── proposal.md                # Symptoms/objectives
├── context.json               # Account/worker/resource/time_range
├── tasks.md                   # Checklist + approval gate
├── evidence/observability.md
├── evidence/audit.md
├── evidence/screenshots/
├── evidence/change-plan.md    # Write operations written here first
├── evidence/report.md         # Conclusion + evidence + next steps
└── logs/events.jsonl          # Optional tool call summary
```
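A minimal scaffold for that layout might look like the following. The helper name is made up and the file contents are placeholders; only the directory tree mirrors the structure above.

```typescript
// Assumed helper that creates the run layout shown above.
import { mkdirSync, writeFileSync, existsSync } from "node:fs";
import { join } from "node:path";

function scaffoldRun(workflow: string, runId: string, root = "runs"): string {
  const base = join(root, workflow, "active", runId);
  // Directories, including nested evidence/screenshots and logs.
  for (const dir of ["evidence/screenshots", "logs"]) {
    mkdirSync(join(base, dir), { recursive: true });
  }
  // Placeholder files; real runs fill these in as the workflow progresses.
  writeFileSync(join(base, "context.json"), "{}\n");
  for (const name of ["proposal.md", "tasks.md", "evidence/change-plan.md"]) {
    writeFileSync(join(base, name), "");
  }
  return base;
}
```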

## Error Handling

| Situation | Action |
|-----------|--------|
| Account not set | Run accounts_list → set_active_account first |
| Resource doesn't exist | Verify ID/name, list available resources |
| Insufficient permissions | Explain required permissions, check API token scope |
| Observability query too broad | Split into smaller time ranges |
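The last row can be mechanized: a simple splitter (all names here are illustrative) turns one oversized query into several narrow windows.

```typescript
// Split [startMs, endMs) into windows of at most windowMs each, so a broad
// observability query can be issued as several narrow ones.
function splitTimeRange(
  startMs: number,
  endMs: number,
  windowMs: number,
): Array<[number, number]> {
  if (windowMs <= 0) throw new Error("windowMs must be positive");
  const windows: Array<[number, number]> = [];
  for (let t = startMs; t < endMs; t += windowMs) {
    windows.push([t, Math.min(t + windowMs, endMs)]);
  }
  return windows;
}
```

Querying a 30-minute incident in five-minute windows then becomes six small calls instead of one request that the backend rejects as too broad.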

## Related Files

- [BACKENDS.md](BACKENDS.md) — Execution options (MCP/CLI/Dashboard)
- [SETUP.md](SETUP.md) — MCP configuration (optional)
- [scenarios.md](scenarios.md) — 20 real-world scenario examples


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### BACKENDS.md

```markdown
# Cloudflare Execution Backends

Three ways to execute infrastructure operations. Pick what works for you.

## Quick Reference

| Operation | MCP | Wrangler CLI | Dashboard |
|-----------|-----|--------------|-----------|
| **Workers** | | | |
| List workers | `workers_list` | `wrangler deployments list` | Workers & Pages |
| Get worker details | `workers_get_worker` | `wrangler deployments view` | Worker → Settings |
| Get worker code | `workers_get_worker_code` | — | Worker → Quick Edit |
| **KV** | | | |
| List namespaces | `kv_namespaces_list` | `wrangler kv namespace list` | Workers → KV |
| Create namespace | `kv_namespace_create` | `wrangler kv namespace create` | KV → Create |
| Delete namespace | `kv_namespace_delete` | `wrangler kv namespace delete` | KV → Delete |
| **R2** | | | |
| List buckets | `r2_buckets_list` | `wrangler r2 bucket list` | R2 → Overview |
| Create bucket | `r2_bucket_create` | `wrangler r2 bucket create` | R2 → Create bucket |
| Delete bucket | `r2_bucket_delete` | `wrangler r2 bucket delete` | R2 → Delete |
| **D1** | | | |
| List databases | `d1_databases_list` | `wrangler d1 list` | D1 → Overview |
| Execute SQL | `d1_database_query` | `wrangler d1 execute` | D1 → Console |
| Create database | `d1_database_create` | `wrangler d1 create` | D1 → Create |
| Delete database | `d1_database_delete` | `wrangler d1 delete` | D1 → Delete |
| **Observability** | | | |
| Query logs/metrics | `query_worker_observability` | — | Analytics → Workers |
| **Audit** | | | |
| View audit logs | `auditlogs_by_account_id` | — | Audit Log |
| **Builds** | | | |
| List builds | `workers_builds_list_builds` | — | Worker → Deployments |
| Get build logs | `workers_builds_get_build_logs` | — | Deployment → View logs |

## Option A: MCP (Automatic)

If you have MCP configured, tools execute automatically.

**Setup**: See [SETUP.md](SETUP.md)

**Example**:
```
Agent calls: kv_namespaces_list()
→ Results returned automatically
```

## Option B: Wrangler CLI

Install: `npm install -g wrangler`

### Setup

```bash
# Login (opens browser)
wrangler login

# Verify which account you are logged in as
wrangler whoami
```

### Workers

```bash
# List deployments
wrangler deployments list

# View specific deployment
wrangler deployments view --deployment-id <id>

# Tail logs (real-time)
wrangler tail <worker-name>
```

### KV

```bash
# List namespaces
wrangler kv namespace list

# Create namespace
wrangler kv namespace create <name>

# List keys
wrangler kv key list --namespace-id <id>

# Get value
wrangler kv key get <key> --namespace-id <id>

# Put value
wrangler kv key put <key> <value> --namespace-id <id>

# Delete namespace
wrangler kv namespace delete <id>
```

### R2

```bash
# List buckets
wrangler r2 bucket list

# Create bucket
wrangler r2 bucket create <name>

# List objects
wrangler r2 object list <bucket-name>

# Upload object
wrangler r2 object put <bucket-name>/<key> --file <path>

# Delete bucket
wrangler r2 bucket delete <name>
```

### D1

```bash
# List databases
wrangler d1 list

# Create database
wrangler d1 create <name>

# Execute SQL
wrangler d1 execute <database-name> --command "SELECT * FROM users LIMIT 10"

# Execute from file
wrangler d1 execute <database-name> --file schema.sql

# Delete database
wrangler d1 delete <name>
```

### Evidence Collection

After executing, paste results back:
```
Executed: wrangler kv namespace list
Result:
┌──────────────────────────────────────┬─────────────────┐
│ id                                   │ title           │
├──────────────────────────────────────┼─────────────────┤
│ abc123...                            │ feature-flags   │
└──────────────────────────────────────┴─────────────────┘
```

## Option C: Cloudflare Dashboard

### Workers & Pages
1. Go to: `https://dash.cloudflare.com/<account>/workers-and-pages`
2. Click worker to view details
3. Use "Quick Edit" to view/edit code
4. Check "Deployments" tab for build history

### KV
1. Go to: `https://dash.cloudflare.com/<account>/workers/kv/namespaces`
2. Create/delete namespaces
3. Browse keys and values

### R2
1. Go to: `https://dash.cloudflare.com/<account>/r2/overview`
2. Create/delete buckets
3. Upload/download objects

### D1
1. Go to: `https://dash.cloudflare.com/<account>/d1`
2. Create/delete databases
3. Use "Console" to run SQL queries

### Analytics (Observability)
1. Go to: `https://dash.cloudflare.com/<account>/workers/analytics`
2. Filter by worker, time range
3. View requests, errors, CPU time

### Audit Log
1. Go to: `https://dash.cloudflare.com/<account>/audit-log`
2. Filter by time, action type, user
3. Export or screenshot relevant entries

### Evidence Collection

After executing manually, provide:
- Screenshot of result
- Copy/paste of table data
- Resource IDs created/modified

## Choosing a Backend

| Situation | Recommended |
|-----------|-------------|
| Have MCP configured | MCP (fastest) |
| CI/CD or scripting | Wrangler CLI |
| One-off operations | Dashboard |
| Need visual confirmation | Dashboard |
| Real-time log tailing | Wrangler CLI (`wrangler tail`) |
| Team doesn't have MCP | Wrangler CLI or Dashboard |

## Security Notes

All backends follow the same security rules from [SKILL.md](SKILL.md):
- Read before write
- Confirm before create/update/delete
- Verify after execution via audit logs
- Never delete production resources without explicit confirmation

```

### SETUP.md

```markdown
# Cloudflare MCP Setup (Optional)

MCP enables automatic execution of infrastructure operations. This is optional — you can always use Wrangler CLI or Dashboard instead.

## Prerequisites

- Claude Code, Cursor, or another MCP-compatible client
- Cloudflare account with API access

## Connection

### Method 1: Cloudflare MCP Server

```bash
# Add the MCP server
claude mcp add cloudflare

# Authenticate
claude /mcp
```

### Method 2: API Token Authentication

1. Go to: `https://dash.cloudflare.com/profile/api-tokens`
2. Create a token with required permissions
3. Configure in your MCP client

**Recommended token permissions**:

| Permission | Read | Edit | Required For |
|------------|------|------|--------------|
| Account Settings | ✅ | — | accounts_list |
| Workers Scripts | ✅ | ✅ | workers_*, builds_* |
| Workers KV Storage | ✅ | ✅ | kv_* |
| Workers R2 Storage | ✅ | ✅ | r2_* |
| D1 | ✅ | ✅ | d1_* |
| Account Analytics | ✅ | — | observability_* |
| Audit Logs | ✅ | — | auditlogs_* |

## Available MCP Tools

### Diagnose Tier (Read-only)

**Observability**
| Tool | Purpose |
|------|---------|
| `query_worker_observability` | Query logs/metrics (events, CPU, error rate) |
| `observability_keys` | Discover available fields |
| `observability_values` | Explore field values |

**Builds**
| Tool | Purpose |
|------|---------|
| `workers_builds_list_builds` | List build history |
| `workers_builds_get_build` | Get build details |
| `workers_builds_get_build_logs` | Get build logs |

**Browser Rendering**
| Tool | Purpose |
|------|---------|
| `get_url_html_content` | Fetch page HTML |
| `get_url_markdown` | Convert to Markdown |
| `get_url_screenshot` | Take page screenshot |

**Audit**
| Tool | Purpose |
|------|---------|
| `auditlogs_by_account_id` | Pull change history |

**Workers**
| Tool | Purpose |
|------|---------|
| `workers_list` | List workers |
| `workers_get_worker` | Get worker details |
| `workers_get_worker_code` | Get source code |

### Change Tier (Write Operations)

**Account**
| Tool | Purpose |
|------|---------|
| `accounts_list` | List accounts |
| `set_active_account` | Set active account |

**KV**
| Tool | Purpose |
|------|---------|
| `kv_namespaces_list` | List namespaces |
| `kv_namespace_get` | Get details |
| `kv_namespace_create` | Create ⚠️ |
| `kv_namespace_update` | Update ⚠️ |
| `kv_namespace_delete` | Delete ⚠️ |

**R2**
| Tool | Purpose |
|------|---------|
| `r2_buckets_list` | List buckets |
| `r2_bucket_get` | Get details |
| `r2_bucket_create` | Create ⚠️ |
| `r2_bucket_delete` | Delete ⚠️ |

**D1**
| Tool | Purpose |
|------|---------|
| `d1_databases_list` | List databases |
| `d1_database_get` | Get details |
| `d1_database_query` | Execute SQL |
| `d1_database_create` | Create ⚠️ |
| `d1_database_delete` | Delete ⚠️ |

**Hyperdrive**
| Tool | Purpose |
|------|---------|
| `hyperdrive_configs_list` | List configs |
| `hyperdrive_config_get` | Get details |
| `hyperdrive_config_create` | Create ⚠️ |
| `hyperdrive_config_edit` | Edit ⚠️ |
| `hyperdrive_config_delete` | Delete ⚠️ |

⚠️ = Requires confirmation per [SKILL.md](SKILL.md)

### Super Admin Tier (Container Sandbox)

| Tool | Purpose |
|------|---------|
| `container_initialize` | Initialize container (~10 min lifecycle) |
| `container_exec` | Execute command |
| `container_file_write` | Write file |
| `container_file_read` | Read file |
| `container_files_list` | List files |
| `container_file_delete` | Delete file |

## Verification

Test the connection:
```
accounts_list()
```

This should return the list of accounts you have access to.

## Troubleshooting

| Issue | Solution |
|-------|----------|
| Authentication failed | Re-run `claude /mcp` or check API token |
| Account not set | Run `accounts_list` → `set_active_account` first |
| Permission denied | Check API token has required permissions |
| Tool not found | Verify MCP server is properly added |

## Without MCP

If you can't or don't want to use MCP, see [BACKENDS.md](BACKENDS.md) for Wrangler CLI and Dashboard alternatives. All operations in [SKILL.md](SKILL.md) can be performed manually.

```

### scenarios.md

```markdown
# Cloudflare MCP Scenario Examples

20 real-world development scenarios, each annotated with required tools and execution flow.

## Observability (Troubleshooting)

### 1. Worker 5xx Spike Investigation
```
User: My worker had a 5xx spike starting yesterday at 18:00, find the most likely cause and evidence.
Execution:
1. query_worker_observability: filter status >= 500, time range 17:30-19:00
2. Find first error occurrence time and error message
3. auditlogs_by_account_id: check for deployments/config changes during same period
4. Summary: root cause + timeline + evidence + fix recommendations
```

### 2. CPU Trend Analysis
```
User: Pull the CPU time trend for worker `api-gateway` over the last 24h, tell me if there are abnormal spikes.
Execution:
1. query_worker_observability: metric type CPU time, worker name api-gateway
2. Aggregate by hour, identify peak periods
3. Compare against historical baseline, determine if abnormal
4. If spikes found, correlate with logs from same period to find cause
```

## Builds (Build Troubleshooting)

### 3. Build History Review
```
User: List the last 5 builds for `frontend-app`, why did the latest one fail?
Execution:
1. workers_builds_list_builds: worker name frontend-app, limit 5
2. workers_builds_get_build: get failed build details
3. workers_builds_get_build_logs: pull complete logs
4. Extract failure reason + fix recommendations
```

### 4. Build Log Analysis
```
User: Pull logs for build UUID xxx, help me extract the first failure cause and possible fixes.
Execution:
1. workers_builds_get_build_logs: build ID
2. Locate first ERROR/FATAL
3. Analyze dependency issues/syntax errors/config problems
4. Provide specific fix steps
```

## Browser Rendering (Page Capture)

### 5. Page Screenshot Verification
```
User: Take a screenshot of https://my-site.com, see if the top banner loaded.
Execution:
1. Confirm active account (otherwise run accounts_list + set_active_account first)
2. get_url_screenshot: URL
3. Return screenshot + observation conclusions
```

### 6. Page to Markdown Conversion
```
User: Convert an online error page to markdown, I need to paste it into an incident postmortem.
Execution:
1. get_url_markdown: URL
2. Clean up formatting, keep key error information
3. Return ready-to-use markdown
```

## Audit Logs (Change Tracking)

### 7. DNS Change Tracking
```
User: Who changed the DNS records yesterday at noon? Give me the audit records.
Execution:
1. auditlogs_by_account_id: time range yesterday 11:00-14:00
2. Filter action type for DNS-related
3. List: time + operator + specific changes
```

### 8. Weekly Change Report
```
User: Summarize Worker-related key config changes from the past 7 days into a report.
Execution:
1. auditlogs_by_account_id: past 7 days
2. Filter Worker-related actions
3. Group by date
4. Format output report
```

## KV Management

### 9. List KV Namespaces
```
User: List all KV namespaces in my account, find ones named like `prod-*`.
Execution:
1. accounts_list → set_active_account (if not set)
2. kv_namespaces_list
3. Filter names matching prod-*
4. Return list + statistics
```

### 10. Create KV Namespace
```
User: Create a KV namespace called `feature-flags`, and tell me how to bind it to a worker.
Execution:
1. Present plan: create namespace "feature-flags"
2. Await user confirmation
3. kv_namespace_create: name = feature-flags
4. Return creation result + wrangler.toml binding example
```
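Step 4's binding example would be along these lines. The binding name is an assumption; the `id` comes from the create output.

```toml
# wrangler.toml — bind the new namespace to a Worker (illustrative values)
[[kv_namespaces]]
binding = "FEATURE_FLAGS"   # exposed to the Worker as env.FEATURE_FLAGS
id = "<namespace-id-from-create-output>"
```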

### 11. Batch Delete KV (Dangerous)
```
User: Delete all KV namespaces starting with `temp-*` (list them first for my confirmation).
Execution:
1. kv_namespaces_list
2. Filter temp-* prefix
3. List namespaces to be deleted (name + ID)
4. ⚠️ Await explicit user confirmation
5. Delete each with kv_namespace_delete
6. auditlogs_by_account_id to verify deletion records
```

## R2 Management

### 12. R2 Cleanup Recommendations
```
User: List R2 buckets, find ones that might be unused, give me cleanup recommendations.
Execution:
1. r2_buckets_list
2. Analyze by creation time/name pattern
3. Flag potentially abandoned ones (e.g., test-*, tmp-*)
4. Provide cleanup recommendations (don't delete directly)
```

### 13. Create R2 Bucket + Code Example
```
User: Create an R2 bucket called `uploads-prod`, and give me a minimal worker code snippet to read/write it.
Execution:
1. Present plan: create bucket "uploads-prod"
2. Await confirmation
3. r2_bucket_create: name = uploads-prod
4. Return creation result + Worker code example (env.BUCKET.put/get)
```
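Step 4's code example could look like the sketch below. The binding name `UPLOADS`, the narrowed bucket type, and the routing are all assumptions; in a real Worker the binding comes from `[[r2_buckets]]` in wrangler.toml and the full type is `R2Bucket` from `@cloudflare/workers-types`.

```typescript
// Minimal illustrative Worker over one R2 bucket: PUT stores, GET reads.
interface UploadsBucket {
  put(key: string, value: string): Promise<unknown>;
  get(key: string): Promise<{ text(): Promise<string> } | null>;
}
interface Env {
  UPLOADS: UploadsBucket; // assumed binding name from wrangler.toml
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1); // "/a.txt" → "a.txt"
    if (request.method === "PUT") {
      await env.UPLOADS.put(key, await request.text());
      return new Response(`stored ${key}`, { status: 201 });
    }
    const object = await env.UPLOADS.get(key);
    if (object === null) return new Response("not found", { status: 404 });
    return new Response(await object.text());
  },
};

export default worker;
```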

## D1 Management

### 14. D1 Query
```
User: List D1 databases, run `SELECT COUNT(*) FROM users;`.
Execution:
1. d1_databases_list
2. Confirm target database
3. d1_database_query: SELECT COUNT(*) FROM users
4. Return result
```

### 15. D1 Migration Dry Run (Complete Flow)
```
User: I need a temporary D1 for migration dry run: create, run schema, insert test data, then delete (give me confirmation points at each step).
Execution:
1. Present plan: create temp DB → create tables → insert test data → delete
2. After confirmation, d1_database_create
3. After confirmation, d1_database_query: CREATE TABLE ...
4. After confirmation, d1_database_query: INSERT ...
5. After confirmation, d1_database_delete
6. auditlogs to verify
```

## Hyperdrive Management

### 16. Hyperdrive Config Analysis
```
User: List Hyperdrive configs, help me find which one connects to the production database, and suggest cache strategy improvements.
Execution:
1. hyperdrive_configs_list
2. hyperdrive_config_get: check connection strings one by one
3. Identify production database connection
4. Analyze current cache config + optimization recommendations
```

## Workers Code

### 17. Worker Source Code Inspection
```
User: Pull the source code for worker `my-worker-script`, I suspect it has an env variable wrong.
Execution:
1. workers_get_worker: get worker details
2. workers_get_worker_code: get source code
3. Search for env. references
4. Flag suspicious locations
```

## Container Sandbox

### 18. Run Tests
```
User: In the sandbox, clone this repo, run tests, paste any failing tests and errors.
Execution:
1. container_initialize (note: ~10 min lifecycle)
2. container_exec: git clone <repo>
3. container_exec: npm install && npm test
4. Extract failed tests + error messages
5. Summary report
```

### 19. Data Analysis
```
User: Use Python to parse this log/metric export, calculate p95 and error rate changes.
Execution:
1. container_initialize
2. container_file_write: write log data
3. container_file_write: write analysis script
4. container_exec: python analyze.py
5. Return analysis results
```

## End-to-End Workflows

### 20. Build Failure Full-chain Troubleshooting
```
User: Create an automated troubleshooting flow: from build failure → find logs → fix recommendations → verify production recovery.
Execution:
1. workers_builds_list_builds: find failed build
2. workers_builds_get_build_logs: analyze failure cause
3. Provide fix recommendations
4. (After user fixes and redeploys)
5. workers_builds_list_builds: confirm new build succeeded
6. query_worker_observability: confirm no new errors
7. get_url_screenshot: verify page is normal
8. Summary report
```

## General Principles

1. **Read before write**: Always list/get current state before any write operation
2. **Explicit confirmation**: Write operations must present plan, await user confirmation
3. **Post-execution verification**: Audit logs + observability confirm no anomalies
4. **Evidence chain**: Troubleshooting conclusions must have logs/metrics/screenshots supporting them
5. **Split requests**: Break complex queries into smaller pieces to avoid context overload

```
