trigger-cost-savings
Analyze Trigger.dev tasks, schedules, and runs for cost optimization opportunities. Use when asked to reduce spend, optimize costs, audit usage, right-size machines, or review task efficiency. Requires Trigger.dev MCP tools for run analysis.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install triggerdotdev-skills-trigger-cost-savings
Repository
Skill path: trigger-cost-savings
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack, Integration.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: triggerdotdev.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install trigger-cost-savings into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/triggerdotdev/skills before adding trigger-cost-savings to shared team environments
- Use trigger-cost-savings for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: trigger-cost-savings
description: Analyze Trigger.dev tasks, schedules, and runs for cost optimization opportunities. Use when asked to reduce spend, optimize costs, audit usage, right-size machines, or review task efficiency. Requires Trigger.dev MCP tools for run analysis.
---
# Trigger.dev Cost Savings Analysis
Analyze task runs and configurations to find cost reduction opportunities.
## Prerequisites: MCP Tools
This skill requires the **Trigger.dev MCP server** to analyze live run data.
### Check MCP availability
Before analysis, verify these MCP tools are available:
- `list_runs` — list runs with filters (status, task, time period, machine size)
- `get_run_details` — get run logs, duration, and status
- `get_current_worker` — get registered tasks and their configurations
If these tools are **not available**, instruct the user:
```
To analyze your runs, you need the Trigger.dev MCP server installed.
Run this command to install it:
npx trigger.dev@latest install-mcp
This launches an interactive wizard that configures the MCP server for your AI client.
```
Do NOT proceed with run analysis without MCP tools. You can still review source code for static issues (see Static Analysis below).
### Load latest cost reduction documentation
Before giving recommendations, fetch the latest guidance:
```
WebFetch: https://trigger.dev/docs/how-to-reduce-your-spend
```
Use the fetched content to ensure recommendations are current. If the fetch fails, fall back to the reference documentation in `references/cost-reduction.md`.
## Analysis Workflow
### Step 1: Static Analysis (source code)
Scan task files in the project for these issues:
1. **Oversized machines** — tasks using `large-1x` or `large-2x` without clear need
2. **Missing `maxDuration`** — tasks without execution time limits (runaway cost risk)
3. **Excessive retries** — `maxAttempts` > 5 without `AbortTaskRunError` for known failures
4. **Missing debounce** — high-frequency triggers without debounce configuration
5. **Missing idempotency** — payment/critical tasks without idempotency keys
6. **Polling instead of waits** — `setTimeout`/`setInterval`/sleep loops instead of `wait.for()`
7. **Short waits** — `wait.for()` with < 5 seconds (not checkpointed, wastes compute)
8. **Sequential instead of batch** — multiple `triggerAndWait()` calls that could use `batchTriggerAndWait()`
9. **Over-scheduled crons** — schedules running more frequently than necessary
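Several of the checks above can be approximated with plain text heuristics before any MCP analysis. A minimal sketch (string matching over task source, not a real TypeScript parser; the issue names and `Finding` shape are illustrative):

```typescript
// Heuristic static checks over a task's source text.
// A real implementation would walk the TypeScript AST; this is a sketch.
type Finding = { issue: string; hint: string };

function scanTaskSource(source: string): Finding[] {
  const findings: Finding[] = [];
  // Check 1: large machine presets that may not be needed
  if (/preset:\s*"large-[12]x"/.test(source)) {
    findings.push({ issue: "oversized-machine", hint: "Confirm large-1x/2x is actually needed" });
  }
  // Check 2: no execution time limit (runaway cost risk)
  if (!/maxDuration\s*:/.test(source)) {
    findings.push({ issue: "missing-maxDuration", hint: "Add a timeout to cap runaway cost" });
  }
  // Check 6: sleep/poll loops instead of checkpointed waits
  if (/setTimeout|setInterval/.test(source) && !/wait\.for/.test(source)) {
    findings.push({ issue: "polling-loop", hint: "Replace sleep loops with wait.for()" });
  }
  return findings;
}
```

A scan like this only surfaces candidates; each finding still needs a human look at the task before changing its configuration.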
### Step 2: Run Analysis (requires MCP tools)
Use MCP tools to analyze actual usage patterns:
#### 2a. Identify expensive tasks
```
list_runs with filters:
- period: "30d" or "7d"
- Sort by duration or cost
- Check across different task IDs
```
Look for:
- Tasks with high total compute time (duration × run count)
- Tasks with high failure rates (wasted retries)
- Tasks running on large machines with short durations (over-provisioned)
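These signals can be ranked with back-of-envelope arithmetic once run stats are summarized from `list_runs` output. A sketch (the field names here are illustrative, not the MCP response schema):

```typescript
// Rank tasks by estimated relative spend: run count × avg duration × machine rate.
interface TaskStats {
  taskId: string;
  runCount: number;
  avgDurationMs: number;
  machineMultiplier: number; // relative cost, e.g. small-1x = 1, large-2x = 16
}

function rankBySpend(stats: TaskStats[]): TaskStats[] {
  const score = (t: TaskStats) =>
    t.runCount * t.avgDurationMs * t.machineMultiplier;
  // Highest estimated spend first
  return [...stats].sort((a, b) => score(b) - score(a));
}
```

The absolute numbers don't matter here; the ordering tells you which tasks deserve attention first.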
#### 2b. Analyze failure patterns
```
list_runs with status: "FAILED" or "CRASHED"
```
For high-failure tasks:
- Check if failures are retryable (transient) vs permanent
- Suggest `AbortTaskRunError` for known non-retryable errors
- Calculate wasted compute from failed retries
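Wasted compute from retries can be estimated by treating every attempt after the first on a permanently-failing run as waste. A sketch with illustrative field names:

```typescript
// Estimate compute seconds wasted retrying runs that were never going to succeed.
// Attempts after the first on a non-retryable failure are pure waste.
function wastedRetrySeconds(
  failedRuns: { attempts: number; avgAttemptSeconds: number }[]
): number {
  return failedRuns.reduce(
    (sum, r) => sum + Math.max(0, r.attempts - 1) * r.avgAttemptSeconds,
    0
  );
}
```

Multiply the result by the task's machine multiplier to compare it against other savings opportunities.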
#### 2c. Check machine utilization
```
get_run_details for sample runs of each task
```
Compare actual resource usage against machine preset:
- If a task on `large-2x` consistently runs in < 1 second, it's over-provisioned
- If tasks are I/O-bound (API calls, DB queries), they likely don't need large machines
#### 2d. Review schedule frequency
```
get_current_worker to list scheduled tasks and their cron patterns
```
Flag schedules that may be too frequent for their purpose.
### Step 3: Generate Recommendations
Present findings as a prioritized list with estimated impact:
````markdown
## Cost Optimization Report
### High Impact
1. **Right-size `process-images` machine** — Currently `large-2x`, average run 2s.
Switching to `small-2x` could reduce this task's cost by ~8x.
```ts
machine: { preset: "small-2x" } // was "large-2x"
```
### Medium Impact
2. **Add debounce to `sync-user-data`** — 847 runs/day, often triggered in bursts.
```ts
debounce: { key: `user-${userId}`, delay: "5s" }
```
### Low Impact / Best Practices
3. **Add `maxDuration` to `generate-report`** — No timeout configured.
```ts
maxDuration: 300 // 5 minutes
```
````
## Machine Preset Costs (relative)
Larger machines cost proportionally more per second of compute:
| Preset | vCPU | RAM | Relative Cost |
|--------|------|-----|---------------|
| micro | 0.25 | 0.25 GB | 0.25x |
| small-1x | 0.5 | 0.5 GB | 1x (baseline) |
| small-2x | 1 | 1 GB | 2x |
| medium-1x | 1 | 2 GB | 2x |
| medium-2x | 2 | 4 GB | 4x |
| large-1x | 4 | 8 GB | 8x |
| large-2x | 8 | 16 GB | 16x |
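The table translates directly into downsizing arithmetic. A sketch using the multipliers above (the preset names match the table; the function itself is illustrative):

```typescript
// Relative cost multipliers from the preset table (small-1x = 1x baseline).
const PRESET_COST: Record<string, number> = {
  micro: 0.25,
  "small-1x": 1,
  "small-2x": 2,
  "medium-1x": 2,
  "medium-2x": 4,
  "large-1x": 8,
  "large-2x": 16,
};

// Factor by which per-second compute cost drops when moving between presets.
function downsizeFactor(from: string, to: string): number {
  return PRESET_COST[from] / PRESET_COST[to];
}
```

For example, moving a task from `large-2x` to `small-2x` cuts its per-second cost by a factor of 8, assuming runtime stays roughly the same on the smaller machine.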
## Key Principles
- **Waits > 5 seconds are free** — checkpointed, no compute charge
- **Start small, scale up** — default `small-1x` is right for most tasks
- **I/O-bound tasks don't need big machines** — API calls, DB queries wait on network
- **Debounce saves the most on high-frequency tasks** — consolidates bursts into single runs
- **Idempotency prevents duplicate work** — especially important for expensive operations
- **`AbortTaskRunError` stops wasteful retries** — don't retry permanent failures
See `references/cost-reduction.md` for detailed strategies with code examples.
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### references/cost-reduction.md
````markdown
# Cost Reduction Strategies
Detailed strategies for reducing Trigger.dev spend. For the latest version, fetch:
https://trigger.dev/docs/how-to-reduce-your-spend
## 1. Monitor Usage
Review your usage dashboard regularly to identify:
- Most expensive tasks (by total compute time)
- Run counts and daily spikes
- Failure rates and wasted retries
## 2. Configure Billing Alerts
Set up alerts in the Trigger.dev dashboard:
- **Standard alerts**: Notifications at 75%, 90%, 100%, 200%, 500% of budget
- **Spike alerts**: Protection at 10x, 20x, 50x, 100x of monthly budget
Keep spike alerts enabled as a safety net against runaway costs.
## 3. Right-Size Machines
Start with the smallest machine and scale only when necessary:
```ts
// Default (small-1x) is right for most tasks
export const apiTask = task({
id: "call-api",
// No machine preset needed — defaults to small-1x
run: async (payload) => {
const response = await fetch("https://api.example.com/data");
return response.json();
},
});
// Only use larger machines for CPU/memory-intensive work
export const imageProcessor = task({
id: "process-image",
machine: { preset: "medium-1x" }, // Only if actually needed
run: async (payload) => {
// Heavy image processing that needs more RAM
},
});
// Override machine at trigger time for variable workloads
await imageProcessor.trigger(largePayload, {
machine: { preset: "large-1x" }, // Larger only for this specific run
});
```
## 4. Use Idempotency Keys
Prevent duplicate execution of expensive operations:
```ts
import { task, idempotencyKeys } from "@trigger.dev/sdk";
export const expensiveTask = task({
id: "expensive-operation",
run: async (payload: { orderId: string }) => {
const key = await idempotencyKeys.create(`order-${payload.orderId}`);
// This won't re-execute if triggered again with same key
await costlyChildTask.trigger(payload, {
idempotencyKey: key,
idempotencyKeyTTL: "24h",
});
},
});
```
## 5. Parallelize Within Tasks
Consolidate multiple async operations into single tasks instead of spawning many:
```ts
// Expensive: 3 separate task runs
await taskA.triggerAndWait(data);
await taskB.triggerAndWait(data);
await taskC.triggerAndWait(data);
// Cheaper: single task with parallel I/O (when work is I/O-bound)
export const combinedTask = task({
id: "combined-api-calls",
run: async (payload) => {
const [a, b, c] = await Promise.all([
fetch("https://api-a.com"),
fetch("https://api-b.com"),
fetch("https://api-c.com"),
]);
return { a: await a.json(), b: await b.json(), c: await c.json() };
},
});
```
Note: Only use `Promise.all` for regular async operations (fetch, DB queries), NOT for `triggerAndWait()` or `wait.*` calls.
## 6. Optimize Retries
Reduce wasted compute from retries:
```ts
import { task, AbortTaskRunError } from "@trigger.dev/sdk";
export const smartRetryTask = task({
id: "smart-retry",
retry: {
maxAttempts: 3, // Not 10 — be realistic
},
catchError: async ({ error }) => {
// Don't retry known permanent failures
if (error.message?.includes("NOT_FOUND")) {
throw new AbortTaskRunError("Resource not found — won't retry");
}
if (error.message?.includes("UNAUTHORIZED")) {
throw new AbortTaskRunError("Auth failed — won't retry");
}
// Only retry transient errors
},
run: async (payload) => {
// task logic
},
});
```
## 7. Set maxDuration
Prevent runaway tasks from consuming unlimited compute:
```ts
export const boundedTask = task({
id: "bounded-task",
maxDuration: 300, // 5 minutes max
run: async (payload) => {
// If this takes longer than 5 minutes, it's killed
},
});
```
## 8. Use Waitpoints Instead of Polling
Waits > 5 seconds are checkpointed and free:
```ts
// Expensive: polling loop burns compute
export const pollingTask = task({
id: "polling-bad",
run: async (payload) => {
while (true) {
const status = await checkStatus(payload.id);
if (status === "ready") break;
await new Promise((r) => setTimeout(r, 5000)); // WASTES compute
}
},
});
// Free: checkpointed wait
import { wait } from "@trigger.dev/sdk";
export const waitTask = task({
id: "wait-good",
run: async (payload) => {
await wait.for({ minutes: 5 }); // FREE — checkpointed
const status = await checkStatus(payload.id);
if (status !== "ready") {
await wait.for({ minutes: 5 }); // Still free
}
},
});
```
## 9. Debounce High-Frequency Triggers
Consolidate bursts into single executions:
```ts
// Without debounce: 100 webhook events = 100 task runs
await syncTask.trigger({ userId: "123" });
// With debounce: 100 events in 5s = 1 task run
await syncTask.trigger(
{ userId: "123" },
{
debounce: {
key: "sync-user-123",
delay: "5s",
mode: "trailing", // Use latest payload
},
}
);
```
## Cost Checklist
Use this checklist when reviewing tasks:
- [ ] Machine preset matches actual resource needs (start with `small-1x`)
- [ ] `maxDuration` is set to a reasonable limit
- [ ] Retry `maxAttempts` is appropriate (not excessive)
- [ ] `AbortTaskRunError` used for known permanent failures
- [ ] Idempotency keys used for expensive/critical operations
- [ ] `wait.for()` used instead of polling loops (with delays > 5s)
- [ ] Debounce configured for high-frequency trigger sources
- [ ] Batch triggering used instead of sequential `triggerAndWait()` loops
- [ ] Scheduled task frequency matches actual business needs
- [ ] Billing alerts configured in dashboard
````