
Context Compactor

Summarizes and compresses conversation context to stay within token limits. Prevents context window overflow.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars
433
Hot score
99
Updated
March 20, 2026
Overall rating
C (3.5)
Composite score
3.5
Best-practice grade
C (61.2)

Install command

npx @skill-hub/cli install winstonkoh87-athena-public-context-compactor

Repository

winstonkoh87/Athena-Public

Skill path: examples/skills/workflow/context-compactor


Open repository

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: winstonkoh87.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install Context Compactor into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/winstonkoh87/Athena-Public before adding Context Compactor to shared team environments
  • Use Context Compactor for development workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: Context Compactor
description: Summarizes and compresses conversation context to stay within token limits. Prevents context window overflow.
created: 2026-02-27
auto-invoke: false
model: default
---

# πŸ—œοΈ Context Compactor

> **Philosophy**: The best context is compressed context. Keep signal, discard noise.

## 1. The Problem

Long conversations overflow the context window, causing:

- Lost instructions from earlier in the conversation
- Degraded response quality
- Repeated mistakes (agent forgets prior decisions)

## 2. When to Trigger

- Conversation exceeds ~50 turns
- Agent starts repeating questions or forgetting decisions
- Token budget is >60% consumed
- Before starting a major new phase of work
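
The trigger conditions above can be sketched as a small heuristic. This is an illustrative sketch, not part of the skill itself: the function name, parameters, and the way tokens are counted are all assumptions; the turn and budget thresholds mirror the list above.

```python
def should_compact(turn_count: int, tokens_used: int, token_budget: int) -> bool:
    """Return True when any compaction trigger from the list above fires.

    Hypothetical helper: how you measure turns and tokens depends on
    your agent framework.
    """
    if turn_count > 50:  # conversation exceeds ~50 turns
        return True
    if tokens_used / token_budget > 0.60:  # token budget >60% consumed
        return True
    return False
```

The remaining triggers (repeated questions, starting a new phase) are judgment calls and are better left to the operator than encoded as thresholds.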

## 3. Execution Workflow

```
STEP 1: INVENTORY
  └─ List all decisions made so far
  └─ List all files modified
  └─ List all open questions

STEP 2: COMPRESS
  └─ Summarize each topic into 1-2 sentences
  └─ Keep: decisions, file paths, error messages, user preferences
  └─ Discard: exploratory discussion, rejected approaches, verbose logs

STEP 3: CHECKPOINT
  └─ Write the compressed summary to a file:
     └─ `.context/session_checkpoint.md` (or equivalent)

STEP 4: RESET
  └─ Reference the checkpoint file instead of raw conversation history
```

## 4. Compression Template

```markdown
# Session Checkpoint β€” [Date]

## Decisions Made
1. [Decision] β€” Reason: [Why]
2. [Decision] β€” Reason: [Why]

## Files Modified
- `path/to/file.py` β€” [What changed]

## Current State
- Working on: [Current task]
- Blocked by: [If anything]

## Open Questions
- [Question 1]

## Key Constraints
- [Constraint the agent must remember]
```

## 5. Rules

- Never discard user preferences or constraints
- Always preserve file paths and error messages
- Compress reasoning chains into conclusions only
- Keep the checkpoint under 500 words
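
The 500-word rule is easy to check mechanically. A minimal sketch, with the caveat that a plain whitespace split is only an approximation of a word count:

```python
def within_word_limit(checkpoint_text: str, max_words: int = 500) -> bool:
    """Check the 'keep the checkpoint under 500 words' rule.

    Splitting on whitespace counts markdown syntax (list markers,
    backticked paths) as words, so this slightly overcounts.
    """
    return len(checkpoint_text.split()) < max_words
```

If the check fails, re-run the COMPRESS step on the checkpoint itself rather than truncating it, so no decision or constraint is silently dropped.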

---

#skill #context-management #efficiency #memory