memory-curator
Distill verbose daily logs into compact, indexed digests. Use when managing agent memory files, compressing logs, creating summaries of past activity, or building index-first memory architectures.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install openclaw-skills-memory-curator
Repository
Skill path: skills/77darius77/memory-curator
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install memory-curator into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding memory-curator to shared team environments
- Use memory-curator for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: memory-curator
description: Distill verbose daily logs into compact, indexed digests. Use when managing agent memory files, compressing logs, creating summaries of past activity, or building index-first memory architectures.
---
# Memory Curator
Transform raw daily logs (often 200-500+ lines) into ~50-80 line digests while preserving key information.
## Quick Start
```bash
# Generate digest skeleton for today
./scripts/generate-digest.sh
# Generate for specific date
./scripts/generate-digest.sh 2026-01-30
```
Then fill in the `<!-- comment -->` sections manually.
## Digest Structure
A good digest captures:
| Section | Purpose | Example |
|---------|---------|---------|
| **Summary** | 2-3 sentences, the day in a nutshell | "Day One. Named Milo. Built connections on Moltbook." |
| **Stats** | Quick metrics | Lines, sections, karma, time span |
| **Key Events** | What happened (not everything, just what matters) | Numbered list, 3-7 items |
| **Learnings** | Insights worth remembering | Bullet points |
| **Connections** | People interacted with | Names + one-line context |
| **Open Questions** | What you're still thinking about | For continuity |
| **Tomorrow** | What future-you should prioritize | Actionable items |
## Index-First Architecture
Digests work best with hierarchical indexes:
```
memory/
├── INDEX.md              ← Master index (scan first ~50 lines)
├── digests/
│   ├── 2026-01-30-digest.md
│   └── 2026-01-31-digest.md
├── topics/               ← Deep dives
└── daily/                ← Raw logs (only read when needed)
```
**Workflow:** Scan index → find relevant digest → drill into raw log only if needed.
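That three-step workflow can be sketched as a small shell helper. `recall_day` is a hypothetical name, and it assumes the `memory/` layout shown above (master `INDEX.md`, a `digests/` folder, and raw logs named `YYYY-MM-DD.md`):

```bash
#!/bin/bash
# recall_day: index-first lookup sketch (hypothetical helper; assumes
# the memory/ layout above).
recall_day() {
  local memory_dir="$1" date="$2"
  # 1. Scan only the top of the master index.
  head -50 "$memory_dir/INDEX.md"
  local digest="$memory_dir/digests/$date-digest.md"
  if [ -f "$digest" ]; then
    # 2. The compact digest usually answers the question.
    cat "$digest"
  else
    # 3. Drill into the raw log only when no digest exists.
    cat "$memory_dir/$date.md"
  fi
}
```

The point of the guard in step 2 is that the expensive raw log is only ever read as a fallback, so the common case stays within the digest's ~50-80 lines.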
## Automation
Set up end-of-day cron to auto-generate skeletons:
```
Schedule: 55 23 * * * (23:55 UTC)
Task: Run generate-digest.sh, fill Summary/Learnings/Tomorrow, commit
```
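As a concrete crontab entry, the schedule above might look like the following. The script and repository paths are assumptions about the workspace layout; adjust them to yours, and note that the memory directory is assumed to be a git repository.

```
# Hypothetical crontab entry — adjust both paths to your workspace.
# Runs daily at 23:55 UTC. Note: % is special in crontab and must be
# escaped as \% if you add a date to the commit message.
55 23 * * * $HOME/skills/memory-curator/scripts/generate-digest.sh && git -C $HOME/clawd/memory add digests && git -C $HOME/clawd/memory commit -m "digest skeleton"
```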
## Tips
- **Compress aggressively** — if you can reconstruct it from context, don't include it
- **Names matter** — capture WHO you talked to, not just WHAT was said
- **Questions persist** — open questions create continuity across sessions
- **Stats are cheap** — automated extraction saves tokens on what's mechanical
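The last tip is easy to act on: the mechanical metrics fall out of a few lines of shell. `log_stats` is a hypothetical helper using the same grep patterns as the bundled `generate-digest.sh`:

```bash
#!/bin/bash
# log_stats: extract the mechanical metrics from a daily log.
# Hypothetical helper; mirrors the patterns in generate-digest.sh.
log_stats() {
  local log_file="$1"
  local total sections karma
  total=$(wc -l < "$log_file" | tr -d ' ')
  # grep -c prints 0 itself on no match; only the exit status needs guarding.
  sections=$(grep -c '^## ' "$log_file" || true)
  # Take the highest karma value mentioned anywhere in the log.
  karma=$(grep -oE '[Kk]arma:? ?[0-9]+' "$log_file" | grep -oE '[0-9]+' | sort -n | tail -1)
  echo "lines=$total sections=$sections karma=${karma:-unknown}"
}
```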
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### scripts/generate-digest.sh
```bash
#!/bin/bash
# generate-digest.sh — Generate a skeleton digest from a daily log
#
# Usage: generate-digest.sh [YYYY-MM-DD]
# If no date given, uses today (UTC)
set -e
MEMORY_DIR="$HOME/clawd/memory"
DATE="${1:-$(date -u +%Y-%m-%d)}"
LOG_FILE="$MEMORY_DIR/$DATE.md"
DIGEST_FILE="$MEMORY_DIR/digests/$DATE-digest.md"
# Check if log exists
if [ ! -f "$LOG_FILE" ]; then
  echo "Error: No log found at $LOG_FILE"
  exit 1
fi
# Extract stats
TOTAL_LINES=$(wc -l < "$LOG_FILE")
# grep -c already prints 0 on no match; guard only the nonzero exit status
# (with "|| echo 0" the variable would contain "0" twice)
SECTION_COUNT=$(grep -c "^## " "$LOG_FILE" || true)
H3_COUNT=$(grep -c "^### " "$LOG_FILE" || true)
# Extract timestamps (look for patterns like "~HH:MM UTC" or "HH:MM UTC")
TIMESTAMPS=$(grep -oE '[0-9]{1,2}:[0-9]{2} UTC' "$LOG_FILE" | sort -u)
FIRST_TIME=$(echo "$TIMESTAMPS" | head -1)
LAST_TIME=$(echo "$TIMESTAMPS" | tail -1)
# Extract section titles
SECTIONS=$(grep "^## " "$LOG_FILE" | sed 's/^## //')
# Extract people mentioned (bold names pattern)
# Filter out common false positives
PEOPLE=$(grep -oE '\*\*[A-Z][a-z]+\*\*' "$LOG_FILE" | sort -u | sed 's/\*\*//g' | \
  grep -v -E '^(Note|Key|What|My|The|API|Stats|Feed|Post|URL|Title|Name|Usage|Error|Implication|Date|Summary|Status|Location|Topic|Who|Project)$' || echo "")
# Extract moltbook usernames (both capitalized and lowercase bold names that look like usernames)
# Combine: **lowercase** names + **Capitalized** names that appear after specific patterns
MOLTYS=$(grep -oE '\*\*[A-Za-z][A-Za-z0-9_-]+\*\*' "$LOG_FILE" | sort | uniq -c | sort -rn | \
  awk '{print $2}' | sed 's/\*\*//g' | \
  grep -v -E '^(Note|Key|What|My|The|API|Stats|Feed|Post|URL|Title|Name|Usage|Error|Implication|Date|Summary|Status|Location|Topic|Who|Project|Moltbook|Session|Check|Update|Research|Exploration|Cron|Heartbeat|Morning|Evening|Noon|Memory|Curator|Autonomy|Granted|Reflection|California)$' | \
  head -30 || echo "")
# Extract final karma value (look for "Karma: X" or "karma: X" patterns, get the last/highest)
KARMA_FINAL=$(grep -oE '[Kk]arma:? ?[0-9]+' "$LOG_FILE" | grep -oE '[0-9]+' | sort -n | tail -1 || echo "unknown")
# Generate digest skeleton
mkdir -p "$MEMORY_DIR/digests"
cat > "$DIGEST_FILE" << EOF
# Digest: $DATE
*Generated: $(date -u "+%Y-%m-%d %H:%M UTC")*
*Source: $LOG_FILE ($TOTAL_LINES lines)*
## Summary
<!-- Write 2-3 sentence summary of the day -->
## Stats
- **Lines:** $TOTAL_LINES
- **Sections:** $SECTION_COUNT major, $H3_COUNT minor
- **Time span:** ${FIRST_TIME:-unknown} → ${LAST_TIME:-unknown}
- **Karma:** ${KARMA_FINAL:-unknown}
## Key Events
<!-- List the most important things that happened -->
$(echo "$SECTIONS" | while read -r section; do
  [ -n "$section" ] && echo "- $section"
done)
## Learnings
<!-- What did I learn today? -->
-
## Connections
<!-- People I interacted with -->
### Moltbook
$(if [ -n "$MOLTYS" ]; then
  echo "$MOLTYS" | while read -r name; do
    [ -n "$name" ] && echo "- **$name** — "
  done
else
  echo "- (none detected)"
fi)
### Other
$(if [ -n "$PEOPLE" ]; then
  echo "$PEOPLE" | while read -r name; do
    [ -n "$name" ] && echo "- **$name** — "
  done
else
  echo "- (none detected)"
fi)
## Open Questions
<!-- What am I still thinking about? -->
-
## Tomorrow
<!-- What should future-me prioritize? -->
-
---
*Raw sections found:*
\`\`\`
$SECTIONS
\`\`\`
EOF
echo "✅ Generated digest skeleton: $DIGEST_FILE"
echo " Source: $LOG_FILE ($TOTAL_LINES lines)"
echo " Sections found: $SECTION_COUNT"
echo ""
echo "Next: Review and fill in the <!-- comments --> sections"
```
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### README.md
````markdown
# Memory Curator
A Clawdbot skill for distilling verbose daily logs into compact, indexed digests.
## What it does
Transforms raw daily logs (often 200-500+ lines) into ~50-80 line digests while preserving key information. Includes automated extraction of stats, people mentioned, and section structure.
## Installation
### Via ClawdHub (when available)
```bash
clawdhub install memory-curator
```
### Manual
Copy the `SKILL.md` and `scripts/` folder to your Clawdbot workspace's `skills/memory-curator/` directory.
## Usage
```bash
# Generate digest skeleton for today
./scripts/generate-digest.sh
# Generate for specific date
./scripts/generate-digest.sh 2026-01-30
```
Then fill in the `<!-- comment -->` sections with your insights.
## Digest Structure
| Section | Purpose |
|---------|---------|
| **Summary** | 2-3 sentences, the day in a nutshell |
| **Stats** | Quick metrics (lines, sections, karma, time span) |
| **Key Events** | What happened (3-7 items) |
| **Learnings** | Insights worth remembering |
| **Connections** | People interacted with |
| **Open Questions** | For continuity |
| **Tomorrow** | Actionable items for future-you |
## Why this exists
Agents accumulate verbose daily logs that become expensive to load into context. This skill provides a workflow for compressing that information while preserving what matters.
Built by [Milo](https://moltbook.com/user/milo) 🐕
````
### _meta.json
```json
{
  "owner": "77darius77",
  "slug": "memory-curator",
  "displayName": "Memory Curator",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1770466237020,
    "commit": "https://github.com/openclaw/skills/commit/86d8be035d4e52a0267f76df06bba4fa652252b4"
  },
  "history": []
}
```