SkillHub Club · Write Technical Docs · Full Stack · Tech Writer

palest-ink

Track and recall your daily activities including git commits, web browsing, shell commands, and VS Code edits. Use this skill whenever the user asks about their recent activity, wants a daily report or summary, asks "what did I do today/yesterday/this week", wants to find a specific commit or webpage they visited, asks about browsing history, needs to recall any past work activity, or queries about specific content they viewed online. Also trigger when the user mentions palest-ink, activity tracking, daily log, work journal, daily report, or activity summary. Trigger for questions like "which website had info about X", "when did I commit the code for Y", "show my git activity".

Packaged view

This page reorganizes the original catalog entry to lead with fit, installability, and workflow context. The original raw source appears below.

Stars
3,125
Hot score
99
Updated
March 20, 2026
Overall rating
C (4.0)
Composite score
4.0
Best-practice grade
C (62.8)

Install command

npx @skill-hub/cli install openclaw-skills-palest-ink

Repository

openclaw/skills

Skill path: skills/billhandsome52/palest-ink


Open repository

Best for

Primary workflow: Write Technical Docs.

Technical facets: Full Stack, Tech Writer.

Target audience: everyone.

License: Unknown (the bundled README states MIT).

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry; review the repository before installing it into shared or production workflows.

What it helps with

  • Install palest-ink into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding palest-ink to shared team environments
  • Use palest-ink to track, search, and summarize daily development activity

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: palest-ink
description: >
  Track and recall your daily activities including git commits, web browsing,
  shell commands, and VS Code edits. Use this skill whenever the user asks about
  their recent activity, wants a daily report or summary, asks "what did I do
  today/yesterday/this week", wants to find a specific commit or webpage they
  visited, asks about browsing history, needs to recall any past work activity,
  or queries about specific content they viewed online. Also trigger when the
  user mentions palest-ink, activity tracking, daily log, work journal, daily
  report, or activity summary. Trigger for questions like "which website had
  info about X", "when did I commit the code for Y", "show my git activity".
tools: Read, Glob, Grep, Bash
---

# Palest Ink (淡墨) — Activity Tracker & Daily Reporter

> 好记性不如烂笔头 — The faintest ink is better than the strongest memory.

## Overview

Palest Ink tracks the user's daily activities automatically:
- **Git operations**: commits, pushes, pulls, branch switches
- **Web browsing**: Chrome & Safari history with page content summaries
- **Shell commands**: zsh/bash command history with execution duration
- **VS Code edits**: recently opened/edited files
- **App focus**: which application is in the foreground, with time duration
- **File changes**: files modified in watched directories

All data is stored locally at `~/.palest-ink/data/YYYY/MM/DD.jsonl`.

## Setup Check

Before answering any query, first check if Palest Ink is installed:

```bash
test -f ~/.palest-ink/config.json && echo "INSTALLED" || echo "NOT_INSTALLED"
```

If NOT installed, tell the user:
> Palest Ink is not yet set up. To install, run:
> ```bash
> bash <SKILL_PATH>/../../collectors/install.sh
> ```
> This will set up automatic tracking of git, browsing, and shell activity.

Then stop and wait for the user to install.

## Answering Queries

### Daily Report / "What did I do today?"

Run the report generator:

```bash
python3 <SKILL_PATH>/scripts/report.py --date today
```

- Yesterday: `--date yesterday`
- A specific date: `--date 2026-03-03`
- The whole week: `--week`

Read the output and present it conversationally to the user. Highlight notable patterns
(focused work sessions, recurring topics, etc.).

### Searching for Specific Activities

Use the query tool to search activity records:

```bash
python3 <SKILL_PATH>/scripts/query.py --date today --type git_commit --search "plugin"
```

**Common query patterns:**

| User asks about... | Arguments |
|---------------------|-----------|
| A git commit | `--type git_commit --search "keyword"` |
| A webpage about X | `--type web_visit --search-content "keyword"` |
| Shell commands | `--type shell_command --search "keyword"` |
| VS Code files | `--type vscode_edit --search "keyword"` |
| App focus / screen time | `--type app_focus --summary` |
| File changes in project | `--type file_change --search "project"` |
| Everything today | `--date today --summary` |
| Date range | `--from 2026-03-01 --to 2026-03-07` |

**Important:** When the user searches for web page content (e.g., "which website talked about homebrew"),
use `--search-content` instead of `--search`. This searches within page content summaries and keywords,
not just URLs and titles.

### Status Check

Show collector status and data statistics:

```bash
python3 <SKILL_PATH>/scripts/status.py
```

If the output contains "CLEANUP RECOMMENDED", proactively tell the user:
> "Your palest-ink data is approaching 2 GB. Would you like me to clean up older records?"

If the user agrees, first show a dry-run preview:

```bash
python3 ~/.palest-ink/bin/cleanup.py --dry-run
```

Present the preview (how many files, date range, records count, space to free).
Then ask for explicit confirmation before actually deleting:

```bash
python3 ~/.palest-ink/bin/cleanup.py --force
```

Options:
- `--max-size N` — threshold in GB (default: 2.0)
- `--keep-days N` — always keep the most recent N days (default: 30)
- `--dry-run` — preview only, no changes
- `--force` — skip the interactive prompt (use after user confirms in chat)

## Fallback: Direct File Reading

If the scripts fail, or for simple lookups, read the JSONL files directly:

1. Construct the file path: `~/.palest-ink/data/YYYY/MM/DD.jsonl`
2. Use Grep to search: `grep "keyword" ~/.palest-ink/data/2026/03/03.jsonl`
3. Each line is a JSON object with fields: `ts`, `type`, `source`, `data`
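When reading files directly, the parsing loop can be sketched as follows (a minimal sketch; `read_day` is a hypothetical helper, not part of the shipped scripts):

```python
import json
from pathlib import Path

def read_day(path):
    """Read one day's JSONL file, skipping blank and corrupt lines
    the same way the bundled scripts do."""
    records = []
    p = Path(path).expanduser()
    if not p.exists():
        return records
    for line in p.read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # tolerate partially written lines
        # keep only records with the common top-level schema
        if isinstance(rec, dict) and {"ts", "type", "source", "data"} <= rec.keys():
            records.append(rec)
    return records
```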

## Data Schema

### Activity Types

- `git_commit` — data: repo, branch, hash, message, files_changed, insertions, deletions
- `git_push` — data: repo, branch, remote, remote_url
- `git_pull` — data: repo, branch, is_squash
- `git_checkout` — data: repo, from_ref, to_branch
- `web_visit` — data: url, title, visit_duration_seconds, browser, content_summary, content_keywords
- `shell_command` — data: command, duration_seconds (null if not available)
- `vscode_edit` — data: file_path, workspace, language
- `app_focus` — data: app_name, window_title, duration_seconds
- `file_change` — data: path, workspace, language, event

### Web Visit Content

Web visits include a `content_summary` field (up to 800 chars of page text) and
`content_keywords` (extracted keywords). This enables content-based search.

Example: if the user browsed a page titled "Homebrew installation guide", the `content_summary`
contains the actual page text, so the visit is searchable even when the URL and title don't mention the topic.
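That matching behavior can be sketched as follows (illustrative only; `match_content` is a hypothetical helper mirroring what `--search-content` adds on top of plain `--search`):

```python
def match_content(record, term):
    """Return True when the term appears in a web_visit record's URL, title,
    content summary, or extracted keywords."""
    data = record.get("data", {})
    haystack = " ".join(
        [data.get("url", ""), data.get("title", ""), data.get("content_summary", "")]
        + [str(k) for k in data.get("content_keywords", [])]
    ).lower()
    return term.lower() in haystack
```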

## Tips for Good Answers

1. When showing git activity, include the commit message and changed files
2. When showing web visits, include both the title and a brief content summary
3. For "what did I do" questions, give a narrative summary, not just raw data
4. Group related activities together (e.g., "You worked on project X, making 5 commits...")
5. If the search returns too many results, help the user narrow down
6. Mention the time of activities to give temporal context


---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### README.md

```markdown
# Palest Ink (淡墨)

> 好记性不如烂笔头 — The faintest ink is better than the strongest memory.

A Claude Code skill that automatically tracks your daily development activities and helps you recall what you've done — without lifting a finger.

## Features

### Data Collection (automatic, every 15 seconds via launchd)

| Collector | What it tracks |
|-----------|---------------|
| **Git** | Commits, pushes, pulls, branch switches (via global git hooks + periodic scan) |
| **Browser** | Chrome & Safari history with full page content summaries |
| **Shell** | zsh/bash command history with execution duration |
| **VS Code** | Recently opened/edited files |
| **App Focus** | Which application is in the foreground and how long |
| **File Changes** | Files modified in your watched project directories |

### Reports

Ask Claude *"What did I do today?"* and get a structured daily report including:

- **Timeline** — chronological activity across morning / afternoon / evening / night
- **Git Activity** — commits per repo with line change stats
- **Top Websites** — domains visited with visit counts
- **Files Edited** — VS Code edits grouped by language
- **Focus Sessions (专注时段)** — continuous blocks of focus on a single app (e.g. "09:00–11:15 Cursor 2h15m")
- **App Usage (应用使用时长)** — top 5 apps by time, plus a breakdown by category (development / browser / communication)
- **Shell Stats (Shell 命令统计)** — most-used commands and slowest commands by duration
- **Cross-domain Correlation (跨域关联)** — web research topics detected in the 2 hours before each git commit

### Natural Language Search

| You ask | How it's answered |
|---------|-------------------|
| "Which commit touched the auth code?" | Searches `git_commit` records by keyword |
| "Which website had info about Homebrew?" | Searches **page content summaries**, not just titles/URLs |
| "How long did I use Cursor today?" | Queries `app_focus` records and sums duration |
| "What shell commands took more than 30 seconds?" | Queries `shell_command` with `duration_seconds` |
| "What files changed in the palest-ink project?" | Queries `file_change` records |
| "Show my activity from last Monday to Wednesday" | Date-range query across multiple days |

## Installation

### 1. Install the collectors

```bash
bash collectors/install.sh
```

This will:
- Create `~/.palest-ink/` for storing activity data
- Write a default `config.json`
- Install git hooks globally (`post-commit`, `post-merge`, `post-checkout`, `pre-push`)
- Install a **launchd agent** that runs every 15 seconds (replaces cron)

### 2. Grant permissions

**Full Disk Access** (for Safari history):
> System Settings → Privacy & Security → Full Disk Access → enable Terminal.app

**Accessibility** (for app focus tracking):
> System Settings → Privacy & Security → Accessibility → enable Terminal.app

### 3. Install as a Claude Code skill

Add the skill to your Claude Code settings by pointing to the `skills/` directory, or follow Claude Code's plugin installation instructions.

### 4. (Optional) Configure watched directories

Edit `~/.palest-ink/config.json` to set which directories to monitor for file changes:

```json
"watched_dirs": ["/Users/you/projects/myapp", "/Users/you/work"]
```

If `watched_dirs` is empty, the file change collector falls back to `tracked_repos`.

## Usage Examples

Once installed, just talk to Claude:

```
"今天做了什么?"
"What did I do today?"
"Show my git activity this week"
"Which website had information about JWT authentication?"
"How long was I in Cursor vs Chrome today?"
"What files did I change in the palest-ink project today?"
"Which shell commands took the longest to run this week?"
"Show me what I researched before my last commit"
```

## Data Storage

All data stays local; nothing ever leaves your machine.

```
~/.palest-ink/
├── config.json            # Configuration and collector state
├── data/
│   └── YYYY/MM/DD.jsonl   # Activity records (one JSON object per line)
├── reports/               # Saved report files
├── hooks/                 # Git hook scripts
├── bin/                   # Collector scripts (copied from collectors/)
├── tmp/                   # Lock files, marker files, temp files
└── cron.log               # Collection log
```

Each record follows a common schema:

```json
{
  "ts": "2026-03-03T09:14:22+00:00",
  "type": "git_commit",
  "source": "git_hook",
  "data": { ... }
}
```

Supported types: `git_commit`, `git_push`, `git_pull`, `git_checkout`, `web_visit`, `shell_command`, `vscode_edit`, `app_focus`, `file_change`.

See [`skills/palest-ink/references/schema.md`](skills/palest-ink/references/schema.md) for full field definitions.

## Configuration

`~/.palest-ink/config.json` — key fields:

```json
{
  "collectors": {
    "chrome": true,
    "safari": true,
    "shell": true,
    "vscode": true,
    "git_hooks": true,
    "git_scan": true,
    "content": true,
    "app": true,
    "fsevent": true
  },
  "tracked_repos": ["/path/to/repo"],
  "watched_dirs": [],
  "exclude_patterns": {
    "urls": ["chrome://", "chrome-extension://", "about:", "file://"],
    "commands": ["^ls$", "^cd ", "^clear$", "^pwd$", "^exit$", "^history"]
  },
  "content_fetch": {
    "max_urls_per_run": 50,
    "summary_max_chars": 800,
    "timeout_seconds": 10
  },
  "app": {
    "min_focus_seconds": 600,
    "exclude": ["loginwindow", "Dock", "SystemUIServer", "Finder", "ScreenSaverEngine"]
  },
  "app_categories": {
    "development": ["Cursor", "Code", "Xcode", "Terminal", "iTerm2", "Warp"],
    "browser": ["Google Chrome", "Safari", "Firefox", "Arc"],
    "communication": ["Slack", "WeChat", "Discord", "Zoom"]
  }
}
```

## Uninstallation

```bash
bash collectors/uninstall.sh
```

Removes the launchd agent, restores git hooks, and optionally deletes collected data.

## Privacy

- All data is stored **locally** — nothing is sent anywhere
- URL exclusion patterns filter out browser-internal and sensitive pages
- Command exclusion patterns filter out noise (`ls`, `cd`, `clear`, etc.)
- App focus collection skips the lock screen (`loginwindow`, `ScreenSaverEngine`)
- Accessible only to your local user account

## License

MIT

```

### _meta.json

```json
{
  "owner": "billhandsome52",
  "slug": "palest-ink",
  "displayName": "Palest Ink - Activity Tracker",
  "latest": {
    "version": "0.1.0",
    "publishedAt": 1772605191857,
    "commit": "https://github.com/openclaw/skills/commit/75f0828866efbd1d93142a0868264738ee786a2b"
  },
  "history": [
    {
      "version": "1.0.0",
      "publishedAt": 1772502125574,
      "commit": "https://github.com/openclaw/skills/commit/596b87c122f43a85b0583468f7932cd2d6d7ee34"
    }
  ]
}

```

### references/schema.md

```markdown
# Palest Ink Data Schema

## Storage Location

All activity data is stored at `~/.palest-ink/data/YYYY/MM/DD.jsonl`.
Each file contains one day's activities, one JSON record per line.

## Record Format

Every record has the same top-level structure:

```json
{
  "ts": "2026-03-03T14:22:31Z",
  "type": "git_commit",
  "source": "git_hook",
  "data": { ... }
}
```

| Field | Type | Description |
|-------|------|-------------|
| `ts` | string | ISO 8601 timestamp |
| `type` | string | Activity type (see below) |
| `source` | string | Which collector produced this record |
| `data` | object | Type-specific data fields |

## Activity Types

### git_commit
Source: `git_hook` or `git_scan`

```json
{
  "repo": "/Users/xuyun/my-project",
  "branch": "main",
  "hash": "a1b2c3d",
  "message": "Fix login validation bug",
  "files_changed": ["src/auth.py", "tests/test_auth.py"],
  "insertions": 42,
  "deletions": 15
}
```

### git_push
Source: `git_hook`

```json
{
  "repo": "/Users/xuyun/my-project",
  "branch": "main",
  "remote": "origin",
  "remote_url": "[email protected]:user/repo.git"
}
```

### git_pull
Source: `git_hook`

```json
{
  "repo": "/Users/xuyun/my-project",
  "branch": "main",
  "is_squash": false
}
```

### git_checkout
Source: `git_hook`

```json
{
  "repo": "/Users/xuyun/my-project",
  "from_ref": "main",
  "to_branch": "feature/auth"
}
```

### web_visit
Source: `chrome_collector` or `safari_collector`

```json
{
  "url": "https://brew.sh/",
  "title": "Homebrew — The Missing Package Manager for macOS",
  "visit_duration_seconds": 120,
  "browser": "chrome",
  "content_summary": "Homebrew installs the stuff you need...",
  "content_keywords": ["homebrew", "package manager", "macOS", "install"],
  "content_pending": false
}
```

**Content fields:**
- `content_pending`: `true` if content hasn't been fetched yet
- `content_summary`: First 800 chars of extracted page text
- `content_keywords`: Top 10 keywords extracted from page content
- `content_error`: `true` if content fetch failed

### shell_command
Source: `shell_collector`

```json
{
  "command": "git log --oneline -5"
}
```

### vscode_edit
Source: `vscode_collector`

```json
{
  "file_path": "/Users/xuyun/project/src/main.py",
  "workspace": "/Users/xuyun/project",
  "language": "python",
  "is_folder": false
}
```

### app_focus
Source: `app_collector`

```json
{
  "app_name": "Cursor",
  "window_title": "palest_ink — main.py",
  "duration_seconds": 45
}
```

Records the frontmost application and window. When the same app+window is detected in consecutive collection cycles, `duration_seconds` is accumulated in-place rather than creating new records.
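The accumulation rule can be sketched as follows (an illustrative model, not the collector's actual code; `merge_focus` and the 15-second cycle constant are assumptions based on the README's collection interval):

```python
CYCLE_SECONDS = 15  # collection interval (the launchd agent runs every 15 seconds)

def merge_focus(last_record, app_name, window_title):
    """If the frontmost app+window is unchanged since the previous cycle,
    extend the existing record's duration in place; otherwise start a new record.
    Returns (record, created) where created is True for a new record."""
    if (last_record is not None
            and last_record["data"]["app_name"] == app_name
            and last_record["data"]["window_title"] == window_title):
        last_record["data"]["duration_seconds"] += CYCLE_SECONDS
        return last_record, False
    new_record = {"type": "app_focus",
                  "data": {"app_name": app_name,
                           "window_title": window_title,
                           "duration_seconds": CYCLE_SECONDS}}
    return new_record, True
```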

### file_change
Source: `fsevent_collector`

```json
{
  "path": "/Users/xuyun/project/src/main.py",
  "workspace": "/Users/xuyun/project",
  "language": "python",
  "event": "modified"
}
```

Detected via `find -newer <marker>` on directories listed in `watched_dirs` (or `tracked_repos` as fallback). The `workspace` field is the nearest parent directory containing a `.git` folder.
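The detection step can be sketched as follows (paths and the marker name are illustrative, not the collector's actual configuration):

```shell
# The collector touches a marker file each cycle, then finds anything newer.
watch_dir=$(mktemp -d)                  # stand-in for a watched_dirs entry
marker="$watch_dir/.marker"             # touched at the end of each cycle
touch "$marker"
sleep 1                                 # ensure a newer mtime on coarse-grained filesystems
echo "edit" > "$watch_dir/main.py"      # simulate a file modification
find "$watch_dir" -type f -newer "$marker" -not -path "*/.git/*" -not -name ".marker"
```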

### shell_command (enhanced)
Source: `shell_collector`

```json
{
  "command": "git log --oneline -5",
  "duration_seconds": 2
}
```

`duration_seconds` is extracted from the zsh extended history format (`: timestamp:duration;command`). Set to `null` for bash or simple-format zsh history.
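Parsing that format can be sketched as follows (a minimal regex, not the collector's actual implementation):

```python
import re

# zsh EXTENDED_HISTORY lines look like ": 1772605191:2;git log --oneline -5"
EXT_HISTORY = re.compile(r"^: (?P<ts>\d+):(?P<dur>\d+);(?P<cmd>.*)$")

def parse_history_line(line):
    """Return (command, duration_seconds); duration is None for plain-format lines."""
    m = EXT_HISTORY.match(line)
    if m:
        return m.group("cmd"), int(m.group("dur"))
    return line, None  # bash or simple-format zsh history: no duration available
```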

## Configuration

Config file: `~/.palest-ink/config.json`

Key fields:
- `collectors`: Enable/disable individual collectors (including `app` and `fsevent`)
- `tracked_repos`: List of git repo paths for git_scan collector
- `watched_dirs`: Directories to monitor for file changes (falls back to `tracked_repos` if empty)
- `exclude_patterns.urls`: URL prefixes to ignore
- `exclude_patterns.commands`: Regex patterns for commands to ignore
- `content_fetch.max_urls_per_run`: Max URLs to fetch per collection cycle
- `content_fetch.summary_max_chars`: Max chars for content summary
- `app.min_focus_seconds`: Minimum session duration to show in Focus Sessions report (default: 600)
- `app.exclude`: App names to skip during app focus collection
- `app_categories`: Map of category names to app name lists for usage grouping
- `app_last_app`, `app_last_window`, `app_last_ts`, `app_last_record_line`: State for app focus in-place duration merging

```

### scripts/query.py

```python
#!/usr/bin/env python3
"""Palest Ink - Activity Query Tool

Search and filter activity records. Designed to be called by Claude Code skill.

Usage:
    python3 query.py --date today --summary
    python3 query.py --date 2026-03-03 --type git_commit --search "plugin"
    python3 query.py --from 2026-03-01 --to 2026-03-03 --type web_visit --search "homebrew"
    python3 query.py --date today --type web_visit --search-content "人效"
"""

import argparse
import json
import os
import sys
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

LOCAL_TZ = ZoneInfo("Asia/Shanghai")


def ts_to_local_str(ts_str):
    """Convert UTC ISO timestamp to local time string (YYYY-MM-DD HH:MM:SS)."""
    try:
        dt = datetime.fromisoformat(ts_str)
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)
        return dt.astimezone(LOCAL_TZ).strftime("%Y-%m-%d %H:%M:%S")
    except Exception:
        return ts_str[:19].replace("T", " ")

DATA_DIR = os.path.expanduser("~/.palest-ink/data")

ACTIVITY_TYPES = {
    "git_commit", "git_push", "git_pull", "git_checkout",
    "web_visit", "shell_command", "vscode_edit",
    "app_focus", "file_change",
}

TYPE_LABELS = {
    "git_commit": "Git Commit",
    "git_push": "Git Push",
    "git_pull": "Git Pull",
    "git_checkout": "Git Checkout",
    "web_visit": "Web Visit",
    "shell_command": "Shell Command",
    "vscode_edit": "VS Code Edit",
    "app_focus": "App Focus",
    "file_change": "File Change",
}


def parse_date(s):
    """Parse a date string, supporting 'today', 'yesterday', and ISO format."""
    s = s.strip().lower()
    today = datetime.now().date()
    if s == "today":
        return today
    elif s == "yesterday":
        return today - timedelta(days=1)
    else:
        return datetime.strptime(s, "%Y-%m-%d").date()


def get_datafiles(date_from, date_to):
    """Get list of JSONL data file paths for the date range."""
    files = []
    d = date_from
    while d <= date_to:
        path = os.path.join(DATA_DIR, d.strftime("%Y"), d.strftime("%m"), f"{d.strftime('%d')}.jsonl")
        if os.path.exists(path):
            files.append(path)
        d += timedelta(days=1)
    return files


def load_records(files, type_filter=None):
    """Load all records from the given files, optionally filtering by type."""
    records = []
    for filepath in files:
        with open(filepath, "r") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    record = json.loads(line)
                except json.JSONDecodeError:
                    continue
                if type_filter and record.get("type") != type_filter:
                    continue
                records.append(record)
    return records


def search_records(records, term, search_content=False):
    """Filter records matching search term across all data fields."""
    term_lower = term.lower()
    matched = []
    for record in records:
        data = record.get("data", {})
        # Search in all string values of data
        searchable = []
        for key, val in data.items():
            if isinstance(val, str):
                searchable.append(val)
            elif isinstance(val, list):
                searchable.extend(str(v) for v in val)
        # Also search in content_summary and content_keywords if search_content
        if search_content:
            summary = data.get("content_summary", "")
            keywords = data.get("content_keywords", [])
            searchable.append(summary)
            searchable.extend(keywords)

        text = " ".join(searchable).lower()
        if term_lower in text:
            matched.append(record)
    return matched


def format_record_text(record):
    """Format a single record for text output."""
    ts = ts_to_local_str(record.get("ts", ""))
    rtype = record.get("type", "unknown")
    data = record.get("data", {})
    label = TYPE_LABELS.get(rtype, rtype)

    if rtype == "git_commit":
        repo = os.path.basename(data.get("repo", ""))
        msg = data.get("message", "")
        files = data.get("files_changed", [])
        ins = data.get("insertions", 0)
        dels = data.get("deletions", 0)
        return f"[{ts}] {label}: {repo} - {msg} ({len(files)} files, +{ins}/-{dels})"

    elif rtype == "git_push":
        repo = os.path.basename(data.get("repo", ""))
        branch = data.get("branch", "")
        return f"[{ts}] {label}: {repo} ({branch}) -> {data.get('remote', '')}"

    elif rtype == "git_pull":
        repo = os.path.basename(data.get("repo", ""))
        branch = data.get("branch", "")
        return f"[{ts}] {label}: {repo} ({branch})"

    elif rtype == "git_checkout":
        repo = os.path.basename(data.get("repo", ""))
        return f"[{ts}] {label}: {repo} ({data.get('from_ref', '')} -> {data.get('to_branch', '')})"

    elif rtype == "web_visit":
        title = data.get("title", "")[:60]
        url = data.get("url", "")
        duration = data.get("visit_duration_seconds", 0)
        browser = data.get("browser", "")
        content_summary = data.get("content_summary", "")[:100]
        line = f"[{ts}] {label} ({browser}): {title}\n         URL: {url}"
        if duration > 0:
            line += f" ({duration}s)"
        if content_summary:
            line += f"\n         Summary: {content_summary}..."
        return line

    elif rtype == "shell_command":
        cmd = data.get("command", "")[:120]
        return f"[{ts}] {label}: {cmd}"

    elif rtype == "vscode_edit":
        filepath = data.get("file_path", "")
        lang = data.get("language", "")
        return f"[{ts}] {label}: {filepath} ({lang})"

    elif rtype == "app_focus":
        app = data.get("app_name", "")
        window = data.get("window_title", "")
        dur = data.get("duration_seconds", 0)
        suffix = f" — {window[:60]}" if window else ""
        return f"[{ts}] {label}: {app}{suffix} ({dur}s)"

    elif rtype == "file_change":
        path = data.get("path", "")
        lang = data.get("language", "")
        workspace = os.path.basename(data.get("workspace", ""))
        suffix = f" in {workspace}" if workspace else ""
        return f"[{ts}] {label}: {path} ({lang}){suffix}"

    else:
        return f"[{ts}] {label}: {json.dumps(data, ensure_ascii=False)[:100]}"


def print_summary(records):
    """Print a summary of records by type."""
    counts = {}
    for r in records:
        rtype = r.get("type", "unknown")
        counts[rtype] = counts.get(rtype, 0) + 1

    total = len(records)
    print(f"Total activities: {total}")
    print("-" * 40)
    for rtype, count in sorted(counts.items(), key=lambda x: x[1], reverse=True):
        label = TYPE_LABELS.get(rtype, rtype)
        print(f"  {label}: {count}")

    # Additional stats for git
    git_commits = [r for r in records if r.get("type") == "git_commit"]
    if git_commits:
        repos = set(r.get("data", {}).get("repo", "") for r in git_commits)
        total_ins = sum(r.get("data", {}).get("insertions", 0) for r in git_commits)
        total_del = sum(r.get("data", {}).get("deletions", 0) for r in git_commits)
        print(f"\nGit: {len(git_commits)} commits across {len(repos)} repos (+{total_ins}/-{total_del})")

    # Web stats
    web_visits = [r for r in records if r.get("type") == "web_visit"]
    if web_visits:
        from urllib.parse import urlparse  # hoisted out of the loop
        domains = set()
        for r in web_visits:
            url = r.get("data", {}).get("url", "")
            try:
                domains.add(urlparse(url).netloc)
            except Exception:
                pass
        print(f"Web: {len(web_visits)} page visits across {len(domains)} domains")


def main():
    parser = argparse.ArgumentParser(description="Palest Ink - Query activities")
    parser.add_argument("--date", help="Date to query (YYYY-MM-DD, 'today', 'yesterday')")
    parser.add_argument("--from", dest="date_from", help="Start date (YYYY-MM-DD)")
    parser.add_argument("--to", dest="date_to", help="End date (YYYY-MM-DD)")
    parser.add_argument("--type", dest="activity_type", help="Filter by activity type")
    parser.add_argument("--search", help="Search term across all fields")
    parser.add_argument("--search-content", help="Search term including web page content summaries")
    parser.add_argument("--summary", action="store_true", help="Show summary instead of records")
    parser.add_argument("--limit", type=int, default=50, help="Max records to show (default: 50)")
    parser.add_argument("--format", choices=["text", "json"], default="text", help="Output format")

    args = parser.parse_args()

    # Determine date range
    if args.date:
        date_from = date_to = parse_date(args.date)
    elif args.date_from and args.date_to:
        date_from = parse_date(args.date_from)
        date_to = parse_date(args.date_to)
    else:
        date_from = date_to = datetime.now().date()

    # Find data files
    files = get_datafiles(date_from, date_to)
    if not files:
        print(f"No data found for {date_from} to {date_to}")
        sys.exit(0)

    # Load records
    records = load_records(files, type_filter=args.activity_type)

    # Apply search
    if args.search:
        records = search_records(records, args.search)
    if args.search_content:
        records = search_records(records, args.search_content, search_content=True)

    # Sort by timestamp
    records.sort(key=lambda r: r.get("ts", ""))

    # Output
    if args.summary:
        print_summary(records)
    elif args.format == "json":
        output = records[:args.limit]
        print(json.dumps(output, indent=2, ensure_ascii=False))
    else:
        for record in records[:args.limit]:
            print(format_record_text(record))
            print()

        if len(records) > args.limit:
            print(f"... and {len(records) - args.limit} more records (use --limit to see more)")


if __name__ == "__main__":
    main()

```

### scripts/report.py

```python
#!/usr/bin/env python3
"""Palest Ink - Daily Report Generator

Generates a structured markdown daily report from activity records.

Usage:
    python3 report.py --date today
    python3 report.py --date 2026-03-03
    python3 report.py --week
"""

import argparse
import json
import os
import sys
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

CONFIG_FILE = os.path.expanduser("~/.palest-ink/config.json")

LOCAL_TZ = ZoneInfo("Asia/Shanghai")

DATA_DIR = os.path.expanduser("~/.palest-ink/data")
REPORTS_DIR = os.path.expanduser("~/.palest-ink/reports")


def load_config():
    if os.path.exists(CONFIG_FILE):
        with open(CONFIG_FILE, "r") as f:
            return json.load(f)
    return {}


def parse_date(s):
    s = s.strip().lower()
    today = datetime.now().date()
    if s == "today":
        return today
    elif s == "yesterday":
        return today - timedelta(days=1)
    else:
        return datetime.strptime(s, "%Y-%m-%d").date()


def load_day_records(d):
    path = os.path.join(DATA_DIR, d.strftime("%Y"), d.strftime("%m"), f"{d.strftime('%d')}.jsonl")
    records = []
    if not os.path.exists(path):
        return records
    with open(path, "r") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError:
                pass
    records.sort(key=lambda r: r.get("ts", ""))
    return records


def ts_to_local(ts_str):
    """Parse a UTC timestamp string and return local datetime."""
    try:
        dt = datetime.fromisoformat(ts_str)
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)
        return dt.astimezone(LOCAL_TZ)
    except Exception:
        return None


def ts_to_local_str(ts_str):
    """Return HH:MM in local time."""
    dt = ts_to_local(ts_str)
    if dt:
        return dt.strftime("%H:%M")
    return ts_str[11:16] if len(ts_str) >= 16 else "??:??"


def time_period(ts_str):
    """Classify a timestamp into morning/afternoon/evening."""
    dt = ts_to_local(ts_str)
    if dt is None:
        return "other"
    hour = dt.hour
    if 6 <= hour < 12:
        return "morning"
    elif 12 <= hour < 18:
        return "afternoon"
    elif 18 <= hour < 22:
        return "evening"
    else:
        return "night"


def format_duration(seconds):
    """Format seconds into human-readable duration string."""
    if seconds is None or seconds < 0:
        return "0s"
    seconds = int(seconds)
    h = seconds // 3600
    m = (seconds % 3600) // 60
    s = seconds % 60
    if h > 0:
        return f"{h}h{m:02d}m" if m > 0 else f"{h}h"
    elif m > 0:
        return f"{m}m{s:02d}s" if s > 0 else f"{m}m"
    else:
        return f"{s}s"
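# A few hand-checked examples of format_duration's output:
#   format_duration(3725) -> "1h02m"
#   format_duration(95)   -> "1m35s"
#   format_duration(45)   -> "45s"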


def period_label(period):
    labels = {
        "morning": "Morning (06:00 - 12:00)",
        "afternoon": "Afternoon (12:00 - 18:00)",
        "evening": "Evening (18:00 - 22:00)",
        "night": "Night (22:00 - 06:00)",
    }
    return labels.get(period, period)


def format_timeline_entry(record):
    ts = record.get("ts", "")
    time_str = ts_to_local_str(ts)
    rtype = record.get("type", "")
    data = record.get("data", {})

    if rtype == "git_commit":
        repo = os.path.basename(data.get("repo", ""))
        msg = data.get("message", "")
        files = data.get("files_changed", [])
        ins = data.get("insertions", 0)
        dels = data.get("deletions", 0)
        return f"- **{time_str}** Committed `{msg}` in {repo} ({len(files)} files, +{ins}/-{dels})"

    elif rtype == "git_push":
        repo = os.path.basename(data.get("repo", ""))
        branch = data.get("branch", "")
        return f"- **{time_str}** Pushed {repo}/{branch}"

    elif rtype == "git_pull":
        repo = os.path.basename(data.get("repo", ""))
        return f"- **{time_str}** Pulled {repo}"

    elif rtype == "git_checkout":
        repo = os.path.basename(data.get("repo", ""))
        return f"- **{time_str}** Switched to `{data.get('to_branch', '')}` in {repo}"

    elif rtype == "web_visit":
        title = data.get("title", "")[:50]
        url = data.get("url", "")
        duration = data.get("visit_duration_seconds", 0) or 0
        suffix = f" ({duration}s)" if duration > 10 else ""
        return f"- **{time_str}** Visited [{title}]({url}){suffix}"

    elif rtype == "shell_command":
        cmd = data.get("command", "")[:80]
        return f"- **{time_str}** `{cmd}`"

    elif rtype == "vscode_edit":
        filepath = data.get("file_path", "")
        filename = os.path.basename(filepath)
        lang = data.get("language", "")
        return f"- **{time_str}** Edited `{filename}` ({lang})"

    elif rtype == "app_focus":
        app = data.get("app_name", "")
        window = data.get("window_title", "")
        dur = data.get("duration_seconds", 0)
        dur_str = format_duration(dur) if dur else ""
        suffix = f" — {window[:50]}" if window else ""
        return f"- **{time_str}** {app}{suffix} ({dur_str})"

    elif rtype == "file_change":
        path = data.get("path", "")
        filename = os.path.basename(path)
        lang = data.get("language", "")
        workspace = os.path.basename(data.get("workspace", ""))
        suffix = f" in {workspace}" if workspace else ""
        return f"- **{time_str}** Changed `{filename}` ({lang}){suffix}"

    return f"- **{time_str}** {rtype}"


def _append_focus_sessions(lines, records, config):
    """Append focus sessions section: merge adjacent same-app app_focus records."""
    app_records = [r for r in records if r.get("type") == "app_focus"]
    if not app_records:
        return

    min_focus = config.get("app", {}).get("min_focus_seconds", 600)
    gap_threshold = 30 * 60  # 30 minutes

    # Build sessions by merging adjacent records with same app and gap <= threshold
    sessions = []
    for r in app_records:
        dt = ts_to_local(r.get("ts", ""))
        if dt is None:
            continue
        app = r.get("data", {}).get("app_name", "")
        dur = r.get("data", {}).get("duration_seconds", 0) or 0

        if sessions:
            last = sessions[-1]
            last_end = last["end_dt"]
            gap = (dt - last_end).total_seconds()
            if last["app"] == app and gap <= gap_threshold:
                last["total_duration"] += dur
                last["end_dt"] = dt + timedelta(seconds=dur)
                continue

        sessions.append({
            "app": app,
            "start_dt": dt,
            "end_dt": dt + timedelta(seconds=dur),
            "total_duration": dur,
        })

    # Filter short sessions
    sessions = [s for s in sessions if s["total_duration"] >= min_focus]
    if not sessions:
        return

    lines.append("## 专注时段 (Focus Sessions)")
    lines.append("")
    for s in sessions:
        start_str = s["start_dt"].strftime("%H:%M")
        end_str = s["end_dt"].strftime("%H:%M")
        dur_str = format_duration(s["total_duration"])
        lines.append(f"- {start_str}–{end_str}  **{s['app']}** ({dur_str})")
    lines.append("")


def _append_app_usage(lines, records, config):
    """Append app usage section: total time per app, grouped by category."""
    app_records = [r for r in records if r.get("type") == "app_focus"]
    if not app_records:
        return

    app_totals = defaultdict(int)
    for r in app_records:
        app = r.get("data", {}).get("app_name", "")
        dur = r.get("data", {}).get("duration_seconds", 0) or 0
        if app:
            app_totals[app] += dur

    if not app_totals:
        return

    categories = config.get("app_categories", {})
    cat_totals = defaultdict(int)
    for app, dur in app_totals.items():
        for cat, cat_apps in categories.items():
            if app in cat_apps:
                cat_totals[cat] += dur
                break

    lines.append("## 应用使用时长 (App Usage)")
    lines.append("")
    lines.append("**Top 5 Applications:**")
    lines.append("")
    lines.append("| App | Duration |")
    lines.append("|-----|----------|")
    top5 = sorted(app_totals.items(), key=lambda x: x[1], reverse=True)[:5]
    for app, dur in top5:
        lines.append(f"| {app} | {format_duration(dur)} |")
    lines.append("")

    if cat_totals:
        lines.append("**By Category:**")
        lines.append("")
        cat_label = {"development": "Development", "browser": "Browser", "communication": "Communication"}
        for cat, dur in sorted(cat_totals.items(), key=lambda x: x[1], reverse=True):
            label = cat_label.get(cat, cat.capitalize())
            lines.append(f"- **{label}**: {format_duration(dur)}")
        lines.append("")


def _append_shell_stats(lines, records):
    """Append shell command statistics section."""
    shell_records = [r for r in records if r.get("type") == "shell_command"]
    if not shell_records:
        return

    lines.append("## Shell 命令统计 (Shell Stats)")
    lines.append("")

    # Top commands by frequency (first word of command)
    cmd_counts = defaultdict(int)
    for r in shell_records:
        cmd = r.get("data", {}).get("command", "").strip()
        if cmd:
            prefix = cmd.split()[0] if cmd.split() else cmd
            cmd_counts[prefix] += 1

    if cmd_counts:
        lines.append("**最高频命令 Top 5 (Top Commands by Frequency):**")
        lines.append("")
        top5 = sorted(cmd_counts.items(), key=lambda x: x[1], reverse=True)[:5]
        for cmd, count in top5:
            lines.append(f"- `{cmd}`: {count} times")
        lines.append("")

    # Top by duration (only when duration data is available)
    timed = [
        (r.get("data", {}).get("command", ""), r.get("data", {}).get("duration_seconds"))
        for r in shell_records
        if r.get("data", {}).get("duration_seconds") is not None
    ]
    if timed:
        timed.sort(key=lambda x: x[1], reverse=True)
        lines.append("**耗时最长命令 Top 5 (Top Commands by Duration):**")
        lines.append("")
        for cmd, dur in timed[:5]:
            lines.append(f"- `{cmd[:80]}` — {format_duration(dur)}")
        lines.append("")


def _append_cross_domain_correlation(lines, records):
    """Append cross-domain correlation: keywords researched before each commit."""
    git_commits = [r for r in records if r.get("type") == "git_commit"]
    web_visits = [r for r in records if r.get("type") == "web_visit"]
    if not git_commits or not web_visits:
        return

    correlations = []
    for commit in git_commits:
        commit_dt = ts_to_local(commit.get("ts", ""))
        if commit_dt is None:
            continue
        window_start = commit_dt - timedelta(hours=2)

        keywords = set()
        for visit in web_visits:
            visit_dt = ts_to_local(visit.get("ts", ""))
            if visit_dt is None:
                continue
            if window_start <= visit_dt <= commit_dt:
                kws = visit.get("data", {}).get("content_keywords", [])
                keywords.update(kws)

        if len(keywords) >= 3:
            repo = os.path.basename(commit.get("data", {}).get("repo", ""))
            msg = commit.get("data", {}).get("message", "")
            correlations.append((repo, msg, sorted(keywords)[:8]))

    if not correlations:
        return

    lines.append("## 跨域关联 (Cross-domain Correlation)")
    lines.append("")
    lines.append("Research topics found before git commits:")
    lines.append("")
    for repo, msg, kws in correlations:
        lines.append(f"- **{repo}** `{msg}` ← researched: {', '.join(kws)}")
    lines.append("")


def generate_report(target_date, records):
    """Generate a markdown report for a single day."""
    config = load_config()
    lines = []
    date_str = target_date.strftime("%Y-%m-%d")
    weekday = target_date.strftime("%A")
    lines.append(f"# Daily Activity Report - {date_str} ({weekday})")
    lines.append("")

    if not records:
        lines.append("No activities recorded for this day.")
        return "\n".join(lines)

    # Summary
    type_counts = defaultdict(int)
    for r in records:
        type_counts[r.get("type", "unknown")] += 1

    lines.append("## Summary")
    if type_counts.get("git_commit"):
        git_commits = [r for r in records if r.get("type") == "git_commit"]
        repos = set(os.path.basename(r.get("data", {}).get("repo", "")) for r in git_commits)
        lines.append(f"- **{type_counts['git_commit']}** git commits across **{len(repos)}** repos")
    if type_counts.get("web_visit"):
        lines.append(f"- **{type_counts['web_visit']}** web pages visited")
    if type_counts.get("shell_command"):
        lines.append(f"- **{type_counts['shell_command']}** shell commands executed")
    if type_counts.get("vscode_edit"):
        lines.append(f"- **{type_counts['vscode_edit']}** files edited in VS Code")
    if type_counts.get("app_focus"):
        total_app_time = sum(
            r.get("data", {}).get("duration_seconds", 0) or 0
            for r in records if r.get("type") == "app_focus"
        )
        lines.append(f"- **{type_counts['app_focus']}** app focus events ({format_duration(total_app_time)} total)")
    if type_counts.get("file_change"):
        lines.append(f"- **{type_counts['file_change']}** file changes detected")
    lines.append("")

    # Timeline (only show significant events, not every shell command)
    significant_types = {
        "git_commit", "git_push", "git_pull", "git_checkout",
        "web_visit", "vscode_edit", "app_focus", "file_change",
    }
    significant = [r for r in records if r.get("type") in significant_types]

    if significant:
        lines.append("## Timeline")
        lines.append("")

        by_period = defaultdict(list)
        for r in significant:
            period = time_period(r.get("ts", ""))
            by_period[period].append(r)

        for period in ["morning", "afternoon", "evening", "night"]:
            if period in by_period:
                lines.append(f"### {period_label(period)}")
                # Limit entries per period to avoid huge reports
                entries = by_period[period][:30]
                for r in entries:
                    lines.append(format_timeline_entry(r))
                if len(by_period[period]) > 30:
                    lines.append(f"- ... and {len(by_period[period]) - 30} more activities")
                lines.append("")

    # Git Activity Table
    git_commits = [r for r in records if r.get("type") == "git_commit"]
    if git_commits:
        lines.append("## Git Activity")
        lines.append("")
        repo_stats = defaultdict(lambda: {"commits": 0, "files": 0, "ins": 0, "dels": 0})
        for r in git_commits:
            data = r.get("data", {})
            repo = os.path.basename(data.get("repo", "unknown"))
            repo_stats[repo]["commits"] += 1
            repo_stats[repo]["files"] += len(data.get("files_changed", []))
            repo_stats[repo]["ins"] += data.get("insertions", 0)
            repo_stats[repo]["dels"] += data.get("deletions", 0)

        lines.append("| Repository | Commits | Files Changed | Lines +/- |")
        lines.append("|------------|---------|---------------|-----------|")
        for repo, stats in sorted(repo_stats.items(), key=lambda x: x[1]["commits"], reverse=True):
            lines.append(f"| {repo} | {stats['commits']} | {stats['files']} | +{stats['ins']}/-{stats['dels']} |")
        lines.append("")

    # Top Websites
    web_visits = [r for r in records if r.get("type") == "web_visit"]
    if web_visits:
        lines.append("## Top Websites")
        lines.append("")
        from urllib.parse import urlparse  # hoisted out of the loop

        domain_counts = defaultdict(int)
        for r in web_visits:
            url = r.get("data", {}).get("url", "")
            try:
                domain = urlparse(url).netloc
                if domain:
                    domain_counts[domain] += 1
            except Exception:
                pass

        for i, (domain, count) in enumerate(
            sorted(domain_counts.items(), key=lambda x: x[1], reverse=True)[:15]
        ):
            lines.append(f"{i+1}. **{domain}** ({count} visits)")
        lines.append("")

    # VS Code Edits
    vscode_edits = [r for r in records if r.get("type") == "vscode_edit"]
    if vscode_edits:
        lines.append("## Files Edited (VS Code)")
        lines.append("")
        lang_counts = defaultdict(int)
        for r in vscode_edits:
            lang = r.get("data", {}).get("language", "unknown")
            lang_counts[lang] += 1

        for lang, count in sorted(lang_counts.items(), key=lambda x: x[1], reverse=True):
            lines.append(f"- **{lang}**: {count} files")
        lines.append("")

    # New analysis sections
    _append_focus_sessions(lines, records, config)
    _append_app_usage(lines, records, config)
    _append_shell_stats(lines, records)
    _append_cross_domain_correlation(lines, records)

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(description="Palest Ink - Report Generator")
    parser.add_argument("--date", help="Date (YYYY-MM-DD, 'today', 'yesterday')")
    parser.add_argument("--week", action="store_true", help="Generate report for current week")
    parser.add_argument("--save", action="store_true", help="Save report to file")

    args = parser.parse_args()

    if args.week:
        today = datetime.now().date()
        # Go back to Monday
        start = today - timedelta(days=today.weekday())
        all_records = []
        for i in range(7):
            d = start + timedelta(days=i)
            if d > today:
                break
            all_records.extend(load_day_records(d))
        # Reuse the daily generator over the whole week's records, then retitle.
        report = generate_report(today, all_records)
        report = report.replace("Daily Activity Report", "Weekly Activity Report", 1)
    else:
        target = parse_date(args.date) if args.date else datetime.now().date()
        records = load_day_records(target)
        report = generate_report(target, records)

    print(report)

    if args.save:
        os.makedirs(REPORTS_DIR, exist_ok=True)
        # Normalize "today"/"yesterday" aliases so the filename is a real date.
        target = parse_date(args.date) if args.date else datetime.now().date()
        report_path = os.path.join(REPORTS_DIR, f"{target.strftime('%Y-%m-%d')}.md")
        with open(report_path, "w") as f:
            f.write(report)
        print(f"\nReport saved to: {report_path}")


if __name__ == "__main__":
    main()

```
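The report generator consumes per-day JSONL files whose field names can be read off the code above. The sketch below fabricates two such records and reproduces the per-type tally that `generate_report` builds for its Summary section; all values here are invented for illustration, not taken from any real log.

```python
import json
from collections import defaultdict

# Two invented records using the field names report.py reads
# (ts, type, data.repo, data.message, ...); the values are made up.
records = [
    {"ts": "2026-03-03T03:12:00+00:00", "type": "git_commit",
     "data": {"repo": "/home/me/proj", "message": "fix parser",
              "files_changed": ["a.py", "b.py"], "insertions": 12, "deletions": 3}},
    {"ts": "2026-03-03T05:40:00+00:00", "type": "web_visit",
     "data": {"url": "https://example.com/docs", "title": "Example Docs",
              "visit_duration_seconds": 42}},
]

# On disk this is one JSON object per line (JSONL), which
# load_day_records() parses back line by line.
jsonl = "\n".join(json.dumps(r) for r in records)

# The same per-type tally generate_report() builds for its Summary section.
type_counts = defaultdict(int)
for line in jsonl.splitlines():
    type_counts[json.loads(line).get("type", "unknown")] += 1

print(dict(type_counts))  # -> {'git_commit': 1, 'web_visit': 1}
```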

### scripts/status.py

```python
#!/usr/bin/env python3
"""Palest Ink - Status Checker

Shows collection status, data volume, and system health.

Usage:
    python3 status.py
"""

import json
import os
import subprocess
from datetime import datetime, timezone

PALEST_INK_DIR = os.path.expanduser("~/.palest-ink")
CONFIG_FILE = os.path.join(PALEST_INK_DIR, "config.json")
DATA_DIR = os.path.join(PALEST_INK_DIR, "data")
CLEANUP_FLAG = os.path.join(PALEST_INK_DIR, "tmp", "cleanup_needed")


def load_config():
    if os.path.exists(CONFIG_FILE):
        with open(CONFIG_FILE, "r") as f:
            return json.load(f)
    return {}


def count_today_records():
    now = datetime.now()
    path = os.path.join(DATA_DIR, now.strftime("%Y"), now.strftime("%m"), f"{now.strftime('%d')}.jsonl")
    if not os.path.exists(path):
        return 0, {}
    counts = {}
    total = 0
    with open(path, "r") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                record = json.loads(line)
                rtype = record.get("type", "unknown")
                counts[rtype] = counts.get(rtype, 0) + 1
                total += 1
            except json.JSONDecodeError:
                pass
    return total, counts


def get_data_size():
    """Get total size of data directory."""
    total = 0
    for root, dirs, files in os.walk(DATA_DIR):
        for f in files:
            total += os.path.getsize(os.path.join(root, f))
    return total


def check_cleanup_needed():
    """Return flag info dict if cleanup is needed, else None."""
    if not os.path.exists(CLEANUP_FLAG):
        return None
    try:
        with open(CLEANUP_FLAG, "r") as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        return {}


def check_cron():
    """Check if cron job is installed."""
    try:
        result = subprocess.run(
            ["crontab", "-l"],
            capture_output=True, text=True, timeout=5
        )
        return "palest-ink" in result.stdout
    except (subprocess.TimeoutExpired, OSError):
        return False


def check_git_hooks():
    """Check if git hooks are configured."""
    try:
        result = subprocess.run(
            ["git", "config", "--global", "core.hooksPath"],
            capture_output=True, text=True, timeout=5
        )
        hooks_path = result.stdout.strip()
        return hooks_path == os.path.join(PALEST_INK_DIR, "hooks")
    except (subprocess.TimeoutExpired, OSError):
        return False


def format_size(size_bytes):
    for unit in ["B", "KB", "MB", "GB"]:
        if size_bytes < 1024:
            return f"{size_bytes:.1f} {unit}"
        size_bytes /= 1024
    return f"{size_bytes:.1f} TB"
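# Hand-checked examples of format_size's output:
#   format_size(500)         -> "500.0 B"
#   format_size(2048)        -> "2.0 KB"
#   format_size(3 * 1024**2) -> "3.0 MB"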


def main():
    print("=" * 50)
    print("  Palest Ink (淡墨) - Status")
    print("=" * 50)
    print()

    # Data directory
    if not os.path.exists(PALEST_INK_DIR):
        print("Status: NOT INSTALLED")
        print("Run install.sh to set up Palest Ink.")
        return

    config = load_config()
    collectors = config.get("collectors", {})

    # Today's records
    total, counts = count_today_records()
    data_bytes = get_data_size()
    print(f"Data directory: {DATA_DIR}")
    print(f"Data size: {format_size(data_bytes)}")
    print()

    # Cleanup warning
    cleanup_flag = check_cleanup_needed()
    if cleanup_flag is not None:
        flag_size = cleanup_flag.get("size_human", "")
        size_note = f" ({flag_size} MB)" if flag_size else ""
        print("=" * 50)
        print(f"  ⚠️  CLEANUP RECOMMENDED")
        print(f"  Data{size_note} is approaching the 2 GB limit.")
        print(f"  Run cleanup to remove oldest records:")
        print(f"    python3 ~/.palest-ink/bin/cleanup.py --dry-run")
        print(f"    python3 ~/.palest-ink/bin/cleanup.py")
        print("=" * 50)
        print()

    print(f"Today's records: {total}")
    if counts:
        for rtype, count in sorted(counts.items(), key=lambda x: x[1], reverse=True):
            print(f"  {rtype}: {count}")
    print()

    # Collector status
    print("Collectors:")
    cron_active = check_cron()
    hooks_active = check_git_hooks()

    print(f"  git_hooks:  {'ACTIVE' if hooks_active else 'INACTIVE'} (core.hooksPath)")
    print(f"  cron:       {'ACTIVE' if cron_active else 'INACTIVE'} (every 3 min)")
    print(f"  chrome:     {'enabled' if collectors.get('chrome', True) else 'disabled'}")
    print(f"  safari:     {'enabled' if collectors.get('safari', True) else 'disabled'}")
    print(f"  shell:      {'enabled' if collectors.get('shell', True) else 'disabled'}")
    print(f"  vscode:     {'enabled' if collectors.get('vscode', True) else 'disabled'}")
    print(f"  git_scan:   {'enabled' if collectors.get('git_scan', True) else 'disabled'}")
    print(f"  content:    {'enabled' if collectors.get('content', True) else 'disabled'}")
    print()

    # Last cron run
    cron_log = os.path.join(PALEST_INK_DIR, "cron.log")
    if os.path.exists(cron_log):
        try:
            with open(cron_log, "r") as f:
                lines = f.readlines()
            # Find last "Starting collection" or "Collection complete"
            for line in reversed(lines):
                if "Collection complete" in line or "Starting collection" in line:
                    print(f"Last cron: {line.strip()}")
                    break
        except OSError:
            pass

    # Tracked repos
    tracked = config.get("tracked_repos", [])
    if tracked:
        print(f"\nTracked repos ({len(tracked)}):")
        for repo in tracked:
            print(f"  {repo}")


if __name__ == "__main__":
    main()

```
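Both scripts read `~/.palest-ink/config.json` through `load_config()`. The sketch below builds an illustrative config with the keys the call sites above reference (`collectors`, `app.min_focus_seconds`, `app_categories`, `tracked_repos`); treat it as an inferred example, not a documented schema. It writes to a temp directory so it is safe to run as-is.

```python
import json
import os
import tempfile

# Illustrative config for ~/.palest-ink/config.json; keys are inferred
# from where the scripts call load_config(), values are invented.
config = {
    "collectors": {"chrome": True, "safari": False, "shell": True,
                   "vscode": True, "git_scan": True, "content": True},
    "app": {"min_focus_seconds": 600},
    "app_categories": {
        "development": ["Code", "iTerm2"],
        "browser": ["Google Chrome", "Safari"],
        "communication": ["Slack"],
    },
    "tracked_repos": ["~/projects/example-repo"],
}

# Written to a temp dir here; the real file lives under ~/.palest-ink.
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

# load_config() would simply json.load() this file back.
with open(path) as f:
    loaded = json.load(f)
print(loaded["app"]["min_focus_seconds"])  # -> 600
```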
