
multimodal-memory

Remember and retrieve visual content from conversations. Use when: (1) user sends an image, photo, chart, diagram, or screenshot and wants it saved/remembered; (2) user asks to capture or remember a website, URL, or web page UI; (3) user asks what you've seen before, wants to recall a past image, or searches visual memories; (4) user sends an image to find similar past content.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 3,118
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: B (70.0)

Install command

npx @skill-hub/cli install openclaw-skills-minds-eye

Repository

openclaw/skills

Skill path: skills/horisky/minds-eye


Best for

Primary workflow: Write Technical Docs.

Technical facets: Full Stack, Frontend, Tech Writer, Designer.

Target audience: everyone.

License: Unknown (the bundled README declares MIT).

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install multimodal-memory into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding multimodal-memory to shared team environments
  • Use multimodal-memory for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: multimodal-memory
description: "Remember and retrieve visual content from conversations. Use when: (1) user sends an image, photo, chart, diagram, or screenshot and wants it saved/remembered; (2) user asks to capture or remember a website, URL, or web page UI; (3) user asks what you've seen before, wants to recall a past image, or searches visual memories; (4) user sends an image to find similar past content."
metadata: {"clawdbot":{"emoji":"🧠","os":["darwin","linux"],"requires":{"bins":["python3"]}}}
---

# Multimodal Memory

Store and retrieve visual content (user images, charts, diagrams, website UIs) across conversations.

## Important: Image Analysis

**The primary model may not support vision.** Always use `analyze.py` to analyze images: it calls GPT-4o directly via API and does not rely on your own vision capability.

## Storage Location

All data lives in `~/.multimodal-memory/`:
- `images/`: saved copies of captured images
- `metadata.db`: SQLite database (auto-created)
- `memory.md`: human-readable summary (auto-updated)

Read `~/.multimodal-memory/memory.md` at session start for a quick overview.
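A session bootstrap might read it like this (a minimal sketch; the path comes from the storage layout above):

```python
from pathlib import Path

# Human-readable summary maintained by store.py
memory_md = Path.home() / ".multimodal-memory" / "memory.md"
overview = memory_md.read_text(encoding="utf-8") if memory_md.exists() else ""
if overview:
    print(overview[:500])  # quick glance at recent entries
else:
    print("No visual memories stored yet.")
```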

## Scenarios & Actions

### 1. User Sends an Image / Chart / Diagram

When a user sends an image, OpenClaw saves it locally and provides the file path in the message context (look for a path like `/tmp/...` or `~/.openclaw/...`).

Run `analyze.py` with that path; it calls GPT-4o to analyze the image and stores the result automatically:

```bash
python {baseDir}/scripts/analyze.py \
  --image-path "/absolute/path/to/image.jpg" \
  --source "image"
```

For charts use `--source "chart"`, for diagrams use `--source "image"`.

**If you cannot find the file path in the message context**, ask the user:
> "Which path is this image saved at? Or you can paste the file path directly."

### 2. User Asks to Capture / Remember a Website

Step 1 - take the screenshot:
```bash
python {baseDir}/scripts/capture_url.py --url "https://example.com"
```
The script prints the saved screenshot path.

Step 2 - analyze and store it:
```bash
python {baseDir}/scripts/analyze.py \
  --image-path "/path/printed/above.png" \
  --source "website" \
  --url "https://example.com"
```

### 3. User Searches by Text

```bash
python {baseDir}/scripts/search.py --query "login screen dark theme"
```

Show results with descriptions and image paths.

### 4. User Sends an Image to Search (find similar memories)

Step 1 - analyze the query image to get its description:
```bash
python {baseDir}/scripts/analyze.py \
  --image-path "/path/to/query/image.jpg" \
  --source "image"
```

Step 2 - the analysis is stored; also search for similar past content using the description keywords:
```bash
python {baseDir}/scripts/search.py --query "key concepts from the analysis output"
```

Step 3 - present matching memories and explain why they're relevant.

### 5. List Recent Memories

```bash
python {baseDir}/scripts/list.py --limit 20
```

## Core Rules

- **Never try to analyze images yourself**; always delegate to `analyze.py`.
- After storing, confirm to user: description + tags saved.
- Image paths must be **absolute**.
- The `--extra-tags` arg accepts comma-separated additional tags.
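For illustration, the tag merge mirrors what `analyze.py` does with `--extra-tags` (comma-split, then order-preserving dedupe via `dict.fromkeys`):

```python
model_tags = ["chart", "btc", "price"]   # tags returned by the vision model
extra = "price, daily"                   # value passed via --extra-tags

# Split on commas and drop empty/whitespace-only entries
extra_tags = [t.strip() for t in extra.split(",") if t.strip()]
# Deduplicate while keeping first-seen order
all_tags = list(dict.fromkeys(model_tags + extra_tags))
print(all_tags)  # ['chart', 'btc', 'price', 'daily']
```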

## One-Time Setup for URL Capture

If `capture_url.py` fails:
```bash
pip install playwright && python -m playwright install chromium
```

## Script Reference

| Script | Purpose | Key args |
|--------|---------|----------|
| `analyze.py` | Analyze image with GPT-4o + store | `--image-path`, `--source`, `--url`, `--extra-tags` |
| `store.py` | Store pre-analyzed result | `--image-path`, `--description`, `--tags`, `--source`, `--url` |
| `search.py` | Text search | `--query`, `[--limit N]` |
| `list.py` | List memories | `[--limit N]` |
| `capture_url.py` | Screenshot a URL | `--url` |


---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### README.md

```markdown
# minds-eye 🧠👁️

> Give your AI agent a visual memory: store, search, and recall images, charts, diagrams, and website screenshots across conversations.

**minds-eye** is an [OpenClaw](https://openclaw.ai) skill that lets AI agents remember visual content. Send your agent an image, chart, or website URL and it analyzes it with GPT-4o vision, then stores the description, tags, and a copy of the image. Later, search by keyword to retrieve what was seen.

## Features

- **Image analysis**: Analyzes any image with GPT-4o (or a compatible vision model)
- **Website capture**: Full-page screenshots of URLs via Playwright or headless Chrome
- **Semantic storage**: SQLite database with description, tags, source type, and URL
- **Keyword search**: Full-text search across all stored visual memories
- **Auto-summary**: Maintains a human-readable `memory.md` of recent entries
- **Works with any OpenAI-compatible API**: Uses your configured provider (OpenClaw, OpenAI, custom endpoint)

## How It Works

```
User sends image
       ↓
analyze.py calls GPT-4o vision API (base64)
       ↓
Returns: description + tags + raw_text
       ↓
store.py saves to SQLite + copies image file
       ↓
Agent confirms: "Saved! Description: ..."
```

## Installation

This skill is designed for [OpenClaw](https://openclaw.ai). Place the folder in your OpenClaw skills directory:

```bash
~/.openclaw/skills/skills/multimodal-memory/
```

For website capture, install Playwright (one-time setup):

```bash
pip install playwright
python -m playwright install chromium
```

## Usage (via OpenClaw agent)

Once installed as an OpenClaw skill, your agent will automatically:

- Analyze and store images sent in conversation
- Capture and remember websites when asked
- Search visual memories on request

### Direct script usage

**Analyze and store an image:**
```bash
python scripts/analyze.py --image-path /path/to/image.jpg --source image
python scripts/analyze.py --image-path chart.png --source chart
```

**Capture a website:**
```bash
python scripts/capture_url.py --url "https://example.com"
# Prints saved screenshot path, then pass to analyze.py
```

**Search memories:**
```bash
python scripts/search.py --query "login dark theme"
python scripts/search.py --query "price chart BTC" --limit 5
```

**List recent memories:**
```bash
python scripts/list.py --limit 20
```

## Configuration

By default, the skill reads your API key from `~/.openclaw/openclaw.json` (OpenClaw config). It looks for:

1. The provider configured as `agents.defaults.imageModel.primary`
2. Any provider with an `apiKey` field
3. `OPENAI_API_KEY` environment variable
4. `~/.openclaw/.env` file

The vision model must support image input (e.g. `gpt-4o`, `gpt-4-vision-preview`).
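The lookup order above can be sketched as follows. This is a minimal illustration with a hypothetical `pick_api_key` helper and an in-memory config dict, not the skill's actual code; the real resolution (including the `~/.openclaw/.env` fallback) lives in `scripts/analyze.py`:

```python
import os

def pick_api_key(cfg: dict) -> str:
    """Resolve an API key following the priority order above (sketch)."""
    providers = cfg.get("models", {}).get("providers", {})
    primary = (cfg.get("agents", {}).get("defaults", {})
                  .get("imageModel", {}).get("primary", ""))
    # 1. Provider named in imageModel.primary ("provider/model" format)
    name = primary.split("/", 1)[0] if "/" in primary else ""
    if name in providers and providers[name].get("apiKey"):
        return providers[name]["apiKey"]
    # 2. Any provider that has an apiKey
    for p in providers.values():
        if p.get("apiKey"):
            return p["apiKey"]
    # 3. Environment variable (the .env file fallback is omitted here)
    return os.environ.get("OPENAI_API_KEY", "")

cfg = {"models": {"providers": {"openai": {"apiKey": "sk-demo"}}},
       "agents": {"defaults": {"imageModel": {"primary": "openai/gpt-4o"}}}}
print(pick_api_key(cfg))  # sk-demo
```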

## Storage

All data lives in `~/.multimodal-memory/`:

```
~/.multimodal-memory/
โ”œโ”€โ”€ images/          # Saved image files
โ”œโ”€โ”€ metadata.db      # SQLite database
โ””โ”€โ”€ memory.md        # Human-readable summary (auto-generated)
```
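Assuming the `memories` schema created by `scripts/store.py`, the database can also be inspected directly. A sketch using an in-memory copy of that schema (point `connect()` at `~/.multimodal-memory/metadata.db` for real data):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# Same schema that store.py's init() creates
conn.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id          INTEGER PRIMARY KEY AUTOINCREMENT,
        image_path  TEXT,
        description TEXT NOT NULL,
        tags        TEXT NOT NULL DEFAULT '[]',
        source_type TEXT NOT NULL DEFAULT 'image',
        url         TEXT,
        created_at  TEXT NOT NULL
    )
""")
conn.execute(
    "INSERT INTO memories (description, tags, source_type, created_at) "
    "VALUES (?, ?, ?, ?)",
    ("BTC daily price chart", json.dumps(["btc", "chart"]),
     "chart", "2026-03-20T12:00:00"),
)
# Tags are stored as a JSON array string, so LIKE works for keyword matching
row = conn.execute(
    "SELECT description, tags FROM memories WHERE tags LIKE ?", ("%btc%",)
).fetchone()
print(row[0], json.loads(row[1]))  # BTC daily price chart ['btc', 'chart']
```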

## Requirements

- Python 3.9+
- `playwright` (optional, for website capture)

## License

MIT

```

### _meta.json

```json
{
  "owner": "horisky",
  "slug": "minds-eye",
  "displayName": "minds-eye",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1772743248468,
    "commit": "https://github.com/openclaw/skills/commit/c686d992fd39b00bb2c4ee438faf87d7eb7d8f7e"
  },
  "history": []
}

```

### scripts/analyze.py

```python
#!/usr/bin/env python3
"""
analyze.py - analyze an image with GPT-4o and store it as a memory

Usage:
  python analyze.py --image-path /path/to/image.jpg [--source image|chart|website] [--url URL] [--extra-tags tag1,tag2]

Output:
  Prints analysis result and confirms memory was saved.
"""

import argparse
import base64
import json
import os
import subprocess
import sys
from pathlib import Path

SCRIPT_DIR = Path(__file__).parent


def load_openclaw_config() -> dict:
    """Read ~/.openclaw/openclaw.json and return the full config dict."""
    config_path = Path.home() / ".openclaw" / "openclaw.json"
    if config_path.exists():
        return json.loads(config_path.read_text())
    return {}


def resolve_api_config() -> tuple[str, str, str]:
    """
    Returns (api_key, base_url, model) to use for vision requests.

    Priority:
    1. openclaw.json imageModel -> find matching provider
    2. openclaw.json first provider that has an apiKey
    3. OPENAI_API_KEY env var -> standard OpenAI endpoint
    4. ~/.openclaw/.env OPENAI_API_KEY
    """
    cfg = load_openclaw_config()

    providers = cfg.get("models", {}).get("providers", {})
    image_model_primary = (
        cfg.get("agents", {}).get("defaults", {}).get("imageModel", {}).get("primary", "")
    )

    # Parse "provider/model" format, e.g. "openai/gpt-4o"
    if "/" in image_model_primary:
        provider_name, model_name = image_model_primary.split("/", 1)
    else:
        provider_name, model_name = "", image_model_primary or "gpt-4o"

    # Try the named provider first
    if provider_name and provider_name in providers:
        p = providers[provider_name]
        key = p.get("apiKey", "")
        base_url = p.get("baseUrl", "https://api.openai.com/v1")
        if key:
            return key, base_url, model_name

    # Try any provider that has an apiKey
    for p in providers.values():
        key = p.get("apiKey", "")
        base_url = p.get("baseUrl", "https://api.openai.com/v1")
        if key:
            return key, base_url, model_name

    # Fall back to environment / .env file
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        env_file = Path.home() / ".openclaw" / ".env"
        if env_file.exists():
            for line in env_file.read_text().splitlines():
                line = line.strip()
                if line.startswith("OPENAI_API_KEY="):
                    key = line.split("=", 1)[1].strip().strip('"').strip("'")
                    break
    return key, "https://api.openai.com/v1", model_name


def analyze_with_gpt4o(image_path: str, source: str, api_key: str,
                        base_url: str = "https://api.openai.com/v1",
                        model: str = "gpt-4o") -> dict:
    import urllib.request
    import urllib.error

    with open(image_path, "rb") as f:
        image_data = base64.b64encode(f.read()).decode("utf-8")

    ext = Path(image_path).suffix.lower().lstrip(".")
    mime_map = {"jpg": "image/jpeg", "jpeg": "image/jpeg", "png": "image/png", "gif": "image/gif", "webp": "image/webp"}
    mime = mime_map.get(ext, "image/jpeg")

    source_hints = {
        "chart": "Focus on: chart type, title, axis labels, data values, trends, key insights.",
        "website": "Focus on: page purpose, layout structure, navigation, key UI sections, color scheme, visible text.",
        "image": "Describe all visible content in detail.",
    }
    hint = source_hints.get(source, source_hints["image"])

    prompt = (
        f"Analyze this image. {hint}\n"
        'Respond in JSON with exactly three fields:\n'
        '  "description": detailed natural-language description (in the same language as any text in the image, default Chinese if ambiguous),\n'
        '  "tags": list of 3-8 relevant keyword strings,\n'
        '  "raw_text": all visible text verbatim (empty string if none).\n'
        "Return only the JSON object, no markdown."
    )

    payload = json.dumps({
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{image_data}"}},
                {"type": "text", "text": prompt},
            ],
        }],
        "max_tokens": 1024,
    }).encode("utf-8")

    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.loads(resp.read())
    except urllib.error.HTTPError as e:
        raise RuntimeError(f"OpenAI API error {e.code}: {e.read().decode()}") from e

    content = body["choices"][0]["message"]["content"].strip()
    # Strip markdown fences if present
    if content.startswith("```"):
        content = content.split("\n", 1)[1].rsplit("```", 1)[0].strip()

    try:
        result = json.loads(content)
    except json.JSONDecodeError:
        result = {"description": content, "tags": [], "raw_text": ""}

    result.setdefault("description", "")
    result.setdefault("tags", [])
    result.setdefault("raw_text", "")
    if isinstance(result["tags"], str):
        result["tags"] = [t.strip() for t in result["tags"].split(",") if t.strip()]

    return result


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--image-path", required=True)
    parser.add_argument("--source", default="image", choices=["image", "chart", "website"])
    parser.add_argument("--url", default="")
    parser.add_argument("--extra-tags", default="")
    args = parser.parse_args()

    if not os.path.isfile(args.image_path):
        print(f"ERROR: File not found: {args.image_path}", file=sys.stderr)
        sys.exit(1)

    api_key, base_url, model = resolve_api_config()
    if not api_key:
        print("ERROR: No API key found. Add OPENAI_API_KEY to ~/.openclaw/.env or configure a provider in openclaw.json.", file=sys.stderr)
        sys.exit(1)

    print(f"Analyzing {args.image_path} with {model} via {base_url} ...")
    result = analyze_with_gpt4o(args.image_path, args.source, api_key, base_url, model)

    extra_tags = [t.strip() for t in args.extra_tags.split(",") if t.strip()]
    all_tags = list(dict.fromkeys(result["tags"] + extra_tags))

    # Call store.py
    store_cmd = [
        sys.executable, str(SCRIPT_DIR / "store.py"),
        "--image-path", args.image_path,
        "--description", result["description"],
        "--tags", ",".join(all_tags),
        "--source", args.source,
    ]
    if args.url:
        store_cmd += ["--url", args.url]

    subprocess.run(store_cmd, check=True)

    if result["raw_text"]:
        print(f"  Detected text: {result['raw_text'][:300]}")


if __name__ == "__main__":
    main()

```

### scripts/capture_url.py

```python
#!/usr/bin/env python3
"""
capture_url.py - take a full-page screenshot of a URL

Usage:
  python capture_url.py --url "https://example.com"

Output:
  Prints the absolute path to the saved screenshot file.

Requirements (install once):
  pip install playwright && python -m playwright install chromium
"""

import argparse
import os
import sys
from datetime import datetime

IMAGES_DIR = os.path.expanduser("~/.multimodal-memory/images")


def capture_with_playwright(url: str, output_path: str):
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page(viewport={"width": 1440, "height": 900})
        page.goto(url, wait_until="networkidle", timeout=30000)
        page.screenshot(path=output_path, full_page=True)
        browser.close()


def capture_with_chrome_headless(url: str, output_path: str):
    import subprocess

    candidates = [
        "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
        "/Applications/Chromium.app/Contents/MacOS/Chromium",
        "chromium-browser",
        "google-chrome",
        "chromium",
    ]
    chrome = next((c for c in candidates if os.path.isfile(c) or _which(c)), None)
    if not chrome:
        raise FileNotFoundError("Chrome/Chromium not found.")

    subprocess.run(
        [
            chrome,
            "--headless",
            "--disable-gpu",
            "--no-sandbox",
            f"--screenshot={output_path}",
            "--window-size=1440,900",
            url,
        ],
        check=True,
        capture_output=True,
    )


def _which(cmd: str) -> bool:
    import shutil
    return shutil.which(cmd) is not None


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--url", required=True)
    args = parser.parse_args()

    os.makedirs(IMAGES_DIR, exist_ok=True)

    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    # Sanitise URL for use in filename
    safe = args.url.replace("https://", "").replace("http://", "").replace("/", "_")[:60]
    filename = f"website_{safe}_{timestamp}.png"
    output_path = os.path.join(IMAGES_DIR, filename)

    # Try playwright first, fall back to Chrome headless
    try:
        capture_with_playwright(args.url, output_path)
    except ImportError:
        try:
            capture_with_chrome_headless(args.url, output_path)
        except Exception as e:
            print(
                f"ERROR: Could not take screenshot. "
                f"Install playwright: pip install playwright && python -m playwright install chromium\n"
                f"Details: {e}",
                file=sys.stderr,
            )
            sys.exit(1)
    except Exception as e:
        print(f"ERROR: {e}", file=sys.stderr)
        sys.exit(1)

    print(output_path)


if __name__ == "__main__":
    main()

```

### scripts/list.py

```python
#!/usr/bin/env python3
"""
list.py - list recent visual memories

Usage:
  python list.py [--limit 20]
"""

import argparse
import json
import os
import sqlite3

DB_PATH = os.path.expanduser("~/.multimodal-memory/metadata.db")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--limit", type=int, default=20)
    args = parser.parse_args()

    if not os.path.isfile(DB_PATH):
        print("No memories stored yet.")
        return

    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT * FROM memories ORDER BY created_at DESC LIMIT ?", (args.limit,)
    ).fetchall()
    total = conn.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
    conn.close()

    if not rows:
        print("No memories stored yet.")
        return

    print(f"Showing {len(rows)} of {total} total memories:\n")
    for r in rows:
        tags = ", ".join(json.loads(r["tags"])) if r["tags"] else "-"
        url_part = f" | {r['url']}" if r["url"] else ""
        print(f"[{r['id']}] {r['created_at']} - {r['source_type']}{url_part}")
        print(f"  {r['description'][:160]}")
        print(f"  Tags: {tags}")
        if r["image_path"]:
            exists = "" if os.path.isfile(r["image_path"]) else " [missing]"
            print(f"  Image: {r['image_path']}{exists}")
        print()


if __name__ == "__main__":
    main()

```

### scripts/search.py

```python
#!/usr/bin/env python3
"""
search.py - search visual memories by keyword

Usage:
  python search.py --query "login dark mode" [--limit 10]
"""

import argparse
import json
import os
import sqlite3

DB_PATH = os.path.expanduser("~/.multimodal-memory/metadata.db")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--query", required=True)
    parser.add_argument("--limit", type=int, default=10)
    args = parser.parse_args()

    if not os.path.isfile(DB_PATH):
        print("No memories stored yet.")
        return

    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row

    # Split query into terms for broader matching
    terms = [t.strip() for t in args.query.split() if t.strip()]
    if not terms:
        print("Empty query.")
        return

    # Build WHERE clause: match any term in description OR tags
    clauses = []
    params = []
    for term in terms:
        clauses.append("(description LIKE ? OR tags LIKE ? OR url LIKE ?)")
        params += [f"%{term}%", f"%{term}%", f"%{term}%"]

    where = " OR ".join(clauses)
    rows = conn.execute(
        f"SELECT * FROM memories WHERE {where} ORDER BY created_at DESC LIMIT ?",
        params + [args.limit],
    ).fetchall()
    conn.close()

    if not rows:
        print(f"No memories found for: {args.query}")
        return

    print(f"Found {len(rows)} result(s) for '{args.query}':\n")
    for r in rows:
        tags = ", ".join(json.loads(r["tags"])) if r["tags"] else ""
        exists = ""
        if r["image_path"] and not os.path.isfile(r["image_path"]):
            exists = " [file missing]"

        print(f"[{r['id']}] {r['created_at']} - {r['source_type']}")
        print(f"  Description : {r['description']}")
        if tags:
            print(f"  Tags        : {tags}")
        if r["url"]:
            print(f"  URL         : {r['url']}")
        if r["image_path"]:
            print(f"  Image       : {r['image_path']}{exists}")
        print()


if __name__ == "__main__":
    main()

```

### scripts/store.py

```python
#!/usr/bin/env python3
"""
store.py - save a visual memory to ~/.multimodal-memory/

Usage:
  python store.py --description TEXT --tags tag1,tag2 \
                  [--image-path PATH] [--source image|chart|website] [--url URL]
"""

import argparse
import json
import os
import shutil
import sqlite3
from datetime import datetime

DATA_DIR = os.path.expanduser("~/.multimodal-memory")
IMAGES_DIR = os.path.join(DATA_DIR, "images")
DB_PATH = os.path.join(DATA_DIR, "metadata.db")
MEMORY_MD = os.path.join(DATA_DIR, "memory.md")


def init():
    os.makedirs(IMAGES_DIR, exist_ok=True)
    conn = sqlite3.connect(DB_PATH)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS memories (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,
            image_path  TEXT,
            description TEXT    NOT NULL,
            tags        TEXT    NOT NULL DEFAULT '[]',
            source_type TEXT    NOT NULL DEFAULT 'image',
            url         TEXT,
            created_at  TEXT    NOT NULL
        )
    """)
    conn.commit()
    return conn


def copy_image(src: str) -> str:
    """Copy image into the managed images dir and return new absolute path."""
    if not src or not os.path.isfile(src):
        return src
    filename = os.path.basename(src)
    dst = os.path.join(IMAGES_DIR, filename)
    # Avoid overwriting by adding timestamp suffix if needed
    if os.path.exists(dst) and dst != os.path.abspath(src):
        name, ext = os.path.splitext(filename)
        ts = datetime.now().strftime("%Y%m%d%H%M%S")
        dst = os.path.join(IMAGES_DIR, f"{name}_{ts}{ext}")
    if os.path.abspath(src) != dst:
        shutil.copy2(src, dst)
    return dst


def update_memory_md(conn: sqlite3.Connection):
    """Write a human-readable summary of the latest 30 memories to memory.md."""
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT * FROM memories ORDER BY created_at DESC LIMIT 30"
    ).fetchall()

    lines = [
        "# Multimodal Memory - Recent Entries\n",
        f"_Last updated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}_\n",
        f"_Total stored: {conn.execute('SELECT COUNT(*) FROM memories').fetchone()[0]}_\n\n",
    ]
    for r in rows:
        tags = ", ".join(json.loads(r["tags"])) if r["tags"] else ""
        url_part = f" | {r['url']}" if r["url"] else ""
        lines.append(f"## [{r['id']}] {r['created_at']} - {r['source_type']}{url_part}\n")
        lines.append(f"**Description:** {r['description']}\n")
        if tags:
            lines.append(f"**Tags:** {tags}\n")
        if r["image_path"]:
            lines.append(f"**Image:** {r['image_path']}\n")
        lines.append("\n")

    with open(MEMORY_MD, "w", encoding="utf-8") as f:
        f.writelines(lines)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--description", required=True)
    parser.add_argument("--tags", default="")
    parser.add_argument("--image-path", default="")
    parser.add_argument("--source", default="image", choices=["image", "chart", "website"])
    parser.add_argument("--url", default="")
    args = parser.parse_args()

    tags = [t.strip() for t in args.tags.split(",") if t.strip()]

    conn = init()

    saved_path = copy_image(args.image_path) if args.image_path else None

    cur = conn.execute(
        """
        INSERT INTO memories (image_path, description, tags, source_type, url, created_at)
        VALUES (?, ?, ?, ?, ?, ?)
        """,
        (
            saved_path,
            args.description,
            json.dumps(tags),
            args.source,
            args.url or None,
            datetime.now().isoformat(timespec="seconds"),
        ),
    )
    conn.commit()
    record_id = cur.lastrowid

    update_memory_md(conn)
    conn.close()

    print(f"Saved memory id={record_id}")
    print(f"  Source      : {args.source}")
    if saved_path:
        print(f"  Image       : {saved_path}")
    if args.url:
        print(f"  URL         : {args.url}")
    print(f"  Tags        : {', '.join(tags)}")
    print(f"  Description : {args.description[:200]}")


if __name__ == "__main__":
    main()

```