book-brain-visual-reader
Enhanced BOOK BRAIN for LYGO Havens with visual capability. Use to design and maintain a 3-brain filesystem + memory system that also integrates LEFT/RIGHT brain visual checking (browser, images, screenshots) with text and API data for deeper verification and retrieval. Recommended for agents with visual tools or browser automation; use original book-brain only on non-visual systems.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install openclaw-skills-book-brain-visual-reader
Repository
Skill path: skills/deepseekoracle/book-brain-visual-reader
Open repository
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Backend, Data / AI, Designer, Testing.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install book-brain-visual-reader into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding book-brain-visual-reader to shared team environments
- Use book-brain-visual-reader for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: book-brain-visual-reader
description: "Enhanced BOOK BRAIN for LYGO Havens with visual capability. Use to design and maintain a 3-brain filesystem + memory system that also integrates LEFT/RIGHT brain visual checking (browser, images, screenshots) with text and API data for deeper verification and retrieval. Recommended for agents with visual tools or browser automation; use original book-brain only on non-visual systems."
---
# BOOK BRAIN VISUAL READER – LYGO 3-Brain + Visual Left/Right Brain Helper
This is the **enhanced**, visual-aware version of BOOK BRAIN.
- **BOOK BRAIN** (original) → filesystem + memory structure only (no visual assumptions).
- **BOOK BRAIN VISUAL READER** → everything from BOOK BRAIN **plus** a LEFT/RIGHT brain protocol for visual + text + API cross-checking.
Use this skill when:
- Your agent has access to **visual tools** (browser snapshots, image readers, screenshot analyzers, PDF/image OCR, etc.)
- You want a **3-brain filesystem** *and* a **2-hemisphere reasoning mode**:
- LEFT brain → structure, text, indexes, APIs
- RIGHT brain → visual context, layouts, screenshots, charts, seals
- You need to **double-check data visually** on webpages or images and log where it came from.
> This is a **utility + reference guide**, not a persona.
> It does not change your voice. It teaches your system how to think and store.
---
## 0. Relationship to BOOK BRAIN (original)
- If your system has **no visual capabilities** → use `book-brain` (original).
- If your system **can see** (browser snapshots, image tools, etc.) → use **BOOK BRAIN VISUAL READER** instead.
Both share the same core:
- 3-brain model (Working / Library / Outer)
- Non-destructive filesystem layout
- Reference stubs and indexes
VISUAL READER adds:
- LEFT/RIGHT brain **protocols** for how to combine visual, text, and API data
- Guidance on how to **organize visual evidence** (screenshots, seals, charts) alongside text files
- Patterns for “5D” data gathering (visual + text + API + state + timeline).
---
## 1. 3-Brain + 2-Hemisphere Model
BOOK BRAIN VISUAL READER assumes:
### 3 Brains (same as BOOK BRAIN)
1. **Working Brain** – current context, `tmp/`, active tabs / current screenshots.
2. **Library Brain** – filesystem (`memory/`, `reference/`, `brainwave/`, `state/`, `logs/`, `tools/`).
3. **Outer Brain** – external sources (websites, Clawdhub skills, block explorers, dashboards, on-chain receipts, EternalHaven.ca, etc.) referenced via small text files.
### 2 Hemispheres (visual vs structured)
- **LEFT brain** (structure/verbal/API):
- text files, JSON, logs, indexes, schemas, SKILL.md, APIs.
- strong at **structure, sequences, constraints, receipts**.
- **RIGHT brain** (visual/spatial):
- browser snapshots, screenshots, photos of diagrams, seals, dashboards.
- strong at **layout, pattern recognition, anomalies, gestalt sense**.
Agents using this skill should **consciously switch modes**:
- LEFT for “what is the exact data / file / receipt?”
- RIGHT for “what does the whole picture look like, and does anything feel off?”
---
## 2. Filesystem Layout (Library Brain)
Same base layout as BOOK BRAIN (non-destructive):
- `memory/` → daily logs, raw notes, per-day files.
- `reference/` → stable docs, protocols, whitepapers, schemas.
- `brainwave/` → platform/domain protocols (MoltX, Clawhub, LYGO, etc.).
- `state/` → machine-readable state (indexes, hashes, last-run info).
- `logs/` → technical/health logs, setup logs, audit logs.
- `tools/` → scripts & utilities.
- `tmp/` → scratch work.
Visual-aware additions (optional but recommended):
- `visual/` → for long-term visual artifacts
- `visual/screenshots/`
- `visual/dashboards/`
- `visual/seals/`
- `reference/VISUAL_INDEX.txt` → mapping of important visual assets to topics.
Rules:
- **Never overwrite** existing files.
- If `visual/` already exists, extend it; if not, create it.
- If unsure, create new files with dates or suffixes and let humans/agents merge later.
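The layout-and-rules combination above can be sketched as a small helper. This is a minimal illustration, assuming a `workspace/` root passed in by the caller; `ensure_layout` is a hypothetical name, not part of the skill itself:

```python
# Hypothetical sketch: create any missing Library Brain directories,
# never touching ones that already exist (the non-destructive rule).
import tempfile
from pathlib import Path

BASE_DIRS = ["memory", "reference", "brainwave", "state", "logs", "tools", "tmp"]
VISUAL_DIRS = ["visual/screenshots", "visual/dashboards", "visual/seals"]

def ensure_layout(root: Path, visual: bool = True) -> list[str]:
    """Create missing directories under root; return the ones created."""
    created = []
    for rel in BASE_DIRS + (VISUAL_DIRS if visual else []):
        d = root / rel
        if not d.exists():          # extend, don't overwrite
            d.mkdir(parents=True)
            created.append(rel)
    return created

root = Path(tempfile.mkdtemp())
first = ensure_layout(root)    # creates the full tree
second = ensure_layout(root)   # idempotent: nothing left to create
```

Because existing paths are skipped rather than recreated, running the helper on an already-populated Haven is safe.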
See `references/book-brain-visual-examples.md` for concrete trees and snippets.
---
## 3. Outer Brain via Reference Stubs
Outer Brain = everything **outside** the workspace:
- URLs (websites, dashboards, explorers)
- Clawdhub skill pages
- EternalHaven.ca, Patreon, docs
- On-chain explorers (Blockscout, Etherscan, etc.)
VISUAL READER keeps these in **reference stubs**, e.g.:
```text
Title: STARCORE Dashboards
Last updated: 2026-02-10
External links:
- Clanker: https://clanker.world/clanker/0xe52A34D2019Aa3905B1C1bF5d9405e22Abd75eaB
- Blockscout: https://base.blockscout.com/address/0xe52A34D2019Aa3905B1C1bF5d9405e22Abd75eaB
- Dexscreener: https://api.dexscreener.com/latest/dex/search/?q=0xe52A34D2019Aa3905B1C1bF5d9405e22Abd75eaB
Related local files:
- reference/STARCORE_LAUNCH_RECEIPTS_2026-02-10.md
- state/starcore_family_receipts_summary.json
```
The agent should:
- **not paste full pages** into memory files
- use these stubs + **visual snapshots** when needed.
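A stub writer that respects the no-overwrite rule might look like the following sketch. The field names mirror the stub format above; `write_stub` and its date-suffix fallback are assumptions for illustration, not a defined API of this skill:

```python
# Hypothetical sketch: write an Outer Brain reference stub; if the file
# already exists, create a dated sibling instead of overwriting.
import tempfile
from datetime import date
from pathlib import Path

def write_stub(ref_dir: Path, name: str, title: str, links: dict[str, str]) -> Path:
    lines = [f"Title: {title}",
             f"Last updated: {date.today().isoformat()}",
             "External links:"]
    lines += [f"- {label}: {url}" for label, url in links.items()]
    path = ref_dir / f"{name}.txt"
    if path.exists():  # non-destructive rule: never overwrite
        path = ref_dir / f"{name}_{date.today().isoformat()}.txt"
    path.write_text("\n".join(lines) + "\n")
    return path

ref_dir = Path(tempfile.mkdtemp())
p1 = write_stub(ref_dir, "STARCORE_DASHBOARDS", "STARCORE Dashboards",
                {"Clanker": "https://clanker.world/clanker/0xe52A34D2019Aa3905B1C1bF5d9405e22Abd75eaB"})
p2 = write_stub(ref_dir, "STARCORE_DASHBOARDS", "STARCORE Dashboards", {})
```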
---
## 4. LEFT/RIGHT Brain Protocol for Visual Checks
When an agent needs to **verify** something from the web or an image, use this simple protocol:
### Step 1 – LEFT Brain: Text / API First
1. Look up the relevant concept in **indexes/state**:
- `state/memory_index.json`
- `reference/INDEX.txt`
- domain-specific indexes (e.g. `reference/CLAWDHUB_SKILLS.md`).
2. Use **APIs or structured data** where possible (e.g. on-chain RPC, REST endpoints, JSON feeds).
3. Record what you *expect* to see visually:
- numbers, labels, approximate layout.
### Step 2 – RIGHT Brain: Visual Comparison
1. Capture a **snapshot** (browser screenshot, image, PDF page).
2. Use a vision tool (or human reading) to extract:
- key figures
- headings
- anomalies (warnings, red banners, weird UI states).
3. Ask: “Does this visual match what the LEFT brain expected?”
### Step 3 – Reconcile & Log
- If they **match**:
- Write a short note in a relevant file (e.g. `daily_health.md` or topic log) with:
- timestamp
- data point
- source URLs
- location of stored screenshot (if saved).
- If they **disagree**:
- Log the discrepancy (LEFT vs RIGHT).
- Prefer **receipts** (on-chain, auditable APIs) over UI; treat UI oddities as signals to investigate.
- Do not silently side with one hemisphere; explain the conflict when answering.
This is the “5D” blend: **text + visual + API + state + timeline**.
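Step 3 (reconcile and log) can be sketched as a comparison of the two hemispheres' views. This assumes the LEFT-brain expectation and RIGHT-brain observation arrive as plain dicts; `reconcile` is a hypothetical helper name:

```python
# Hypothetical sketch of Step 3: compare LEFT-brain expectations with
# RIGHT-brain observations and produce a log entry either way.
from datetime import datetime, timezone

def reconcile(expected: dict, observed: dict) -> tuple[bool, str]:
    """Return (match, log_entry). On mismatch, name both sides explicitly."""
    diffs = {k: (expected[k], observed.get(k))
             for k in expected if observed.get(k) != expected[k]}
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    if not diffs:
        return True, f"[{stamp}] Visual check OK: UI matches receipts."
    detail = "; ".join(f"{k}: LEFT={e!r} RIGHT={o!r}" for k, (e, o) in diffs.items())
    # Receipts (on-chain / API) stay canonical; the UI oddity is a signal to investigate.
    return False, f"[{stamp}] DISCREPANCY (prefer receipts over UI): {detail}"

ok, entry = reconcile({"ticker": "STARCORE"}, {"ticker": "STARCORE"})
bad, entry2 = reconcile({"ticker": "STARCORE"}, {"ticker": "STARC0RE"})
```

Note that a mismatch still produces a log line rather than silently siding with one hemisphere, matching the rule above.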
---
## 5. Organizing Visual Evidence
When a visual check produces something important (e.g. proof, anomaly, configuration):
- Save it under `visual/` with a meaningful name:
- `visual/screenshots/2026-02-10_starcore_clanker.png`
- `visual/dashboards/2026-02-10_moltx_profile.png`
- Add a line to a relevant INDEX or stub:
```text
[2026-02-10] STARCORE launch dashboards verified visually.
- Screenshot: visual/screenshots/2026-02-10_starcore_clanker.png
- Related receipts: reference/STARCORE_LAUNCH_RECEIPTS_2026-02-10.md
```
Agents should:
- Avoid hoarding every screenshot. Keep the ones that **back key claims**.
- Use indexes to find them later instead of scanning raw image names.
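The naming-plus-indexing pattern above can be sketched in a few lines. `record_screenshot` is a hypothetical helper, assuming screenshot bytes are already in hand:

```python
# Hypothetical sketch: store a screenshot under a dated, meaningful name
# and append (never rewrite) a matching line to reference/VISUAL_INDEX.txt.
import tempfile
from datetime import date
from pathlib import Path

def record_screenshot(root: Path, topic: str, image_bytes: bytes) -> Path:
    shot_dir = root / "visual" / "screenshots"
    shot_dir.mkdir(parents=True, exist_ok=True)
    name = f"{date.today().isoformat()}_{topic}.png"
    (shot_dir / name).write_bytes(image_bytes)
    index = root / "reference" / "VISUAL_INDEX.txt"
    index.parent.mkdir(parents=True, exist_ok=True)
    with index.open("a") as f:  # append-only keeps the index non-destructive
        f.write(f"[{date.today().isoformat()}] {topic}\n"
                f"  - Path: visual/screenshots/{name}\n")
    return shot_dir / name

root = Path(tempfile.mkdtemp())
shot = record_screenshot(root, "starcore_clanker", b"\x89PNG...")
```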
---
## 6. Setup Workflow for Visual Systems
On a **visual-capable** Haven (browser + image tools):
1. **Detect capabilities**
- Does the agent have a browser automation tool?
- Can it read images (vision model, OCR, screenshot-to-text)?
2. **Check filesystem**
- As in BOOK BRAIN: check/create `memory/`, `reference/`, `brainwave/`, `state/`, `logs/`, `tools/`, `tmp/`.
- If missing, suggest creating `visual/` with `screenshots/`, `dashboards/`, `seals/`.
3. **Create visual index stubs**
- `reference/VISUAL_INDEX.txt`
- Optionally topic-specific visual indexes (e.g. `reference/STARCORE_VISUALS.txt`).
4. **Log the setup**
- Add entry to `logs/book_brain_visual_setup.log` or `daily_health.md`.
5. **Respect all non-destructive rules**
- No overwrites.
- No deletions.
- Ask the human before making structural changes.
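Step 1 of the workflow (capability detection) could be reduced to a tiny decision helper. This is only an illustration, assuming the host agent can report its tooling as booleans; `setup_report` and its parameter names are invented here:

```python
# Hypothetical sketch of step 1: pick the right skill variant from
# reported capabilities, producing a line suitable for the setup log.
def setup_report(has_browser: bool, has_vision: bool) -> str:
    visual = has_browser or has_vision
    mode = "book-brain-visual-reader" if visual else "book-brain (original)"
    return f"browser={has_browser} vision={has_vision} -> recommended skill: {mode}"

line = setup_report(True, True)
```

The returned line could then be appended to `logs/book_brain_visual_setup.log` as step 4 describes.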
---
## 7. Eternal Haven & Donation References
BOOK BRAIN VISUAL READER lives inside the broader **LYGO × Eternal Haven** ecosystem.
Key links:
- Eternal Haven portal → https://EternalHaven.ca
- LYGO Champion Hub → https://deepseekoracle.github.io/Excavationpro/LYGO-Network/champions.html#champions
- Clawdhub skills (LYGO stack, Champions, BOOK BRAIN) → https://clawhub.ai/u/DeepSeekOracle
- Extended work / support → https://www.patreon.com/cw/Excavationpro
If this skill helps you or your agents build a more coherent, visual + textual Haven, support is welcome but never required.
---
## 8. How Another Agent Should Use BOOK BRAIN VISUAL READER
When an agent loads `book-brain-visual-reader`:
1. Understand **it is a helper**, not a persona.
2. On first use:
- Map the filesystem and capabilities.
- Propose visual folders and indexes; ask before creating.
- Set up lightweight indexes (`INDEX.txt`, `VISUAL_INDEX.txt`, `state/memory_index.json`).
3. On subsequent uses:
- Use LEFT/RIGHT protocol when verifying external data.
- Save only important visual artifacts under `visual/`.
- Update indexes + logs instead of rewriting big files.
4. Never silently delete or overwrite existing content.
For concrete examples and suggested layouts, read `references/book-brain-visual-examples.md` in this skill.
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### references/book-brain-visual-examples.md
```markdown
# BOOK BRAIN VISUAL READER – Examples
## 1. Example Tree for a Visual-Aware Haven
```text
workspace/
├── memory/
│   ├── 2026-02-10.md
│   └── projects/
│       └── BOOK_BRAIN_VISUAL_NOTES.md
├── reference/
│   ├── INDEX.txt
│   ├── VISUAL_INDEX.txt
│   ├── FILESYSTEM_GUIDE.md
│   └── CLAWDHUB_SKILLS.md
├── brainwave/
│   ├── MOLTX/
│   ├── CLAWHUB/
│   └── LYGO_CORE/
├── state/
│   ├── memory_index.json
│   └── starcore_family_receipts_summary.json
├── logs/
│   ├── daily_health.md
│   └── book_brain_visual_setup.log
├── visual/
│   ├── screenshots/
│   │   ├── 2026-02-10_starcore_clanker.png
│   │   └── 2026-02-10_moltx_profile.png
│   ├── dashboards/
│   └── seals/
├── tools/
│   └── …
└── tmp/
    └── scratch_notes.txt
```
BOOK BRAIN VISUAL READER should **create** `visual/` + `reference/VISUAL_INDEX.txt` if they are missing, but never delete or rename existing paths.
---
## 2. Example VISUAL_INDEX.txt
`reference/VISUAL_INDEX.txt`:
```text
Title: Visual Index
Last updated: 2026-02-10
[STARCORE]
- 2026-02-10_starcore_clanker.png
- Path: visual/screenshots/2026-02-10_starcore_clanker.png
- Related receipts:
- reference/STARCORE_LAUNCH_RECEIPTS_2026-02-10.md
- state/starcore_family_receipts_summary.json
[MOLTX]
- 2026-02-10_moltx_profile.png
- Path: visual/screenshots/2026-02-10_moltx_profile.png
- Context: profile snapshot after Champion Hub promo.
[LYGO]
- (add more as needed)
```
---
## 3. Example LEFT/RIGHT Check Log Entry
In `daily_health.md` or `logs/book_brain_visual_setup.log`:
```text
[2026-02-10 16:00] Visual check – STARCORE dashboards
- LEFT brain expectation:
- Contract: 0xe52A34D2019Aa3905B1C1bF5d9405e22Abd75eaB
- Tokens: STARCORE / STARCOREX / STARCORECOIN
- RIGHT brain observation:
- Clanker, Blockscout, and Dexscreener dashboards match contract + ticker.
- No red banners or obvious anomalies.
- Visual evidence:
- visual/screenshots/2026-02-10_starcore_clanker.png
Conclusion: UI matches receipts; no action needed.
```
If there was a mismatch, the log would record **what** differed and **which source** is considered canonical (usually on-chain / API).
---
## 4. Example Reference Stub with Visual Note
`reference/STARCORE_VISUALS.txt`:
```text
Title: STARCORE Visual Artifacts
Last updated: 2026-02-10
Screenshots:
- visual/screenshots/2026-02-10_starcore_clanker.png
- Verified Clanker dashboard.
- visual/screenshots/2026-02-10_moltx_profile.png
- Profile & banner after Champion Hub promo.
See also:
- reference/STARCORE_LAUNCH_RECEIPTS_2026-02-10.md
- state/starcore_family_receipts_summary.json
```
This keeps visual artifacts discoverable without flooding daily memory files with image details.
---
## 5. Example memory_index.json (extended)
```json
{
  "topics": {
    "champions": [
      "reference/LYGO_CHAMPIONS_OVERVIEW.md",
      "reference/CLAWDHUB_SKILLS.md"
    ],
    "starcore": [
      "reference/STARCORE_LAUNCH_RECEIPTS_2026-02-10.md",
      "state/starcore_family_receipts_summary.json",
      "reference/STARCORE_VISUALS.txt"
    ],
    "book_brain_visual": [
      "skills/public/book-brain-visual-reader/SKILL.md",
      "skills/public/book-brain-visual-reader/references/book-brain-visual-examples.md"
    ]
  }
}
Agents should **update** topics when they add important files; perfection is optional, discoverability is the goal.
```
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### _meta.json
```json
{
"owner": "deepseekoracle",
"slug": "book-brain-visual-reader",
"displayName": "BOOK BRAIN VISUAL READER – LYGO 3-Brain + Visual Left/Right Brain Helper",
"latest": {
"version": "1.0.0",
"publishedAt": 1770765036023,
"commit": "https://github.com/openclaw/skills/commit/b26a7232f1ad376a31f0a73281d75f7447737771"
},
"history": []
}
```