SkillHub Club · Ship Full Stack · Full Stack
pdf-ocr-layout
Imported from https://github.com/openclaw/skills.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Stars: 3,131
Hot score: 99
Updated: March 20, 2026
Overall rating: C (0.0)
Composite score: 0.0
Best-practice grade: F (32.4)
Install command: `npx @skill-hub/cli install openclaw-skills-pdf-ocr-layout`
Repository: openclaw/skills
Skill path: skills/baokui/pdf-ocr-layout
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install pdf-ocr-layout into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding pdf-ocr-layout to shared team environments
- Use pdf-ocr-layout for development workflows
Works across
Claude Code, Codex CLI, Gemini CLI, OpenCode
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: pdf-ocr-layout
description: Multimodal document deep analysis tool based on Zhipu GLM-OCR, GLM-4.7, and GLM-4.6V.
Use when:
- Need to extract tables from documents (PDF/images) with high precision and convert to Markdown format
- Need to automatically crop and extract illustrations and charts from document pages as independent files
- Need to perform deep semantic understanding on extracted charts (based on GLM-4.6V visual analysis)
- Need to perform logical analysis on extracted table data (based on GLM-4.7 text analysis)
Core Architecture:
1. Visual Extraction: GLM-OCR
2. Semantic Understanding: GLM-4.7 (text/tables) + GLM-4.6V (multimodal/images)
---
# GLM-OCR Multimodal Deep Analysis
This tool builds a high-precision document parsing pipeline: using **GLM-OCR** for layout element extraction, calling **GLM-4.7** for logical interpretation of table data, and calling **GLM-4.6V** for multimodal visual interpretation of images and charts.
## Pipeline Implementation Architecture
This Skill consists of two core script stages, orchestrated through `glm_ocr_pipeline.py`:
### 1. Extraction Stage (`scripts/glm_ocr_extract.py`)
- **Core Model**: GLM-OCR
- **Function**: Responsible for physical layout analysis of documents
- **Output**: Extracts table HTML and cleans it to Markdown, crops charts into independent image files based on bbox coordinates, and generates an intermediate JSON containing the full-page reading order
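The table-HTML cleanup step can be illustrated with a minimal sketch. The skill lists `beautifulsoup4` as a dependency; the stdlib-only converter below is a hypothetical stand-in for whatever `glm_ocr_extract.py` actually does, not the script's real code:

```python
from html.parser import HTMLParser


class TableToMarkdown(HTMLParser):
    """Collect <tr>/<td>/<th> text from one HTML table."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th") and self._cell is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)


def table_html_to_markdown(html: str) -> str:
    """Render the first row as the Markdown header, the rest as body rows."""
    parser = TableToMarkdown()
    parser.feed(html)
    if not parser.rows:
        return ""
    header, *body = parser.rows
    lines = ["| " + " | ".join(header) + " |",
             "|" + "---|" * len(header)]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)
```

Real OCR output tends to contain merged cells and nested markup, which this sketch deliberately ignores.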
### 2. Understanding Stage (`scripts/glm_understanding.py`)
- **Core Model**: GLM-4.7 (text) / GLM-4.6V (visual)
- **Function**: Responsible for deep semantic reasoning of content
- **Logic**:
  - **Tables**: combines the full-text context and uses GLM-4.7 to analyze the business meaning of the Markdown table data
  - **Charts**: combines the full-text context with the cropped image and uses GLM-4.6V for multimodal visual analysis
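A plausible skeleton of how `glm_ocr_pipeline.py` might chain the two stages. The function names and the dependency-injected `extract`/`understand` callables are illustrative assumptions, used so the wiring can be shown without real API calls:

```python
from typing import Callable


def run_pipeline(file_path: str, output_dir: str,
                 extract: Callable, understand: Callable) -> list:
    """Two-stage flow: layout extraction, then per-element understanding.

    `extract` stands in for scripts/glm_ocr_extract.py (GLM-OCR) and
    `understand` for scripts/glm_understanding.py (GLM-4.7 / GLM-4.6V).
    """
    # Stage 1: physical layout analysis -> list of typed elements with bboxes.
    elements = extract(file_path, output_dir)

    # Build the full-text Markdown context that stage 2 reasons against.
    full_text = "\n".join(e.get("content_info", "")
                          for e in elements if e["type"] == "text")

    # Stage 2: deep understanding only for tables and images.
    for element in elements:
        if element["type"] in ("table", "image"):
            element["deep_understanding"] = understand(element, full_text)
    return elements
```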
## Invocation Methods
### Command Line Invocation
```bash
# Run the complete pipeline: extraction -> cropping -> understanding.
# Input may be .pdf, .jpg, .png, and other formats.
python scripts/glm_ocr_pipeline.py \
--file_path "/data/report_page.jpg" \
--output_dir "/data/output"
```
## API Parameter Description
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| file_path | string | ✅ | Absolute path to input file (supports .pdf, .png, .jpg) |
| output_dir | string | ✅ | Result output directory (used to save cropped images and JSON reports) |
## Return Result Structure (JSON)
The tool returns a list of layout elements together with their deep-understanding results:
```json
[
{
"type": "table",
"bbox": [100, 200, 500, 600],
"content_info": "| Revenue | Q1 |\n|---|---|\n| 100M | ... |",
"deep_understanding": "(Generated by GLM-4.7) This table shows Q1 2024 revenue data. Combined with the 'market expansion strategy' mentioned in paragraph 3 of the body text, it can be seen that..."
},
{
"type": "image",
"bbox": [100, 700, 500, 900],
"content_info": "/data/output/images/report_page_img_2.png",
"deep_understanding": "(Generated by GLM-4.6V) This is a system architecture diagram. Visually, it shows the flow of clients connecting to servers through a Load Balancer. Combined with the title 'Fig 3' and context, this diagram is mainly used to illustrate..."
}
]
```
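Downstream code might consume this structure as sketched below. The `results.json` filename is an assumption; only the `type` and `content_info` fields come from the documented schema:

```python
import json


def split_elements(report: list) -> tuple:
    """Separate tables from images in the pipeline's JSON report."""
    tables = [e for e in report if e["type"] == "table"]
    images = [e for e in report if e["type"] == "image"]
    return tables, images


def cropped_image_paths(report_path: str) -> list:
    """Collect the cropped-image file paths from a saved report.

    For image elements, content_info holds the path of the cropped file.
    """
    with open(report_path, encoding="utf-8") as f:
        report = json.load(f)
    _, images = split_elements(report)
    return [e["content_info"] for e in images]
```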
## Environment Requirements
- Environment variable `ZHIPU_API_KEY` must be configured
- Python 3.8+
- Dependencies: `zhipuai`, `pillow`, `beautifulsoup4`
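A defensive startup check for these requirements might look like this sketch (the function names are illustrative):

```python
import os
import sys


def require_api_key() -> str:
    """Fail fast if ZHIPU_API_KEY is missing; the pipeline cannot run without it."""
    key = os.environ.get("ZHIPU_API_KEY")
    if not key:
        raise RuntimeError(
            "ZHIPU_API_KEY is not set; export it before running the pipeline")
    return key


def check_python_version(minimum=(3, 8)) -> bool:
    """The skill states Python 3.8+ is required."""
    return sys.version_info >= minimum
```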
## Notes
### 1. Model Routing Strategy
- **Tables**: content is passed to **GLM-4.7** and combined with the full-text Markdown context for logical reasoning
- **Images**: the image is Base64-encoded and passed to **GLM-4.6V**, combined with OCR-extracted titles and the full-text context for multimodal understanding
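The routing rule can be captured in a few lines. This is illustrative, not the script's actual code, and the lowercase model identifiers are assumed, not confirmed API model names:

```python
def route_model(element_type: str) -> str:
    """Pick the understanding model per the routing strategy:
    tables go to the text model, images to the vision model."""
    routing = {"table": "glm-4.7", "image": "glm-4.6v"}
    try:
        return routing[element_type]
    except KeyError:
        raise ValueError(
            f"no understanding model for element type {element_type!r}")
```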
### 2. Context Association
All understanding is grounded in the document's complete layout logic (the Markdown context), not in isolated fragment analysis.
### 3. PDF Processing
By default, only the first page of a multi-page PDF is processed. For batch processing, extend the loop logic at the script level.
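One way to extend the loop: pre-split the PDF into per-page images (with an external tool of your choice) and invoke the pipeline once per file, using only the two documented CLI parameters. This sketch only builds the command lines and does not execute them; the directory layout is an assumption:

```python
from pathlib import Path


def build_page_commands(pages_dir: str, output_dir: str) -> list:
    """One pipeline invocation per page image found in pages_dir.

    Each page gets its own output subdirectory so cropped images
    and JSON reports from different pages do not collide.
    """
    commands = []
    for page in sorted(Path(pages_dir).glob("*.png")):
        commands.append([
            "python", "scripts/glm_ocr_pipeline.py",
            "--file_path", str(page),
            "--output_dir", f"{output_dir}/{page.stem}",
        ])
    return commands
```

The resulting lists can be passed to `subprocess.run` one at a time, or in parallel if the API quota allows it.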
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### _meta.json
```json
{
"owner": "baokui",
"slug": "pdf-ocr-layout",
"displayName": "pdf-ocr-layout",
"latest": {
"version": "1.0.2",
"publishedAt": 1770776206884,
"commit": "https://github.com/openclaw/skills/commit/e5867013bd6321fd9b60d789c43d622780aeb074"
},
"history": [
{
"version": "1.0.1",
"publishedAt": 1770718409402,
"commit": "https://github.com/openclaw/skills/commit/53b95cabe99b019aeaca6a444902089dc579166b"
}
]
}
```