depth-map-generation
Generate depth maps from images using each::sense AI. Create depth estimation for 3D effects, parallax animations, VR/AR applications, focus effects, and stereo image generation.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install openclaw-skills-depth-map-generation
Repository
Skill path: skills/eftalyurtseven/depth-map-generation
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Data / AI.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install depth-map-generation into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding depth-map-generation to shared team environments
- Use depth-map-generation for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: depth-map-generation
description: Generate depth maps from images using each::sense AI. Create depth estimation for 3D effects, parallax animations, VR/AR applications, focus effects, and stereo image generation.
metadata:
  author: eachlabs
  version: "1.0"
---
# Depth Map Generation
Generate accurate depth maps from any image using each::sense. This skill extracts depth information from 2D images for 3D effects, parallax animations, VR/AR applications, computational photography, and more.
## Features
- **Monocular Depth Estimation**: Extract depth from single images
- **Portrait Depth Maps**: Precise depth for portrait photography effects
- **Landscape Depth**: Scene depth for panoramic and landscape images
- **Product Depth**: 3D-ready depth maps for e-commerce
- **Architectural Depth**: Building and interior depth analysis
- **3D Parallax Effects**: Depth data for Ken Burns-style animations
- **VR/AR Depth**: Real-time depth estimation for immersive experiences
- **Stereo Image Generation**: Convert 2D to stereoscopic 3D
- **Focus Stacking**: Depth-based focus plane selection
- **Background Blur**: Depth-aware bokeh and blur effects
## Quick Start
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this photo",
"image_urls": ["https://example.com/photo.jpg"],
"mode": "max"
}'
```
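The same request can be issued from a script. This sketch (Python is an assumption here; any HTTP client works, and the `build_depth_request` helper is illustrative, not part of the API) assembles the headers and JSON body shown in the curl command above:

```python
import json
import os

API_URL = "https://sense.eachlabs.run/chat"

def build_depth_request(message, image_urls, mode="max", session_id=None):
    """Assemble headers and JSON body for the each::sense /chat endpoint."""
    headers = {
        "Content-Type": "application/json",
        "X-API-Key": os.environ.get("EACHLABS_API_KEY", ""),
        "Accept": "text/event-stream",  # responses arrive as an SSE stream
    }
    payload = {"message": message, "image_urls": image_urls, "mode": mode}
    if session_id:
        payload["session_id"] = session_id
    return headers, json.dumps(payload)

headers, body = build_depth_request(
    "Generate a depth map from this photo",
    ["https://example.com/photo.jpg"],
)
# Send with any HTTP client, e.g.:
#   requests.post(API_URL, headers=headers, data=body, stream=True, timeout=600)
print(body)
```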
## Depth Map Output Formats
| Format | Description | Use Case |
|--------|-------------|----------|
| Grayscale | 8-bit depth (white=near, black=far) | General purpose, visualization |
| Inverse Grayscale | 8-bit depth (black=near, white=far) | Some 3D software compatibility |
| Normalized | 0-1 range depth values | Machine learning pipelines |
| Metric | Real-world distance estimation | AR/VR, robotics |
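Conversions between these formats are simple pixel-wise maps. A minimal sketch in plain Python (no imaging library assumed; the metric mapping is a placeholder that only applies when you already know the scene's near/far bounds — true metric depth must come from the model):

```python
def to_normalized(gray8):
    """8-bit grayscale (white=near) -> 0-1 range, 1.0 = nearest."""
    return [v / 255.0 for v in gray8]

def to_inverse(gray8):
    """Flip the near/far convention (white=near -> black=near)."""
    return [255 - v for v in gray8]

def normalized_to_metric(norm, near_m=0.5, far_m=20.0):
    """Linearly map relative 0-1 depth onto an assumed metric range.
    1.0 (nearest) maps to near_m; 0.0 (farthest) maps to far_m."""
    return [far_m - v * (far_m - near_m) for v in norm]

row = [255, 128, 0]        # near, mid, far pixels
print(to_inverse(row))     # [0, 127, 255]
print(to_normalized(row))
```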
## Use Case Examples
### 1. Generate Depth Map from Photo
Basic depth extraction from any image.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this image. Output as grayscale where white represents closer objects and black represents distant objects.",
"image_urls": ["https://example.com/scene.jpg"],
"mode": "max"
}'
```
### 2. Portrait Depth Map
Extract precise depth from portrait photos for bokeh effects and 3D portraits.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Create a high-precision depth map from this portrait photo. Focus on accurate edge detection around the subject, especially hair and facial features. I need this for applying realistic depth-of-field effects.",
"image_urls": ["https://example.com/portrait.jpg"],
"mode": "max"
}'
```
### 3. Landscape Depth Map
Generate depth from landscape and outdoor scenes with extended depth range.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this landscape photo. The scene has foreground elements, mid-ground terrain, and distant mountains. Capture the full depth range from near to far with good separation between depth layers.",
"image_urls": ["https://example.com/landscape.jpg"],
"mode": "max"
}'
```
### 4. Product Depth for 3D Effect
Create depth maps for product images to enable 3D viewing experiences.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Extract a depth map from this product photo. I need accurate depth information to create a 3D interactive view for an e-commerce website. Focus on capturing the product shape and surface details.",
"image_urls": ["https://example.com/product-sneaker.jpg"],
"mode": "max"
}'
```
### 5. Architectural Depth Map
Generate depth from architectural and interior photos for visualization.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Create a depth map from this interior architecture photo. Capture the spatial relationships between walls, furniture, and architectural elements. I need this for a virtual tour with depth-based transitions.",
"image_urls": ["https://example.com/interior.jpg"],
"mode": "max"
}'
```
### 6. 3D Parallax Effect Creation
Generate depth maps optimized for creating parallax animations and Ken Burns effects.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this photo that I will use for a 3D parallax animation. I need clear depth separation between foreground, midground, and background elements. The depth should be smooth with distinct layers for a compelling parallax effect.",
"image_urls": ["https://example.com/scene-for-parallax.jpg"],
"mode": "max"
}'
```
### 7. VR/AR Depth Estimation
Create depth maps suitable for virtual and augmented reality applications.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this room photo for AR/VR use. I need metric depth estimation that accurately represents real-world distances. This will be used for placing virtual objects in an augmented reality application.",
"image_urls": ["https://example.com/room.jpg"],
"mode": "max"
}'
```
### 8. Stereo Image Generation
Convert 2D images to stereoscopic 3D using depth estimation.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this photo and use it to create a stereoscopic 3D image pair (left and right eye views). The stereo effect should be subtle enough for comfortable viewing but noticeable enough to create depth perception.",
"image_urls": ["https://example.com/photo-for-stereo.jpg"],
"mode": "max"
}'
```
### 9. Focus Stacking Depth
Generate depth maps for computational focus stacking and all-in-focus composites.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Create a depth map from this macro/close-up photo. I need precise depth information to identify focus planes for computational focus stacking. Each depth layer should be clearly defined so I can select which areas should be in focus.",
"image_urls": ["https://example.com/macro-photo.jpg"],
"mode": "max"
}'
```
### 10. Depth-Aware Background Blur
Generate depth for applying realistic bokeh and background blur effects.
```bash
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this photo for applying depth-aware background blur. The subject in the foreground should be clearly separated from the background. I need accurate edge detection so the blur transition looks natural, similar to a portrait mode effect.",
"image_urls": ["https://example.com/photo-for-blur.jpg"],
"mode": "max"
}'
```
## Multi-Turn Depth Processing
Use `session_id` to refine depth maps or process multiple related images:
```bash
# Initial depth estimation
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this photo",
"image_urls": ["https://example.com/scene.jpg"],
"session_id": "depth-project-001"
}'
# Refine the depth map
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "The depth map looks good but can you enhance the edge detection around the main subject? The boundaries are a bit fuzzy.",
"session_id": "depth-project-001"
}'
# Apply the depth map for an effect
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Now use this depth map to create a 3D parallax video animation with subtle camera movement",
"session_id": "depth-project-001"
}'
```
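The session pattern above can be wrapped in a small helper that pins one `session_id` across turns. This is a sketch; the class name and structure are illustrative, and only `message`, `image_urls`, `mode`, and `session_id` are real request fields:

```python
import json

class DepthSession:
    """Builds successive /chat payloads that share one session_id."""

    def __init__(self, session_id):
        self.session_id = session_id
        self.turns = []

    def next_payload(self, message, image_urls=None, mode=None):
        payload = {"message": message, "session_id": self.session_id}
        if image_urls:
            payload["image_urls"] = image_urls
        if mode:
            payload["mode"] = mode
        self.turns.append(payload)
        return json.dumps(payload)

s = DepthSession("depth-project-001")
s.next_payload("Generate a depth map from this photo",
               ["https://example.com/scene.jpg"])
s.next_payload("Enhance the edge detection around the main subject")
s.next_payload("Now create a 3D parallax video animation from it")
print(len(s.turns))  # 3
```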
## Batch Depth Processing
Process multiple images for consistent depth estimation:
```bash
# Process first image
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate a depth map from this product photo. I will be sending more product images that need consistent depth estimation.",
"image_urls": ["https://example.com/product1.jpg"],
"session_id": "product-depth-batch",
"mode": "eco"
}'
# Process second image with same settings
curl -X POST https://sense.eachlabs.run/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-H "Accept: text/event-stream" \
-d '{
"message": "Generate depth map for this product using the same approach as before",
"image_urls": ["https://example.com/product2.jpg"],
"session_id": "product-depth-batch",
"mode": "eco"
}'
```
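The batch pattern reduces to a loop that reuses one `session_id` and `eco` mode across product images. A sketch (the loop and variable names are illustrative, not part of the skill):

```python
import json

urls = ["https://example.com/product1.jpg", "https://example.com/product2.jpg"]
batch_requests = []
for i, url in enumerate(urls):
    note = (" using the same approach as before" if i else
            ". I will send more product images that need consistent depth estimation")
    batch_requests.append(json.dumps({
        "message": f"Generate a depth map from this product photo{note}",
        "image_urls": [url],
        "session_id": "product-depth-batch",  # shared session keeps settings consistent
        "mode": "eco",                        # eco: faster, good enough for batches
    }))
print(len(batch_requests))  # 2
```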
## Mode Selection
| Mode | Best For | Speed | Quality |
|------|----------|-------|---------|
| `max` | Final production depth maps, VR/AR applications, professional compositing | Slower | Highest precision |
| `eco` | Quick previews, batch processing, prototyping | Faster | Good quality |
**Recommendation:** Use `max` mode when depth accuracy is critical (VR/AR, 3D conversion, professional compositing). Use `eco` mode for rapid iteration and batch processing.
## Best Practices
### Input Image Quality
- **Resolution**: Higher resolution inputs produce more detailed depth maps
- **Lighting**: Even lighting helps with accurate depth estimation
- **Contrast**: Clear contrast between objects improves depth separation
- **Focus**: Sharp images yield better edge detection in depth maps
### Depth Map Applications
- **Parallax Effects**: Use 3-5 distinct depth layers for best results
- **Bokeh/Blur**: Ensure clean subject edges for natural blur falloff
- **3D Conversion**: Provide context about scene scale for metric depth
- **VR/AR**: Request metric depth for real-world distance accuracy
### Prompt Tips
1. **Specify output format**: Grayscale, normalized, or metric depth
2. **Describe the scene**: Help the model understand spatial relationships
3. **State your use case**: Different applications benefit from different depth characteristics
4. **Request edge quality**: Specify if you need sharp or smooth depth transitions
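The four tips above compose naturally into a prompt template. This helper is purely illustrative (its name and arguments are not part of the skill), but it shows how each tip becomes one sentence of the message:

```python
def depth_prompt(subject, output_format="grayscale where white is near",
                 scene_notes=None, use_case=None, edge_quality=None):
    """Assemble a depth-map prompt from the four tips above."""
    parts = [f"Generate a depth map from this {subject}.",
             f"Output as {output_format}."]            # tip 1: output format
    if scene_notes:
        parts.append(scene_notes)                      # tip 2: describe the scene
    if use_case:
        parts.append(f"I need this for {use_case}.")   # tip 3: state the use case
    if edge_quality:
        parts.append(f"Edges should be {edge_quality}.")  # tip 4: edge quality
    return " ".join(parts)

print(depth_prompt(
    "landscape photo",
    scene_notes="The scene has foreground rocks and distant mountains.",
    use_case="a 3D parallax animation",
    edge_quality="smooth with distinct layers",
))
```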
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| `Failed to create prediction: HTTP 422` | Insufficient balance | Top up at eachlabs.ai |
| Image loading failed | Invalid or inaccessible image URL | Verify image URL is publicly accessible |
| Timeout | Complex or high-resolution image | Set client timeout to at least 10 minutes |
| Low quality depth output | Poor input image quality | Use higher resolution, better lit source image |
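Client code can branch on these cases. A hedged sketch that maps the table's errors to remedial actions (errors are matched as substrings here; the real API may structure error responses differently):

```python
def suggest_action(error_message):
    """Map an error string from the table above to a remedial action."""
    msg = error_message.lower()
    if "http 422" in msg:
        return "Top up your balance at eachlabs.ai"
    if "image" in msg and ("load" in msg or "fail" in msg):
        return "Verify the image URL is publicly accessible"
    if "timeout" in msg:
        return "Raise the client timeout to at least 10 minutes and retry"
    return "Inspect the input image quality and retry"

print(suggest_action("Failed to create prediction: HTTP 422"))
```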
## Technical Notes
- **Client Timeout**: Set your HTTP client timeout to at least 10 minutes for complex depth estimation
- **Image Formats**: Supports JPEG, PNG, WebP input images
- **Output Format**: Depth maps are typically output as grayscale PNG images
- **Depth Range**: Relative depth (0-1) by default; metric depth available on request
## Related Skills
- `each-sense` - Core API documentation
- `image-to-3d` - Full 3D model generation from images
- `image-editing` - Apply depth-based effects to images
- `video-generation` - Create parallax videos from depth maps
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### _meta.json
```json
{
"owner": "eftalyurtseven",
"slug": "depth-map-generation",
"displayName": "Depth Map Generation",
"latest": {
"version": "1.0.0",
"publishedAt": 1771595283111,
"commit": "https://github.com/openclaw/skills/commit/ce5e08d5e49307b7a0af34bc0021960f9c943776"
},
"history": []
}
```
### references/SSE-EVENTS.md
````markdown
# SSE Event Reference
Detailed documentation for all Server-Sent Events (SSE) returned by the each::sense `/chat` endpoint.
## Event Format
Each event follows this format:
```
data: {"type": "event_type", ...fields}\n\n
```
Stream ends with:
```
data: [DONE]\n\n
```
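Concretely, a client splits the stream on lines, strips the `data: ` prefix, decodes each JSON payload, and stops at `[DONE]`. A minimal parser sketch (Python assumed; any language works the same way):

```python
import json

def parse_sse(stream_text):
    """Yield decoded event dicts from raw SSE text; stop at [DONE]."""
    for line in stream_text.splitlines():
        line = line.strip()
        if not line.startswith("data: "):
            continue              # skip blank separator lines
        data = line[len("data: "):]
        if data == "[DONE]":
            return                # end of stream
        yield json.loads(data)

raw = (
    'data: {"type": "status", "message": "Generating..."}\n\n'
    'data: {"type": "generation_response", "url": "https://example.com/depth.png"}\n\n'
    'data: [DONE]\n\n'
)
events = list(parse_sse(raw))
print([e["type"] for e in events])  # ['status', 'generation_response']
```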
---
## Event Types
### thinking_delta
Claude's reasoning as it streams in real time. Use this to show users what the AI is thinking.
```json
{
"type": "thinking_delta",
"content": "Let me find the best model for portrait generation..."
}
```
| Field | Type | Description |
|-------|------|-------------|
| `content` | string | Incremental thinking text |
---
### status
Current operation being executed. Shows tool usage and parameters.
```json
{
"type": "status",
"message": "Searching for image generation models...",
"tool_name": "search_models",
"parameters": {"use_case": "text to image portrait"}
}
```
| Field | Type | Description |
|-------|------|-------------|
| `message` | string | Human-readable status message |
| `tool_name` | string | Internal tool being used |
| `parameters` | object | Tool parameters (optional) |
---
### text_response
Text content from the AI (explanations, answers, plans).
```json
{
"type": "text_response",
"content": "I'll create a stunning portrait for you with cinematic lighting and a warm mood."
}
```
| Field | Type | Description |
|-------|------|-------------|
| `content` | string | Text response content |
---
### generation_response
Generated media URL (image or video). This is the primary output event.
```json
{
"type": "generation_response",
"url": "https://storage.eachlabs.ai/outputs/abc123.png",
"generations": ["https://storage.eachlabs.ai/outputs/abc123.png"],
"total": 1,
"tool_name": "execute_model",
"model": "nano-banana-pro"
}
```
| Field | Type | Description |
|-------|------|-------------|
| `url` | string | Primary output URL |
| `generations` | array | All generated URLs |
| `total` | number | Total number of generations |
| `tool_name` | string | Tool that generated output |
| `model` | string | Model used for generation |
---
### clarification_needed
AI needs more information to proceed. Present options to the user.
```json
{
"type": "clarification_needed",
"question": "What type of edit would you like to make to this image?",
"options": [
"Remove the background",
"Apply a style transfer",
"Upscale to higher resolution",
"Add or modify elements"
],
"context": "I can see you've uploaded an image, but I need to understand what changes you'd like."
}
```
| Field | Type | Description |
|-------|------|-------------|
| `question` | string | The question to ask the user |
| `options` | array | Suggested options (can be displayed as buttons) |
| `context` | string | Additional context about the clarification |
**Handling:** Display the question and options to the user. Send their response in a follow-up request with the same `session_id`.
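In code, that handling step looks like this sketch (the helper is illustrative; only `message` and `session_id` are real request fields):

```python
import json

def answer_clarification(event, chosen_option, session_id):
    """Turn a clarification_needed event plus the user's choice
    into the follow-up /chat request body."""
    assert event["type"] == "clarification_needed"
    return json.dumps({
        "message": chosen_option,   # e.g. one of event["options"]
        "session_id": session_id,   # must match the original request
    })

event = {
    "type": "clarification_needed",
    "question": "What type of edit would you like to make?",
    "options": ["Remove the background", "Upscale to higher resolution"],
}
print(answer_clarification(event, event["options"][0], "depth-project-001"))
```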
---
### web_search_query
Web search being executed.
```json
{
"type": "web_search_query",
"query": "best AI video generation models 2024",
"recency": "month"
}
```
| Field | Type | Description |
|-------|------|-------------|
| `query` | string | Search query |
| `recency` | string | Time filter (day, week, month, year) |
---
### web_search_citations
Citations from web search results.
```json
{
"type": "web_search_citations",
"citations": [
"https://example.com/ai-video-comparison",
"https://techblog.com/veo3-review"
],
"count": 2
}
```
| Field | Type | Description |
|-------|------|-------------|
| `citations` | array | URLs of sources cited |
| `count` | number | Number of citations |
---
### workflow_created
New workflow was created for complex multi-step generation.
```json
{
"type": "workflow_created",
"workflow_id": "wf_abc123",
"version_id": "v1",
"input_schema": {
"properties": {
"character_description": {
"type": "text",
"required": true,
"default_value": ""
}
}
},
"steps_count": 5
}
```
| Field | Type | Description |
|-------|------|-------------|
| `workflow_id` | string | Unique workflow identifier |
| `version_id` | string | Workflow version |
| `input_schema` | object | Schema for workflow inputs |
| `steps_count` | number | Number of steps in workflow |
---
### workflow_fetched
Existing workflow was loaded (when `workflow_id` is provided in request).
```json
{
"type": "workflow_fetched",
"workflow_name": "Product Video Generator",
"existing_steps": 3,
"existing_definition": {...}
}
```
| Field | Type | Description |
|-------|------|-------------|
| `workflow_name` | string | Name of the workflow |
| `existing_steps` | number | Number of existing steps |
| `existing_definition` | object | Current workflow definition |
---
### workflow_built
Workflow definition was constructed.
```json
{
"type": "workflow_built",
"steps_count": 4,
"definition": {
"version": "v1",
"input_schema": {...},
"steps": [...]
}
}
```
| Field | Type | Description |
|-------|------|-------------|
| `steps_count` | number | Number of steps |
| `definition` | object | Full workflow definition |
---
### workflow_updated
Workflow was pushed to the EachLabs API.
```json
{
"type": "workflow_updated",
"success": true,
"workflow_id": "wf_abc123",
"version_id": "v1",
"definition": {...}
}
```
| Field | Type | Description |
|-------|------|-------------|
| `success` | boolean | Whether update succeeded |
| `workflow_id` | string | Workflow identifier |
| `version_id` | string | Version identifier |
| `definition` | object | Updated definition |
---
### execution_started
Workflow execution has begun.
```json
{
"type": "execution_started",
"execution_id": "exec_xyz789",
"workflow_id": "wf_abc123"
}
```
| Field | Type | Description |
|-------|------|-------------|
| `execution_id` | string | Unique execution identifier |
| `workflow_id` | string | Workflow being executed |
---
### execution_progress
Progress update during workflow execution. Sent approximately every 5 seconds.
```json
{
"type": "execution_progress",
"step_id": "step2",
"step_status": "completed",
"output": "https://storage.eachlabs.ai/outputs/step2.png",
"model": "nano-banana-pro",
"completed_steps": 2,
"total_steps": 5
}
```
| Field | Type | Description |
|-------|------|-------------|
| `step_id` | string | Current step identifier |
| `step_status` | string | Step status (running, completed, failed) |
| `output` | string | Step output URL (if available) |
| `model` | string | Model used for this step |
| `completed_steps` | number | Steps completed so far |
| `total_steps` | number | Total steps in workflow |
---
### execution_completed
Workflow execution finished successfully.
```json
{
"type": "execution_completed",
"execution_id": "exec_xyz789",
"status": "completed",
"output": "https://storage.eachlabs.ai/outputs/final.mp4",
"all_outputs": {
"step1": "https://storage.eachlabs.ai/outputs/step1.png",
"step2": "https://storage.eachlabs.ai/outputs/step2.png",
"step3": "https://storage.eachlabs.ai/outputs/final.mp4"
}
}
```
| Field | Type | Description |
|-------|------|-------------|
| `execution_id` | string | Execution identifier |
| `status` | string | Final status (completed, failed) |
| `output` | string | Final output URL |
| `all_outputs` | object | All step outputs keyed by step_id |
---
### tool_call
Details of a tool being called. Useful for debugging and transparency.
```json
{
"type": "tool_call",
"name": "execute_model",
"input": {
"model_name": "nano-banana-pro",
"inputs": {
"prompt": "A beautiful woman portrait...",
"aspect_ratio": "1:1"
}
}
}
```
| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Tool name |
| `input` | object | Tool input parameters |
---
### message
Informational message from the agent.
```json
{
"type": "message",
"content": "Your video is being processed. This typically takes 2-3 minutes."
}
```
| Field | Type | Description |
|-------|------|-------------|
| `content` | string | Message content |
---
### complete
Final event with summary. Always sent when stream completes successfully.
```json
{
"type": "complete",
"task_id": "chat_1708345678901",
"status": "ok",
"tool_calls": [
{"name": "search_models", "result": "success"},
{"name": "get_model_details", "result": "success"},
{"name": "execute_model", "result": "success", "model": "nano-banana-pro"}
],
"generations": ["https://storage.eachlabs.ai/outputs/abc123.png"],
"model": "nano-banana-pro"
}
```
| Field | Type | Description |
|-------|------|-------------|
| `task_id` | string | Unique task identifier |
| `status` | string | Final status (ok, awaiting_input, error) |
| `tool_calls` | array | Summary of all tool calls |
| `generations` | array | All generated output URLs |
| `model` | string | Primary model used |
**Status values:**
- `ok` - Completed successfully
- `awaiting_input` - Waiting for user clarification
- `error` - An error occurred
---
### error
An error occurred during processing.
```json
{
"type": "error",
"message": "Failed to generate image: Invalid aspect ratio"
}
```
| Field | Type | Description |
|-------|------|-------------|
| `message` | string | Error message |
---
## Event Flow Examples
### Simple Image Generation
```
thinking_delta → "I'll create a beautiful portrait..."
status → "Searching for models..."
status → "Getting model details..."
status → "Generating with nano-banana-pro..."
generation_response → {url: "https://..."}
complete → {status: "ok", generations: [...]}
[DONE]
```
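A client-side loop over this flow might fold the events into the generated URLs and a final status. A sketch, using event dicts shaped as documented above:

```python
def handle_events(events):
    """Fold a parsed event stream into (generation URLs, final status)."""
    generations, status = [], None
    for e in events:
        t = e["type"]
        if t == "generation_response":
            generations.extend(e.get("generations", [e["url"]]))
        elif t == "complete":
            status = e["status"]
        elif t == "error":
            status = "error"
    return generations, status

events = [
    {"type": "status", "message": "Generating..."},
    {"type": "generation_response", "url": "https://example.com/a.png",
     "generations": ["https://example.com/a.png"]},
    {"type": "complete", "status": "ok",
     "generations": ["https://example.com/a.png"]},
]
print(handle_events(events))  # (['https://example.com/a.png'], 'ok')
```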
### Clarification Flow
```
thinking_delta → "I see an image, but need to know what edit..."
clarification_needed → {question: "What edit?", options: [...]}
complete → {status: "awaiting_input"}
[DONE]
```
### Workflow Execution
```
thinking_delta → "Creating a multi-step workflow..."
status → "Searching for models..."
workflow_created → {workflow_id: "wf_123", steps_count: 5}
execution_started → {execution_id: "exec_456"}
execution_progress → {completed_steps: 1, total_steps: 5}
execution_progress → {completed_steps: 2, total_steps: 5}
execution_progress → {completed_steps: 3, total_steps: 5}
execution_progress → {completed_steps: 4, total_steps: 5}
execution_completed → {output: "https://...", all_outputs: {...}}
complete → {status: "ok"}
[DONE]
```
### Web Search
```
thinking_delta → "Let me search for current information..."
web_search_query → {query: "best AI models 2024"}
status → "Searching the web..."
web_search_citations → {citations: [...], count: 3}
text_response → "Based on current information..."
complete → {status: "ok"}
[DONE]
```
````