comfyui
Generate high-quality images using a local ComfyUI instance. Use when the user wants private, powerful image generation via their own hardware and custom workflows. Requires a running ComfyUI server accessible on the local network.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install openclaw-skills-comfy-ui
Repository
Skill path: skills/dihan/comfy-ui
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack, Backend.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install comfyui into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding comfyui to shared team environments
- Use comfyui for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: comfyui
description: Generate high-quality images using a local ComfyUI instance. Use when the user wants private, powerful image generation via their own hardware and custom workflows. Requires a running ComfyUI server accessible on the local network.
metadata:
  {
    "openclaw": {
      "emoji": "🎨",
      "requires": { "env": ["COMFYUI_SERVER_ADDRESS"] }
    }
  }
---
# ComfyUI Local Skill
This skill allows OpenClaw to generate images by connecting to a ComfyUI instance running on the local network.
## Setup
1. **Server Address:** Set the `COMFYUI_SERVER_ADDRESS` environment variable to your PC's IP and port (e.g., `http://192.168.1.119:8189`).
2. **API Mode:** Ensure **"Enable Dev mode"** is turned on in your ComfyUI settings to allow API interactions.
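Before running the generation script, it can help to confirm the server is actually reachable at the configured address. The sketch below is an illustration (not part of the shipped skill); it assumes ComfyUI's standard `/system_stats` endpoint, which returns JSON when the API is up:

```python
import os
import json
import urllib.request
import urllib.error


def comfyui_reachable(server_address: str, timeout: float = 3.0) -> bool:
    """Return True if a ComfyUI server answers at /system_stats with valid JSON."""
    try:
        with urllib.request.urlopen(f"{server_address}/system_stats", timeout=timeout) as resp:
            json.loads(resp.read().decode("utf-8"))  # valid JSON implies a healthy API
        return True
    except (urllib.error.URLError, ValueError, OSError):
        return False


if __name__ == "__main__":
    # COMFYUI_SERVER_ADDRESS is the env var this skill requires; the fallback is illustrative.
    address = os.environ.get("COMFYUI_SERVER_ADDRESS", "http://127.0.0.1:8188")
    print("reachable" if comfyui_reachable(address) else "unreachable")
```

If this prints `unreachable`, double-check the IP, port, and that Dev mode is enabled.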
## Usage
### Generate an Image
Run the internal generation script with a prompt:
```bash
python3 {skillDir}/scripts/comfy_gen.py "your image prompt" $COMFYUI_SERVER_ADDRESS
```
### Use a Custom Workflow
Place your API JSON workflows in the `workflows/` folder, then specify the path:
```bash
python3 {skillDir}/scripts/comfy_gen.py "your prompt" $COMFYUI_SERVER_ADDRESS --workflow {skillDir}/workflows/my_workflow.json
```
## Features
- **SDXL Default:** Uses a high-quality SDXL workflow (Juggernaut XL) by default.
- **Auto-Backup:** Designed to save images to `image-gens/` and can be configured to sync to local document folders.
- **Custom Workflows:** Supports external API JSON workflows saved in the `workflows/` folder. The script will automatically try to inject your prompt and a random seed into the workflow nodes.
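To illustrate what the injector looks for: an API-format workflow is a flat JSON object keyed by node ID, and the script overwrites the `text` input of the first `CLIPTextEncode` node plus any `seed` input it finds. A minimal excerpt might look like this (node IDs are arbitrary, and most inputs are omitted for brevity):

```json
{
  "6": {
    "class_type": "CLIPTextEncode",
    "inputs": { "text": "replaced with your prompt", "clip": ["4", 1] }
  },
  "3": {
    "class_type": "KSampler",
    "inputs": { "seed": 0, "steps": 25 }
  }
}
```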
## Implementation Details
The skill uses a Python helper (`scripts/comfy_gen.py`) to handle the WebSocket/HTTP handshake with the ComfyUI API, queue the prompt, and download the resulting image.
## ComfyUI Image Generation Notes
1. **Server Address:**
* The ComfyUI server address needs to be passed as a direct argument to the `comfy_gen.py` script after the prompt, not just as an environment variable.
* Example: `python3 ... "Your prompt" http://192.168.1.119:8189 ...`
2. **Workflow Paths:**
* When specifying a workflow file path that contains spaces or special characters, it must be enclosed in single quotes to be parsed correctly by the script.
* Example: `--workflow '/path/to/your/workflow file name.json'`
3. **Lora Weight Control:**
* The current `comfy_gen.py` script does not appear to have a direct parameter for controlling Lora weights (e.g., setting 'l1lly' Lora to 0.90). This might need to be configured within the workflow JSON itself, or require modifications to the script or workflow.
4. **Output Filenames:**
* Generated images might be saved with temporary names (e.g., `ComfyUI_temp_...png`) rather than more descriptive ones by default.
5. **ComfyUI Setup:**
* Ensure "Enable Dev mode" is turned on in ComfyUI settings for API interactions.
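Since the script exposes no Lora-weight flag, one workable approach (a sketch, not part of the shipped script) is to edit the workflow JSON before queueing it. This assumes the workflow uses ComfyUI's standard `LoraLoader` node, whose strength inputs are named `strength_model` and `strength_clip`:

```python
def set_lora_strength(workflow: dict, strength: float) -> dict:
    """Set strength_model/strength_clip on every LoraLoader node in an
    API-format workflow dict. Assumes standard LoraLoader input names."""
    for node in workflow.values():
        if node.get("class_type") == "LoraLoader":
            node["inputs"]["strength_model"] = strength
            node["inputs"]["strength_clip"] = strength
    return workflow


# Example: a single hypothetical LoraLoader node, dialed down to 0.90.
wf = {"10": {"class_type": "LoraLoader",
             "inputs": {"lora_name": "my_lora.safetensors",
                        "strength_model": 1.0, "strength_clip": 1.0}}}
wf = set_lora_strength(wf, 0.90)
print(wf["10"]["inputs"]["strength_model"])
```

The edited dict can then be saved back to a file in `workflows/` and passed via `--workflow`.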
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### scripts/comfy_gen.py
```python
import json
import urllib.request
import urllib.parse
import sys
import os
import time
import argparse


def generate_image(prompt, server_address, workflow_path=None, checkpoint="SDXL/juggernautXL_ragnarokBy.safetensors"):
    if workflow_path and os.path.exists(workflow_path):
        with open(workflow_path, 'r') as f:
            workflow = json.load(f)
        # Smart Injection: Try to find prompt and seed nodes
        found_prompt = False
        for node_id, node in workflow.items():
            # Inject prompt into CLIPTextEncode (usually the first one or one with specific inputs)
            if node.get("class_type") == "CLIPTextEncode" and not found_prompt:
                node["inputs"]["text"] = prompt
                found_prompt = True
            # Inject seed into KSampler or similar
            if "seed" in node.get("inputs", {}):
                node["inputs"]["seed"] = int(time.time())
    else:
        # Default Simple SDXL API Workflow
        workflow = {
            "3": {
                "inputs": {
                    "seed": int(time.time()),
                    "steps": 25,
                    "cfg": 7,
                    "sampler_name": "dpmpp_2m",
                    "scheduler": "karras",
                    "denoise": 1,
                    "model": ["4", 0],
                    "positive": ["6", 0],
                    "negative": ["7", 0],
                    "latent_image": ["5", 0]
                },
                "class_type": "KSampler"
            },
            "4": {
                "inputs": {
                    "ckpt_name": checkpoint
                },
                "class_type": "CheckpointLoaderSimple"
            },
            "5": {
                "inputs": {
                    "width": 1024,
                    "height": 1024,
                    "batch_size": 1
                },
                "class_type": "EmptyLatentImage"
            },
            "6": {
                "inputs": {
                    "text": prompt,
                    "clip": ["4", 1]
                },
                "class_type": "CLIPTextEncode"
            },
            "7": {
                "inputs": {
                    "text": "text, watermark, low quality, blurry, distorted",
                    "clip": ["4", 1]
                },
                "class_type": "CLIPTextEncode"
            },
            "8": {
                "inputs": {
                    "samples": ["3", 0],
                    "vae": ["4", 2]
                },
                "class_type": "VAEDecode"
            },
            "9": {
                "inputs": {
                    "filename_prefix": "OpenClaw",
                    "images": ["8", 0]
                },
                "class_type": "SaveImage"
            }
        }

    p = {"prompt": workflow}
    data = json.dumps(p).encode('utf-8')
    req = urllib.request.Request(f"{server_address}/prompt", data=data)
    try:
        with urllib.request.urlopen(req) as f:
            response = json.loads(f.read().decode('utf-8'))
            prompt_id = response['prompt_id']
    except Exception as e:
        print(f"Error connecting to ComfyUI: {e}")
        sys.exit(1)

    # Wait for completion
    while True:
        with urllib.request.urlopen(f"{server_address}/history/{prompt_id}") as f:
            history = json.loads(f.read().decode('utf-8'))
        if prompt_id in history:
            outputs = history[prompt_id]['outputs']
            for node_id in outputs:
                if 'images' in outputs[node_id]:
                    image_data = outputs[node_id]['images'][0]
                    filename = image_data['filename']
                    subfolder = image_data['subfolder']
                    folder_type = image_data['type']
                    # Download image
                    image_url = f"{server_address}/view?filename={urllib.parse.quote(filename)}&subfolder={subfolder}&type={folder_type}"
                    image_path = f"image-gens/{filename}"
                    os.makedirs("image-gens", exist_ok=True)
                    urllib.request.urlretrieve(image_url, image_path)
                    return image_path
        time.sleep(2)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("prompt", help="The image generation prompt")
    parser.add_argument("server", help="ComfyUI server address")
    parser.add_argument("--workflow", help="Path to workflow JSON file", default=None)
    args = parser.parse_args()
    path = generate_image(args.prompt, args.server, args.workflow)
    print(f"MEDIA:{path}")
    print(f"Generated image saved to {path}")
```
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### _meta.json
```json
{
"owner": "dihan",
"slug": "comfy-ui",
"displayName": "ComfyUI Skill",
"latest": {
"version": "1.0.0",
"publishedAt": 1771769262329,
"commit": "https://github.com/openclaw/skills/commit/eb97a027d0a97188b59dad377ec8237414e3ebc4"
},
"history": []
}
```