mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-05-03 02:11:48 +00:00
fix(skills/comfyui): bug fixes, cloud parity, expanded coverage, examples, tests
The audit of v4.1 surfaced ~70 issues across the five scripts and three
reference docs — most of them user-visible (silent file overwrites, error
status misclassified as success, X-API-Key leaked to S3 on /api/view
redirect, Cloud endpoints that 404 because they were renamed). v5.0.0 fixes
those and fills the gaps that previously forced users to write their own
glue (WebSocket monitoring, batch/sweep, img2img upload helper, dependency
auto-fix, log fetch, health check, example workflows).
Critical fixes
- run_workflow.py: poll_status now checks status_str==error BEFORE
completed:true, so a failed run no longer reports success
- run_workflow.py: download_output streams to disk via safe_path_join,
preserves server subfolder structure (no silent overwrites), and
retries with exponential backoff
- run_workflow.py: refuses to overwrite a link with a literal in
inject_params (would silently break wiring)
- _common.py: _StripSensitiveOnRedirectSession (a requests.Session
  subclass that overrides rebuild_auth) drops X-API-Key/Cookie on
  cross-host redirects — fixes a real key-leak path through Cloud's
  signed-URL download flow; covered by dedicated tests
- Cloud routing (verified live): /history → /history_v2,
/models/<f> → /experiment/models/<f>, plus folder aliases for the
unet ↔ diffusion_models and clip ↔ text_encoders rename
- check_deps.py: distinguishes 200/empty vs 404 folder_not_found vs
403 free-tier; emits concrete fix_command per missing dep
- extract_schema.py: prompt vs negative_prompt determined by tracing
KSampler.{positive,negative} connections (incl. through Reroute /
Primitive nodes) instead of meta-title heuristic; symmetric
duplicate-name resolution; cycle-safe trace_to_node
- hardware_check.py: multi-GPU pick-best, Apple variant detection,
Rosetta detection, WSL2, ROCm --json, disk-space check, optional
PyTorch probe; powershell preferred over deprecated wmic
- comfyui_setup.sh: prefers pipx → uvx → pip --user (with PEP-668
fallback); idempotent — skips relaunch if server already up;
configurable port/workspace; persistent log; SIGINT trap
New scripts
- run_batch.py — count or sweep (cartesian product), parallel up to
cloud tier limit
- ws_monitor.py — real-time WebSocket viewer; saves preview frames
- auto_fix_deps.py — runs comfy node install / model download for
whatever check_deps reports missing (with --dry-run)
- health_check.py — single command that runs the verification checklist
(comfy-cli + server + checkpoints + optional smoke test that cancels
itself to avoid burning compute)
- fetch_logs.py — pull traceback / status messages for a prompt_id
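The sweep mode in run_batch.py expands per-parameter value lists into one run per combination (cartesian product). Illustrative sketch of the expansion semantics only, not the script's actual code:

```python
from itertools import product


def expand_sweep(sweep: dict) -> list[dict]:
    """Expand {"steps": [20, 30], "cfg": [6.0, 7.5]} into one arg-dict
    per combination, i.e. the cartesian product of all value lists."""
    keys = list(sweep)
    return [dict(zip(keys, combo))
            for combo in product(*(sweep[k] for k in keys))]


# 2 x 2 values -> 4 runs, each a ready-to-inject --args payload
runs = expand_sweep({"steps": [20, 30], "cfg": [6.0, 7.5]})
```

Each resulting dict is then submitted as a separate prompt, parallelized up to the cloud tier limit.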
Coverage expansion
- Param patterns now cover Flux (BasicScheduler, BasicGuider,
RandomNoise, ModelSamplingFlux), SD3, Wan/Hunyuan/LTX video,
IPAdapter, rgthree, easy-use, AnimateDiff
- Embedding refs in CLIPTextEncode strings extracted as model deps
- ckpt_name / vae_name / lora_name / unet_name now controllable so
workflows can be retargeted per run
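Embedding references use ComfyUI's `embedding:name` prompt syntax inside CLIPTextEncode text. A minimal sketch of that extraction (regex and helper name are illustrative assumptions, not extract_schema.py's actual code):

```python
import re

# ComfyUI prompts reference textual-inversion embeddings as
# "embedding:filename" (extension optional).
EMBEDDING_RE = re.compile(r"embedding:([\w.\-]+)")


def embedding_deps(prompt_text: str) -> list[str]:
    """Return embedding names referenced in a CLIPTextEncode string."""
    return EMBEDDING_RE.findall(prompt_text)


deps = embedding_deps("photo of a cat, embedding:EasyNegative, embedding:bad-hands-5")
```

Each extracted name is then reported by check_deps.py as a model dependency under `models/embeddings/`.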
Examples
- workflows/{sd15,sdxl,flux_dev}_txt2img.json
- workflows/sdxl_{img2img,inpaint}.json
- workflows/upscale_4x.json
- workflows/{animatediff_video,wan_video_t2v}.json + README
Tests
- 117 tests (105 unit + 8 cloud integration + 4 cross-host security)
- Cloud tests auto-skip without COMFY_CLOUD_API_KEY; verified end-to-end
against live cloud API
Backwards compatibility
- All existing CLI flags continue to work; new behavior is opt-in
(--ws, --input-image, --randomize-seed, --flat-output, etc.)
This commit is contained in:
parent
7d48a16f14
commit
a7780fe05f
32 changed files with 6117 additions and 1372 deletions
skills/creative/comfyui/workflows/README.md (new file, 86 lines)
@@ -0,0 +1,86 @@
# Example Workflows

These are starter API-format workflows for the most common tasks. They're
ready to run with `scripts/run_workflow.py` once you've installed (or have
cloud access to) the listed models.

| File | Purpose | Required models | Min VRAM |
|------|---------|-----------------|----------|
| `sd15_txt2img.json` | SD 1.5 text-to-image (512×512) | SD1.5 checkpoint, e.g. `v1-5-pruned-emaonly.safetensors` | 4 GB |
| `sdxl_txt2img.json` | SDXL text-to-image (1024×1024) | `sd_xl_base_1.0.safetensors` | 8 GB |
| `flux_dev_txt2img.json` | Flux Dev text-to-image (1024×1024) | `flux1-dev.safetensors`, `t5xxl_fp16.safetensors`, `clip_l.safetensors`, `ae.safetensors` | 24 GB (or use `flux1-dev-fp8`) |
| `sdxl_img2img.json` | SDXL image-to-image | SDXL checkpoint | 8 GB |
| `sdxl_inpaint.json` | SDXL inpainting (image + mask) | SDXL checkpoint | 8 GB |
| `upscale_4x.json` | Standalone 4× ESRGAN upscale | `4x-UltraSharp.pth` (or any upscaler) | 4 GB |
| `animatediff_video.json` | AnimateDiff text-to-video (16 frames) | SD1.5 checkpoint, `mm_sd_v15_v2.ckpt` motion module | 8 GB |
| `wan_video_t2v.json` | Wan 2.1 text-to-video (33 frames) | `wan2.1_t2v_1.3B_fp16.safetensors`, `umt5_xxl_fp16.safetensors`, `wan_2.1_vae.safetensors` | 24 GB |

## Quick start

```bash
# Run a workflow with prompt injection
python3 ../scripts/run_workflow.py \
  --workflow sdxl_txt2img.json \
  --args '{"prompt": "majestic eagle in flight", "seed": 12345, "steps": 35}' \
  --output-dir ./out

# Img2img: upload an input image first via the script's helper
python3 ../scripts/run_workflow.py \
  --workflow sdxl_img2img.json \
  --input-image image=./photo.png \
  --args '{"prompt": "make it watercolor", "denoise": 0.6}' \
  --output-dir ./out

# Cloud (set API key once)
export COMFY_CLOUD_API_KEY="comfyui-..."
python3 ../scripts/run_workflow.py \
  --workflow flux_dev_txt2img.json \
  --args '{"prompt": "a fox in a misty forest"}' \
  --host https://cloud.comfy.org \
  --output-dir ./out

# What can I tweak in this workflow?
python3 ../scripts/extract_schema.py sdxl_txt2img.json --summary-only

# Are all required models / nodes installed?
python3 ../scripts/check_deps.py wan_video_t2v.json
```

## Notes

- **Inpaint masks**: white pixels = "regenerate this region", black = preserve.
  ComfyUI's `LoadImageMask` reads the **red channel** by default; export your
  mask as a single-channel image or as a normal RGB image where the red
  channel carries the mask intensity.

- **Denoise strength** in img2img: `0.0` = output identical to input,
  `1.0` = ignore input entirely. The sweet spot is usually 0.4–0.7.

- **Flux Dev** needs ~24 GB VRAM in its base form. The `flux1-dev-fp8.safetensors`
  variant (already on Comfy Cloud) cuts that roughly in half.

- **Video workflows** can take many minutes. The skill auto-detects video
  output nodes and bumps the default timeout to 900s. Override with `--timeout 1800`.

- These JSON files are deliberately **API format** (top-level keys are node IDs
  with `class_type`), not editor format. To open them in ComfyUI's web UI for
  visual editing, use `Workflow → Load (API Format)` or `Workflow → Open` and
  follow the prompt.

## Cloud vs local model names

Comfy Cloud's preinstalled checkpoints sometimes have a `-fp16` suffix
(`v1-5-pruned-emaonly-fp16.safetensors`) while the canonical local download
keeps the original name (`v1-5-pruned-emaonly.safetensors`). The example
workflows use the local-canonical names. When running on cloud, override with:

```bash
python3 ../scripts/run_workflow.py \
  --workflow sd15_txt2img.json \
  --args '{"ckpt_name": "v1-5-pruned-emaonly-fp16.safetensors", "prompt": "..."}' \
  --host https://cloud.comfy.org
```

`ckpt_name`, `vae_name`, `lora_name`, `unet_name`, etc. are all exposed
as controllable parameters by `extract_schema.py` — discover what's installed
with `comfy model list` (local) or `curl /api/experiment/models/checkpoints`
(cloud).
skills/creative/comfyui/workflows/animatediff_video.json (new file, 64 lines)
@@ -0,0 +1,64 @@
{
  "_comment": "AnimateDiff text-to-video at 16 frames. Required: comfyui-animatediff-evolved + comfyui-videohelpersuite custom nodes; SD1.5 checkpoint; AnimateDiff motion module (e.g. mm_sd_v15_v2.ckpt in models/animatediff_models/). Outputs an h264 mp4 via VHS_VideoCombine.",
  "3": {
    "class_type": "KSampler",
    "_meta": {"title": "KSampler"},
    "inputs": {
      "seed": 42, "steps": 25, "cfg": 7.5,
      "sampler_name": "dpmpp_sde", "scheduler": "karras", "denoise": 1.0,
      "model": ["10", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    }
  },
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "_meta": {"title": "Checkpoint"},
    "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}
  },
  "5": {
    "class_type": "EmptyLatentImage",
    "_meta": {"title": "Latent (16 frames)"},
    "inputs": {"width": 512, "height": 512, "batch_size": 16}
  },
  "6": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Positive Prompt"},
    "inputs": {"text": "a hot air balloon drifting over a mountain valley, sunset, cinematic", "clip": ["4", 1]}
  },
  "7": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Negative Prompt"},
    "inputs": {"text": "low quality, blurry, deformed, watermark", "clip": ["4", 1]}
  },
  "8": {
    "class_type": "VAEDecode",
    "_meta": {"title": "VAE Decode"},
    "inputs": {"samples": ["3", 0], "vae": ["4", 2]}
  },
  "9": {
    "class_type": "VHS_VideoCombine",
    "_meta": {"title": "Video Combine"},
    "inputs": {
      "frame_rate": 8.0,
      "loop_count": 0,
      "filename_prefix": "animatediff",
      "format": "video/h264-mp4",
      "pingpong": false,
      "save_output": true,
      "images": ["8", 0]
    }
  },
  "10": {
    "class_type": "ADE_AnimateDiffLoaderWithContext",
    "_meta": {"title": "AnimateDiff Loader"},
    "inputs": {
      "model": ["4", 0],
      "model_name": "mm_sd_v15_v2.ckpt",
      "beta_schedule": "sqrt_linear (AnimateDiff)",
      "motion_scale": 1.0,
      "apply_v2_models_properly": true
    }
  }
}
skills/creative/comfyui/workflows/flux_dev_txt2img.json (new file, 78 lines)
@@ -0,0 +1,78 @@
{
  "_comment": "Flux Dev text-to-image using the modern sampler chain (BasicScheduler/Guider/SamplerCustomAdvanced). Required: flux1-dev.safetensors (UNET), t5xxl_fp16.safetensors + clip_l.safetensors (CLIP), ae.safetensors (VAE).",
  "6": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Prompt"},
    "inputs": {"text": "a serene mountain landscape at golden hour, photorealistic", "clip": ["11", 0]}
  },
  "8": {
    "class_type": "VAEDecode",
    "_meta": {"title": "VAE Decode"},
    "inputs": {"samples": ["13", 0], "vae": ["10", 0]}
  },
  "9": {
    "class_type": "SaveImage",
    "_meta": {"title": "Save Image"},
    "inputs": {"filename_prefix": "flux_dev", "images": ["8", 0]}
  },
  "10": {
    "class_type": "VAELoader",
    "_meta": {"title": "VAE"},
    "inputs": {"vae_name": "ae.safetensors"}
  },
  "11": {
    "class_type": "DualCLIPLoader",
    "_meta": {"title": "DualCLIPLoader"},
    "inputs": {
      "clip_name1": "t5xxl_fp16.safetensors",
      "clip_name2": "clip_l.safetensors",
      "type": "flux"
    }
  },
  "12": {
    "class_type": "UNETLoader",
    "_meta": {"title": "UNET Loader"},
    "inputs": {"unet_name": "flux1-dev.safetensors", "weight_dtype": "default"}
  },
  "13": {
    "class_type": "SamplerCustomAdvanced",
    "_meta": {"title": "Sampler Custom"},
    "inputs": {
      "noise": ["25", 0],
      "guider": ["22", 0],
      "sampler": ["16", 0],
      "sigmas": ["17", 0],
      "latent_image": ["27", 0]
    }
  },
  "16": {
    "class_type": "KSamplerSelect",
    "_meta": {"title": "Sampler Select"},
    "inputs": {"sampler_name": "euler"}
  },
  "17": {
    "class_type": "BasicScheduler",
    "_meta": {"title": "Scheduler"},
    "inputs": {
      "scheduler": "simple",
      "steps": 20,
      "denoise": 1.0,
      "model": ["12", 0]
    }
  },
  "22": {
    "class_type": "BasicGuider",
    "_meta": {"title": "Guider"},
    "inputs": {"model": ["12", 0], "conditioning": ["6", 0]}
  },
  "25": {
    "class_type": "RandomNoise",
    "_meta": {"title": "Noise"},
    "inputs": {"noise_seed": 42}
  },
  "27": {
    "class_type": "EmptySD3LatentImage",
    "_meta": {"title": "Latent"},
    "inputs": {"width": 1024, "height": 1024, "batch_size": 1}
  }
}
skills/creative/comfyui/workflows/sd15_txt2img.json (new file, 49 lines)
@@ -0,0 +1,49 @@
{
  "_comment": "SD 1.5 text-to-image. Smallest model, fastest. Required model: v1-5-pruned-emaonly.safetensors (or any SD1.5 checkpoint)",
  "3": {
    "class_type": "KSampler",
    "_meta": {"title": "KSampler"},
    "inputs": {
      "seed": 156680208700286,
      "steps": 20,
      "cfg": 8.0,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1.0,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    }
  },
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "_meta": {"title": "Load Checkpoint"},
    "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}
  },
  "5": {
    "class_type": "EmptyLatentImage",
    "_meta": {"title": "Empty Latent"},
    "inputs": {"width": 512, "height": 512, "batch_size": 1}
  },
  "6": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Positive Prompt"},
    "inputs": {"text": "a beautiful landscape painting, masterpiece, highly detailed", "clip": ["4", 1]}
  },
  "7": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Negative Prompt"},
    "inputs": {"text": "ugly, blurry, low quality, deformed", "clip": ["4", 1]}
  },
  "8": {
    "class_type": "VAEDecode",
    "_meta": {"title": "VAE Decode"},
    "inputs": {"samples": ["3", 0], "vae": ["4", 2]}
  },
  "9": {
    "class_type": "SaveImage",
    "_meta": {"title": "Save Image"},
    "inputs": {"filename_prefix": "sd15", "images": ["8", 0]}
  }
}
skills/creative/comfyui/workflows/sdxl_img2img.json (new file, 54 lines)
@@ -0,0 +1,54 @@
{
  "_comment": "SDXL img2img: load an input image, encode to latent, denoise partially. Use --input-image image=./photo.png with run_workflow.py. Lower 'denoise' value preserves more of the source image.",
  "1": {
    "class_type": "LoadImage",
    "_meta": {"title": "Load Source Image"},
    "inputs": {"image": "REPLACE_WITH_UPLOADED_FILENAME.png"}
  },
  "3": {
    "class_type": "KSampler",
    "_meta": {"title": "KSampler"},
    "inputs": {
      "seed": 42,
      "steps": 30,
      "cfg": 7.5,
      "sampler_name": "dpmpp_2m",
      "scheduler": "karras",
      "denoise": 0.65,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["12", 0]
    }
  },
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "_meta": {"title": "Load SDXL Base"},
    "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}
  },
  "6": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Positive Prompt"},
    "inputs": {"text": "make it cyberpunk, neon lights, futuristic", "clip": ["4", 1]}
  },
  "7": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Negative Prompt"},
    "inputs": {"text": "ugly, blurry, low quality, deformed", "clip": ["4", 1]}
  },
  "8": {
    "class_type": "VAEDecode",
    "_meta": {"title": "VAE Decode"},
    "inputs": {"samples": ["3", 0], "vae": ["4", 2]}
  },
  "9": {
    "class_type": "SaveImage",
    "_meta": {"title": "Save Image"},
    "inputs": {"filename_prefix": "sdxl_img2img", "images": ["8", 0]}
  },
  "12": {
    "class_type": "VAEEncode",
    "_meta": {"title": "VAE Encode"},
    "inputs": {"pixels": ["1", 0], "vae": ["4", 2]}
  }
}
skills/creative/comfyui/workflows/sdxl_inpaint.json (new file, 59 lines)
@@ -0,0 +1,59 @@
{
  "_comment": "SDXL inpainting: given an image + mask, regenerate the masked region. Upload both: --input-image image=./photo.png --input-image mask_image=./mask.png. White pixels in mask = regenerate; black = preserve.",
  "1": {
    "class_type": "LoadImage",
    "_meta": {"title": "Load Source"},
    "inputs": {"image": "REPLACE_WITH_UPLOADED_FILENAME.png"}
  },
  "2": {
    "class_type": "LoadImageMask",
    "_meta": {"title": "Load Mask"},
    "inputs": {"image": "REPLACE_WITH_UPLOADED_MASK.png", "channel": "red"}
  },
  "3": {
    "class_type": "KSampler",
    "_meta": {"title": "KSampler"},
    "inputs": {
      "seed": 42,
      "steps": 30,
      "cfg": 7.5,
      "sampler_name": "dpmpp_2m",
      "scheduler": "karras",
      "denoise": 1.0,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["12", 0]
    }
  },
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "_meta": {"title": "Checkpoint"},
    "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}
  },
  "6": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Positive Prompt"},
    "inputs": {"text": "fill with blooming flowers, photorealistic", "clip": ["4", 1]}
  },
  "7": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Negative Prompt"},
    "inputs": {"text": "ugly, blurry, deformed, bad anatomy", "clip": ["4", 1]}
  },
  "8": {
    "class_type": "VAEDecode",
    "_meta": {"title": "VAE Decode"},
    "inputs": {"samples": ["3", 0], "vae": ["4", 2]}
  },
  "9": {
    "class_type": "SaveImage",
    "_meta": {"title": "Save"},
    "inputs": {"filename_prefix": "sdxl_inpaint", "images": ["8", 0]}
  },
  "12": {
    "class_type": "VAEEncodeForInpaint",
    "_meta": {"title": "VAE Encode for Inpaint"},
    "inputs": {"pixels": ["1", 0], "mask": ["2", 0], "vae": ["4", 2], "grow_mask_by": 6}
  }
}
skills/creative/comfyui/workflows/sdxl_txt2img.json (new file, 49 lines)
@@ -0,0 +1,49 @@
{
  "_comment": "SDXL text-to-image at 1024x1024. Required model: sd_xl_base_1.0.safetensors (or any SDXL checkpoint).",
  "3": {
    "class_type": "KSampler",
    "_meta": {"title": "KSampler"},
    "inputs": {
      "seed": 42,
      "steps": 30,
      "cfg": 7.5,
      "sampler_name": "dpmpp_2m",
      "scheduler": "karras",
      "denoise": 1.0,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    }
  },
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "_meta": {"title": "Load SDXL Base"},
    "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}
  },
  "5": {
    "class_type": "EmptyLatentImage",
    "_meta": {"title": "Empty Latent"},
    "inputs": {"width": 1024, "height": 1024, "batch_size": 1}
  },
  "6": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Positive Prompt"},
    "inputs": {"text": "cinematic photograph, dramatic lighting, intricate detail", "clip": ["4", 1]}
  },
  "7": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Negative Prompt"},
    "inputs": {"text": "ugly, blurry, low quality, deformed, watermark", "clip": ["4", 1]}
  },
  "8": {
    "class_type": "VAEDecode",
    "_meta": {"title": "VAE Decode"},
    "inputs": {"samples": ["3", 0], "vae": ["4", 2]}
  },
  "9": {
    "class_type": "SaveImage",
    "_meta": {"title": "Save Image"},
    "inputs": {"filename_prefix": "sdxl", "images": ["8", 0]}
  }
}
skills/creative/comfyui/workflows/upscale_4x.json (new file, 27 lines)
@@ -0,0 +1,27 @@
{
  "_comment": "Standalone 4x upscale of an input image using ESRGAN. Required model: 4x-UltraSharp.pth (or any upscaler in models/upscale_models/). Upload with --input-image image=./photo.png.",
  "1": {
    "class_type": "LoadImage",
    "_meta": {"title": "Load Image"},
    "inputs": {"image": "REPLACE_WITH_UPLOADED_FILENAME.png"}
  },
  "2": {
    "class_type": "UpscaleModelLoader",
    "_meta": {"title": "Load Upscale Model"},
    "inputs": {"model_name": "4x-UltraSharp.pth"}
  },
  "3": {
    "class_type": "ImageUpscaleWithModel",
    "_meta": {"title": "Upscale Image (with Model)"},
    "inputs": {
      "upscale_method": "lanczos",
      "upscale_model": ["2", 0],
      "image": ["1", 0]
    }
  },
  "4": {
    "class_type": "SaveImage",
    "_meta": {"title": "Save"},
    "inputs": {"filename_prefix": "upscaled_4x", "images": ["3", 0]}
  }
}
skills/creative/comfyui/workflows/wan_video_t2v.json (new file, 69 lines)
@@ -0,0 +1,69 @@
{
  "_comment": "Wan 2.1 text-to-video. Cloud: confirmed available. Local: download wan2.1_t2v_1.3B_fp16.safetensors → models/diffusion_models/ (or models/unet/), umt5_xxl_fp16.safetensors → models/text_encoders/ (or models/clip/), wan_2.1_vae.safetensors → models/vae/. Output: MP4. Large model — only on cloud or 24 GB+ local GPU.",
  "6": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Prompt"},
    "inputs": {
      "text": "a graceful crane taking flight from a misty lake at dawn, slow motion, 4k",
      "clip": ["38", 0]
    }
  },
  "7": {
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "Negative Prompt"},
    "inputs": {
      "text": "static, blurry, watermark, low quality",
      "clip": ["38", 0]
    }
  },
  "8": {
    "class_type": "VAEDecode",
    "_meta": {"title": "VAE Decode"},
    "inputs": {"samples": ["3", 0], "vae": ["39", 0]}
  },
  "37": {
    "class_type": "UNETLoader",
    "_meta": {"title": "Wan UNET"},
    "inputs": {"unet_name": "wan2.1_t2v_1.3B_fp16.safetensors", "weight_dtype": "default"}
  },
  "38": {
    "class_type": "CLIPLoader",
    "_meta": {"title": "Wan CLIP"},
    "inputs": {"clip_name": "umt5_xxl_fp16.safetensors", "type": "wan"}
  },
  "39": {
    "class_type": "VAELoader",
    "_meta": {"title": "Wan VAE"},
    "inputs": {"vae_name": "wan_2.1_vae.safetensors"}
  },
  "3": {
    "class_type": "KSampler",
    "_meta": {"title": "KSampler"},
    "inputs": {
      "seed": 42, "steps": 30, "cfg": 6.0,
      "sampler_name": "uni_pc", "scheduler": "simple", "denoise": 1.0,
      "model": ["37", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["40", 0]
    }
  },
  "40": {
    "class_type": "EmptyHunyuanLatentVideo",
    "_meta": {"title": "Latent Video (33 frames)"},
    "inputs": {"width": 832, "height": 480, "length": 33, "batch_size": 1}
  },
  "9": {
    "class_type": "VHS_VideoCombine",
    "_meta": {"title": "Video Combine"},
    "inputs": {
      "frame_rate": 16.0,
      "loop_count": 0,
      "filename_prefix": "wan_t2v",
      "format": "video/h264-mp4",
      "pingpong": false,
      "save_output": true,
      "images": ["8", 0]
    }
  }
}