mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-05-03 02:11:48 +00:00
docs(sidebar): collapse exploding skills tree to a single Skills node (#18259)
* docs(sidebar): collapse exploding skills tree to a single Skills node
The Skills sub-tree in the left sidebar expanded to 200+ entries
(22 bundled categories + 15 optional categories, every skill a page).
That's most of the nav on a first visit — docs for the actual product
get drowned in it.
Collapse the sidebar to:

    Skills
      godmode (hand-written spotlight)
      google-workspace (hand-written spotlight)
      Bundled catalog (reference/skills-catalog — table of all bundled)
      Optional catalog (reference/optional-skills-catalog — table of all optional)
Per-skill pages still generate and are still reachable at their URLs;
they're linked from the two catalog tables and from the Skills overview
page. They just don't appear in the left nav anymore.
sidebars.ts goes from 649 lines to 247. generate-skill-docs.py loses
the bundled/optional sidebar render helpers.
Also picks up incidental generator output drift on current main
(comfyui skill content refresh; 4 new skill pages for
devops-kanban-orchestrator, devops-kanban-worker,
productivity-here-now, productivity-shopify; two catalog refreshes).
These are what the generator produces on main today — keeping them
committed avoids the next docs build showing 'working tree dirty'.
* docs(sidebar): drop godmode and google-workspace spotlight pages
Keep the Skills sidebar node strictly principled: two catalog links,
nothing else. There was no rule for which skills got spotlight pages
and which got auto-generated pages — just that these two happened to
be hand-written first.
Both pages still build and are still reachable at
/docs/user-guide/skills/godmode and
/docs/user-guide/skills/google-workspace. They're linked from the
catalog tables and the Skills overview page.
Sidebar Skills node now:
Skills
├── Bundled catalog
└── Optional catalog
This commit is contained in:
parent
50c046331d
commit
7c6c5619a7
9 changed files with 1264 additions and 792 deletions
@@ -8,7 +8,7 @@ description: "Generate images, video, and audio with ComfyUI — install, launch
 
 # Comfyui
 
-Generate images, video, and audio with ComfyUI — install, launch, manage nodes/models, run workflows with parameter injection. Uses the official comfy-cli for lifecycle and direct REST API for execution.
+Generate images, video, and audio with ComfyUI — install, launch, manage nodes/models, run workflows with parameter injection. Uses the official comfy-cli for lifecycle and direct REST/WebSocket API for execution.
 
 ## Skill metadata
 
@@ -16,11 +16,11 @@ Generate images, video, and audio with ComfyUI — install, launch, manage nodes
 |---|---|
 | Source | Bundled (installed by default) |
 | Path | `skills/creative/comfyui` |
-| Version | `4.1.0` |
+| Version | `5.0.0` |
 | Author | ['kshitijk4poor', 'alt-glitch'] |
 | License | MIT |
 | Platforms | macos, linux, windows |
-| Tags | `comfyui`, `image-generation`, `stable-diffusion`, `flux`, `creative`, `generative-ai`, `video-generation` |
+| Tags | `comfyui`, `image-generation`, `stable-diffusion`, `flux`, `sd3`, `wan-video`, `hunyuan-video`, `creative`, `generative-ai`, `video-generation` |
 | Related skills | [`stable-diffusion-image-generation`](/docs/user-guide/skills/optional/mlops/mlops-stable-diffusion), `image_gen` |
 
 ## Reference: full SKILL.md
 
@@ -31,327 +31,333 @@ The following is the complete skill definition that Hermes loads when this skill
 
 # ComfyUI
 
-Generate images, video, and audio through ComfyUI using the official `comfy-cli` for
-setup/management and direct REST API calls for workflow execution.
+Generate images, video, audio, and 3D content through ComfyUI using the
+official `comfy-cli` for setup/lifecycle and direct REST/WebSocket API
+for workflow execution.
 
-**Reference files in this skill:**
+## What's in this skill
 
-- `references/official-cli.md` — comfy-cli command reference (install, launch, nodes, models)
-- `references/rest-api.md` — ComfyUI REST API endpoints (local + cloud)
-- `references/workflow-format.md` — workflow JSON format, common node types, parameter mapping
+**Reference docs (`references/`):**
 
-**Scripts in this skill:**
+- `official-cli.md` — every `comfy ...` command, with flags
+- `rest-api.md` — REST + WebSocket endpoints (local + cloud), payload schemas
+- `workflow-format.md` — API-format JSON, common node types, param mapping
 
-- `scripts/hardware_check.py` — detect GPU/VRAM/Apple Silicon, decide local vs Comfy Cloud
-- `scripts/comfyui_setup.sh` — full setup automation (hardware check + install + launch + verify)
-- `scripts/extract_schema.py` — reads workflow JSON, outputs which parameters are controllable
-- `scripts/run_workflow.py` — injects user args, submits workflow, monitors progress, downloads outputs
-- `scripts/check_deps.py` — checks if required custom nodes and models are installed
+**Scripts (`scripts/`):**
+
+| Script | Purpose |
+|--------|---------|
+| `_common.py` | Shared HTTP, cloud routing, node catalogs (don't run directly) |
+| `hardware_check.py` | Probe GPU/VRAM/disk → recommend local vs Comfy Cloud |
+| `comfyui_setup.sh` | Hardware check + comfy-cli + ComfyUI install + launch + verify |
+| `extract_schema.py` | Read a workflow → list controllable params + model deps |
+| `check_deps.py` | Check workflow against running server → list missing nodes/models |
+| `auto_fix_deps.py` | Run check_deps then `comfy node install` / `comfy model download` |
+| `run_workflow.py` | Inject params, submit, monitor, download outputs (HTTP or WS) |
+| `run_batch.py` | Submit a workflow N times with sweeps, parallel up to your tier |
+| `ws_monitor.py` | Real-time WebSocket viewer for executing jobs (live progress) |
+| `health_check.py` | Verification checklist runner — comfy-cli + server + models + smoke test |
+| `fetch_logs.py` | Pull traceback / status messages for a given prompt_id |
+
+**Example workflows (`workflows/`):** SD 1.5, SDXL, Flux Dev, SDXL img2img,
+SDXL inpaint, ESRGAN upscale, AnimateDiff video, Wan T2V. See
+`workflows/README.md`.
 
 ## When to Use
 
-- User asks to generate images with Stable Diffusion, SDXL, Flux, or other diffusion models
-- User wants to run a specific ComfyUI workflow
+- User asks to generate images with Stable Diffusion, SDXL, Flux, SD3, etc.
+- User wants to run a specific ComfyUI workflow file
 - User wants to chain generative steps (txt2img → upscale → face restore)
 - User needs ControlNet, inpainting, img2img, or other advanced pipelines
 - User asks to manage ComfyUI queue, check models, or install custom nodes
-- User wants video/audio generation via AnimateDiff, Hunyuan, AudioCraft, etc.
+- User wants video/audio/3D generation via AnimateDiff, Hunyuan, Wan, AudioCraft, etc.
 
 ## Architecture: Two Layers
 
 <!-- ascii-guard-ignore -->
 ```
 ┌─────────────────────────────────────────────────────┐
-│  Layer 1: comfy-cli (official)                      │
-│  Setup, lifecycle, nodes, models                    │
-│  comfy install / launch / stop / node / model       │
+│  Layer 1: comfy-cli (official lifecycle tool)       │
+│  Setup, server lifecycle, custom nodes, models      │
+│  → comfy install / launch / stop / node / model     │
 └─────────────────────────┬───────────────────────────┘
                           │
 ┌─────────────────────────▼───────────────────────────┐
-│  Layer 2: REST API + skill scripts                  │
+│  Layer 2: REST/WebSocket API + skill scripts        │
 │  Workflow execution, param injection, monitoring    │
-│  POST /api/prompt, GET /api/view, WebSocket         │
-│  scripts/run_workflow.py, extract_schema.py         │
+│  POST /api/prompt, GET /api/view, WS /ws            │
+│  → run_workflow.py, run_batch.py, ws_monitor.py     │
 └─────────────────────────────────────────────────────┘
 ```
 <!-- ascii-guard-ignore-end -->
 
-**Why two layers?** The official CLI handles installation and server management excellently
-but has minimal workflow execution support (just raw file submission, no param injection,
-no structured output). The REST API fills that gap — the scripts in this skill handle the
-param injection, execution monitoring, and output download that the CLI doesn't do.
+**Why two layers?** The official CLI is excellent for installation and server
+management but has minimal workflow execution support. The REST/WS API fills
+that gap — the scripts handle param injection, execution monitoring, and
+output download that the CLI doesn't do.
 
 ## Quick Start
 
-### Detect Environment
+### Detect environment
 
 ```bash
 # What's available?
 command -v comfy >/dev/null 2>&1 && echo "comfy-cli: installed"
 curl -s http://127.0.0.1:8188/system_stats 2>/dev/null && echo "server: running"
 
-# Can this machine actually run ComfyUI locally? (GPU/VRAM/Apple Silicon check)
+# Can this machine run ComfyUI locally? (GPU/VRAM/disk check)
 python3 scripts/hardware_check.py
 ```
 
-If nothing is installed, go to **Setup & Onboarding** below — but always run the
-hardware check first, before picking an install path.
+If nothing is installed, see **Setup & Onboarding** below — but always run the
+hardware check first.
 If the server is already running, skip to **Core Workflow**.
 
+### One-line health check
+
+```bash
+python3 scripts/health_check.py
+# → JSON: comfy_cli on PATH? server reachable? at least one checkpoint? smoke-test passes?
+```
 
 ## Core Workflow
 
-### Step 1: Get a Workflow
+### Step 1: Get a workflow JSON in API format
 
-Users provide workflow JSON files. These come from:
-- ComfyUI web editor → "Save (API Format)" button
-- Community downloads (civitai, Reddit, Discord)
-- The `scripts/` directory of this skill (example workflows)
-
-**The workflow must be in API format** (node IDs as keys with `class_type`).
-If user has editor format (has `nodes[]` and `links[]` at top level), they
-need to re-export using "Save (API Format)" in the ComfyUI web editor.
+Workflows must be in API format (each node has `class_type`). They come from:
+
+- ComfyUI web UI → **Workflow → Export (API)** (newer UI) or
+  the legacy "Save (API Format)" button (older UI)
+- This skill's `workflows/` directory (ready-to-run examples)
+- Community downloads (civitai, Reddit, Discord) — usually editor format,
+  must be loaded into ComfyUI then re-exported
 
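The API-vs-editor distinction described above can be checked mechanically before submitting anything. A minimal sketch (the function name is hypothetical, not part of the skill's scripts):

```python
# API format keys nodes by ID, each with a `class_type`; editor format has
# top-level `nodes`/`links` arrays and is not directly executable.
def is_api_format(wf: dict) -> bool:
    if "nodes" in wf and "links" in wf:
        return False  # editor export — must be re-exported as API format
    return bool(wf) and all(
        isinstance(v, dict) and "class_type" in v for v in wf.values()
    )

print(is_api_format({"3": {"class_type": "KSampler", "inputs": {}}}))  # → True
print(is_api_format({"nodes": [], "links": []}))                       # → False
```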
-### Step 2: Understand What's Controllable
+Editor format (top-level `nodes` and `links` arrays) is **not directly
+executable**. The scripts detect this and tell you to re-export.
+
+### Step 2: See what's controllable
 
 ```bash
 python3 scripts/extract_schema.py workflow_api.json --summary-only
 # → {"parameter_count": 12, "has_negative_prompt": true, "has_seed": true, ...}
 
 python3 scripts/extract_schema.py workflow_api.json
 # → full schema with parameters, model deps, embedding refs
 ```
 
 Output (JSON):
 ```json
 {
   "parameters": {
     "prompt": {"node_id": "6", "field": "text", "type": "string", "value": "a cat"},
     "negative_prompt": {"node_id": "7", "field": "text", "type": "string", "value": "bad quality"},
     "seed": {"node_id": "3", "field": "seed", "type": "int", "value": 42},
     "steps": {"node_id": "3", "field": "steps", "type": "int", "value": 20},
     "width": {"node_id": "5", "field": "width", "type": "int", "value": 512},
     "height": {"node_id": "5", "field": "height", "type": "int", "value": 512}
   }
 }
 ```
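The schema above maps friendly names to `(node_id, field)` slots, which is what makes parameter injection possible. A minimal sketch of that injection step, assuming a schema like the one shown (the helper name and hard-coded mapping are hypothetical, not the skill's actual code):

```python
import copy

# Friendly name → (node_id, field) slot in the API-format workflow JSON,
# mirroring the extract_schema.py output shown above.
SCHEMA = {
    "prompt": ("6", "text"),
    "seed": ("3", "seed"),
    "steps": ("3", "steps"),
}

def inject_params(workflow: dict, args: dict) -> dict:
    """Return a copy of an API-format workflow with user args injected."""
    wf = copy.deepcopy(workflow)
    for name, value in args.items():
        node_id, field = SCHEMA[name]
        wf[node_id]["inputs"][field] = value
    return wf

workflow = {
    "3": {"class_type": "KSampler", "inputs": {"seed": 42, "steps": 20}},
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "a cat"}},
}
patched = inject_params(workflow, {"prompt": "a sunset", "steps": 30})
print(patched["6"]["inputs"]["text"])    # → a sunset
print(workflow["3"]["inputs"]["steps"])  # original untouched → 20
```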
-### Step 3: Run with Parameters
+### Step 3: Run with parameters
 
-**Local:**
 ```bash
+# Local (defaults to http://127.0.0.1:8188)
 python3 scripts/run_workflow.py \
   --workflow workflow_api.json \
-  --args '{"prompt": "a beautiful sunset over mountains", "seed": 123, "steps": 30}' \
+  --args '{"prompt": "a beautiful sunset over mountains", "seed": -1, "steps": 30}' \
   --output-dir ./outputs
 ```
 
-**Cloud:**
 ```bash
+# Cloud (export API key once; uses correct /api routing automatically)
 export COMFY_CLOUD_API_KEY="comfyui-..."
 python3 scripts/run_workflow.py \
   --workflow workflow_api.json \
-  --args '{"prompt": "a beautiful sunset", "seed": 123}' \
+  --args '{"prompt": "..."}' \
   --host https://cloud.comfy.org \
   --api-key "$COMFY_CLOUD_API_KEY" \
   --output-dir ./outputs
+
+# Real-time progress via WebSocket (requires `pip install websocket-client`)
+python3 scripts/run_workflow.py \
+  --workflow flux_dev.json \
+  --args '{"prompt": "..."}' \
+  --ws
+
+# img2img / inpaint: pass --input-image to upload + reference automatically
+python3 scripts/run_workflow.py \
+  --workflow sdxl_img2img.json \
+  --input-image image=./photo.png \
+  --args '{"prompt": "make it watercolor", "denoise": 0.6}'
+
+# Batch / sweep: 8 random seeds, parallel up to cloud tier limit
+python3 scripts/run_batch.py \
+  --workflow sdxl.json \
+  --args '{"prompt": "abstract"}' \
+  --count 8 --randomize-seed --parallel 3 \
+  --output-dir ./outputs/batch
 ```
 
-### Step 4: Present Results
+`-1` for `seed` (or omitting it with `--randomize-seed`) generates a fresh
+random seed per run.
+
+### Step 4: Present results
 
-The script outputs JSON with file paths:
+The scripts emit JSON to stdout describing every output file:
+
 ```json
 {
   "status": "success",
   "prompt_id": "abc-123",
   "outputs": [
-    {"file": "./outputs/ComfyUI_00001_.png", "node_id": "9", "type": "image"}
+    {"file": "./outputs/sdxl_00001_.png", "node_id": "9",
+     "type": "image", "filename": "sdxl_00001_.png"}
   ]
 }
 ```
 
 Show images to the user via `vision_analyze` or return the file path directly.
 
 ## Decision Tree
 
 | User says | Tool | Command |
 |-----------|------|---------|
-| "install ComfyUI" | comfy-cli | `comfy install` |
+| **Lifecycle (use comfy-cli)** | | |
+| "install ComfyUI" | comfy-cli | `bash scripts/comfyui_setup.sh` |
 | "start ComfyUI" | comfy-cli | `comfy launch --background` |
 | "stop ComfyUI" | comfy-cli | `comfy stop` |
 | "install X node" | comfy-cli | `comfy node install <name>` |
-| "download X model" | comfy-cli | `comfy model download --url <url>` |
+| "download X model" | comfy-cli | `comfy model download --url <url> --relative-path models/checkpoints` |
 | "list installed models" | comfy-cli | `comfy model list` |
 | "list installed nodes" | comfy-cli | `comfy node show installed` |
-| "generate an image" | script | `run_workflow.py --args '{"prompt": "..."}'` |
-| "use this image" (img2img) | REST | upload image, then run_workflow.py |
-| "what can I change in this workflow?" | script | `extract_schema.py workflow.json` |
-| "check if workflow deps are met" | script | `check_deps.py workflow.json` |
-| "what's in the queue?" | REST | `curl http://HOST:8188/queue` |
+| **Execution (use scripts)** | | |
+| "is everything ready?" | script | `health_check.py` (optionally with `--workflow X --smoke-test`) |
+| "what can I change in this workflow?" | script | `extract_schema.py W.json` |
+| "check if W's deps are met" | script | `check_deps.py W.json` |
+| "fix missing deps" | script | `auto_fix_deps.py W.json` |
+| "generate an image" | script | `run_workflow.py --workflow W --args '{...}'` |
+| "use this image" (img2img) | script | `run_workflow.py --input-image image=./x.png ...` |
+| "8 variations with random seeds" | script | `run_batch.py --count 8 --randomize-seed ...` |
+| "show me live progress" | script | `ws_monitor.py --prompt-id <id>` |
+| "fetch the error from job X" | script | `fetch_logs.py <prompt_id>` |
+| **Direct REST** | | |
+| "what's in the queue?" | REST | `curl http://HOST:8188/queue` (local) or `--host https://cloud.comfy.org` |
+| "cancel that" | REST | `curl -X POST http://HOST:8188/interrupt` |
+| "free GPU memory" | REST | `curl -X POST http://HOST:8188/free` |
 
 ## Setup & Onboarding
 
-When a user asks to set up ComfyUI, the FIRST thing to do is ask them whether
-they want **Comfy Cloud** (hosted, zero install, API key) or **Local** (install
-ComfyUI on their machine). Do NOT start running install commands or hardware
+When a user asks to set up ComfyUI, **the FIRST thing to do is ask whether
+they want Comfy Cloud (hosted, zero install, API key) or Local (install
+ComfyUI on their machine)**. Don't start running install commands or hardware
 checks until they've answered.
 
 **Official docs:** https://docs.comfy.org/installation
 **CLI docs:** https://docs.comfy.org/comfy-cli/getting-started
+**Cloud docs:** https://docs.comfy.org/get_started/cloud
+**Cloud API:** https://docs.comfy.org/development/cloud/overview
 
 ### Step 0: Ask Local vs Cloud (ALWAYS FIRST)
 
-Present the tradeoff clearly and wait for the user to choose. Suggested script:
+Suggested script:
 
 > "Do you want to run ComfyUI locally on your machine, or use Comfy Cloud?
 >
-> - **Comfy Cloud** — hosted on RTX 6000 Pro GPUs, all models pre-installed, zero setup. Requires an API key (paid subscription). Best if you don't have a capable GPU or want to skip installation.
+> - **Comfy Cloud** — hosted on RTX 6000 Pro GPUs, all common models pre-installed,
+>   zero setup. Requires an API key (paid subscription required to actually run
+>   workflows; free tier is read-only). Best if you don't have a capable GPU.
 > - **Local** — free, but your machine MUST meet the hardware requirements:
->   - NVIDIA GPU with **≥6 GB VRAM** (≥8 GB recommended for SDXL, ≥12 GB for Flux/video), OR
+>   - NVIDIA GPU with **≥6 GB VRAM** (≥8 GB for SDXL, ≥12 GB for Flux/video), OR
 >   - AMD GPU with ROCm support (Linux), OR
->   - Apple Silicon Mac (M1 or newer) with **≥16 GB unified memory** (≥32 GB recommended).
+>   - Apple Silicon Mac (M1+) with **≥16 GB unified memory** (≥32 GB recommended).
 >   - Intel Macs and machines with no GPU will NOT work — use Cloud instead.
 >
 > Which would you like?"
 
-Route based on their answer:
+Routing:
 
-- **User picks Cloud** → skip to **Path A** (no hardware check needed).
-- **User picks Local** → go to **Step 1: Hardware Check** to verify their machine actually meets the requirements, then pick an install path from Paths B-E based on the verdict.
-- **User is unsure / asks for a recommendation** → run the hardware check anyway and let the verdict decide.
+- **Cloud** → skip to **Path A**.
+- **Local** → run hardware check first, then pick a path from Paths B–E based on the verdict.
+- **Unsure** → run the hardware check and let the verdict decide.
 
 ### Step 1: Verify Hardware (ONLY if user chose local)
 
 ```bash
 python3 scripts/hardware_check.py --json
+# Optional: also probe `torch` for actual CUDA/MPS:
+python3 scripts/hardware_check.py --json --check-pytorch
 ```
 
 It detects OS, GPU (NVIDIA CUDA / AMD ROCm / Apple Silicon / Intel Arc), VRAM,
 and unified/system RAM, then returns a verdict plus a suggested `comfy-cli` flag:
 
-| Verdict | Meaning | Action |
-|------------|---------------------------------------------------------------|--------|
-| `ok` | ≥8 GB VRAM (discrete) OR ≥32 GB unified (Apple Silicon) | Local install — use `comfy_cli_flag` from report |
-| `marginal` | SD1.5 works; SDXL tight; Flux/video unlikely | Local OK for light workflows, else **Path A (Cloud)** |
-| `cloud` | No usable GPU, <6 GB VRAM, <16 GB Apple unified, Intel Mac, Rosetta Python | **Switch to Cloud** unless user explicitly forces local |
+| Verdict | Meaning | Action |
+|------------|-----------------------------------------------------------|-------------------------------------------------|
+| `ok` | ≥8 GB VRAM (discrete) OR ≥32 GB unified (Apple Silicon) | Local install — use `comfy_cli_flag` from report |
+| `marginal` | SD1.5 works; SDXL tight; Flux/video unlikely | Local OK for light workflows, else **Path A (Cloud)** |
+| `cloud` | No usable GPU, <6 GB VRAM, <16 GB Apple unified, Intel Mac | **User chose local but their machine doesn't meet requirements** — surface the `notes` and ask if they want to switch to Cloud |
 
+The script also surfaces `wsl: true` (WSL2 with NVIDIA passthrough) and
+`rosetta: true` (x86_64 Python on Apple Silicon — must reinstall as ARM64).
 
 Hardware thresholds the skill enforces:
 
 - **Discrete GPU minimum:** 6 GB VRAM. Below that, most modern models won't load.
 - **Apple Silicon:** M1 or newer (ARM64). Intel Macs have no MPS backend — Cloud only.
 - **Apple Silicon memory:** 16 GB unified minimum. 8 GB M1/M2 will swap/OOM on SDXL/Flux.
 - **No accelerator at all:** CPU-only is listed as a comfy-cli option but a single SDXL
   image takes 10+ minutes — treat it as unusable and route to Cloud.
 
-If verdict is `cloud` but the user explicitly wanted local, DO NOT proceed
-silently. Show the `notes` array verbatim, explain which requirement they
-don't meet, and ask whether they want to (a) switch to Cloud or (b) force
-a local install anyway (marginal/cloud-verdict local installs will OOM or
-be unusably slow on modern models).
 
 The report's `comfy_cli_flag` field gives you the exact flag for Step 2 below:
 `--nvidia`, `--amd`, or `--m-series`. For Intel Arc, use Path E (manual install).
 
-Surface the `notes` array verbatim to the user so they understand why a
-particular path was recommended.
+If verdict is `cloud` but the user wants local, do not proceed silently.
+Show the `notes` array verbatim and ask whether they want to (a) switch to
+Cloud or (b) force a local install (will OOM or be unusably slow on modern models).
 
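The verdict rules in the table and thresholds above can be condensed into a few comparisons. A hypothetical distillation of the rules as stated in this doc, not `hardware_check.py` itself:

```python
# Thresholds from the doc: discrete GPU ≥8 GB → ok, ≥6 GB → marginal;
# Apple Silicon ≥32 GB unified → ok, ≥16 GB → marginal; everything else → cloud.
def verdict(gpu: str, vram_gb: float = 0, unified_gb: float = 0) -> str:
    if gpu == "apple_silicon":
        if unified_gb >= 32:
            return "ok"
        if unified_gb >= 16:
            return "marginal"
        return "cloud"
    if gpu in ("nvidia", "amd"):
        if vram_gb >= 8:
            return "ok"
        if vram_gb >= 6:
            return "marginal"
        return "cloud"
    return "cloud"  # Intel Mac / no usable GPU / CPU-only

print(verdict("nvidia", vram_gb=12))            # → ok
print(verdict("apple_silicon", unified_gb=16))  # → marginal
print(verdict("none"))                          # → cloud
```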
 ### Choosing an Installation Path
 
-Use the hardware check result first. The table below is a fallback for when the user
-has already told you their hardware or you need to narrow down between multiple
-viable paths:
+Use the hardware check first. The table below is the fallback for when the
+user has already told you their hardware:
 
 | Situation | Recommended Path |
-|-----------|-----------------|
+|-----------|------------------|
 | `verdict: cloud` from hardware check | **Path A: Comfy Cloud** |
-| No GPU / just want to try it | **Path A: Comfy Cloud** (zero setup) |
-| Windows + NVIDIA GPU + non-technical | **Path B: ComfyUI Desktop** (one-click installer) |
-| Windows + NVIDIA GPU + technical | **Path C: Portable** or **Path D: comfy-cli** |
-| Linux + any GPU | **Path D: comfy-cli** (easiest) or Path E manual |
-| macOS + Apple Silicon | **Path B: ComfyUI Desktop** or **Path D: comfy-cli** |
-| Headless / server / CI | **Path D: comfy-cli** |
+| No GPU / want to try without commitment | **Path A: Comfy Cloud** |
+| Windows + NVIDIA + non-technical | **Path B: ComfyUI Desktop** |
+| Windows + NVIDIA + technical | **Path C: Portable** or **Path D: comfy-cli** |
+| Linux + any GPU | **Path D: comfy-cli** (easiest) |
+| macOS + Apple Silicon | **Path B: Desktop** or **Path D: comfy-cli** |
+| Headless / server / CI / agents | **Path D: comfy-cli** |
 
-For the fully automated path (hardware check → install → launch), just run:
+For the fully automated path (hardware check → install → launch → verify):
 
 ```bash
 bash scripts/comfyui_setup.sh
 # Or with overrides:
 bash scripts/comfyui_setup.sh --m-series --port=8190 --workspace=/data/comfy
 ```
 
-It runs `hardware_check.py` internally, refuses to install locally when the verdict
-is `cloud`, picks the right `comfy-cli` flag otherwise, then installs and launches.
+It runs `hardware_check.py` internally, refuses to install locally when the
+verdict is `cloud` (unless `--force-cloud-override`), picks the right
+`comfy-cli` flag, and prefers `pipx`/`uvx` over global `pip` to avoid polluting
+system Python.
 
 ---
 
 ### Path A: Comfy Cloud (No Local Install)
 
-For users without a capable GPU or who want zero setup.
-Powered by RTX 6000 Pro GPUs, all models pre-installed.
+For users without a capable GPU or who want zero setup. Hosted on RTX 6000 Pro.
 
 **Docs:** https://docs.comfy.org/get_started/cloud
 
-1. Go to https://comfy.org/cloud and sign up
-2. Get an API key at https://platform.comfy.org/login
-   - Click `+ New` in API Keys section → Generate
-   - Save immediately (only visible once)
+1. Sign up at https://comfy.org/cloud
+2. Generate an API key at https://platform.comfy.org/login
 3. Set the key:
    ```bash
    export COMFY_CLOUD_API_KEY="comfyui-xxxxxxxxxxxx"
    ```
-4. Run workflows via the script or web UI:
+4. Run workflows:
    ```bash
    python3 scripts/run_workflow.py \
-     --workflow workflow_api.json \
-     --args '{"prompt": "a cat"}' \
+     --workflow workflows/flux_dev_txt2img.json \
+     --args '{"prompt": "..."}' \
      --host https://cloud.comfy.org \
     --api-key "$COMFY_CLOUD_API_KEY" \
      --output-dir ./outputs
    ```
 
 **Pricing:** https://www.comfy.org/cloud/pricing
-Subscription required. Concurrent limits: Free/Standard: 1 job, Creator: 3, Pro: 5.
+**Concurrent jobs:** Free/Standard 1, Creator 3, Pro 5. Free tier
+**cannot run workflows via API** — only browse models. Paid subscription
+required for `/api/prompt`, `/api/upload/*`, `/api/view`, etc.
 
-### Path B: ComfyUI Desktop (Windows/macOS)
+### Path B: ComfyUI Desktop (Windows / macOS)
 
 One-click installer for non-technical users. Currently Beta.
 
 **Docs:** https://docs.comfy.org/installation/desktop
 
 - **Windows (NVIDIA):** https://download.comfy.org/windows/nsis/x64
-- **macOS (Apple Silicon):** Available from https://comfy.org (download page)
+- **macOS (Apple Silicon):** https://comfy.org
 
 Steps:
 1. Download and run installer
 2. Select GPU type (NVIDIA recommended, or CPU mode)
 3. Choose install location (SSD recommended, ~15GB needed)
 4. Optionally migrate from existing ComfyUI Portable install
 5. Desktop launches automatically — web UI opens in browser
 
 Desktop manages its own Python environment. For CLI access to the bundled env:
 ```bash
 cd <install_dir>/ComfyUI
 .venv/Scripts/activate  # Windows
 # or use the built-in terminal in the Desktop UI
 ```
 
 **Limitations:** Desktop uses stable releases (may lag behind latest).
-Linux not supported for Desktop — use comfy-cli or manual install.
+Linux is **not supported** for Desktop — use Path D.
 
 ---
 
 ### Path C: ComfyUI Portable (Windows Only)
 
 Standalone package with embedded Python. Extract and run. No install.
 
 **Docs:** https://docs.comfy.org/installation/comfyui_portable_windows
 
-1. Download from https://github.com/comfyanonymous/ComfyUI/releases
-   - Standard: Python 3.13 + CUDA 13.0 (modern NVIDIA GPUs)
-   - Alt: PyTorch CUDA 12.6 + Python 3.12 (NVIDIA 10 series and older)
-   - AMD (experimental)
-2. Extract with 7-Zip
-3. Run `run_nvidia_gpu.bat` (or `run_cpu.bat`)
-4. Wait for "To see the GUI go to: http://127.0.0.1:8188"
-
-Update: run `update/update_comfyui.bat` (latest commit) or
-`update/update_comfyui_stable.bat` (latest stable release).
+Download from https://github.com/comfyanonymous/ComfyUI/releases, extract,
+run `run_nvidia_gpu.bat`. Update via `update/update_comfyui_stable.bat`.
 
 ---
 
@@ -360,22 +366,19 @@ Update: run `update/update_comfyui.bat` (latest commit) or
 The official CLI is the best path for headless/automated setups.
 
 **Docs:** https://docs.comfy.org/comfy-cli/getting-started
 **Repo:** https://github.com/Comfy-Org/comfy-cli
 
 #### Prerequisites
 - Python 3.10+ (3.13 recommended)
 - pip (or conda/uv)
 - GPU drivers installed (CUDA for NVIDIA, ROCm for AMD)
 
 #### Install comfy-cli
 
 ```bash
-pip install comfy-cli
-# or
+# Recommended:
 pipx install comfy-cli
+# Or use uvx without installing:
+uvx --from comfy-cli comfy --help
+# Or (if pipx/uvx unavailable):
+pip install --user comfy-cli
 ```
 
-Disable analytics (avoids interactive prompt):
+Disable analytics non-interactively:
 ```bash
 comfy --skip-prompt tracking disable
 ```
@ -383,270 +386,225 @@ comfy --skip-prompt tracking disable
|
|||
#### Install ComfyUI
|
||||
|
||||
```bash
|
||||
# Interactive (prompts for GPU type)
|
||||
comfy install
|
||||
|
||||
# Non-interactive variants:
|
||||
comfy --skip-prompt install --nvidia # NVIDIA (CUDA)
|
||||
comfy --skip-prompt install --amd # AMD (ROCm, Linux)
|
||||
comfy --skip-prompt install --m-series # Apple Silicon (MPS)
|
||||
comfy --skip-prompt install --cpu # CPU only (slow)
|
||||
|
||||
# With faster dependency resolution:
|
||||
comfy --skip-prompt install --nvidia --fast-deps
|
||||
comfy --skip-prompt install --nvidia --fast-deps # uv-based dep resolution
|
||||
```
|
||||
|
||||
Default location: `~/comfy/ComfyUI` (Linux), `~/Documents/comfy/ComfyUI` (macOS/Win).
|
||||
Override with: `comfy --workspace /custom/path install`
|
||||
Default location: `~/comfy/ComfyUI` (Linux), `~/Documents/comfy/ComfyUI`
|
||||
(macOS/Win). Override with `comfy --workspace /custom/path install`.
|
||||
|
||||
#### Launch / verify

```bash
comfy launch --background                     # background daemon on :8188
comfy launch -- --listen 0.0.0.0 --port 8190  # LAN-accessible custom port
curl -s http://127.0.0.1:8188/system_stats    # health check
```

---

### Path E: Manual Install (Advanced / Unsupported Hardware)

For Ascend NPU, Cambricon MLU, Intel Arc, or other unsupported hardware.

**Docs:** https://docs.comfy.org/installation/manual_install
**GitHub:** https://github.com/comfyanonymous/ComfyUI

```bash
# 1. Create environment
conda create -n comfyenv python=3.13
conda activate comfyenv

# 2. Clone
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# 3. Install PyTorch (pick your hardware)
# NVIDIA:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
# AMD (ROCm 6.4):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4
# Apple Silicon:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
# Intel Arc:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/xpu
# CPU only:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu

# 4. Install ComfyUI deps
pip install -r requirements.txt

# 5. Run
python main.py
# With options: python main.py --listen 0.0.0.0 --port 8188
```

---

### Post-Install: Download Models

ComfyUI needs at least one checkpoint model to generate images.

**Using comfy-cli:**
```bash
# SDXL (general purpose, ~6.5 GB)
comfy model download \
  --url "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors" \
  --relative-path models/checkpoints

# SD 1.5 (lighter, ~4 GB, good for 6 GB cards)
comfy model download \
  --url "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" \
  --relative-path models/checkpoints

# Flux Dev fp8 (smaller variant, ~12 GB)
comfy model download \
  --url "https://huggingface.co/Comfy-Org/flux1-dev/resolve/main/flux1-dev-fp8.safetensors" \
  --relative-path models/checkpoints

# CivitAI (set token first):
comfy model download \
  --url "https://civitai.com/api/download/models/128713" \
  --relative-path models/checkpoints \
  --set-civitai-api-token "YOUR_TOKEN"

# LoRA adapters:
comfy model download --url "<URL>" --relative-path models/loras
```

**Manual download:** Place `.safetensors` / `.ckpt` files directly into the
`ComfyUI/models/checkpoints/` directory (or `loras/`, `vae/`, etc.).

List installed: `comfy model list`.

### Post-Install: Install Custom Nodes

Custom nodes extend ComfyUI's capabilities (upscaling, video, ControlNet, etc.).

```bash
comfy node install comfyui-impact-pack          # popular utility pack
comfy node install comfyui-animatediff-evolved  # video generation
comfy node install comfyui-controlnet-aux       # ControlNet preprocessors
comfy node install comfyui-essentials           # common helpers
comfy node update all
comfy node install-deps --workflow=workflow.json  # install everything a workflow needs
```

Check what's installed:
```bash
comfy node show installed
```

### Post-Install: Verify

```bash
python3 scripts/health_check.py
# → comfy_cli on PATH? server reachable? checkpoints? smoke test?

# Check a workflow's dependencies
python3 scripts/check_deps.py my_workflow.json
# → are this workflow's nodes/models/embeddings installed?

# Test a generation
python3 scripts/run_workflow.py \
  --workflow workflows/sd15_txt2img.json \
  --args '{"prompt": "test", "steps": 4}' \
  --output-dir ./test-outputs
```

## Image Upload (img2img / Inpainting)

The simplest way is to use `--input-image` with `run_workflow.py`:

```bash
python3 scripts/run_workflow.py \
  --workflow workflows/sdxl_img2img.json \
  --input-image image=./photo.png \
  --args '{"prompt": "make it cyberpunk", "denoise": 0.6}'
```

The flag uploads `photo.png`, then injects its server-side filename into
whatever schema parameter is named `image`. For inpainting, pass both:

```bash
python3 scripts/run_workflow.py \
  --workflow workflows/sdxl_inpaint.json \
  --input-image image=./photo.png \
  --input-image mask_image=./mask.png \
  --args '{"prompt": "fill with flowers"}'
```

Manual upload via REST:
```bash
curl -X POST "http://127.0.0.1:8188/upload/image" \
  -F "image=@photo.png" -F "type=input" -F "overwrite=true"
# Returns: {"name": "photo.png", "subfolder": "", "type": "input"}

# Upload mask for inpainting
curl -X POST "http://127.0.0.1:8188/upload/mask" \
  -F "image=@mask.png" -F "type=input" \
  -F 'original_ref={"filename":"photo.png","subfolder":"","type":"input"}'
```

Then reference the uploaded filename in workflow args:
```bash
python3 scripts/run_workflow.py --workflow inpaint.json \
  --args '{"image": "photo.png", "mask": "mask.png", "prompt": "fill with flowers"}'
```

## Cloud Execution

```bash
# Submit workflow
python3 scripts/run_workflow.py \
  --workflow workflow_api.json \
  --args '{"prompt": "cyberpunk city"}' \
  --host https://cloud.comfy.org \
  --api-key "$COMFY_CLOUD_API_KEY" \
  --output-dir ./outputs \
  --timeout 300

# Cloud equivalent of the local image upload:
curl -X POST "https://cloud.comfy.org/api/upload/image" \
  -H "X-API-Key: $COMFY_CLOUD_API_KEY" \
  -F "image=@photo.png" -F "type=input" -F "overwrite=true"
```

## Cloud Specifics

- **Base URL:** `https://cloud.comfy.org`
- **Auth:** `X-API-Key` header (or `?token=KEY` for WebSocket)
- **API key:** set `$COMFY_CLOUD_API_KEY` once and the scripts pick it up automatically
- **Output download:** `/api/view` returns a 302 to a signed URL; the scripts
  follow it and strip `X-API-Key` before fetching from the storage backend
  (don't leak the API key to S3/CloudFront).
- **Endpoint differences from local ComfyUI:**
  - `/api/object_info`, `/api/queue`, `/api/userdata` — **403 on free tier**; paid only.
  - `/history` is renamed to `/history_v2` on cloud (the scripts route automatically).
  - `/models/<folder>` is renamed to `/experiment/models/<folder>` on cloud (the scripts route automatically).
  - `clientId` in WebSocket is currently ignored — all connections for a user receive the same broadcast. Filter by `prompt_id` client-side.
  - `subfolder` is accepted on uploads but ignored — cloud has a flat namespace.
- **Concurrent jobs:** Free/Standard: 1, Creator: 3, Pro: 5. Extras queue
  automatically. Use `run_batch.py --parallel N` to saturate your tier.

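The local-to-cloud endpoint rerouting the scripts perform can be pictured as a small mapping. This is an illustrative sketch only; the helper name and mapping table are assumptions, not the scripts' actual code.

```python
# Sketch of the local -> Comfy Cloud endpoint rerouting described above.
# CLOUD_RENAMES and cloud_route are hypothetical names for illustration.
CLOUD_RENAMES = {
    "/history": "/history_v2",
    "/models": "/experiment/models",
}

def cloud_route(path: str, cloud: bool = False) -> str:
    """Map a local ComfyUI endpoint to its Comfy Cloud equivalent."""
    if not cloud:
        return path
    for prefix, renamed in CLOUD_RENAMES.items():
        if path == prefix or path.startswith(prefix + "/"):
            path = renamed + path[len(prefix):]
            break
    # Cloud serves everything under /api/
    return path if path.startswith("/api") else "/api" + path
```

With this mapping, local `/history/<id>` becomes cloud `/api/history_v2/<id>`, and local `/models/checkpoints` becomes `/api/experiment/models/checkpoints`.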
## Queue & System Management

```bash
# Local
curl -s http://127.0.0.1:8188/queue | python3 -m json.tool
curl -X POST http://127.0.0.1:8188/queue -d '{"clear": true}'  # cancel pending
curl -X POST http://127.0.0.1:8188/interrupt                   # cancel running
curl -X POST http://127.0.0.1:8188/free \
  -H "Content-Type: application/json" \
  -d '{"unload_models": true, "free_memory": true}'

# System stats (VRAM, RAM, GPU info)
curl -s http://127.0.0.1:8188/system_stats | python3 -m json.tool

# Cloud — same paths under /api/, plus:
python3 scripts/fetch_logs.py --tail-queue --host https://cloud.comfy.org
```

## Pitfalls

1. **API format required** — every script and the `/api/prompt` endpoint expect
   API-format workflow JSON. The scripts detect editor format (top-level
   `nodes` and `links` arrays) and tell you to re-export via
   "Workflow → Export (API)" (newer UI) or "Save (API Format)" (older UI).

2. **Server must be running** — all execution requires a live server.
   `comfy launch --background` starts one. Verify with
   `curl http://127.0.0.1:8188/system_stats`.

3. **Model names are exact** — case-sensitive, includes file extension.
   `check_deps.py` does fuzzy matching (with/without extension and folder
   prefix), but the workflow itself must use the canonical name. Use
   `comfy model list` to discover what's installed.

4. **Missing custom nodes** — "class_type not found" means a required node
   isn't installed. `check_deps.py` reports which package to install;
   `auto_fix_deps.py` runs the install for you.

5. **Working directory** — `comfy-cli` auto-detects the ComfyUI workspace.
   If commands fail with "no workspace found", use
   `comfy --workspace /path/to/ComfyUI <command>` or
   `comfy set-default /path/to/ComfyUI`.

6. **Cloud free-tier API limits** — `/api/prompt`, `/api/view`, `/api/upload/*`,
   `/api/object_info` all return 403 on free accounts. `health_check.py` and
   `check_deps.py` handle this gracefully and surface a clear message.

7. **Timeout for video/audio workflows** — auto-detected when an output node
   is `VHS_VideoCombine`, `SaveVideo`, etc.; the default jumps from 300 s to
   900 s. Override explicitly with `--timeout 1800`.

8. **Path traversal in output filenames** — server-supplied filenames are
   passed through `safe_path_join` to refuse anything escaping `--output-dir`.
   Keep this protection on — workflows with custom save nodes can produce
   arbitrary paths.

9. **Workflow JSON is arbitrary code** — custom nodes run Python, so
   submitting an unknown workflow has the same trust profile as `eval`.
   Inspect workflows from untrusted sources before running.

10. **Auto-randomized seed** — pass `seed: -1` in `--args` (or use
    `--randomize-seed` and omit the seed) to get a fresh seed per run.
    The actual seed is logged to stderr.

11. **`tracking` prompt** — first run of `comfy` may prompt for analytics.
    Use `comfy --skip-prompt tracking disable` to skip non-interactively.
    `comfyui_setup.sh` does this for you.

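The `safe_path_join` guard named in the path-traversal pitfall can be sketched like this. The scripts' real implementation may differ; this only illustrates the idea of resolving the joined path and refusing escapes.

```python
import os

def safe_path_join(output_dir: str, server_filename: str) -> str:
    """Join a server-supplied filename onto output_dir, refusing escapes."""
    base = os.path.realpath(output_dir)
    candidate = os.path.realpath(os.path.join(base, server_filename))
    # Reject anything that resolves outside output_dir (e.g. "../../etc/passwd").
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"unsafe output path: {server_filename!r}")
    return candidate
```

A plain filename resolves inside the output directory; anything with enough `..` segments to climb out raises instead of writing.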
## Verification Checklist

Use `python3 scripts/health_check.py` to run the whole list at once. Manual:

- [ ] `hardware_check.py` verdict is `ok` OR the user explicitly chose Comfy Cloud
- [ ] `comfy --version` works (or `uvx --from comfy-cli comfy --help`)
- [ ] `curl http://HOST:PORT/system_stats` returns JSON
- [ ] `comfy model list` shows at least one checkpoint (local) OR
      `/api/experiment/models/checkpoints` returns models (cloud)
- [ ] Workflow JSON is in API format
- [ ] `check_deps.py` reports `is_ready: true` (or only `node_check_skipped`
      on cloud free tier)
- [ ] Test run with a small workflow completes; outputs land in `--output-dir`

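The API-vs-editor format check from the pitfalls list amounts to a small classifier. The helper name is hypothetical, shown only to make the two shapes concrete: editor exports have top-level `nodes`/`links` arrays, API exports map node ids to objects carrying `class_type`.

```python
import json

def workflow_format(text: str) -> str:
    """Classify workflow JSON as ComfyUI 'api' or 'editor' format (sketch)."""
    data = json.loads(text)
    if isinstance(data, dict) and "nodes" in data and "links" in data:
        return "editor"  # UI "Save" export: top-level nodes/links arrays
    if isinstance(data, dict) and data and all(
        isinstance(node, dict) and "class_type" in node for node in data.values()
    ):
        return "api"     # "Save (API Format)": node-id -> {class_type, inputs}
    return "unknown"
```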
---
title: "Kanban Orchestrator"
sidebar_label: "Kanban Orchestrator"
description: "Decomposition playbook + specialist-roster conventions + anti-temptation rules for an orchestrator profile routing work through Kanban"
---

{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}

# Kanban Orchestrator

Decomposition playbook + specialist-roster conventions + anti-temptation rules for an orchestrator profile routing work through Kanban. The "don't do the work yourself" rule and the basic lifecycle are auto-injected into every kanban worker's system prompt; this skill is the deeper playbook when you're specifically playing the orchestrator role.

## Skill metadata

| | |
|---|---|
| Source | Bundled (installed by default) |
| Path | `skills/devops/kanban-orchestrator` |
| Version | `2.0.0` |
| Tags | `kanban`, `multi-agent`, `orchestration`, `routing` |
| Related skills | [`kanban-worker`](/docs/user-guide/skills/bundled/devops/devops-kanban-worker) |

## Reference: full SKILL.md

:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::

# Kanban Orchestrator — Decomposition Playbook

> The **core worker lifecycle** (including the `kanban_create` fan-out pattern and the "decompose, don't execute" rule) is auto-injected into every kanban process via the `KANBAN_GUIDANCE` system-prompt block. This skill is the deeper playbook when you're an orchestrator profile whose whole job is routing.

## When to use the board (vs. just doing the work)

Create Kanban tasks when any of these are true:

1. **Multiple specialists are needed.** Research + analysis + writing is three profiles.
2. **The work should survive a crash or restart.** Long-running, recurring, or important.
3. **The user might want to interject.** Human-in-the-loop at any step.
4. **Multiple subtasks can run in parallel.** Fan-out for speed.
5. **Review / iteration is expected.** A reviewer profile loops on drafter output.
6. **The audit trail matters.** Board rows persist in SQLite forever.

If *none* of those apply — it's a small one-shot reasoning task — use `delegate_task` instead or answer the user directly.

## The anti-temptation rules

Your job description says "route, don't execute." The rules that enforce that:

- **Do not execute the work yourself.** Your restricted toolset usually doesn't even include terminal/file/code/web for implementation. If you find yourself "just fixing this quickly" — stop and create a task for the right specialist.
- **For any concrete task, create a Kanban task and assign it.** Every single time.
- **If no specialist fits, ask the user which profile to create.** Do not default to doing it yourself under "close enough."
- **Decompose, route, and summarize — that's the whole job.**

## The standard specialist roster (convention)

Unless the user's setup has customized profiles, assume these exist. Adjust to whatever the user actually has — ask if you're unsure.

| Profile | Does | Typical workspace |
|---|---|---|
| `researcher` | Reads sources, gathers facts, writes findings | `scratch` |
| `analyst` | Synthesizes, ranks, de-dupes. Consumes multiple `researcher` outputs | `scratch` |
| `writer` | Drafts prose in the user's voice | `scratch` or `dir:` into their Obsidian vault |
| `reviewer` | Reads output, leaves findings, gates approval | `scratch` |
| `backend-eng` | Writes server-side code | `worktree` |
| `frontend-eng` | Writes client-side code | `worktree` |
| `ops` | Runs scripts, manages services, handles deployments | `dir:` into ops scripts repo |
| `pm` | Writes specs, acceptance criteria | `scratch` |

## Decomposition playbook

### Step 1 — Understand the goal

Ask clarifying questions if the goal is ambiguous. Cheap to ask; expensive to spawn the wrong fleet.

### Step 2 — Sketch the task graph

Before creating anything, draft the graph out loud (in your response to the user). Example for "Analyze whether we should migrate to Postgres":

```
T1  researcher  research: Postgres cost vs current
T2  researcher  research: Postgres performance vs current
T3  analyst     synthesize migration recommendation    parents: T1, T2
T4  writer      draft decision memo                    parents: T3
```

Show this to the user. Let them correct it before you create anything.

### Step 3 — Create tasks and link

```python
t1 = kanban_create(
    title="research: Postgres cost vs current",
    assignee="researcher",
    body="Compare estimated infrastructure costs, migration costs, and ongoing ops costs over a 3-year window. Sources: AWS/GCP pricing, team time estimates, current Postgres bills from peers.",
    tenant=os.environ.get("HERMES_TENANT"),
)["task_id"]

t2 = kanban_create(
    title="research: Postgres performance vs current",
    assignee="researcher",
    body="Compare query latency, throughput, and scaling characteristics at our expected data volume (~500GB, 10k QPS peak). Sources: benchmark papers, public case studies, pgbench results if easy.",
)["task_id"]

t3 = kanban_create(
    title="synthesize migration recommendation",
    assignee="analyst",
    body="Read the findings from T1 (cost) and T2 (performance). Produce a 1-page recommendation with explicit trade-offs and a go/no-go call.",
    parents=[t1, t2],
)["task_id"]

t4 = kanban_create(
    title="draft decision memo",
    assignee="writer",
    body="Turn the analyst's recommendation into a 2-page memo for the CTO. Match the tone of previous decision memos in the team's knowledge base.",
    parents=[t3],
)["task_id"]
```

`parents=[...]` gates promotion — children stay in `todo` until every parent reaches `done`, then auto-promote to `ready`. No manual coordination needed; the dispatcher and dependency engine handle it.

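The gating rule can be stated as a pure function. This is an illustration only, not the dependency engine's real data model: tasks here are plain dicts keyed by id.

```python
def ready_children(tasks: dict) -> list:
    """Children in 'todo' whose parents are all 'done' (these auto-promote to 'ready')."""
    done = {tid for tid, t in tasks.items() if t["status"] == "done"}
    return [
        tid
        for tid, t in tasks.items()
        if t["status"] == "todo" and t["parents"] and set(t["parents"]) <= done
    ]
```

In the Postgres example, T3 promotes only once both researcher tasks are done, and T4 waits on T3.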
### Step 4 — Complete your own task

If you were spawned as a task yourself (e.g. `planner` profile was assigned `T0: "investigate Postgres migration"`), mark it done with a summary of what you created:

```python
kanban_complete(
    summary="decomposed into T1-T4: 2 researchers parallel, 1 analyst on their outputs, 1 writer on the recommendation",
    metadata={
        "task_graph": {
            "T1": {"assignee": "researcher", "parents": []},
            "T2": {"assignee": "researcher", "parents": []},
            "T3": {"assignee": "analyst", "parents": ["T1", "T2"]},
            "T4": {"assignee": "writer", "parents": ["T3"]},
        },
    },
)
```

### Step 5 — Report back to the user

Tell them what you created in plain prose:

> I've queued 4 tasks:
> - **T1** (researcher): cost comparison
> - **T2** (researcher): performance comparison, in parallel with T1
> - **T3** (analyst): synthesizes T1 + T2 into a recommendation
> - **T4** (writer): turns T3 into a CTO memo
>
> The dispatcher will pick up T1 and T2 now. T3 starts when both finish. You'll get a gateway ping when T4 completes. Use the dashboard or `hermes kanban tail <id>` to follow along.

## Common patterns

**Fan-out + fan-in (research → synthesize):** N `researcher` tasks with no parents, one `analyst` task with all of them as parents.

**Pipeline with gates:** `pm → backend-eng → reviewer`. Each stage's `parents=[previous_task]`. Reviewer blocks or completes; if reviewer blocks, the operator unblocks with feedback and respawns.

**Same-profile queue:** 50 tasks, all assigned to `translator`, no dependencies between them. Dispatcher serializes — translator processes them in priority order, accumulating experience in their own memory.

**Human-in-the-loop:** Any task can `kanban_block()` to wait for input. Dispatcher respawns after `/unblock`. The comment thread carries the full context.

## Pitfalls

**Reassignment vs. new task.** If a reviewer blocks with "needs changes," create a NEW task linked from the reviewer's task — don't re-run the same task with a stern look. The new task is assigned to the original implementer profile.

**Argument order for links.** `kanban_link(parent_id=..., child_id=...)` — parent first. Mixing them up demotes the wrong task to `todo`.

**Don't pre-create the whole graph if the shape depends on intermediate findings.** If T3's structure depends on what T1 and T2 find, let T3 exist as a "synthesize findings" task whose own first step is to read parent handoffs and plan the rest. Orchestrators can spawn orchestrators.

**Tenant inheritance.** If `HERMES_TENANT` is set in your env, pass `tenant=os.environ.get("HERMES_TENANT")` on every `kanban_create` call so child tasks stay in the same namespace.

---
title: "Kanban Worker — Pitfalls, examples, and edge cases for Hermes Kanban workers"
sidebar_label: "Kanban Worker"
description: "Pitfalls, examples, and edge cases for Hermes Kanban workers"
---

{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}

# Kanban Worker

Pitfalls, examples, and edge cases for Hermes Kanban workers. The lifecycle itself is auto-injected into every worker's system prompt as KANBAN_GUIDANCE (from agent/prompt_builder.py); this skill is what you load when you want deeper detail on specific scenarios.

## Skill metadata

| | |
|---|---|
| Source | Bundled (installed by default) |
| Path | `skills/devops/kanban-worker` |
| Version | `2.0.0` |
| Tags | `kanban`, `multi-agent`, `collaboration`, `workflow`, `pitfalls` |
| Related skills | [`kanban-orchestrator`](/docs/user-guide/skills/bundled/devops/devops-kanban-orchestrator) |

## Reference: full SKILL.md

:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::

# Kanban Worker — Pitfalls and Examples

> You're seeing this skill because the Hermes Kanban dispatcher spawned you as a worker with `--skills kanban-worker` — it's loaded automatically for every dispatched worker. The **lifecycle** (6 steps: orient → work → heartbeat → block/complete) also lives in the `KANBAN_GUIDANCE` block that's auto-injected into your system prompt. This skill is the deeper detail: good handoff shapes, retry diagnostics, edge cases.

## Workspace handling

Your workspace kind determines how you should behave inside `$HERMES_KANBAN_WORKSPACE`:

| Kind | What it is | How to work |
|---|---|---|
| `scratch` | Fresh tmp dir, yours alone | Read/write freely; it gets GC'd when the task is archived. |
| `dir:<path>` | Shared persistent directory | Other runs will read what you write. Treat it like long-lived state. Path is guaranteed absolute (the kernel rejects relative paths). |
| `worktree` | Git worktree at the resolved path | If `.git` doesn't exist, run `git worktree add <path> <branch>` from the main repo first, then cd and work normally. Commit work here. |

## Tenant isolation

If `$HERMES_TENANT` is set, the task belongs to a tenant namespace. When reading or writing persistent memory, prefix memory entries with the tenant so context doesn't leak across tenants:

- Good: `business-a: Acme is our biggest customer`
- Bad (leaks): `Acme is our biggest customer`

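The prefix convention can be wrapped in a tiny helper. The function name is hypothetical; the point is only the tenant prefix:

```python
import os

def tenant_scoped(entry: str) -> str:
    """Prefix a memory entry with the active tenant namespace, if any."""
    tenant = os.environ.get("HERMES_TENANT")
    return f"{tenant}: {entry}" if tenant else entry
```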
## Good summary + metadata shapes

The `kanban_complete(summary=..., metadata=...)` handoff is how downstream workers read what you did. Patterns that work:

**Coding task:**
```python
kanban_complete(
    summary="shipped rate limiter — token bucket, keys on user_id with IP fallback, 14 tests pass",
    metadata={
        "changed_files": ["rate_limiter.py", "tests/test_rate_limiter.py"],
        "tests_run": 14,
        "tests_passed": 14,
        "decisions": ["user_id primary, IP fallback for unauthenticated requests"],
    },
)
```

**Research task:**
```python
kanban_complete(
    summary="3 competing libraries reviewed; vLLM wins on throughput, SGLang on latency, TensorRT-LLM on memory efficiency",
    metadata={
        "sources_read": 12,
        "recommendation": "vLLM",
        "benchmarks": {"vllm": 1.0, "sglang": 0.87, "trtllm": 0.72},
    },
)
```

**Review task:**
```python
kanban_complete(
    summary="reviewed PR #123; 2 blocking issues found (SQL injection in /search, missing CSRF on /settings)",
    metadata={
        "pr_number": 123,
        "findings": [
            {"severity": "critical", "file": "api/search.py", "line": 42, "issue": "raw SQL concat"},
            {"severity": "high", "file": "api/settings.py", "issue": "missing CSRF middleware"},
        ],
        "approved": False,
    },
)
```

Shape `metadata` so downstream parsers (reviewers, aggregators, schedulers) can use it without re-reading your prose.

## Block reasons that get answered fast

Bad: `"stuck"` — the human has no context.

Good: one sentence naming the specific decision you need. Leave longer context as a comment instead.

```python
kanban_comment(
    task_id=os.environ["HERMES_KANBAN_TASK"],
    body="Full context: I have user IPs from Cloudflare headers but some users are behind NATs with thousands of peers. Keying on IP alone causes false positives.",
)
kanban_block(reason="Rate limit key choice: IP (simple, NAT-unsafe) or user_id (requires auth, skips anonymous endpoints)?")
```

The block message is what appears in the dashboard / gateway notifier. The comment is the deeper context a human reads when they open the task.

## Heartbeats worth sending

Good heartbeats name progress: `"epoch 12/50, loss 0.31"`, `"scanned 1.2M/2.4M rows"`, `"uploaded 47/120 videos"`.

Bad heartbeats: `"still working"`, empty notes, sub-second intervals. Every few minutes max; skip entirely for tasks under ~2 minutes.

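Those two rules (a few minutes between heartbeats, none at all for short tasks) can be sketched as a throttle. The helper name and the exact thresholds are assumptions for illustration:

```python
import time

def should_heartbeat(last_sent, task_started, now=None,
                     min_interval=180, min_task_age=120):
    """Throttle heartbeats: none for tasks under ~2 min, then every few minutes."""
    now = time.time() if now is None else now
    if now - task_started < min_task_age:
        return False  # task too short to be worth a heartbeat
    return now - last_sent >= min_interval
```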
## Retry scenarios
|
||||
|
||||
If you open the task and `kanban_show` returns `runs: [...]` with one or more closed runs, you're a retry. The prior runs' `outcome` / `summary` / `error` tell you what didn't work. Don't repeat that path. Typical retry diagnostics:
|
||||
|
||||
- `outcome: "timed_out"` — the previous attempt hit `max_runtime_seconds`. You may need to chunk the work or shorten it.
|
||||
- `outcome: "crashed"` — OOM or segfault. Reduce memory footprint.
|
||||
- `outcome: "spawn_failed"` + `error: "..."` — usually a profile config issue (missing credential, bad PATH). Ask the human via `kanban_block` instead of retrying blindly.
|
||||
- `outcome: "reclaimed"` + `summary: "task archived..."` — operator archived the task out from under the previous run; you probably shouldn't be running at all, check status carefully.
|
||||
- `outcome: "blocked"` — a previous attempt blocked; the unblock comment should be in the thread by now.
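
The outcome-to-strategy mapping above can be sketched as a small dispatcher. This is a hypothetical helper, not part of the kanban toolkit; it assumes the `kanban_show` result has been parsed into a dict with a `runs` list as described above.

```python
# Hypothetical helper: map the last closed run's outcome to a retry strategy.
# Assumes `task` is the parsed kanban_show result described above.
def retry_hint(task: dict) -> str:
    runs = [r for r in task.get("runs", []) if r.get("outcome")]
    if not runs:
        return "fresh start: no prior runs to learn from"
    hints = {
        "timed_out": "chunk the work or shorten it",
        "crashed": "reduce memory footprint",
        "spawn_failed": "kanban_block and ask about profile config",
        "reclaimed": "stop: the task was archived",
        "blocked": "read the unblock comment in the thread first",
    }
    return hints.get(runs[-1]["outcome"], "review prior summaries before acting")
```

Consulting only the most recent run is a simplification; in practice, read every prior run's `summary` before choosing a path.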

## Do NOT

- Call `delegate_task` as a substitute for `kanban_create`. `delegate_task` is for short reasoning subtasks inside YOUR run; `kanban_create` is for cross-agent handoffs that outlive one API loop.
- Modify files outside `$HERMES_KANBAN_WORKSPACE` unless the task body says to.
- Create follow-up tasks assigned to yourself — assign to the right specialist.
- Complete a task you didn't actually finish. Block it instead.

## Pitfalls

**Task state can change between dispatch and your startup.** Between when the dispatcher claimed the task and when your process actually booted, the task may have been blocked, reassigned, or archived. Always `kanban_show` first. If it reports `blocked` or `archived`, stop — you shouldn't be running.

**Workspace may have stale artifacts.** Especially `dir:` and `worktree` workspaces can contain files from previous runs. Read the comment thread — it usually explains why you're running again and what state the workspace is in.

**Don't rely on the CLI when the tools are available.** The `kanban_*` tools work across all terminal backends (Docker, Modal, SSH). `hermes kanban <verb>` from your terminal tool will fail in containerized backends because the CLI isn't installed there. When in doubt, use the tool.

## CLI fallback (for scripting)

Every tool has a CLI equivalent for human operators and scripts:

- `kanban_show` ↔ `hermes kanban show <id> --json`
- `kanban_complete` ↔ `hermes kanban complete <id> --summary "..." --metadata '{...}'`
- `kanban_block` ↔ `hermes kanban block <id> "reason"`
- `kanban_create` ↔ `hermes kanban create "title" --assignee <profile> [--parent <id>]`
- etc.

Use the tools from inside an agent; the CLI exists for the human at the terminal.
---
title: "Here.Now — Publish static sites to {slug}.here.now"
sidebar_label: "Here.Now"
description: "Publish static sites to {slug}.here.now"
---

{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}

# Here.Now

Publish static sites to {slug}.here.now and store private files in cloud Drives for agent-to-agent handoff.

## Skill metadata

| | |
|---|---|
| Source | Optional — install with `hermes skills install official/productivity/here-now` |
| Path | `optional-skills/productivity/here-now` |
| Version | `1.15.3` |
| Author | here.now |
| License | MIT |
| Platforms | macos, linux |
| Tags | `here.now`, `herenow`, `publish`, `deploy`, `hosting`, `static-site`, `web`, `share`, `URL`, `drive`, `storage` |

## Reference: full SKILL.md

:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::

# here.now

here.now lets agents publish websites and store private files in cloud Drives.

Use here.now for two jobs:

- **Sites**: publish websites and files at `{slug}.here.now`.
- **Drives**: store private agent files in cloud folders.

## Current docs

**Before answering questions about here.now capabilities, features, or workflows, read the current docs:**

→ **https://here.now/docs**

Read the docs:

- at the first here.now-related interaction in a conversation
- any time the user asks how to do something
- any time the user asks what is possible, supported, or recommended
- before telling the user a feature is unsupported

Topics that require current docs (do not rely on local skill text alone):

- Drives and Drive sharing
- custom domains
- payments and payment gating
- forking
- proxy routes and service variables
- handles and links
- limits and quotas
- SPA routing
- error handling and remediation
- feature availability

**If docs and live API behavior disagree, trust the live API behavior.**

If the docs fetch fails or times out, continue with the local skill and live API/script output. Prefer live API behavior for active operations.

## Requirements

- Required binaries: `curl`, `file`, `jq`
- Optional environment variable: `$HERENOW_API_KEY`
- Optional Drive token variable: `$HERENOW_DRIVE_TOKEN`
- Optional credentials file: `~/.herenow/credentials`
- Skill helper paths:
  - `${HERMES_SKILL_DIR}/scripts/publish.sh` for publishing sites
  - `${HERMES_SKILL_DIR}/scripts/drive.sh` for private Drive storage

## Create a site

```bash
PUBLISH="${HERMES_SKILL_DIR}/scripts/publish.sh"
bash "$PUBLISH" {file-or-dir} --client hermes
```

Outputs the live URL (e.g. `https://bright-canvas-a7k2.here.now/`).

Under the hood this is a three-step flow: create/update -> upload files -> finalize. A site is not live until finalize succeeds.

Without an API key this creates an **anonymous site** that expires in 24 hours.
With a saved API key, the site is permanent.

**File structure:** For HTML sites, place `index.html` at the root of the directory you publish, not inside a subdirectory. The directory's contents become the site root. For example, publish `my-site/` where `my-site/index.html` exists — don't publish a parent folder that contains `my-site/`.

You can also publish raw files without any HTML. Single files get a rich auto-viewer (images, PDF, video, audio). Multiple files get an auto-generated directory listing with folder navigation and an image gallery.

## Update an existing site

```bash
PUBLISH="${HERMES_SKILL_DIR}/scripts/publish.sh"
bash "$PUBLISH" {file-or-dir} --slug {slug} --client hermes
```

The script auto-loads the `claimToken` from `.herenow/state.json` when updating anonymous sites. Pass `--claim-token {token}` to override.

Authenticated updates require a saved API key.

## Use a Drive

Use a Drive when the user wants private cloud storage for agent files: documents, context, memory, plans, assets, media, research, code, and anything else that should persist without being published as a website.

Every signed-in account has a default Drive named `My Drive`.

```bash
DRIVE="${HERMES_SKILL_DIR}/scripts/drive.sh"
bash "$DRIVE" default
bash "$DRIVE" ls "My Drive"
bash "$DRIVE" put "My Drive" notes/today.md --from ./notes/today.md
bash "$DRIVE" cat "My Drive" notes/today.md
bash "$DRIVE" share "My Drive" --perms write --prefix notes/ --ttl 7d
```

Use scoped Drive tokens for agent-to-agent handoff. If you receive a `herenow_drive` share block, use its `token` as `Authorization: Bearer <token>` against `api_base`, respect `pathPrefix` when present, and preserve ETags on writes. A `pathPrefix` of `null` means full-Drive access. If the skill is available, prefer `drive.sh`; otherwise call the listed API operations directly.

## API key storage

The publish script reads the API key from these sources (first match wins):

1. `--api-key {key}` flag (CI/scripting only — avoid in interactive use)
2. `$HERENOW_API_KEY` environment variable
3. `~/.herenow/credentials` file (recommended for agents)

To store a key, write it to the credentials file:

```bash
mkdir -p ~/.herenow && echo "{API_KEY}" > ~/.herenow/credentials && chmod 600 ~/.herenow/credentials
```

**IMPORTANT**: After receiving an API key, save it immediately — run the command above yourself. Do not ask the user to run it manually. Avoid passing the key via CLI flags (e.g. `--api-key`) in interactive sessions; the credentials file is the preferred storage method.

Never commit credentials or local state files (`~/.herenow/credentials`, `.herenow/state.json`) to source control.

## Getting an API key

To upgrade from anonymous (24h) to permanent sites:

1. Ask the user for their email address.
2. Request a one-time sign-in code:

```bash
curl -sS https://here.now/api/auth/agent/request-code \
  -H "content-type: application/json" \
  -d '{"email": "user@example.com"}'
```

3. Tell the user: "Check your inbox for a sign-in code from here.now and paste it here."
4. Verify the code and get the API key:

```bash
curl -sS https://here.now/api/auth/agent/verify-code \
  -H "content-type: application/json" \
  -d '{"email":"user@example.com","code":"ABCD-2345"}'
```

5. Save the returned `apiKey` yourself (do not ask the user to do this):

```bash
mkdir -p ~/.herenow && echo "{API_KEY}" > ~/.herenow/credentials && chmod 600 ~/.herenow/credentials
```

## State file

After every site create/update, the script writes to `.herenow/state.json` in the working directory:

```json
{
  "publishes": {
    "bright-canvas-a7k2": {
      "siteUrl": "https://bright-canvas-a7k2.here.now/",
      "claimToken": "abc123",
      "claimUrl": "https://here.now/claim?slug=bright-canvas-a7k2&token=abc123",
      "expiresAt": "2026-02-18T01:00:00.000Z"
    }
  }
}
```

Before creating or updating sites, you may check this file to find prior slugs.
Treat `.herenow/state.json` as internal cache only.
Never present this local file path as a URL, and never use it as the source of truth for auth mode, expiry, or claim URL.
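
A minimal sketch of reading that cache, assuming the layout shown above. `prior_slugs` is a hypothetical helper name; a missing file simply means no prior publishes.

```python
import json
import pathlib

# Hypothetical helper: list slugs previously published from this directory.
# Reads the .herenow/state.json cache layout shown above; treat it as a
# hint for reuse, never as the source of truth for auth mode or expiry.
def prior_slugs(workdir: str = ".") -> list[str]:
    state = pathlib.Path(workdir) / ".herenow" / "state.json"
    if not state.exists():
        return []
    data = json.loads(state.read_text())
    return sorted(data.get("publishes", {}))
```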

## What to tell the user

For published sites:

- Always share the `siteUrl` from the current script run.
- Read and follow `publish_result.*` lines from script stderr to determine auth mode.
- When `publish_result.auth_mode=authenticated`: tell the user the site is **permanent** and saved to their account. No claim URL is needed.
- When `publish_result.auth_mode=anonymous`: tell the user the site **expires in 24 hours**. Share the claim URL (if `publish_result.claim_url` is non-empty and starts with `https://`) so they can keep it permanently. Warn that claim tokens are only returned once and cannot be recovered.
- Never tell the user to inspect `.herenow/state.json` for claim URLs or auth status.

For Drives:

- Do not describe Drive files as public URLs.
- Tell the user Drive contents are private unless shared with a scoped token.
- When sharing access with another agent, prefer a scoped token with a narrow `pathPrefix` and short TTL.

## publish.sh options

| Flag | Description |
| ---------------------- | -------------------------------------------- |
| `--slug {slug}` | Update an existing site instead of creating |
| `--claim-token {token}`| Override claim token for anonymous updates |
| `--title {text}` | Viewer title (non-HTML sites) |
| `--description {text}` | Viewer description |
| `--ttl {seconds}` | Set expiry (authenticated only) |
| `--client {name}` | Agent name for attribution (e.g. `hermes`) |
| `--base-url {url}` | API base URL (default: `https://here.now`) |
| `--allow-nonherenow-base-url` | Allow sending auth to non-default `--base-url` |
| `--api-key {key}` | API key override (prefer credentials file) |
| `--spa` | Enable SPA routing (serve index.html for unknown paths) |
| `--forkable` | Allow others to fork this site |

## Beyond publish.sh

For Drive operations, use `drive.sh` or the Drive API. For broader account and site management — delete, metadata, passwords, payments, domains, handles, links, variables, proxy routes, forking, duplication, and more — see the current docs:

→ **https://here.now/docs**

Full docs: https://here.now/docs
---
title: "Shopify — Shopify Admin & Storefront GraphQL APIs via curl"
sidebar_label: "Shopify"
description: "Shopify Admin & Storefront GraphQL APIs via curl"
---

{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}

# Shopify

Shopify Admin & Storefront GraphQL APIs via curl. Products, orders, customers, inventory, metafields.

## Skill metadata

| | |
|---|---|
| Source | Optional — install with `hermes skills install official/productivity/shopify` |
| Path | `optional-skills/productivity/shopify` |
| Version | `1.0.0` |
| Author | community |
| License | MIT |
| Tags | `Shopify`, `E-commerce`, `Commerce`, `API`, `GraphQL` |
| Related skills | [`airtable`](/docs/user-guide/skills/bundled/productivity/productivity-airtable), [`xurl`](/docs/user-guide/skills/bundled/social-media/social-media-xurl) |

## Reference: full SKILL.md

:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::

# Shopify — Admin & Storefront GraphQL APIs

Work with Shopify stores directly through `curl`: list products, manage inventory, pull orders, update customers, read metafields. No SDK, no app framework — just the GraphQL endpoint and a custom-app access token.

The REST Admin API is legacy since 2024-04 and only receives security fixes. **Use GraphQL Admin** for all admin work. Use **Storefront GraphQL** for read-only customer-facing queries (products, collections, cart).

## Prerequisites

1. In Shopify admin: **Settings → Apps and sales channels → Develop apps → Create an app**.
2. Click **Configure Admin API scopes**, select what you need (examples below), save.
3. **Install app** → the Admin API access token appears ONCE. Copy it immediately — Shopify will never show it again. Tokens start with `shpat_`.
4. Save to `~/.hermes/.env`:

```
SHOPIFY_ACCESS_TOKEN=shpat_xxxxxxxxxxxxxxxxxxxx
SHOPIFY_STORE_DOMAIN=my-store.myshopify.com
SHOPIFY_API_VERSION=2026-01
```

> **Heads up:** As of January 1, 2026, new "legacy custom apps" can no longer be created in the Shopify admin. New setups should use the **Dev Dashboard** (`shopify.dev/docs/apps/build/dev-dashboard`). Existing admin-created apps keep working. If the user's shop has no existing custom app and it's after 2026-01-01, direct them to the Dev Dashboard instead of the admin flow.

Common scopes by task:
- Products / collections: `read_products`, `write_products`
- Inventory: `read_inventory`, `write_inventory`, `read_locations`
- Orders: `read_orders`, `write_orders` (only the last 60 days of orders without `read_all_orders`)
- Customers: `read_customers`, `write_customers`
- Draft orders: `read_draft_orders`, `write_draft_orders`
- Fulfillments: `read_fulfillments`, `write_fulfillments`
- Metafields / metaobjects: covered by the matching resource scopes

## API Basics

- **Endpoint:** `https://$SHOPIFY_STORE_DOMAIN/admin/api/$SHOPIFY_API_VERSION/graphql.json`
- **Auth header:** `X-Shopify-Access-Token: $SHOPIFY_ACCESS_TOKEN` (NOT `Authorization: Bearer`)
- **Method:** always `POST`, always `Content-Type: application/json`, body is `{"query": "...", "variables": {...}}`
- **HTTP 200 does not mean success.** GraphQL returns errors in a top-level `errors` array and per-field `userErrors`. Always check both.
- **IDs are GID strings:** `gid://shopify/Product/10079467700516`, `gid://shopify/Variant/...`, `gid://shopify/Order/...`. Pass these verbatim — don't strip the prefix.
- **Rate limit:** calculated via query cost (leaky bucket). Each response has `extensions.cost` with `requestedQueryCost`, `actualQueryCost`, `throttleStatus.{currentlyAvailable, maximumAvailable, restoreRate}`. Back off when `currentlyAvailable` drops below your next query's cost. Standard shops = 100 points bucket, 50/s restore; Plus = 1000/100.
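
The back-off rule in the rate-limit bullet can be sketched from `extensions.cost`. This assumes `restoreRate` is points restored per second, as the field names suggest; `wait_seconds` is a hypothetical helper name.

```python
# Sketch: seconds to wait before the next query, from extensions.cost.
# deficit / restoreRate = time until the bucket refills enough points.
def wait_seconds(cost: dict, next_query_cost: float) -> float:
    status = cost["throttleStatus"]
    deficit = next_query_cost - status["currentlyAvailable"]
    return max(0.0, deficit / status["restoreRate"])
```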

Base curl pattern (reusable):

```bash
shop_gql() {
  local query="$1"
  local variables="${2:-{}}"
  curl -sS -X POST \
    "https://${SHOPIFY_STORE_DOMAIN}/admin/api/${SHOPIFY_API_VERSION:-2026-01}/graphql.json" \
    -H "Content-Type: application/json" \
    -H "X-Shopify-Access-Token: ${SHOPIFY_ACCESS_TOKEN}" \
    --data "$(jq -nc --arg q "$query" --argjson v "$variables" '{query: $q, variables: $v}')"
}
```

Pipe through `jq` for readable output. `-sS` keeps errors visible but hides the progress bar.

## Discovery

### Shop info + current API version
```bash
shop_gql '{ shop { name myshopifyDomain primaryDomain { url } currencyCode plan { displayName } } }' | jq
```

### List all supported API versions
```bash
shop_gql '{ publicApiVersions { handle supported } }' | jq '.data.publicApiVersions[] | select(.supported)'
```

## Products

### Search products (first 20 matching query)
```bash
shop_gql '
query($q: String!) {
  products(first: 20, query: $q) {
    edges { node { id title handle status totalInventory variants(first: 5) { edges { node { id sku price inventoryQuantity } } } } }
    pageInfo { hasNextPage endCursor }
  }
}' '{"q":"hoodie status:active"}' | jq
```

Query syntax supports `title:`, `sku:`, `vendor:`, `product_type:`, `status:active`, `tag:`, `created_at:>2025-01-01`. Full grammar: https://shopify.dev/docs/api/usage/search-syntax

### Paginate products (cursor)
```bash
shop_gql '
query($cursor: String) {
  products(first: 100, after: $cursor) {
    edges { cursor node { id handle } }
    pageInfo { hasNextPage endCursor }
  }
}' '{"cursor":null}'
# subsequent calls: pass the previous endCursor
```

### Get a product with variants + metafields
```bash
shop_gql '
query($id: ID!) {
  product(id: $id) {
    id title handle descriptionHtml tags status
    variants(first: 20) { edges { node { id sku price compareAtPrice inventoryQuantity selectedOptions { name value } } } }
    metafields(first: 20) { edges { node { namespace key type value } } }
  }
}' '{"id":"gid://shopify/Product/10079467700516"}' | jq
```

### Create a product with one variant
```bash
shop_gql '
mutation($input: ProductCreateInput!) {
  productCreate(product: $input) {
    product { id handle }
    userErrors { field message }
  }
}' '{"input":{"title":"Test Hoodie","status":"DRAFT","vendor":"Hermes","productType":"Apparel","tags":["test"]}}'
```

Variants now have their own mutations in recent versions:

```bash
# Add variants after creating the product
shop_gql '
mutation($productId: ID!, $variants: [ProductVariantsBulkInput!]!) {
  productVariantsBulkCreate(productId: $productId, variants: $variants) {
    productVariants { id sku price }
    userErrors { field message }
  }
}' '{"productId":"gid://shopify/Product/...","variants":[{"optionValues":[{"optionName":"Size","name":"M"}],"price":"49.00","inventoryItem":{"sku":"HD-M","tracked":true}}]}'
```

### Update price / SKU
```bash
shop_gql '
mutation($productId: ID!, $variants: [ProductVariantsBulkInput!]!) {
  productVariantsBulkUpdate(productId: $productId, variants: $variants) {
    productVariants { id sku price }
    userErrors { field message }
  }
}' '{"productId":"gid://shopify/Product/...","variants":[{"id":"gid://shopify/ProductVariant/...","price":"55.00"}]}'
```

## Orders

### List recent orders (limited to the last 60 days without `read_all_orders`)
```bash
shop_gql '
{
  orders(first: 20, reverse: true, query: "financial_status:paid") {
    edges { node {
      id name createdAt displayFinancialStatus displayFulfillmentStatus
      totalPriceSet { shopMoney { amount currencyCode } }
      customer { id displayName email }
      lineItems(first: 10) { edges { node { title quantity sku } } }
    } }
  }
}' | jq
```

Useful order query filters: `financial_status:paid|pending|refunded`, `fulfillment_status:unfulfilled|fulfilled`, `created_at:>2025-01-01`, `tag:gift`, `email:foo@example.com`.

### Fetch a single order with shipping address
```bash
shop_gql '
query($id: ID!) {
  order(id: $id) {
    id name email
    shippingAddress { name address1 address2 city province country zip phone }
    lineItems(first: 50) { edges { node { title quantity variant { sku } originalUnitPriceSet { shopMoney { amount currencyCode } } } } }
    transactions { id kind status amountSet { shopMoney { amount currencyCode } } }
  }
}' '{"id":"gid://shopify/Order/...."}' | jq
```

## Customers

```bash
# Search
shop_gql '
{
  customers(first: 10, query: "email:*@example.com") {
    edges { node { id email displayName numberOfOrders amountSpent { amount currencyCode } } }
  }
}'

# Create
shop_gql '
mutation($input: CustomerInput!) {
  customerCreate(input: $input) {
    customer { id email }
    userErrors { field message }
  }
}' '{"input":{"email":"test@example.com","firstName":"Test","lastName":"User","tags":["api-created"]}}'
```

## Inventory

Inventory lives on **inventory items** tied to variants, with quantities tracked per **location**.

```bash
# Get inventory for a variant across all locations
shop_gql '
query($id: ID!) {
  productVariant(id: $id) {
    id sku
    inventoryItem {
      id tracked
      inventoryLevels(first: 10) {
        edges { node { location { id name } quantities(names: ["available","on_hand","committed"]) { name quantity } } }
      }
    }
  }
}' '{"id":"gid://shopify/ProductVariant/..."}'
```

Adjust stock (delta) — uses `inventoryAdjustQuantities`:

```bash
shop_gql '
mutation($input: InventoryAdjustQuantitiesInput!) {
  inventoryAdjustQuantities(input: $input) {
    inventoryAdjustmentGroup { reason changes { name delta } }
    userErrors { field message }
  }
}' '{
  "input": {
    "reason": "correction",
    "name": "available",
    "changes": [{"delta": 5, "inventoryItemId": "gid://shopify/InventoryItem/...", "locationId": "gid://shopify/Location/..."}]
  }
}'
```

Set absolute stock (not delta) — `inventorySetQuantities`:

```bash
shop_gql '
mutation($input: InventorySetQuantitiesInput!) {
  inventorySetQuantities(input: $input) {
    inventoryAdjustmentGroup { id }
    userErrors { field message }
  }
}' '{"input":{"reason":"correction","name":"available","ignoreCompareQuantity":true,"quantities":[{"inventoryItemId":"gid://shopify/InventoryItem/...","locationId":"gid://shopify/Location/...","quantity":100}]}}'
```

## Metafields & Metaobjects

Metafields attach custom data to resources (products, customers, orders, shop).

```bash
# Read
shop_gql '
query($id: ID!) {
  product(id: $id) {
    metafields(first: 10, namespace: "custom") {
      edges { node { key type value } }
    }
  }
}' '{"id":"gid://shopify/Product/..."}'

# Write (works for any owner type)
shop_gql '
mutation($metafields: [MetafieldsSetInput!]!) {
  metafieldsSet(metafields: $metafields) {
    metafields { id key namespace }
    userErrors { field message code }
  }
}' '{"metafields":[{"ownerId":"gid://shopify/Product/...","namespace":"custom","key":"care_instructions","type":"multi_line_text_field","value":"Wash cold. Tumble dry low."}]}'
```

## Storefront API (public read-only)

Different endpoint, different token; used for customer-facing apps and Hydrogen-style headless setups. Headers differ:

- **Endpoint:** `https://$SHOPIFY_STORE_DOMAIN/api/$SHOPIFY_API_VERSION/graphql.json`
- **Auth header (public):** `X-Shopify-Storefront-Access-Token: <public token>` — embeddable in browser
- **Auth header (private):** `Shopify-Storefront-Private-Token: <private token>` — server-only

```bash
curl -sS -X POST \
  "https://${SHOPIFY_STORE_DOMAIN}/api/${SHOPIFY_API_VERSION:-2026-01}/graphql.json" \
  -H "Content-Type: application/json" \
  -H "X-Shopify-Storefront-Access-Token: ${SHOPIFY_STOREFRONT_TOKEN}" \
  -d '{"query":"{ shop { name } products(first: 5) { edges { node { id title handle } } } }"}' | jq
```

## Bulk Operations

For dumps larger than rate limits allow (full product catalog, all orders for a year):

```bash
# 1. Start bulk query
shop_gql '
mutation {
  bulkOperationRunQuery(query: """
    { products { edges { node { id title handle variants { edges { node { sku price } } } } } } }
  """) {
    bulkOperation { id status }
    userErrors { field message }
  }
}'

# 2. Poll status
shop_gql '{ currentBulkOperation { id status errorCode objectCount fileSize url partialDataUrl } }'

# 3. When status=COMPLETED, download the JSONL file
curl -sS "$URL" > products.jsonl
```

Each JSONL line is a node, and nested connections are emitted as separate lines with `__parentId`. Reassemble client-side if needed.
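
That client-side reassembly can be sketched as follows. `reassemble` is a hypothetical helper; it relies on bulk output listing each parent line before its children, which matches the format described above.

```python
import json

# Sketch: rejoin bulk-operation JSONL children to their parents via __parentId.
# Assumes parent lines precede their child lines in the file.
def reassemble(jsonl_text: str) -> dict:
    parents: dict = {}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        node = json.loads(line)
        pid = node.pop("__parentId", None)
        if pid is None:
            parents[node["id"]] = node  # top-level node
        else:
            # nested connection row: attach to its parent
            parents[pid].setdefault("_children", []).append(node)
    return parents
```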

## Webhooks

Subscribe to events so you don't have to poll:

```bash
shop_gql '
mutation($topic: WebhookSubscriptionTopic!, $sub: WebhookSubscriptionInput!) {
  webhookSubscriptionCreate(topic: $topic, webhookSubscription: $sub) {
    webhookSubscription { id topic endpoint { __typename ... on WebhookHttpEndpoint { callbackUrl } } }
    userErrors { field message }
  }
}' '{"topic":"ORDERS_CREATE","sub":{"callbackUrl":"https://example.com/webhook","format":"JSON"}}'
```

Verify incoming webhook HMAC using the app's client secret (not the access token):

```bash
echo -n "$REQUEST_BODY" | openssl dgst -sha256 -hmac "$APP_SECRET" -binary | base64
# Compare to X-Shopify-Hmac-Sha256 header
```
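
The same HMAC check in Python, with a constant-time comparison; `webhook_ok` is a hypothetical helper name, and the header value is the base64 digest Shopify sends in `X-Shopify-Hmac-Sha256`.

```python
import base64
import hashlib
import hmac

# Sketch: verify X-Shopify-Hmac-Sha256 against the raw request body using the
# app's client secret. compare_digest avoids timing side channels.
def webhook_ok(body: bytes, app_secret: str, header_value: str) -> bool:
    digest = hmac.new(app_secret.encode(), body, hashlib.sha256).digest()
    return hmac.compare_digest(base64.b64encode(digest).decode(), header_value)
```

Always hash the raw request bytes, not a re-serialized JSON body; any whitespace change breaks the digest.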

## Pitfalls

- **REST endpoints still exist but are frozen.** Don't write new integrations against `/admin/api/.../products.json`. Use GraphQL.
- **Token format check.** Admin tokens start with `shpat_`; Storefront public tokens with `shpua_`. If you pair a token with the wrong header, every request returns 401 without a useful error body.
- **403 with a valid token = missing scope.** Shopify returns `{"errors":[{"message":"Access denied for ..."}]}`. Re-configure Admin API scopes on the app, then reinstall it to regenerate the token.
- **Empty `userErrors` does not mean success.** Also check that `data.<mutation>.<resource>` is non-null. Some failures populate neither — inspect the whole response.
- **GID vs numeric ID.** Legacy REST gave numeric IDs; GraphQL wants full GID strings. To convert: `gid://shopify/Product/<numeric>`.
- **Rate limit surprise.** A single `products(first: 250)` with deep nesting can cost 1000+ points and throttle immediately on a standard-plan shop. Start narrow, read `extensions.cost`, adjust.
- **Pagination order.** `products(first: N, reverse: true)` sorts by `id DESC`, not `created_at`. Use `sortKey: CREATED_AT, reverse: true` for "newest first."
- **`read_all_orders` for historical data.** Without it, `orders(...)` silently caps at the 60-day window. You won't get an error, just fewer results than expected. For Shopify Plus merchants with many orders, request this scope via the app's protected-data settings.
- **Currencies are strings.** Amounts come back as `"49.00"`, not `49.0`. Don't `jq tonumber` blindly if you care about zero-padding.
- **Multi-currency Money fields** have `shopMoney` (the store's currency) AND `presentmentMoney` (the customer's). Pick one consistently.
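
For the GID pitfall, the conversion is plain string assembly; `to_gid` is a hypothetical helper name.

```python
# Sketch: build a GraphQL GID from a legacy numeric REST ID, using the
# gid://shopify/<Resource>/<numeric> shape noted in the pitfalls.
def to_gid(resource: str, numeric_id: int) -> str:
    return f"gid://shopify/{resource}/{numeric_id}"
```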

## Safety

Mutations in Shopify are real — they create products, issue refunds, cancel orders, ship fulfillments. Before running `productDelete`, `orderCancel`, `refundCreate`, or any bulk mutation: state clearly what the change is and on which shop, then confirm with the user. There is no staging clone of production data unless the user has a separate dev store.