Hermes's skill loader (agent/skill_utils.skill_matches_platform) already honors
the 'platforms:' frontmatter field and skips loading skills whose declared
platform list doesn't include sys.platform. Seven bundled skills are in fact
Linux/macOS-only but never declared it, so they leak into Windows skill
listings and sometimes load with broken instructions.
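The gate itself is a one-field check. A minimal sketch of the logic, assuming
the shape of the frontmatter dict and the platform-alias table (only the
function name, the 'platforms:' field, and the sys.platform comparison come
from the actual loader):

    import sys

    # Assumed alias table: frontmatter platform names -> sys.platform prefixes.
    _PLATFORM_PREFIXES = {"linux": "linux", "macos": "darwin", "windows": "win32"}

    def skill_matches_platform(frontmatter: dict) -> bool:
        """Load everywhere when no 'platforms:' field is declared;
        otherwise the declared list must cover sys.platform."""
        declared = frontmatter.get("platforms")
        if not declared:
            return True
        return any(
            sys.platform.startswith(_PLATFORM_PREFIXES.get(p, p))
            for p in declared
        )

A skill declaring 'platforms: [linux, macos]' therefore loads when sys.platform
is 'linux' or 'darwin' and is skipped on 'win32'.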
Audited all 160 SKILL.md files (skills/ + optional-skills/) for Windows-hostile
signals: apt-get/brew/systemd/chmod +x install flows, ptrace or /proc runtime
dependencies, bash-only launcher scripts, and package dependencies with no
Windows build. The seven below fail one or more of those tests in a way that
fundamentally can't be papered over by docs edits (a sketch of the scan
follows the list):
- minecraft-modpack-server: bash start.sh + chmod +x + apt openjdk
- evaluating-llms-harness: lm-eval-harness bash launcher scripts
- distributed-llm-pretraining-torchtitan: bash multi-node torchrun launcher
- python-debugpy: remote attach relies on /proc ptrace_scope
- pytorch-fsdp: NCCL backend; Windows path is WSL only
- tensorrt-llm: NVIDIA TensorRT-LLM has no Windows build
- searxng-search: Docker volume flow assumes POSIX $(pwd)
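The scan was mechanical at its core: grep each SKILL.md for signal patterns,
then review the hits by hand. A rough sketch of that pass (the pattern set is
illustrative, not the exact one used):

    import re
    from pathlib import Path

    # Illustrative Windows-hostile signals; the real audit also judged
    # context (hard install dependency vs. incidental mention).
    SIGNALS = {
        "posix-install": re.compile(r"apt-get|brew install|systemctl|chmod \+x"),
        "proc-runtime":  re.compile(r"/proc/|ptrace"),
        "bash-launcher": re.compile(r"\bbash\s+\S+\.sh\b"),
    }

    def audit(root: Path) -> dict[str, list[str]]:
        hits: dict[str, list[str]] = {}
        for skill_md in root.rglob("SKILL.md"):
            text = skill_md.read_text(encoding="utf-8")
            matched = [name for name, pat in SIGNALS.items() if pat.search(text)]
            if matched:
                hits[skill_md.parent.name] = matched
        return hits

    for skill, signals in sorted(audit(Path("skills")).items()):
        print(f"{skill}: {', '.join(signals)}")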
All seven get 'platforms: [linux, macos]'. On Windows the loader now skips
them silently — no more phantom skill listings, no more mid-task failures
because an Apple-only path was surfaced as a suggestion.
Cross-platform skills that merely CONTAIN such signals in examples or
install instructions (brew install as one of several paths, /tmp/ in a code
snippet, etc.) are NOT touched by this commit. A broader audit that declares
the ~140 cross-platform skills as 'platforms: [linux, macos, windows]' can
follow as a separate change once each has been verified working on Windows.
The installed user copies under ~/AppData/Local/hermes/skills/ (when they
exist) are also patched so the running session reflects the gating
immediately, but only the in-repo files are committed here.
For 14 of 74 compressed skills, the original description contained
trigger keywords, technique counts, attribution, or use-case phrases
not covered by the existing body content. Prepends a 'When to use' /
'What's inside' block near the top so the agent still has the full
context when the skill is loaded.
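A sketch of the salvage step; the helper name and exact block wording are
hypothetical, only the 'When to use' / 'What's inside' headings and the
placement near the top come from the change itself:

    def prepend_salvage_block(body: str, when_to_use: str, whats_inside: str) -> str:
        """Insert the salvaged context right after the closing '---' of
        the YAML frontmatter so it is read whenever the skill loads."""
        block = (
            "## When to use\n" + when_to_use + "\n\n"
            "## What's inside\n" + whats_inside + "\n\n"
        )
        head, sep, rest = body.partition("\n---\n")  # end of frontmatter
        if sep:
            return head + sep + "\n" + block + rest.lstrip()
        return block + body  # no frontmatter found: block goes first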
Skills salvaged:
- codex, ascii-video, creative-ideation, excalidraw, manim-video, p5js
- gif-search, heartmula, youtube-content
- lm-evaluation-harness, obliteratus, vllm, axolotl
- powerpoint
Remaining 60 skills were verified to already cover the dropped content
in their existing body sections (When to Use, overview, intro prose)
or had short descriptions fully captured by the new compressed form.
Target: every skill's description fits in a one-line gateway menu and
leads with trigger keywords an agent would match on. Drops filler like
'Use this skill to', 'A skill for', 'This skill provides'.
Before: max description length was 791 chars (architecture-diagram),
74 of 81 built-in skills were >60 chars.
After: max 60, mean 54, all 81 built-in skills <=60.
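The limit is easy to enforce mechanically. A sketch of the check applied to
each description (the filler list mirrors the examples above; the helper
itself is hypothetical):

    import re

    FILLER = re.compile(r"^(use this skill to|a skill for|this skill provides)\s+",
                        re.IGNORECASE)
    MAX_LEN = 60  # one-line gateway menu budget

    def compress_check(description: str) -> str:
        desc = FILLER.sub("", description.strip())
        desc = desc[:1].upper() + desc[1:]  # lead with the trigger keyword
        if len(desc) > MAX_LEN:
            raise ValueError(f"{len(desc)} chars > {MAX_LEN}: {desc!r}")
        return desc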
Rewritten with double-quoted YAML scalars to preserve Chinese/arrow
glyphs (baoyu-comic, yuanbao, youtube-content).
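Round-tripping the quoting is straightforward with PyYAML (assuming PyYAML;
the sample string is illustrative, not an actual skill description):

    import yaml

    desc = "漫画脚本 → comic panels"  # sample Chinese/arrow glyphs
    # default_style='"' forces double-quoted scalars; allow_unicode=True
    # emits the glyphs literally instead of \u-escaping them.
    dumped = yaml.safe_dump({"description": desc},
                            allow_unicode=True, default_style='"')
    assert yaml.safe_load(dumped)["description"] == desc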