mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-05-01 01:51:44 +00:00
skills: adapt spike/sketch + 2 references from gsd-build/get-shit-done (MIT) (#17421)
* skills: port spike, sketch, and gates/context-budget references from GSD

Adds two new lightweight standalone skills and two reference docs adapted from gsd-build/get-shit-done (MIT © 2025 Lex Christopherson). All ports coexist cleanly with a full `npx get-shit-done-cc --hermes --global` install — GSD lives under `skills/gsd-*/`, these ports live at their natural Hermes category paths, zero name collisions.

New skills:

- skills/software-development/spike/ — Lightweight "spike an idea with throwaway experiments" workflow: decompose into Given/When/Then questions, research per spike, build comparable variants, close with a VALIDATED/PARTIAL/INVALIDATED verdict. Standalone alternative to the full `gsd-spike` (which requires `.planning/spikes/` state machinery and the rest of GSD).
- skills/creative/sketch/ — Lightweight "sketch 2-3 HTML design variants" workflow: intake (feel, references, core action), produce differentiated variants along a design axis, head-to-head comparison. Standalone alternative to the full `gsd-sketch`.

New references under subagent-driven-development/:

- references/context-budget-discipline.md — Four-tier context degradation model (PEAK/GOOD/DEGRADING/POOR at 0-30%/30-50%/50-70%/70%+) with read-depth rules that scale with context window size, plus early warning signs of silent degradation (silent partial completion, increasing vagueness, skipped protocol steps).
- references/gates-taxonomy.md — Four canonical gate types for validation checkpoints: Pre-flight (precondition block), Revision (bounded retry loop with stall detection), Escalation (pause for human decision), Abort (terminate to prevent damage). Each ships with behavior, recovery, and examples.

Collision guard: each port has explicit "If the user has the full GSD system installed" guidance directing the agent to prefer `gsd-spike` / `gsd-sketch` when the full workflow is available.
Verified end-to-end with 86 GSD skills + these 2 Hermes ports installed in the same HERMES_HOME — 90 total skills, zero duplicate names, both counterparts appear in the system prompt with distinct descriptions. Attribution preserved in each SKILL.md footer per the MIT notice requirement. Full GSD system now installable via `npx get-shit-done-cc --hermes --global` (gsd-build/get-shit-done#2845).

* skills/gsd-port: tighten descriptions, surface Hermes-native tools

Review feedback adjustments to the spike/sketch ports from the previous commit on this branch:

- description lengths trimmed to <=60 chars with trigger-first phrasing (spike: 55 chars, 'Throwaway experiments to validate an idea before build.'; sketch: 55 chars, 'Throwaway HTML mockups: 2-3 design variants to compare.')
- author field credits gsd-build/get-shit-done explicitly
- stale duplicate top-level `tags:` removed from sketch frontmatter (Hermes reads only metadata.hermes.tags — the top-level field was dead weight)
- spike research step now shows concrete Hermes tool calls (web_search, web_extract with real URLs, terminal for venv inspection) instead of just naming the tools
- spike build step adds a worked tool-sequence example (terminal + write_file + terminal to run) and a delegate_task fan-out pattern for parallel comparison spikes (002a / 002b)
- sketch build step adds a browser_navigate + browser_vision verification step — a visual spot-check that catches layout bugs pure source inspection misses
- sketch Output section adds a worked tool-sequence example mirroring the spike pattern

Descriptions now lead with 'Throwaway' (the pattern-match word that signals 'disposable / not production code') — a clean activation signal for the agent in the system-prompt skill index.
This commit is contained in:
parent
fe6c86623f
commit
aea72c0936
5 changed files with 568 additions and 0 deletions
196
skills/software-development/spike/SKILL.md
Normal file
@@ -0,0 +1,196 @@
---
name: spike
description: "Throwaway experiments to validate an idea before build."
version: 1.0.0
author: Hermes Agent (adapted from gsd-build/get-shit-done)
license: MIT
metadata:
  hermes:
    tags: [spike, prototype, experiment, feasibility, throwaway, exploration, research, planning, mvp, proof-of-concept]
    related_skills: [sketch, writing-plans, subagent-driven-development, plan]
---

# Spike

Use this skill when the user wants to **feel out an idea** before committing to a real build — validating feasibility, comparing approaches, or surfacing unknowns that no amount of research will answer. Spikes are disposable by design. Throw them away once they've paid their debt.

Load this when the user says things like "let me try this", "I want to see if X works", "spike this out", "before I commit to Y", "quick prototype of Z", "is this even possible?", or "compare A vs B".

## When NOT to use this

- The answer is knowable from docs or reading code — just do research, don't build
- The work is on the production path — use `writing-plans` / `plan` instead
- The idea is already validated — jump straight to implementation

## If the user has the full GSD system installed

If `gsd-spike` shows up as a sibling skill (installed via `npx get-shit-done-cc --hermes`), prefer **`gsd-spike`** when the user wants the full GSD workflow: persistent `.planning/spikes/` state, MANIFEST tracking across sessions, Given/When/Then verdict format, and commit patterns that integrate with the rest of GSD. This skill is the lightweight standalone version for users who don't have (or don't want) the full system.

## Core method

Regardless of scale, every spike follows this loop:

```
decompose → research → build → verdict
    ↑__________________________________↓
            iterate on findings
```

### 1. Decompose

Break the user's idea into **2-5 independent feasibility questions**. Each question is one spike. Present them as a table with Given/When/Then framing:

| # | Spike | Validates (Given/When/Then) | Risk |
|---|-------|-----------------------------|------|
| 001 | websocket-streaming | Given a WS connection, when LLM streams tokens, then client receives chunks < 100ms | High |
| 002a | pdf-parse-pdfjs | Given a multi-page PDF, when parsed with pdfjs, then structured text is extractable | Medium |
| 002b | pdf-parse-camelot | Given a multi-page PDF, when parsed with camelot, then structured text is extractable | Medium |

**Spike types:**

- **standard** — one approach answering one question
- **comparison** — same question, different approaches (shared number, letter suffix `a`/`b`/`c`)

**Good spike questions:** specific feasibility with observable output.
**Bad spike questions:** too broad, no observable output, or just "read the docs about X".

**Order by risk.** The spike most likely to kill the idea runs first. No point prototyping the easy parts if the hard part doesn't work.

**Skip decomposition** only if the user already knows exactly what they want to spike and says so. Then take their idea as a single spike.

### 2. Align (for multi-spike ideas)

Present the spike table. Ask: "Build all in this order, or adjust?" Let the user drop, reorder, or re-frame before you write any code.

### 3. Research (per spike, before building)

Spikes are not research-free — you research enough to pick the right approach, then you build. Per spike:

1. **Brief it.** 2-3 sentences: what this spike is, why it matters, key risk.
2. **Surface competing approaches** if there's a real choice:

   | Approach | Tool/Library | Pros | Cons | Status |
   |----------|--------------|------|------|--------|
   | ... | ... | ... | ... | maintained / abandoned / beta |

3. **Pick one.** State why. If 2+ are credible, build quick variants within the spike.
4. **Skip research** for pure logic with no external dependencies.

Use Hermes tools for the research step:

- `web_search("python websocket streaming libraries 2025")` — find candidates
- `web_extract(urls=["https://websockets.readthedocs.io/..."])` — read the actual docs (returns markdown)
- `terminal("pip show websockets | grep Version")` — check what's installed in the project's venv

For libraries without docs pages, clone and read their `README.md` / `examples/` via `read_file`. Context7 MCP (if the user has it configured) is also a good source — `mcp_*_resolve-library-id` then `mcp_*_query-docs`.

### 4. Build

One directory per spike. Keep it standalone.

```
spikes/
├── 001-websocket-streaming/
│   ├── README.md
│   └── main.py
├── 002a-pdf-parse-pdfjs/
│   ├── README.md
│   └── parse.js
└── 002b-pdf-parse-camelot/
    ├── README.md
    └── parse.py
```

**Bias toward something the user can interact with.** Spikes fail when the only output is a log line that says "it works." The user wants to *feel* the spike working. Default choices, in order of preference:

1. A runnable CLI that takes input and prints observable output
2. A minimal HTML page that demonstrates the behavior
3. A small web server with one endpoint
4. A unit test that exercises the question with recognizable assertions
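
As an illustration of option 1, here is a hedged sketch of a stdlib-only spike CLI — the slug-generation question and the `slugify` helper are hypothetical, not part of this skill's own examples:

```python
# spikes/003-slugify/main.py -- hypothetical spike: can the stdlib alone
# turn messy unicode titles into URL-safe slugs?
import re
import sys
import unicodedata

def slugify(title: str) -> str:
    # Strip accents, lowercase, collapse everything else to single dashes.
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")

if __name__ == "__main__":
    # Observable output: pass titles as arguments, watch the slugs.
    for title in sys.argv[1:] or ["Héllo, Wörld!  (v2.0)"]:
        print(f"{title!r} -> {slugify(title)!r}")
```

Note the hardcoded default input: a bare `python3 main.py` still prints something observable, in line with the "hardcode everything" rule below.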

**Depth over speed.** Never declare "it works" after one happy-path run. Test edge cases. Follow surprising findings. The verdict is only trustworthy when the investigation was honest.

**Avoid** unless the spike specifically requires it: complex package management, build tools/bundlers, Docker, env files, config systems. Hardcode everything — it's a spike.

**Building one spike** — a typical tool sequence:

```
terminal("mkdir -p spikes/001-websocket-streaming")
write_file("spikes/001-websocket-streaming/README.md", "# 001: websocket-streaming\n\n...")
write_file("spikes/001-websocket-streaming/main.py", "...")
terminal("cd spikes/001-websocket-streaming && python3 main.py")
# Observe output, iterate.
```
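
For spike 001 itself, a minimal sketch of what `main.py` might contain (an assumption, not from the original skill): plain asyncio TCP streams stand in for a real websocket transport, and hardcoded tokens stand in for the LLM, which is still enough to exercise the chunk-latency question:

```python
# spikes/001-websocket-streaming/main.py -- sketch only: asyncio TCP streams
# stand in for a WS transport, a fake token loop stands in for the LLM.
import asyncio
import time

async def fake_llm_stream(writer: asyncio.StreamWriter) -> None:
    # Emit tokens with a small delay, like an LLM streaming chunks.
    for token in ["The", " answer", " is", " 42."]:
        writer.write(token.encode())
        await writer.drain()
        await asyncio.sleep(0.01)
    writer.close()

async def main() -> list[float]:
    server = await asyncio.start_server(
        lambda r, w: fake_llm_stream(w), "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, _ = await asyncio.open_connection("127.0.0.1", port)
    gaps, last = [], time.monotonic()
    while chunk := await reader.read(64):
        now = time.monotonic()
        gaps.append(now - last)
        last = now
        print(f"chunk={chunk!r} gap={gaps[-1] * 1000:.1f}ms")
    server.close()
    return gaps

gaps = asyncio.run(main())
# The spike question: does the client see chunks < 100ms apart?
print("VALIDATED" if gaps and all(g < 0.1 for g in gaps) else "INVALIDATED")
```

The observable output is the per-chunk gap in milliseconds — the user can *feel* the streaming, and the printed verdict maps straight onto the README format below.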

**Parallel comparison spikes (002a / 002b) — delegate.** When two approaches can run in parallel and both need real engineering (not 10-line prototypes), fan out with `delegate_task`:

```
delegate_task(tasks=[
  {"goal": "Build 002a-pdf-parse-pdfjs: ...", "toolsets": ["terminal", "file", "web"]},
  {"goal": "Build 002b-pdf-parse-camelot: ...", "toolsets": ["terminal", "file", "web"]},
])
```

Each subagent returns its own verdict; you write the head-to-head.

### 5. Verdict

Each spike's `README.md` closes with:

```markdown
## Verdict: VALIDATED | PARTIAL | INVALIDATED

### What worked
- ...

### What didn't
- ...

### Surprises
- ...

### Recommendation for the real build
- ...
```

**VALIDATED** = the core question was answered yes, with evidence.
**PARTIAL** = it works under constraints X, Y, Z — document them.
**INVALIDATED** = doesn't work, for this reason. This is a successful spike.

## Comparison spikes

When two approaches answer the same question (002a / 002b), build them **back to back**, then do a head-to-head comparison at the end:

```markdown
## Head-to-head: pdfjs vs camelot

| Dimension | pdfjs (002a) | camelot (002b) |
|-----------|--------------|----------------|
| Extraction quality | 9/10 structured | 7/10 table-only |
| Setup complexity | npm install, 1 line | pip + ghostscript |
| Perf on 100-page PDF | 3s | 18s |
| Handles rotated text | no | yes |

**Winner:** pdfjs for our use case. Camelot if we need table-first extraction later.
```
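
Numbers like the perf row are only comparable if both variants run through the same harness on the same input. A minimal sketch — `parse_pdfjs` and `parse_camelot` here are hypothetical stand-ins, not real bindings; actual comparison spikes would invoke each variant's own entry point:

```python
# Head-to-head timing harness -- the two parse_* functions are hypothetical
# stand-ins; real spikes would shell out to 002a (node) and 002b (python).
import time

def time_once(fn, *args):
    # Run fn once and return (result, elapsed seconds).
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def parse_pdfjs(pages: int) -> list[str]:
    return ["page text"] * pages      # stand-in for the 002a variant

def parse_camelot(pages: int) -> list[str]:
    return ["page text"] * pages      # stand-in for the 002b variant

for name, fn in [("002a pdfjs", parse_pdfjs), ("002b camelot", parse_camelot)]:
    result, secs = time_once(fn, 100)
    print(f"{name}: {len(result)} pages in {secs * 1000:.2f}ms")
```

Same input, same metric, one run — so the table's numbers are measured rather than remembered.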

## Frontier mode (picking what to spike next)

If spikes already exist and the user says "what should I spike next?", walk the existing directories and look for:

- **Integration risks** — two validated spikes that touch the same resource but were tested independently
- **Data handoffs** — spike A's output was assumed compatible with spike B's input; never proven
- **Gaps in the vision** — capabilities assumed but unproven
- **Alternative approaches** — different angles for PARTIAL or INVALIDATED spikes

Propose 2-4 candidates as Given/When/Then. Let the user pick.
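
The directory walk can start mechanically. A small sketch, assuming only the `## Verdict:` line convention from the verdict template — not a tool this skill ships:

```python
# Sketch: collect recorded verdicts from spikes/*/README.md so the
# "what next?" discussion starts from actual state, not memory.
from pathlib import Path

def collect_verdicts(root: str = "spikes") -> dict[str, str]:
    verdicts = {}
    for readme in sorted(Path(root).glob("*/README.md")):
        for line in readme.read_text().splitlines():
            if line.startswith("## Verdict:"):
                # e.g. "## Verdict: VALIDATED" -> "VALIDATED"
                verdicts[readme.parent.name] = line.split(":", 1)[1].strip()
    return verdicts

if __name__ == "__main__":
    for spike, verdict in collect_verdicts().items():
        print(f"{spike}: {verdict}")
```

PARTIAL and INVALIDATED entries in the output are the natural first candidates for alternative-approach spikes.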

## Output

- Create `spikes/` (or `.planning/spikes/` if the user is using GSD conventions) in the repo root
- One dir per spike: `NNN-descriptive-name/`
- `README.md` per spike captures question, approach, results, verdict
- Keep the code throwaway — a spike that takes 2 days to "clean up for production" was a bad spike

## Attribution

Adapted from the GSD (Get Shit Done) project's `/gsd-spike` workflow — MIT © 2025 Lex Christopherson ([gsd-build/get-shit-done](https://github.com/gsd-build/get-shit-done)). The full GSD system offers persistent spike state, MANIFEST tracking, and integration with a broader spec-driven development pipeline; install with `npx get-shit-done-cc --hermes --global`.