mirror of https://github.com/NousResearch/hermes-agent.git
synced 2026-04-25 00:51:20 +00:00

docs(honcho): wizard cadence default 2, prewarm/depth + observation + multi-peer

- cli: setup wizard pre-fills dialecticCadence=2 (code default stays 1 so unset → every turn)
- honcho.md: fix stale dialecticCadence default in tables, add Session-Start Prewarm subsection (depth runs at init), add Query-Adaptive Reasoning Level subsection, expand Observation section with directional vs unified semantics and per-peer patterns
- memory-providers.md: fix stale default, rename Multi-agent/Profiles to Multi-peer setup, add concrete walkthrough for new profiles and sync, document observation toggles + presets, link to honcho.md
- SKILL.md: fix stale defaults, add Depth at session start callout

This commit is contained in:
parent 5f9907c116
commit 098efde848

4 changed files with 103 additions and 17 deletions

@@ -77,7 +77,7 @@ Cost and depth are controlled by three independent knobs:

| Knob | Controls | Default |
|------|----------|---------|
| `contextCadence` | Turns between `context()` API calls (base layer refresh) | `1` |
-| `dialecticCadence` | Turns between `peer.chat()` LLM calls (dialectic layer refresh) | `3` |
+| `dialecticCadence` | Turns between `peer.chat()` LLM calls (dialectic layer refresh) | `1` (code default) / `2` (setup wizard default) |
| `dialecticDepth` | Number of `.chat()` passes per dialectic invocation (1–3) | `1` |

These are orthogonal — you can have frequent context refreshes with infrequent dialectic, or deep multi-pass dialectic at low frequency. Example: `contextCadence: 1, dialecticCadence: 5, dialecticDepth: 2` refreshes base context every turn, runs dialectic every 5 turns, and each dialectic run makes 2 passes.
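
That scheduling can be sketched in a few lines. This is a minimal illustration, assuming a cadence of N means "fire on every Nth turn"; the function names are hypothetical, not the Hermes API:

```python
def fires(turn: int, cadence: int) -> bool:
    # A layer refreshes on turns that are multiples of its cadence.
    return turn % cadence == 0

def calls_for_turn(turn: int, context_cadence: int = 1,
                   dialectic_cadence: int = 5) -> list[str]:
    calls = []
    if fires(turn, context_cadence):
        calls.append("context")        # base layer: context() API call
    if fires(turn, dialectic_cadence):
        calls.append("dialectic")      # dialectic layer: peer.chat() LLM call
    return calls

# With contextCadence=1, dialecticCadence=5: context fires every turn,
# dialectic only on turns 5, 10, 15, ...
schedule = {t: calls_for_turn(t) for t in range(1, 11)}
```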

@@ -94,6 +94,14 @@ Each pass uses a proportional reasoning level (lighter early passes, base level

Passes bail out early if the prior pass returned strong signal (long, structured output), so depth 3 doesn't always mean 3 LLM calls.
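
The multi-pass loop with early bail-out could look roughly like this. A sketch only: the `strong_signal` heuristic and function names are assumptions, not the actual implementation:

```python
from typing import Callable

def strong_signal(output: str) -> bool:
    # Hypothetical heuristic: long, structured (multi-line) output counts
    # as strong enough to skip the remaining passes.
    return len(output) > 400 and "\n" in output

def run_dialectic(chat: Callable[[str], str], depth: int,
                  levels: list[str]) -> list[str]:
    outputs = []
    for level in levels[:depth]:       # lighter levels for early passes
        out = chat(level)
        outputs.append(out)
        if strong_signal(out):         # bail out: prior pass was conclusive
            break
    return outputs

# Depth 3 with a strong first pass makes only one LLM call.
strong = run_dialectic(lambda lvl: "fact\n" * 200, 3, ["minimal", "low", "medium"])
weak = run_dialectic(lambda lvl: "thin answer", 3, ["minimal", "low", "medium"])
```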

### Session-Start Prewarm

On session init, Honcho fires a dialectic call in the background at the full configured `dialecticDepth` and hands the result directly to turn 1's context assembly. A single-pass prewarm on a cold peer often returns thin output — multi-pass depth runs the audit/reconcile cycle before the user ever speaks. If the prewarm hasn't landed by turn 1, that turn falls back to a synchronous call with a bounded timeout.
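
The prewarm-with-fallback shape can be sketched with asyncio stand-ins. The timings, names, and return values here are illustrative, not the real code:

```python
import asyncio

async def deep_dialectic(depth: int) -> str:
    await asyncio.sleep(0.05)          # stands in for a multi-pass dialectic
    return f"prewarmed(depth={depth})"

async def shallow_dialectic() -> str:
    return "fallback(depth=1)"         # quick single-pass fallback

async def session() -> str:
    # Fired in the background at session init, at full configured depth.
    prewarm = asyncio.create_task(deep_dialectic(depth=3))
    # Turn 1: use the prewarm result if it lands within the timeout,
    # otherwise fall back to the bounded shallow call.
    try:
        return await asyncio.wait_for(asyncio.shield(prewarm), timeout=0.5)
    except asyncio.TimeoutError:
        return await shallow_dialectic()

result = asyncio.run(session())
```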

### Query-Adaptive Reasoning Level

The auto-injected dialectic scales `dialecticReasoningLevel` by query length: +1 level at ≥120 chars, +2 at ≥400, clamped at `reasoningLevelCap` (default `"high"`). Disable with `reasoningHeuristic: false` to pin every auto call to `dialecticReasoningLevel`. `"max"` is reserved for explicit tool-path selection via `honcho_reasoning`.
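
The thresholds above translate directly into a small helper. A sketch, assuming levels are ordered `minimal < low < medium < high < max` as in the config table:

```python
LEVELS = ["minimal", "low", "medium", "high", "max"]

def auto_reasoning_level(query: str, base: str = "low",
                         cap: str = "high") -> str:
    # +1 level for queries >= 120 chars, +2 for >= 400, never past the cap.
    bump = 2 if len(query) >= 400 else 1 if len(query) >= 120 else 0
    idx = min(LEVELS.index(base) + bump, LEVELS.index(cap))
    return LEVELS[idx]
```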

## Configuration Options

Honcho is configured in `~/.honcho/config.json` (global) or `$HERMES_HOME/honcho.json` (profile-local). The setup wizard handles this for you.

@@ -104,7 +112,7 @@ Honcho is configured in `~/.honcho/config.json` (global) or `$HERMES_HOME/honcho

|-----|---------|-------------|
| `contextTokens` | `null` (uncapped) | Token budget for auto-injected context per turn. Set to an integer (e.g. 1200) to cap. Truncates at word boundaries |
| `contextCadence` | `1` | Minimum turns between `context()` API calls (base layer refresh) |
-| `dialecticCadence` | `3` | Minimum turns between `peer.chat()` LLM calls (dialectic layer). In `tools` mode, irrelevant — model calls explicitly |
+| `dialecticCadence` | `1` (wizard sets `2`) | Minimum turns between `peer.chat()` LLM calls (dialectic layer). Code default fires every turn when the key is unset; the setup wizard pre-fills `2`. In `tools` mode, irrelevant — model calls explicitly |
| `dialecticDepth` | `1` | Number of `.chat()` passes per dialectic invocation. Clamped to 1–3 |
| `dialecticDepthLevels` | `null` | Optional array of reasoning levels per pass, e.g. `["minimal", "low", "medium"]`. Overrides proportional defaults |
| `dialecticReasoningLevel` | `'low'` | Base reasoning level: `minimal`, `low`, `medium`, `high`, `max` |

@@ -142,6 +150,41 @@ Honcho is configured in `~/.honcho/config.json` (global) or `$HERMES_HOME/honcho

In `tools` mode, the model is fully in control — it calls `honcho_reasoning` when it wants, at whatever `reasoning_level` it picks. Cadence and budget settings only apply to modes with auto-injection (`hybrid` and `context`).

## Observation (Directional vs. Unified)

Honcho models a conversation as peers exchanging messages. Each peer has two observation toggles that map 1:1 to Honcho's `SessionPeerConfig`:

| Toggle | Effect |
|--------|--------|
| `observeMe` | Honcho builds a representation of this peer from its own messages |
| `observeOthers` | This peer observes the other peer's messages (feeds cross-peer reasoning) |

Two peers × two toggles = four flags. `observationMode` is a shorthand preset:

| Preset | User flags | AI flags | Semantics |
|--------|-----------|----------|-----------|
| `"directional"` (default) | me: on, others: on | me: on, others: on | Full mutual observation. Enables cross-peer dialectic — "what does the AI know about the user, based on what the user said and the AI replied." |
| `"unified"` | me: on, others: off | me: off, others: on | Shared-pool semantics — the AI observes the user's messages only, the user peer only self-models. Single-observer pool. |
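
The two presets expand to the four per-peer flags as in this sketch (the flag names match the tables above; the function itself is illustrative, not part of the Hermes codebase):

```python
def expand_observation_mode(mode: str) -> dict:
    # Expand an observationMode preset into the four per-peer flags.
    if mode == "directional":          # full mutual observation (default)
        return {
            "user": {"observeMe": True, "observeOthers": True},
            "ai": {"observeMe": True, "observeOthers": True},
        }
    if mode == "unified":              # single-observer pool
        return {
            "user": {"observeMe": True, "observeOthers": False},
            "ai": {"observeMe": False, "observeOthers": True},
        }
    raise ValueError(f"unknown observationMode: {mode}")
```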

Override the preset with an explicit `observation` block for per-peer control:

```json
"observation": {
  "user": { "observeMe": true, "observeOthers": true },
  "ai": { "observeMe": true, "observeOthers": false }
}
```

Common patterns:

| Intent | Config |
|--------|--------|
| Full observation (most users) | `"observationMode": "directional"` |
| AI shouldn't re-model the user from its own replies | `"ai": {"observeMe": true, "observeOthers": false}` |
| Strong persona the AI peer shouldn't update from self-observation | `"ai": {"observeMe": false, "observeOthers": true}` |

Server-side toggles set via the Honcho dashboard win over local defaults — Hermes syncs them back at session init.

## Tools

When Honcho is active as the memory provider, five tools become available:
|
||||
|
|
|
|||
Loading…
Add table
Add a link
Reference in a new issue