Fixes #6672

Memory providers now receive on_session_switch() whenever AIAgent.session_id rotates mid-process — /resume, /branch, /reset, /new, and context compression. Before this, providers that cached per-session state in initialize() (Hindsight's _session_id, _document_id, accumulated _session_turns, _turn_counter) kept writing into the old session's record after the agent had moved on.

MemoryProvider ABC
------------------
- New optional hook on_session_switch(new_session_id, *, parent_session_id='', reset=False, **kwargs) with a no-op default for backward compat.
- reset=True signals /reset or /new — providers should flush accumulated per-session buffers. reset=False covers /resume, /branch, and compression, where the logical conversation continues.

MemoryManager
-------------
- on_session_switch() fans the hook out to every registered provider, with an isolated try/except per provider — one bad provider can't block the others.
- Empty/None new_session_id is a no-op, to avoid corrupting provider state during shutdown paths.

run_agent.py
------------
- _sync_external_memory_for_turn now passes session_id=self.session_id into sync_all() and queue_prefetch_all(). Providers with defensive session_id updates in sync_turn (Hindsight already had this at plugins/memory/hindsight/__init__.py:1199) now actually receive the current id.
- The compression block at ~L8884 already notified the context engine of the rollover; it now also calls _memory_manager.on_session_switch(reason='compression').

cli.py
------
- new_session() fires reset=True, reason='new_session' so providers flush their buffers.
- _handle_resume_command fires reset=False, reason='resume' with the previous session as parent_session_id.
- _handle_branch_command fires reset=False, reason='branch' with the parent session_id already captured for the DB parent link.

gateway/run.py
--------------
- _handle_resume_command now evicts the cached AIAgent, mirroring /branch and /reset. The next message rebuilds a fresh agent whose memory provider initialize() runs with the correct session_id — this matches the pattern the gateway already uses for cross-session provider state transitions.

Hindsight reference implementation
----------------------------------
- plugins/memory/hindsight/__init__.py adds an on_session_switch that updates _session_id, mints a fresh _document_id (prevents the vectorize-io/hindsight#1303 overwrite), and clears _session_turns / _turn_counter / _turn_index so in-flight batches don't flush under the new document id. parent_session_id is only overwritten when provided (avoids clobbering on a bare switch).

Tests
-----
- tests/agent/test_memory_session_switch.py: new dedicated file. Covers the ABC default no-op, manager fan-out, failure isolation, the empty-id no-op, session_id propagation through sync_all/queue_prefetch_all, Hindsight state transitions for every reset/non-reset case, and parent preservation.
- tests/cli/test_branch_command.py: new test verifying /branch fires the hook with the correct parent_session_id, reset=False, and reason.
- tests/gateway/test_resume_command.py: new test verifying /resume evicts the cached agent.
- tests/run_agent/test_memory_sync_interrupted.py: existing assertions updated to account for the session_id kwarg on sync_all and queue_prefetch_all.

E2E verified (real imports, tmp HERMES_HOME):
- /resume: session_id updates, doc_id fresh, buffers cleared, parent set
- /branch: session_id forks, parent links to original
- /new: reset=True clears accumulated state
- compression: reason='compression' propagated, lineage preserved
- Empty id: no-op, state preserved
- Legacy provider without on_session_switch: no crash

Reported by @nicoloboschi (Hindsight maintainer); related scope-widening comment by @kidonng extending coverage to compression.
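The ABC hook and the manager fan-out described above can be sketched roughly as follows. This is a minimal sketch, not the Hermes source: everything except on_session_switch and its documented keyword arguments (the class names, the providers list, the logging) is an assumption for illustration.

```python
import logging
from abc import ABC

logger = logging.getLogger(__name__)


class MemoryProvider(ABC):
    """Illustrative stand-in for the MemoryProvider ABC."""

    def on_session_switch(self, new_session_id: str, *,
                          parent_session_id: str = "",
                          reset: bool = False, **kwargs) -> None:
        """No-op default: legacy providers that never override this keep working."""


class MemoryManager:
    """Illustrative stand-in for the manager's fan-out."""

    def __init__(self) -> None:
        self.providers: list[MemoryProvider] = []

    def on_session_switch(self, new_session_id: str, *,
                          parent_session_id: str = "",
                          reset: bool = False, **kwargs) -> None:
        # Empty/None ids show up on shutdown paths; bail out rather than
        # corrupt provider state with a bogus session id.
        if not new_session_id:
            return
        for provider in self.providers:
            # Isolated per provider: one bad provider can't block the others.
            try:
                provider.on_session_switch(
                    new_session_id,
                    parent_session_id=parent_session_id,
                    reset=reset,
                    **kwargs,
                )
            except Exception:
                logger.exception("on_session_switch failed for %r", provider)
```

A provider that needs the hook overrides it and inspects reset; anything that ignores **kwargs (such as the reason value) keeps working unchanged.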
# Hindsight Memory Provider
Long-term memory with knowledge graph, entity resolution, and multi-strategy retrieval. Supports cloud, local embedded, and local external modes.
## Requirements
- Cloud: API key from ui.hindsight.vectorize.io
- Local Embedded: API key for a supported LLM provider (OpenAI, Anthropic, Gemini, Groq, OpenRouter, MiniMax, Ollama, or any OpenAI-compatible endpoint). Embeddings and reranking run locally — no additional API keys needed.
- Local External: A running Hindsight instance (Docker or self-hosted) reachable over HTTP.
## Setup

```sh
hermes memory setup   # select "hindsight"
```

The setup wizard will install dependencies automatically via uv and walk you through configuration.

Or manually (cloud mode with defaults):

```sh
hermes config set memory.provider hindsight
echo "HINDSIGHT_API_KEY=your-key" >> ~/.hermes/.env
```
## Cloud
Connects to the Hindsight Cloud API. Requires an API key from ui.hindsight.vectorize.io.
## Local Embedded
Hermes spins up a local Hindsight daemon with built-in PostgreSQL. Requires an LLM API key for memory extraction and synthesis. The daemon starts automatically in the background on first use and stops after 5 minutes of inactivity.
Supports any OpenAI-compatible LLM endpoint (llama.cpp, vLLM, LM Studio, etc.) — pick openai_compatible as the provider and enter the base URL.
- Daemon startup logs: `~/.hermes/logs/hindsight-embed.log`
- Daemon runtime logs: `~/.hindsight/profiles/<profile>.log`

To open the Hindsight web UI (local embedded mode only):

```sh
hindsight-embed -p hermes ui start
```
## Local External
Points the plugin at an existing Hindsight instance you're already running (Docker, self-hosted, etc.). No daemon management — just a URL and an optional API key.
## Config

Config file: `~/.hermes/hindsight/config.json`
### Connection

| Key | Default | Description |
|---|---|---|
| `mode` | `cloud` | `cloud`, `local_embedded`, or `local_external` |
| `api_url` | `https://api.hindsight.vectorize.io` | API URL (cloud and local_external modes) |
### Memory Bank

| Key | Default | Description |
|---|---|---|
| `bank_id` | `hermes` | Memory bank name (static fallback used when `bank_id_template` is unset or resolves empty) |
| `bank_id_template` | — | Optional template to derive the bank name dynamically. Placeholders: `{profile}`, `{workspace}`, `{platform}`, `{user}`, `{session}`. Example: `hermes-{profile}` isolates memory per active Hermes profile. Empty placeholders collapse cleanly (e.g. `hermes-{user}` with no user becomes `hermes`). |
| `bank_mission` | — | Reflect mission (identity/framing for reflect reasoning). Applied via the Banks API. |
| `bank_retain_mission` | — | Retain mission (steers what gets extracted). Applied via the Banks API. |
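For example, a config that isolates memory per Hermes profile might look like this. This is a sketch under two assumptions: that the keys documented above map one-to-one onto flat JSON fields, and the `bank_mission` text is purely illustrative.

```json
{
  "mode": "cloud",
  "bank_id": "hermes",
  "bank_id_template": "hermes-{profile}",
  "bank_mission": "Long-term memory for a personal coding agent."
}
```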
### Recall

| Key | Default | Description |
|---|---|---|
| `recall_budget` | `mid` | Recall thoroughness: `low` / `mid` / `high` |
| `recall_prefetch_method` | `recall` | Auto-recall method: `recall` (raw facts) or `reflect` (LLM synthesis) |
| `recall_max_tokens` | `4096` | Maximum tokens for recall results |
| `recall_max_input_chars` | `800` | Maximum input query length for auto-recall |
| `recall_prompt_preamble` | — | Custom preamble for recalled memories in context |
| `recall_tags` | — | Tags to filter by when searching memories |
| `recall_tags_match` | `any` | Tag matching mode: `any` / `all` / `any_strict` / `all_strict` |
| `auto_recall` | `true` | Automatically recall memories before each turn |
### Retain

| Key | Default | Description |
|---|---|---|
| `auto_retain` | `true` | Automatically retain conversation turns |
| `retain_async` | `true` | Process retain asynchronously on the Hindsight server |
| `retain_every_n_turns` | `1` | Retain every N turns (1 = every turn) |
| `retain_context` | `conversation between Hermes Agent and the User` | Context label for retained memories |
| `retain_tags` | — | Default tags applied to retained memories; merged with per-call tool tags |
| `retain_source` | — | Optional `metadata.source` attached to retained memories |
| `retain_user_prefix` | `User` | Label used before user turns in auto-retained transcripts |
| `retain_assistant_prefix` | `Assistant` | Label used before assistant turns in auto-retained transcripts |
### Integration

| Key | Default | Description |
|---|---|---|
| `memory_mode` | `hybrid` | How memories are integrated into the agent |

`memory_mode` values:

- `hybrid` — automatic context injection + tools available to the LLM
- `context` — automatic injection only, no tools exposed
- `tools` — tools only, no automatic injection
### Local Embedded LLM

| Key | Default | Description |
|---|---|---|
| `llm_provider` | `openai` | `openai`, `anthropic`, `gemini`, `groq`, `openrouter`, `minimax`, `ollama`, `lmstudio`, `openai_compatible` |
| `llm_model` | per-provider | Model name (e.g. `gpt-4o-mini`, `qwen/qwen3.5-9b`) |
| `llm_base_url` | — | Endpoint URL for `openai_compatible` (e.g. `http://192.168.1.10:8080/v1`) |

The LLM API key is stored in `~/.hermes/.env` as `HINDSIGHT_LLM_API_KEY`.
## Tools

Available in `hybrid` and `tools` memory modes:

| Tool | Description |
|---|---|
| `hindsight_retain` | Store information with auto entity extraction; supports optional per-call tags |
| `hindsight_recall` | Multi-strategy search (semantic + entity graph) |
| `hindsight_reflect` | Cross-memory synthesis (LLM-powered) |
## Environment Variables

| Variable | Description |
|---|---|
| `HINDSIGHT_API_KEY` | API key for Hindsight Cloud |
| `HINDSIGHT_LLM_API_KEY` | LLM API key for local mode |
| `HINDSIGHT_API_LLM_BASE_URL` | LLM base URL for local mode (e.g. OpenRouter) |
| `HINDSIGHT_API_URL` | Override the API endpoint |
| `HINDSIGHT_BANK_ID` | Override the bank name |
| `HINDSIGHT_BUDGET` | Override the recall budget |
| `HINDSIGHT_MODE` | Override the mode (`cloud`, `local_embedded`, `local_external`) |
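As a sketch, pointing the plugin at a self-hosted instance for a single shell session might look like the following. The URL and bank name are placeholders, not defaults.

```shell
# Override config.json values for this shell session only.
export HINDSIGHT_MODE=local_external
export HINDSIGHT_API_URL=http://localhost:8888   # your running Hindsight instance
export HINDSIGHT_BANK_ID=hermes-dev              # keep experiments out of the main bank
```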
## Client Version

Requires `hindsight-client` >= 0.4.22. The plugin auto-upgrades on session start if an older version is detected.