feat(memory): pluggable memory provider interface with profile isolation, review fixes, and honcho CLI restoration (#4623)

* feat(memory): add pluggable memory provider interface with profile isolation

Introduces a pluggable MemoryProvider ABC so external memory backends can
integrate with Hermes without modifying core files. Each backend becomes a
plugin implementing a standard interface, orchestrated by MemoryManager.

Key architecture:
- agent/memory_provider.py — ABC with core + optional lifecycle hooks
- agent/memory_manager.py — single integration point in the agent loop
- agent/builtin_memory_provider.py — wraps existing MEMORY.md/USER.md
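The hook surface can be sketched roughly as follows. Hook names are taken from the lifecycle hooks mentioned throughout this PR, but the signatures here are simplified assumptions, not the actual ABC:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional


class MemoryProvider(ABC):
    """Sketch of the provider contract (simplified, illustrative signatures)."""

    @abstractmethod
    def is_available(self) -> bool:
        """Cheap availability check (env vars only, no network calls)."""

    @abstractmethod
    def initialize(self, **kwargs: Any) -> None:
        """Receives hermes_home and session context from MemoryManager."""

    # Optional lifecycle hooks: default no-ops, so providers implement
    # only what they need.
    def prefetch(self, query: str, **kwargs: Any) -> Optional[str]:
        return None

    def sync_turn(self, user_msg: str, assistant_msg: str, **kwargs: Any) -> None:
        pass

    def on_memory_write(self, content: str, **kwargs: Any) -> None:
        pass

    def on_pre_compress(self, messages: List[dict], **kwargs: Any) -> str:
        return ""

    def on_session_end(self) -> None:
        pass

    def get_tool_schemas(self) -> List[Dict[str, Any]]:
        return []
```

A backend only overrides the hooks it supports; MemoryManager calls the rest as harmless no-ops.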

Profile isolation fixes applied to all 6 shipped plugins:
- Cognitive Memory: use get_hermes_home() instead of raw env var
- Hindsight Memory: check $HERMES_HOME/hindsight/config.json first,
  fall back to legacy ~/.hindsight/ for backward compat
- Hermes Memory Store: replace hardcoded ~/.hermes paths with
  get_hermes_home() for config loading and DB path defaults
- Mem0 Memory: use get_hermes_home() instead of raw env var
- RetainDB Memory: auto-derive profile-scoped project name from
  hermes_home path (hermes-<profile>), explicit env var overrides
- OpenViking Memory: read-only, no local state, isolation via .env

MemoryManager.initialize_all() now injects hermes_home into kwargs so
every provider can resolve profile-scoped storage without importing
get_hermes_home() themselves.

Plugin system: adds register_memory_provider() to PluginContext and
get_plugin_memory_providers() accessor.

Based on PR #3825. 46 tests (37 unit + 5 E2E + 4 plugin registration).

* refactor(memory): drop cognitive plugin, rewrite OpenViking as full provider

Remove cognitive-memory plugin (#727) — core mechanics are broken:
decay runs 24x too fast (hourly not daily), prefetch uses row ID as
timestamp, search limited by importance not similarity.

Rewrite openviking-memory plugin from a read-only search wrapper into
a full bidirectional memory provider using the complete OpenViking
session lifecycle API:

- sync_turn: records user/assistant messages to OpenViking session
  (threaded, non-blocking)
- on_session_end: commits session to trigger automatic memory extraction
  into 6 categories (profile, preferences, entities, events, cases,
  patterns)
- prefetch: background semantic search via find() endpoint
- on_memory_write: mirrors built-in memory writes to the session
- is_available: checks env var only, no network calls (ABC compliance)

Tools expanded from 3 to 5:
- viking_search: semantic search with mode/scope/limit
- viking_read: tiered content (abstract ~100tok / overview ~2k / full)
- viking_browse: filesystem-style navigation (list/tree/stat)
- viking_remember: explicit memory storage via session
- viking_add_resource: ingest URLs/docs into knowledge base

Uses direct HTTP via httpx (no openviking SDK dependency needed).
Response truncation on viking_read to prevent context flooding.

* fix(memory): harden Mem0 plugin — thread safety, non-blocking sync, circuit breaker

- Remove redundant mem0_context tool (identical to mem0_search with
  rerank=true, top_k=5 — wastes a tool slot and confuses the model)
- Thread sync_turn so it's non-blocking — Mem0's server-side LLM
  extraction can take 5-10s, was stalling the agent after every turn
- Add threading.Lock around _get_client() for thread-safe lazy init
  (prefetch and sync threads could race on first client creation)
- Add circuit breaker: after 5 consecutive API failures, pause calls
  for 120s instead of hammering a down server every turn. Auto-resets
  after cooldown. Logs a warning when tripped.
- Track success/failure in prefetch, sync_turn, and all tool calls
- Wait for previous sync to finish before starting a new one (prevents
  unbounded thread accumulation on rapid turns)
- Clean up shutdown to join both prefetch and sync threads
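The circuit-breaker behavior described here (5 consecutive failures, 120s cooldown, auto-reset) can be sketched as a small helper. `CircuitBreaker` and its method names are illustrative, not the plugin's actual code:

```python
import threading
import time
from typing import Optional


class CircuitBreaker:
    """After `threshold` consecutive failures, block calls for `cooldown`
    seconds, then automatically allow another attempt."""

    def __init__(self, threshold: int = 5, cooldown: float = 120.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self._failures = 0
        self._opened_at: Optional[float] = None
        self._lock = threading.Lock()

    def allow(self) -> bool:
        with self._lock:
            if self._opened_at is None:
                return True
            if time.monotonic() - self._opened_at >= self.cooldown:
                # Cooldown elapsed: reset and permit a retry.
                self._opened_at = None
                self._failures = 0
                return True
            return False

    def record_success(self) -> None:
        with self._lock:
            self._failures = 0
            self._opened_at = None

    def record_failure(self) -> None:
        with self._lock:
            self._failures += 1
            if self._failures >= self.threshold:
                self._opened_at = time.monotonic()
```

Each API call site checks `allow()` first and reports the outcome via `record_success()`/`record_failure()`, so a down server costs one warning instead of a stall every turn.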

* fix(memory): enforce single external memory provider limit

MemoryManager now rejects a second non-builtin provider with a warning.
Built-in memory (MEMORY.md/USER.md) is always accepted. Only ONE
external plugin provider is allowed at a time. This prevents tool
schema bloat (some providers add 3-5 tools each) and conflicting
memory backends.

The warning message directs users to configure memory.provider in
config.yaml to select which provider to activate.

Updated all 47 tests to use builtin + one external pattern instead
of multiple externals. Added test_second_external_rejected to verify
the enforcement.
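A minimal sketch of the enforcement rule, assuming a hypothetical `add_provider()` method (the real MemoryManager API may differ):

```python
import logging

logger = logging.getLogger(__name__)


class MemoryManager:
    """Illustrative sketch: built-in always accepted, one external max."""

    def __init__(self):
        self.builtin = None   # MEMORY.md/USER.md provider
        self.external = None  # at most one plugin provider

    def add_provider(self, provider, builtin=False):
        if builtin:
            self.builtin = provider
            return True
        if self.external is not None:
            logger.warning(
                "Only one external memory provider is allowed; set "
                "memory.provider in config.yaml to choose which one."
            )
            return False
        self.external = provider
        return True
```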

* feat(memory): add ByteRover memory provider plugin

Implements the ByteRover integration (from PR #3499 by hieuntg81) as a
MemoryProvider plugin instead of direct run_agent.py modifications.

ByteRover provides persistent memory via the brv CLI — a hierarchical
knowledge tree with tiered retrieval (fuzzy text then LLM-driven search).
Local-first with optional cloud sync.

Plugin capabilities:
- prefetch: background brv query for relevant context
- sync_turn: curate conversation turns (threaded, non-blocking)
- on_memory_write: mirror built-in memory writes to brv
- on_pre_compress: extract insights before context compression

Tools (3):
- brv_query: search the knowledge tree
- brv_curate: store facts/decisions/patterns
- brv_status: check CLI version and context tree state

Profile isolation: working directory at $HERMES_HOME/byterover/ (scoped
per profile). Binary resolution cached with thread-safe double-checked
locking. All write operations threaded to avoid blocking the agent
(curate can take 120s with LLM processing).

* fix(memory): thread remaining sync_turns, fix holographic, add config key

Plugin fixes:
- Hindsight: thread sync_turn (was blocking up to 30s via _run_in_thread)
- RetainDB: thread sync_turn (was blocking on HTTP POST)
- Both: shutdown now joins sync threads alongside prefetch threads

Holographic retrieval fixes:
- reason(): removed dead intersection_key computation (bundled but never
  used in scoring). Now reuses pre-computed entity_residuals directly,
  moved role_content encoding outside the inner loop.
- contradict(): added _MAX_CONTRADICT_FACTS=500 scaling guard. Above
  500 facts, only checks the most recently updated ones to avoid O(n^2)
  explosion (~125K comparisons at 500 is acceptable).

Config:
- Added memory.provider key to DEFAULT_CONFIG ("" = builtin only).
  No version bump needed (deep_merge handles new keys automatically).
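For context, a typical deep merge behaves like the sketch below, which is why a newly added default key such as `memory.provider` shows up without a config version bump. This is an assumed implementation, not the repo's actual `deep_merge`:

```python
def deep_merge(defaults: dict, user: dict) -> dict:
    """Recursively overlay user config on defaults. Keys present only in
    defaults (e.g. a newly added memory.provider) survive automatically."""
    merged = dict(defaults)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```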

* feat(memory): extract Honcho as a MemoryProvider plugin

Creates plugins/honcho-memory/ as a thin adapter over the existing
honcho_integration/ package. All 4 Honcho tools (profile, search,
context, conclude) move from the normal tool registry to the
MemoryProvider interface.

The plugin delegates all work to HonchoSessionManager — no Honcho
logic is reimplemented. It uses the existing config chain:
$HERMES_HOME/honcho.json -> ~/.honcho/config.json -> env vars.

Lifecycle hooks:
- initialize: creates HonchoSessionManager via existing client factory
- prefetch: background dialectic query
- sync_turn: records messages + flushes to API (threaded)
- on_memory_write: mirrors user profile writes as conclusions
- on_session_end: flushes all pending messages

This is a prerequisite for the MemoryManager wiring in run_agent.py.
Once wired, Honcho goes through the same provider interface as all
other memory plugins, and the scattered Honcho code in run_agent.py
can be consolidated into the single MemoryManager integration point.

* feat(memory): wire MemoryManager into run_agent.py

Adds 8 integration points for the external memory provider plugin,
all purely additive (zero existing code modified):

1. Init (~L1130): Create MemoryManager, find matching plugin provider
   from memory.provider config, initialize with session context
2. Tool injection (~L1160): Append provider tool schemas to self.tools
   and self.valid_tool_names after memory_manager init
3. System prompt (~L2705): Add external provider's system_prompt_block
   alongside existing MEMORY.md/USER.md blocks
4. Tool routing (~L5362): Route provider tool calls through
   memory_manager.handle_tool_call() before the catchall handler
5. Memory write bridge (~L5353): Notify external provider via
   on_memory_write() when the built-in memory tool writes
6. Pre-compress (~L5233): Call on_pre_compress() before context
   compression discards messages
7. Prefetch (~L6421): Inject provider prefetch results into the
   current-turn user message (same pattern as Honcho turn context)
8. Turn sync + session end (~L8161, ~L8172): sync_all() after each
   completed turn, queue_prefetch_all() for next turn, on_session_end()
   + shutdown_all() at conversation end

All hooks are wrapped in try/except — a failing provider never breaks
the agent. The existing memory system, Honcho integration, and all
other code paths are completely untouched.
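The wrap-every-hook pattern can be sketched with a hypothetical `safe_hook` helper (name and signature are illustrative):

```python
import logging

logger = logging.getLogger(__name__)


def safe_hook(provider, hook_name, *args, default=None, **kwargs):
    """Call an optional provider hook, swallowing any failure so a broken
    memory backend can never take down the agent loop."""
    fn = getattr(provider, hook_name, None)
    if fn is None:
        return default
    try:
        return fn(*args, **kwargs)
    except Exception:
        logger.warning("memory provider hook %s failed", hook_name, exc_info=True)
        return default
```

Every integration point calls through something like this instead of invoking the provider directly.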

Full suite: 7222 passed, 4 pre-existing failures.

* refactor(memory): remove legacy Honcho integration from core

Extracts all Honcho-specific code from run_agent.py, model_tools.py,
toolsets.py, and gateway/run.py. Honcho is now exclusively available
as a memory provider plugin (plugins/honcho-memory/).

Removed from run_agent.py (-457 lines):
- Honcho init block (session manager creation, activation, config)
- 8 Honcho methods: _honcho_should_activate, _strip_honcho_tools,
  _activate_honcho, _register_honcho_exit_hook, _queue_honcho_prefetch,
  _honcho_prefetch, _honcho_save_user_observation, _honcho_sync
- _inject_honcho_turn_context module-level function
- Honcho system prompt block (tool descriptions, CLI commands)
- Honcho context injection in api_messages building
- Honcho params from __init__ (honcho_session_key, honcho_manager,
  honcho_config)
- HONCHO_TOOL_NAMES constant
- All honcho-specific tool dispatch forwarding

Removed from other files:
- model_tools.py: honcho_tools import, honcho params from handle_function_call
- toolsets.py: honcho toolset definition, honcho tools from core tools list
- gateway/run.py: honcho params from AIAgent constructor calls

Removed tests (-339 lines):
- 9 Honcho-specific test methods from test_run_agent.py
- TestHonchoAtexitFlush class from test_exit_cleanup_interrupt.py

Restored two regex constants (_SURROGATE_RE, _BUDGET_WARNING_RE) that
were accidentally removed during the honcho function extraction.

The honcho_integration/ package is kept intact — the plugin delegates
to it. tools/honcho_tools.py registry entries are now dead code (import
commented out in model_tools.py) but the file is preserved for reference.

Full suite: 7207 passed, 4 pre-existing failures. Zero regressions.

* refactor(memory): restructure plugins, add CLI, clean gateway, migration notice

Plugin restructure:
- Move all memory plugins from plugins/<name>-memory/ to plugins/memory/<name>/
  (byterover, hindsight, holographic, honcho, mem0, openviking, retaindb)
- New plugins/memory/__init__.py discovery module that scans the directory
  directly, loading providers by name without the general plugin system
- run_agent.py uses load_memory_provider() instead of get_plugin_memory_providers()

CLI wiring:
- hermes memory setup — interactive curses picker + config wizard
- hermes memory status — show active provider, config, availability
- hermes memory off — disable external provider (built-in only)
- hermes honcho — now shows migration notice pointing to hermes memory setup

Gateway cleanup:
- Remove _get_or_create_gateway_honcho (already removed in prev commit)
- Remove _shutdown_gateway_honcho and _shutdown_all_gateway_honcho methods
- Remove all calls to shutdown methods (4 call sites)
- Remove _honcho_managers/_honcho_configs dict references

Dead code removal:
- Delete tools/honcho_tools.py (279 lines, import was already commented out)
- Delete tests/gateway/test_honcho_lifecycle.py (131 lines, tested removed methods)
- Remove if False placeholder from run_agent.py

Migration:
- Honcho migration notice on startup: detects existing honcho.json or
  ~/.honcho/config.json, prints guidance to run hermes memory setup.
  Only fires when memory.provider is not set and not in quiet mode.

Full suite: 7203 passed, 4 pre-existing failures. Zero regressions.

* feat(memory): standardize plugin config + add per-plugin documentation

Config architecture:
- Add save_config(values, hermes_home) to MemoryProvider ABC
- Honcho: writes to $HERMES_HOME/honcho.json (SDK native)
- Mem0: writes to $HERMES_HOME/mem0.json
- Hindsight: writes to $HERMES_HOME/hindsight/config.json
- Holographic: writes to config.yaml under plugins.hermes-memory-store
- OpenViking/RetainDB/ByteRover: env-var only (default no-op)

Setup wizard (hermes memory setup):
- Now calls provider.save_config() for non-secret config
- Secrets still go to .env via env vars
- Only memory.provider activation key goes to config.yaml

Documentation:
- README.md for each of the 7 providers in plugins/memory/<name>/
- Requirements, setup (wizard + manual), config reference, tools table
- Consistent format across all providers

The contract for new memory plugins:
- get_config_schema() declares all fields (REQUIRED)
- save_config() writes native config (REQUIRED if not env-var-only)
- Secrets use env_var field in schema, written to .env by wizard
- README.md in the plugin directory

* docs: add memory providers user guide + developer guide

New pages:
- user-guide/features/memory-providers.md — comprehensive guide covering
  all 7 shipped providers (Honcho, OpenViking, Mem0, Hindsight,
  Holographic, RetainDB, ByteRover). Each with setup, config, tools,
  cost, and unique features. Includes comparison table and profile
  isolation notes.
- developer-guide/memory-provider-plugin.md — how to build a new memory
  provider plugin. Covers ABC, required methods, config schema,
  save_config, threading contract, profile isolation, testing.

Updated pages:
- user-guide/features/memory.md — replaced Honcho section with link to
  new Memory Providers page
- user-guide/features/honcho.md — replaced with migration redirect to
  the new Memory Providers page
- sidebars.ts — added both new pages to navigation

* fix(memory): auto-migrate Honcho users to memory provider plugin

When honcho.json or ~/.honcho/config.json exists but memory.provider
is not set, automatically set memory.provider: honcho in config.yaml
and activate the plugin. The plugin reads the same config files, so
all data and credentials are preserved. Zero user action needed.

Persists the migration to config.yaml so it only fires once. Prints
a one-line confirmation in non-quiet mode.

* fix(memory): only auto-migrate Honcho when enabled + credentialed

Check HonchoClientConfig.enabled AND (api_key OR base_url) before
auto-migrating — not just file existence. Prevents false activation
for users who disabled Honcho, stopped using it (config lingers),
or have ~/.honcho/ from a different tool.

* feat(memory): auto-install pip dependencies during hermes memory setup

Reads pip_dependencies from plugin.yaml, checks which are missing,
installs them via pip before config walkthrough. Also shows install
guidance for external_dependencies (e.g. brv CLI for ByteRover).

Updated all 7 plugin.yaml files with pip_dependencies:
- honcho: honcho-ai
- mem0: mem0ai
- openviking: httpx
- hindsight: hindsight-client
- holographic: (none)
- retaindb: requests
- byterover: (external_dependencies for brv CLI)

* fix: remove remaining Honcho crash risks from cli.py and gateway

cli.py: removed Honcho session re-mapping block (would crash importing
deleted tools/honcho_tools.py), Honcho flush on compress, Honcho
session display on startup, Honcho shutdown on exit, honcho_session_key
AIAgent param.

gateway/run.py: removed honcho_session_key params from helper methods,
sync_honcho param, _honcho.shutdown() block.

tests: fixed test_cron_session_with_honcho_key_skipped (was passing
removed honcho_key param to _flush_memories_for_session).

* fix: include plugins/ in pyproject.toml package list

Without this, plugins/memory/ wouldn't be included in non-editable
installs. Hermes always runs from the repo checkout so this is belt-
and-suspenders, but prevents breakage if the install method changes.

* fix(memory): correct pip-to-import name mapping for dep checks

The heuristic dep.replace('-', '_') fails for packages where the pip
name differs from the import name: honcho-ai→honcho, mem0ai→mem0,
hindsight-client→hindsight_client. Added explicit mapping table so
hermes memory setup doesn't try to reinstall already-installed packages.

* chore: remove dead code from old plugin memory registration path

- hermes_cli/plugins.py: removed register_memory_provider(),
  _memory_providers list, get_plugin_memory_providers() — memory
  providers now use plugins/memory/ discovery, not the general plugin system
- hermes_cli/main.py: stripped 74 lines of dead honcho argparse
  subparsers (setup, status, sessions, map, peer, mode, tokens,
  identity, migrate) — kept only the migration redirect
- agent/memory_provider.py: updated docstring to reflect new
  registration path
- tests: replaced TestPluginMemoryProviderRegistration with
  TestPluginMemoryDiscovery that tests the actual plugins/memory/
  discovery system. Added 3 new tests (discover, load, nonexistent).

* chore: delete dead honcho_integration/cli.py and its tests

cli.py (794 lines) was the old 'hermes honcho' command handler — nobody
calls it since cmd_honcho was replaced with a migration redirect.

Deleted tests that imported from removed code:
- tests/honcho_integration/test_cli.py (tested _resolve_api_key)
- tests/honcho_integration/test_config_isolation.py (tested CLI config paths)
- tests/tools/test_honcho_tools.py (tested the deleted tools/honcho_tools.py)

Remaining honcho_integration/ files (actively used by the plugin):
- client.py (445 lines) — config loading, SDK client creation
- session.py (991 lines) — session management, queries, flush

* refactor: move honcho_integration/ into the honcho plugin

Moves client.py (445 lines) and session.py (991 lines) from the
top-level honcho_integration/ package into plugins/memory/honcho/.
No Honcho code remains in the main codebase.

- plugins/memory/honcho/client.py — config loading, SDK client creation
- plugins/memory/honcho/session.py — session management, queries, flush
- Updated all imports: run_agent.py (auto-migration), hermes_cli/doctor.py,
  plugin __init__.py, session.py cross-import, all tests
- Removed honcho_integration/ package and pyproject.toml entry
- Renamed tests/honcho_integration/ → tests/honcho_plugin/

* docs: update architecture + gateway-internals for memory provider system

- architecture.md: replaced honcho_integration/ with plugins/memory/
- gateway-internals.md: replaced Honcho-specific session routing and
  flush lifecycle docs with generic memory provider interface docs

* fix: update stale mock path for resolve_active_host after honcho plugin migration

* fix(memory): address review feedback — P0 lifecycle, ABC contract, honcho CLI restore

Review feedback from Honcho devs (erosika):

P0 — Provider lifecycle:
- Remove on_session_end() + shutdown_all() from run_conversation() tail
  (was killing providers after every turn in multi-turn sessions)
- Add shutdown_memory_provider() method on AIAgent for callers
- Wire shutdown into CLI atexit, reset_conversation, gateway stop/expiry

Bug fixes:
- Remove sync_honcho=False kwarg from /btw callsites (TypeError crash)
- Fix doctor.py references to dead 'hermes honcho setup' command
- Cache prefetch_all() before tool loop (was re-calling every iteration)

ABC contract hardening (all backwards-compatible):
- Add session_id kwarg to prefetch/sync_turn/queue_prefetch
- Make on_pre_compress() return str (provider insights in compression)
- Add **kwargs to on_turn_start() for runtime context
- Add on_delegation() hook for parent-side subagent observation
- Document agent_context/agent_identity/agent_workspace kwargs on
  initialize() (prevents cron corruption, enables profile scoping)
- Fix docstring: single external provider, not multiple

Honcho CLI restoration:
- Add plugins/memory/honcho/cli.py (from main's honcho_integration/cli.py
  with imports adapted to plugin path)
- Restore full hermes honcho command with all subcommands (status, peer,
  mode, tokens, identity, enable/disable, sync, peers, --target-profile)
- Restore auto-clone on profile creation + sync on hermes update
- hermes honcho setup now redirects to hermes memory setup

* fix(memory): wire on_delegation, skip_memory for cron/flush, fix ByteRover return type

- Wire on_delegation() in delegate_tool.py — parent's memory provider
  is notified with task+result after each subagent completes
- Add skip_memory=True to cron scheduler (prevents cron system prompts
  from corrupting user representations — closes #4052)
- Add skip_memory=True to gateway flush agent (throwaway agent shouldn't
  activate memory provider)
- Fix ByteRover on_pre_compress() return type: None -> str

* fix(honcho): port profile isolation fixes from PR #4632

Ports 5 bug fixes found during profile testing (erosika's PR #4632):

1. 3-tier config resolution — resolve_config_path() now checks
   $HERMES_HOME/honcho.json → ~/.hermes/honcho.json → ~/.honcho/config.json
   (non-default profiles couldn't find shared host blocks)

2. Thread host=_host_key() through from_global_config() in cmd_setup,
   cmd_status, cmd_identity (--target-profile was being ignored)

3. Use bare profile name as aiPeer (not host key with dots) — Honcho's
   peer ID pattern is ^[a-zA-Z0-9_-]+$, dots are invalid

4. Wrap add_peers() in try/except — was fatal on new AI peers, killed
   all message uploads for the session

5. Gate Honcho clone behind --clone/--clone-all on profile create
   (bare create should be blank-slate)

Also: sanitize assistant_peer_id via _sanitize_id()
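The 3-tier resolution in fix 1 can be sketched as follows; the candidate order matches the commit, while the function body itself is an assumption:

```python
from pathlib import Path
from typing import Optional


def resolve_config_path(hermes_home: Path) -> Optional[Path]:
    """Return the first existing Honcho config file in priority order."""
    candidates = [
        hermes_home / "honcho.json",              # tier 1: profile-scoped
        Path.home() / ".hermes" / "honcho.json",  # tier 2: default profile
        Path.home() / ".honcho" / "config.json",  # tier 3: legacy shared
    ]
    for path in candidates:
        if path.is_file():
            return path
    return None
```

Non-default profiles fall through to tiers 2 and 3, which is how they now find shared host blocks.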

* fix(tests): add module cleanup fixture to test_cli_provider_resolution

test_cli_provider_resolution._import_cli() wipes tools.*, cli, and
run_agent from sys.modules to force fresh imports, but had no cleanup.
This poisoned all subsequent tests on the same xdist worker — mocks
targeting tools.file_tools, tools.send_message_tool, etc. patched the
NEW module object while already-imported functions still referenced
the OLD one. Caused ~25 cascade failures: send_message KeyError,
process_registry FileNotFoundError, file_read_guards timeouts,
read_loop_detection file-not-found, mcp_oauth None port, and
provider_parity/codex_execution stale tool lists.

Fix: autouse fixture saves all affected modules before each test and
restores them after, matching the pattern in
test_managed_browserbase_and_modal.py.
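The save/restore pattern can be sketched as a plain generator; in the test file it would be wrapped with `@pytest.fixture(autouse=True)`, and the module names here are an illustrative subset, not the full list:

```python
import sys

# Modules evicted by _import_cli(); illustrative subset.
_AFFECTED = ["cli", "run_agent", "tools.file_tools", "tools.send_message_tool"]


def restore_modules(names=_AFFECTED):
    """Fixture body: snapshot the affected sys.modules entries, yield to
    the test, then put the ORIGINAL module objects back so later tests
    (and their mocks) see the objects that were imported before eviction."""
    saved = {name: sys.modules.get(name) for name in names}
    yield
    for name, mod in saved.items():
        if mod is not None:
            sys.modules[name] = mod
        else:
            sys.modules.pop(name, None)
```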
Teknium 2026-04-02 15:33:51 -07:00 committed by GitHub
parent e0b2bdb089
commit 924bc67eee
69 changed files with 7501 additions and 2317 deletions

plugins/memory/__init__.py (new file, 213 lines)
"""Memory provider plugin discovery.

Scans ``plugins/memory/<name>/`` directories for memory provider plugins.
Each subdirectory must contain ``__init__.py`` with a class implementing
the MemoryProvider ABC.

Memory providers are separate from the general plugin system — they live
in the repo and are always available without user installation. Only ONE
can be active at a time, selected via ``memory.provider`` in config.yaml.

Usage:
    from plugins.memory import discover_memory_providers, load_memory_provider

    available = discover_memory_providers()        # [(name, desc, available), ...]
    provider = load_memory_provider("openviking")  # MemoryProvider instance
"""
from __future__ import annotations

import importlib
import importlib.util
import logging
import sys
from pathlib import Path
from typing import List, Optional, Tuple

logger = logging.getLogger(__name__)

_MEMORY_PLUGINS_DIR = Path(__file__).parent


def discover_memory_providers() -> List[Tuple[str, str, bool]]:
    """Scan plugins/memory/ for available providers.

    Returns list of (name, description, is_available) tuples.
    Does NOT import the providers for metadata — just reads plugin.yaml
    and does a lightweight availability check.
    """
    results = []
    if not _MEMORY_PLUGINS_DIR.is_dir():
        return results
    for child in sorted(_MEMORY_PLUGINS_DIR.iterdir()):
        if not child.is_dir() or child.name.startswith(("_", ".")):
            continue
        init_file = child / "__init__.py"
        if not init_file.exists():
            continue
        # Read description from plugin.yaml if available
        desc = ""
        yaml_file = child / "plugin.yaml"
        if yaml_file.exists():
            try:
                import yaml
                with open(yaml_file) as f:
                    meta = yaml.safe_load(f) or {}
                desc = meta.get("description", "")
            except Exception:
                pass
        # Quick availability check — try loading and calling is_available()
        available = True
        try:
            provider = _load_provider_from_dir(child)
            if provider:
                available = provider.is_available()
            else:
                available = False
        except Exception:
            available = False
        results.append((child.name, desc, available))
    return results


def load_memory_provider(name: str) -> Optional["MemoryProvider"]:
    """Load and return a MemoryProvider instance by name.

    Returns None if the provider is not found or fails to load.
    """
    provider_dir = _MEMORY_PLUGINS_DIR / name
    if not provider_dir.is_dir():
        logger.debug("Memory provider '%s' not found in %s", name, _MEMORY_PLUGINS_DIR)
        return None
    try:
        provider = _load_provider_from_dir(provider_dir)
        if provider:
            return provider
        logger.warning("Memory provider '%s' loaded but no provider instance found", name)
        return None
    except Exception as e:
        logger.warning("Failed to load memory provider '%s': %s", name, e)
        return None


def _load_provider_from_dir(provider_dir: Path) -> Optional["MemoryProvider"]:
    """Import a provider module and extract the MemoryProvider instance.

    The module must have either:
    - A register(ctx) function (plugin-style) — we simulate a ctx
    - A top-level class that extends MemoryProvider — we instantiate it
    """
    name = provider_dir.name
    module_name = f"plugins.memory.{name}"
    init_file = provider_dir / "__init__.py"
    if not init_file.exists():
        return None
    # Check if already loaded
    if module_name in sys.modules:
        mod = sys.modules[module_name]
    else:
        # Handle relative imports within the plugin.
        # First ensure the parent packages are registered.
        for parent in ("plugins", "plugins.memory"):
            if parent not in sys.modules:
                parent_path = Path(__file__).parent
                if parent == "plugins":
                    parent_path = parent_path.parent
                parent_init = parent_path / "__init__.py"
                if parent_init.exists():
                    spec = importlib.util.spec_from_file_location(
                        parent, str(parent_init),
                        submodule_search_locations=[str(parent_path)]
                    )
                    if spec:
                        parent_mod = importlib.util.module_from_spec(spec)
                        sys.modules[parent] = parent_mod
                        try:
                            spec.loader.exec_module(parent_mod)
                        except Exception:
                            pass
        # Now load the provider module
        spec = importlib.util.spec_from_file_location(
            module_name, str(init_file),
            submodule_search_locations=[str(provider_dir)]
        )
        if not spec:
            return None
        mod = importlib.util.module_from_spec(spec)
        sys.modules[module_name] = mod
        # Register submodules so relative imports work,
        # e.g. "from .store import MemoryStore" in the holographic plugin.
        for sub_file in provider_dir.glob("*.py"):
            if sub_file.name == "__init__.py":
                continue
            sub_name = sub_file.stem
            full_sub_name = f"{module_name}.{sub_name}"
            if full_sub_name not in sys.modules:
                sub_spec = importlib.util.spec_from_file_location(
                    full_sub_name, str(sub_file)
                )
                if sub_spec:
                    sub_mod = importlib.util.module_from_spec(sub_spec)
                    sys.modules[full_sub_name] = sub_mod
                    try:
                        sub_spec.loader.exec_module(sub_mod)
                    except Exception as e:
                        logger.debug("Failed to load submodule %s: %s", full_sub_name, e)
        try:
            spec.loader.exec_module(mod)
        except Exception as e:
            logger.debug("Failed to exec_module %s: %s", module_name, e)
            sys.modules.pop(module_name, None)
            return None
    # Try register(ctx) pattern first (how our plugins are written)
    if hasattr(mod, "register"):
        collector = _ProviderCollector()
        try:
            mod.register(collector)
            if collector.provider:
                return collector.provider
        except Exception as e:
            logger.debug("register() failed for %s: %s", name, e)
    # Fallback: find a MemoryProvider subclass and instantiate it
    from agent.memory_provider import MemoryProvider
    for attr_name in dir(mod):
        attr = getattr(mod, attr_name, None)
        if (isinstance(attr, type) and issubclass(attr, MemoryProvider)
                and attr is not MemoryProvider):
            try:
                return attr()
            except Exception:
                pass
    return None


class _ProviderCollector:
    """Fake plugin context that captures register_memory_provider calls."""

    def __init__(self):
        self.provider = None

    def register_memory_provider(self, provider):
        self.provider = provider

    # No-op for other registration methods
    def register_tool(self, *args, **kwargs):
        pass

    def register_hook(self, *args, **kwargs):
        pass

plugins/memory/byterover/README.md (new file, 41 lines)
# ByteRover Memory Provider

Persistent memory via the `brv` CLI — hierarchical knowledge tree with tiered retrieval (fuzzy text → LLM-driven search).

## Requirements

Install the ByteRover CLI:

```bash
curl -fsSL https://byterover.dev/install.sh | sh
# or
npm install -g byterover-cli
```

## Setup

```bash
hermes memory setup   # select "byterover"
```

Or manually:

```bash
hermes config set memory.provider byterover
# Optional cloud sync:
echo "BRV_API_KEY=your-key" >> ~/.hermes/.env
```

## Config

| Env Var | Required | Description |
|---------|----------|-------------|
| `BRV_API_KEY` | No | Cloud sync key (optional, local-first by default) |

Working directory: `$HERMES_HOME/byterover/` (profile-scoped).

## Tools

| Tool | Description |
|------|-------------|
| `brv_query` | Search the knowledge tree |
| `brv_curate` | Store facts, decisions, patterns |
| `brv_status` | CLI version, tree stats, sync state |

plugins/memory/byterover/__init__.py (new file, 398 lines)
"""ByteRover memory plugin — MemoryProvider interface.

Persistent memory via the ByteRover CLI (``brv``). Organizes knowledge into
a hierarchical context tree with tiered retrieval (fuzzy text → LLM-driven
search). Local-first with optional cloud sync.

Original PR #3499 by hieuntg81, adapted to MemoryProvider ABC.

Requires: ``brv`` CLI installed (npm install -g byterover-cli or
curl -fsSL https://byterover.dev/install.sh | sh).

Config via environment variables (profile-scoped via each profile's .env):
    BRV_API_KEY    ByteRover API key (for cloud features, optional for local)

Working directory: $HERMES_HOME/byterover/ (profile-scoped context tree)
"""
from __future__ import annotations

import json
import logging
import os
import shutil
import subprocess
import threading
import time
from pathlib import Path
from typing import Any, Dict, List, Optional

from agent.memory_provider import MemoryProvider

logger = logging.getLogger(__name__)

# Timeouts
_QUERY_TIMEOUT = 30    # brv query — should be fast
_CURATE_TIMEOUT = 120  # brv curate — may involve LLM processing

# Minimum lengths to filter noise
_MIN_QUERY_LEN = 10
_MIN_OUTPUT_LEN = 20

# ---------------------------------------------------------------------------
# brv binary resolution (cached, thread-safe)
# ---------------------------------------------------------------------------
_brv_path_lock = threading.Lock()
_cached_brv_path: Optional[str] = None


def _resolve_brv_path() -> Optional[str]:
    """Find the brv binary on PATH or well-known install locations."""
    global _cached_brv_path
    with _brv_path_lock:
        if _cached_brv_path is not None:
            return _cached_brv_path if _cached_brv_path != "" else None
    found = shutil.which("brv")
    if not found:
        home = Path.home()
        candidates = [
            home / ".brv-cli" / "bin" / "brv",
            Path("/usr/local/bin/brv"),
            home / ".npm-global" / "bin" / "brv",
        ]
        for c in candidates:
            if c.exists():
                found = str(c)
                break
    with _brv_path_lock:
        if _cached_brv_path is not None:
            return _cached_brv_path if _cached_brv_path != "" else None
        _cached_brv_path = found or ""
    return found


def _run_brv(args: List[str], timeout: int = _QUERY_TIMEOUT,
             cwd: Optional[str] = None) -> dict:
    """Run a brv CLI command. Returns {success, output, error}."""
    brv_path = _resolve_brv_path()
    if not brv_path:
        return {"success": False, "error": "brv CLI not found. Install: npm install -g byterover-cli"}
    cmd = [brv_path] + args
    effective_cwd = cwd or str(_get_brv_cwd())
    Path(effective_cwd).mkdir(parents=True, exist_ok=True)
    env = os.environ.copy()
    brv_bin_dir = str(Path(brv_path).parent)
    env["PATH"] = brv_bin_dir + os.pathsep + env.get("PATH", "")
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True,
            timeout=timeout, cwd=effective_cwd, env=env,
        )
        stdout = result.stdout.strip()
        stderr = result.stderr.strip()
        if result.returncode == 0:
            return {"success": True, "output": stdout}
        return {"success": False, "error": stderr or stdout or f"brv exited {result.returncode}"}
    except subprocess.TimeoutExpired:
        return {"success": False, "error": f"brv timed out after {timeout}s"}
    except FileNotFoundError:
        global _cached_brv_path
        with _brv_path_lock:
            _cached_brv_path = None
        return {"success": False, "error": "brv CLI not found"}
    except Exception as e:
        return {"success": False, "error": str(e)}


def _get_brv_cwd() -> Path:
    """Profile-scoped working directory for the brv context tree."""
    from hermes_constants import get_hermes_home
    return get_hermes_home() / "byterover"


# ---------------------------------------------------------------------------
# Tool schemas
# ---------------------------------------------------------------------------
QUERY_SCHEMA = {
    "name": "brv_query",
    "description": (
        "Search ByteRover's persistent knowledge tree for relevant context. "
        "Returns memories, project knowledge, architectural decisions, and "
        "patterns from previous sessions. Use for any question where past "
        "context would help."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "What to search for."},
        },
        "required": ["query"],
    },
}

CURATE_SCHEMA = {
    "name": "brv_curate",
    "description": (
        "Store important information in ByteRover's persistent knowledge tree. "
        "Use for architectural decisions, bug fixes, user preferences, project "
        "patterns — anything worth remembering across sessions. ByteRover's LLM "
        "automatically categorizes and organizes the memory."
),
"parameters": {
"type": "object",
"properties": {
"content": {"type": "string", "description": "The information to remember."},
},
"required": ["content"],
},
}
STATUS_SCHEMA = {
"name": "brv_status",
"description": "Check ByteRover status — CLI version, context tree stats, cloud sync state.",
"parameters": {"type": "object", "properties": {}, "required": []},
}
# ---------------------------------------------------------------------------
# MemoryProvider implementation
# ---------------------------------------------------------------------------
class ByteRoverMemoryProvider(MemoryProvider):
"""ByteRover persistent memory via the brv CLI."""
def __init__(self):
self._cwd = ""
self._session_id = ""
self._turn_count = 0
self._prefetch_result = ""
self._prefetch_lock = threading.Lock()
self._prefetch_thread: Optional[threading.Thread] = None
self._sync_thread: Optional[threading.Thread] = None
@property
def name(self) -> str:
return "byterover"
def is_available(self) -> bool:
"""Check if brv CLI is installed. No network calls."""
return _resolve_brv_path() is not None
def get_config_schema(self):
return [
{
"key": "api_key",
"description": "ByteRover API key (optional, for cloud sync)",
"secret": True,
"env_var": "BRV_API_KEY",
"url": "https://app.byterover.dev",
},
]
def initialize(self, session_id: str, **kwargs) -> None:
self._cwd = str(_get_brv_cwd())
self._session_id = session_id
self._turn_count = 0
Path(self._cwd).mkdir(parents=True, exist_ok=True)
def system_prompt_block(self) -> str:
if not _resolve_brv_path():
return ""
return (
"# ByteRover Memory\n"
"Active. Persistent knowledge tree with hierarchical context.\n"
"Use brv_query to search past knowledge, brv_curate to store "
"important facts, brv_status to check state."
)
def prefetch(self, query: str, *, session_id: str = "") -> str:
if self._prefetch_thread and self._prefetch_thread.is_alive():
self._prefetch_thread.join(timeout=3.0)
with self._prefetch_lock:
result = self._prefetch_result
self._prefetch_result = ""
if not result:
return ""
return f"## ByteRover Context\n{result}"
def queue_prefetch(self, query: str, *, session_id: str = "") -> None:
if not query or len(query.strip()) < _MIN_QUERY_LEN:
return
def _run():
try:
result = _run_brv(
["query", "--", query.strip()[:5000]],
timeout=_QUERY_TIMEOUT, cwd=self._cwd,
)
if result["success"] and result.get("output"):
output = result["output"].strip()
if len(output) > _MIN_OUTPUT_LEN:
with self._prefetch_lock:
self._prefetch_result = output
except Exception as e:
logger.debug("ByteRover prefetch failed: %s", e)
self._prefetch_thread = threading.Thread(
target=_run, daemon=True, name="brv-prefetch"
)
self._prefetch_thread.start()
def sync_turn(self, user_content: str, assistant_content: str, *, session_id: str = "") -> None:
"""Curate the conversation turn in background (non-blocking)."""
self._turn_count += 1
# Only curate substantive turns
if len(user_content.strip()) < _MIN_QUERY_LEN:
return
def _sync():
try:
combined = f"User: {user_content[:2000]}\nAssistant: {assistant_content[:2000]}"
_run_brv(
["curate", "--", combined],
timeout=_CURATE_TIMEOUT, cwd=self._cwd,
)
except Exception as e:
logger.debug("ByteRover sync failed: %s", e)
# Wait for previous sync
if self._sync_thread and self._sync_thread.is_alive():
self._sync_thread.join(timeout=5.0)
self._sync_thread = threading.Thread(
target=_sync, daemon=True, name="brv-sync"
)
self._sync_thread.start()
def on_memory_write(self, action: str, target: str, content: str) -> None:
"""Mirror built-in memory writes to ByteRover."""
if action not in ("add", "replace") or not content:
return
def _write():
try:
label = "User profile" if target == "user" else "Agent memory"
_run_brv(
["curate", "--", f"[{label}] {content}"],
timeout=_CURATE_TIMEOUT, cwd=self._cwd,
)
except Exception as e:
logger.debug("ByteRover memory mirror failed: %s", e)
t = threading.Thread(target=_write, daemon=True, name="brv-memwrite")
t.start()
def on_pre_compress(self, messages: List[Dict[str, Any]]) -> str:
"""Extract insights before context compression discards turns."""
if not messages:
return ""
# Build a summary of messages about to be compressed
parts = []
for msg in messages[-10:]: # last 10 messages
role = msg.get("role", "")
content = msg.get("content", "")
if isinstance(content, str) and content.strip() and role in ("user", "assistant"):
parts.append(f"{role}: {content[:500]}")
if not parts:
return ""
combined = "\n".join(parts)
def _flush():
try:
_run_brv(
["curate", "--", f"[Pre-compression context]\n{combined}"],
timeout=_CURATE_TIMEOUT, cwd=self._cwd,
)
logger.info("ByteRover pre-compression flush: %d messages", len(parts))
except Exception as e:
logger.debug("ByteRover pre-compression flush failed: %s", e)
t = threading.Thread(target=_flush, daemon=True, name="brv-flush")
t.start()
return ""
def get_tool_schemas(self) -> List[Dict[str, Any]]:
return [QUERY_SCHEMA, CURATE_SCHEMA, STATUS_SCHEMA]
def handle_tool_call(self, tool_name: str, args: dict, **kwargs) -> str:
if tool_name == "brv_query":
return self._tool_query(args)
elif tool_name == "brv_curate":
return self._tool_curate(args)
elif tool_name == "brv_status":
return self._tool_status()
return json.dumps({"error": f"Unknown tool: {tool_name}"})
def shutdown(self) -> None:
for t in (self._sync_thread, self._prefetch_thread):
if t and t.is_alive():
t.join(timeout=10.0)
# -- Tool implementations ------------------------------------------------
def _tool_query(self, args: dict) -> str:
query = args.get("query", "")
if not query:
return json.dumps({"error": "query is required"})
result = _run_brv(
["query", "--", query.strip()[:5000]],
timeout=_QUERY_TIMEOUT, cwd=self._cwd,
)
if not result["success"]:
return json.dumps({"error": result.get("error", "Query failed")})
output = result.get("output", "").strip()
if not output or len(output) < _MIN_OUTPUT_LEN:
return json.dumps({"result": "No relevant memories found."})
# Truncate very long results
if len(output) > 8000:
output = output[:8000] + "\n\n[... truncated]"
return json.dumps({"result": output})
def _tool_curate(self, args: dict) -> str:
content = args.get("content", "")
if not content:
return json.dumps({"error": "content is required"})
result = _run_brv(
["curate", "--", content],
timeout=_CURATE_TIMEOUT, cwd=self._cwd,
)
if not result["success"]:
return json.dumps({"error": result.get("error", "Curate failed")})
return json.dumps({"result": "Memory curated successfully."})
def _tool_status(self) -> str:
result = _run_brv(["status"], timeout=15, cwd=self._cwd)
if not result["success"]:
return json.dumps({"error": result.get("error", "Status check failed")})
return json.dumps({"status": result.get("output", "")})
# ---------------------------------------------------------------------------
# Plugin entry point
# ---------------------------------------------------------------------------
def register(ctx) -> None:
"""Register ByteRover as a memory provider plugin."""
ctx.register_memory_provider(ByteRoverMemoryProvider())

@@ -0,0 +1,9 @@
name: byterover
version: 1.0.0
description: "ByteRover — persistent knowledge tree with tiered retrieval via the brv CLI."
external_dependencies:
- name: brv
install: "curl -fsSL https://byterover.dev/install.sh | sh"
check: "brv --version"
hooks:
- on_pre_compress

@@ -0,0 +1,38 @@
# Hindsight Memory Provider
Long-term memory with knowledge graph, entity resolution, and multi-strategy retrieval. Supports cloud and local modes.
## Requirements
- Cloud: `pip install hindsight-client` + API key from [app.hindsight.vectorize.io](https://app.hindsight.vectorize.io)
- Local: `pip install hindsight` + LLM API key for embeddings
## Setup
```bash
hermes memory setup # select "hindsight"
```
Or manually:
```bash
hermes config set memory.provider hindsight
echo "HINDSIGHT_API_KEY=your-key" >> "$HERMES_HOME/.env"
```
## Config
Config file: `$HERMES_HOME/hindsight/config.json` (or `~/.hindsight/config.json` legacy)
| Key | Default | Description |
|-----|---------|-------------|
| `mode` | `cloud` | `cloud` or `local` |
| `bank_id` | `hermes` | Memory bank identifier |
| `budget` | `mid` | Recall thoroughness: `low`/`mid`/`high` |
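A minimal `config.json` sketch using the keys the plugin actually reads (the `apiKey` value is a placeholder; omit it in `local` mode):

```json
{
  "mode": "cloud",
  "apiKey": "your-api-key",
  "banks": {
    "hermes": {
      "bankId": "hermes",
      "budget": "mid",
      "enabled": true
    }
  }
}
```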
## Tools
| Tool | Description |
|------|-------------|
| `hindsight_retain` | Store information with auto entity extraction |
| `hindsight_recall` | Multi-strategy search (semantic + entity graph) |
| `hindsight_reflect` | Cross-memory synthesis (LLM-powered) |

@@ -0,0 +1,358 @@
"""Hindsight memory plugin — MemoryProvider interface.
Long-term memory with knowledge graph, entity resolution, and multi-strategy
retrieval. Supports cloud (API key) and local (embedded PostgreSQL) modes.
Original PR #1811 by benfrank241, adapted to MemoryProvider ABC.
Config via environment variables:
HINDSIGHT_API_KEY API key for Hindsight Cloud
HINDSIGHT_BANK_ID memory bank identifier (default: hermes)
HINDSIGHT_BUDGET recall budget: low/mid/high (default: mid)
HINDSIGHT_API_URL API endpoint
HINDSIGHT_MODE cloud or local (default: cloud)
Or via $HERMES_HOME/hindsight/config.json (profile-scoped), falling back to
~/.hindsight/config.json (legacy, shared) for backward compatibility.
"""
from __future__ import annotations
import json
import logging
import os
import queue
import threading
from typing import Any, Dict, List
from agent.memory_provider import MemoryProvider
logger = logging.getLogger(__name__)
_DEFAULT_API_URL = "https://api.hindsight.vectorize.io"
_VALID_BUDGETS = {"low", "mid", "high"}
# ---------------------------------------------------------------------------
# Thread helper (from original PR — avoids aiohttp event loop conflicts)
# ---------------------------------------------------------------------------
def _run_in_thread(fn, timeout: float = 30.0):
result_q: queue.Queue = queue.Queue(maxsize=1)
def _run():
import asyncio
asyncio.set_event_loop(None)
try:
result_q.put(("ok", fn()))
except Exception as exc:
result_q.put(("err", exc))
t = threading.Thread(target=_run, daemon=True, name="hindsight-call")
t.start()
kind, value = result_q.get(timeout=timeout)
if kind == "err":
raise value
return value
# ---------------------------------------------------------------------------
# Tool schemas
# ---------------------------------------------------------------------------
RETAIN_SCHEMA = {
"name": "hindsight_retain",
"description": (
"Store information to long-term memory. Hindsight automatically "
"extracts structured facts, resolves entities, and indexes for retrieval."
),
"parameters": {
"type": "object",
"properties": {
"content": {"type": "string", "description": "The information to store."},
"context": {"type": "string", "description": "Short label (e.g. 'user preference', 'project decision')."},
},
"required": ["content"],
},
}
RECALL_SCHEMA = {
"name": "hindsight_recall",
"description": (
"Search long-term memory. Returns memories ranked by relevance using "
"semantic search, keyword matching, entity graph traversal, and reranking."
),
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "What to search for."},
},
"required": ["query"],
},
}
REFLECT_SCHEMA = {
"name": "hindsight_reflect",
"description": (
"Synthesize a reasoned answer from long-term memories. Unlike recall, "
"this reasons across all stored memories to produce a coherent response."
),
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "The question to reflect on."},
},
"required": ["query"],
},
}
# ---------------------------------------------------------------------------
# Config
# ---------------------------------------------------------------------------
def _load_config() -> dict:
"""Load config from profile-scoped path, legacy path, or env vars.
Resolution order:
1. $HERMES_HOME/hindsight/config.json (profile-scoped)
2. ~/.hindsight/config.json (legacy, shared)
3. Environment variables
"""
from pathlib import Path
from hermes_constants import get_hermes_home
# Profile-scoped path (preferred)
profile_path = get_hermes_home() / "hindsight" / "config.json"
if profile_path.exists():
try:
return json.loads(profile_path.read_text(encoding="utf-8"))
except Exception:
pass
# Legacy shared path (backward compat)
legacy_path = Path.home() / ".hindsight" / "config.json"
if legacy_path.exists():
try:
return json.loads(legacy_path.read_text(encoding="utf-8"))
except Exception:
pass
return {
"mode": os.environ.get("HINDSIGHT_MODE", "cloud"),
"apiKey": os.environ.get("HINDSIGHT_API_KEY", ""),
"banks": {
"hermes": {
"bankId": os.environ.get("HINDSIGHT_BANK_ID", "hermes"),
"budget": os.environ.get("HINDSIGHT_BUDGET", "mid"),
"enabled": True,
}
},
}
# ---------------------------------------------------------------------------
# MemoryProvider implementation
# ---------------------------------------------------------------------------
class HindsightMemoryProvider(MemoryProvider):
"""Hindsight long-term memory with knowledge graph and multi-strategy retrieval."""
def __init__(self):
self._config = None
self._api_key = None
self._bank_id = "hermes"
self._budget = "mid"
self._mode = "cloud"
self._prefetch_result = ""
self._prefetch_lock = threading.Lock()
self._prefetch_thread = None
self._sync_thread = None
@property
def name(self) -> str:
return "hindsight"
def is_available(self) -> bool:
try:
cfg = _load_config()
mode = cfg.get("mode", "cloud")
if mode == "local":
embed = cfg.get("embed", {})
return bool(embed.get("llmApiKey") or os.environ.get("HINDSIGHT_LLM_API_KEY"))
api_key = cfg.get("apiKey") or os.environ.get("HINDSIGHT_API_KEY", "")
return bool(api_key)
except Exception:
return False
def save_config(self, values, hermes_home):
"""Write config to $HERMES_HOME/hindsight/config.json."""
from pathlib import Path
config_dir = Path(hermes_home) / "hindsight"
config_dir.mkdir(parents=True, exist_ok=True)
config_path = config_dir / "config.json"
existing = {}
if config_path.exists():
try:
existing = json.loads(config_path.read_text())
except Exception:
pass
existing.update(values)
config_path.write_text(json.dumps(existing, indent=2))
def get_config_schema(self):
return [
{"key": "mode", "description": "Cloud API or local embedded mode", "default": "cloud", "choices": ["cloud", "local"]},
{"key": "api_key", "description": "Hindsight Cloud API key", "secret": True, "env_var": "HINDSIGHT_API_KEY", "url": "https://app.hindsight.vectorize.io"},
{"key": "bank_id", "description": "Memory bank identifier", "default": "hermes"},
{"key": "budget", "description": "Recall thoroughness", "default": "mid", "choices": ["low", "mid", "high"]},
{"key": "llm_provider", "description": "LLM provider for local mode", "default": "anthropic", "choices": ["anthropic", "openai", "groq", "ollama"]},
{"key": "llm_api_key", "description": "LLM API key for local mode", "secret": True, "env_var": "HINDSIGHT_LLM_API_KEY"},
{"key": "llm_model", "description": "LLM model for local mode", "default": "claude-haiku-4-5-20251001"},
]
def _make_client(self):
"""Create a fresh Hindsight client (thread-safe)."""
if self._mode == "local":
from hindsight import HindsightEmbedded
embed = self._config.get("embed", {})
return HindsightEmbedded(
profile=embed.get("profile", "hermes"),
llm_provider=embed.get("llmProvider", ""),
llm_api_key=embed.get("llmApiKey", ""),
llm_model=embed.get("llmModel", ""),
)
from hindsight_client import Hindsight
return Hindsight(api_key=self._api_key, timeout=30.0)
def initialize(self, session_id: str, **kwargs) -> None:
self._config = _load_config()
self._mode = self._config.get("mode", "cloud")
self._api_key = self._config.get("apiKey") or os.environ.get("HINDSIGHT_API_KEY", "")
banks = self._config.get("banks", {}).get("hermes", {})
self._bank_id = banks.get("bankId", "hermes")
budget = banks.get("budget", "mid")
self._budget = budget if budget in _VALID_BUDGETS else "mid"
# Ensure bank exists
try:
client = _run_in_thread(self._make_client)
_run_in_thread(lambda: client.create_bank(bank_id=self._bank_id, name=self._bank_id))
except Exception:
pass # Already exists
def system_prompt_block(self) -> str:
return (
f"# Hindsight Memory\n"
f"Active. Bank: {self._bank_id}, budget: {self._budget}.\n"
f"Use hindsight_recall to search, hindsight_reflect for synthesis, "
f"hindsight_retain to store facts."
)
def prefetch(self, query: str, *, session_id: str = "") -> str:
if self._prefetch_thread and self._prefetch_thread.is_alive():
self._prefetch_thread.join(timeout=3.0)
with self._prefetch_lock:
result = self._prefetch_result
self._prefetch_result = ""
if not result:
return ""
return f"## Hindsight Memory\n{result}"
def queue_prefetch(self, query: str, *, session_id: str = "") -> None:
def _run():
try:
client = self._make_client()
resp = client.recall(bank_id=self._bank_id, query=query, budget=self._budget)
if resp.results:
text = "\n".join(r.text for r in resp.results if r.text)
with self._prefetch_lock:
self._prefetch_result = text
except Exception as e:
logger.debug("Hindsight prefetch failed: %s", e)
self._prefetch_thread = threading.Thread(target=_run, daemon=True, name="hindsight-prefetch")
self._prefetch_thread.start()
def sync_turn(self, user_content: str, assistant_content: str, *, session_id: str = "") -> None:
"""Retain conversation turn in background (non-blocking)."""
combined = f"User: {user_content}\nAssistant: {assistant_content}"
def _sync():
try:
_run_in_thread(
lambda: self._make_client().retain(
bank_id=self._bank_id, content=combined, context="conversation"
)
)
except Exception as e:
logger.warning("Hindsight sync failed: %s", e)
if self._sync_thread and self._sync_thread.is_alive():
self._sync_thread.join(timeout=5.0)
self._sync_thread = threading.Thread(target=_sync, daemon=True, name="hindsight-sync")
self._sync_thread.start()
def get_tool_schemas(self) -> List[Dict[str, Any]]:
return [RETAIN_SCHEMA, RECALL_SCHEMA, REFLECT_SCHEMA]
def handle_tool_call(self, tool_name: str, args: dict, **kwargs) -> str:
if tool_name == "hindsight_retain":
content = args.get("content", "")
if not content:
return json.dumps({"error": "Missing required parameter: content"})
context = args.get("context")
try:
_run_in_thread(
lambda: self._make_client().retain(
bank_id=self._bank_id, content=content, context=context
)
)
return json.dumps({"result": "Memory stored successfully."})
except Exception as e:
return json.dumps({"error": f"Failed to store memory: {e}"})
elif tool_name == "hindsight_recall":
query = args.get("query", "")
if not query:
return json.dumps({"error": "Missing required parameter: query"})
try:
resp = _run_in_thread(
lambda: self._make_client().recall(
bank_id=self._bank_id, query=query, budget=self._budget
)
)
if not resp.results:
return json.dumps({"result": "No relevant memories found."})
lines = [f"{i}. {r.text}" for i, r in enumerate(resp.results, 1)]
return json.dumps({"result": "\n".join(lines)})
except Exception as e:
return json.dumps({"error": f"Failed to search memory: {e}"})
elif tool_name == "hindsight_reflect":
query = args.get("query", "")
if not query:
return json.dumps({"error": "Missing required parameter: query"})
try:
resp = _run_in_thread(
lambda: self._make_client().reflect(
bank_id=self._bank_id, query=query, budget=self._budget
)
)
return json.dumps({"result": resp.text or "No relevant memories found."})
except Exception as e:
return json.dumps({"error": f"Failed to reflect: {e}"})
return json.dumps({"error": f"Unknown tool: {tool_name}"})
def shutdown(self) -> None:
for t in (self._prefetch_thread, self._sync_thread):
if t and t.is_alive():
t.join(timeout=5.0)
def register(ctx) -> None:
"""Register Hindsight as a memory provider plugin."""
ctx.register_memory_provider(HindsightMemoryProvider())

@@ -0,0 +1,9 @@
name: hindsight
version: 1.0.0
description: "Hindsight — long-term memory with knowledge graph, entity resolution, and multi-strategy retrieval."
pip_dependencies:
- hindsight-client
requires_env:
- HINDSIGHT_API_KEY
hooks:
- on_session_end

@@ -0,0 +1,36 @@
# Holographic Memory Provider
Local SQLite fact store with FTS5 search, trust scoring, entity resolution, and HRR-based compositional retrieval.
## Requirements
None — uses SQLite (always available). NumPy optional for HRR algebra.
## Setup
```bash
hermes memory setup # select "holographic"
```
Or manually:
```bash
hermes config set memory.provider holographic
```
## Config
Config in `config.yaml` under `plugins.hermes-memory-store`:
| Key | Default | Description |
|-----|---------|-------------|
| `db_path` | `$HERMES_HOME/memory_store.db` | SQLite database path |
| `auto_extract` | `false` | Auto-extract facts at session end |
| `default_trust` | `0.5` | Default trust score for new facts |
| `hrr_dim` | `1024` | HRR vector dimensions |
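For manual editing, the equivalent `config.yaml` fragment looks like this (values shown are the defaults):

```yaml
plugins:
  hermes-memory-store:
    db_path: $HERMES_HOME/memory_store.db
    auto_extract: false
    default_trust: 0.5
    hrr_dim: 1024
```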
## Tools
| Tool | Description |
|------|-------------|
| `fact_store` | 9 actions: add, search, probe, related, reason, contradict, update, remove, list |
| `fact_feedback` | Rate facts as helpful/unhelpful (trains trust scores) |
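As an illustration (the fact content and entity names below are made up), arguments for a simple `fact_store` add look like:

```json
{"action": "add", "content": "User prefers rebase over merge", "category": "user_pref", "tags": "git,workflow"}
```

and a compositional `reason` query across multiple entities:

```json
{"action": "reason", "entities": ["Alice", "deploy pipeline"], "limit": 5}
```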

@@ -0,0 +1,395 @@
"""hermes-memory-store — holographic memory plugin using MemoryProvider interface.
Registers as a MemoryProvider plugin, giving the agent structured fact storage
with entity resolution, trust scoring, and HRR-based compositional retrieval.
Original plugin by dusterbloom (PR #2351), adapted to the MemoryProvider ABC.
Config in $HERMES_HOME/config.yaml (profile-scoped):
plugins:
hermes-memory-store:
db_path: $HERMES_HOME/memory_store.db
auto_extract: false
default_trust: 0.5
min_trust_threshold: 0.3
temporal_decay_half_life: 0
"""
from __future__ import annotations
import json
import logging
import re
from pathlib import Path
from typing import Any, Dict, List
from agent.memory_provider import MemoryProvider
from .store import MemoryStore
from .retrieval import FactRetriever
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Tool schemas (unchanged from original PR)
# ---------------------------------------------------------------------------
FACT_STORE_SCHEMA = {
"name": "fact_store",
"description": (
"Deep structured memory with algebraic reasoning. "
"Use alongside the memory tool — memory for always-on context, "
"fact_store for deep recall and compositional queries.\n\n"
"ACTIONS (simple → powerful):\n"
"• add — Store a fact the user would expect you to remember.\n"
"• search — Keyword lookup ('editor config', 'deploy process').\n"
"• probe — Entity recall: ALL facts about a person/thing.\n"
"• related — What connects to an entity? Structural adjacency.\n"
"• reason — Compositional: facts connected to MULTIPLE entities simultaneously.\n"
"• contradict — Memory hygiene: find facts making conflicting claims.\n"
"• update/remove/list — CRUD operations.\n\n"
"IMPORTANT: Before answering questions about the user, ALWAYS probe or reason first."
),
"parameters": {
"type": "object",
"properties": {
"action": {
"type": "string",
"enum": ["add", "search", "probe", "related", "reason", "contradict", "update", "remove", "list"],
},
"content": {"type": "string", "description": "Fact content (required for 'add')."},
"query": {"type": "string", "description": "Search query (required for 'search')."},
"entity": {"type": "string", "description": "Entity name for 'probe'/'related'."},
"entities": {"type": "array", "items": {"type": "string"}, "description": "Entity names for 'reason'."},
"fact_id": {"type": "integer", "description": "Fact ID for 'update'/'remove'."},
"category": {"type": "string", "enum": ["user_pref", "project", "tool", "general"]},
"tags": {"type": "string", "description": "Comma-separated tags."},
"trust_delta": {"type": "number", "description": "Trust adjustment for 'update'."},
"min_trust": {"type": "number", "description": "Minimum trust filter (default: 0.3)."},
"limit": {"type": "integer", "description": "Max results (default: 10)."},
},
"required": ["action"],
},
}
FACT_FEEDBACK_SCHEMA = {
"name": "fact_feedback",
"description": (
"Rate a fact after using it. Mark 'helpful' if accurate, 'unhelpful' if outdated. "
"This trains the memory — good facts rise, bad facts sink."
),
"parameters": {
"type": "object",
"properties": {
"action": {"type": "string", "enum": ["helpful", "unhelpful"]},
"fact_id": {"type": "integer", "description": "The fact ID to rate."},
},
"required": ["action", "fact_id"],
},
}
# ---------------------------------------------------------------------------
# Config
# ---------------------------------------------------------------------------
def _load_plugin_config() -> dict:
from hermes_constants import get_hermes_home
config_path = get_hermes_home() / "config.yaml"
if not config_path.exists():
return {}
try:
import yaml
with open(config_path) as f:
all_config = yaml.safe_load(f) or {}
return all_config.get("plugins", {}).get("hermes-memory-store", {}) or {}
except Exception:
return {}
# ---------------------------------------------------------------------------
# MemoryProvider implementation
# ---------------------------------------------------------------------------
class HolographicMemoryProvider(MemoryProvider):
"""Holographic memory with structured facts, entity resolution, and HRR retrieval."""
def __init__(self, config: dict | None = None):
self._config = config or _load_plugin_config()
self._store = None
self._retriever = None
self._min_trust = float(self._config.get("min_trust_threshold", 0.3))
self._session_id = ""
@property
def name(self) -> str:
return "holographic"
def is_available(self) -> bool:
return True # SQLite is always available, numpy is optional
def save_config(self, values, hermes_home):
"""Write config to config.yaml under plugins.hermes-memory-store."""
from pathlib import Path
config_path = Path(hermes_home) / "config.yaml"
try:
import yaml
existing = {}
if config_path.exists():
with open(config_path) as f:
existing = yaml.safe_load(f) or {}
existing.setdefault("plugins", {})
existing["plugins"]["hermes-memory-store"] = values
with open(config_path, "w") as f:
yaml.dump(existing, f, default_flow_style=False)
except Exception:
pass
def get_config_schema(self):
from hermes_constants import display_hermes_home
_default_db = f"{display_hermes_home()}/memory_store.db"
return [
{"key": "db_path", "description": "SQLite database path", "default": _default_db},
{"key": "auto_extract", "description": "Auto-extract facts at session end", "default": "false", "choices": ["true", "false"]},
{"key": "default_trust", "description": "Default trust score for new facts", "default": "0.5"},
{"key": "hrr_dim", "description": "HRR vector dimensions", "default": "1024"},
]
def initialize(self, session_id: str, **kwargs) -> None:
from hermes_constants import get_hermes_home
_default_db = str(get_hermes_home() / "memory_store.db")
db_path = self._config.get("db_path", _default_db)
default_trust = float(self._config.get("default_trust", 0.5))
hrr_dim = int(self._config.get("hrr_dim", 1024))
hrr_weight = float(self._config.get("hrr_weight", 0.3))
temporal_decay = int(self._config.get("temporal_decay_half_life", 0))
self._store = MemoryStore(db_path=db_path, default_trust=default_trust, hrr_dim=hrr_dim)
self._retriever = FactRetriever(
store=self._store,
temporal_decay_half_life=temporal_decay,
hrr_weight=hrr_weight,
hrr_dim=hrr_dim,
)
self._session_id = session_id
def system_prompt_block(self) -> str:
if not self._store:
return ""
try:
total = self._store._conn.execute(
"SELECT COUNT(*) FROM facts"
).fetchone()[0]
except Exception:
total = 0
if total == 0:
return ""
return (
f"# Holographic Memory\n"
f"Active. {total} facts stored with entity resolution and trust scoring.\n"
f"Use fact_store to search, probe entities, reason across entities, or add facts.\n"
f"Use fact_feedback to rate facts after using them (trains trust scores)."
)
def prefetch(self, query: str, *, session_id: str = "") -> str:
if not self._retriever or not query:
return ""
try:
results = self._retriever.search(query, min_trust=self._min_trust, limit=5)
if not results:
return ""
lines = []
for r in results:
trust = r.get("trust", 0)
lines.append(f"- [{trust:.1f}] {r.get('content', '')}")
return "## Holographic Memory\n" + "\n".join(lines)
except Exception as e:
logger.debug("Holographic prefetch failed: %s", e)
return ""
def sync_turn(self, user_content: str, assistant_content: str, *, session_id: str = "") -> None:
# Holographic memory stores explicit facts via tools, not auto-sync.
# The on_session_end hook handles auto-extraction if configured.
pass
def get_tool_schemas(self) -> List[Dict[str, Any]]:
return [FACT_STORE_SCHEMA, FACT_FEEDBACK_SCHEMA]
def handle_tool_call(self, tool_name: str, args: Dict[str, Any], **kwargs) -> str:
if tool_name == "fact_store":
return self._handle_fact_store(args)
elif tool_name == "fact_feedback":
return self._handle_fact_feedback(args)
return json.dumps({"error": f"Unknown tool: {tool_name}"})
def on_session_end(self, messages: List[Dict[str, Any]]) -> None:
if not self._config.get("auto_extract", False):
return
if not self._store or not messages:
return
self._auto_extract_facts(messages)
def on_memory_write(self, action: str, target: str, content: str) -> None:
"""Mirror built-in memory writes as facts."""
if action == "add" and self._store and content:
try:
category = "user_pref" if target == "user" else "general"
self._store.add_fact(content, category=category)
except Exception as e:
logger.debug("Holographic memory_write mirror failed: %s", e)
def shutdown(self) -> None:
self._store = None
self._retriever = None
# -- Tool handlers -------------------------------------------------------
def _handle_fact_store(self, args: dict) -> str:
try:
action = args["action"]
store = self._store
retriever = self._retriever
if action == "add":
fact_id = store.add_fact(
args["content"],
category=args.get("category", "general"),
tags=args.get("tags", ""),
)
return json.dumps({"fact_id": fact_id, "status": "added"})
elif action == "search":
results = retriever.search(
args["query"],
category=args.get("category"),
min_trust=float(args.get("min_trust", self._min_trust)),
limit=int(args.get("limit", 10)),
)
return json.dumps({"results": results, "count": len(results)})
elif action == "probe":
results = retriever.probe(
args["entity"],
category=args.get("category"),
limit=int(args.get("limit", 10)),
)
return json.dumps({"results": results, "count": len(results)})
elif action == "related":
results = retriever.related(
args["entity"],
category=args.get("category"),
limit=int(args.get("limit", 10)),
)
return json.dumps({"results": results, "count": len(results)})
elif action == "reason":
entities = args.get("entities", [])
if not entities:
return json.dumps({"error": "reason requires 'entities' list"})
results = retriever.reason(
entities,
category=args.get("category"),
limit=int(args.get("limit", 10)),
)
return json.dumps({"results": results, "count": len(results)})
elif action == "contradict":
results = retriever.contradict(
category=args.get("category"),
limit=int(args.get("limit", 10)),
)
return json.dumps({"results": results, "count": len(results)})
elif action == "update":
updated = store.update_fact(
int(args["fact_id"]),
content=args.get("content"),
trust_delta=float(args["trust_delta"]) if "trust_delta" in args else None,
tags=args.get("tags"),
category=args.get("category"),
)
return json.dumps({"updated": updated})
elif action == "remove":
removed = store.remove_fact(int(args["fact_id"]))
return json.dumps({"removed": removed})
elif action == "list":
facts = store.list_facts(
category=args.get("category"),
min_trust=float(args.get("min_trust", 0.0)),
limit=int(args.get("limit", 10)),
)
return json.dumps({"facts": facts, "count": len(facts)})
else:
return json.dumps({"error": f"Unknown action: {action}"})
except KeyError as exc:
return json.dumps({"error": f"Missing required argument: {exc}"})
except Exception as exc:
return json.dumps({"error": str(exc)})
def _handle_fact_feedback(self, args: dict) -> str:
try:
fact_id = int(args["fact_id"])
helpful = args["action"] == "helpful"
result = self._store.record_feedback(fact_id, helpful=helpful)
return json.dumps(result)
except KeyError as exc:
return json.dumps({"error": f"Missing required argument: {exc}"})
except Exception as exc:
return json.dumps({"error": str(exc)})
# -- Auto-extraction (on_session_end) ------------------------------------
def _auto_extract_facts(self, messages: list) -> None:
_PREF_PATTERNS = [
re.compile(r'\bI\s+(?:prefer|like|love|use|want|need)\s+(.+)', re.IGNORECASE),
re.compile(r'\bmy\s+(?:favorite|preferred|default)\s+\w+\s+is\s+(.+)', re.IGNORECASE),
re.compile(r'\bI\s+(?:always|never|usually)\s+(.+)', re.IGNORECASE),
]
_DECISION_PATTERNS = [
re.compile(r'\bwe\s+(?:decided|agreed|chose)\s+(?:to\s+)?(.+)', re.IGNORECASE),
re.compile(r'\bthe\s+project\s+(?:uses|needs|requires)\s+(.+)', re.IGNORECASE),
]
extracted = 0
for msg in messages:
if msg.get("role") != "user":
continue
content = msg.get("content", "")
if not isinstance(content, str) or len(content) < 10:
continue
for pattern in _PREF_PATTERNS:
if pattern.search(content):
try:
self._store.add_fact(content[:400], category="user_pref")
extracted += 1
except Exception:
pass
break
for pattern in _DECISION_PATTERNS:
if pattern.search(content):
try:
self._store.add_fact(content[:400], category="project")
extracted += 1
except Exception:
pass
break
if extracted:
logger.info("Auto-extracted %d facts from conversation", extracted)
# ---------------------------------------------------------------------------
# Plugin entry point
# ---------------------------------------------------------------------------
def register(ctx) -> None:
"""Register the holographic memory provider with the plugin system."""
config = _load_plugin_config()
provider = HolographicMemoryProvider(config=config)
ctx.register_memory_provider(provider)


@@ -0,0 +1,203 @@
"""Holographic Reduced Representations (HRR) with phase encoding.
HRRs are a vector symbolic architecture for encoding compositional structure
into fixed-width distributed representations. This module uses *phase vectors*:
each concept is a vector of angles in [0, 2π). The algebraic operations are:
bind:   circular convolution (phase addition); associates two concepts
unbind: circular correlation (phase subtraction); retrieves a bound value
bundle: superposition (circular mean); merges multiple concepts
Phase encoding is numerically stable, avoids the magnitude collapse of
traditional complex-number HRRs, and maps cleanly to cosine similarity.
Atoms are generated deterministically from SHA-256 so representations are
identical across processes, machines, and language versions.
References:
Plate (1995) Holographic Reduced Representations
Gayler (2004) Vector Symbolic Architectures answer Jackendoff's challenges
"""
import hashlib
import logging
import struct
import math
try:
import numpy as np
_HAS_NUMPY = True
except ImportError:
_HAS_NUMPY = False
logger = logging.getLogger(__name__)
_TWO_PI = 2.0 * math.pi
def _require_numpy() -> None:
if not _HAS_NUMPY:
raise RuntimeError("numpy is required for holographic operations")
def encode_atom(word: str, dim: int = 1024) -> "np.ndarray":
"""Deterministic phase vector via SHA-256 counter blocks.
Uses hashlib (not numpy RNG) for cross-platform reproducibility.
Algorithm:
- Generate enough SHA-256 blocks by hashing f"{word}:{i}" for i=0,1,2,...
- Concatenate digests, interpret as uint16 values via struct.unpack
- Scale to [0, 2π): phases = values * (2π / 65536)
- Truncate to dim elements
- Returns np.float64 array of shape (dim,)
"""
_require_numpy()
# Each SHA-256 digest is 32 bytes = 16 uint16 values.
values_per_block = 16
blocks_needed = math.ceil(dim / values_per_block)
uint16_values: list[int] = []
for i in range(blocks_needed):
digest = hashlib.sha256(f"{word}:{i}".encode()).digest()
uint16_values.extend(struct.unpack("<16H", digest))
phases = np.array(uint16_values[:dim], dtype=np.float64) * (_TWO_PI / 65536.0)
return phases
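The determinism claim can be checked with a dependency-free sketch of the same derivation (pure stdlib, mirroring encode_atom without numpy; illustrative only):

```python
import hashlib
import math
import struct

def phases_for(word: str, dim: int = 8) -> list[float]:
    # SHA-256 counter blocks -> uint16 values -> angles in [0, 2*pi)
    values: list[int] = []
    for i in range(math.ceil(dim / 16)):
        digest = hashlib.sha256(f"{word}:{i}".encode()).digest()
        values.extend(struct.unpack("<16H", digest))
    return [v * (2.0 * math.pi / 65536.0) for v in values[:dim]]

a = phases_for("hermes")
b = phases_for("hermes")
assert a == b  # identical across calls, processes, and machines
assert all(0.0 <= p < 2.0 * math.pi for p in a)
```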
def bind(a: "np.ndarray", b: "np.ndarray") -> "np.ndarray":
"""Circular convolution = element-wise phase addition.
Binding associates two concepts into a single composite vector.
The result is dissimilar to both inputs (quasi-orthogonal).
"""
_require_numpy()
return (a + b) % _TWO_PI
def unbind(memory: "np.ndarray", key: "np.ndarray") -> "np.ndarray":
"""Circular correlation = element-wise phase subtraction.
Unbinding retrieves the value associated with a key from a memory vector.
unbind(bind(a, b), a) ≈ b (up to superposition noise)
"""
_require_numpy()
return (memory - key) % _TWO_PI
def bundle(*vectors: "np.ndarray") -> "np.ndarray":
"""Superposition via circular mean of complex exponentials.
Bundling merges multiple vectors into one that is similar to each input.
The result can hold O(sqrt(dim)) items before similarity degrades.
"""
_require_numpy()
complex_sum = np.sum([np.exp(1j * v) for v in vectors], axis=0)
return np.angle(complex_sum) % _TWO_PI
def similarity(a: "np.ndarray", b: "np.ndarray") -> float:
"""Phase cosine similarity. Range [-1, 1].
Returns 1.0 for identical vectors, near 0.0 for random (unrelated) vectors,
and -1.0 for perfectly anti-correlated vectors.
"""
_require_numpy()
return float(np.mean(np.cos(a - b)))
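The bind/unbind/similarity contract can be exercised with a list-based sketch of the same phase arithmetic (the module itself uses numpy; this is an illustrative re-implementation):

```python
import math
import random

def bind(a, b):
    return [(x + y) % (2 * math.pi) for x, y in zip(a, b)]

def unbind(m, k):
    return [(x - y) % (2 * math.pi) for x, y in zip(m, k)]

def similarity(a, b):
    return sum(math.cos(x - y) for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
dim = 1024
a = [rng.uniform(0, 2 * math.pi) for _ in range(dim)]
b = [rng.uniform(0, 2 * math.pi) for _ in range(dim)]

# unbind(bind(a, b), a) recovers b up to floating-point error
recovered = unbind(bind(a, b), a)
assert similarity(recovered, b) > 0.99
# independent random phase vectors are quasi-orthogonal
assert abs(similarity(a, b)) < 0.15
```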
def encode_text(text: str, dim: int = 1024) -> "np.ndarray":
"""Bag-of-words: bundle of atom vectors for each token.
Tokenizes by lowercasing, splitting on whitespace, and stripping
leading/trailing punctuation from each token.
Returns bundle of all token atom vectors.
If text is empty or produces no tokens, returns encode_atom("__hrr_empty__", dim).
"""
_require_numpy()
tokens = [
token.strip(".,!?;:\"'()[]{}")
for token in text.lower().split()
]
tokens = [t for t in tokens if t]
if not tokens:
return encode_atom("__hrr_empty__", dim)
atom_vectors = [encode_atom(token, dim) for token in tokens]
return bundle(*atom_vectors)
def encode_fact(content: str, entities: list[str], dim: int = 1024) -> "np.ndarray":
"""Structured encoding: content bound to ROLE_CONTENT, each entity bound to ROLE_ENTITY, all bundled.
Role vectors are reserved atoms: "__hrr_role_content__", "__hrr_role_entity__"
Components:
1. bind(encode_text(content, dim), encode_atom("__hrr_role_content__", dim))
2. For each entity: bind(encode_atom(entity.lower(), dim), encode_atom("__hrr_role_entity__", dim))
3. bundle all components together
This enables algebraic extraction:
unbind(fact, bind(entity, ROLE_ENTITY)) ≈ content_vector
"""
_require_numpy()
role_content = encode_atom("__hrr_role_content__", dim)
role_entity = encode_atom("__hrr_role_entity__", dim)
components: list[np.ndarray] = [
bind(encode_text(content, dim), role_content)
]
for entity in entities:
components.append(bind(encode_atom(entity.lower(), dim), role_entity))
return bundle(*components)
def phases_to_bytes(phases: "np.ndarray") -> bytes:
"""Serialize phase vector to bytes. float64 tobytes — 8 KB at dim=1024."""
_require_numpy()
return phases.tobytes()
def bytes_to_phases(data: bytes) -> "np.ndarray":
"""Deserialize bytes back to phase vector. Inverse of phases_to_bytes.
The .copy() call is required because frombuffer returns a read-only view
backed by the bytes object; callers expect a mutable array.
"""
_require_numpy()
return np.frombuffer(data, dtype=np.float64).copy()
def snr_estimate(dim: int, n_items: int) -> float:
"""Signal-to-noise ratio estimate for holographic storage.
SNR = sqrt(dim / n_items) when n_items > 0, else inf.
The SNR falls below 2.0 when n_items > dim / 4, meaning retrieval
errors become likely. Logs a warning when this threshold is crossed.
"""
_require_numpy()
if n_items <= 0:
return float("inf")
snr = math.sqrt(dim / n_items)
if snr < 2.0:
logger.warning(
"HRR storage near capacity: SNR=%.2f (dim=%d, n_items=%d). "
"Retrieval accuracy may degrade. Consider increasing dim or reducing stored items.",
snr,
dim,
n_items,
)
return snr
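A few worked values for the capacity rule (hypothetical dims and item counts):

```python
import math

def snr(dim: int, n_items: int) -> float:
    # Same formula as snr_estimate, without the logging side effect
    return math.sqrt(dim / n_items) if n_items > 0 else float("inf")

assert snr(1024, 64) == 4.0    # comfortable headroom
assert snr(1024, 256) == 2.0   # exactly at the warning threshold
assert snr(1024, 257) < 2.0    # n_items > dim / 4 -> degraded retrieval
```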


@@ -0,0 +1,5 @@
name: holographic
version: 0.1.0
description: "Holographic memory — local SQLite fact store with FTS5 search, trust scoring, and HRR-based compositional retrieval."
hooks:
- on_session_end


@@ -0,0 +1,593 @@
"""Hybrid keyword/BM25 retrieval for the memory store.
Ported from KIK memory_agent.py; combines FTS5 full-text search with
Jaccard similarity reranking and trust-weighted scoring.
"""
from __future__ import annotations
import math
from datetime import datetime, timezone
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from .store import MemoryStore
try:
from . import holographic as hrr
except ImportError:
import holographic as hrr # type: ignore[no-redef]
class FactRetriever:
"""Multi-strategy fact retrieval with trust-weighted scoring."""
def __init__(
self,
store: MemoryStore,
temporal_decay_half_life: int = 0, # days, 0 = disabled
fts_weight: float = 0.4,
jaccard_weight: float = 0.3,
hrr_weight: float = 0.3,
hrr_dim: int = 1024,
):
self.store = store
self.half_life = temporal_decay_half_life
self.hrr_dim = hrr_dim
# Auto-redistribute weights if numpy unavailable
if hrr_weight > 0 and not hrr._HAS_NUMPY:
fts_weight = 0.6
jaccard_weight = 0.4
hrr_weight = 0.0
self.fts_weight = fts_weight
self.jaccard_weight = jaccard_weight
self.hrr_weight = hrr_weight
def search(
self,
query: str,
category: str | None = None,
min_trust: float = 0.3,
limit: int = 10,
) -> list[dict]:
"""Hybrid search: FTS5 candidates → Jaccard rerank → trust weighting.
Pipeline:
1. FTS5 search: Get limit*3 candidates from SQLite full-text search
2. Jaccard boost: Token overlap between query and fact content
3. Trust weighting: final_score = relevance * trust_score
4. Temporal decay (optional): decay = 0.5^(age_days / half_life)
Returns list of dicts with fact data + 'score' field, sorted by score desc.
"""
# Stage 1: Get FTS5 candidates (more than limit for reranking headroom)
candidates = self._fts_candidates(query, category, min_trust, limit * 3)
if not candidates:
return []
# Stage 2: Rerank with Jaccard + trust + optional decay
query_tokens = self._tokenize(query)
scored = []
for fact in candidates:
content_tokens = self._tokenize(fact["content"])
tag_tokens = self._tokenize(fact.get("tags", ""))
all_tokens = content_tokens | tag_tokens
jaccard = self._jaccard_similarity(query_tokens, all_tokens)
fts_score = fact.get("fts_rank", 0.0)
# HRR similarity
if self.hrr_weight > 0 and fact.get("hrr_vector"):
fact_vec = hrr.bytes_to_phases(fact["hrr_vector"])
query_vec = hrr.encode_text(query, self.hrr_dim)
hrr_sim = (hrr.similarity(query_vec, fact_vec) + 1.0) / 2.0 # shift to [0,1]
else:
hrr_sim = 0.5 # neutral
# Combine FTS5 + Jaccard + HRR
relevance = (self.fts_weight * fts_score
+ self.jaccard_weight * jaccard
+ self.hrr_weight * hrr_sim)
# Trust weighting
score = relevance * fact["trust_score"]
# Optional temporal decay
if self.half_life > 0:
score *= self._temporal_decay(fact.get("updated_at") or fact.get("created_at"))
fact["score"] = score
scored.append(fact)
# Sort by score descending, return top limit
scored.sort(key=lambda x: x["score"], reverse=True)
results = scored[:limit]
# Strip raw HRR bytes — callers expect JSON-serializable dicts
for fact in results:
fact.pop("hrr_vector", None)
return results
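With the default weights (0.4 FTS / 0.3 Jaccard / 0.3 HRR) and some hypothetical component scores, the pipeline's arithmetic works out as:

```python
# Hypothetical per-fact inputs, not real retrieval output
fts_w, jac_w, hrr_w = 0.4, 0.3, 0.3
fts_score, jaccard, hrr_sim, trust = 1.0, 0.5, 0.6, 0.8

relevance = fts_w * fts_score + jac_w * jaccard + hrr_w * hrr_sim
score = relevance * trust

assert abs(relevance - 0.73) < 1e-9
assert abs(score - 0.584) < 1e-9

# Optional temporal decay: half_life=30 days, fact updated 60 days ago
decay = 0.5 ** (60 / 30)
assert decay == 0.25
```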
def probe(
self,
entity: str,
category: str | None = None,
limit: int = 10,
) -> list[dict]:
"""Compositional entity query using HRR algebra.
Unbinds entity from memory bank to extract associated content.
This is NOT keyword search: it uses algebraic structure to find facts
where the entity plays a structural role.
Falls back to FTS5 search if numpy unavailable.
"""
if not hrr._HAS_NUMPY:
# Fallback to keyword search on entity name
return self.search(entity, category=category, limit=limit)
conn = self.store._conn
# Encode entity as role-bound vector
role_entity = hrr.encode_atom("__hrr_role_entity__", self.hrr_dim)
entity_vec = hrr.encode_atom(entity.lower(), self.hrr_dim)
probe_key = hrr.bind(entity_vec, role_entity)
# Try category-specific bank first, then all facts
if category:
bank_name = f"cat:{category}"
bank_row = conn.execute(
"SELECT vector FROM memory_banks WHERE bank_name = ?",
(bank_name,),
).fetchone()
if bank_row:
bank_vec = hrr.bytes_to_phases(bank_row["vector"])
extracted = hrr.unbind(bank_vec, probe_key)
# Use extracted signal to score individual facts
return self._score_facts_by_vector(
extracted, category=category, limit=limit
)
# Score against individual fact vectors directly
where = "WHERE hrr_vector IS NOT NULL"
params: list = []
if category:
where += " AND category = ?"
params.append(category)
rows = conn.execute(
f"""
SELECT fact_id, content, category, tags, trust_score,
retrieval_count, helpful_count, created_at, updated_at,
hrr_vector
FROM facts
{where}
""",
params,
).fetchall()
if not rows:
# Final fallback: keyword search
return self.search(entity, category=category, limit=limit)
scored = []
for row in rows:
fact = dict(row)
fact_vec = hrr.bytes_to_phases(fact.pop("hrr_vector"))
# Unbind probe key from fact to see if entity is structurally present
residual = hrr.unbind(fact_vec, probe_key)
# Compare residual against content signal
role_content = hrr.encode_atom("__hrr_role_content__", self.hrr_dim)
content_vec = hrr.bind(hrr.encode_text(fact["content"], self.hrr_dim), role_content)
sim = hrr.similarity(residual, content_vec)
fact["score"] = (sim + 1.0) / 2.0 * fact["trust_score"]
scored.append(fact)
scored.sort(key=lambda x: x["score"], reverse=True)
return scored[:limit]
def related(
self,
entity: str,
category: str | None = None,
limit: int = 10,
) -> list[dict]:
"""Discover facts that share structural connections with an entity.
Unlike probe (which finds facts *about* an entity), related finds
facts that are connected through shared context, e.g. other entities
mentioned alongside this one, or content that overlaps structurally.
Falls back to FTS5 search if numpy unavailable.
"""
if not hrr._HAS_NUMPY:
return self.search(entity, category=category, limit=limit)
conn = self.store._conn
# Encode entity as a bare atom (not role-bound — we want ANY structural match)
entity_vec = hrr.encode_atom(entity.lower(), self.hrr_dim)
# Get all facts with vectors
where = "WHERE hrr_vector IS NOT NULL"
params: list = []
if category:
where += " AND category = ?"
params.append(category)
rows = conn.execute(
f"""
SELECT fact_id, content, category, tags, trust_score,
retrieval_count, helpful_count, created_at, updated_at,
hrr_vector
FROM facts
{where}
""",
params,
).fetchall()
if not rows:
return self.search(entity, category=category, limit=limit)
# Score each fact by how much the entity's atom appears in its vector
# This catches both role-bound entity matches AND content word matches
scored = []
for row in rows:
fact = dict(row)
fact_vec = hrr.bytes_to_phases(fact.pop("hrr_vector"))
# Check structural similarity: unbind entity from fact
residual = hrr.unbind(fact_vec, entity_vec)
# A high-similarity residual to ANY known role vector means this entity
# plays a structural role in the fact
role_entity = hrr.encode_atom("__hrr_role_entity__", self.hrr_dim)
role_content = hrr.encode_atom("__hrr_role_content__", self.hrr_dim)
entity_role_sim = hrr.similarity(residual, role_entity)
content_role_sim = hrr.similarity(residual, role_content)
# Take the max — entity could appear in either role
best_sim = max(entity_role_sim, content_role_sim)
fact["score"] = (best_sim + 1.0) / 2.0 * fact["trust_score"]
scored.append(fact)
scored.sort(key=lambda x: x["score"], reverse=True)
return scored[:limit]
def reason(
self,
entities: list[str],
category: str | None = None,
limit: int = 10,
) -> list[dict]:
"""Multi-entity compositional query — vector-space JOIN.
Given multiple entities, algebraically intersects their structural
connections to find facts related to ALL of them simultaneously.
This is compositional reasoning that no embedding DB can do.
Example: reason(["peppi", "backend"]) finds facts where peppi AND
backend both play structural roles, without keyword matching.
Falls back to FTS5 search if numpy unavailable.
"""
if not hrr._HAS_NUMPY or not entities:
# Fallback: search with all entities as keywords
query = " ".join(entities)
return self.search(query, category=category, limit=limit)
conn = self.store._conn
role_entity = hrr.encode_atom("__hrr_role_entity__", self.hrr_dim)
# For each entity, compute what the bank "remembers" about it
# by unbinding entity+role from each fact vector
entity_residuals = []
for entity in entities:
entity_vec = hrr.encode_atom(entity.lower(), self.hrr_dim)
probe_key = hrr.bind(entity_vec, role_entity)
entity_residuals.append(probe_key)
# Get all facts with vectors
where = "WHERE hrr_vector IS NOT NULL"
params: list = []
if category:
where += " AND category = ?"
params.append(category)
rows = conn.execute(
f"""
SELECT fact_id, content, category, tags, trust_score,
retrieval_count, helpful_count, created_at, updated_at,
hrr_vector
FROM facts
{where}
""",
params,
).fetchall()
if not rows:
query = " ".join(entities)
return self.search(query, category=category, limit=limit)
# Score each fact by how much EACH entity is structurally present.
# A fact scores high only if ALL entities have structural presence
# (AND semantics via min, vs OR which would use mean/max).
role_content = hrr.encode_atom("__hrr_role_content__", self.hrr_dim)
scored = []
for row in rows:
fact = dict(row)
fact_vec = hrr.bytes_to_phases(fact.pop("hrr_vector"))
entity_scores = []
for probe_key in entity_residuals:
residual = hrr.unbind(fact_vec, probe_key)
sim = hrr.similarity(residual, role_content)
entity_scores.append(sim)
min_sim = min(entity_scores)
fact["score"] = (min_sim + 1.0) / 2.0 * fact["trust_score"]
scored.append(fact)
scored.sort(key=lambda x: x["score"], reverse=True)
return scored[:limit]
def contradict(
self,
category: str | None = None,
threshold: float = 0.3,
limit: int = 10,
) -> list[dict]:
"""Find potentially contradictory facts via entity overlap + content divergence.
Two facts contradict when they share entities (same subject) but have
low content-vector similarity (different claims). This is automated
memory hygiene; no other memory system does this.
Returns pairs of facts with a contradiction score.
Falls back to empty list if numpy unavailable.
"""
if not hrr._HAS_NUMPY:
return []
conn = self.store._conn
# Get all facts with vectors and their linked entities
where = "WHERE f.hrr_vector IS NOT NULL"
params: list = []
if category:
where += " AND f.category = ?"
params.append(category)
rows = conn.execute(
f"""
SELECT f.fact_id, f.content, f.category, f.tags, f.trust_score,
f.created_at, f.updated_at, f.hrr_vector
FROM facts f
{where}
""",
params,
).fetchall()
if len(rows) < 2:
return []
# Guard against O(n²) explosion on large fact stores.
# At 500 facts, that's ~125K comparisons — acceptable.
# Above that, only check the most recently updated facts.
_MAX_CONTRADICT_FACTS = 500
if len(rows) > _MAX_CONTRADICT_FACTS:
rows = sorted(rows, key=lambda r: r["updated_at"] or r["created_at"], reverse=True)
rows = rows[:_MAX_CONTRADICT_FACTS]
# Build entity sets per fact
fact_entities: dict[int, set[str]] = {}
for row in rows:
fid = row["fact_id"]
entity_rows = conn.execute(
"""
SELECT e.name FROM entities e
JOIN fact_entities fe ON fe.entity_id = e.entity_id
WHERE fe.fact_id = ?
""",
(fid,),
).fetchall()
fact_entities[fid] = {r["name"].lower() for r in entity_rows}
# Compare all pairs: high entity overlap + low content similarity = contradiction
facts = [dict(r) for r in rows]
contradictions = []
for i in range(len(facts)):
for j in range(i + 1, len(facts)):
f1, f2 = facts[i], facts[j]
ents1 = fact_entities.get(f1["fact_id"], set())
ents2 = fact_entities.get(f2["fact_id"], set())
if not ents1 or not ents2:
continue
# Entity overlap (Jaccard)
entity_overlap = len(ents1 & ents2) / len(ents1 | ents2) if (ents1 | ents2) else 0.0
if entity_overlap < 0.3:
continue # Not enough entity overlap to be contradictory
# Content similarity via HRR vectors
v1 = hrr.bytes_to_phases(f1["hrr_vector"])
v2 = hrr.bytes_to_phases(f2["hrr_vector"])
content_sim = hrr.similarity(v1, v2)
# High entity overlap + low content similarity = potential contradiction
# contradiction_score: higher = more contradictory
contradiction_score = entity_overlap * (1.0 - (content_sim + 1.0) / 2.0)
if contradiction_score >= threshold:
# Strip hrr_vector from output (not JSON serializable)
f1_clean = {k: v for k, v in f1.items() if k != "hrr_vector"}
f2_clean = {k: v for k, v in f2.items() if k != "hrr_vector"}
contradictions.append({
"fact_a": f1_clean,
"fact_b": f2_clean,
"entity_overlap": round(entity_overlap, 3),
"content_similarity": round(content_sim, 3),
"contradiction_score": round(contradiction_score, 3),
"shared_entities": sorted(ents1 & ents2),
})
contradictions.sort(key=lambda x: x["contradiction_score"], reverse=True)
return contradictions[:limit]
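The contradiction score above, worked for one hypothetical pair of facts:

```python
# Two facts that share the "peppi" entity but make diverging claims
ents1 = {"peppi", "backend"}
ents2 = {"peppi", "frontend"}
entity_overlap = len(ents1 & ents2) / len(ents1 | ents2)   # 1/3

content_sim = -0.2  # hypothetical HRR similarity of the two content vectors
contradiction_score = entity_overlap * (1.0 - (content_sim + 1.0) / 2.0)

assert abs(entity_overlap - 1 / 3) < 1e-9
assert abs(contradiction_score - 0.2) < 1e-9
# Below the default threshold of 0.3, so this pair would be filtered out
assert contradiction_score < 0.3
```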
def _score_facts_by_vector(
self,
target_vec: "np.ndarray",
category: str | None = None,
limit: int = 10,
) -> list[dict]:
"""Score facts by similarity to a target vector."""
conn = self.store._conn
where = "WHERE hrr_vector IS NOT NULL"
params: list = []
if category:
where += " AND category = ?"
params.append(category)
rows = conn.execute(
f"""
SELECT fact_id, content, category, tags, trust_score,
retrieval_count, helpful_count, created_at, updated_at,
hrr_vector
FROM facts
{where}
""",
params,
).fetchall()
scored = []
for row in rows:
fact = dict(row)
fact_vec = hrr.bytes_to_phases(fact.pop("hrr_vector"))
sim = hrr.similarity(target_vec, fact_vec)
fact["score"] = (sim + 1.0) / 2.0 * fact["trust_score"]
scored.append(fact)
scored.sort(key=lambda x: x["score"], reverse=True)
return scored[:limit]
def _fts_candidates(
self,
query: str,
category: str | None,
min_trust: float,
limit: int,
) -> list[dict]:
"""Get raw FTS5 candidates from the store.
Uses the store's database connection directly for FTS5 MATCH
with rank scoring. Normalizes FTS5 rank to [0, 1] range.
"""
conn = self.store._conn
# Build query - FTS5 rank is negative (lower = better match)
# We need to join facts_fts with facts to get all columns
params: list = []
where_clauses = ["facts_fts MATCH ?"]
params.append(query)
if category:
where_clauses.append("f.category = ?")
params.append(category)
where_clauses.append("f.trust_score >= ?")
params.append(min_trust)
where_sql = " AND ".join(where_clauses)
sql = f"""
SELECT f.*, facts_fts.rank as fts_rank_raw
FROM facts_fts
JOIN facts f ON f.fact_id = facts_fts.rowid
WHERE {where_sql}
ORDER BY facts_fts.rank
LIMIT ?
"""
params.append(limit)
try:
rows = conn.execute(sql, params).fetchall()
except Exception:
# FTS5 MATCH can fail on malformed queries — fall back to empty
return []
if not rows:
return []
# Normalize FTS5 rank: rank is negative, lower = better
# Convert to positive score in [0, 1] range
raw_ranks = [abs(row["fts_rank_raw"]) for row in rows]
max_rank = max(raw_ranks) if raw_ranks else 1.0
max_rank = max(max_rank, 1e-6) # avoid div by zero
results = []
for row, raw_rank in zip(rows, raw_ranks):
fact = dict(row)
fact.pop("fts_rank_raw", None)
fact["fts_rank"] = raw_rank / max_rank # normalize to [0, 1]
results.append(fact)
return results
@staticmethod
def _tokenize(text: str) -> set[str]:
"""Simple whitespace tokenization with lowercasing.
Strips common punctuation. No stemming/lemmatization (Phase 1).
"""
if not text:
return set()
# Split on whitespace, lowercase, strip punctuation
tokens = set()
for word in text.lower().split():
cleaned = word.strip(".,;:!?\"'()[]{}#@<>")
if cleaned:
tokens.add(cleaned)
return tokens
@staticmethod
def _jaccard_similarity(set_a: set, set_b: set) -> float:
"""Jaccard similarity coefficient: |A ∩ B| / |A B|."""
if not set_a or not set_b:
return 0.0
intersection = len(set_a & set_b)
union = len(set_a | set_b)
return intersection / union if union > 0 else 0.0
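A worked Jaccard example using the same tokenization rules (inline re-implementation for illustration; sentences are hypothetical):

```python
def tokenize(text: str) -> set[str]:
    # Lowercase, split on whitespace, strip surrounding punctuation
    return {w.strip(".,;:!?\"'()[]{}#@<>") for w in text.lower().split()} - {""}

a = tokenize("I prefer Rust for backend services.")
b = tokenize("The backend uses Rust!")

inter, union = a & b, a | b
assert inter == {"rust", "backend"}
assert len(union) == 8
assert len(inter) / len(union) == 0.25
```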
def _temporal_decay(self, timestamp_str: str | None) -> float:
"""Exponential decay: 0.5^(age_days / half_life_days).
Returns 1.0 if decay is disabled or timestamp is missing.
"""
if not self.half_life or not timestamp_str:
return 1.0
try:
if isinstance(timestamp_str, str):
# Parse ISO format timestamp from SQLite
ts = datetime.fromisoformat(timestamp_str.replace("Z", "+00:00"))
else:
ts = timestamp_str
if ts.tzinfo is None:
ts = ts.replace(tzinfo=timezone.utc)
age_days = (datetime.now(timezone.utc) - ts).total_seconds() / 86400
if age_days < 0:
return 1.0
return math.pow(0.5, age_days / self.half_life)
except (ValueError, TypeError):
return 1.0
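Worked values for the decay formula at a hypothetical 30-day half-life:

```python
import math

half_life = 30.0  # days
for age_days, expected in [(0, 1.0), (30, 0.5), (60, 0.25), (90, 0.125)]:
    # decay = 0.5^(age_days / half_life_days)
    assert abs(math.pow(0.5, age_days / half_life) - expected) < 1e-12
```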


@@ -0,0 +1,575 @@
"""
SQLite-backed fact store with entity resolution and trust scoring.
Single-user Hermes memory store plugin.
"""
import re
import sqlite3
import threading
from datetime import datetime
from pathlib import Path
try:
from . import holographic as hrr
except ImportError:
import holographic as hrr # type: ignore[no-redef]
_SCHEMA = """
CREATE TABLE IF NOT EXISTS facts (
fact_id INTEGER PRIMARY KEY AUTOINCREMENT,
content TEXT NOT NULL UNIQUE,
category TEXT DEFAULT 'general',
tags TEXT DEFAULT '',
trust_score REAL DEFAULT 0.5,
retrieval_count INTEGER DEFAULT 0,
helpful_count INTEGER DEFAULT 0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
hrr_vector BLOB
);
CREATE TABLE IF NOT EXISTS entities (
entity_id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
entity_type TEXT DEFAULT 'unknown',
aliases TEXT DEFAULT '',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS fact_entities (
fact_id INTEGER REFERENCES facts(fact_id),
entity_id INTEGER REFERENCES entities(entity_id),
PRIMARY KEY (fact_id, entity_id)
);
CREATE INDEX IF NOT EXISTS idx_facts_trust ON facts(trust_score DESC);
CREATE INDEX IF NOT EXISTS idx_facts_category ON facts(category);
CREATE INDEX IF NOT EXISTS idx_entities_name ON entities(name);
CREATE VIRTUAL TABLE IF NOT EXISTS facts_fts
USING fts5(content, tags, content=facts, content_rowid=fact_id);
CREATE TRIGGER IF NOT EXISTS facts_ai AFTER INSERT ON facts BEGIN
INSERT INTO facts_fts(rowid, content, tags)
VALUES (new.fact_id, new.content, new.tags);
END;
CREATE TRIGGER IF NOT EXISTS facts_ad AFTER DELETE ON facts BEGIN
INSERT INTO facts_fts(facts_fts, rowid, content, tags)
VALUES ('delete', old.fact_id, old.content, old.tags);
END;
CREATE TRIGGER IF NOT EXISTS facts_au AFTER UPDATE ON facts BEGIN
INSERT INTO facts_fts(facts_fts, rowid, content, tags)
VALUES ('delete', old.fact_id, old.content, old.tags);
INSERT INTO facts_fts(rowid, content, tags)
VALUES (new.fact_id, new.content, new.tags);
END;
CREATE TABLE IF NOT EXISTS memory_banks (
bank_id INTEGER PRIMARY KEY AUTOINCREMENT,
bank_name TEXT NOT NULL UNIQUE,
vector BLOB NOT NULL,
dim INTEGER NOT NULL,
fact_count INTEGER DEFAULT 0,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
"""
# Trust adjustment constants
_HELPFUL_DELTA = 0.05
_UNHELPFUL_DELTA = -0.10
_TRUST_MIN = 0.0
_TRUST_MAX = 1.0
# Entity extraction patterns
_RE_CAPITALIZED = re.compile(r'\b([A-Z][a-z]+(?:\s+[A-Z][a-z]+)+)\b')
_RE_DOUBLE_QUOTE = re.compile(r'"([^"]+)"')
_RE_SINGLE_QUOTE = re.compile(r"'([^']+)'")
_RE_AKA = re.compile(
r'(\w+(?:\s+\w+)*)\s+(?:aka|also known as)\s+(\w+(?:\s+\w+)*)',
re.IGNORECASE,
)
def _clamp_trust(value: float) -> float:
return max(_TRUST_MIN, min(_TRUST_MAX, value))
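The feedback constants imply an asymmetric trust trajectory (unhelpful feedback costs twice what helpful feedback earns), with clamping as the floor and ceiling; a sketch with the clamp re-implemented inline:

```python
HELPFUL_DELTA, UNHELPFUL_DELTA = 0.05, -0.10

def clamp(v: float) -> float:
    return max(0.0, min(1.0, v))

trust = 0.5                              # default_trust for a new fact
trust = clamp(trust + HELPFUL_DELTA)     # one helpful vote -> 0.55
for _ in range(10):                      # repeated unhelpful votes
    trust = clamp(trust + UNHELPFUL_DELTA)
assert trust == 0.0                      # clamped at the floor, never negative
```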
class MemoryStore:
"""SQLite-backed fact store with entity resolution and trust scoring."""
def __init__(
self,
db_path: "str | Path | None" = None,
default_trust: float = 0.5,
hrr_dim: int = 1024,
) -> None:
if db_path is None:
from hermes_constants import get_hermes_home
db_path = str(get_hermes_home() / "memory_store.db")
self.db_path = Path(db_path).expanduser()
self.db_path.parent.mkdir(parents=True, exist_ok=True)
self.default_trust = _clamp_trust(default_trust)
self.hrr_dim = hrr_dim
self._hrr_available = hrr._HAS_NUMPY
self._conn: sqlite3.Connection = sqlite3.connect(
str(self.db_path),
check_same_thread=False,
timeout=10.0,
)
self._lock = threading.RLock()
self._conn.row_factory = sqlite3.Row
self._init_db()
# ------------------------------------------------------------------
# Initialisation
# ------------------------------------------------------------------
def _init_db(self) -> None:
"""Create tables, indexes, and triggers if they do not exist. Enable WAL mode."""
self._conn.execute("PRAGMA journal_mode=WAL")
self._conn.executescript(_SCHEMA)
# Migrate: add hrr_vector column if missing (safe for existing databases)
columns = {row[1] for row in self._conn.execute("PRAGMA table_info(facts)").fetchall()}
if "hrr_vector" not in columns:
self._conn.execute("ALTER TABLE facts ADD COLUMN hrr_vector BLOB")
self._conn.commit()
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def add_fact(
self,
content: str,
category: str = "general",
tags: str = "",
) -> int:
"""Insert a fact and return its fact_id.
Deduplicates by content (UNIQUE constraint). On duplicate, returns
the existing fact_id without modifying the row. Extracts entities from
the content and links them to the fact.
"""
with self._lock:
content = content.strip()
if not content:
raise ValueError("content must not be empty")
try:
cur = self._conn.execute(
"""
INSERT INTO facts (content, category, tags, trust_score)
VALUES (?, ?, ?, ?)
""",
(content, category, tags, self.default_trust),
)
self._conn.commit()
fact_id: int = cur.lastrowid # type: ignore[assignment]
except sqlite3.IntegrityError:
# Duplicate content — return existing id
row = self._conn.execute(
"SELECT fact_id FROM facts WHERE content = ?", (content,)
).fetchone()
return int(row["fact_id"])
# Entity extraction and linking
for name in self._extract_entities(content):
entity_id = self._resolve_entity(name)
self._link_fact_entity(fact_id, entity_id)
# Compute HRR vector after entity linking
self._compute_hrr_vector(fact_id, content)
self._rebuild_bank(category)
return fact_id
def search_facts(
self,
query: str,
category: str | None = None,
min_trust: float = 0.3,
limit: int = 10,
) -> list[dict]:
"""Full-text search over facts using FTS5.
Returns a list of fact dicts ordered by FTS5 rank, then trust_score
descending. Also increments retrieval_count for matched facts.
"""
with self._lock:
query = query.strip()
if not query:
return []
params: list = [query, min_trust]
category_clause = ""
if category is not None:
category_clause = "AND f.category = ?"
params.append(category)
params.append(limit)
sql = f"""
SELECT f.fact_id, f.content, f.category, f.tags,
f.trust_score, f.retrieval_count, f.helpful_count,
f.created_at, f.updated_at
FROM facts f
JOIN facts_fts fts ON fts.rowid = f.fact_id
WHERE facts_fts MATCH ?
AND f.trust_score >= ?
{category_clause}
ORDER BY fts.rank, f.trust_score DESC
LIMIT ?
"""
rows = self._conn.execute(sql, params).fetchall()
results = [self._row_to_dict(r) for r in rows]
if results:
ids = [r["fact_id"] for r in results]
placeholders = ",".join("?" * len(ids))
self._conn.execute(
f"UPDATE facts SET retrieval_count = retrieval_count + 1 WHERE fact_id IN ({placeholders})",
ids,
)
self._conn.commit()
return results
def update_fact(
self,
fact_id: int,
content: str | None = None,
trust_delta: float | None = None,
tags: str | None = None,
category: str | None = None,
) -> bool:
"""Partially update a fact. Trust is clamped to [0, 1].
Returns True if the row existed, False otherwise.
"""
with self._lock:
row = self._conn.execute(
"SELECT fact_id, trust_score, category FROM facts WHERE fact_id = ?", (fact_id,)
).fetchone()
if row is None:
return False
old_category: str = row["category"]
assignments: list[str] = ["updated_at = CURRENT_TIMESTAMP"]
params: list = []
if content is not None:
assignments.append("content = ?")
params.append(content.strip())
if tags is not None:
assignments.append("tags = ?")
params.append(tags)
if category is not None:
assignments.append("category = ?")
params.append(category)
if trust_delta is not None:
new_trust = _clamp_trust(row["trust_score"] + trust_delta)
assignments.append("trust_score = ?")
params.append(new_trust)
params.append(fact_id)
self._conn.execute(
f"UPDATE facts SET {', '.join(assignments)} WHERE fact_id = ?",
params,
)
self._conn.commit()
# If content changed, re-extract entities
if content is not None:
self._conn.execute(
"DELETE FROM fact_entities WHERE fact_id = ?", (fact_id,)
)
for name in self._extract_entities(content):
entity_id = self._resolve_entity(name)
self._link_fact_entity(fact_id, entity_id)
self._conn.commit()
# Recompute HRR vector if content changed
if content is not None:
self._compute_hrr_vector(fact_id, content)
# Rebuild banks for the affected categories. When the category changed,
# the old category's bank still holds this fact's vector and must be
# rebuilt too, or it keeps a stale contribution.
new_category = category or old_category
self._rebuild_bank(new_category)
if category is not None and category != old_category:
self._rebuild_bank(old_category)
return True
def remove_fact(self, fact_id: int) -> bool:
"""Delete a fact and its entity links. Returns True if the row existed."""
with self._lock:
row = self._conn.execute(
"SELECT fact_id, category FROM facts WHERE fact_id = ?", (fact_id,)
).fetchone()
if row is None:
return False
self._conn.execute(
"DELETE FROM fact_entities WHERE fact_id = ?", (fact_id,)
)
self._conn.execute("DELETE FROM facts WHERE fact_id = ?", (fact_id,))
self._conn.commit()
self._rebuild_bank(row["category"])
return True
def list_facts(
self,
category: str | None = None,
min_trust: float = 0.0,
limit: int = 50,
) -> list[dict]:
"""Browse facts ordered by trust_score descending.
Optionally filter by category and minimum trust score.
"""
with self._lock:
params: list = [min_trust]
category_clause = ""
if category is not None:
category_clause = "AND category = ?"
params.append(category)
params.append(limit)
sql = f"""
SELECT fact_id, content, category, tags, trust_score,
retrieval_count, helpful_count, created_at, updated_at
FROM facts
WHERE trust_score >= ?
{category_clause}
ORDER BY trust_score DESC
LIMIT ?
"""
rows = self._conn.execute(sql, params).fetchall()
return [self._row_to_dict(r) for r in rows]
def record_feedback(self, fact_id: int, helpful: bool) -> dict:
"""Record user feedback and adjust trust asymmetrically.
helpful=True -> trust += 0.05, helpful_count += 1
helpful=False -> trust -= 0.10
Returns a dict with fact_id, old_trust, new_trust, helpful_count.
Raises KeyError if fact_id does not exist.
"""
with self._lock:
row = self._conn.execute(
"SELECT fact_id, trust_score, helpful_count FROM facts WHERE fact_id = ?",
(fact_id,),
).fetchone()
if row is None:
raise KeyError(f"fact_id {fact_id} not found")
old_trust: float = row["trust_score"]
delta = _HELPFUL_DELTA if helpful else _UNHELPFUL_DELTA
new_trust = _clamp_trust(old_trust + delta)
helpful_increment = 1 if helpful else 0
self._conn.execute(
"""
UPDATE facts
SET trust_score = ?,
helpful_count = helpful_count + ?,
updated_at = CURRENT_TIMESTAMP
WHERE fact_id = ?
""",
(new_trust, helpful_increment, fact_id),
)
self._conn.commit()
return {
"fact_id": fact_id,
"old_trust": old_trust,
"new_trust": new_trust,
"helpful_count": row["helpful_count"] + helpful_increment,
}
# ------------------------------------------------------------------
# Entity helpers
# ------------------------------------------------------------------
def _extract_entities(self, text: str) -> list[str]:
"""Extract entity candidates from text using simple regex rules.
Rules applied (in order):
1. Capitalized multi-word phrases e.g. "John Doe"
2. Double-quoted terms e.g. "Python"
3. Single-quoted terms e.g. 'pytest'
4. AKA patterns e.g. "Guido aka BDFL" -> two entities
Returns a deduplicated list preserving first-seen order.
"""
seen: set[str] = set()
candidates: list[str] = []
def _add(name: str) -> None:
stripped = name.strip()
if stripped and stripped.lower() not in seen:
seen.add(stripped.lower())
candidates.append(stripped)
for m in _RE_CAPITALIZED.finditer(text):
_add(m.group(1))
for m in _RE_DOUBLE_QUOTE.finditer(text):
_add(m.group(1))
for m in _RE_SINGLE_QUOTE.finditer(text):
_add(m.group(1))
for m in _RE_AKA.finditer(text):
_add(m.group(1))
_add(m.group(2))
return candidates
def _resolve_entity(self, name: str) -> int:
"""Find an existing entity by name or alias (case-insensitive) or create one.
Returns the entity_id.
"""
# Exact name match, case-insensitive. Use = with COLLATE NOCASE rather
# than LIKE so % and _ in entity names are not treated as wildcards.
row = self._conn.execute(
"SELECT entity_id FROM entities WHERE name = ? COLLATE NOCASE", (name,)
).fetchone()
if row is not None:
return int(row["entity_id"])
# Search aliases — aliases stored as comma-separated; use LIKE with % boundaries
alias_row = self._conn.execute(
"""
SELECT entity_id FROM entities
WHERE ',' || aliases || ',' LIKE '%,' || ? || ',%'
""",
(name,),
).fetchone()
if alias_row is not None:
return int(alias_row["entity_id"])
# Create new entity
cur = self._conn.execute(
"INSERT INTO entities (name) VALUES (?)", (name,)
)
self._conn.commit()
return int(cur.lastrowid) # type: ignore[return-value]
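# Alias match example: with aliases = "BDFL,Guido van Rossum", the wrapped
# string ",BDFL,Guido van Rossum," is tested against '%,<name>,%', so only
# whole comma-delimited aliases match (no partial hit on "Guido").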
def _link_fact_entity(self, fact_id: int, entity_id: int) -> None:
"""Insert into fact_entities, silently ignore if the link already exists."""
self._conn.execute(
"""
INSERT OR IGNORE INTO fact_entities (fact_id, entity_id)
VALUES (?, ?)
""",
(fact_id, entity_id),
)
self._conn.commit()
def _compute_hrr_vector(self, fact_id: int, content: str) -> None:
"""Compute and store HRR vector for a fact. No-op if numpy unavailable."""
# Callers (add_fact/update_fact) already hold self._lock, so the lock
# must be reentrant (threading.RLock) for this nested acquire to work.
with self._lock:
if not self._hrr_available:
return
# Get entities linked to this fact
rows = self._conn.execute(
"""
SELECT e.name FROM entities e
JOIN fact_entities fe ON fe.entity_id = e.entity_id
WHERE fe.fact_id = ?
""",
(fact_id,),
).fetchall()
entities = [row["name"] for row in rows]
vector = hrr.encode_fact(content, entities, self.hrr_dim)
self._conn.execute(
"UPDATE facts SET hrr_vector = ? WHERE fact_id = ?",
(hrr.phases_to_bytes(vector), fact_id),
)
self._conn.commit()
def _rebuild_bank(self, category: str) -> None:
"""Full rebuild of a category's memory bank from all its fact vectors."""
with self._lock:
if not self._hrr_available:
return
bank_name = f"cat:{category}"
rows = self._conn.execute(
"SELECT hrr_vector FROM facts WHERE category = ? AND hrr_vector IS NOT NULL",
(category,),
).fetchall()
if not rows:
self._conn.execute("DELETE FROM memory_banks WHERE bank_name = ?", (bank_name,))
self._conn.commit()
return
vectors = [hrr.bytes_to_phases(row["hrr_vector"]) for row in rows]
bank_vector = hrr.bundle(*vectors)
fact_count = len(vectors)
# Check SNR
hrr.snr_estimate(self.hrr_dim, fact_count)
self._conn.execute(
"""
INSERT INTO memory_banks (bank_name, vector, dim, fact_count, updated_at)
VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)
ON CONFLICT(bank_name) DO UPDATE SET
vector = excluded.vector,
dim = excluded.dim,
fact_count = excluded.fact_count,
updated_at = excluded.updated_at
""",
(bank_name, hrr.phases_to_bytes(bank_vector), self.hrr_dim, fact_count),
)
self._conn.commit()
def rebuild_all_vectors(self, dim: int | None = None) -> int:
"""Recompute all HRR vectors + banks from text. For recovery/migration.
Returns the number of facts processed.
"""
with self._lock:
if not self._hrr_available:
return 0
if dim is not None:
self.hrr_dim = dim
rows = self._conn.execute(
"SELECT fact_id, content, category FROM facts"
).fetchall()
categories: set[str] = set()
for row in rows:
self._compute_hrr_vector(row["fact_id"], row["content"])
categories.add(row["category"])
for category in categories:
self._rebuild_bank(category)
return len(rows)
# ------------------------------------------------------------------
# Utilities
# ------------------------------------------------------------------
def _row_to_dict(self, row: sqlite3.Row) -> dict:
"""Convert a sqlite3.Row to a plain dict."""
return dict(row)
def close(self) -> None:
"""Close the database connection."""
self._conn.close()
def __enter__(self) -> "MemoryStore":
return self
def __exit__(self, *_: object) -> None:
self.close()
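The asymmetric trust update in `record_feedback` can be sketched standalone (deltas per the docstring: helpful -> +0.05, unhelpful -> -0.10, clamped to [0, 1]; `apply_feedback` is an illustrative name, not part of the class):

```python
# Standalone sketch of record_feedback's trust math (constants match the
# docstring: helpful -> +0.05, unhelpful -> -0.10, clamped to [0, 1]).
HELPFUL_DELTA = 0.05
UNHELPFUL_DELTA = -0.10

def clamp_trust(value: float) -> float:
    """Clamp trust to [0, 1], mirroring _clamp_trust."""
    return max(0.0, min(1.0, value))

def apply_feedback(trust: float, helpful: bool) -> float:
    delta = HELPFUL_DELTA if helpful else UNHELPFUL_DELTA
    return clamp_trust(trust + delta)

print(round(apply_feedback(0.5, True), 2))   # 0.55
print(apply_feedback(0.05, False))           # 0.0 (clamped at floor)
print(apply_feedback(0.98, True))            # 1.0 (clamped at ceiling)
```

The unhelpful penalty being twice the helpful bonus means repeated negative feedback sinks a fact below the default `min_trust=0.3` search threshold quickly.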

@@ -0,0 +1,35 @@
# Honcho Memory Provider
AI-native cross-session user modeling with dialectic Q&A, semantic search, peer cards, and persistent conclusions.
## Requirements
- `pip install honcho-ai`
- Honcho API key from [app.honcho.dev](https://app.honcho.dev)
## Setup
```bash
hermes memory setup # select "honcho"
```
Or manually:
```bash
hermes config set memory.provider honcho
echo "HONCHO_API_KEY=your-key" >> ~/.hermes/.env
```
## Config
Config file: `$HERMES_HOME/honcho.json` (falls back to legacy `~/.honcho/config.json`)
Existing Honcho users: your config and data are preserved. Just set `memory.provider: honcho`.
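An illustrative `honcho.json` (field names follow the host-block resolution in `plugins/memory/honcho/client.py`; all values are placeholders):

```json
{
  "apiKey": "your-key",
  "enabled": true,
  "hosts": {
    "hermes": {
      "workspace": "hermes",
      "aiPeer": "hermes",
      "memoryMode": "hybrid",
      "recallMode": "hybrid",
      "writeFrequency": "async",
      "sessionStrategy": "per-directory"
    }
  }
}
```

Fields inside a `hosts` block override the flat/root values for that host only.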
## Tools
| Tool | Description |
|------|-------------|
| `honcho_profile` | User's peer card — key facts, no LLM |
| `honcho_search` | Semantic search over stored context |
| `honcho_context` | LLM-synthesized answer from memory |
| `honcho_conclude` | Write a fact about the user to memory |

@@ -0,0 +1,355 @@
"""Honcho memory plugin — MemoryProvider for Honcho AI-native memory.
Provides cross-session user modeling with dialectic Q&A, semantic search,
peer cards, and persistent conclusions via the Honcho SDK. Honcho provides AI-native cross-session user
modeling with dialectic Q&A, semantic search, peer cards, and conclusions.
The 4 tools (profile, search, context, conclude) are exposed through
the MemoryProvider interface.
Config: Uses the existing Honcho config chain:
1. $HERMES_HOME/honcho.json (profile-scoped)
2. ~/.honcho/config.json (legacy global)
3. Environment variables
"""
from __future__ import annotations
import json
import logging
import threading
from typing import Any, Dict, List, Optional
from agent.memory_provider import MemoryProvider
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Tool schemas (moved from tools/honcho_tools.py)
# ---------------------------------------------------------------------------
PROFILE_SCHEMA = {
"name": "honcho_profile",
"description": (
"Retrieve the user's peer card from Honcho — a curated list of key facts "
"about them (name, role, preferences, communication style, patterns). "
"Fast, no LLM reasoning, minimal cost. "
"Use this at conversation start or when you need a quick factual snapshot."
),
"parameters": {"type": "object", "properties": {}, "required": []},
}
SEARCH_SCHEMA = {
"name": "honcho_search",
"description": (
"Semantic search over Honcho's stored context about the user. "
"Returns raw excerpts ranked by relevance — no LLM synthesis. "
"Cheaper and faster than honcho_context. "
"Good when you want to find specific past facts and reason over them yourself."
),
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "What to search for in Honcho's memory.",
},
"max_tokens": {
"type": "integer",
"description": "Token budget for returned context (default 800, max 2000).",
},
},
"required": ["query"],
},
}
CONTEXT_SCHEMA = {
"name": "honcho_context",
"description": (
"Ask Honcho a natural language question and get a synthesized answer. "
"Uses Honcho's LLM (dialectic reasoning) — higher cost than honcho_profile or honcho_search. "
"Can query about any peer: the user (default) or the AI assistant."
),
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "A natural language question.",
},
"peer": {
"type": "string",
"description": "Which peer to query about: 'user' (default) or 'ai'.",
},
},
"required": ["query"],
},
}
CONCLUDE_SCHEMA = {
"name": "honcho_conclude",
"description": (
"Write a conclusion about the user back to Honcho's memory. "
"Conclusions are persistent facts that build the user's profile. "
"Use when the user states a preference, corrects you, or shares "
"something to remember across sessions."
),
"parameters": {
"type": "object",
"properties": {
"conclusion": {
"type": "string",
"description": "A factual statement about the user to persist.",
}
},
"required": ["conclusion"],
},
}
# ---------------------------------------------------------------------------
# MemoryProvider implementation
# ---------------------------------------------------------------------------
class HonchoMemoryProvider(MemoryProvider):
"""Honcho AI-native memory with dialectic Q&A and persistent user modeling."""
def __init__(self):
self._manager = None # HonchoSessionManager
self._config = None # HonchoClientConfig
self._session_key = ""
self._prefetch_result = ""
self._prefetch_lock = threading.Lock()
self._prefetch_thread: Optional[threading.Thread] = None
self._sync_thread: Optional[threading.Thread] = None
@property
def name(self) -> str:
return "honcho"
def is_available(self) -> bool:
"""Check if Honcho is configured. No network calls."""
try:
from plugins.memory.honcho.client import HonchoClientConfig
cfg = HonchoClientConfig.from_global_config()
return cfg.enabled and bool(cfg.api_key or cfg.base_url)
except Exception:
return False
def save_config(self, values: Dict[str, Any], hermes_home: str) -> None:
"""Write config to $HERMES_HOME/honcho.json (Honcho SDK native format)."""
from pathlib import Path
config_path = Path(hermes_home) / "honcho.json"
existing = {}
if config_path.exists():
try:
existing = json.loads(config_path.read_text())
except Exception:
pass
existing.update(values)
config_path.write_text(json.dumps(existing, indent=2))
def get_config_schema(self):
return [
{"key": "api_key", "description": "Honcho API key", "secret": True, "env_var": "HONCHO_API_KEY", "url": "https://app.honcho.dev"},
{"key": "base_url", "description": "Honcho base URL", "default": "https://api.honcho.dev"},
]
def initialize(self, session_id: str, **kwargs) -> None:
"""Initialize Honcho session manager."""
try:
from plugins.memory.honcho.client import HonchoClientConfig, get_honcho_client
from plugins.memory.honcho.session import HonchoSessionManager
cfg = HonchoClientConfig.from_global_config()
if not cfg.enabled or not (cfg.api_key or cfg.base_url):
logger.debug("Honcho not configured — plugin inactive")
return
self._config = cfg
client = get_honcho_client(cfg)
self._manager = HonchoSessionManager(
honcho=client,
config=cfg,
context_tokens=cfg.context_tokens,
)
# Build session key from kwargs or session_id
platform = kwargs.get("platform", "cli")
user_id = kwargs.get("user_id", "")
if user_id:
self._session_key = f"{platform}:{user_id}"
else:
self._session_key = session_id
except ImportError:
logger.debug("honcho-ai package not installed — plugin inactive")
except Exception as e:
logger.warning("Honcho init failed: %s", e)
self._manager = None
def system_prompt_block(self) -> str:
if not self._manager or not self._session_key:
return ""
return (
"# Honcho Memory\n"
"Active. AI-native cross-session user modeling.\n"
"Use honcho_profile for a quick factual snapshot, "
"honcho_search for raw excerpts, honcho_context for synthesized answers, "
"honcho_conclude to save facts about the user."
)
def prefetch(self, query: str, *, session_id: str = "") -> str:
"""Return prefetched dialectic context from background thread."""
if self._prefetch_thread and self._prefetch_thread.is_alive():
self._prefetch_thread.join(timeout=3.0)
with self._prefetch_lock:
result = self._prefetch_result
self._prefetch_result = ""
if not result:
return ""
return f"## Honcho Context\n{result}"
def queue_prefetch(self, query: str, *, session_id: str = "") -> None:
"""Fire a background dialectic query for the upcoming turn."""
if not self._manager or not self._session_key or not query:
return
def _run():
try:
result = self._manager.dialectic_query(
self._session_key, query, peer="user"
)
if result and result.strip():
with self._prefetch_lock:
self._prefetch_result = result
except Exception as e:
logger.debug("Honcho prefetch failed: %s", e)
self._prefetch_thread = threading.Thread(
target=_run, daemon=True, name="honcho-prefetch"
)
self._prefetch_thread.start()
def sync_turn(self, user_content: str, assistant_content: str, *, session_id: str = "") -> None:
"""Record the conversation turn in Honcho (non-blocking)."""
if not self._manager or not self._session_key:
return
def _sync():
try:
session = self._manager.get_or_create_session(self._session_key)
session.add_message("user", user_content[:4000])
session.add_message("assistant", assistant_content[:4000])
# Flush to Honcho API
self._manager._flush_session(session)
except Exception as e:
logger.debug("Honcho sync_turn failed: %s", e)
if self._sync_thread and self._sync_thread.is_alive():
self._sync_thread.join(timeout=5.0)
self._sync_thread = threading.Thread(
target=_sync, daemon=True, name="honcho-sync"
)
self._sync_thread.start()
def on_memory_write(self, action: str, target: str, content: str) -> None:
"""Mirror built-in user profile writes as Honcho conclusions."""
if action != "add" or target != "user" or not content:
return
if not self._manager or not self._session_key:
return
def _write():
try:
self._manager.create_conclusion(self._session_key, content)
except Exception as e:
logger.debug("Honcho memory mirror failed: %s", e)
t = threading.Thread(target=_write, daemon=True, name="honcho-memwrite")
t.start()
def on_session_end(self, messages: List[Dict[str, Any]]) -> None:
"""Flush all pending messages to Honcho on session end."""
if not self._manager:
return
# Wait for pending sync
if self._sync_thread and self._sync_thread.is_alive():
self._sync_thread.join(timeout=10.0)
try:
self._manager.flush_all()
except Exception as e:
logger.debug("Honcho session-end flush failed: %s", e)
def get_tool_schemas(self) -> List[Dict[str, Any]]:
return [PROFILE_SCHEMA, SEARCH_SCHEMA, CONTEXT_SCHEMA, CONCLUDE_SCHEMA]
def handle_tool_call(self, tool_name: str, args: dict, **kwargs) -> str:
if not self._manager or not self._session_key:
return json.dumps({"error": "Honcho is not active for this session."})
try:
if tool_name == "honcho_profile":
card = self._manager.get_peer_card(self._session_key)
if not card:
return json.dumps({"result": "No profile facts available yet."})
return json.dumps({"result": card})
elif tool_name == "honcho_search":
query = args.get("query", "")
if not query:
return json.dumps({"error": "Missing required parameter: query"})
max_tokens = min(int(args.get("max_tokens", 800)), 2000)
result = self._manager.search_context(
self._session_key, query, max_tokens=max_tokens
)
if not result:
return json.dumps({"result": "No relevant context found."})
return json.dumps({"result": result})
elif tool_name == "honcho_context":
query = args.get("query", "")
if not query:
return json.dumps({"error": "Missing required parameter: query"})
peer = args.get("peer", "user")
result = self._manager.dialectic_query(
self._session_key, query, peer=peer
)
return json.dumps({"result": result or "No result from Honcho."})
elif tool_name == "honcho_conclude":
conclusion = args.get("conclusion", "")
if not conclusion:
return json.dumps({"error": "Missing required parameter: conclusion"})
ok = self._manager.create_conclusion(self._session_key, conclusion)
if ok:
return json.dumps({"result": f"Conclusion saved: {conclusion}"})
return json.dumps({"error": "Failed to save conclusion."})
return json.dumps({"error": f"Unknown tool: {tool_name}"})
except Exception as e:
logger.error("Honcho tool %s failed: %s", tool_name, e)
return json.dumps({"error": f"Honcho {tool_name} failed: {e}"})
def shutdown(self) -> None:
for t in (self._prefetch_thread, self._sync_thread):
if t and t.is_alive():
t.join(timeout=5.0)
# Flush any remaining messages
if self._manager:
try:
self._manager.flush_all()
except Exception:
pass
# ---------------------------------------------------------------------------
# Plugin entry point
# ---------------------------------------------------------------------------
def register(ctx) -> None:
"""Register Honcho as a memory provider plugin."""
ctx.register_memory_provider(HonchoMemoryProvider())

plugins/memory/honcho/cli.py (new file, 1166 lines): diff suppressed because it is too large

@@ -0,0 +1,485 @@
"""Honcho client initialization and configuration.
Resolution order for config file:
1. $HERMES_HOME/honcho.json (instance-local, enables isolated Hermes instances)
2. ~/.hermes/honcho.json (default profile; shared host blocks accumulate here)
3. ~/.honcho/config.json (global, shared across all Honcho-enabled apps)
4. Environment variables (HONCHO_API_KEY, HONCHO_ENVIRONMENT)
Resolution order for host-specific settings:
1. Explicit host block fields (always win)
2. Flat/global fields from config root
3. Defaults (host name as workspace/peer)
"""
from __future__ import annotations
import json
import os
import logging
from dataclasses import dataclass, field
from pathlib import Path
from hermes_constants import get_hermes_home
from typing import Any, TYPE_CHECKING
if TYPE_CHECKING:
from honcho import Honcho
logger = logging.getLogger(__name__)
GLOBAL_CONFIG_PATH = Path.home() / ".honcho" / "config.json"
HOST = "hermes"
def resolve_active_host() -> str:
"""Derive the Honcho host key from the active Hermes profile.
Resolution order:
1. HERMES_HONCHO_HOST env var (explicit override)
2. Active profile name via profiles system -> ``hermes.<profile>``
3. Fallback: ``"hermes"`` (default profile)
"""
explicit = os.environ.get("HERMES_HONCHO_HOST", "").strip()
if explicit:
return explicit
try:
from hermes_cli.profiles import get_active_profile_name
profile = get_active_profile_name()
if profile and profile not in ("default", "custom"):
return f"{HOST}.{profile}"
except Exception:
pass
return HOST
def resolve_config_path() -> Path:
"""Return the active Honcho config path.
Resolution order:
1. $HERMES_HOME/honcho.json (profile-local, if it exists)
2. ~/.hermes/honcho.json (default profile shared host blocks live here)
3. ~/.honcho/config.json (global, cross-app interop)
Returns the global path if none exist (for first-time setup writes).
"""
local_path = get_hermes_home() / "honcho.json"
if local_path.exists():
return local_path
# Default profile's config — host blocks accumulate here via setup/clone
default_path = Path.home() / ".hermes" / "honcho.json"
if default_path != local_path and default_path.exists():
return default_path
return GLOBAL_CONFIG_PATH
_RECALL_MODE_ALIASES = {"auto": "hybrid"}
_VALID_RECALL_MODES = {"hybrid", "context", "tools"}
def _normalize_recall_mode(val: str) -> str:
"""Normalize legacy recall mode values (e.g. 'auto''hybrid')."""
val = _RECALL_MODE_ALIASES.get(val, val)
return val if val in _VALID_RECALL_MODES else "hybrid"
def _resolve_memory_mode(
global_val: str | dict,
host_val: str | dict | None,
) -> dict:
"""Parse memoryMode (string or object) into memory_mode + peer_memory_modes.
Resolution order: host-level wins over global.
String form: applies as the default for all peers.
Object form: { "default": "hybrid", "hermes": "honcho", ... }
"default" key sets the fallback; other keys are per-peer overrides.
"""
# Pick the winning value (host beats global)
val = host_val if host_val is not None else global_val
if isinstance(val, dict):
default = val.get("default", "hybrid")
overrides = {k: v for k, v in val.items() if k != "default"}
else:
default = str(val) if val else "hybrid"
overrides = {}
return {"memory_mode": default, "peer_memory_modes": overrides}
@dataclass
class HonchoClientConfig:
"""Configuration for Honcho client, resolved for a specific host."""
host: str = HOST
workspace_id: str = "hermes"
api_key: str | None = None
environment: str = "production"
# Optional base URL for self-hosted Honcho (overrides environment mapping)
base_url: str | None = None
# Identity
peer_name: str | None = None
ai_peer: str = "hermes"
linked_hosts: list[str] = field(default_factory=list)
# Toggles
enabled: bool = False
save_messages: bool = True
# memoryMode: default for all peers. "hybrid" / "honcho"
memory_mode: str = "hybrid"
# Per-peer overrides — any named Honcho peer. Override memory_mode when set.
# Config object form: "memoryMode": { "default": "hybrid", "hermes": "honcho" }
peer_memory_modes: dict[str, str] = field(default_factory=dict)
def peer_memory_mode(self, peer_name: str) -> str:
"""Return the effective memory mode for a named peer.
Resolution: per-peer override -> global memory_mode default.
"""
return self.peer_memory_modes.get(peer_name, self.memory_mode)
# Write frequency: "async" (background thread), "turn" (sync per turn),
# "session" (flush on session end), or int (every N turns)
write_frequency: str | int = "async"
# Prefetch budget
context_tokens: int | None = None
# Dialectic (peer.chat) settings
# reasoning_level: "minimal" | "low" | "medium" | "high" | "max"
# Used as the default; prefetch_dialectic may bump it dynamically.
dialectic_reasoning_level: str = "low"
# Max chars of dialectic result to inject into Hermes system prompt
dialectic_max_chars: int = 600
# Recall mode: how memory retrieval works when Honcho is active.
# "hybrid" — auto-injected context + Honcho tools available (model decides)
# "context" — auto-injected context only, Honcho tools removed
# "tools" — Honcho tools only, no auto-injected context
recall_mode: str = "hybrid"
# Session resolution
session_strategy: str = "per-directory"
session_peer_prefix: bool = False
sessions: dict[str, str] = field(default_factory=dict)
# Raw global config for anything else consumers need
raw: dict[str, Any] = field(default_factory=dict)
# True when Honcho was explicitly configured for this host (hosts.hermes
# block exists or enabled was set explicitly), vs auto-enabled from a
# stray HONCHO_API_KEY env var.
explicitly_configured: bool = False
@classmethod
def from_env(
cls,
workspace_id: str = "hermes",
host: str | None = None,
) -> HonchoClientConfig:
"""Create config from environment variables (fallback)."""
resolved_host = host or resolve_active_host()
api_key = os.environ.get("HONCHO_API_KEY")
base_url = os.environ.get("HONCHO_BASE_URL", "").strip() or None
return cls(
host=resolved_host,
workspace_id=workspace_id,
api_key=api_key,
environment=os.environ.get("HONCHO_ENVIRONMENT", "production"),
base_url=base_url,
ai_peer=resolved_host,
enabled=bool(api_key or base_url),
)
@classmethod
def from_global_config(
cls,
host: str | None = None,
config_path: Path | None = None,
) -> HonchoClientConfig:
"""Create config from the resolved Honcho config path.
Resolution: $HERMES_HOME/honcho.json -> ~/.honcho/config.json -> env vars.
When host is None, derives it from the active Hermes profile.
"""
resolved_host = host or resolve_active_host()
path = config_path or resolve_config_path()
if not path.exists():
logger.debug("No global Honcho config at %s, falling back to env", path)
return cls.from_env(host=resolved_host)
try:
raw = json.loads(path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError) as e:
logger.warning("Failed to read %s: %s, falling back to env", path, e)
return cls.from_env(host=resolved_host)
host_block = (raw.get("hosts") or {}).get(resolved_host, {})
# A hosts.hermes block or explicit enabled flag means the user
# intentionally configured Honcho for this host.
_explicitly_configured = bool(host_block) or raw.get("enabled") is True
# Explicit host block fields win, then flat/global, then defaults
workspace = (
host_block.get("workspace")
or raw.get("workspace")
or resolved_host
)
ai_peer = (
host_block.get("aiPeer")
or raw.get("aiPeer")
or resolved_host
)
linked_hosts = host_block.get("linkedHosts", [])
api_key = (
host_block.get("apiKey")
or raw.get("apiKey")
or os.environ.get("HONCHO_API_KEY")
)
environment = (
host_block.get("environment")
or raw.get("environment", "production")
)
base_url = (
raw.get("baseUrl")
or os.environ.get("HONCHO_BASE_URL", "").strip()
or None
)
# Auto-enable when API key or base_url is present (unless explicitly disabled)
# Host-level enabled wins, then root-level, then auto-enable if key/url exists.
host_enabled = host_block.get("enabled")
root_enabled = raw.get("enabled")
if host_enabled is not None:
enabled = host_enabled
elif root_enabled is not None:
enabled = root_enabled
else:
# Not explicitly set anywhere -> auto-enable if API key or base_url exists
enabled = bool(api_key or base_url)
# write_frequency: accept int or string
raw_wf = (
host_block.get("writeFrequency")
or raw.get("writeFrequency")
or "async"
)
try:
write_frequency: str | int = int(raw_wf)
except (TypeError, ValueError):
write_frequency = str(raw_wf)
# saveMessages: host wins (None-aware since False is valid)
host_save = host_block.get("saveMessages")
save_messages = host_save if host_save is not None else raw.get("saveMessages", True)
# sessionStrategy / sessionPeerPrefix: host first, root fallback
session_strategy = (
host_block.get("sessionStrategy")
or raw.get("sessionStrategy", "per-directory")
)
host_prefix = host_block.get("sessionPeerPrefix")
session_peer_prefix = (
host_prefix if host_prefix is not None
else raw.get("sessionPeerPrefix", False)
)
return cls(
host=resolved_host,
workspace_id=workspace,
api_key=api_key,
environment=environment,
base_url=base_url,
peer_name=host_block.get("peerName") or raw.get("peerName"),
ai_peer=ai_peer,
linked_hosts=linked_hosts,
enabled=enabled,
save_messages=save_messages,
**_resolve_memory_mode(
raw.get("memoryMode", "hybrid"),
host_block.get("memoryMode"),
),
write_frequency=write_frequency,
context_tokens=host_block.get("contextTokens") or raw.get("contextTokens"),
dialectic_reasoning_level=(
host_block.get("dialecticReasoningLevel")
or raw.get("dialecticReasoningLevel")
or "low"
),
dialectic_max_chars=int(
host_block.get("dialecticMaxChars")
or raw.get("dialecticMaxChars")
or 600
),
recall_mode=_normalize_recall_mode(
host_block.get("recallMode")
or raw.get("recallMode")
or "hybrid"
),
session_strategy=session_strategy,
session_peer_prefix=session_peer_prefix,
sessions=raw.get("sessions", {}),
raw=raw,
explicitly_configured=_explicitly_configured,
)
@staticmethod
def _git_repo_name(cwd: str) -> str | None:
"""Return the git repo root directory name, or None if not in a repo."""
import subprocess
try:
root = subprocess.run(
["git", "rev-parse", "--show-toplevel"],
capture_output=True, text=True, cwd=cwd, timeout=5,
)
if root.returncode == 0:
return Path(root.stdout.strip()).name
except (OSError, subprocess.TimeoutExpired):
pass
return None
def resolve_session_name(
self,
cwd: str | None = None,
session_title: str | None = None,
session_id: str | None = None,
) -> str | None:
"""Resolve Honcho session name.
Resolution order:
1. Manual directory override from sessions map
2. Hermes session title (from /title command)
3. per-session strategy -> Hermes session_id ({timestamp}_{hex})
4. per-repo strategy -> git repo root directory name
5. per-directory strategy -> directory basename
6. global strategy -> workspace name
"""
import re
if not cwd:
cwd = os.getcwd()
# Manual override always wins
manual = self.sessions.get(cwd)
if manual:
return manual
# /title mid-session remap
if session_title:
sanitized = re.sub(r'[^a-zA-Z0-9_-]', '-', session_title).strip('-')
if sanitized:
if self.session_peer_prefix and self.peer_name:
return f"{self.peer_name}-{sanitized}"
return sanitized
# per-session: inherit Hermes session_id (new Honcho session each run)
if self.session_strategy == "per-session" and session_id:
if self.session_peer_prefix and self.peer_name:
return f"{self.peer_name}-{session_id}"
return session_id
# per-repo: one Honcho session per git repository
if self.session_strategy == "per-repo":
base = self._git_repo_name(cwd) or Path(cwd).name
if self.session_peer_prefix and self.peer_name:
return f"{self.peer_name}-{base}"
return base
# per-directory: one Honcho session per working directory (default)
if self.session_strategy in ("per-directory", "per-session"):
base = Path(cwd).name
if self.session_peer_prefix and self.peer_name:
return f"{self.peer_name}-{base}"
return base
# global: single session across all directories
return self.workspace_id
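# The /title sanitization used above can be sketched as a standalone helper
# (illustrative name; the real logic is inline in resolve_session_name):
#
# import re
#
# def sanitize_title(title: str) -> str:
#     # Replace anything outside Honcho's ID alphabet (^[a-zA-Z0-9_-]+),
#     # then trim leading/trailing dashes left by the substitution.
#     return re.sub(r'[^a-zA-Z0-9_-]', '-', title).strip('-')
#
# e.g. sanitize_title("My Chat!") == "My-Chat"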
def get_linked_workspaces(self) -> list[str]:
"""Resolve linked host keys to workspace names."""
hosts = self.raw.get("hosts", {})
workspaces = []
for host_key in self.linked_hosts:
block = hosts.get(host_key, {})
ws = block.get("workspace") or host_key
if ws != self.workspace_id:
workspaces.append(ws)
return workspaces
_honcho_client: Honcho | None = None
def get_honcho_client(config: HonchoClientConfig | None = None) -> Honcho:
"""Get or create the Honcho client singleton.
When no config is provided, attempts to load ~/.honcho/config.json
first, falling back to environment variables.
"""
global _honcho_client
if _honcho_client is not None:
return _honcho_client
if config is None:
config = HonchoClientConfig.from_global_config()
if not config.api_key and not config.base_url:
raise ValueError(
"Honcho API key not found. "
"Get your API key at https://app.honcho.dev, "
"then run 'hermes honcho setup' or set HONCHO_API_KEY. "
"For local instances, set HONCHO_BASE_URL instead."
)
try:
from honcho import Honcho
except ImportError:
raise ImportError(
"honcho-ai is required for Honcho integration. "
"Install it with: pip install honcho-ai"
)
# Allow config.yaml honcho.base_url to override the SDK's environment
# mapping, enabling remote self-hosted Honcho deployments without
# requiring the server to live on localhost.
resolved_base_url = config.base_url
if not resolved_base_url:
try:
from hermes_cli.config import load_config
hermes_cfg = load_config()
honcho_cfg = hermes_cfg.get("honcho", {})
if isinstance(honcho_cfg, dict):
resolved_base_url = honcho_cfg.get("base_url", "").strip() or None
except Exception:
pass
if resolved_base_url:
logger.info("Initializing Honcho client (base_url: %s, workspace: %s)", resolved_base_url, config.workspace_id)
else:
logger.info("Initializing Honcho client (host: %s, workspace: %s)", config.host, config.workspace_id)
# Local Honcho instances don't require an API key, but the SDK
# expects a non-empty string. Use a placeholder for local URLs.
_is_local = resolved_base_url and (
"localhost" in resolved_base_url
or "127.0.0.1" in resolved_base_url
or "::1" in resolved_base_url
)
effective_api_key = config.api_key or ("local" if _is_local else None)
kwargs: dict = {
"workspace_id": config.workspace_id,
"api_key": effective_api_key,
"environment": config.environment,
}
if resolved_base_url:
kwargs["base_url"] = resolved_base_url
_honcho_client = Honcho(**kwargs)
return _honcho_client
def reset_honcho_client() -> None:
"""Reset the Honcho client singleton (useful for testing)."""
global _honcho_client
_honcho_client = None

@ -0,0 +1,7 @@
name: honcho
version: 1.0.0
description: "Honcho AI-native memory — cross-session user modeling with dialectic Q&A, semantic search, and persistent conclusions."
pip_dependencies:
- honcho-ai
hooks:
- on_session_end

@ -0,0 +1,997 @@
"""Honcho-based session management for conversation history."""
from __future__ import annotations
import queue
import re
import logging
import threading
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, TYPE_CHECKING
from plugins.memory.honcho.client import get_honcho_client
if TYPE_CHECKING:
from honcho import Honcho
logger = logging.getLogger(__name__)
# Sentinel to signal the async writer thread to shut down
_ASYNC_SHUTDOWN = object()
@dataclass
class HonchoSession:
"""
A conversation session backed by Honcho.
Provides a local message cache that syncs to Honcho's
AI-native memory system for user modeling.
"""
key: str # channel:chat_id
user_peer_id: str # Honcho peer ID for the user
assistant_peer_id: str # Honcho peer ID for the assistant
honcho_session_id: str # Honcho session ID
messages: list[dict[str, Any]] = field(default_factory=list)
created_at: datetime = field(default_factory=datetime.now)
updated_at: datetime = field(default_factory=datetime.now)
metadata: dict[str, Any] = field(default_factory=dict)
def add_message(self, role: str, content: str, **kwargs: Any) -> None:
"""Add a message to the local cache."""
msg = {
"role": role,
"content": content,
"timestamp": datetime.now().isoformat(),
**kwargs,
}
self.messages.append(msg)
self.updated_at = datetime.now()
def get_history(self, max_messages: int = 50) -> list[dict[str, Any]]:
"""Get message history for LLM context."""
recent = (
self.messages[-max_messages:]
if len(self.messages) > max_messages
else self.messages
)
return [{"role": m["role"], "content": m["content"]} for m in recent]
def clear(self) -> None:
"""Clear all messages in the session."""
self.messages = []
self.updated_at = datetime.now()
class HonchoSessionManager:
"""
Manages conversation sessions using Honcho.
Runs alongside hermes' existing SQLite state and file-based memory,
adding persistent cross-session user modeling via Honcho's AI-native memory.
"""
def __init__(
self,
honcho: Honcho | None = None,
context_tokens: int | None = None,
config: Any | None = None,
):
"""
Initialize the session manager.
Args:
honcho: Optional Honcho client. If not provided, uses the singleton.
context_tokens: Max tokens for context() calls (None = Honcho default).
config: HonchoClientConfig from global config (provides peer_name, ai_peer,
write_frequency, memory_mode, etc.).
"""
self._honcho = honcho
self._context_tokens = context_tokens
self._config = config
self._cache: dict[str, HonchoSession] = {}
self._peers_cache: dict[str, Any] = {}
self._sessions_cache: dict[str, Any] = {}
# Write frequency state
write_frequency = (config.write_frequency if config else "async")
self._write_frequency = write_frequency
self._turn_counter: int = 0
# Prefetch caches: session_key → last result (consumed once per turn)
self._context_cache: dict[str, dict] = {}
self._dialectic_cache: dict[str, str] = {}
self._prefetch_cache_lock = threading.Lock()
self._dialectic_reasoning_level: str = (
config.dialectic_reasoning_level if config else "low"
)
self._dialectic_max_chars: int = (
config.dialectic_max_chars if config else 600
)
# Async write queue — started lazily on first enqueue
self._async_queue: queue.Queue | None = None
self._async_thread: threading.Thread | None = None
if write_frequency == "async":
self._async_queue = queue.Queue()
self._async_thread = threading.Thread(
target=self._async_writer_loop,
name="honcho-async-writer",
daemon=True,
)
self._async_thread.start()
@property
def honcho(self) -> Honcho:
"""Get the Honcho client, initializing if needed."""
if self._honcho is None:
self._honcho = get_honcho_client()
return self._honcho
def _get_or_create_peer(self, peer_id: str) -> Any:
"""
Get or create a Honcho peer.
Peers are lazy -- no API call until first use.
Observation settings are controlled per-session via SessionPeerConfig.
"""
if peer_id in self._peers_cache:
return self._peers_cache[peer_id]
peer = self.honcho.peer(peer_id)
self._peers_cache[peer_id] = peer
return peer
def _get_or_create_honcho_session(
self, session_id: str, user_peer: Any, assistant_peer: Any
) -> tuple[Any, list]:
"""
Get or create a Honcho session with peers configured.
Returns:
Tuple of (honcho_session, existing_messages).
"""
if session_id in self._sessions_cache:
logger.debug("Honcho session '%s' retrieved from cache", session_id)
return self._sessions_cache[session_id], []
session = self.honcho.session(session_id)
# Configure peer observation settings.
# observe_me=True for AI peer so Honcho watches what the agent says
# and builds its representation over time — enabling identity formation.
try:
from honcho.session import SessionPeerConfig
user_config = SessionPeerConfig(observe_me=True, observe_others=True)
ai_config = SessionPeerConfig(observe_me=True, observe_others=True)
session.add_peers([(user_peer, user_config), (assistant_peer, ai_config)])
except Exception as e:
logger.warning(
"Honcho session '%s' add_peers failed (non-fatal): %s",
session_id, e,
)
# Load existing messages via context() - single call for messages + metadata
existing_messages = []
try:
ctx = session.context(summary=True, tokens=self._context_tokens)
existing_messages = ctx.messages or []
# Verify chronological ordering
if existing_messages and len(existing_messages) > 1:
timestamps = [m.created_at for m in existing_messages if m.created_at]
if timestamps and timestamps != sorted(timestamps):
logger.warning(
"Honcho messages not chronologically ordered for session '%s', sorting",
session_id,
)
existing_messages = sorted(
existing_messages,
key=lambda m: m.created_at or datetime.min,
)
if existing_messages:
logger.info(
"Honcho session '%s' retrieved (%d existing messages)",
session_id, len(existing_messages),
)
else:
logger.info("Honcho session '%s' created (new)", session_id)
except Exception as e:
logger.warning(
"Honcho session '%s' loaded (failed to fetch context: %s)",
session_id, e,
)
self._sessions_cache[session_id] = session
return session, existing_messages
def _sanitize_id(self, id_str: str) -> str:
"""Sanitize an ID to match Honcho's pattern: ^[a-zA-Z0-9_-]+"""
return re.sub(r'[^a-zA-Z0-9_-]', '-', id_str)
def get_or_create(self, key: str) -> HonchoSession:
"""
Get an existing session or create a new one.
Args:
key: Session key (usually channel:chat_id).
Returns:
The session.
"""
if key in self._cache:
logger.debug("Local session cache hit: %s", key)
return self._cache[key]
# Use peer names from global config when available
if self._config and self._config.peer_name:
user_peer_id = self._sanitize_id(self._config.peer_name)
else:
# Fallback: derive from session key
parts = key.split(":", 1)
channel = parts[0] if len(parts) > 1 else "default"
chat_id = parts[1] if len(parts) > 1 else key
user_peer_id = self._sanitize_id(f"user-{channel}-{chat_id}")
assistant_peer_id = self._sanitize_id(
self._config.ai_peer if self._config else "hermes-assistant"
)
# Sanitize session ID for Honcho
honcho_session_id = self._sanitize_id(key)
# Get or create peers
user_peer = self._get_or_create_peer(user_peer_id)
assistant_peer = self._get_or_create_peer(assistant_peer_id)
# Get or create Honcho session
honcho_session, existing_messages = self._get_or_create_honcho_session(
honcho_session_id, user_peer, assistant_peer
)
# Convert Honcho messages to local format
local_messages = []
for msg in existing_messages:
role = "assistant" if msg.peer_id == assistant_peer_id else "user"
local_messages.append({
"role": role,
"content": msg.content,
"timestamp": msg.created_at.isoformat() if msg.created_at else "",
"_synced": True, # Already in Honcho
})
# Create local session wrapper with existing messages
session = HonchoSession(
key=key,
user_peer_id=user_peer_id,
assistant_peer_id=assistant_peer_id,
honcho_session_id=honcho_session_id,
messages=local_messages,
)
self._cache[key] = session
return session
def _flush_session(self, session: HonchoSession) -> bool:
"""Internal: write unsynced messages to Honcho synchronously."""
if not session.messages:
return True
user_peer = self._get_or_create_peer(session.user_peer_id)
assistant_peer = self._get_or_create_peer(session.assistant_peer_id)
honcho_session = self._sessions_cache.get(session.honcho_session_id)
if not honcho_session:
honcho_session, _ = self._get_or_create_honcho_session(
session.honcho_session_id, user_peer, assistant_peer
)
new_messages = [m for m in session.messages if not m.get("_synced")]
if not new_messages:
return True
honcho_messages = []
for msg in new_messages:
peer = user_peer if msg["role"] == "user" else assistant_peer
honcho_messages.append(peer.message(msg["content"]))
try:
honcho_session.add_messages(honcho_messages)
for msg in new_messages:
msg["_synced"] = True
logger.debug("Synced %d messages to Honcho for %s", len(honcho_messages), session.key)
self._cache[session.key] = session
return True
except Exception as e:
for msg in new_messages:
msg["_synced"] = False
logger.error("Failed to sync messages to Honcho: %s", e)
self._cache[session.key] = session
return False
def _async_writer_loop(self) -> None:
"""Background daemon thread: drains the async write queue."""
while True:
try:
item = self._async_queue.get(timeout=5)
if item is _ASYNC_SHUTDOWN:
break
first_error: Exception | None = None
try:
success = self._flush_session(item)
except Exception as e:
success = False
first_error = e
if success:
continue
if first_error is not None:
logger.warning("Honcho async write failed, retrying once: %s", first_error)
else:
logger.warning("Honcho async write failed, retrying once")
import time as _time
_time.sleep(2)
try:
retry_success = self._flush_session(item)
except Exception as e2:
logger.error("Honcho async write retry failed, dropping batch: %s", e2)
continue
if not retry_success:
logger.error("Honcho async write retry failed, dropping batch")
except queue.Empty:
continue
except Exception as e:
logger.error("Honcho async writer error: %s", e)
def save(self, session: HonchoSession) -> None:
"""Save messages to Honcho, respecting write_frequency.
write_frequency modes:
"async"   enqueue for the background writer thread (zero blocking, zero token cost)
"turn"    flush synchronously every turn
"session" defer until flush_all() is called explicitly at session end
N (int)   flush every N turns
"""
self._turn_counter += 1
wf = self._write_frequency
if wf == "async":
if self._async_queue is not None:
self._async_queue.put(session)
elif wf == "turn":
self._flush_session(session)
elif wf == "session":
# Accumulate; caller must call flush_all() at session end
pass
elif isinstance(wf, int) and wf > 0:
if self._turn_counter % wf == 0:
self._flush_session(session)
def flush_all(self) -> None:
"""Flush all pending unsynced messages for all cached sessions.
Called at session end for "session" write_frequency, or to force
a sync before process exit regardless of mode.
"""
for session in list(self._cache.values()):
try:
self._flush_session(session)
except Exception as e:
logger.error("Honcho flush_all error for %s: %s", session.key, e)
# Drain async queue synchronously if it exists
if self._async_queue is not None:
while not self._async_queue.empty():
try:
item = self._async_queue.get_nowait()
if item is not _ASYNC_SHUTDOWN:
self._flush_session(item)
except queue.Empty:
break
def shutdown(self) -> None:
"""Gracefully shut down the async writer thread."""
if self._async_queue is not None and self._async_thread is not None:
self.flush_all()
self._async_queue.put(_ASYNC_SHUTDOWN)
self._async_thread.join(timeout=10)
def delete(self, key: str) -> bool:
"""Delete a session from local cache."""
if key in self._cache:
del self._cache[key]
return True
return False
def new_session(self, key: str) -> HonchoSession:
"""
Create a new session, preserving the old one for user modeling.
Creates a fresh session with a new ID while keeping the old
session's data in Honcho for continued user modeling.
"""
import time
# Remove old session from caches (but don't delete from Honcho)
old_session = self._cache.pop(key, None)
if old_session:
self._sessions_cache.pop(old_session.honcho_session_id, None)
# Create new session with timestamp suffix
timestamp = int(time.time())
new_key = f"{key}:{timestamp}"
# get_or_create will create a fresh session
session = self.get_or_create(new_key)
# Cache under the original key so callers find it by the expected name
self._cache[key] = session
logger.info("Created new session for %s (honcho: %s)", key, session.honcho_session_id)
return session
_REASONING_LEVELS = ("minimal", "low", "medium", "high", "max")
def _dynamic_reasoning_level(self, query: str) -> str:
"""
Pick a reasoning level based on message complexity.
Uses the configured default as a floor; bumps up for longer or
more complex messages so Honcho applies more inference where it matters.
< 120 chars: default (typically "low")
120-400 chars: one level above default (cap at "high")
> 400 chars: two levels above default (cap at "high")
"max" is never selected automatically; reserve it for explicit config.
"""
levels = self._REASONING_LEVELS
default_idx = levels.index(self._dialectic_reasoning_level) if self._dialectic_reasoning_level in levels else 1
n = len(query)
if n < 120:
bump = 0
elif n < 400:
bump = 1
else:
bump = 2
# Cap at "high" (index 3) for auto-selection
idx = min(default_idx + bump, 3)
return levels[idx]
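# The thresholds above reduce to a standalone sketch (illustrative names;
# mirrors the method's clamp-at-"high" behavior):
#
# LEVELS = ("minimal", "low", "medium", "high", "max")
#
# def pick_level(default: str, query_len: int) -> str:
#     base = LEVELS.index(default) if default in LEVELS else 1
#     bump = 0 if query_len < 120 else (1 if query_len < 400 else 2)
#     return LEVELS[min(base + bump, 3)]  # auto-selection caps at "high"
#
# e.g. pick_level("low", 200) == "medium"; pick_level("low", 500) == "high"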
def dialectic_query(
self, session_key: str, query: str,
reasoning_level: str | None = None,
peer: str = "user",
) -> str:
"""
Query Honcho's dialectic endpoint about a peer.
Runs an LLM on Honcho's backend against the target peer's full
representation. Higher latency than context(); call async via
prefetch_dialectic() to avoid blocking the response.
Args:
session_key: The session key to query against.
query: Natural language question.
reasoning_level: Override the config default. If None, uses
_dynamic_reasoning_level(query).
peer: Which peer to query: "user" (default) or "ai".
Returns:
Honcho's synthesized answer, or empty string on failure.
"""
session = self._cache.get(session_key)
if not session:
return ""
peer_id = session.assistant_peer_id if peer == "ai" else session.user_peer_id
target_peer = self._get_or_create_peer(peer_id)
level = reasoning_level or self._dynamic_reasoning_level(query)
try:
result = target_peer.chat(query, reasoning_level=level) or ""
# Apply Hermes-side char cap before caching
if result and self._dialectic_max_chars and len(result) > self._dialectic_max_chars:
result = result[:self._dialectic_max_chars].rsplit(" ", 1)[0] + "…"
return result
except Exception as e:
logger.warning("Honcho dialectic query failed: %s", e)
return ""
def prefetch_dialectic(self, session_key: str, query: str) -> None:
"""
Fire a dialectic_query in a background thread, caching the result.
Non-blocking. The result is available via pop_dialectic_result()
on the next call (typically the following turn). Reasoning level
is selected dynamically based on query complexity.
Args:
session_key: The session key to query against.
query: The user's current message, used as the query.
"""
def _run():
result = self.dialectic_query(session_key, query)
if result:
self.set_dialectic_result(session_key, result)
t = threading.Thread(target=_run, name="honcho-dialectic-prefetch", daemon=True)
t.start()
def set_dialectic_result(self, session_key: str, result: str) -> None:
"""Store a prefetched dialectic result in a thread-safe way."""
if not result:
return
with self._prefetch_cache_lock:
self._dialectic_cache[session_key] = result
def pop_dialectic_result(self, session_key: str) -> str:
"""
Return and clear the cached dialectic result for this session.
Returns empty string if no result is ready yet.
"""
with self._prefetch_cache_lock:
return self._dialectic_cache.pop(session_key, "")
def prefetch_context(self, session_key: str, user_message: str | None = None) -> None:
"""
Fire get_prefetch_context in a background thread, caching the result.
Non-blocking. Consumed next turn via pop_context_result(). This avoids
a synchronous HTTP round-trip blocking every response.
"""
def _run():
result = self.get_prefetch_context(session_key, user_message)
if result:
self.set_context_result(session_key, result)
t = threading.Thread(target=_run, name="honcho-context-prefetch", daemon=True)
t.start()
def set_context_result(self, session_key: str, result: dict[str, str]) -> None:
"""Store a prefetched context result in a thread-safe way."""
if not result:
return
with self._prefetch_cache_lock:
self._context_cache[session_key] = result
def pop_context_result(self, session_key: str) -> dict[str, str]:
"""
Return and clear the cached context result for this session.
Returns empty dict if no result is ready yet (first turn).
"""
with self._prefetch_cache_lock:
return self._context_cache.pop(session_key, {})
def get_prefetch_context(self, session_key: str, user_message: str | None = None) -> dict[str, str]:
"""
Pre-fetch user and AI peer context from Honcho.
Fetches peer_representation and peer_card for both peers. search_query
is intentionally omitted; it would only affect additional excerpts
that this code does not consume, and passing the raw message exposes
conversation content in server access logs.
Args:
session_key: The session key to get context for.
user_message: Unused; kept for call-site compatibility.
Returns:
Dictionary with 'representation', 'card', 'ai_representation',
and 'ai_card' keys.
"""
session = self._cache.get(session_key)
if not session:
return {}
honcho_session = self._sessions_cache.get(session.honcho_session_id)
if not honcho_session:
return {}
result: dict[str, str] = {}
try:
ctx = honcho_session.context(
summary=False,
tokens=self._context_tokens,
peer_target=session.user_peer_id,
peer_perspective=session.assistant_peer_id,
)
card = ctx.peer_card or []
result["representation"] = ctx.peer_representation or ""
result["card"] = "\n".join(card) if isinstance(card, list) else str(card)
except Exception as e:
logger.warning("Failed to fetch user context from Honcho: %s", e)
# Also fetch AI peer's own representation so Hermes knows itself.
try:
ai_ctx = honcho_session.context(
summary=False,
tokens=self._context_tokens,
peer_target=session.assistant_peer_id,
peer_perspective=session.user_peer_id,
)
ai_card = ai_ctx.peer_card or []
result["ai_representation"] = ai_ctx.peer_representation or ""
result["ai_card"] = "\n".join(ai_card) if isinstance(ai_card, list) else str(ai_card)
except Exception as e:
logger.debug("Failed to fetch AI peer context from Honcho: %s", e)
return result
def migrate_local_history(self, session_key: str, messages: list[dict[str, Any]]) -> bool:
"""
Upload local session history to Honcho as a file.
Used when Honcho activates mid-conversation to preserve prior context.
Args:
session_key: The session key (e.g., "telegram:123456").
messages: Local messages (dicts with role, content, timestamp).
Returns:
True if upload succeeded, False otherwise.
"""
session = self._cache.get(session_key)
if not session:
logger.warning("No local session cached for '%s', skipping migration", session_key)
return False
honcho_session = self._sessions_cache.get(session.honcho_session_id)
if not honcho_session:
logger.warning("No Honcho session cached for '%s', skipping migration", session_key)
return False
user_peer = self._get_or_create_peer(session.user_peer_id)
content_bytes = self._format_migration_transcript(session_key, messages)
first_ts = messages[0].get("timestamp") if messages else None
try:
honcho_session.upload_file(
file=("prior_history.txt", content_bytes, "text/plain"),
peer=user_peer,
metadata={"source": "local_jsonl", "count": len(messages)},
created_at=first_ts,
)
logger.info("Migrated %d local messages to Honcho for %s", len(messages), session_key)
return True
except Exception as e:
logger.error("Failed to upload local history to Honcho for %s: %s", session_key, e)
return False
@staticmethod
def _format_migration_transcript(session_key: str, messages: list[dict[str, Any]]) -> bytes:
"""Format local messages as an XML transcript for Honcho file upload."""
timestamps = [m.get("timestamp", "") for m in messages]
time_range = f"{timestamps[0]} to {timestamps[-1]}" if timestamps else "unknown"
lines = [
"<prior_conversation_history>",
"<context>",
"This conversation history occurred BEFORE the Honcho memory system was activated.",
"These messages are the preceding elements of this conversation session and should",
"be treated as foundational context for all subsequent interactions. The user and",
"assistant have already established rapport through these exchanges.",
"</context>",
"",
f'<transcript session_key="{session_key}" message_count="{len(messages)}"',
f' time_range="{time_range}">',
"",
]
for msg in messages:
ts = msg.get("timestamp", "?")
role = msg.get("role", "unknown")
content = msg.get("content") or ""
lines.append(f"[{ts}] {role}: {content}")
lines.append("")
lines.append("</transcript>")
lines.append("</prior_conversation_history>")
return "\n".join(lines).encode("utf-8")
def migrate_memory_files(self, session_key: str, memory_dir: str) -> bool:
"""
Upload MEMORY.md and USER.md to Honcho as files.
Used when Honcho activates on an instance that already has locally
consolidated memory. Backwards compatible -- skips if files don't exist.
Args:
session_key: The session key to associate files with.
memory_dir: Path to the memories directory (~/.hermes/memories/).
Returns:
True if at least one file was uploaded, False otherwise.
"""
from pathlib import Path
memory_path = Path(memory_dir)
if not memory_path.exists():
return False
session = self._cache.get(session_key)
if not session:
logger.warning("No local session cached for '%s', skipping memory migration", session_key)
return False
honcho_session = self._sessions_cache.get(session.honcho_session_id)
if not honcho_session:
logger.warning("No Honcho session cached for '%s', skipping memory migration", session_key)
return False
user_peer = self._get_or_create_peer(session.user_peer_id)
assistant_peer = self._get_or_create_peer(session.assistant_peer_id)
uploaded = False
files = [
(
"MEMORY.md",
"consolidated_memory.md",
"Long-term agent notes and preferences",
user_peer,
"user",
),
(
"USER.md",
"user_profile.md",
"User profile and preferences",
user_peer,
"user",
),
(
"SOUL.md",
"agent_soul.md",
"Agent persona and identity configuration",
assistant_peer,
"ai",
),
]
for filename, upload_name, description, target_peer, target_kind in files:
filepath = memory_path / filename
if not filepath.exists():
continue
content = filepath.read_text(encoding="utf-8").strip()
if not content:
continue
wrapped = (
f"<prior_memory_file>\n"
f"<context>\n"
f"This file was consolidated from local conversations BEFORE Honcho was activated.\n"
f"{description}. Treat as foundational context for this user.\n"
f"</context>\n"
f"\n"
f"{content}\n"
f"</prior_memory_file>\n"
)
try:
honcho_session.upload_file(
file=(upload_name, wrapped.encode("utf-8"), "text/plain"),
peer=target_peer,
metadata={
"source": "local_memory",
"original_file": filename,
"target_peer": target_kind,
},
)
logger.info(
"Uploaded %s to Honcho for %s (%s peer)",
filename,
session_key,
target_kind,
)
uploaded = True
except Exception as e:
logger.error("Failed to upload %s to Honcho: %s", filename, e)
return uploaded
def get_peer_card(self, session_key: str) -> list[str]:
"""
Fetch the user peer's card — a curated list of key facts.
Fast, no LLM reasoning. Returns raw structured facts Honcho has
inferred about the user (name, role, preferences, patterns).
Empty list if unavailable.
"""
session = self._cache.get(session_key)
if not session:
return []
honcho_session = self._sessions_cache.get(session.honcho_session_id)
if not honcho_session:
return []
try:
ctx = honcho_session.context(
summary=False,
tokens=200,
peer_target=session.user_peer_id,
peer_perspective=session.assistant_peer_id,
)
card = ctx.peer_card or []
return card if isinstance(card, list) else [str(card)]
except Exception as e:
logger.debug("Failed to fetch peer card from Honcho: %s", e)
return []
def search_context(self, session_key: str, query: str, max_tokens: int = 800) -> str:
"""
Semantic search over Honcho session context.
Returns raw excerpts ranked by relevance to the query. No LLM
reasoning; cheaper and faster than dialectic_query. Good for
factual lookups where the model will do its own synthesis.
Args:
session_key: Session to search against.
query: Search query for semantic matching.
max_tokens: Token budget for returned content.
Returns:
Relevant context excerpts as a string, or empty string if none.
"""
session = self._cache.get(session_key)
if not session:
return ""
honcho_session = self._sessions_cache.get(session.honcho_session_id)
if not honcho_session:
return ""
try:
ctx = honcho_session.context(
summary=False,
tokens=max_tokens,
peer_target=session.user_peer_id,
peer_perspective=session.assistant_peer_id,
search_query=query,
)
parts = []
if ctx.peer_representation:
parts.append(ctx.peer_representation)
card = ctx.peer_card or []
if card:
facts = card if isinstance(card, list) else [str(card)]
parts.append("\n".join(f"- {f}" for f in facts))
return "\n\n".join(parts)
except Exception as e:
logger.debug("Honcho search_context failed: %s", e)
return ""
def create_conclusion(self, session_key: str, content: str) -> bool:
"""Write a conclusion about the user back to Honcho.
Conclusions are facts the AI peer observes about the user:
preferences, corrections, clarifications, project context.
They feed into the user's peer card and representation.
Args:
session_key: Session to associate the conclusion with.
content: The conclusion text (e.g. "User prefers dark mode").
Returns:
True on success, False on failure.
"""
if not content or not content.strip():
return False
session = self._cache.get(session_key)
if not session:
logger.warning("No session cached for '%s', skipping conclusion", session_key)
return False
assistant_peer = self._get_or_create_peer(session.assistant_peer_id)
try:
conclusions_scope = assistant_peer.conclusions_of(session.user_peer_id)
conclusions_scope.create([{
"content": content.strip(),
"session_id": session.honcho_session_id,
}])
logger.info("Created conclusion for %s: %s", session_key, content[:80])
return True
except Exception as e:
logger.error("Failed to create conclusion: %s", e)
return False
def seed_ai_identity(self, session_key: str, content: str, source: str = "manual") -> bool:
"""
Seed the AI peer's Honcho representation from text content.
Useful for priming AI identity from SOUL.md, exported chats, or
any structured description. The content is sent as an assistant
peer message so Honcho's reasoning model can incorporate it.
Args:
session_key: The session key to associate with.
content: The identity/persona content to seed.
source: Metadata tag for the source (e.g. "soul_md", "export").
Returns:
True on success, False on failure.
"""
if not content or not content.strip():
return False
session = self._cache.get(session_key)
if not session:
logger.warning("No session cached for '%s', skipping AI seed", session_key)
return False
assistant_peer = self._get_or_create_peer(session.assistant_peer_id)
honcho_session = self._sessions_cache.get(session.honcho_session_id)
if not honcho_session:
logger.warning("No Honcho session cached for '%s', skipping AI seed", session_key)
return False
try:
wrapped = (
f"<ai_identity_seed>\n"
f"<source>{source}</source>\n"
f"\n"
f"{content.strip()}\n"
f"</ai_identity_seed>"
)
honcho_session.add_messages([assistant_peer.message(wrapped)])
logger.info("Seeded AI identity from '%s' into %s", source, session_key)
return True
except Exception as e:
logger.error("Failed to seed AI identity: %s", e)
return False
def get_ai_representation(self, session_key: str) -> dict[str, str]:
"""
Fetch the AI peer's current Honcho representation.
Returns:
Dict with 'representation' and 'card' keys, empty strings if unavailable.
"""
session = self._cache.get(session_key)
if not session:
return {"representation": "", "card": ""}
honcho_session = self._sessions_cache.get(session.honcho_session_id)
if not honcho_session:
return {"representation": "", "card": ""}
try:
ctx = honcho_session.context(
summary=False,
tokens=self._context_tokens,
peer_target=session.assistant_peer_id,
peer_perspective=session.user_peer_id,
)
ai_card = ctx.peer_card or []
return {
"representation": ctx.peer_representation or "",
"card": "\n".join(ai_card) if isinstance(ai_card, list) else str(ai_card),
}
except Exception as e:
logger.debug("Failed to fetch AI representation: %s", e)
return {"representation": "", "card": ""}
def list_sessions(self) -> list[dict[str, Any]]:
"""List all cached sessions."""
return [
{
"key": s.key,
"created_at": s.created_at.isoformat(),
"updated_at": s.updated_at.isoformat(),
"message_count": len(s.messages),
}
for s in self._cache.values()
]
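The identity-seeding envelope built inside `seed_ai_identity()` can be factored into a pure helper for illustration (a sketch; `wrap_identity_seed` is not part of the provider API, just a restatement of the wrapping logic above):

```python
def wrap_identity_seed(source: str, content: str) -> str:
    """Build the <ai_identity_seed> envelope used when seeding the AI peer.

    Mirrors the wrapping in seed_ai_identity(): the <source> tag lets
    Honcho's reasoning model weigh where the persona text came from
    (e.g. "soul_md", "export", "manual").
    """
    return (
        "<ai_identity_seed>\n"
        f"<source>{source}</source>\n"
        "\n"
        f"{content.strip()}\n"
        "</ai_identity_seed>"
    )
```

Keeping the envelope construction pure makes it trivial to test without a Honcho session.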

@@ -0,0 +1,38 @@
# Mem0 Memory Provider
Server-side LLM fact extraction with semantic search, reranking, and automatic deduplication.
## Requirements
- `pip install mem0ai`
- Mem0 API key from [app.mem0.ai](https://app.mem0.ai)
## Setup
```bash
hermes memory setup # select "mem0"
```
Or manually:
```bash
hermes config set memory.provider mem0
echo "MEM0_API_KEY=your-key" >> ~/.hermes/.env
```
## Config
Config file: `$HERMES_HOME/mem0.json`
| Key | Default | Description |
|-----|---------|-------------|
| `user_id` | `hermes-user` | User identifier on Mem0 |
| `agent_id` | `hermes` | Agent identifier |
| `rerank` | `true` | Enable reranking for recall |
## Tools
| Tool | Description |
|------|-------------|
| `mem0_profile` | All stored memories about the user |
| `mem0_search` | Semantic search with optional reranking |
| `mem0_conclude` | Store a fact verbatim (no LLM extraction) |

@@ -0,0 +1,344 @@
"""Mem0 memory plugin — MemoryProvider interface.
Server-side LLM fact extraction, semantic search with reranking, and
automatic deduplication via the Mem0 Platform API.
Original PR #2933 by kartik-mem0, adapted to MemoryProvider ABC.
Config via environment variables:
MEM0_API_KEY Mem0 Platform API key (required)
MEM0_USER_ID User identifier (default: hermes-user)
MEM0_AGENT_ID Agent identifier (default: hermes)
Or via $HERMES_HOME/mem0.json.
"""
from __future__ import annotations
import json
import logging
import os
import threading
import time
from pathlib import Path
from typing import Any, Dict, List
from agent.memory_provider import MemoryProvider
logger = logging.getLogger(__name__)
# Circuit breaker: after this many consecutive failures, pause API calls
# for _BREAKER_COOLDOWN_SECS to avoid hammering a down server.
_BREAKER_THRESHOLD = 5
_BREAKER_COOLDOWN_SECS = 120
# ---------------------------------------------------------------------------
# Config
# ---------------------------------------------------------------------------
def _load_config() -> dict:
"""Load config from $HERMES_HOME/mem0.json or env vars."""
from hermes_constants import get_hermes_home
config_path = get_hermes_home() / "mem0.json"
if config_path.exists():
try:
return json.loads(config_path.read_text(encoding="utf-8"))
except Exception:
pass
return {
"api_key": os.environ.get("MEM0_API_KEY", ""),
"user_id": os.environ.get("MEM0_USER_ID", "hermes-user"),
"agent_id": os.environ.get("MEM0_AGENT_ID", "hermes"),
"rerank": True,
"keyword_search": False,
}
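Note the precedence: a parseable `mem0.json` wins wholesale, and env vars are consulted only when the file is absent or unreadable. A standalone sketch of that logic (reimplemented here so it runs without `hermes_constants`; the real provider resolves `home` via `get_hermes_home()`):

```python
import json
from pathlib import Path

def load_mem0_config(home: Path, env: dict) -> dict:
    """Sketch of _load_config's precedence: if home/mem0.json exists and
    parses, it is returned as-is; otherwise fall back to env vars."""
    path = home / "mem0.json"
    if path.exists():
        try:
            return json.loads(path.read_text(encoding="utf-8"))
        except Exception:
            pass  # unreadable file: fall through to env defaults
    return {
        "api_key": env.get("MEM0_API_KEY", ""),
        "user_id": env.get("MEM0_USER_ID", "hermes-user"),
        "agent_id": env.get("MEM0_AGENT_ID", "hermes"),
        "rerank": True,
        "keyword_search": False,
    }
```

One consequence worth knowing: once the file exists, env vars are ignored entirely, including `MEM0_API_KEY`.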
# ---------------------------------------------------------------------------
# Tool schemas
# ---------------------------------------------------------------------------
PROFILE_SCHEMA = {
"name": "mem0_profile",
"description": (
"Retrieve all stored memories about the user — preferences, facts, "
"project context. Fast, no reranking. Use at conversation start."
),
"parameters": {"type": "object", "properties": {}, "required": []},
}
SEARCH_SCHEMA = {
"name": "mem0_search",
"description": (
"Search memories by meaning. Returns relevant facts ranked by similarity. "
"Set rerank=true for higher accuracy on important queries."
),
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "What to search for."},
"rerank": {"type": "boolean", "description": "Enable reranking for precision (default: false)."},
"top_k": {"type": "integer", "description": "Max results (default: 10, max: 50)."},
},
"required": ["query"],
},
}
CONCLUDE_SCHEMA = {
"name": "mem0_conclude",
"description": (
"Store a durable fact about the user. Stored verbatim (no LLM extraction). "
"Use for explicit preferences, corrections, or decisions."
),
"parameters": {
"type": "object",
"properties": {
"conclusion": {"type": "string", "description": "The fact to store."},
},
"required": ["conclusion"],
},
}
# ---------------------------------------------------------------------------
# MemoryProvider implementation
# ---------------------------------------------------------------------------
class Mem0MemoryProvider(MemoryProvider):
"""Mem0 Platform memory with server-side extraction and semantic search."""
def __init__(self):
self._config = None
self._client = None
self._client_lock = threading.Lock()
self._api_key = ""
self._user_id = "hermes-user"
self._agent_id = "hermes"
self._rerank = True
self._prefetch_result = ""
self._prefetch_lock = threading.Lock()
self._prefetch_thread = None
self._sync_thread = None
# Circuit breaker state
self._consecutive_failures = 0
self._breaker_open_until = 0.0
@property
def name(self) -> str:
return "mem0"
def is_available(self) -> bool:
cfg = _load_config()
return bool(cfg.get("api_key"))
def save_config(self, values, hermes_home):
"""Write config to $HERMES_HOME/mem0.json."""
config_path = Path(hermes_home) / "mem0.json"
existing = {}
if config_path.exists():
try:
existing = json.loads(config_path.read_text())
except Exception:
pass
existing.update(values)
config_path.write_text(json.dumps(existing, indent=2))
def get_config_schema(self):
return [
{"key": "api_key", "description": "Mem0 Platform API key", "secret": True, "required": True, "env_var": "MEM0_API_KEY", "url": "https://app.mem0.ai"},
{"key": "user_id", "description": "User identifier", "default": "hermes-user"},
{"key": "agent_id", "description": "Agent identifier", "default": "hermes"},
{"key": "rerank", "description": "Enable reranking for recall", "default": "true", "choices": ["true", "false"]},
]
def _get_client(self):
"""Thread-safe client accessor with lazy initialization."""
with self._client_lock:
if self._client is not None:
return self._client
try:
from mem0 import MemoryClient
self._client = MemoryClient(api_key=self._api_key)
return self._client
except ImportError as e:
raise RuntimeError("mem0 package not installed. Run: pip install mem0ai") from e
def _is_breaker_open(self) -> bool:
"""Return True if the circuit breaker is tripped (too many failures)."""
if self._consecutive_failures < _BREAKER_THRESHOLD:
return False
if time.monotonic() >= self._breaker_open_until:
# Cooldown expired — reset and allow a retry
self._consecutive_failures = 0
return False
return True
def _record_success(self):
self._consecutive_failures = 0
def _record_failure(self):
self._consecutive_failures += 1
if self._consecutive_failures >= _BREAKER_THRESHOLD:
self._breaker_open_until = time.monotonic() + _BREAKER_COOLDOWN_SECS
logger.warning(
"Mem0 circuit breaker tripped after %d consecutive failures. "
"Pausing API calls for %ds.",
self._consecutive_failures, _BREAKER_COOLDOWN_SECS,
)
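The breaker logic above is small enough to state as a standalone class. A minimal sketch with an injectable clock so the trip/cooldown behavior is testable without sleeping (names here are illustrative, not provider API):

```python
import time

class CircuitBreaker:
    """Failure-count breaker: trip after `threshold` consecutive failures,
    allow a retry once `cooldown` seconds have elapsed."""

    def __init__(self, threshold: int = 5, cooldown: float = 120.0, clock=None):
        self._threshold = threshold
        self._cooldown = cooldown
        self._clock = clock or time.monotonic
        self._failures = 0
        self._open_until = 0.0

    def record(self, ok: bool) -> None:
        if ok:
            self._failures = 0
            return
        self._failures += 1
        if self._failures >= self._threshold:
            self._open_until = self._clock() + self._cooldown

    def is_open(self) -> bool:
        if self._failures < self._threshold:
            return False
        if self._clock() >= self._open_until:
            self._failures = 0  # cooldown expired: permit one probe request
            return False
        return True
```

Resetting the failure count when the cooldown expires (rather than keeping the breaker half-open) matches the provider's behavior: one probe is allowed, and a fresh run of failures is needed to trip again.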
def initialize(self, session_id: str, **kwargs) -> None:
self._config = _load_config()
self._api_key = self._config.get("api_key", "")
self._user_id = self._config.get("user_id", "hermes-user")
self._agent_id = self._config.get("agent_id", "hermes")
self._rerank = self._config.get("rerank", True)
def system_prompt_block(self) -> str:
return (
"# Mem0 Memory\n"
f"Active. User: {self._user_id}.\n"
"Use mem0_search to find memories, mem0_conclude to store facts, "
"mem0_profile for a full overview."
)
def prefetch(self, query: str, *, session_id: str = "") -> str:
if self._prefetch_thread and self._prefetch_thread.is_alive():
self._prefetch_thread.join(timeout=3.0)
with self._prefetch_lock:
result = self._prefetch_result
self._prefetch_result = ""
if not result:
return ""
return f"## Mem0 Memory\n{result}"
def queue_prefetch(self, query: str, *, session_id: str = "") -> None:
if self._is_breaker_open():
return
def _run():
try:
client = self._get_client()
results = client.search(
query=query,
user_id=self._user_id,
rerank=self._rerank,
top_k=5,
)
if results:
lines = [r.get("memory", "") for r in results if r.get("memory")]
with self._prefetch_lock:
self._prefetch_result = "\n".join(f"- {line}" for line in lines)
self._record_success()
except Exception as e:
self._record_failure()
logger.debug("Mem0 prefetch failed: %s", e)
self._prefetch_thread = threading.Thread(target=_run, daemon=True, name="mem0-prefetch")
self._prefetch_thread.start()
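The `queue_prefetch`/`prefetch` pair implements a single-slot handoff: a worker fills the slot under a lock, and the consumer joins briefly, then drains it exactly once. A self-contained sketch of that pattern (the class name is illustrative):

```python
import threading

class PrefetchSlot:
    """Single-slot prefetch handoff: queue() runs a producer in the
    background; drain() waits briefly, then empties the slot so a
    result is consumed at most once."""

    def __init__(self):
        self._lock = threading.Lock()
        self._result = ""
        self._thread = None

    def queue(self, fn) -> None:
        def _run():
            out = fn()
            with self._lock:
                self._result = out
        self._thread = threading.Thread(target=_run, daemon=True)
        self._thread.start()

    def drain(self, timeout: float = 3.0) -> str:
        if self._thread and self._thread.is_alive():
            self._thread.join(timeout=timeout)
        with self._lock:
            out, self._result = self._result, ""
        return out
```

The bounded join is the key trade-off: if the backend is slow, the turn proceeds without memory context instead of blocking the agent loop, and a late result is either picked up next turn or discarded on the following drain.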
def sync_turn(self, user_content: str, assistant_content: str, *, session_id: str = "") -> None:
"""Send the turn to Mem0 for server-side fact extraction (non-blocking)."""
if self._is_breaker_open():
return
def _sync():
try:
client = self._get_client()
messages = [
{"role": "user", "content": user_content},
{"role": "assistant", "content": assistant_content},
]
client.add(messages, user_id=self._user_id, agent_id=self._agent_id)
self._record_success()
except Exception as e:
self._record_failure()
logger.warning("Mem0 sync failed: %s", e)
# Wait for any previous sync before starting a new one
if self._sync_thread and self._sync_thread.is_alive():
self._sync_thread.join(timeout=5.0)
self._sync_thread = threading.Thread(target=_sync, daemon=True, name="mem0-sync")
self._sync_thread.start()
def get_tool_schemas(self) -> List[Dict[str, Any]]:
return [PROFILE_SCHEMA, SEARCH_SCHEMA, CONCLUDE_SCHEMA]
def handle_tool_call(self, tool_name: str, args: dict, **kwargs) -> str:
if self._is_breaker_open():
return json.dumps({
"error": "Mem0 API temporarily unavailable (multiple consecutive failures). Will retry automatically."
})
try:
client = self._get_client()
except Exception as e:
return json.dumps({"error": str(e)})
if tool_name == "mem0_profile":
try:
memories = client.get_all(user_id=self._user_id)
self._record_success()
if not memories:
return json.dumps({"result": "No memories stored yet."})
lines = [m.get("memory", "") for m in memories if m.get("memory")]
return json.dumps({"result": "\n".join(lines), "count": len(lines)})
except Exception as e:
self._record_failure()
return json.dumps({"error": f"Failed to fetch profile: {e}"})
elif tool_name == "mem0_search":
query = args.get("query", "")
if not query:
return json.dumps({"error": "Missing required parameter: query"})
rerank = args.get("rerank", False)
top_k = min(int(args.get("top_k", 10)), 50)
try:
results = client.search(
query=query, user_id=self._user_id,
rerank=rerank, top_k=top_k,
)
self._record_success()
if not results:
return json.dumps({"result": "No relevant memories found."})
items = [{"memory": r.get("memory", ""), "score": r.get("score", 0)} for r in results]
return json.dumps({"results": items, "count": len(items)})
except Exception as e:
self._record_failure()
return json.dumps({"error": f"Search failed: {e}"})
elif tool_name == "mem0_conclude":
conclusion = args.get("conclusion", "")
if not conclusion:
return json.dumps({"error": "Missing required parameter: conclusion"})
try:
client.add(
[{"role": "user", "content": conclusion}],
user_id=self._user_id,
agent_id=self._agent_id,
infer=False,
)
self._record_success()
return json.dumps({"result": "Fact stored."})
except Exception as e:
self._record_failure()
return json.dumps({"error": f"Failed to store: {e}"})
return json.dumps({"error": f"Unknown tool: {tool_name}"})
def shutdown(self) -> None:
for t in (self._prefetch_thread, self._sync_thread):
if t and t.is_alive():
t.join(timeout=5.0)
with self._client_lock:
self._client = None
def register(ctx) -> None:
"""Register Mem0 as a memory provider plugin."""
ctx.register_memory_provider(Mem0MemoryProvider())

@@ -0,0 +1,5 @@
name: mem0
version: 1.0.0
description: "Mem0 — server-side LLM fact extraction with semantic search, reranking, and automatic deduplication."
pip_dependencies:
- mem0ai

@@ -0,0 +1,40 @@
# OpenViking Memory Provider
Context database by Volcengine (ByteDance) with filesystem-style knowledge hierarchy, tiered retrieval, and automatic memory extraction.
## Requirements
- `pip install openviking`
- OpenViking server running (`openviking-server`)
- Embedding + VLM model configured in `~/.openviking/ov.conf`
## Setup
```bash
hermes memory setup # select "openviking"
```
Or manually:
```bash
hermes config set memory.provider openviking
echo "OPENVIKING_ENDPOINT=http://localhost:1933" >> ~/.hermes/.env
```
## Config
All config via environment variables in `.env`:
| Env Var | Default | Description |
|---------|---------|-------------|
| `OPENVIKING_ENDPOINT` | `http://127.0.0.1:1933` | Server URL |
| `OPENVIKING_API_KEY` | (none) | API key (optional) |
## Tools
| Tool | Description |
|------|-------------|
| `viking_search` | Semantic search with fast/deep/auto modes |
| `viking_read` | Read content at a viking:// URI (abstract/overview/full) |
| `viking_browse` | Filesystem-style navigation (list/tree/stat) |
| `viking_remember` | Store a fact for extraction on session commit |
| `viking_add_resource` | Ingest URLs/docs into the knowledge base |

@@ -0,0 +1,582 @@
"""OpenViking memory plugin — full bidirectional MemoryProvider interface.
Context database by Volcengine (ByteDance) that organizes agent knowledge
into a filesystem hierarchy (viking:// URIs) with tiered context loading,
automatic memory extraction, and session management.
Original PR #3369 by Mibayy, rewritten to use the full OpenViking session
lifecycle instead of read-only search endpoints.
Config via environment variables (profile-scoped via each profile's .env):
OPENVIKING_ENDPOINT Server URL (default: http://127.0.0.1:1933)
OPENVIKING_API_KEY API key (required for authenticated servers)
Capabilities:
- Automatic memory extraction on session commit (6 categories)
- Tiered context: L0 (~100 tokens), L1 (~2k), L2 (full)
- Semantic search with hierarchical directory retrieval
- Filesystem-style browsing via viking:// URIs
- Resource ingestion (URLs, docs, code)
"""
from __future__ import annotations
import json
import logging
import os
import threading
from typing import Any, Dict, List, Optional
from agent.memory_provider import MemoryProvider
logger = logging.getLogger(__name__)
_DEFAULT_ENDPOINT = "http://127.0.0.1:1933"
_TIMEOUT = 30.0
# ---------------------------------------------------------------------------
# HTTP helper — uses httpx to avoid requiring the openviking SDK
# ---------------------------------------------------------------------------
def _get_httpx():
"""Lazy import httpx."""
try:
import httpx
return httpx
except ImportError:
return None
class _VikingClient:
"""Thin HTTP client for the OpenViking REST API."""
def __init__(self, endpoint: str, api_key: str = ""):
self._endpoint = endpoint.rstrip("/")
self._api_key = api_key
self._httpx = _get_httpx()
if self._httpx is None:
raise ImportError("httpx is required for OpenViking: pip install httpx")
def _headers(self) -> dict:
h = {"Content-Type": "application/json"}
if self._api_key:
h["X-API-Key"] = self._api_key
return h
def _url(self, path: str) -> str:
return f"{self._endpoint}{path}"
def get(self, path: str, **kwargs) -> dict:
resp = self._httpx.get(
self._url(path), headers=self._headers(), timeout=_TIMEOUT, **kwargs
)
resp.raise_for_status()
return resp.json()
def post(self, path: str, payload: Optional[dict] = None, **kwargs) -> dict:
resp = self._httpx.post(
self._url(path), json=payload or {}, headers=self._headers(),
timeout=_TIMEOUT, **kwargs
)
resp.raise_for_status()
return resp.json()
def health(self) -> bool:
try:
resp = self._httpx.get(
self._url("/health"), timeout=3.0
)
return resp.status_code == 200
except Exception:
return False
# ---------------------------------------------------------------------------
# Tool schemas
# ---------------------------------------------------------------------------
SEARCH_SCHEMA = {
"name": "viking_search",
"description": (
"Semantic search over the OpenViking knowledge base. "
"Returns ranked results with viking:// URIs for deeper reading. "
"Use mode='deep' for complex queries that need reasoning across "
"multiple sources, 'fast' for simple lookups."
),
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "Search query."},
"mode": {
"type": "string", "enum": ["auto", "fast", "deep"],
"description": "Search depth (default: auto).",
},
"scope": {
"type": "string",
"description": "Viking URI prefix to scope search (e.g. 'viking://resources/docs/').",
},
"limit": {"type": "integer", "description": "Max results (default: 10)."},
},
"required": ["query"],
},
}
READ_SCHEMA = {
"name": "viking_read",
"description": (
"Read content at a viking:// URI. Three detail levels:\n"
" abstract — ~100 token summary (L0)\n"
" overview — ~2k token key points (L1)\n"
" full — complete content (L2)\n"
"Start with abstract/overview, only use full when you need details."
),
"parameters": {
"type": "object",
"properties": {
"uri": {"type": "string", "description": "viking:// URI to read."},
"level": {
"type": "string", "enum": ["abstract", "overview", "full"],
"description": "Detail level (default: overview).",
},
},
"required": ["uri"],
},
}
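The three detail levels in the schema above map onto distinct server calls in the provider's `_tool_read`. A pure-function sketch of that routing, which mirrors the plugin's own endpoint usage (treat the paths as what this plugin sends, not a published OpenViking spec):

```python
def route_read(uri: str, level: str = "overview"):
    """Map a viking_read detail level to an (endpoint, payload) pair.

    abstract -> dedicated L0 summary endpoint
    full     -> /api/v1/read with level="read" (complete content, L2)
    overview -> /api/v1/read with level="overview" (key points, L1)
    """
    if level == "abstract":
        return "/api/v1/read/abstract", {"uri": uri}
    if level == "full":
        return "/api/v1/read", {"uri": uri, "level": "read"}
    return "/api/v1/read", {"uri": uri, "level": "overview"}
```

Unknown levels fall through to `overview`, which matches the schema's stated default.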
BROWSE_SCHEMA = {
"name": "viking_browse",
"description": (
"Browse the OpenViking knowledge store like a filesystem.\n"
" list — show directory contents\n"
" tree — show hierarchy\n"
" stat — show metadata for a URI"
),
"parameters": {
"type": "object",
"properties": {
"action": {
"type": "string", "enum": ["tree", "list", "stat"],
"description": "Browse action.",
},
"path": {
"type": "string",
"description": "Viking URI path (default: viking://). Examples: 'viking://resources/', 'viking://user/memories/'.",
},
},
"required": ["action"],
},
}
REMEMBER_SCHEMA = {
"name": "viking_remember",
"description": (
"Explicitly store a fact or memory in the OpenViking knowledge base. "
"Use for important information the agent should remember long-term. "
"The system automatically categorizes and indexes the memory."
),
"parameters": {
"type": "object",
"properties": {
"content": {"type": "string", "description": "The information to remember."},
"category": {
"type": "string",
"enum": ["preference", "entity", "event", "case", "pattern"],
"description": "Memory category (default: auto-detected).",
},
},
"required": ["content"],
},
}
ADD_RESOURCE_SCHEMA = {
"name": "viking_add_resource",
"description": (
"Add a URL or document to the OpenViking knowledge base. "
"Supports web pages, GitHub repos, PDFs, markdown, code files. "
"The system automatically parses, indexes, and generates summaries."
),
"parameters": {
"type": "object",
"properties": {
"url": {"type": "string", "description": "URL or path of the resource to add."},
"reason": {
"type": "string",
"description": "Why this resource is relevant (improves search).",
},
},
"required": ["url"],
},
}
# ---------------------------------------------------------------------------
# MemoryProvider implementation
# ---------------------------------------------------------------------------
class OpenVikingMemoryProvider(MemoryProvider):
"""Full bidirectional memory via OpenViking context database."""
def __init__(self):
self._client: Optional[_VikingClient] = None
self._endpoint = ""
self._api_key = ""
self._session_id = ""
self._turn_count = 0
self._sync_thread: Optional[threading.Thread] = None
self._prefetch_result = ""
self._prefetch_lock = threading.Lock()
self._prefetch_thread: Optional[threading.Thread] = None
@property
def name(self) -> str:
return "openviking"
def is_available(self) -> bool:
"""Check if OpenViking endpoint is configured. No network calls."""
return bool(os.environ.get("OPENVIKING_ENDPOINT"))
def get_config_schema(self):
return [
{
"key": "endpoint",
"description": "OpenViking server URL",
"required": True,
"default": _DEFAULT_ENDPOINT,
"env_var": "OPENVIKING_ENDPOINT",
},
{
"key": "api_key",
"description": "OpenViking API key",
"secret": True,
"env_var": "OPENVIKING_API_KEY",
},
]
def initialize(self, session_id: str, **kwargs) -> None:
self._endpoint = os.environ.get("OPENVIKING_ENDPOINT", _DEFAULT_ENDPOINT)
self._api_key = os.environ.get("OPENVIKING_API_KEY", "")
self._session_id = session_id
self._turn_count = 0
try:
self._client = _VikingClient(self._endpoint, self._api_key)
if not self._client.health():
logger.warning("OpenViking server at %s is not reachable", self._endpoint)
self._client = None
except ImportError:
logger.warning("httpx not installed — OpenViking plugin disabled")
self._client = None
def system_prompt_block(self) -> str:
if not self._client:
return ""
# Provide brief info about the knowledge base
try:
# Check what's in the knowledge base via a root listing
resp = self._client.post("/api/v1/browse", {"action": "stat", "path": "viking://"})
result = resp.get("result", {})
children = result.get("children", 0)
if children == 0:
return ""
return (
"# OpenViking Knowledge Base\n"
f"Active. Endpoint: {self._endpoint}\n"
"Use viking_search to find information, viking_read for details "
"(abstract/overview/full), viking_browse to explore.\n"
"Use viking_remember to store facts, viking_add_resource to index URLs/docs."
)
except Exception:
return (
"# OpenViking Knowledge Base\n"
f"Active. Endpoint: {self._endpoint}\n"
"Use viking_search, viking_read, viking_browse, "
"viking_remember, viking_add_resource."
)
def prefetch(self, query: str, *, session_id: str = "") -> str:
"""Return prefetched results from the background thread."""
if self._prefetch_thread and self._prefetch_thread.is_alive():
self._prefetch_thread.join(timeout=3.0)
with self._prefetch_lock:
result = self._prefetch_result
self._prefetch_result = ""
if not result:
return ""
return f"## OpenViking Context\n{result}"
def queue_prefetch(self, query: str, *, session_id: str = "") -> None:
"""Fire a background search to pre-load relevant context."""
if not self._client or not query:
return
def _run():
try:
client = _VikingClient(self._endpoint, self._api_key)
resp = client.post("/api/v1/search/find", {
"query": query,
"top_k": 5,
})
result = resp.get("result", {})
parts = []
for ctx_type in ("memories", "resources"):
items = result.get(ctx_type, [])
for item in items[:3]:
uri = item.get("uri", "")
abstract = item.get("abstract", "")
score = item.get("score", 0)
if abstract:
parts.append(f"- [{score:.2f}] {abstract} ({uri})")
if parts:
with self._prefetch_lock:
self._prefetch_result = "\n".join(parts)
except Exception as e:
logger.debug("OpenViking prefetch failed: %s", e)
self._prefetch_thread = threading.Thread(
target=_run, daemon=True, name="openviking-prefetch"
)
self._prefetch_thread.start()
def sync_turn(self, user_content: str, assistant_content: str, *, session_id: str = "") -> None:
"""Record the conversation turn in OpenViking's session (non-blocking)."""
if not self._client:
return
self._turn_count += 1
def _sync():
try:
client = _VikingClient(self._endpoint, self._api_key)
sid = self._session_id
# Add user message
client.post(f"/api/v1/sessions/{sid}/messages", {
"role": "user",
"content": user_content[:4000], # trim very long messages
})
# Add assistant message
client.post(f"/api/v1/sessions/{sid}/messages", {
"role": "assistant",
"content": assistant_content[:4000],
})
except Exception as e:
logger.debug("OpenViking sync_turn failed: %s", e)
# Wait for any previous sync to finish before starting a new one
if self._sync_thread and self._sync_thread.is_alive():
self._sync_thread.join(timeout=5.0)
self._sync_thread = threading.Thread(
target=_sync, daemon=True, name="openviking-sync"
)
self._sync_thread.start()
def on_session_end(self, messages: List[Dict[str, Any]]) -> None:
"""Commit the session to trigger memory extraction.
OpenViking automatically extracts 6 categories of memories:
profile, preferences, entities, events, cases, and patterns.
"""
if not self._client or self._turn_count == 0:
return
# Wait for any pending sync to finish first
if self._sync_thread and self._sync_thread.is_alive():
self._sync_thread.join(timeout=10.0)
try:
self._client.post(f"/api/v1/sessions/{self._session_id}/commit")
logger.info("OpenViking session %s committed (%d turns)", self._session_id, self._turn_count)
except Exception as e:
logger.warning("OpenViking session commit failed: %s", e)
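The session lifecycle above (`sync_turn` appending paired messages, then one commit in `on_session_end`) can be illustrated with a record-only stand-in for the HTTP client. A sketch (`FakeViking` and `run_session` are illustrative names, not plugin code):

```python
class FakeViking:
    """Record-only stand-in for the OpenViking HTTP client."""
    def __init__(self):
        self.calls = []
    def post(self, path, payload=None):
        self.calls.append((path, payload))
        return {}

def run_session(client, sid: str, turns):
    """Replay the lifecycle: user+assistant message per turn, then a
    single commit that triggers server-side memory extraction."""
    for user, assistant in turns:
        client.post(f"/api/v1/sessions/{sid}/messages",
                    {"role": "user", "content": user})
        client.post(f"/api/v1/sessions/{sid}/messages",
                    {"role": "assistant", "content": assistant})
    client.post(f"/api/v1/sessions/{sid}/commit")
```

The ordering is the important invariant: the provider joins any pending sync thread before committing, so the commit always sees every recorded turn.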
def on_memory_write(self, action: str, target: str, content: str) -> None:
"""Mirror built-in memory writes to OpenViking as explicit memories."""
if not self._client or action != "add" or not content:
return
def _write():
try:
client = _VikingClient(self._endpoint, self._api_key)
# Add as a user message with memory context so the commit
# picks it up as an explicit memory during extraction
client.post(f"/api/v1/sessions/{self._session_id}/messages", {
"role": "user",
"parts": [
{"type": "text", "text": f"[Memory note — {target}] {content}"},
],
})
except Exception as e:
logger.debug("OpenViking memory mirror failed: %s", e)
t = threading.Thread(target=_write, daemon=True, name="openviking-memwrite")
t.start()
def get_tool_schemas(self) -> List[Dict[str, Any]]:
return [SEARCH_SCHEMA, READ_SCHEMA, BROWSE_SCHEMA, REMEMBER_SCHEMA, ADD_RESOURCE_SCHEMA]
def handle_tool_call(self, tool_name: str, args: dict, **kwargs) -> str:
if not self._client:
return json.dumps({"error": "OpenViking server not connected"})
try:
if tool_name == "viking_search":
return self._tool_search(args)
elif tool_name == "viking_read":
return self._tool_read(args)
elif tool_name == "viking_browse":
return self._tool_browse(args)
elif tool_name == "viking_remember":
return self._tool_remember(args)
elif tool_name == "viking_add_resource":
return self._tool_add_resource(args)
return json.dumps({"error": f"Unknown tool: {tool_name}"})
except Exception as e:
return json.dumps({"error": str(e)})
def shutdown(self) -> None:
# Wait for background threads to finish
for t in (self._sync_thread, self._prefetch_thread):
if t and t.is_alive():
t.join(timeout=5.0)
# -- Tool implementations ------------------------------------------------
def _tool_search(self, args: dict) -> str:
query = args.get("query", "")
if not query:
return json.dumps({"error": "query is required"})
payload: Dict[str, Any] = {"query": query}
mode = args.get("mode", "auto")
if mode != "auto":
payload["mode"] = mode
if args.get("scope"):
payload["target_uri"] = args["scope"]
if args.get("limit"):
payload["top_k"] = args["limit"]
resp = self._client.post("/api/v1/search/find", payload)
result = resp.get("result", {})
# Format results for the model — keep it concise
formatted = []
for ctx_type in ("memories", "resources", "skills"):
items = result.get(ctx_type, [])
for item in items:
entry = {
"uri": item.get("uri", ""),
"type": {"memories": "memory", "resources": "resource", "skills": "skill"}.get(ctx_type, ctx_type),
"score": round(item.get("score", 0), 3),
"abstract": item.get("abstract", ""),
}
if item.get("relations"):
entry["related"] = [r.get("uri") for r in item["relations"][:3]]
formatted.append(entry)
return json.dumps({
"results": formatted,
"total": result.get("total", len(formatted)),
}, ensure_ascii=False)
def _tool_read(self, args: dict) -> str:
uri = args.get("uri", "")
if not uri:
return json.dumps({"error": "uri is required"})
level = args.get("level", "overview")
# Map our level names to OpenViking endpoints
if level == "abstract":
resp = self._client.post("/api/v1/read/abstract", {"uri": uri})
elif level == "full":
resp = self._client.post("/api/v1/read", {"uri": uri, "level": "read"})
else: # overview
resp = self._client.post("/api/v1/read", {"uri": uri, "level": "overview"})
result = resp.get("result", {})
content = result.get("content", "")
# Truncate very long content to avoid flooding the context
if len(content) > 8000:
content = content[:8000] + "\n\n[... truncated, use a more specific URI or abstract level]"
return json.dumps({
"uri": uri,
"level": level,
"content": content,
}, ensure_ascii=False)
def _tool_browse(self, args: dict) -> str:
action = args.get("action", "list")
path = args.get("path", "viking://")
resp = self._client.post("/api/v1/browse", {
"action": action,
"path": path,
})
result = resp.get("result", {})
# Format for readability
if action == "list" and "entries" in result:
entries = []
for e in result["entries"][:50]: # cap at 50 entries
entries.append({
"name": e.get("name", ""),
"uri": e.get("uri", ""),
"type": "dir" if e.get("is_dir") else "file",
})
return json.dumps({"path": path, "entries": entries}, ensure_ascii=False)
return json.dumps(result, ensure_ascii=False)
def _tool_remember(self, args: dict) -> str:
content = args.get("content", "")
if not content:
return json.dumps({"error": "content is required"})
# Store as a session message that will be extracted during commit.
# The category hint helps OpenViking's extraction classify correctly.
category = args.get("category", "")
text = f"[Remember] {content}"
if category:
text = f"[Remember — {category}] {content}"
self._client.post(f"/api/v1/sessions/{self._session_id}/messages", {
"role": "user",
"parts": [
{"type": "text", "text": text},
],
})
return json.dumps({
"status": "stored",
"message": "Memory recorded. Will be extracted and indexed on session commit.",
})
def _tool_add_resource(self, args: dict) -> str:
url = args.get("url", "")
if not url:
return json.dumps({"error": "url is required"})
payload: Dict[str, Any] = {"path": url}
if args.get("reason"):
payload["reason"] = args["reason"]
resp = self._client.post("/api/v1/resources", payload)
result = resp.get("result", {})
return json.dumps({
"status": "added",
"root_uri": result.get("root_uri", ""),
"message": "Resource queued for processing. Use viking_search after a moment to find it.",
}, ensure_ascii=False)
# ---------------------------------------------------------------------------
# Plugin entry point
# ---------------------------------------------------------------------------
def register(ctx) -> None:
"""Register OpenViking as a memory provider plugin."""
ctx.register_memory_provider(OpenVikingMemoryProvider())

@@ -0,0 +1,9 @@
name: openviking
version: 2.0.0
description: "OpenViking context database — session-managed memory with automatic extraction, tiered retrieval, and filesystem-style knowledge browsing."
pip_dependencies:
- httpx
requires_env:
- OPENVIKING_ENDPOINT
hooks:
- on_session_end

@@ -0,0 +1,40 @@
# RetainDB Memory Provider
Cloud memory API with hybrid search (Vector + BM25 + Reranking) and 7 memory types.
## Requirements
- RetainDB account ($20/month) from [retaindb.com](https://www.retaindb.com)
- `pip install requests`
## Setup
```bash
hermes memory setup # select "retaindb"
```
Or manually:
```bash
hermes config set memory.provider retaindb
echo "RETAINDB_API_KEY=your-key" >> ~/.hermes/.env
```
## Config
All config via environment variables in `.env`:
| Env Var | Default | Description |
|---------|---------|-------------|
| `RETAINDB_API_KEY` | (required) | API key |
| `RETAINDB_BASE_URL` | `https://api.retaindb.com` | API endpoint |
| `RETAINDB_PROJECT` | auto (profile-scoped) | Project identifier |
## Tools
| Tool | Description |
|------|-------------|
| `retaindb_profile` | User's stable profile |
| `retaindb_search` | Semantic search |
| `retaindb_context` | Task-relevant context |
| `retaindb_remember` | Store a fact with type + importance |
| `retaindb_forget` | Delete a memory by ID |

@@ -0,0 +1,302 @@
"""RetainDB memory plugin — MemoryProvider interface.
Cross-session memory via RetainDB cloud API. Durable write-behind queue,
semantic search with deduplication, and user profile retrieval.
Original PR #2732 by Alinxus, adapted to MemoryProvider ABC.
Config via environment variables:
RETAINDB_API_KEY API key (required)
RETAINDB_BASE_URL API endpoint (default: https://api.retaindb.com)
RETAINDB_PROJECT Project identifier (default: auto-derived, profile-scoped hermes-&lt;profile&gt;)
"""
from __future__ import annotations
import json
import logging
import os
import threading
from typing import Any, Dict, List
from agent.memory_provider import MemoryProvider
logger = logging.getLogger(__name__)
_DEFAULT_BASE_URL = "https://api.retaindb.com"
# ---------------------------------------------------------------------------
# Tool schemas
# ---------------------------------------------------------------------------
PROFILE_SCHEMA = {
"name": "retaindb_profile",
"description": "Get the user's stable profile — preferences, facts, and patterns.",
"parameters": {"type": "object", "properties": {}, "required": []},
}
SEARCH_SCHEMA = {
"name": "retaindb_search",
"description": (
"Semantic search across stored memories. Returns ranked results "
"with relevance scores."
),
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "What to search for."},
"top_k": {"type": "integer", "description": "Max results (default: 8, max: 20)."},
},
"required": ["query"],
},
}
CONTEXT_SCHEMA = {
"name": "retaindb_context",
"description": "Synthesized 'what matters now' context block for the current task.",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "Current task or question."},
},
"required": ["query"],
},
}
REMEMBER_SCHEMA = {
"name": "retaindb_remember",
"description": "Persist an explicit fact or preference to long-term memory.",
"parameters": {
"type": "object",
"properties": {
"content": {"type": "string", "description": "The fact to remember."},
"memory_type": {
"type": "string",
"enum": ["preference", "fact", "decision", "context"],
"description": "Category (default: fact).",
},
"importance": {
"type": "number",
"description": "Importance 0-1 (default: 0.5).",
},
},
"required": ["content"],
},
}
FORGET_SCHEMA = {
"name": "retaindb_forget",
"description": "Delete a specific memory by ID.",
"parameters": {
"type": "object",
"properties": {
"memory_id": {"type": "string", "description": "Memory ID to delete."},
},
"required": ["memory_id"],
},
}
# ---------------------------------------------------------------------------
# MemoryProvider implementation
# ---------------------------------------------------------------------------
class RetainDBMemoryProvider(MemoryProvider):
"""RetainDB cloud memory with write-behind queue and semantic search."""
def __init__(self):
self._api_key = ""
self._base_url = _DEFAULT_BASE_URL
self._project = "hermes"
        self._user_id = ""
        self._session_id = ""
self._prefetch_result = ""
self._prefetch_lock = threading.Lock()
self._prefetch_thread = None
self._sync_thread = None
@property
def name(self) -> str:
return "retaindb"
def is_available(self) -> bool:
return bool(os.environ.get("RETAINDB_API_KEY"))
def get_config_schema(self):
        return [
            {
                "key": "api_key",
                "description": "RetainDB API key",
                "secret": True,
                "required": True,
                "env_var": "RETAINDB_API_KEY",
                "url": "https://retaindb.com",
            },
            {"key": "base_url", "description": "API endpoint", "default": "https://api.retaindb.com"},
            {"key": "project", "description": "Project identifier", "default": "hermes"},
        ]
def _headers(self) -> dict:
return {
"Authorization": f"Bearer {self._api_key}",
"Content-Type": "application/json",
}
def _api(self, method: str, path: str, **kwargs):
"""Make an API call to RetainDB."""
import requests
url = f"{self._base_url}{path}"
resp = requests.request(method, url, headers=self._headers(), timeout=30, **kwargs)
resp.raise_for_status()
return resp.json()
def initialize(self, session_id: str, **kwargs) -> None:
self._api_key = os.environ.get("RETAINDB_API_KEY", "")
self._base_url = os.environ.get("RETAINDB_BASE_URL", _DEFAULT_BASE_URL)
self._user_id = kwargs.get("user_id", "default")
self._session_id = session_id
# Derive profile-scoped project name so different profiles don't
# share server-side memory. Explicit RETAINDB_PROJECT always wins.
explicit_project = os.environ.get("RETAINDB_PROJECT")
if explicit_project:
self._project = explicit_project
else:
hermes_home = kwargs.get("hermes_home", "")
profile_name = os.path.basename(hermes_home) if hermes_home else ""
# Default profile (~/.hermes) → "hermes"; named profiles → "hermes-<name>"
if profile_name and profile_name != ".hermes":
self._project = f"hermes-{profile_name}"
else:
self._project = "hermes"
def system_prompt_block(self) -> str:
return (
"# RetainDB Memory\n"
f"Active. Project: {self._project}.\n"
"Use retaindb_search to find memories, retaindb_remember to store facts, "
"retaindb_profile for a user overview, retaindb_context for task-relevant context."
)
def prefetch(self, query: str, *, session_id: str = "") -> str:
if self._prefetch_thread and self._prefetch_thread.is_alive():
self._prefetch_thread.join(timeout=3.0)
with self._prefetch_lock:
result = self._prefetch_result
self._prefetch_result = ""
if not result:
return ""
return f"## RetainDB Memory\n{result}"
def queue_prefetch(self, query: str, *, session_id: str = "") -> None:
def _run():
try:
data = self._api("POST", "/v1/recall", json={
"project": self._project,
"query": query,
"user_id": self._user_id,
"top_k": 5,
})
results = data.get("results", [])
if results:
lines = [r.get("content", "") for r in results if r.get("content")]
with self._prefetch_lock:
self._prefetch_result = "\n".join(f"- {l}" for l in lines)
except Exception as e:
logger.debug("RetainDB prefetch failed: %s", e)
self._prefetch_thread = threading.Thread(target=_run, daemon=True, name="retaindb-prefetch")
self._prefetch_thread.start()
def sync_turn(self, user_content: str, assistant_content: str, *, session_id: str = "") -> None:
"""Ingest conversation turn in background (non-blocking)."""
def _sync():
try:
self._api("POST", "/v1/ingest", json={
"project": self._project,
"user_id": self._user_id,
"session_id": self._session_id,
"messages": [
{"role": "user", "content": user_content},
{"role": "assistant", "content": assistant_content},
],
})
except Exception as e:
logger.warning("RetainDB sync failed: %s", e)
if self._sync_thread and self._sync_thread.is_alive():
self._sync_thread.join(timeout=5.0)
self._sync_thread = threading.Thread(target=_sync, daemon=True, name="retaindb-sync")
self._sync_thread.start()
def get_tool_schemas(self) -> List[Dict[str, Any]]:
return [PROFILE_SCHEMA, SEARCH_SCHEMA, CONTEXT_SCHEMA, REMEMBER_SCHEMA, FORGET_SCHEMA]
def handle_tool_call(self, tool_name: str, args: dict, **kwargs) -> str:
try:
if tool_name == "retaindb_profile":
data = self._api("GET", f"/v1/profile/{self._project}/{self._user_id}")
return json.dumps(data)
elif tool_name == "retaindb_search":
query = args.get("query", "")
if not query:
return json.dumps({"error": "query is required"})
data = self._api("POST", "/v1/search", json={
"project": self._project,
"user_id": self._user_id,
"query": query,
"top_k": min(int(args.get("top_k", 8)), 20),
})
return json.dumps(data)
elif tool_name == "retaindb_context":
query = args.get("query", "")
if not query:
return json.dumps({"error": "query is required"})
data = self._api("POST", "/v1/recall", json={
"project": self._project,
"user_id": self._user_id,
"query": query,
"top_k": 5,
})
return json.dumps(data)
elif tool_name == "retaindb_remember":
content = args.get("content", "")
if not content:
return json.dumps({"error": "content is required"})
data = self._api("POST", "/v1/remember", json={
"project": self._project,
"user_id": self._user_id,
"content": content,
"memory_type": args.get("memory_type", "fact"),
"importance": float(args.get("importance", 0.5)),
})
return json.dumps(data)
elif tool_name == "retaindb_forget":
memory_id = args.get("memory_id", "")
if not memory_id:
return json.dumps({"error": "memory_id is required"})
data = self._api("DELETE", f"/v1/memory/{memory_id}")
return json.dumps(data)
return json.dumps({"error": f"Unknown tool: {tool_name}"})
except Exception as e:
return json.dumps({"error": str(e)})
def on_memory_write(self, action: str, target: str, content: str) -> None:
if action == "add":
try:
self._api("POST", "/v1/remember", json={
"project": self._project,
"user_id": self._user_id,
"content": content,
"memory_type": "preference" if target == "user" else "fact",
})
except Exception as e:
logger.debug("RetainDB memory bridge failed: %s", e)
def shutdown(self) -> None:
for t in (self._prefetch_thread, self._sync_thread):
if t and t.is_alive():
t.join(timeout=5.0)
def register(ctx) -> None:
"""Register RetainDB as a memory provider plugin."""
ctx.register_memory_provider(RetainDBMemoryProvider())


@ -0,0 +1,7 @@
name: retaindb
version: 1.0.0
description: "RetainDB — cloud memory API with hybrid search and 7 memory types."
pip_dependencies:
- requests
requires_env:
- RETAINDB_API_KEY