mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-04-25 00:51:20 +00:00
fix(memory): skip external-provider sync on interrupted turns (#15218)
``run_conversation`` was calling ``memory_manager.sync_all(original_user_message, final_response)`` at the end of every turn where both args were present. That gate didn't consider the ``interrupted`` local flag, so an external memory backend received partial assistant output, aborted tool chains, or mid-stream resets as durable conversational truth. Downstream recall then treated the not-yet-real state as if the user had seen it complete, poisoning the trust boundary between "what the user took away from the turn" and "what Hermes was in the middle of producing when the interrupt hit".

Extracted the inline sync block into a new private method ``AIAgent._sync_external_memory_for_turn(original_user_message, final_response, interrupted)`` so the interrupt guard is a single visible check at the top of the method instead of hidden in a boolean-and at the call site. That also gives tests a clean seam to assert on — the pre-fix layout buried the logic inside the 3,000-line ``run_conversation`` function where no focused test could reach it.

The new method encodes three independent skip conditions:

1. ``interrupted`` → skip entirely (the #15218 fix). Applies even when ``final_response`` and ``original_user_message`` happen to be populated — an interrupt may have landed between a streamed reply and the next tool call, so the strings on disk are not actually the turn the user took away.
2. No memory manager / no ``final_response`` / no user message → preserve existing skip behaviour (nothing new for providerless sessions, system-initiated refreshes, tool-only turns that never resolved, etc.).
3. ``sync_all`` / ``queue_prefetch_all`` exceptions → swallow. External memory providers are strictly best-effort; a misconfigured or offline backend must never block the user from seeing their response.
The prefetch side-effect is gated on the same interrupt flag: the user's next message is almost certainly a retry of the same intent, and a prefetch keyed on the interrupted turn would fire against stale context.

### Tests (16 new, all passing on py3.11 venv)

``tests/run_agent/test_memory_sync_interrupted.py`` exercises the helper directly on a bare ``AIAgent`` (the ``__new__`` pattern that the interrupt-propagation tests already use). Coverage:

- Interrupted turn with full-looking response → no sync (the fix)
- Interrupted turn with long assistant output → no sync (the interrupt could have landed mid-stream; strings-on-disk lie)
- Normal completed turn → ``sync_all`` + ``queue_prefetch_all`` both called with the right args (regression guard for the positive path)
- No ``final_response`` / no ``user_message`` / no memory manager → existing pre-fix skip paths still apply
- ``sync_all`` raises → exception swallowed, prefetch still attempted
- ``queue_prefetch_all`` raises → exception swallowed after sync succeeded
- 8-case parametrised matrix across (interrupted × final_response × original_user_message) asserts sync fires iff ``interrupted=False`` AND both strings are non-empty

Closes #15218

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
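The skip logic and the bare-instance test seam can be sketched standalone. This is a minimal re-implementation for illustration (the real method lives on ``AIAgent`` in ``run_agent.py``; the ``MagicMock`` wiring is an assumption about the test file's shape, not a copy of it):

```python
from unittest.mock import MagicMock

class AIAgent:
    # Standalone re-implementation of the helper, for illustration only.
    def _sync_external_memory_for_turn(self, *, original_user_message,
                                       final_response, interrupted):
        if interrupted:
            return  # the #15218 fix: never mirror a partial turn
        if not (self._memory_manager and final_response and original_user_message):
            return  # pre-existing skip paths (no provider / empty turn)
        try:
            self._memory_manager.sync_all(original_user_message, final_response)
            self._memory_manager.queue_prefetch_all(original_user_message)
        except Exception:
            pass  # external memory is strictly best-effort

# Bare-instance pattern: skip __init__, wire only what the helper touches.
agent = AIAgent.__new__(AIAgent)
agent._memory_manager = MagicMock()

# Interrupted turn with full-looking strings: nothing reaches the provider.
agent._sync_external_memory_for_turn(
    original_user_message="hi", final_response="partial output", interrupted=True)
assert not agent._memory_manager.sync_all.called

# Normal completed turn: sync and prefetch both fire with the clean args.
agent._sync_external_memory_for_turn(
    original_user_message="hi", final_response="done", interrupted=False)
agent._memory_manager.sync_all.assert_called_once_with("hi", "done")
agent._memory_manager.queue_prefetch_all.assert_called_once_with("hi")
```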
This commit is contained in:
parent
fd10463069
commit
00c3d848d8
2 changed files with 237 additions and 8 deletions
56
run_agent.py
```diff
@@ -4162,6 +4162,49 @@ class AIAgent:
             except Exception:
                 pass
 
+    def _sync_external_memory_for_turn(
+        self,
+        *,
+        original_user_message: Any,
+        final_response: Any,
+        interrupted: bool,
+    ) -> None:
+        """Mirror a completed turn into external memory providers.
+
+        Called at the end of ``run_conversation`` with the cleaned user
+        message (``original_user_message``) and the finalised assistant
+        response. The external memory backend gets both ``sync_all`` (to
+        persist the exchange) and ``queue_prefetch_all`` (to start
+        warming context for the next turn) in one shot.
+
+        Uses ``original_user_message`` rather than ``user_message``
+        because the latter may carry injected skill content that bloats
+        or breaks provider queries.
+
+        Interrupted turns are skipped entirely (#15218). A partial
+        assistant output, an aborted tool chain, or a mid-stream reset
+        is not durable conversational truth — mirroring it into an
+        external memory backend pollutes future recall with state the
+        user never saw completed. The prefetch is gated on the same
+        flag: the user's next message is almost certainly a retry of
+        the same intent, and a prefetch keyed on the interrupted turn
+        would fire against stale context.
+
+        Normal completed turns still sync as before. The whole body is
+        wrapped in ``try/except Exception`` because external memory
+        providers are strictly best-effort — a misconfigured or offline
+        backend must not block the user from seeing their response.
+        """
+        if interrupted:
+            return
+        if not (self._memory_manager and final_response and original_user_message):
+            return
+        try:
+            self._memory_manager.sync_all(original_user_message, final_response)
+            self._memory_manager.queue_prefetch_all(original_user_message)
+        except Exception:
+            pass
+
     def release_clients(self) -> None:
         """Release LLM client resources WITHOUT tearing down session tool state.
```
```diff
@@ -12524,14 +12567,11 @@ class AIAgent:
                 self._iters_since_skill = 0
 
             # External memory provider: sync the completed turn + queue next prefetch.
-            # Use original_user_message (clean input) — user_message may contain
-            # injected skill content that bloats / breaks provider queries.
-            if self._memory_manager and final_response and original_user_message:
-                try:
-                    self._memory_manager.sync_all(original_user_message, final_response)
-                    self._memory_manager.queue_prefetch_all(original_user_message)
-                except Exception:
-                    pass
+            self._sync_external_memory_for_turn(
+                original_user_message=original_user_message,
+                final_response=final_response,
+                interrupted=interrupted,
+            )
 
             # Background memory/skill review — runs AFTER the response is delivered
             # so it never competes with the user's task for model attention.
```