fix(tools): bound _read_tracker sub-containers + prune _completion_consumed (#11839)

Two accretion-over-time leaks that compound over long CLI / gateway
lifetimes.  Both were flagged in the memory-leak audit.

## file_tools._read_tracker

_read_tracker[task_id] holds three sub-containers that grew unbounded:

  read_history     set of (path, offset, limit) tuples — 1 per unique read
  dedup            dict of (path, offset, limit) → mtime — same growth pattern
  read_timestamps  dict of resolved_path → mtime — 1 per unique path

A CLI session uses one stable task_id for its lifetime, so these were
uncapped.  A 10k-read session accumulated ~1.5MB of tracker state that
the tool no longer needed (only the most recent reads are relevant for
dedup, consecutive-loop detection, and write/patch external-edit
warnings).

Fix: _cap_read_tracker_data() enforces hard caps on each container
after every add.  Defaults: read_history=500, dedup=1000,
read_timestamps=1000.  Eviction is insertion-order (Python 3.7+ dict
guarantee) for the dicts; arbitrary for the set (which only feeds
diagnostic summaries).
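
The dict eviction relies only on the insertion-order guarantee; a minimal standalone demo (the helper name is illustrative, not from the patch):

```python
# Python 3.7+ dicts iterate in insertion order, so next(iter(d)) is
# always the oldest surviving key; popping it repeatedly gives FIFO
# eviction with no auxiliary bookkeeping.
def evict_oldest(d: dict, cap: int) -> None:
    while len(d) > cap:
        d.pop(next(iter(d)))

dedup = {("a.py", 1, 500): 1.0, ("b.py", 1, 500): 2.0, ("c.py", 1, 500): 3.0}
evict_oldest(dedup, cap=2)
# the oldest key, ("a.py", 1, 500), is evicted first; the two most
# recently inserted keys survive
```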

## process_registry._completion_consumed

Module-level set that recorded every session_id ever polled / waited /
logged.  No pruning.  Each entry is ~20 bytes, so the absolute leak is
small, but on a gateway processing thousands of background commands
per day the set grows until process exit.

Fix: _prune_if_needed() now discards _completion_consumed entries
alongside the session dict evictions it already performs (both the
TTL-based prune and the LRU-over-cap prune).  Adds a final
belt-and-suspenders pass that drops any dangling entries whose
session_id no longer appears in _running or _finished.

Tests: tests/tools/test_accretion_caps.py — 9 cases
  * Each container bound respected, oldest evicted
  * No-op when under cap (no unnecessary work)
  * Handles missing sub-containers without crashing
  * Live read_file_tool path enforces caps end-to-end
  * _completion_consumed pruned on TTL expiry
  * _completion_consumed pruned on LRU eviction
  * Dangling entries (no backing session) cleared

Broader suite: 3486 tests across tests/tools and tests/cli pass.  The
single flake (test_alias_command_passes_args) also reproduces on
unchanged main; it is known cross-test pollution under suite-order load.
Teknium 2026-04-17 15:53:57 -07:00 committed by GitHub
parent 0a83187801
commit 3f43aec15d
3 changed files with 266 additions and 0 deletions


@@ -148,6 +148,58 @@ _file_ops_cache: dict = {}
_read_tracker_lock = threading.Lock()
_read_tracker: dict = {}

# Per-task bounds for the containers inside each _read_tracker[task_id].
# A CLI session uses one stable task_id for its lifetime; without these
# caps, a 10k-read session would accumulate ~1.5MB of dict/set state that
# is never referenced again (only the most recent reads matter for dedup,
# loop detection, and external-edit warnings). Hard caps bound the
# accretion to a few hundred KB regardless of session length.
_READ_HISTORY_CAP = 500      # set; used only by get_read_files_summary
_DEDUP_CAP = 1000            # dict; skip-identical-reread guard
_READ_TIMESTAMPS_CAP = 1000  # dict; external-edit detection for write/patch


def _cap_read_tracker_data(task_data: dict) -> None:
    """Enforce size caps on the per-task read-tracker sub-containers.

    Must be called with ``_read_tracker_lock`` held. Eviction policy:

    * ``read_history`` (set): pop arbitrary entries on overflow. This
      is fine because the set only feeds diagnostic summaries; losing
      old entries just trims the summary's tail.
    * ``dedup`` / ``read_timestamps`` (dict): pop oldest by insertion
      order (Python 3.7+ dicts). Evicted entries lose their dedup
      skip on a future re-read (the file gets re-sent once) and
      external-edit mtime comparison (the write/patch falls back to
      a non-mtime check). Both are graceful degradations, not bugs.
    """
    rh = task_data.get("read_history")
    if rh is not None and len(rh) > _READ_HISTORY_CAP:
        excess = len(rh) - _READ_HISTORY_CAP
        for _ in range(excess):
            try:
                rh.pop()
            except KeyError:
                break
    dedup = task_data.get("dedup")
    if dedup is not None and len(dedup) > _DEDUP_CAP:
        excess = len(dedup) - _DEDUP_CAP
        for _ in range(excess):
            try:
                dedup.pop(next(iter(dedup)))
            except (StopIteration, KeyError):
                break
    ts = task_data.get("read_timestamps")
    if ts is not None and len(ts) > _READ_TIMESTAMPS_CAP:
        excess = len(ts) - _READ_TIMESTAMPS_CAP
        for _ in range(excess):
            try:
                ts.pop(next(iter(ts)))
            except (StopIteration, KeyError):
                break


def _get_file_ops(task_id: str = "default") -> ShellFileOperations:
    """Get or create ShellFileOperations for a terminal environment.
@@ -426,6 +478,10 @@ def read_file_tool(path: str, offset: int = 1, limit: int = 500, task_id: str =
        except OSError:
            pass  # Can't stat — skip tracking for this entry

        # Bound the per-task containers so a long CLI session doesn't
        # accumulate megabytes of dict/set state. See _cap_read_tracker_data.
        _cap_read_tracker_data(task_data)

        if count >= 4:
            # Hard block: stop returning content to break the loop
            return json.dumps({
@@ -505,6 +561,7 @@ def _update_read_timestamp(filepath: str, task_id: str) -> None:
        task_data = _read_tracker.get(task_id)
        if task_data is not None:
            task_data.setdefault("read_timestamps", {})[resolved] = current_mtime
            _cap_read_tracker_data(task_data)


def _check_file_staleness(filepath: str, task_id: str) -> str | None: