Mirror of https://github.com/NousResearch/hermes-agent.git (synced 2026-04-25 00:51:20 +00:00)
The _client_cache used event loop id() as part of the cache key, so every new worker-thread event loop created a new entry for the same provider config. In long-running gateways where threads are recycled frequently, this caused unbounded cache growth: each stale entry held an unclosed AsyncOpenAI client with its httpx connection pool, eventually exhausting file descriptors.

Fix: remove loop_id from the cache key and instead validate, on each async cache hit, that the cached loop is the current, open loop. If the loop changed or was closed, the stale entry is replaced in place rather than creating an additional entry. This bounds cache growth to at most one entry per unique provider config.

Also adds a _CLIENT_CACHE_MAX_SIZE (64) safety belt with FIFO eviction as defense-in-depth against any remaining unbounded growth. Cross-loop safety is preserved: different event loops still get different client instances (validated by the existing test suite).

Closes #10200
Files:
- __init__.py
- anthropic_adapter.py
- auxiliary_client.py
- context_compressor.py
- context_engine.py
- context_references.py
- copilot_acp_client.py
- credential_pool.py
- display.py
- error_classifier.py
- insights.py
- manual_compression_feedback.py
- memory_manager.py
- memory_provider.py
- model_metadata.py
- models_dev.py
- prompt_builder.py
- prompt_caching.py
- rate_limit_tracker.py
- redact.py
- retry_utils.py
- skill_commands.py
- skill_utils.py
- smart_model_routing.py
- subdirectory_hints.py
- title_generator.py
- trajectory.py
- usage_pricing.py