fix: persistent event loop in _run_async prevents 'Event loop is closed' (#2190)

Cherry-picked from PR #2146 by @crazywriter1. Fixes #2104.

asyncio.run() creates and closes a fresh event loop on each call. Cached
httpx/AsyncOpenAI clients bound to the dead loop crash on GC with
'Event loop is closed'. This hit vision_analyze on first use in the CLI.
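
The failure mode is easy to reproduce in isolation: any object that captures the loop inside asyncio.run() is left holding a closed loop afterwards (minimal sketch, independent of httpx):

```python
import asyncio

# asyncio.run() spins up a fresh loop and *closes* it on exit, so
# anything that captured that loop is left holding a dead reference.
captured = {}

async def remember_loop():
    captured["loop"] = asyncio.get_running_loop()

asyncio.run(remember_loop())
print(captured["loop"].is_closed())  # True: the loop is already dead
```

A cached client whose transport holds such a reference then fails at GC time, exactly as described above.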

Two-layer fix:
- model_tools._run_async(): replace asyncio.run() with persistent
  loop via _get_tool_loop() + run_until_complete()
- auxiliary_client._get_cached_client(): track which loop created
  each async client, discard stale entries if loop is closed
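
The second layer can be sketched like this; the names (`get_cached_client`, `factory`) are illustrative, not the actual auxiliary_client API:

```python
import asyncio

# Hypothetical sketch of a loop-aware client cache: remember which
# loop created each client, rebuild if that loop has been closed.
_client_cache = {}  # key -> (client, loop that created it)

async def get_cached_client(key, factory):
    entry = _client_cache.get(key)
    if entry is not None:
        client, loop = entry
        if not loop.is_closed():
            return client          # cache hit on a live loop
        del _client_cache[key]     # stale: creating loop is dead
    client = factory()
    _client_cache[key] = (client, asyncio.get_running_loop())
    return client

# Simulate the CLI bug: the first loop dies, the next call must rebuild.
builds = []
factory = lambda: builds.append(1) or object()

loop1 = asyncio.new_event_loop()
first = loop1.run_until_complete(get_cached_client("vision", factory))
again = loop1.run_until_complete(get_cached_client("vision", factory))
loop1.close()

loop2 = asyncio.new_event_loop()
rebuilt = loop2.run_until_complete(get_cached_client("vision", factory))
loop2.close()

print(first is again, len(builds), rebuilt is first)  # True 2 False
```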

Adds 6 regression tests covering loop lifecycle, loop reuse, and the
full vision dispatch chain.

Co-authored-by: Test <test@test.com>
Teknium committed 2026-03-20 09:44:50 -07:00 (via GitHub)
parent 66a1942524
commit 7a427d7b03
3 changed files with 261 additions and 5 deletions


@@ -24,6 +24,7 @@ import json
 import asyncio
 import os
 import logging
+import threading
 
 from typing import Dict, Any, List, Optional, Tuple
 from tools.registry import registry
@@ -36,6 +37,25 @@ logger = logging.getLogger(__name__)
 # Async Bridging (single source of truth -- used by registry.dispatch too)
 # =============================================================================
+_tool_loop = None  # persistent loop for the main (CLI) thread
+_tool_loop_lock = threading.Lock()
+
+
+def _get_tool_loop():
+    """Return a long-lived event loop for running async tool handlers.
+
+    Using a persistent loop (instead of asyncio.run(), which creates and
+    *closes* a fresh loop every time) prevents "Event loop is closed"
+    errors that occur when cached httpx/AsyncOpenAI clients attempt to
+    close their transport on a dead loop during garbage collection.
+    """
+    global _tool_loop
+    with _tool_loop_lock:
+        if _tool_loop is None or _tool_loop.is_closed():
+            _tool_loop = asyncio.new_event_loop()
+        return _tool_loop
+
+
 def _run_async(coro):
     """Run an async coroutine from a sync context.
@@ -44,6 +64,10 @@ def _run_async(coro):
     disposable thread so asyncio.run() can create its own loop without
     conflicting.
 
+    For the common CLI path (no running loop), we use a persistent event
+    loop so that cached async clients (httpx / AsyncOpenAI) remain bound
+    to a live loop and don't trigger "Event loop is closed" on GC.
+
     This is the single source of truth for sync->async bridging in tool
     handlers. The RL paths (agent_loop.py, tool_context.py) also provide
     outer thread-pool wrapping as defense-in-depth, but each handler is
@@ -59,7 +83,9 @@ def _run_async(coro):
         with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
             future = pool.submit(asyncio.run, coro)
             return future.result(timeout=300)
-    return asyncio.run(coro)
+
+    tool_loop = _get_tool_loop()
+    return tool_loop.run_until_complete(coro)
 
 
 # =============================================================================
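
Stripped of diff markup and the running-loop branch, the persistent-loop bridge reduces to this sketch; repeated calls share one live loop:

```python
import asyncio
import threading

_tool_loop = None
_tool_loop_lock = threading.Lock()

def _get_tool_loop():
    """Create the loop once; recreate only if it was somehow closed."""
    global _tool_loop
    with _tool_loop_lock:
        if _tool_loop is None or _tool_loop.is_closed():
            _tool_loop = asyncio.new_event_loop()
        return _tool_loop

def _run_async(coro):
    # Happy path only: the real _run_async also detects an already
    # running loop and falls back to a thread pool in that case.
    return _get_tool_loop().run_until_complete(coro)

async def _which_loop():
    return asyncio.get_running_loop()

loop_a = _run_async(_which_loop())
loop_b = _run_async(_which_loop())
print(loop_a is loop_b, loop_a.is_closed())  # True False
```

Because the loop survives between calls, any client cached during the first call is still bound to a live loop on the second, which is what the fix relies on.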