hermes-agent/tests/run_agent/test_exit_cleanup_interrupt.py
Teknium 3207b9bda0
test: speed up slow tests (backoff + subprocess + IMDS network) (#11797)
Cuts shard-3 local runtime in half by neutralizing real wall-clock
waits across three classes of slow test:

## 1. Retry backoff mocks

- tests/run_agent/conftest.py (NEW): autouse fixture mocks
  jittered_backoff to 0.0 so the `while time.time() < sleep_end`
  busy-loop exits immediately. No global time.sleep mock (would
  break threading tests).
- test_anthropic_error_handling, test_413_compression,
  test_run_agent_codex_responses, test_fallback_model: per-file
  fixtures mock time.sleep / asyncio.sleep for retry / compression
  paths.
- test_retaindb_plugin: cap the retaindb module's bound time.sleep
  at 0.05s via a per-test shim (the background writer thread sleeps
  2s after errors before retrying; the tests don't care about the
  exact duration). Also replace arbitrary time.sleep(N) waits with
  short deadline-bounded polling loops.
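The backoff-zeroing idea can be sketched without pytest. `jittered_backoff` is the name from the commit message, but the retry busy-loop and the fixture's setattr target below are assumptions for illustration, not the repo's actual code:

```python
import random
import time

def jittered_backoff(attempt, base=0.5, cap=30.0):
    """Stand-in for the production helper named in the commit message."""
    return min(cap, base * 2 ** attempt) * random.random()

def wait_for_retry(attempt, backoff=jittered_backoff):
    """Mirrors the production pattern: `while time.time() < sleep_end`."""
    sleep_end = time.time() + backoff(attempt)
    while time.time() < sleep_end:
        time.sleep(0.01)

# The autouse fixture in tests/run_agent/conftest.py amounts to this
# (setattr target assumed):
#
#   @pytest.fixture(autouse=True)
#   def _zero_backoff(monkeypatch):
#       monkeypatch.setattr(run_agent, "jittered_backoff", lambda *a, **k: 0.0)
#
# With the backoff pinned to 0.0, sleep_end == time.time() and the
# busy-loop body never executes:
start = time.time()
wait_for_retry(attempt=8, backoff=lambda *a, **k: 0.0)
elapsed = time.time() - start
```

Zeroing only the backoff helper, rather than `time.sleep` globally, is what keeps threading tests (which legitimately sleep) unaffected.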

## 2. Subprocess sleeps in production code

- test_update_gateway_restart: mock time.sleep. Production code
  does time.sleep(3) after `systemctl restart` to verify the
  service survived. Tests mock subprocess.run — nothing actually
  restarts — so the wait is dead time.
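As a minimal sketch: the restart helper below is hypothetical (name, unit, and injection parameter are assumptions); only the sleep(3)-after-restart shape comes from the commit message:

```python
import subprocess
import time
from unittest.mock import MagicMock, patch

def restart_gateway(run=subprocess.run):
    """Hypothetical shape of the production restart path."""
    run(["systemctl", "restart", "hermes-gateway"], check=True)
    time.sleep(3)  # give systemd time to surface a crash-loop
    run(["systemctl", "is-active", "hermes-gateway"], check=True)

# Test side: the subprocess runner is already a MagicMock, so the 3s
# wait verifies nothing — patching time.sleep as well removes the
# dead time while still asserting the delay logic ran.
fake_run = MagicMock()
with patch("time.sleep") as fake_sleep:
    restart_gateway(run=fake_run)
```

`patch("time.sleep")` works here because the helper looks the function up on the `time` module at call time.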

## 3. Network / IMDS timeouts (biggest single win)

- tests/conftest.py: add AWS_EC2_METADATA_DISABLED=true plus
  AWS_METADATA_SERVICE_TIMEOUT=1 and ATTEMPTS=1. boto3 falls back
  to IMDS (169.254.169.254) when no AWS creds are set. Any test
  hitting has_aws_credentials() / resolve_aws_auth_env_var() (e.g.
  test_status, test_setup_copilot_acp, anything that touches
  provider auto-detect) burned ~2-4s waiting for that to time out.
- test_exit_cleanup_interrupt: explicitly mock
  resolve_runtime_provider which was doing real network auto-detect
  (~4s). Tests don't care about provider resolution — the agent
  is already mocked.
- test_timezone: collapse the 3-test "TZ env in subprocess" suite
  into 2 tests by checking both injection AND no-leak in the same
  subprocess spawn (was 3 × 3.2s, now 2 × 4s).

## Validation

| Test | Before | After |
|---|---|---|
| test_anthropic_error_handling (8 tests) | ~80s | ~15s |
| test_413_compression (14 tests) | ~18s | 2.3s |
| test_retaindb_plugin (67 tests) | ~13s | 1.3s |
| test_status_includes_tavily_key | 4.0s | 0.05s |
| test_setup_copilot_acp_skips_same_provider_pool_step | 8.0s | 0.26s |
| test_update_gateway_restart (5 tests) | ~18s total | ~0.35s total |
| test_exit_cleanup_interrupt (2 tests) | 8s | 1.5s |
| **Matrix shard 3 local** | **108s** | **50s** |

No behavioral contract changed — tests still verify retry happens,
service restart logic runs, etc.; they just don't burn real seconds
waiting for it.

Supersedes PR #11779 (those changes are included here).
2026-04-17 14:21:22 -07:00


"""Tests for KeyboardInterrupt handling in exit cleanup paths.
``except Exception`` does not catch ``KeyboardInterrupt`` (which inherits
from ``BaseException``). A second Ctrl+C during exit cleanup must not
abort remaining cleanup steps. These tests exercise the actual production
code paths — not a copy of the try/except pattern.
"""

import atexit
import weakref
from unittest.mock import MagicMock, patch, call

import pytest

@pytest.fixture(autouse=True)
def _mock_runtime_provider(monkeypatch):
    """run_job calls resolve_runtime_provider which can try real network
    auto-detection (~4s of socket timeouts in hermetic CI). Mock it out
    since these tests don't care about provider resolution — the agent
    is mocked too."""
    import hermes_cli.runtime_provider as rp

    def _fake_resolve(*args, **kwargs):
        return {
            "provider": "openrouter",
            "api_key": "test-key",
            "base_url": "https://openrouter.ai/api/v1",
            "model": "test/model",
            "api_mode": "chat_completions",
        }

    monkeypatch.setattr(rp, "resolve_runtime_provider", _fake_resolve)

class TestCronJobCleanup:
    """cron/scheduler.py — end_session + close in the finally block."""

    def test_keyboard_interrupt_in_end_session_does_not_skip_close(self):
        """If end_session raises KeyboardInterrupt, close() must still run."""
        mock_db = MagicMock()
        mock_db.end_session.side_effect = KeyboardInterrupt

        from cron import scheduler

        job = {
            "id": "test-job-1",
            "name": "test cleanup",
            "prompt": "hello",
            "schedule": "0 9 * * *",
            "model": "test/model",
        }
        with patch("hermes_state.SessionDB", return_value=mock_db), \
                patch.object(scheduler, "_build_job_prompt", return_value="hello"), \
                patch.object(scheduler, "_resolve_origin", return_value=None), \
                patch.object(scheduler, "_resolve_delivery_target", return_value=None), \
                patch("dotenv.load_dotenv", return_value=None), \
                patch("run_agent.AIAgent") as MockAgent:
            # Make the agent raise immediately so we hit the finally block
            MockAgent.return_value.run_conversation.side_effect = RuntimeError("boom")
            scheduler.run_job(job)

        mock_db.end_session.assert_called_once()
        mock_db.close.assert_called_once()

    def test_keyboard_interrupt_in_close_does_not_propagate(self):
        """If close() raises KeyboardInterrupt, it must not escape run_job."""
        mock_db = MagicMock()
        mock_db.close.side_effect = KeyboardInterrupt

        from cron import scheduler

        job = {
            "id": "test-job-2",
            "name": "test close interrupt",
            "prompt": "hello",
            "schedule": "0 9 * * *",
            "model": "test/model",
        }
        with patch("hermes_state.SessionDB", return_value=mock_db), \
                patch.object(scheduler, "_build_job_prompt", return_value="hello"), \
                patch.object(scheduler, "_resolve_origin", return_value=None), \
                patch.object(scheduler, "_resolve_delivery_target", return_value=None), \
                patch("dotenv.load_dotenv", return_value=None), \
                patch("run_agent.AIAgent") as MockAgent:
            MockAgent.return_value.run_conversation.side_effect = RuntimeError("boom")
            # Must not raise
            scheduler.run_job(job)

        mock_db.end_session.assert_called_once()
        mock_db.close.assert_called_once()