hermes-agent/tests/run_agent/test_token_persistence_non_cli.py
Teknium d0e1388ca9
fix(tests): make AIAgent constructor calls self-contained (#11755)
* fix(tests): make AIAgent constructor calls self-contained (no env leakage)

Tests in tests/run_agent/ were constructing AIAgent() without passing
both api_key and base_url, then relying on leaked state from other
tests in the same xdist worker (or process-level env vars) to keep
provider resolution happy. Under hermetic conftest + pytest-split,
that state is gone and the tests fail with 'No LLM provider configured'.

Fix: pass both api_key and base_url explicitly on 47 AIAgent()
construction sites across 13 files. AIAgent.__init__ with both set
takes the direct-construction path (line 960 in run_agent.py) and
skips the resolver entirely.
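The direct-construction vs. resolver split described above can be sketched roughly as follows. This is a hypothetical simplification: the class name, the direct-construction behavior, and the "No LLM provider configured" error come from the commit description, while the body is purely illustrative and not the real `run_agent.py` code.

```python
# Hypothetical sketch of the provider-resolution branching; the real
# AIAgent.__init__ in run_agent.py is far more involved.
class AIAgent:
    def __init__(self, api_key=None, base_url=None, **kwargs):
        if api_key and base_url:
            # Direct-construction path: both credentials were passed
            # explicitly, so the provider resolver is skipped entirely.
            self.api_key = api_key
            self.base_url = base_url
        else:
            # Resolver path: would fall back to env vars or leaked
            # worker state; under a hermetic conftest none exist.
            raise RuntimeError("No LLM provider configured")
```

Passing both arguments, as the fixed tests now do, keeps construction deterministic regardless of what other tests ran earlier in the same worker.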

One call site (test_none_base_url_passed_as_none) left alone — that
test asserts behavior for base_url=None specifically.

This is a prerequisite for any future matrix-split or stricter
isolation work, and lands cleanly on its own.

Validation:
- tests/run_agent/ full: 760 passed, 0 failed (local)
- Previously relied on cross-test pollution; now self-contained

* fix(tests): update opencode-go model order assertion to match kimi-k2.5-first

commit 78a74bb promoted kimi-k2.5 to the first position in model suggestion
lists but didn't update this test, which has been failing on main ever since.
Reorder the expected list to match the new canonical order.
2026-04-17 12:32:03 -07:00


from types import SimpleNamespace
from unittest.mock import MagicMock, patch

from run_agent import AIAgent


def _mock_response(*, usage: dict, content: str = "done"):
    msg = SimpleNamespace(content=content, tool_calls=None)
    choice = SimpleNamespace(message=msg, finish_reason="stop")
    return SimpleNamespace(
        choices=[choice],
        model="test/model",
        usage=SimpleNamespace(**usage),
    )


def _make_agent(session_db, *, platform: str):
    with (
        patch("run_agent.get_tool_definitions", return_value=[]),
        patch("run_agent.check_toolset_requirements", return_value={}),
        patch("run_agent.OpenAI"),
    ):
        agent = AIAgent(
            api_key="test-key",
            base_url="https://openrouter.ai/api/v1",
            quiet_mode=True,
            skip_context_files=True,
            skip_memory=True,
            session_db=session_db,
            session_id=f"{platform}-session",
            platform=platform,
        )
    agent.client = MagicMock()
    agent.client.chat.completions.create.return_value = _mock_response(
        usage={
            "prompt_tokens": 11,
            "completion_tokens": 7,
            "total_tokens": 18,
        }
    )
    return agent


def test_run_conversation_persists_tokens_for_telegram_sessions():
    session_db = MagicMock()
    agent = _make_agent(session_db, platform="telegram")
    result = agent.run_conversation("hello")
    assert result["final_response"] == "done"
    session_db.update_token_counts.assert_called_once()
    assert session_db.update_token_counts.call_args.args[0] == "telegram-session"


def test_run_conversation_persists_tokens_for_cron_sessions():
    session_db = MagicMock()
    agent = _make_agent(session_db, platform="cron")
    result = agent.run_conversation("hello")
    assert result["final_response"] == "done"
    session_db.update_token_counts.assert_called_once()
    assert session_db.update_token_counts.call_args.args[0] == "cron-session"
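The two tests above differ only in the `platform` string, so they could be collapsed into one parametrized test. A hypothetical sketch of that consolidation (the `_FakeAgent` stub below stands in for `_make_agent` so the example is self-contained; in the real file you would reuse the existing helper):

```python
import pytest
from unittest.mock import MagicMock


class _FakeAgent:
    # Minimal stand-in for the agent built by _make_agent: records token
    # counts against its session id, then returns a final response.
    def __init__(self, session_db, platform):
        self.session_db = session_db
        self.session_id = f"{platform}-session"

    def run_conversation(self, prompt):
        self.session_db.update_token_counts(self.session_id, 18)
        return {"final_response": "done"}


@pytest.mark.parametrize("platform", ["telegram", "cron"])
def test_run_conversation_persists_tokens(platform):
    session_db = MagicMock()
    agent = _FakeAgent(session_db, platform)
    result = agent.run_conversation("hello")
    assert result["final_response"] == "done"
    session_db.update_token_counts.assert_called_once()
    assert session_db.update_token_counts.call_args.args[0] == f"{platform}-session"
```

Whether to parametrize is a style call; the explicit per-platform tests in the file make failures slightly easier to read in CI output.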