Context compression silently failed when the auxiliary compression model's
context window was smaller than the main model's compression threshold
(e.g. GLM-4.5-air at 131k paired with a 150k threshold). The feasibility
check warned but the session kept running and compression attempts errored
out mid-conversation.
Two changes in _check_compression_model_feasibility():
1. Hard floor: if detected aux context < MINIMUM_CONTEXT_LENGTH (64k),
raise ValueError so the session refuses to start. Mirrors the existing
main-model rejection at AIAgent.__init__ line 1600. A compression model
below 64k cannot summarise a full threshold-sized window.
2. Auto-correct: when aux context is >= 64k but below the computed
threshold, lower the live compressor's threshold_tokens to aux_context
(and update threshold_percent to match so later update_model() calls
stay in sync). Warning reworded to say what was done and how to
persist the fix in config.yaml.
Only ValueError is re-raised; other exceptions in the check remain
swallowed as non-fatal.
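In sketch form (compressor attribute names are taken from the points
above; _detect_aux_context_length and _main_context_length are
placeholders, not the real helper names):

    import logging

    logger = logging.getLogger(__name__)
    MINIMUM_CONTEXT_LENGTH = 64_000  # same floor as the main-model check

    def _check_compression_model_feasibility(self) -> None:
        try:
            aux_context = self._detect_aux_context_length()  # placeholder
            if aux_context < MINIMUM_CONTEXT_LENGTH:
                # Hard floor: refuse to start rather than error mid-run.
                raise ValueError(
                    f"compression model context ({aux_context}) is below "
                    f"the {MINIMUM_CONTEXT_LENGTH}-token minimum"
                )
            if aux_context < self._compressor.threshold_tokens:
                # Auto-correct: clamp the live threshold and keep percent
                # in sync so later update_model() calls stay consistent.
                self._compressor.threshold_tokens = aux_context
                self._compressor.threshold_percent = (
                    aux_context / self._main_context_length  # placeholder
                )
                logger.warning(
                    "Compression threshold lowered to %d tokens; set it "
                    "in config.yaml to persist this.", aux_context)
        except ValueError:
            raise  # fatal: the session must not start
        except Exception:
            logger.debug("feasibility check skipped", exc_info=True)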
Third-party gateways that speak the native Anthropic protocol (MiniMax,
Zhipu GLM, Alibaba DashScope, Kimi, LiteLLM proxies) now work end-to-end
with the same feature set as direct api.anthropic.com callers. Synthesizes
eight stale community PRs into one consolidated change.
Six fixes:
- URL detection: consolidate three inline `endswith("/anthropic")`
checks in runtime_provider.py into the shared _detect_api_mode_for_url
helper. Third-party /anthropic endpoints now auto-resolve to
api_mode=anthropic_messages via one code path instead of three.
- OAuth leak-guard: all five sites that assign `_is_anthropic_oauth`
(__init__, switch_model, _try_refresh_anthropic_client_credentials,
_swap_credential, _try_activate_fallback) now gate on
`provider == "anthropic"` so a stale ANTHROPIC_TOKEN never trips
Claude-Code identity injection on third-party endpoints. Previously
only 2 of 5 sites were guarded.
- Prompt caching: new method `_anthropic_prompt_cache_policy()` returns
`(should_cache, use_native_layout)` per endpoint. Replaces three
inline conditions and the `native_anthropic=(api_mode=='anthropic_messages')`
call-site flag. Native Anthropic and third-party Anthropic gateways
both get the native cache_control layout; OpenRouter gets envelope
layout. Layout is persisted in `_primary_runtime` so fallback
restoration preserves the per-endpoint choice (sketch after this list).
- Auxiliary client: `_try_custom_endpoint` honors
`api_mode=anthropic_messages` and builds `AnthropicAuxiliaryClient`
instead of silently downgrading to an OpenAI-wire client. Degrades
gracefully to OpenAI-wire when the anthropic SDK isn't installed.
- Config hygiene: `_update_config_for_provider` (hermes_cli/auth.py)
clears stale `api_key`/`api_mode` when switching to a built-in
provider, so a previous MiniMax custom endpoint's credentials can't
leak into a later OpenRouter session.
- Truncation continuation: length-continuation and tool-call-truncation
retry now cover `anthropic_messages` in addition to `chat_completions`
and `bedrock_converse`. Reuses the existing `_build_assistant_message`
path via `normalize_anthropic_response()` so the interim message
shape is byte-identical to the non-truncated path.
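A minimal sketch of the cache-policy helper from the prompt-caching
bullet, assuming self.api_mode / self.provider as the attribute names:

    def _anthropic_prompt_cache_policy(self) -> tuple[bool, bool]:
        """Return (should_cache, use_native_layout) for this endpoint."""
        if self.api_mode == "anthropic_messages":
            # Direct api.anthropic.com and third-party Anthropic
            # gateways both take the native cache_control layout.
            return True, True
        if self.provider == "openrouter":
            return True, False  # envelope layout
        return False, False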
Tests: 6 new files, 42 test cases. Targeted run + tests/run_agent,
tests/agent, tests/hermes_cli all pass (4554 passed).
Synthesized from (credits preserved via Co-authored-by trailers):
#7410 @nocoo — URL detection helper
#7393 @keyuyuan — OAuth 5-site guard
#7367 @n-WN — OAuth guard (narrower cousin, kept comment)
#8636 @sgaofen — caching helper + native-vs-proxy layout split
#10954 @Only-Code-A — caching on anthropic_messages+Claude
#7648 @zhongyueming1121 — aux client anthropic_messages branch
#6096 @hansnow — /model switch clears stale api_mode
#9691 @TroyMitchell911 — anthropic_messages truncation continuation
Closes: #7366, #8294 (third-party Anthropic identity + caching).
Supersedes: #7410, #7367, #7393, #8636, #10954, #7648, #6096, #9691.
Rejects: #9621 (OpenAI-wire caching with incomplete blocklist — risky),
#7242 (superseded by #9691, stale branch),
#8321 (targets smart_model_routing which was removed in #12732).
Co-authored-by: nocoo <nocoo@users.noreply.github.com>
Co-authored-by: Keyu Yuan <leoyuan0099@gmail.com>
Co-authored-by: Zoee <30841158+n-WN@users.noreply.github.com>
Co-authored-by: sgaofen <135070653+sgaofen@users.noreply.github.com>
Co-authored-by: Only-Code-A <bxzt2006@163.com>
Co-authored-by: zhongyueming <mygamez@163.com>
Co-authored-by: Xiaohan Li <hansnow@users.noreply.github.com>
Co-authored-by: Troy Mitchell <i@troy-y.org>
CI on main had 7 failing tests. Five were stale test fixtures; one (agent
cache spillover timeout) was covering up a real perf regression in
AIAgent construction.
The perf bug: every AIAgent.__init__ calls _check_compression_model_feasibility
→ resolve_provider_client('auto') → _resolve_api_key_provider which
iterates PROVIDER_REGISTRY. When it hits 'zai', it unconditionally calls
resolve_api_key_provider_credentials → _resolve_zai_base_url → probes 8
Z.AI endpoints with an empty Bearer token (all 401s), ~2s of pure latency
per agent, even when the user has never touched Z.AI. Landed in
9e844160 (PR for credential-pool Z.AI auto-detect) — the short-circuit
when api_key is empty was missing. _resolve_kimi_base_url had the same
shape; fixed too.
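The missing guard, sketched (the endpoint list and probe helper are
placeholder names):

    def _resolve_zai_base_url(api_key: str) -> str | None:
        if not api_key:
            # Without a key every probe is a guaranteed 401: skip the
            # 8-endpoint scan (~2s). Same guard in _resolve_kimi_base_url.
            return None
        for base_url in ZAI_CANDIDATE_BASE_URLS:    # placeholder
            if _probe_endpoint(base_url, api_key):  # placeholder
                return base_url
        return None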
Test fixes:
- tests/gateway/test_voice_command.py: _make_adapter helpers were missing
self._voice_locks (attribute added in PR #12644); all 7 call sites updated.
- tests/test_toolsets.py: test_hermes_platforms_share_core_tools asserted
equality, but hermes-discord has discord_server (DISCORD_BOT_TOKEN-gated,
discord-only by design). Switched to subset check.
- tests/run_agent/test_streaming.py: test_tool_name_not_duplicated_when_resent_per_chunk
missing api_key/base_url — classic pitfall (PR #11619 fixed 16 of
these; this one slipped through on a later commit).
- tests/tools/test_discord_tool.py: TestConfigAllowlist caplog assertions
fail in parallel runs because AIAgent(quiet_mode=True) globally sets
logging.getLogger('tools').setLevel(ERROR) and xdist workers are
persistent. Autouse fixture resets the 'tools' and
'tools.discord_tool' levels per test.
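The fixture, roughly (NOTSET restores inheritance from the root
logger's configuration):

    import logging
    import pytest

    @pytest.fixture(autouse=True)
    def _reset_tool_logger_levels():
        # xdist workers are persistent, so a prior test's
        # AIAgent(quiet_mode=True) leaves 'tools' pinned at ERROR and
        # caplog never sees the records this class asserts on.
        for name in ("tools", "tools.discord_tool"):
            logging.getLogger(name).setLevel(logging.NOTSET)
        yield
        for name in ("tools", "tools.discord_tool"):
            logging.getLogger(name).setLevel(logging.NOTSET)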
Validation:
tests/cron + voice + agent_cache + streaming + toolsets + command_guards
+ discord_tool: 550/550 pass
tests/hermes_cli + tests/gateway: 5713/5713 pass
AIAgent construction without Z.AI creds: 2.2s → 0.24s (9x)
Follow up on salvaged PR #12668 by threading base_url through the
remaining direct-call sites so kimi-k2.5 uses temperature=1.0 on
api.moonshot.ai and keeps 0.6 on api.kimi.com/coding. Add focused
regression tests for run_agent, trajectory_compressor, and
mini_swe_runner.
One source fix (web_server category merge) + five test updates that
didn't travel with their feature PRs. All 13 failures on the 04-19
CI run on main are now accounted for (5 already self-healed on main;
8 fixed here).
Changes
- web_server.py: add code_execution → agent to _CATEGORY_MERGE (new
singleton section from #11971 broke no-single-field-category invariant).
- test_browser_camofox_state: bump hardcoded _config_version 18 → 19
(also from #11971).
- test_registry: add browser_cdp_tool (#12369) and discord_tool (#4753)
to the expected built-in tool set.
- test_run_agent::test_tool_call_accumulation: rewrite fragment chunks
— 0f778f77 switched streaming name-accumulation from += to = to
fix MiniMax/NIM duplication; the test still encoded the old
fragment-per-chunk premise.
- test_concurrent_interrupt::_Stub: no-op
_apply_pending_steer_to_tool_results — #12116 added this call after
concurrent tool batches; the hand-rolled stub was missing it.
- test_codex_cli_model_picker: drop the two obsolete tests that
asserted auto-import from ~/.codex/auth.json into the Hermes auth
store. #12360 explicitly removed that behavior (refresh-token reuse
races with Codex CLI / VS Code); adoption is now explicit via
`hermes auth openai-codex`. Remaining 3 tests in the file (normal
path, Claude Code fallback, negative case) still cover the picker.
Validation
- scripts/run_tests.sh across all 6 affected files + surrounding tests
(54 tests total) all green locally.
Adds a regression guard for the #11277 → proxy-bypass regression fixed in
42b394c3. With HTTPS_PROXY / HTTP_PROXY / ALL_PROXY set, the custom httpx
transport used for TCP keepalives must still route requests through an
HTTPProxy pool; without proxy env, no HTTPProxy mount should exist.
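One way the guard can check this, probing private httpx/httpcore state
(the attribute names are an assumption about the library versions under
test, not a stable API):

    import httpcore
    import httpx

    def _has_proxy_mount(client: httpx.Client) -> bool:
        # Each mount's transport wraps an httpcore pool; under proxy
        # env vars that pool should be an HTTPProxy.
        for transport in client._mounts.values():
            if transport is None:
                continue
            pool = getattr(transport, "_pool", None)
            if isinstance(pool, httpcore.HTTPProxy):
                return True
        return False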
Also maps zrc <zhurongcheng@rcrai.com> → heykb in scripts/release.py
AUTHOR_MAP so the salvage PR passes the author-attribution CI check.
Follow-up on top of mvanhorn's cherry-picked commit. Original PR only
wired request_timeout_seconds into the explicit-creds OpenAI branch at
run_agent.py init; router-based implicit auth, native Anthropic, and the
fallback chain were still hardcoded to SDK defaults.
- agent/anthropic_adapter.py: build_anthropic_client() accepts an optional
timeout kwarg (default 900s preserved when unset/invalid).
- run_agent.py: resolve per-provider/per-model timeout once at init; apply
to Anthropic native init + post-refresh rebuild + stale/interrupt
rebuilds + switch_model + _restore_primary_runtime + the OpenAI
implicit-auth path + _try_activate_fallback (with immediate client
rebuild so the first fallback request carries the configured timeout).
- tests: cover anthropic adapter kwarg honoring; widen mock signatures
to accept the new timeout kwarg.
- docs/example: clarify that the knob now applies to every transport,
the fallback chain, and rebuilds after credential rotation.
Commit 4a9c3565 added a reference to `self.config` in
`_check_compression_model_feasibility()` to pass the user-configured
`auxiliary.compression.context_length` to `get_model_context_length()`.
However, `AIAgent` never stores the loaded config dict as an instance
attribute — the config is loaded into a local variable `_agent_cfg` in
`__init__()` and discarded after init.
This causes an `AttributeError: 'AIAgent' object has no attribute
'config'` on every session start when compression is enabled, caught by
the try/except and logged as a non-fatal DEBUG message.
Fix: store the loaded config as `self._config` in `__init__()` and
update the reference in the feasibility check to use `self._config`.
Inline reasoning tags in an assistant message's content field leak to every downstream consumer: messaging platforms (#8878, #9568), API replay of prior turns, session transcript, CLI recap, generated session titles, and context compression. _extract_reasoning() already captures the reasoning text into msg['reasoning'] separately, so the raw tags in content are redundant.
Stripping once at the storage boundary in _build_assistant_message() cleans the content for every downstream path in one place — no per-platform or per-path stripper needed. Measured impact on a real MiniMax M2.7-highspeed session (per @luoyejiaoe-source, #9306): 55% of assistant messages started with <think> blocks, 51/100 session titles were polluted, 16% content-size reduction.
3 new regression tests in TestBuildAssistantMessage: closed-pair strip with reasoning capture, no-think-tag passthrough, and unterminated-block strip.
Resolves #8878 and #9568.
Originally proposed as PR #9250.
Providers served via NIM (MiniMax M2.7, some Moonshot/DeepSeek proxies) sometimes drop the closing </think> tag, leaving raw reasoning in the assistant's content field. _strip_think_blocks()'s closed-pair regex is non-greedy so it only matches complete blocks — any orphan <think>...EOF survived the stripper and leaked to users (#8878, #9568, #10408).
Adds an unterminated-tag pass that fires when an open reasoning tag sits at a block boundary (start of text or after a newline) with no matching close. Everything from that tag to end of string is stripped. The block-boundary check mirrors gateway/stream_consumer.py's filter so models that mention <think> in prose are not over-stripped.
Also makes the closed-pair regexes consistently case-insensitive so <THINK>...</THINK> and <Thinking>...</Thinking> are handled uniformly — previously the mixed-case open tag would bypass the closed-pair pass and be caught by the unterminated-tag pass, taking trailing visible content with it.
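In regex form, roughly (the tag alternatives follow the closed-pair
pass; the production helper also mirrors stream_consumer.py's boundary
check rather than this exact pattern):

    import re

    _CLOSED = re.compile(r"<(think|thought)>.*?</\1>\s*",
                         re.IGNORECASE | re.DOTALL)
    # Open tag at a block boundary (start of text or after a newline)
    # with no matching close anywhere after it: strip to end of string.
    _ORPHAN = re.compile(r"(?:^|(?<=\n))<(think|thought)>(?:(?!</\1>).)*$",
                         re.IGNORECASE | re.DOTALL)

    def strip_think_blocks(text: str) -> str:
        # Sketch of _strip_think_blocks: closed pairs first, then orphans.
        return _ORPHAN.sub("", _CLOSED.sub("", text))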
6 new regression tests in TestStripThinkBlocks covering: unterminated <think>, unterminated <thought>, multi-line unterminated, line-start orphan with preserved prefix, prose-mention non-regression, mixed-case closed pairs.
The implementation is inspired by @luinbytes's PR #10408 report of the NIM/MiniMax symptom. This commit does not include the 💭/🧠 emoji regexes from that PR — those glyphs are Hermes CLI display decorations, not model content markers.
Based on #11984 by @maxchernin. Fixes #8259.
Some providers (MiniMax M2.7 via NVIDIA NIM) resend the full function
name in every streaming chunk instead of only the first. The old
accumulator used += which concatenated them into 'read_fileread_file'.
Changed to simple assignment (=), matching the OpenAI Node SDK, LiteLLM,
and Vercel AI SDK patterns. Function names are atomic identifiers
delivered complete — no provider splits them across chunks, so
concatenation was never correct semantics.
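The pattern, distilled (an illustrative accumulator, not the production
class):

    from dataclasses import dataclass

    @dataclass
    class _ToolCallAccumulator:
        name: str = ""
        arguments: str = ""

        def feed(self, name_delta: str | None, args_delta: str | None) -> None:
            if name_delta:
                # Atomic identifier: assign. Providers that resend the
                # full name per chunk otherwise yield 'read_fileread_file'.
                self.name = name_delta
            if args_delta:
                # Arguments genuinely stream in fragments: concatenate.
                self.arguments += args_delta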
* feat(steer): /steer <prompt> injects a mid-run note after the next tool call
Adds a new slash command that sits between /queue (turn boundary) and
interrupt. /steer <text> stashes the message on the running agent and
the agent loop appends it to the LAST tool result's content once the
current tool batch finishes. The model sees it as part of the tool
output on its next iteration.
No interrupt is fired, no new user turn is inserted, and no prompt
cache invalidation happens beyond the normal per-turn tool-result
churn. Message-role alternation is preserved — we only modify an
existing role:"tool" message's content.
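The injection step, sketched (method names are from the wiring list
below; the "[User steer]" prefix is illustrative):

    def _apply_pending_steer_to_tool_results(self, messages: list[dict]) -> None:
        note = self._drain_pending_steer()
        if not note:
            return
        for msg in reversed(messages):
            if msg.get("role") == "tool":
                # Append to the LAST tool result: role alternation is
                # untouched and the model reads it as tool output.
                if isinstance(msg["content"], str):
                    msg["content"] += f"\n\n[User steer]: {note}"
                else:  # multimodal list content
                    msg["content"].append(
                        {"type": "text", "text": f"[User steer]: {note}"})
                return
        self._pending_steer = note  # no tool result yet: re-stash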
Wiring
------
- hermes_cli/commands.py: register /steer + add to ACTIVE_SESSION_BYPASS_COMMANDS.
- run_agent.py: add _pending_steer state, AIAgent.steer(), _drain_pending_steer(),
_apply_pending_steer_to_tool_results(); drain at end of both parallel and
sequential tool executors; clear on interrupt; return leftover as
result['pending_steer'] if the agent exits before another tool batch.
- cli.py: /steer handler — route to agent.steer() when running, fall back to
the regular queue otherwise; deliver result['pending_steer'] as next turn.
- gateway/run.py: running-agent intercept calls running_agent.steer(); idle-agent
path strips the prefix and forwards as a regular user message.
- tui_gateway/server.py: new session.steer JSON-RPC method.
- ui-tui: SessionSteerResponse type + local /steer slash command that calls
session.steer when ui.busy, otherwise enqueues for the next turn.
Fallbacks
---------
- Agent exits mid-steer → surfaces in run_conversation result as pending_steer
so CLI/gateway deliver it as the next user turn instead of silently dropping it.
- All tools skipped after interrupt → re-stashes pending_steer for the caller.
- No active agent → /steer reduces to sending the text as a normal message.
Tests
-----
- tests/run_agent/test_steer.py — accept/reject, concatenation, drain,
last-tool-result injection, multimodal list content, thread safety,
cleared-on-interrupt, registry membership, bypass-set membership.
- tests/gateway/test_steer_command.py — running agent, pending sentinel,
missing steer() method, rejected payload, empty payload.
- tests/gateway/test_command_bypass_active_session.py — /steer bypasses
the Level-1 base adapter guard.
- tests/test_tui_gateway_server.py — session.steer RPC paths.
72/72 targeted tests pass under scripts/run_tests.sh.
* feat(steer): register /steer in Discord's native slash tree
Discord's app_commands tree is a curated subset of slash commands (not
derived from COMMAND_REGISTRY like Telegram/Slack). /steer already
works there as plain text (routes through handle_message → base
adapter bypass → runner), but registering it here adds Discord's
native autocomplete + argument hint UI so users can discover and
type it like any other first-class command.
When streaming died after text was already delivered to the user but
before a tool-call's arguments finished streaming, the partial-stream
stub at the end of _interruptible_streaming_api_call silently set
`tool_calls=None` on the returned message and kept `finish_reason=stop`.
The agent treated the turn as complete, the session exited cleanly with
code 0, and the attempted action was lost with zero user-facing signal.
Live-observed Apr 2026 with MiniMax M2.7 on a ~6-minute audit task:
agent streamed 'Let me write the audit:', started emitting a write_file
tool call, MiniMax stalled for 240s mid-arguments, the stale-stream
detector killed the connection, the stub fired, session ended, no file
written, no error shown.
Fix: the streaming accumulator now records each tool-call's name into
`result['partial_tool_names']` as soon as the name is known. When the
stub builder fires after a partial delivery and finds any recorded tool
names, it appends a human-visible warning to the stub's content — and
also fires it as a live stream delta so the user sees it immediately,
not only in the persisted transcript. The next turn's model also sees
the warning in conversation history and can retry on its own. Text-only
partial streams keep the original bare-recovery behaviour (no warning).
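Stub construction, roughly (the warning string is illustrative, not the
exact production text):

    def _build_partial_stream_stub(result: dict) -> dict:
        content = result.get("content", "")
        dropped = result.get("partial_tool_names") or []
        if dropped:
            # Appended to the stub AND emitted as a live stream delta.
            content += (
                "\n\n⚠ Stream ended before the "
                f"{', '.join(dropped)} tool call(s) finished; "
                "the action was not executed."
            )
        return {"role": "assistant", "content": content, "tool_calls": None}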
Validation:
| Scenario | Before | After |
|---------------------------------------------|---------------------------|---------------------------------------------|
| Stream dies mid tool-call, text already sent | Silent exit, no indication | User sees ⚠ warning naming the dropped tool |
| Text-only partial stream | Bare recovered text | Unchanged |
| tests/run_agent/test_streaming.py | 24 passed | 26 passed (2 new) |
* fix(interrupt): propagate to concurrent-tool workers + opt-in debug trace
interrupt() previously only flagged the agent's _execution_thread_id.
Tools running inside _execute_tool_calls_concurrent execute on
ThreadPoolExecutor worker threads whose tids are distinct from the
agent's, so is_interrupted() inside those tools returned False no matter
how many times the gateway called .interrupt() — hung ssh / curl / long
make-builds ran to their own timeout.
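The fan-out, in sketch form (set_interrupt's exact signature is an
assumption; it stands for the per-thread flag setter in
tools/interrupt.py):

    import threading

    class _InterruptFanout:
        def __init__(self) -> None:
            self._lock = threading.Lock()
            self._worker_tids: set[int] = set()
            self._interrupted = False

        def register_worker(self) -> None:  # called at _run_tool entry
            with self._lock:
                tid = threading.get_ident()
                self._worker_tids.add(tid)
                if self._interrupted:
                    # register-after-interrupt race: a tool that starts
                    # after .interrupt() must still see the flag.
                    set_interrupt(tid)

        def interrupt(self, agent_tid: int) -> None:
            with self._lock:
                self._interrupted = True
                for tid in {agent_tid, *self._worker_tids}:
                    set_interrupt(tid)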
Changes:
- run_agent.py: track concurrent-tool worker tids in a per-agent set,
fan interrupt()/clear_interrupt() out to them, and handle the
register-after-interrupt race at _run_tool entry. getattr fallback
for the tracker so test stubs built via object.__new__ keep working.
- tools/environments/base.py: opt-in _wait_for_process trace (ENTER,
per-30s HEARTBEAT with interrupt+activity-cb state, INTERRUPT
DETECTED, TIMEOUT, EXIT) behind HERMES_DEBUG_INTERRUPT=1.
- tools/interrupt.py: opt-in set_interrupt() trace (caller tid, target
tid, set snapshot) behind the same env flag.
- tests: new regression test runs a polling tool on a concurrent worker
and asserts is_interrupted() flips to True within ~1s of interrupt().
Second new test guards clear_interrupt() clearing tracked worker bits.
Validation: tests/run_agent/ all 762 pass; tests/tools/ interrupt+env
subset 216 pass.
* fix(interrupt-debug): bypass quiet_mode logger filter so trace reaches agent.log
AIAgent.__init__ sets logging.getLogger('tools').setLevel(ERROR) when
quiet_mode=True (the CLI default). This would silently swallow every
INFO-level trace line from the HERMES_DEBUG_INTERRUPT=1 instrumentation
added in the parent commit — confirmed by running hermes chat -q with
the flag and finding zero trace lines in agent.log even though
_wait_for_process was clearly executing (subprocess pid existed).
Fix: when HERMES_DEBUG_INTERRUPT=1, each traced module explicitly sets
its own logger level to INFO at import time, overriding the 'tools'
parent-level filter. Scoped to the opt-in case only, so production
(quiet_mode default) logs stay quiet as designed.
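The override, in sketch form:

    import logging
    import os

    # At import time in each traced module: the opt-in debug flag wins
    # over the quiet_mode parent filter ('tools' pinned at ERROR).
    if os.environ.get("HERMES_DEBUG_INTERRUPT") == "1":
        logging.getLogger(__name__).setLevel(logging.INFO)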
Validation: hermes chat -q with HERMES_DEBUG_INTERRUPT=1 now writes
'_wait_for_process ENTER/EXIT' lines to agent.log as expected.
* fix(cli): SIGTERM/SIGHUP no longer orphans tool subprocesses
Tool subprocesses spawned by the local environment backend use
os.setsid so they run in their own process group. Before this fix,
SIGTERM/SIGHUP to the hermes CLI killed the main thread via
KeyboardInterrupt but the worker thread running _wait_for_process
never got a chance to call _kill_process — Python exited, the child
was reparented to init (PPID=1), and the subprocess ran to its
natural end (confirmed live: sleep 300 survived 4+ min after SIGTERM
to the agent until manual cleanup).
Changes:
- cli.py _signal_handler (interactive) + _signal_handler_q (-q mode):
route SIGTERM/SIGHUP through agent.interrupt() so the worker's poll
loop sees the per-thread interrupt flag and calls _kill_process
(os.killpg) on the subprocess group. HERMES_SIGTERM_GRACE (default
1.5s) gives the worker time to complete its SIGTERM+SIGKILL
escalation before KeyboardInterrupt unwinds main (sketch after this list).
- tools/environments/base.py _wait_for_process: wrap the poll loop in
try/except (KeyboardInterrupt, SystemExit) so the cleanup fires
even on paths the signal handlers don't cover (direct sys.exit,
unhandled KI from nested code, etc.). Emits EXCEPTION_EXIT trace
line when HERMES_DEBUG_INTERRUPT=1.
- New regression test: injects KeyboardInterrupt into a running
_wait_for_process via PyThreadState_SetAsyncExc, verifies the
subprocess process group is dead within 3s of the exception and
that KeyboardInterrupt re-raises cleanly afterward.
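The handler shape, sketched (the install function is illustrative
wiring around the pieces named above):

    import os
    import signal
    import time

    def install_signal_handlers(agent) -> None:
        grace = float(os.environ.get("HERMES_SIGTERM_GRACE", "1.5"))

        def _handler(signum, frame):
            # Per-thread interrupt: _wait_for_process's poll loop sees
            # it and calls _kill_process (os.killpg) on the group.
            agent.interrupt()
            time.sleep(grace)  # let SIGTERM -> SIGKILL escalation finish
            raise KeyboardInterrupt

        signal.signal(signal.SIGTERM, _handler)
        signal.signal(signal.SIGHUP, _handler)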
Validation:
| Before | After |
|---------------------------------------------------------|--------------------|
| sleep 300 survives 4+ min as PPID=1 orphan after SIGTERM | dies within 2 s |
| No INTERRUPT DETECTED in trace | INTERRUPT DETECTED fires + killing process group |
| tests/tools/test_local_interrupt_cleanup | 1/1 pass |
| tests/run_agent/test_concurrent_interrupt | 4/4 pass |
Cuts shard-3 local runtime in half by neutralizing real wall-clock
waits across three classes of slow test:
## 1. Retry backoff mocks
- tests/run_agent/conftest.py (NEW): autouse fixture mocks
jittered_backoff to 0.0 so the `while time.time() < sleep_end`
busy-loop exits immediately. No global time.sleep mock (would
break threading tests). A fixture sketch follows this list.
- test_anthropic_error_handling, test_413_compression,
test_run_agent_codex_responses, test_fallback_model: per-file
fixtures mock time.sleep / asyncio.sleep for retry / compression
paths.
- test_retaindb_plugin: cap the retaindb module's bound time.sleep
to 0.05s via a per-test shim (background writer-thread retries
sleep 2s after errors; tests don't care about exact duration).
Plus replace arbitrary time.sleep(N) waits with short polling
loops bounded by deadline.
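The conftest fixture, roughly (the jittered_backoff import path is an
assumption):

    import pytest

    @pytest.fixture(autouse=True)
    def _zero_backoff(monkeypatch):
        # Zero the computed backoff so the `while time.time() < sleep_end`
        # busy-loop exits immediately. Deliberately NOT a global
        # time.sleep mock, which would break threading tests.
        monkeypatch.setattr("run_agent.jittered_backoff",
                            lambda *args, **kwargs: 0.0)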
## 2. Subprocess sleeps in production code
- test_update_gateway_restart: mock time.sleep. Production code
does time.sleep(3) after `systemctl restart` to verify the
service survived. Tests mock subprocess.run — nothing actually
restarts — so the wait is dead time.
## 3. Network / IMDS timeouts (biggest single win)
- tests/conftest.py: add AWS_EC2_METADATA_DISABLED=true plus
AWS_METADATA_SERVICE_TIMEOUT=1 and AWS_METADATA_SERVICE_NUM_ATTEMPTS=1
(sketch after this list). boto3 falls back to IMDS (169.254.169.254)
when no AWS creds are set. Any test hitting has_aws_credentials() /
resolve_aws_auth_env_var() (e.g. test_status, test_setup_copilot_acp,
anything that touches provider auto-detect) burned ~2-4s waiting for
that to time out.
- test_exit_cleanup_interrupt: explicitly mock
resolve_runtime_provider which was doing real network auto-detect
(~4s). Tests don't care about provider resolution — the agent
is already mocked.
- test_timezone: collapse the 3-test "TZ env in subprocess" suite
into 2 tests by checking both injection AND no-leak in the same
subprocess spawn (was 3 × 3.2s, now 2 × 4s).
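The conftest additions are three standard boto3 environment switches:

    import os

    os.environ.setdefault("AWS_EC2_METADATA_DISABLED", "true")
    os.environ.setdefault("AWS_METADATA_SERVICE_TIMEOUT", "1")
    os.environ.setdefault("AWS_METADATA_SERVICE_NUM_ATTEMPTS", "1")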
## Validation
| Test | Before | After |
|---|---|---|
| test_anthropic_error_handling (8 tests) | ~80s | ~15s |
| test_413_compression (14 tests) | ~18s | 2.3s |
| test_retaindb_plugin (67 tests) | ~13s | 1.3s |
| test_status_includes_tavily_key | 4.0s | 0.05s |
| test_setup_copilot_acp_skips_same_provider_pool_step | 8.0s | 0.26s |
| test_update_gateway_restart (5 tests) | ~18s total | ~0.35s total |
| test_exit_cleanup_interrupt (2 tests) | 8s | 1.5s |
| **Matrix shard 3 local** | **108s** | **50s** |
No behavioral contract changed — tests still verify retry happens,
service restart logic runs, etc.; they just don't burn real seconds
waiting for it.
Supersedes PR #11779 (those changes are included here).
* fix(tests): make AIAgent constructor calls self-contained (no env leakage)
Tests in tests/run_agent/ were constructing AIAgent() without passing
both api_key and base_url, then relying on leaked state from other
tests in the same xdist worker (or process-level env vars) to keep
provider resolution happy. Under hermetic conftest + pytest-split,
that state is gone and the tests fail with 'No LLM provider configured'.
Fix: pass both api_key and base_url explicitly on 47 AIAgent()
construction sites across 13 files. AIAgent.__init__ with both set
takes the direct-construction path (line 960 in run_agent.py) and
skips the resolver entirely.
One call site (test_none_base_url_passed_as_none) left alone — that
test asserts behavior for base_url=None specifically.
This is a prerequisite for any future matrix-split or stricter
isolation work, and lands cleanly on its own.
Validation:
- tests/run_agent/ full: 760 passed, 0 failed (local)
- Previously relied on cross-test pollution; now self-contained
* fix(tests): update opencode-go model order assertion to match kimi-k2.5-first
commit 78a74bb promoted kimi-k2.5 to first position in model suggestion
lists but didn't update this test, which has been failing on main since.
Reorder expected list to match the new canonical order.
* docs: fix ascii-guard border alignment errors
Three docs pages had ASCII diagram boxes with off-by-one column
alignment issues that failed docs-site-checks CI:
- architecture.md: outer box is 71 cols but inner-box content lines
and border corners were offset by 1 col, making content-line right
border at col 70/72 while top/bottom border was at col 71. Inner
boxes also had border corners at cols 19/36/53 but content pipes
at cols 20/37/54. Rewrote the diagram with consistent 71-col width
throughout, aligned inner boxes at cols 4-19, 22-37, 40-55 with
2-space gaps and 15-space trailing padding.
- gateway-internals.md: same class of issue — outer box at 51 cols,
inner content lines varied 52-54 cols. Rewrote with consistent
51-col width, inner boxes at cols 4-15, 18-29, 32-43. Also
restructured the bottom-half message flow so it's bare text
(not half-open box cells) matching the intent of the original.
- agent-loop.md line 112-114: box 2 (API thread) content lines had
one extra space pushing the right border to col 46 while the top
and bottom borders of that box sat at col 45. Trimmed one trailing
space from each of the three content lines.
All 123 docs files now pass `npm run lint:diagrams`:
✓ Errors: 0 (warnings: 6, non-fatal)
Pre-existing failures on main — unrelated to any open PR.
* test(setup): accept description kwarg in prompt_choice mock lambdas
setup.py's `_curses_prompt_choice` gained an optional `description`
parameter (used for rendering context hints alongside the prompt).
`prompt_choice` forwards it via keyword arg. The two existing tests
mocked `_curses_prompt_choice` with lambdas that didn't accept the
new kwarg, so the forwarded call raised TypeError.
Fix: add `description=None` to both mock lambda signatures so they
absorb the new kwarg without changing behavior.
* test(matrix): update stale audio-caching assertion
test_regular_audio_has_http_url asserted that non-voice audio
messages keep their HTTP URL and are NOT downloaded/cached. That
was true when the caching code only triggered on
`is_voice_message`. Since bec02f37 (encrypted-media caching
refactor), matrix.py caches all media locally — photos, audio,
video, documents — so downstream tools can read them as real
files via media_urls. This applies to regular audio too.
Renamed the test to `test_regular_audio_is_cached_locally`,
flipped the assertions accordingly, and documented the
intentional behavior change in the docstring. Other tests in
the file (voice-specific caching, message-type detection,
reply-to threading) continue to pass.
* test(413): allow multi-pass preflight compression
run_agent.py's preflight compression runs up to 3 passes in a loop
for very large sessions (each pass summarizes the middle N turns,
then re-checks tokens). The loop breaks when a pass returns a
message list no shorter than its input (can't compress further).
test_preflight_compresses_oversized_history used a static mock
return value that returned the same 2 messages regardless of input,
so the loop ran pass 1 (41 -> 2) and pass 2 (2 -> 2 -> break),
making call_count == 2. The assert_called_once() assertion was
strictly wrong under the multi-pass design.
The invariant the test actually cares about is: preflight ran, and
its first invocation received the full oversized history. Replaced
the count assertion with those two invariants.
* docs: drop '...' from gateway diagram, merge side-by-side boxes
ascii-guard 2.3.0 flagged two remaining issues after the initial fix
pass:
1. gateway-internals.md L33: the '...' suffix after inner box 3's
right border got parsed as 'extra characters after inner-box right
border'. Dropped the '...' — the surrounding prose already conveys
'and more platforms' without needing the visual hint.
2. agent-loop.md: ascii-guard can't cleanly parse two side-by-side
boxes of different heights (main thread 7 rows, API thread 5 rows).
Even equalizing heights didn't help — the linter treats the left
box's right border as the end of the diagram. Merged into a single
54-char-wide outer box with both threads labeled as regions inside,
keeping the ▶ arrow to preserve the main→API flow direction.
Two new tests in tests/run_agent/ that pin the user-visible invariant
behind AlexKucera's Discord report (2026-04-16): no matter how a future
keepalive / transport fix for #10324 plumbs sockets in, sequential
chats on the same AIAgent instance must all succeed.
test_create_openai_client_reuse.py (no network, runs in CI):
- test_second_create_does_not_wrap_closed_transport_from_first
back-to-back _create_openai_client calls must not hand the same
http_client (after an SDK close) to the second construction
- test_replace_primary_openai_client_survives_repeated_rebuilds
three sequential rebuilds via the real _replace_primary_openai_client
entrypoint must each install a live client
test_sequential_chats_live.py (opt-in, HERMES_LIVE_TESTS=1):
- test_three_sequential_chats_across_client_rebuild
real OpenRouter round trips, with an explicit
_replace_primary_openai_client call between turns 2 and 3.
Error-sentinel detector treats 'API call failed after 3 retries'
replies as failures instead of letting them pass the naive
truthy check (which is how a first draft of this test missed
the bug it was meant to catch).
Validation:
clean main (post-revert, defensive copy present)
-> all 4 tests PASS
broken #10933 state (keepalive injection, no defensive copy)
-> all 4 tests FAIL with precise messages pointing at #10933
Companion to taeuk178's test_create_openai_client_kwargs_isolation.py,
which pins the syntactic 'don't mutate input dict' half of the same
contract. Together they catch both the specific mechanism of #10933
and any other reimplementation that breaks the sequential-call
invariant.
Shallow-copy client_kwargs at the top of _create_openai_client() to
prevent in-place mutation from leaking back into self._client_kwargs.
Defensive fix that locks the contract for future httpx/transport work.
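The whole fix is one line at the top of the method; a shallow copy
suffices because the observed mutation replaced top-level keys rather
than editing nested values in place:

    def _create_openai_client(self, client_kwargs: dict):
        client_kwargs = dict(client_kwargs)  # never mutate the caller's dict
        ...  # build and return the client from the local copy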
Cherry-picked from #10978 by @taeuk178.
Group A (3 tests): 'No LLM provider configured' RuntimeError
- test_user_message_surrogates_sanitized, test_counters_initialized_in_init,
test_openai_prompt_tokens_unchanged
- Root cause: AIAgent.__init__ now requires base_url alongside api_key to
skip resolve_provider_client() (which returns None when API keys are
blanked in CI). Added base_url='http://localhost:1234/v1' to test
agent construction.
Group B (5 tests): Discord slash command auto-registration
- test_auto_registers_missing_gateway_commands, test_auto_registered_command_*,
test_register_skill_group_*
- Root cause: xdist workers that loaded a discord mock WITHOUT
app_commands.Command/Group caused _register_slash_commands() to fail
silently. Added comprehensive shared discord mock in
tests/gateway/conftest.py (same pattern as existing telegram mock).
Group C (5 errors): Discord reply mode 'NoneType has no DMChannel'
- All TestReplyToText tests
- Root cause: FakeDMChannel was not a subclass of real discord.DMChannel,
so isinstance() checks in _handle_message failed when running in full
suite (real discord installed). Made FakeDMChannel inherit from
discord.DMChannel when available. Removed fragile monkeypatch approach.
Group D (2 tests): detect_provider_for_model wrong provider
- test_openrouter_slug_match (got 'ai-gateway'),
test_bare_name_gets_openrouter_slug (got 'copilot')
- Root cause: ai-gateway, copilot, and kilocode are multi-vendor
aggregators that list other providers' models (OpenRouter-style slugs).
They were being matched in Step 1 before OpenRouter. Added all three
to _AGGREGATORS set so they're skipped like nous/openrouter.
Group E (1 test): model_flow_custom StopIteration
- test_model_flow_custom_saves_verified_v1_base_url
- Root cause: 'Display name' prompt was added after the test was written.
The input iterator had 5 answers but the flow now asks 6 questions.
Added 6th empty string answer.
Group F (1 test): Telegram proxy env assertion
- test_uses_proxy_env_for_primary_and_fallback_transports
- Root cause: _resolve_proxy_url() now checks TELEGRAM_PROXY first
(via resolve_proxy_url('TELEGRAM_PROXY')). Test didn't clear this
env var, allowing potential leakage from other tests in xdist workers.
Added TELEGRAM_PROXY to the cleanup list.
The gateway compression notifications were already removed in commit cc63b2d1
(PR #4139), but the agent-level context pressure warnings (85%/95% tiered
alerts via _emit_context_pressure) were still firing on both CLI and gateway.
Removed:
- _emit_context_pressure method and all call sites in run_conversation()
- Class-level dedup state (_context_pressure_last_warned, _CONTEXT_PRESSURE_COOLDOWN)
- Instance attribute _context_pressure_warned_at
- Pressure reset logic in _compress_context
- format_context_pressure and format_context_pressure_gateway from agent/display.py
- Orphaned ANSI constants that only served these functions
- tests/run_agent/test_context_pressure.py (all 361 lines)
Compression itself continues to run silently in the background.
Closes #3784
Three targeted fixes for the 'agent stuck on terminal command' report:
1. **Concurrent tool wait loop now checks interrupts** (run_agent.py)
The sequential path checked _interrupt_requested before each tool call,
but the concurrent path's wait loop just blocked with 30s timeouts.
Now polls every 5s and cancels pending futures on interrupt, giving
already-running tools 3s to notice the per-thread interrupt signal.
2. **Cancelled concurrent tools get proper interrupt messages** (run_agent.py)
When a concurrent tool is cancelled or didn't return a result due to
interrupt, the tool result message says 'skipped due to user interrupt'
instead of a generic error.
3. **Typing indicator fires before follow-up turn** (gateway/run.py)
After an interrupt is acknowledged and the pending message dequeued,
the gateway now sends a typing indicator before starting the recursive
_run_agent call. This gives the user immediate visual feedback that
the system is processing their new message (closing the perceived
'dead air' gap between the interrupt ack and the response).
Reported by @_SushantSays.
When a user enters a local model server URL (Ollama, vLLM, llama.cpp)
without a /v1 suffix during 'hermes model' custom endpoint setup,
prompt them to add it. Most OpenAI-compatible local servers require
/v1 in the base URL for chat completions to work.
When a custom/Ollama provider is used and reasoning_effort is set to 'none'
(or enabled: false), inject 'think': false into the request extra_body.
Ollama does not recognise the OpenRouter-style 'reasoning' extra_body field,
so thinking-capable models (Qwen3, etc.) generate <think> blocks regardless
of the reasoning_effort setting. This produces empty-response errors that
corrupt session state.
The fix adds a provider-specific block in _build_api_kwargs() that sets
think=false in extra_body whenever self.provider == 'custom' and reasoning
is explicitly disabled.
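The block, sketched (the reasoning-disabled predicate paraphrases the
description above; attribute names are assumed):

    def _maybe_disable_ollama_thinking(self, extra_body: dict) -> None:
        # Called from _build_api_kwargs().
        reasoning_off = (self.reasoning_effort == "none"
                         or not self.reasoning_enabled)
        if self.provider == "custom" and reasoning_off:
            # Ollama ignores the OpenRouter-style 'reasoning' field; its
            # own switch is a top-level 'think' flag in the request body.
            extra_body["think"] = False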
Closes #3191
Salvaged from PR #10643 by kshitijk4poor, updated for current main.
Root causes fixed:
1. Telegram xdist mock pollution — new tests/gateway/conftest.py with shared
mock that runs at collection time (prevents ChatType=None caching)
2. VIRTUAL_ENV env var leak — monkeypatch.delenv in _detect_venv_dir tests
3. Copilot base_url missing — add fallback in _resolve_runtime_from_pool_entry
4. Stale vision model assertion — zai now uses glm-5v-turbo
5. Reasoning item id intentionally stripped — assert 'id' not in (store=False)
6. Context length warning unreachable — pass base_url to AIAgent in test
7. Kimi provider label updated — 'Kimi / Kimi Coding Plan' matches models.py
8. Google Workspace calendar tests — rewritten for current production code,
properly mock subprocess on api_module, removed stale +agenda assertions
9. Credential pool auto-seeding — mock _select_pool_entry / _resolve_auto /
_import_codex_cli_tokens to prevent real credentials from leaking into tests
The recovery block previously only retried (continue) when one of the
per-component sanitization checks (messages, tools, system prompt,
headers, credentials) found and stripped non-ASCII content. When the
non-ASCII lived only in api_messages' reasoning_content field (which
is built from messages['reasoning'] and not checked by the original
_sanitize_messages_non_ascii), all checks returned False and the
recovery fell through to the normal error path — burning a retry
attempt despite _force_ascii_payload being set.
Now the recovery always continues (retries) when _is_ascii_codec is
detected. The _force_ascii_payload flag guarantees the next iteration
runs _sanitize_structure_non_ascii(api_kwargs) on the full API payload,
catching any remaining non-ASCII regardless of where it lives.
Also adds test for the 'reasoning' field on canonical messages.
Fixes #6843
The ASCII-locale recovery path in run_agent.py sanitized the canonical
'messages' list but left 'api_messages' untouched. api_messages is a
separate API-copy built before the retry loop and may carry extra fields
(reasoning_content, extra_body entries) that are not present in
'messages'. This caused the retry to still raise UnicodeEncodeError even
after the 'System encoding is ASCII — stripped...' log line appeared.
Two changes:
- _sanitize_messages_non_ascii now walks all extra top-level string fields
in each message dict (any key not in {content, name, tool_calls, role})
so reasoning_content and future extras are cleaned in both 'messages'
and 'api_messages' (sketched after this list).
- The ASCII-codec recovery block now also calls sanitize on api_messages
and api_kwargs so no non-ASCII survives into the next retry attempt.
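The widened walk, sketched (content/tool_calls handling elided; shown
is only the new extra-field pass):

    _HANDLED_KEYS = {"content", "name", "tool_calls", "role"}

    def _sanitize_extra_message_fields(messages: list[dict]) -> bool:
        changed = False
        for msg in messages:
            for key, value in list(msg.items()):
                if key in _HANDLED_KEYS or not isinstance(value, str):
                    continue
                cleaned = value.encode("ascii", "ignore").decode("ascii")
                if cleaned != value:  # e.g. reasoning_content
                    msg[key] = cleaned
                    changed = True
        return changed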
Adds regression tests covering:
- reasoning_content with non-ASCII in api_messages
- extra_body with non-ASCII in api_kwargs
- canonical messages clean but api_messages dirty
Fixes #6843
The _client_cache used event loop id() as part of the cache key, so
every new worker-thread event loop created a new entry for the same
provider config. In long-running gateways where threads are recycled
frequently, this caused unbounded cache growth — each stale entry
held an unclosed AsyncOpenAI client with its httpx connection pool,
eventually exhausting file descriptors.
Fix: remove loop_id from the cache key and instead validate on each
async cache hit that the cached loop is the current, open loop. If
the loop changed or was closed, the stale entry is replaced in-place
rather than creating an additional entry. This bounds cache growth
to at most one entry per unique provider config.
Also adds a _CLIENT_CACHE_MAX_SIZE (64) safety belt with FIFO
eviction as defense-in-depth against any remaining unbounded growth.
Cross-loop safety is preserved: different event loops still get
different client instances (validated by existing test suite).
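The validate-on-hit shape (must run inside the target event loop; cache
key derivation elided):

    import asyncio

    _client_cache: dict = {}      # provider-config key -> (loop, client)
    _CLIENT_CACHE_MAX_SIZE = 64   # FIFO safety belt

    def _get_cached_client(key, build):
        loop = asyncio.get_running_loop()
        entry = _client_cache.get(key)
        if entry is not None:
            cached_loop, client = entry
            if cached_loop is loop and not loop.is_closed():
                return client
            # Loop changed or closed: fall through and replace in place
            # instead of growing one entry per recycled worker loop.
        client = build()
        if key not in _client_cache and len(_client_cache) >= _CLIENT_CACHE_MAX_SIZE:
            _client_cache.pop(next(iter(_client_cache)))  # FIFO eviction
        _client_cache[key] = (loop, client)
        return client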
Closes #10200
With store=False (our default for the Responses API), the API does not
persist response items. When reasoning items with 'id' fields were
replayed on subsequent turns, the API attempted a server-side lookup
for those IDs and returned 404:
Item with id 'rs_...' not found. Items are not persisted when store
is set to false.
The encrypted_content blob is self-contained for reasoning chain
continuity — the id field is unnecessary and triggers the failed lookup.
Fix: strip 'id' from reasoning items in both _chat_messages_to_responses_input
(message conversion) and _preflight_codex_input_items (normalization layer).
The id is still used for local deduplication but never sent to the API.
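The strip, sketched:

    def _strip_reasoning_ids(items: list[dict]) -> list[dict]:
        # Applied in both conversion and preflight normalization; the
        # encrypted_content blob alone carries chain continuity.
        out = []
        for item in items:
            if item.get("type") == "reasoning" and "id" in item:
                item = {k: v for k, v in item.items() if k != "id"}
            out.append(item)
        return out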
Reported by @zuogl448 on GPT-5.4.
The existing recovery block sanitized self.api_key and
self._client_kwargs['api_key'] but did not update self.client.api_key.
The OpenAI SDK stores its own copy of api_key and reads it dynamically
via the auth_headers property on every request. Without this fix, the
retry after sanitization would still send the corrupted key in the
Authorization header, causing the same UnicodeEncodeError.
The bug manifests when an API key contains Unicode lookalike characters
(e.g. ʋ U+028B instead of v) from copy-pasting out of PDFs, rich-text
editors, or web pages with decorative fonts. httpx hard-encodes all
HTTP headers as ASCII, so the non-ASCII char in the Authorization
header triggers the error.
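The three-location sync, sketched (the method name is illustrative):

    def _sanitize_api_key_everywhere(self) -> None:
        cleaned = self.api_key.encode("ascii", "ignore").decode("ascii")
        self.api_key = cleaned
        self._client_kwargs["api_key"] = cleaned
        if self.client is not None:  # pre-init recovery path
            # The SDK keeps its own copy and reads it via auth_headers
            # on every request, so it must be updated too.
            self.client.api_key = cleaned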
Adds TestApiKeyClientSync with two tests verifying:
- All three key locations are synced after sanitization
- Recovery handles client=None (pre-init) without crashing
Previously, non-integer context_length values (e.g. '256K') in
config.yaml were silently ignored, causing the agent to fall back
to 128K auto-detection with no user feedback. This was confusing
for users with custom LiteLLM endpoints expecting larger context.
Now prints a clear stderr warning and logs at WARNING level when
model.context_length or custom_providers[].models.<model>.context_length
cannot be parsed as an integer, telling users to use plain integers
(e.g. 256000 instead of '256K').
Reported by community user ChFarhan via Discord.
When compression fails after max attempts, the agent returns
{completed: False, partial: True} but was missing the 'failed' flag.
The gateway's agent_failed_early guard checked for 'failed' AND
'not final_response', but _run_agent_blocking always converts errors
to final_response — making the guard dead code. This caused the
oversized session to persist, creating an infinite fail loop where
every subsequent message hits the same compression failure.
Changes:
- run_agent.py: add 'failed: True' and 'compression_exhausted: True'
to all 5 compression-exhaustion return paths
- gateway/run.py (_run_agent_blocking): forward 'failed' and
'compression_exhausted' flags through to the caller
- gateway/run.py (_handle_message_with_agent): fix agent_failed_early
to check bool(failed) without the broken 'not final_response' clause;
auto-reset the session when compression is exhausted so the next
message starts fresh (guard sketch after this list)
- Update tests to match new guard logic and add
TestCompressionExhaustedFlag test class
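The corrected guard, sketched (the session helper name is assumed):

    def _guard_agent_failure(result: dict, session) -> None:
        agent_failed_early = bool(result.get("failed"))
        # The old clause `and not final_response` was dead code:
        # _run_agent_blocking always converts errors to a final_response.
        if agent_failed_early and result.get("compression_exhausted"):
            session.reset()  # start fresh instead of looping forever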
Closes #9893
API keys containing Unicode lookalike characters (e.g. ʋ U+028B instead
of v) cause UnicodeEncodeError when httpx encodes the Authorization
header as ASCII. This commonly happens when users copy-paste keys from
PDFs, rich-text editors, or web pages with decorative fonts.
Three layers of defense:
1. **Save-time validation** (hermes_cli/config.py):
_check_non_ascii_credential() strips non-ASCII from credential values
when saving to .env, with a clear warning explaining the issue.
2. **Load-time sanitization** (hermes_cli/env_loader.py):
_sanitize_loaded_credentials() strips non-ASCII from credential env
vars (those ending in _API_KEY, _TOKEN, _SECRET, _KEY) after dotenv
loads them, so the rest of the codebase never sees non-ASCII keys.
3. **Runtime recovery** (run_agent.py):
The UnicodeEncodeError recovery block now also sanitizes self.api_key
and self._client_kwargs['api_key'], fixing the gap where message/tool
sanitization succeeded but the API key still caused httpx to fail on
the Authorization header.
Also: hermes_logging.py RotatingFileHandler now explicitly sets
encoding='utf-8' instead of relying on locale default (defensive
hardening for ASCII-locale systems).
GPT-5.4 supports none/low/medium/high/xhigh but not 'minimal'.
Users may configure 'minimal' via OpenRouter conventions, which would
cause a 400 on native OpenAI. Clamp to 'low' in the codex_responses
path before sending.
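The clamp itself is one comparison:

    def _clamp_reasoning_effort(effort: str) -> str:
        # 'minimal' is an OpenRouter convention; the codex_responses
        # path maps it to the nearest value native OpenAI accepts.
        return "low" if effort == "minimal" else effort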
Plugins can now return {"action": "block", "message": "reason"} from
their pre_tool_call hook to prevent a tool from executing. The error
message is returned to the model as a tool result so it can adjust.
Covers both execution paths: handle_function_call (model_tools.py) and
agent-level tools (run_agent.py _invoke_tool + sequential/concurrent).
Blocked tools skip all side effects (counter resets, checkpoints,
callbacks, read-loop tracker).
Adds skip_pre_tool_call_hook flag to avoid double-firing the hook when
run_agent.py already checked and then calls handle_function_call.
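The hook contract, sketched (the pre_tool_call signature is an
assumption; only the return shape comes from the text above):

    def _run_pre_tool_call_hooks(plugins, tool_name: str, args: dict) -> dict:
        for plugin in plugins:
            verdict = plugin.pre_tool_call(tool_name, args) or {}
            if verdict.get("action") == "block":
                # Delivered to the model as the tool result; no side
                # effects (counters, checkpoints, callbacks) fire.
                return {"blocked": True,
                        "message": verdict.get("message",
                                               "blocked by plugin")}
        return {"blocked": False}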
Salvaged from PR #5385 (gianfrancopiana) and PR #4610 (oredsecurity).
Previously, long-running streamed responses could be incorrectly treated
as idle by the gateway/cron inactivity timeout even while tokens were
actively arriving. The _touch_activity() call (which feeds
get_activity_summary() polled by the external timeout) was either called
only on the first chunk (chat completions) or not at all (Anthropic,
Codex, Codex fallback).
Add _touch_activity() on every chunk/event in all four streaming paths
so the inactivity monitor knows data is still flowing.
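The per-path change is the same two lines in each consumer loop
(illustrative shape; the event handler name is a placeholder):

    def _consume_stream(self, stream) -> None:
        for event in stream:
            # Every chunk, not just the first: the gateway/cron
            # inactivity monitor polls get_activity_summary(), which
            # this feeds.
            self._touch_activity()
            self._handle_stream_event(event)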
Fixes #8760
The existing ASCII codec handler only sanitized conversation messages,
leaving tool schemas, system prompts, ephemeral prompts, prefill messages,
and HTTP headers as unhandled sources of non-ASCII content. On systems
with LANG=C or non-UTF-8 locale, Unicode symbols in tool descriptions
(e.g. arrows, em-dashes from prompt_builder) and system prompt content
would cause UnicodeEncodeError that fell through to the error path.
Changes:
- Add _sanitize_structure_non_ascii() generic recursive walker for
nested dict/list payloads (sketched after this list)
- Add _sanitize_tools_non_ascii() thin wrapper for tool schemas
- Add _force_ascii_payload flag: once ASCII locale is detected, all
subsequent API calls get proactively sanitized (prevents recurring
failures from new tool results bringing fresh Unicode each turn)
- Extend the ASCII codec error handler to sanitize: prefill_messages,
tool schemas (self.tools), system prompt, ephemeral system prompt,
and default HTTP headers
- Update stale comment that acknowledged the gap
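The walker, sketched (returns a cleaned copy rather than mutating in
place):

    def _sanitize_structure_non_ascii(obj):
        if isinstance(obj, str):
            return obj.encode("ascii", "ignore").decode("ascii")
        if isinstance(obj, dict):
            return {k: _sanitize_structure_non_ascii(v)
                    for k, v in obj.items()}
        if isinstance(obj, list):
            return [_sanitize_structure_non_ascii(v) for v in obj]
        return obj  # numbers, bools, None pass through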
Cherry-picked from PR #8834 (credential pool changes dropped as
separate concern).
Three targeted changes to close the gaps between retry layers that
caused users to experience 'No response from provider for 580s' and
'No activity for 15 minutes' despite having 5 layers of retry:
1. Remove non-streaming fallback from streaming path
Previously, when all 3 stream retries exhausted, the code fell back
to _interruptible_api_call() which had no stale detection and no
activity tracking — a black hole that could hang for up to 1800s.
Now errors propagate to the main retry loop which has richer recovery
(credential rotation, provider fallback, backoff).
For 'stream not supported' errors, sets _disable_streaming flag so
the main retry loop automatically switches to non-streaming on the
next attempt.
2. Add _touch_activity to recovery dead zones
The gateway inactivity monitor relies on _touch_activity() to know
the agent is alive, but activity was never touched during:
- Stale stream detection/kill cycles (180-300s gaps)
- Stream retry connection rebuilds
- Main retry backoff sleeps (up to 120s)
- Error recovery classification
Now all these paths touch activity every ~30s, keeping the gateway
informed during recovery cycles.
3. Add stale-call detector to non-streaming path
_interruptible_api_call() now has the same stale detection pattern
as the streaming path: kills hung connections after 300s (default,
configurable via HERMES_API_CALL_STALE_TIMEOUT), scaled for large
contexts (450s for 50K+ tokens, 600s for 100K+ tokens), disabled
for local providers.
Also touches activity every ~30s during the wait so the gateway
monitor stays informed.
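The non-streaming detector mirrors the streaming one, in sketch form
(_kill_connection is a placeholder; _touch_activity is the real hook
named above; `future` is the in-flight API call on a worker thread):

    import time

    def _wait_with_stale_detection(self, future, stale_timeout: float):
        deadline = time.monotonic() + stale_timeout
        while not future.done():
            time.sleep(min(30.0, max(0.5, deadline - time.monotonic())))
            self._touch_activity()  # keep the gateway monitor informed
            if time.monotonic() >= deadline and not future.done():
                self._kill_connection()  # drop the hung connection so
                raise TimeoutError(      # the main retry loop recovers
                    f"no response from provider for {stale_timeout:.0f}s")
        return future.result()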
Env vars:
- HERMES_API_CALL_STALE_TIMEOUT: non-streaming stale timeout (default 300s)
- HERMES_STREAM_STALE_TIMEOUT: unchanged (default 180s)
Before: worst case ~2+ hours of sequential retries with no feedback
After: worst case bounded by gateway inactivity timeout (default 1800s)
with continuous activity reporting