Broaden the UnicodeEncodeError recovery to handle systems with an ASCII-only
locale (LANG=C, Chromebooks), where ANY non-ASCII character causes an encoding
failure, not just lone surrogates.
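A minimal sketch of the stripping step (helper name as in the change list
below; the real helpers also walk name and tool_calls fields):

    # Hypothetical sketch: drop every non-ASCII character so an ASCII-only
    # locale (LANG=C) can encode the result without raising UnicodeEncodeError.
    def _strip_non_ascii(text: str) -> str:
        return text.encode("ascii", errors="ignore").decode("ascii")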
Changes:
- Add _strip_non_ascii() and _sanitize_messages_non_ascii() helpers that
strip all non-ASCII characters from message content, name, and tool_calls
- Update the UnicodeEncodeError handler to detect ASCII codec errors and
fall back to non-ASCII sanitization after surrogate check fails
- Sanitize tool_calls arguments and name fields (not just content)
- Fix bare .encode() in cli.py suspend handler to use explicit utf-8
- Add comprehensive test suite (17 tests)
When switching models at runtime, the config_context_length override
was not being passed to the new context compressor instance. This
meant the user-specified context length from config.yaml was lost
after a model switch.
- Store _config_context_length on AIAgent instance during __init__
- Pass _config_context_length when creating new ContextCompressor in switch_model
- Add test to verify config_context_length is preserved across model switches
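A hedged sketch of the flow; the attribute name comes from this change, while
the ContextCompressor stand-in and its signature are assumptions:

    from dataclasses import dataclass

    @dataclass
    class ContextCompressor:                 # stand-in for the real compressor
        config_context_length: int | None = None

    class AIAgent:
        def __init__(self, config_context_length: int | None = None):
            # Keep the config.yaml override around for later switches.
            self._config_context_length = config_context_length
            self.compressor = ContextCompressor(self._config_context_length)

        def switch_model(self, new_model: str) -> None:
            # Previously the new compressor was built without the override,
            # silently reverting to the model's default context length.
            self.compressor = ContextCompressor(self._config_context_length)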
Fixes: switching the model was not updating the context length.
Automated dead code audit using vulture + coverage.py + ast-grep intersection,
confirmed by an Opus deep-verification pass. Every symbol was verified to have
zero production callers (test imports excluded from reachability analysis).
Removes ~1,534 lines of dead production code across 46 files and ~1,382 lines
of stale test code. 3 entire files deleted (agent/builtin_memory_provider.py,
hermes_cli/checklist.py, tests/hermes_cli/test_setup_model_selection.py).
Co-authored-by: alt-glitch <balyan.sid@gmail.com>
The _call_anthropic() streaming path never updated last_chunk_time during
the event loop — only once at stream start. The stale stream detector in
the outer poll loop uses this timer, so any Anthropic stream longer than
180s was killed even when events were actively arriving. The result was a
self-inflicted RemoteProtocolError that users saw as:
'⚠️ Connection to provider dropped (RemoteProtocolError). Reconnecting…'
The _call_chat_completions() path already updates last_chunk_time on every
chunk (line 4475). This brings _call_anthropic() to parity.
Also adds deltas_were_sent tracking to the Anthropic text_delta path so
the retry loop knows not to retry after partial delivery (prevents
duplicated output on connection drops mid-stream).
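A rough sketch of the parity change (names from this description; the real
event loop is more involved):

    import time

    def consume_anthropic_events(events, state: dict):
        for event in events:
            # Reset the stale-stream timer on every event, not just at stream start.
            state["last_chunk_time"] = time.monotonic()
            if getattr(event, "type", None) == "text_delta":
                # Once any text has reached the user, the retry loop must not re-send.
                state["deltas_were_sent"] = True
            yield event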
Reported-by: Discord users (Castellani, Codename_11)
The hardcoded User-Agent 'KimiCLI/1.3' is outdated — Kimi CLI is now at
v1.30.0. The stale version string causes intermittent 403 errors from
Kimi's coding endpoint ('only available for Coding Agents').
Update all 8 occurrences across run_agent.py, auxiliary_client.py, and
doctor.py to 'KimiCLI/1.30.0' to match the current official Kimi CLI.
Extends the /fast command to support Anthropic's Fast Mode beta in addition
to OpenAI Priority Processing. When enabled on Claude Opus 4.6, adds
speed:"fast" and the fast-mode-2026-02-01 beta header to API requests for
~2.5x faster output token throughput.
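A hedged sketch of the request-side change described below; field and header
names are taken from this commit, the function body is simplified:

    def build_anthropic_kwargs(model: str, fast_mode: bool = False,
                               third_party_endpoint: bool = False) -> dict:
        kwargs: dict = {"model": model}
        if fast_mode and not third_party_endpoint:
            kwargs["speed"] = "fast"
            # Beta header rides along via extra_headers; skipped for
            # Anthropic-compatible third parties like MiniMax.
            kwargs["extra_headers"] = {"anthropic-beta": "fast-mode-2026-02-01"}
        return kwargs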
Changes:
- hermes_cli/models.py: Add _ANTHROPIC_FAST_MODE_MODELS registry,
model_supports_fast_mode() now recognizes Claude Opus 4.6,
resolve_fast_mode_overrides() returns {speed: fast} for Anthropic
vs {service_tier: priority} for OpenAI
- agent/anthropic_adapter.py: Add _FAST_MODE_BETA constant,
build_anthropic_kwargs() accepts fast_mode=True which injects
speed:fast + beta header via extra_headers (skipped for third-party
Anthropic-compatible endpoints like MiniMax)
- run_agent.py: Pass fast_mode to build_anthropic_kwargs in the
anthropic_messages path of _build_api_kwargs()
- cli.py: Update _handle_fast_command with provider-aware messaging
(shows 'Anthropic Fast Mode' vs 'Priority Processing')
- hermes_cli/commands.py: Update /fast description to mention both
providers
- tests: 13 new tests covering Anthropic model detection, override
resolution, CLI availability, routing, adapter kwargs, and
third-party endpoint safety
After mid-loop compression (triggered by 413, context_overflow, or Anthropic
long-context tier errors), _compress_context() creates a new session in SQLite
and resets _last_flushed_db_idx=0. However, conversation_history was not cleared,
so _flush_messages_to_session_db() computed:
  flush_from = max(len(conversation_history), _last_flushed_db_idx)
             = max(200, 0) = 200
  messages[200:] → empty (the compressed history has fewer than 200 messages)
This resulted in zero messages being written to the new session's SQLite store.
On resume, the user would see 'Session found but has no messages.'
The preflight compression path (line 7311) already had the fix:
conversation_history = None
This commit adds the same clearing to the three mid-loop compression sites:
- Anthropic long-context tier overflow
- HTTP 413 payload too large
- Generic context_overflow error
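A minimal sketch of the shared clearing step (names from this description):

    def _after_midloop_compression(agent) -> None:
        # _compress_context() already resets _last_flushed_db_idx to 0; the fix
        # also drops the stale history so the flush starts from index 0 instead
        # of skipping past the (shorter) compressed message list.
        agent.conversation_history = None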
Reported by Aaryan (Nous community).
Raise the default httpx stream read timeout from 60s to 120s for all
providers. Additionally, auto-detect local LLM endpoints (Ollama,
llama.cpp, vLLM) and raise the read timeout to HERMES_API_TIMEOUT
(1800s) since local models can take minutes for prefill on large
contexts before producing the first token.
The stale stream timeout already had this local auto-detection pattern;
the httpx read timeout was missing it, leaving a hard 60s wall that users
had no documented way to raise (HERMES_STREAM_READ_TIMEOUT was undocumented).
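A hedged sketch of the resolution order (env var names from this change; the
local-endpoint check is simplified):

    import os
    from urllib.parse import urlparse

    def _looks_local(base_url: str) -> bool:
        # Simplified stand-in for the real is_local_endpoint() URL check.
        host = urlparse(base_url).hostname or ""
        return host in ("localhost", "127.0.0.1", "::1")

    def resolve_read_timeout(base_url: str) -> float:
        explicit = os.environ.get("HERMES_STREAM_READ_TIMEOUT")
        if explicit:
            return float(explicit)        # explicit user config is always honored
        if _looks_local(base_url):
            return 1800.0                 # local prefill can take minutes before first token
        return 120.0                      # new default, up from 60s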
Changes:
- Default HERMES_STREAM_READ_TIMEOUT: 60s -> 120s
- Auto-detect local endpoints -> raise to 1800s (user override respected)
- Document HERMES_STREAM_READ_TIMEOUT and HERMES_STREAM_STALE_TIMEOUT
- Add 10 parametrized tests
Reported-by: Pavan Srinivas (@pavanandums)
When the model mentions <think> as literal text in its response (e.g.
"(/think not producing <think> tags)"), the streaming display treated it
as a reasoning block opener and suppressed everything after it. The
response box would close with truncated content and no error — the API
response was complete but the display ate it.
Root cause: _stream_delta() matched <think> anywhere in the text stream
regardless of position. Real reasoning blocks always start at the
beginning of a line; mentions in prose appear mid-sentence.
Fix: track line position across streaming deltas with a
_stream_last_was_newline flag. Only enter reasoning suppression when
the tag appears at a block boundary (start of stream, after a newline,
or after only whitespace on the current line). Add a _flush_stream()
safety net that recovers buffered content if no closing tag is found
by end-of-stream.
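A simplified sketch of the boundary check; the real code tracks a
_stream_last_was_newline flag across deltas rather than re-scanning the buffer:

    def is_block_boundary(text_before_tag: str) -> bool:
        # <think> only counts as a reasoning opener at the start of the stream,
        # right after a newline, or with only whitespace on the current line.
        if not text_before_tag:
            return True
        current_line = text_before_tag.rsplit("\n", 1)[-1]
        return current_line.strip() == ""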
Also fixes three related issues discovered during investigation:
- anthropic_adapter: _get_anthropic_max_output() now normalizes dots to
hyphens so 'claude-opus-4.6' matches the 'claude-opus-4-6' table key
(was returning 32K instead of 128K)
- run_agent: send explicit max_tokens for Claude models on Nous Portal,
same as OpenRouter — both proxy to Anthropic's API which requires it.
Without it the backend defaults to a low limit that truncates responses.
- run_agent: reset truncated_tool_call_retries after successful tool
execution so a single truncation doesn't poison the entire conversation.
Previously /fast only supported gpt-5.4 and forced a provider switch to
openai-codex. Now supports all 13 models from OpenAI's Priority Processing
pricing table (gpt-5.4, gpt-5.4-mini, gpt-5.2, gpt-5.1, gpt-5, gpt-5-mini,
gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, o3, o4-mini).
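A hedged sketch of the registry and the override it produces (model list and
field name from this change; the helper name is illustrative):

    _PRIORITY_PROCESSING_MODELS = frozenset({
        "gpt-5.4", "gpt-5.4-mini", "gpt-5.2", "gpt-5.1", "gpt-5", "gpt-5-mini",
        "gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano", "gpt-4o", "gpt-4o-mini",
        "o3", "o4-mini",
    })

    def priority_overrides(model: str, fast_enabled: bool) -> dict:
        # service_tier is injected into whichever API path is already active
        # instead of forcing a provider switch to openai-codex.
        if fast_enabled and model in _PRIORITY_PROCESSING_MODELS:
            return {"service_tier": "priority"}
        return {}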
Key changes:
- Replaced _FAST_MODE_BACKEND_CONFIG with _PRIORITY_PROCESSING_MODELS frozenset
- Removed provider-forcing logic — service_tier is now injected into whatever
API path the user is already on (Codex Responses, Chat Completions, or
OpenRouter passthrough)
- Added request_overrides support to chat_completions path in run_agent.py
- Updated messaging from 'Codex inference tier' to 'Priority Processing'
- Expanded test coverage for all supported models
Add /fast slash command to toggle OpenAI Codex service_tier between
normal and priority ('fast') inference. Only exposed for models
registered in _FAST_MODE_BACKEND_CONFIG (currently gpt-5.4).
- Registry-based backend config for extensibility
- Dynamic command visibility (hidden from help/autocomplete for
non-supported models) via command_filter on SlashCommandCompleter
- service_tier flows through request_overrides from route resolution
- Omit max_output_tokens for Codex backend (rejects it)
- Persists to config.yaml under agent.service_tier
Salvage cleanup: removed the simple_term_menu/input() menu (banned);
bare /fast now shows status, like /reasoning does. Removed redundant
override resolution in _build_api_kwargs — single source of truth
via request_overrides from route.
Co-authored-by: Hermes Agent <hermes@nousresearch.com>
When OpenRouter returns 'No endpoints found that support tool use'
(HTTP 404), display a hint explaining that provider routing restrictions
may be filtering out tool-capable providers. Links the user directly
to the model's OpenRouter page to check which providers support tools.
The hint fires in the error display block that runs regardless of whether
fallback succeeds — so the user always understands WHY the model failed,
not just that it fell back.
Reported via Discord: GLM-5.1 on OpenRouter with US-based provider
restrictions eliminated all 4 tool-supporting endpoints (DeepInfra,
Z.AI, Friendli, Venice), leaving only 7 non-tool providers.
When a streaming response is cut mid-tool-call (connection drop, timeout),
the accumulated function.arguments is invalid JSON. The mock response
builder defaulted finish_reason to 'stop', so the agent loop treated it
as a valid completed turn and tried to execute tools with broken args.
Fix: validate tool call arguments with json.loads() during mock response
reconstruction. If any are invalid JSON, override finish_reason to
'length'. In the main loop's length handler, if tool calls are present,
refuse to execute and return partial=True with a clear error instead of
silently failing or wasting retries.
Also fixes _thinking_exhausted to not short-circuit when tool calls are
present — truncated tool calls are not thinking exhaustion.
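A simplified sketch of the validation (dict-shaped tool calls assumed; the
real mock builder works on SDK objects):

    import json

    def finish_reason_for(tool_calls: list, reported: str) -> str:
        for call in tool_calls:
            try:
                json.loads(call["function"]["arguments"])
            except (json.JSONDecodeError, KeyError, TypeError):
                # Truncated arguments: report 'length' so the loop refuses to
                # execute and returns partial=True instead of running broken calls.
                return "length"
        return reported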
Original cherry-picked from PR #6776 by AIandI0x1.
Closes #6638.
When the API returns "max_tokens too large given prompt" (input tokens
are within the context window, but input + requested output > window),
the old code incorrectly routed through the same handler as "prompt too
long" errors, calling get_next_probe_tier() and permanently halving
context_length. This made things worse: the window was fine, only the
requested output size needed trimming for that one call.
Two distinct error classes now handled separately:
- Prompt too long: the input itself exceeds the context window.
  Fix: compress history + halve context_length (existing behaviour, unchanged).
- Output cap too large: the input fits, but input + max_tokens exceeds the window.
  Fix: parse available_tokens from the error message, set a one-shot
  _ephemeral_max_output_tokens override for the retry, and leave
  context_length completely untouched.
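A hedged sketch of the parser (function name from this change; Anthropic's
exact error wording may vary):

    import re

    def parse_available_output_tokens_from_error(message: str) -> int | None:
        # Returns the available output budget for "max_tokens too large" errors,
        # or None for every other error class (including "prompt is too long").
        match = re.search(r"available_tokens:\s*(\d+)", message)
        return int(match.group(1)) if match else None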
Changes:
- agent/model_metadata.py: add parse_available_output_tokens_from_error()
that detects Anthropic's "available_tokens: N" error format and returns
the available output budget, or None for all other error types.
- run_agent.py: call the new parser first in the is_context_length_error
block; if it fires, set _ephemeral_max_output_tokens (with a 64-token
safety margin) and break to retry without touching context_length.
_build_api_kwargs consumes the ephemeral value exactly once then clears
it so subsequent calls use self.max_tokens normally.
- agent/anthropic_adapter.py: expand build_anthropic_kwargs docstring to
clearly document the max_tokens (output cap) vs context_length (total
window) distinction, which is a persistent source of confusion due to
the OpenAI-inherited "max_tokens" name.
- cli-config.yaml.example: add inline comments explaining both keys side
by side where users are most likely to look.
- website/docs/integrations/providers.md: add a callout box at the top
of "Context Length Detection" and clarify the troubleshooting entry.
- tests/test_ctx_halving_fix.py: 24 tests across four classes covering
the parser, build_anthropic_kwargs clamping, ephemeral one-shot
consumption, and the invariant that context_length is never mutated
on output-cap errors.
When _try_activate_fallback() swaps to a new provider (e.g.
kimi-coding), resolve_provider_client() correctly injects
provider-specific default_headers (like KimiCLI User-Agent) into the
returned OpenAI client. However, _client_kwargs was saved with only
api_key and base_url, dropping those headers.
Every subsequent API call rebuilds the client from _client_kwargs via
_create_request_openai_client(), producing a bare OpenAI client without
the required headers. Kimi Coding rejects this with 403; Copilot would
lose its auth headers similarly.
This patch reads _custom_headers from the fallback client (where the
OpenAI SDK stores the default_headers kwarg) and includes them in
_client_kwargs so any client rebuild preserves provider-specific headers.
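A rough sketch of the snapshot (attribute name from this description; shapes
simplified):

    def snapshot_client_kwargs(client) -> dict:
        kwargs = {"api_key": client.api_key, "base_url": str(client.base_url)}
        # The OpenAI SDK keeps the default_headers kwarg on _custom_headers;
        # carrying it forward keeps e.g. the KimiCLI User-Agent on rebuilds.
        headers = dict(getattr(client, "_custom_headers", None) or {})
        if headers:
            kwargs["default_headers"] = headers
        return kwargs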
Fixes #6075
Every turn now logs WHY the agent loop ended to agent.log with a
structured INFO line capturing: exit reason, model, api_calls/max,
budget usage, tool turn count, last message role, response length,
and session ID.
When the last message is a tool result and the turn was NOT
interrupted, emits WARNING level (visible in errors.log) — this is
the 'just stops' scenario users report where a tool call completes
but no continuation or final response follows.
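A hedged sketch of the logging call (field names from this description; the
exact formatting in agent.log may differ):

    import logging

    logger = logging.getLogger("agent")

    def log_loop_exit(reason, model, api_calls, max_calls, budget_used,
                      tool_turns, last_role, response_len, session_id,
                      interrupted=False):
        # Tool-result endings that were not interrupted are the 'just stops'
        # case, so they escalate to WARNING and show up in errors.log.
        level = logging.WARNING if last_role == "tool" and not interrupted else logging.INFO
        logger.log(level,
                   "loop_exit reason=%s model=%s api_calls=%d/%d budget=%s "
                   "tool_turns=%d last_role=%s response_len=%d session=%s",
                   reason, model, api_calls, max_calls, budget_used,
                   tool_turns, last_role, response_len, session_id)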
10 tracked exit reasons: text_response, interrupted_by_user,
interrupted_during_api_call, budget_exhausted, max_iterations_reached,
all_retries_exhausted_no_response, fallback_prior_turn_content,
empty_response_exhausted, error_near_max_iterations, unknown.
Parse x-ratelimit-* headers from inference API responses (Nous Portal,
OpenRouter, OpenAI-compatible) and display them in the /usage command.
- New agent/rate_limit_tracker.py: parse 12 rate limit headers (RPM/RPH/
TPM/TPH limits, remaining, reset timers), format as progress bars (CLI)
or compact one-liner (gateway)
- Hook into streaming path in run_agent.py: stream.response.headers is
available on the OpenAI SDK Stream object before chunks are consumed
- CLI /usage: appends rate limit section with progress bars + warnings
when any bucket exceeds 80%
- Gateway /usage: appends compact rate limit summary
- 24 unit tests covering parsing, formatting, edge cases
Headers captured per response:
x-ratelimit-{limit,remaining,reset}-{requests,tokens}{,-1h}
Example CLI display:
Nous Rate Limits (captured just now):
Requests/min [░░░░░░░░░░░░░░░░░░░░] 0.1% 1/800 used (799 left, resets in 59s)
Tokens/hr [░░░░░░░░░░░░░░░░░░░░] 0.0% 49/336.0M (336.0M left, resets in 52m)
When a model returns no content, no structured reasoning, and no tool
calls (common with open models), the agent now silently retries up to
3 times before falling through to (empty).
Silent retry (no synthetic messages) keeps the conversation history
clean, preserves prompt caching, and respects the no-synthetic-user-
injection invariant. Most empty responses from open models are
transient (provider hiccups, rate limits, sampling flukes) so a
simple retry is sufficient.
This fills the last gap in the empty-response recovery chain:
1. _last_content_with_tools fallback (prior tool turn had content)
2. Thinking-only prefill continuation (#5931 — structured reasoning)
3. Empty response silent retry (NEW — truly empty, no reasoning)
4. (empty) terminal (last resort after all retries exhausted)
Inline <think> blocks are excluded — the model chose to reason, it
just produced no visible text. That differs from truly empty.
Tests:
- Updated test_truly_empty to expect 4 API calls (1 + 3 retries)
- Added test_truly_empty_response_succeeds_on_nudge
`_cleanup_task_resources` was unconditionally calling `cleanup_vm()` at
the end of every `run_conversation` (i.e. every user turn), tearing down
the docker/daytona/modal sandbox container regardless of its
`persistent_filesystem` setting. This contradicted the documented intent
of `terminal.lifetime_seconds` (idle reaper) and `container_persistent`,
and caused per-turn loss of `/workspace`, `~/.config`, agent CLI auth
state, and any other content living inside the sandbox.
The unconditional teardown was introduced in fbd3a2fd ("prevent leakage
of morph instances between tasks", 2025-11-04) to plug a Morph backend
leak, two days after `lifetime_seconds` shipped in faecbddd. It was
later refactored into `_cleanup_task_resources` in 70dd3a16 without
changing semantics. Code and docs have disagreed since.
Fix: introduce `terminal_tool.is_persistent_env(task_id)` and skip the
per-turn `cleanup_vm` when the active env is persistent. The idle reaper
(`_cleanup_inactive_envs`) still tears persistent envs down once
`terminal.lifetime_seconds` is exceeded. Non-persistent backends (Morph)
are unchanged — still torn down per turn, preserving the original
leak-prevention intent.
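A minimal sketch of the guard (helper names from this change; terminal_tool is
passed in to keep the sketch self-contained):

    def cleanup_task_resources(task_id: str, terminal_tool) -> None:
        # Persistent sandboxes are left alone here; the idle reaper
        # (_cleanup_inactive_envs) still enforces terminal.lifetime_seconds.
        if terminal_tool.is_persistent_env(task_id):
            return
        terminal_tool.cleanup_vm(task_id)   # non-persistent backends (Morph): unchanged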
Combines the approaches from PR #6309 (duan78) and PR #5963 (KUSH42):
Tiered warnings (from #5963):
- Replaces boolean _context_pressure_warned with float _context_pressure_warned_at
- Fires at 85% (orange) and re-fires at 95% (red/critical)
- Adds 'compacting context...' status message before compression
Gateway dedup (from #6309), sketched below:
- Class-level dict _context_pressure_last_warned survives across AIAgent
instances (gateway creates a new instance per message)
- 5-minute cooldown per session prevents warning spam
- Higher-tier warnings bypass the cooldown (85% → 95% always fires)
- Compression reset clears the dedup entry for the session
- Stale entries evicted (older than 2x cooldown) to prevent memory leak
Does NOT inject into messages — purely user-facing via _safe_print (CLI)
and status_callback (gateway). Zero prompt cache impact.
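A hedged sketch of the class-level dedup (cooldown and tier behaviour from this
description):

    import time

    class ContextPressureDedup:
        _last_warned: dict = {}          # session_id -> (tier, monotonic timestamp)
        COOLDOWN = 300.0                 # 5 minutes

        @classmethod
        def should_warn(cls, session_id: str, tier: float) -> bool:
            prev = cls._last_warned.get(session_id)
            now = time.monotonic()
            if prev and tier <= prev[0] and now - prev[1] < cls.COOLDOWN:
                return False             # same or lower tier inside the cooldown
            cls._last_warned[session_id] = (tier, now)
            return True                  # a higher tier always fires

        @classmethod
        def reset(cls, session_id: str) -> None:
            cls._last_warned.pop(session_id, None)   # cleared after compression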
Fixes #6309. Fixes #5963.
Three targeted improvements to the compression system:
1. Replace hardcoded truncation limits with named class constants
(_CONTENT_MAX=6000, _CONTENT_HEAD=4000, _CONTENT_TAIL=1500,
_TOOL_ARGS_MAX=1500, _TOOL_ARGS_HEAD=1200). Previous limits
   (3000/500) heavily truncated the summarizer's input: a 200-line
   edit got cut to 3000 chars before the summarizer ever saw it (see the
   sketch after this list).
2. Add '## Tools & Patterns' section to both compression prompt
templates (first-pass and iterative). Preserves working tool
invocations, preferred flags, and tool-specific discoveries
across compaction boundaries.
3. Warn users on 2nd+ compression: 'Session compressed N times —
accuracy may degrade. Consider /new to start fresh.'
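A sketch of the head/tail truncation using the named limits (values from
item 1; helper name illustrative):

    _CONTENT_MAX, _CONTENT_HEAD, _CONTENT_TAIL = 6000, 4000, 1500

    def truncate_for_summarizer(text: str) -> str:
        if len(text) <= _CONTENT_MAX:
            return text
        return text[:_CONTENT_HEAD] + "\n[...truncated...]\n" + text[-_CONTENT_TAIL:]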
Ref #499
Local inference providers (Ollama, oMLX, llama-cpp) can take 300+ seconds
for prefill on large contexts. The 180s stale stream detector was killing
these connections while the provider was still processing.
Uses the existing is_local_endpoint() (proper URL parsing with RFC-1918,
localhost, WSL detection) instead of ad-hoc substring matching. The stale
timeout is only disabled when the user hasn't explicitly set
HERMES_STREAM_STALE_TIMEOUT — explicit user config is always honored.
Fixes #5889
The _call_llm() and direct OpenAI fallback paths in flush_memories() both
hardcoded timeout=30.0, ignoring the user-configurable value at
auxiliary.flush_memories.timeout in config.yaml.
Remove the explicit timeout from the auxiliary _call_llm() call so that
_get_task_timeout('flush_memories') reads from config. For the direct
OpenAI fallback, import and use _get_task_timeout() instead of the
hardcoded value.
Add two regression tests verifying both code paths respect the config.
Fixes #6154
Two linked fixes for MiniMax Anthropic-compatible fallback:
1. Normalize httpx.URL to str before calling .rstrip() in auth/provider
detection helpers. Some client objects expose base_url as httpx.URL,
not str — crashed with AttributeError in _requires_bearer_auth() and
_is_third_party_anthropic_endpoint(). Also fixes _try_activate_fallback()
to use the already-stringified fb_base_url instead of raw httpx.URL.
2. Strip Anthropic-proprietary thinking block signatures when targeting
third-party Anthropic-compatible endpoints (MiniMax, Azure AI Foundry,
self-hosted proxies). These endpoints cannot validate Anthropic's
signatures and reject them with HTTP 400 'Invalid signature in
thinking block'. Now threads base_url through convert_messages_to_anthropic()
→ build_anthropic_kwargs() so signature management is endpoint-aware.
Based on PR #4945 by kshitijk4poor (rstrip fix).
Fixes #4944.
Based on #6079 by @tunamitom with critical fixes and comprehensive tests.
Changes from #6079:
- Fix: sanitization overwrite bug — Qwen message prep now runs AFTER codex
field sanitization, not before (was silently discarding Qwen transforms)
- Fix: missing try/except AuthError in runtime_provider.py — stale Qwen
credentials now fall through to next provider on auto-detect
- Fix: 'qwen' alias conflict — bare 'qwen' stays mapped to 'alibaba'
(DashScope); use 'qwen-portal' or 'qwen-cli' for the OAuth provider
- Fix: hardcoded ['coder-model'] replaced with live API fetch + curated
fallback list (qwen3-coder-plus, qwen3-coder)
- Fix: extract _is_qwen_portal() helper + _qwen_portal_headers() to replace
5 inline 'portal.qwen.ai' string checks and share headers between init
and credential swap
- Fix: add Qwen branch to _apply_client_headers_for_base_url for mid-session
credential swaps
- Fix: remove suspicious TypeError catch blocks around _prompt_provider_choice
- Fix: handle bare string items in content lists (were silently dropped)
- Fix: remove redundant dict() copies after deepcopy in message prep
- Revert: unrelated ai-gateway test mock removal and model_switch.py comment deletion
New tests (30 test functions):
- _qwen_cli_auth_path, _read_qwen_cli_tokens (success + 3 error paths)
- _save_qwen_cli_tokens (roundtrip, parent creation, permissions)
- _qwen_access_token_is_expiring (5 edge cases: fresh, expired, within skew,
None, non-numeric)
- _refresh_qwen_cli_tokens (success, preserve old refresh, 4 error paths,
default expires_in, disk persistence)
- resolve_qwen_runtime_credentials (fresh, auto-refresh, force-refresh,
missing token, env override)
- get_qwen_auth_status (logged in, not logged in)
- Runtime provider resolution (direct, pool entry, alias)
- _build_api_kwargs (metadata, vl_high_resolution_images, message formatting,
max_tokens suppression)
Anthropic signs thinking blocks against the full turn content. Any
upstream mutation (context compression, session truncation, orphan
stripping, message merging) invalidates the signature, causing HTTP 400
'Invalid signature in thinking block' — especially in long-lived
gateway sessions.
Strategy (following clawdbot/OpenClaw pattern):
1. Strip thinking/redacted_thinking from all assistant messages EXCEPT
the last one — preserves reasoning continuity on the current
   tool-use chain while avoiding stale signature errors on older turns
   (sketched after this list).
2. Downgrade unsigned thinking blocks to plain text — Anthropic can't
validate them, but the reasoning content is preserved.
3. Strip cache_control from thinking/redacted_thinking blocks to
prevent cache markers from interfering with signature validation.
4. Drop thinking blocks from the second message when merging
consecutive assistant messages (role alternation enforcement).
5. Error recovery: on HTTP 400 mentioning 'signature' and 'thinking',
strip all reasoning_details from the conversation and retry once.
This is the safety net for edge cases the proactive stripping
misses.
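A hedged sketch of step 1 (block shapes follow the Anthropic content-block
format; plain-string contents are left untouched):

    def strip_old_thinking(messages: list) -> list:
        assistant_idxs = [i for i, m in enumerate(messages) if m.get("role") == "assistant"]
        last_assistant = assistant_idxs[-1] if assistant_idxs else None
        for i, msg in enumerate(messages):
            if msg.get("role") != "assistant" or i == last_assistant:
                continue
            if not isinstance(msg.get("content"), list):
                continue                 # string content has no thinking blocks
            msg["content"] = [
                block for block in msg["content"]
                if block.get("type") not in ("thinking", "redacted_thinking")
            ]
        return messages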
Addresses the issue reported in PR #6086 by @mingginwan while
preserving reasoning continuity (their PR stripped ALL thinking
blocks unconditionally).
Files changed:
- agent/anthropic_adapter.py: thinking block management in
convert_messages_to_anthropic (strip old turns, downgrade unsigned,
strip cache_control, merge-time strip)
- run_agent.py: one-shot signature error recovery in retry loop
- tests/test_anthropic_adapter.py: 10 new tests covering all cases
Salvaged fixes from community PRs:
- fix(model_switch): _read_auth_store → _load_auth_store + fix auth store
key lookup (was checking top-level dict instead of store['providers']).
OAuth providers now correctly detected in /model picker.
Cherry-picked from PR #5911 by Xule Lin (linxule).
- fix(ollama): pass num_ctx to override 2048 default context window.
Ollama defaults to 2048 context regardless of model capabilities. Now
auto-detects from /api/show metadata and injects num_ctx into every
  request. Config override via model.ollama_num_ctx. Fixes #2708.
Cherry-picked from PR #5929 by kshitij (kshitijk4poor).
- fix(aux): normalize provider aliases for vision/auxiliary routing.
Adds _normalize_aux_provider() with 17 aliases (google→gemini,
claude→anthropic, glm→zai, etc). Fixes vision routing failure when
provider is set to 'google' instead of 'gemini'.
Cherry-picked from PR #5793 by e11i (Elizabeth1979).
- fix(aux): rewrite MiniMax /anthropic base URLs to /v1 for OpenAI SDK.
  MiniMax's inference_base_url ends in /anthropic (Anthropic Messages API),
  but the auxiliary client uses the OpenAI SDK, which appends /chat/completions →
  404 at /anthropic/chat/completions. A generic _to_openai_base_url() helper
  rewrites a terminal /anthropic to /v1 for the OpenAI-compatible endpoint
  (sketched below).
Inspired by PR #5786 by Lempkey.
Added debug logging to silent exception blocks across all fixes.
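A sketch of the URL rewrite mentioned in the aux fix above (helper name from
this change):

    def _to_openai_base_url(base_url: str) -> str:
        base = base_url.rstrip("/")
        if base.endswith("/anthropic"):
            # The OpenAI SDK appends /chat/completions, so a trailing /anthropic
            # must become /v1 for the OpenAI-compatible MiniMax endpoint.
            base = base[: -len("/anthropic")] + "/v1"
        return base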
Co-authored-by: Hermes Agent <hermes@nousresearch.com>
The response validation stage unconditionally marked Codex Responses API
replies as invalid when response.output was empty, triggering unnecessary
retries and fallback chains. However, _normalize_codex_response can
recover from this state by synthesizing output from response.output_text.
Now the validation stage checks for output_text before marking the
response invalid, matching the normalization logic. Also fixes
logging.warning → logger.warning for consistency with the rest of the
file.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When the model produces structured reasoning (via API fields like .reasoning,
.reasoning_content, .reasoning_details) but no visible text content, append
the assistant message as prefill and continue the loop. The model sees its own
reasoning context on the next turn and produces the text portion.
Inspired by clawdbot's 'incomplete-text' recovery pattern. Up to 2 prefill
attempts before falling through to the existing '(empty)' terminal.
Key design decisions:
- Only triggers for structured reasoning (API fields), NOT inline <think> tags
- Prefill messages are popped on success to maintain strict role alternation
- _thinking_prefill marker stripped from all API message building paths
- Works across all providers: OpenAI (continuation), Anthropic (native prefill)
Verified with E2E tests: simulated thinking-only → real OpenRouter continuation
produces correct content. Also confirmed Qwen models consistently produce
structured-reasoning-only responses under token pressure.
Memory plugins (Mem0, Honcho) used static identifiers ('hermes-user',
config peerName) meaning all gateway users shared the same memory bucket.
Changes:
- AIAgent.__init__: add user_id parameter, store as self._user_id
- run_agent.py: include user_id in _init_kwargs passed to memory providers
- gateway/run.py: pass source.user_id to AIAgent in primary + background paths
- Mem0 plugin: prefer kwargs user_id over config default
- Honcho plugin: override cfg.peer_name with gateway user_id when present
CLI sessions (user_id=None) preserve existing defaults. Only gateway
sessions with a real platform user_id get per-user memory scoping.
Reported by plev333.
Comprehensive cleanup across 80 files based on automated (ruff, pyflakes, vulture)
and manual analysis of the entire codebase.
Changes by category:
Unused imports removed (~95 across 55 files):
- Removed genuinely unused imports from all major subsystems
- agent/, hermes_cli/, tools/, gateway/, plugins/, cron/
- Includes imports in try/except blocks that were truly unused
(vs availability checks which were left alone)
Unused variables removed (~25):
- Removed dead variables: connected, inner, channels, last_exc,
source, new_server_names, verify, pconfig, default_terminal,
result, pending_handled, temperature, loop
- Dropped unused argparse subparser assignments in hermes_cli/main.py
(12 instances of add_parser() where result was never used)
Dead code removed:
- run_agent.py: Removed dead ternary (None if False else None) and
surrounding unreachable branch in identity fallback
- run_agent.py: Removed write-only attribute _last_reported_tool
- hermes_cli/providers.py: Removed dead @property decorator on
module-level function (decorator has no effect outside a class)
- gateway/run.py: Removed unused MCP config load before reconnect
- gateway/platforms/slack.py: Removed dead SessionSource construction
Undefined name bugs fixed (would cause NameError at runtime):
- batch_runner.py: Added missing logger = logging.getLogger(__name__)
- tools/environments/daytona.py: Added missing Dict and Path imports
Unnecessary global statements removed (14):
- tools/terminal_tool.py: 5 functions declared global for dicts
they only mutated via .pop()/[key]=value (no rebinding)
- tools/browser_tool.py: cleanup thread loop only reads flag
- tools/rl_training_tool.py: 4 functions only do dict mutations
- tools/mcp_oauth.py: only reads the global
- hermes_time.py: only reads cached values
Inefficient patterns fixed:
- startswith/endswith tuple form: 15 instances of
x.startswith('a') or x.startswith('b') consolidated to
x.startswith(('a', 'b'))
- len(x)==0 / len(x)>0: 13 instances replaced with pythonic
truthiness checks (not x / bool(x))
- in dict.keys(): 5 instances simplified to in dict
- Redefined unused name: removed duplicate _strip_mdv2 import in
send_message_tool.py
Other fixes:
- hermes_cli/doctor.py: Replaced undefined logger.debug() with pass
- hermes_cli/config.py: Consolidated chained .endswith() calls
Test results: 3934 passed, 17 failed (all pre-existing on main),
19 skipped. Zero regressions.
Two gaps in the codex empty-output handling:
1. _run_codex_create_stream_fallback() skipped all non-terminal events,
so when the fallback path was used and the terminal response had
empty output, there was no recovery. Now collects output_item.done
and text deltas during the fallback stream, backfills on empty output.
2. _normalize_codex_response() hard-crashed with RuntimeError when
output was empty, even when the response had output_text set. The
function already had fallback logic at line 3562 to use output_text,
but the guard at line 3446 killed it first. Now checks output_text
before raising and synthesizes a minimal output item.
Salvages the core fix from PR #5673 (egerev) onto current main.
The chatgpt.com/backend-api/codex endpoint streams valid output items
via response.output_item.done events, but the OpenAI SDK's
get_final_response() returns an empty output list. This caused every
Codex response to be rejected as invalid.
Fix: collect output_item.done events during streaming and backfill
response.output when get_final_response() returns empty. Falls back
to synthesizing from text deltas when no done events were received.
Also moves the synthesis logic from the validation loop (too late, from
#5681) into _run_codex_stream() (before the response leaves the
streaming function), and simplifies the validation to just log
diagnostics since recovery now happens upstream.
Co-authored-by: Egor <egerev@users.noreply.github.com>
Three bugs causing OpenAI Codex sessions to fail silently:
1. Credential pool vs legacy store disconnect: hermes auth and hermes
model store device_code tokens in the credential pool, but
get_codex_auth_status(), resolve_codex_runtime_credentials(), and
_model_flow_openai_codex() only read from the legacy provider state.
Fresh pool tokens were invisible to the auth status checks and model
selection flow.
2. _import_codex_cli_tokens() imported expired tokens from ~/.codex/
without checking JWT expiry. Combined with _login_openai_codex()
saying 'Login successful!' for expired credentials, users got stuck
in a loop of dead tokens being recycled.
3. _login_openai_codex() accepted expired tokens from
resolve_codex_runtime_credentials() without validating expiry before
telling the user login succeeded.
Fixes:
- get_codex_auth_status() now checks credential pool first, falls back
to legacy provider state
- _model_flow_openai_codex() uses pool-aware auth status for token
retrieval when fetching model lists
- _import_codex_cli_tokens() validates JWT exp claim, rejects expired
- _login_openai_codex() verifies resolved token isn't expiring before
accepting existing credentials
- _run_codex_stream() logs response.incomplete/failed terminal events
with status and incomplete_details for diagnostics
- Codex empty output recovery: captures streamed text during streaming
and synthesizes a response when get_final_response() returns empty
output (handles chatgpt.com backend-api edge cases)
Two fixes:
1. Replace all stale 'hermes login' references with 'hermes auth' across
auth.py, auxiliary_client.py, delegate_tool.py, config.py, run_agent.py,
and documentation. The 'hermes login' command was deprecated; 'hermes auth'
now handles OAuth credential management.
2. Fix credential removal not persisting for singleton-sourced credentials
(device_code for openai-codex/nous, hermes_pkce for anthropic).
auth_remove_command already cleared env vars for env-sourced credentials,
but singleton credentials stored in the auth store were re-seeded by
_seed_from_singletons() on the next load_pool() call. Now clears the
underlying auth store entry when removing singleton-sourced credentials.
When the Codex CLI (or VS Code extension) consumes a refresh token before
Hermes can use it, Hermes previously surfaced a generic 401 error with no
actionable guidance.
- In `refresh_codex_oauth_pure`: detect `refresh_token_reused` from the
OAuth endpoint and raise an AuthError explaining the cause and the exact
steps to recover (run `codex` to refresh, then `hermes login`).
- In `run_agent.py`: when provider is `openai-codex` and HTTP 401 is
received, show Codex-specific recovery steps instead of the generic
"check your API key" message.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When using xAI's API directly (base_url contains x.ai), send the
x-grok-conv-id header set to the Hermes session_id. This routes
consecutive requests to the same server, maximizing automatic
prompt cache hits.
Ref: https://docs.x.ai/developers/advanced-api-usage/prompt-caching
OpenCode Go model names with dots (minimax-m2.7, glm-4.5, kimi-k2.5)
were being mangled to hyphens (minimax-m2-7), causing HTTP 401 errors.
Two code paths were affected:
1. model_normalize.py: opencode-go was incorrectly in DOT_TO_HYPHEN_PROVIDERS
2. run_agent.py: _anthropic_preserve_dots() did not check for opencode-go
Fix:
- Remove opencode-go from _DOT_TO_HYPHEN_PROVIDERS (dots are correct for Go)
- Add opencode-go to _anthropic_preserve_dots() provider check
- Add opencode.ai/zen/go to base_url fallback check
- Add regression tests in tests/test_model_normalize.py
Co-authored-by: jacob3712 <jacob3712@users.noreply.github.com>