async_call_llm (and call_llm) can return non-OpenAI objects from
custom providers or adapter shims, crashing downstream consumers
with a misleading AttributeError ('str' has no attribute 'choices').
Add _validate_llm_response(), which checks that the response has the
expected .choices[0].message shape before returning. It wraps all
return paths in call_llm and async_call_llm, including fallback paths.
Fails fast with a clear RuntimeError identifying the task, response
type, and a preview of the malformed payload.
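A minimal sketch of the check (helper name and error contents per this
message; exact wording is illustrative):

```python
def _validate_llm_response(response, task: str):
    """Fail fast on non-OpenAI response objects instead of a cryptic AttributeError."""
    try:
        response.choices[0].message  # expected OpenAI ChatCompletion shape
    except (AttributeError, IndexError, TypeError):
        preview = repr(response)[:200]  # short preview of the malformed payload
        raise RuntimeError(
            f"Malformed LLM response for task '{task}': "
            f"type={type(response).__name__}, payload={preview}"
        ) from None
    return response
```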
Closes #7264
`resolve_provider_client()` already drops OpenRouter-format model slugs
(containing "/") when the resolved provider is not OpenRouter (line 1097).
However, `_get_cached_client()` returns `model or cached_default` directly
on cache hits, bypassing this check entirely.
When the main provider is openai-codex, the auto-detection chain (Step 1
of `_resolve_auto`) caches a CodexAuxiliaryClient. Subsequent auxiliary
calls for different tasks (e.g. compression with `summary_model:
google/gemini-3-flash-preview`) hit the cache and pass the OpenRouter-
format model slug straight to the Codex Responses API, which does not
understand it and returns an empty `response.output`.
This causes two user-visible failures:
- "Invalid API response shape" (empty output after 3 retries)
- "Context length exceeded, cannot compress further" (compression itself
fails through the same path)
Add `_compat_model()` helper that mirrors the "/" check from
`resolve_provider_client()` and call it on the cache-hit return path.
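A sketch of the helper, assuming it falls back to the cached default when
a slug is incompatible (the exact replacement behavior is not specified
here):

```python
def _compat_model(model, cached_default, provider_name):
    """Mirror resolve_provider_client()'s slash check on the cache-hit path."""
    resolved = model or cached_default
    if resolved and "/" in resolved and provider_name != "openrouter":
        # OpenRouter-format slug on a non-OpenRouter provider: drop it
        return cached_default
    return resolved
```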
Four fixes to auxiliary_client.py:
1. Respect explicit provider as hard constraint (#7559)
When auxiliary.{task}.provider is explicitly set (not 'auto'),
connection/payment errors no longer silently fall back to cloud
providers. Local-only users (Ollama, vLLM) will no longer get
unexpected OpenRouter billing from auxiliary tasks.
2. Eliminate model='default' sentinel (#7512)
_resolve_api_key_provider() no longer sends literal 'default' as
model name to APIs. Providers without a known aux model in
_API_KEY_PROVIDER_AUX_MODELS are skipped instead of producing
model_not_supported errors.
3. Add payment/connection fallback to async_call_llm (#7512)
async_call_llm now mirrors sync call_llm's fallback logic for
payment (402) and connection errors. Previously, async consumers
(session_search, web_tools, vision) got hard failures with no
recovery. Also fixes hardcoded 'openrouter' fallback to use the
full auto-detection chain.
4. Use accurate error reason in fallback logs (#7512)
_try_payment_fallback() now accepts a reason parameter and uses
it in log messages. Connection timeouts are no longer misleadingly
logged as 'payment error'.
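A rough sketch of fixes 1 and 4 combined (the structure and helper names
are assumptions; only the hard-constraint gate and the reason parameter
come from this message):

```python
import logging

logger = logging.getLogger(__name__)


def _fallback_or_raise(self, task, explicit_provider, error, reason):
    """Method-style sketch on the auxiliary client."""
    # Fix 1: an explicitly set provider is a hard constraint; never silently
    # fall back to cloud providers from a local-only setup.
    if explicit_provider not in (None, "auto"):
        raise error
    # Fix 4: log the actual failure reason instead of always "payment error".
    logger.warning("auxiliary %s call failed (%s); trying fallback", task, reason)
    return self._try_payment_fallback(task, reason=reason)
```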
Closes #7559
Closes #7512
The auxiliary client always calls client.chat.completions.create(),
ignoring the api_mode config flag. This breaks codex-family models
(e.g. gpt-5.3-codex) on direct OpenAI API keys, which need the
/v1/responses endpoint.
Changes:
- Expand _resolve_task_provider_model to return api_mode (5-tuple)
- Read api_mode from auxiliary.{task}.api_mode config and env vars
(AUXILIARY_{TASK}_API_MODE)
- Pass api_mode through _get_cached_client to resolve_provider_client
- Add _needs_codex_wrap/_wrap_if_needed helpers that wrap plain OpenAI
clients in CodexAuxiliaryClient when api_mode=codex_responses or
when auto-detection finds api.openai.com + codex model pattern
(a sketch follows after this list)
- Apply wrapping at all custom endpoint, named custom provider, and
API-key provider return paths
- Update test mocks for the new 5-tuple return format
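A hedged sketch of the two helpers (the detection heuristic shown is an
assumption based on the description above):

```python
def _needs_codex_wrap(api_mode, base_url, model):
    if api_mode == "codex_responses":
        return True
    # auto-detection: direct OpenAI endpoint plus a codex-family model name
    return (
        base_url is not None
        and "api.openai.com" in base_url
        and "codex" in (model or "").lower()
    )


def _wrap_if_needed(client, api_mode, base_url, model):
    if _needs_codex_wrap(api_mode, base_url, model):
        return CodexAuxiliaryClient(client)  # routes calls via /v1/responses
    return client
```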
Users can now set:
auxiliary:
  compression:
    model: gpt-5.3-codex
    base_url: https://api.openai.com/v1
    api_mode: codex_responses
Closes #6800
Refactor hardcoded color constants throughout the CLI to resolve from
the active skin engine, so custom themes fully control the visual
appearance.
cli.py:
- Replace _GOLD constant with _ACCENT (_SkinAwareAnsi class) that
lazily resolves response_border from the active skin
- Rename _GOLD_DEFAULT to _ACCENT_ANSI_DEFAULT
- Make _build_compact_banner() read banner_title/accent/dim from skin
- Make session resume notifications use _accent_hex()
- Make status line use skin colors (accent_color, separator_color,
label_color instead of cryptic _dim_c/_dim_c2/_accent_c/_label_c)
- Reset _ACCENT cache on /skin switch
agent/display.py:
- Replace hardcoded diff ANSI escapes with skin-aware functions:
_diff_dim(), _diff_file(), _diff_hunk(), _diff_minus(), _diff_plus()
(renamed from SCREAMING_CASE _ANSI_* to snake_case)
- Add reset_diff_colors() for cache invalidation on skin switch
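A sketch of the lazy skin-aware accent (the skin accessor names here,
like get_active_skin() and skin.ansi(), are placeholders):

```python
class _SkinAwareAnsi:
    """Resolve an ANSI color from the active skin on first use; cache the result."""

    def __init__(self, skin_key: str, default_ansi: str):
        self._skin_key = skin_key
        self._default = default_ansi
        self._cached = None

    def __str__(self) -> str:
        if self._cached is None:
            skin = get_active_skin()  # hypothetical accessor
            self._cached = skin.ansi(self._skin_key) if skin else self._default
        return self._cached

    def reset(self) -> None:
        self._cached = None  # invalidated on /skin switch


_ACCENT = _SkinAwareAnsi("response_border", _ACCENT_ANSI_DEFAULT)
```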
Aligns MiniMax provider with official API documentation. Fixes 6 bugs:
transport mismatch (openai_chat -> anthropic_messages), credential leak
in switch_model(), prompt caching sent to non-Anthropic endpoints,
dot-to-hyphen model name corruption, trajectory compressor URL routing,
and stale doctor health check.
Also corrects context window (204,800), thinking support (manual mode),
max output (131,072), and model catalog (M2 family only on /anthropic).
Source: https://platform.minimax.io/docs/api-reference/text-anthropic-api
Co-authored-by: kshitijk4poor <kshitijk4poor@users.noreply.github.com>
_is_oauth_token() returned True for any key not starting with 'sk-ant-api',
meaning MiniMax and Alibaba API keys were falsely treated as Anthropic
OAuth tokens. This triggered the Claude Code compatibility path:
- All tool names prefixed with mcp_ (e.g. mcp_terminal, mcp_web_search)
- System prompt injected with 'You are Claude Code' identity
- 'Hermes Agent' replaced with 'Claude Code' throughout
Fix: Make _is_oauth_token() positively identify Anthropic OAuth tokens by
their key format instead of using a broad catch-all:
- sk-ant-* (but not sk-ant-api-*) -> setup tokens, managed keys
- eyJ* -> JWTs from Anthropic OAuth flow
- Everything else -> False (MiniMax, Alibaba, etc.)
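The rules above map directly to a small predicate (a sketch; prefix
details per the list):

```python
def _is_oauth_token(key: str) -> bool:
    """Positively identify Anthropic OAuth tokens instead of a broad catch-all."""
    if key.startswith("sk-ant-api"):
        return False  # ordinary Anthropic API key
    if key.startswith("sk-ant-"):
        return True   # setup tokens, managed keys
    if key.startswith("eyJ"):
        return True   # JWT from the Anthropic OAuth flow
    return False      # MiniMax, Alibaba, and other providers' keys
```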
Reported by stefan171.
GPT-5+ models (except gpt-5-mini) are only accessible via the Responses
API on Copilot. When these models were configured as the compression
summary_model (or any auxiliary task), the plain OpenAI client sent them
to /chat/completions which returned a 400 error:
model "gpt-5.4-mini" is not accessible via the /chat/completions endpoint
resolve_provider_client() now checks _should_use_copilot_responses_api()
for the copilot provider and wraps the client in CodexAuxiliaryClient
when needed, routing calls through responses.stream() transparently.
Adds tests for both the wrapping (gpt-5.4-mini) and non-wrapping
(gpt-4.1-mini) paths.
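The wrap decision in sketch form (surrounding client construction
elided; function name is illustrative):

```python
def _maybe_wrap_copilot(provider: str, model: str, client):
    # GPT-5+ Copilot models (except gpt-5-mini) only answer on the Responses API
    if provider == "copilot" and _should_use_copilot_responses_api(model):
        return CodexAuxiliaryClient(client)  # streams via responses.stream()
    return client
```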
Follow-up fixes for the context engine plugin slot (PR #5700):
- Enhance ContextEngine ABC: add threshold_percent, protect_first_n,
protect_last_n as class attributes; complete update_model() default
with threshold recalculation; clarify on_session_end() lifecycle docs
- Add ContextCompressor.update_model() override for model/provider/
base_url/api_key updates
- Replace all direct compressor internal access in run_agent.py with
ABC interface: switch_model(), fallback restore, context probing
all use update_model() now; _context_probed guarded with getattr/
hasattr for plugin engine compatibility
- Create plugins/context_engine/ directory with discovery module
(mirrors plugins/memory/ pattern) — discover_context_engines(),
load_context_engine()
- Add context.engine config key to DEFAULT_CONFIG (default: compressor)
- Config-driven engine selection in run_agent.__init__: checks config,
then plugins/context_engine/<name>/, then general plugin system,
falls back to built-in ContextCompressor
- Wire on_session_end() in shutdown_memory_provider() at real session
boundaries (CLI exit, /reset, gateway expiry)
- PluginContext.register_context_engine() lets plugins replace the
built-in ContextCompressor with a custom ContextEngine implementation
- PluginManager stores the registered engine; only one allowed
- run_agent.py checks for a plugin engine at init before falling back
to the default ContextCompressor
- reset_session_state() now calls engine.on_session_reset() instead of
poking internal attributes directly
- ContextCompressor.on_session_reset() handles its own internals
(_context_probed, _previous_summary, etc.)
- 19 new tests covering ABC contract, defaults, plugin slot registration,
rejection of duplicates/non-engines, and compressor reset behavior
- All 34 existing compressor tests pass unchanged
Introduces agent/context_engine.py — an abstract base class that defines
the pluggable context engine interface. ContextCompressor now inherits
from ContextEngine as the default implementation.
No behavior change. All 34 existing compressor tests pass.
This is the foundation for a context engine plugin slot, enabling
third-party engines like LCM (Lossless Context Management) to replace
the built-in compressor via the plugin system.
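Taken together with the follow-up above, the ABC looks roughly like this
(attribute defaults are placeholders):

```python
from abc import ABC, abstractmethod


class ContextEngine(ABC):
    """Pluggable context-management interface; ContextCompressor is the default."""

    threshold_percent: float = 80.0  # placeholder default
    protect_first_n: int = 1
    protect_last_n: int = 2

    @abstractmethod
    def compress(self, messages: list) -> list:
        """Return a message list that fits within the context window."""

    def update_model(self, model, provider=None, base_url=None, api_key=None):
        """Default implementation recalculates thresholds for the new model."""

    def on_session_reset(self) -> None:
        """Clear per-session state (summaries, probe flags)."""

    def on_session_end(self) -> None:
        """Called at real session boundaries (CLI exit, /reset, gateway expiry)."""
```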
When two gateway messages arrived concurrently, _set_session_env wrote
HERMES_SESSION_PLATFORM/CHAT_ID/CHAT_NAME/THREAD_ID into the process-global
os.environ. Because asyncio tasks share the same process, Message B would
overwrite Message A's values mid-flight, causing background-task notifications
and tool calls to route to the wrong thread/chat.
Replace os.environ with Python's contextvars.ContextVar. Each asyncio task
(and any run_in_executor thread it spawns) gets its own copy, so concurrent
messages never interfere.
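Two of the four fields in sketch form (the real module also keeps an
os.environ fallback for CLI/cron/test compatibility):

```python
import contextvars

# One ContextVar per session field; each asyncio task (and any executor
# thread it spawns) sees its own copy, so concurrent messages cannot collide.
_CHAT_ID = contextvars.ContextVar("HERMES_SESSION_CHAT_ID", default=None)
_THREAD_ID = contextvars.ContextVar("HERMES_SESSION_THREAD_ID", default=None)


def _set_session_env(chat_id, thread_id):
    """Return reset tokens for cleanup in the caller's finally block."""
    return _CHAT_ID.set(chat_id), _THREAD_ID.set(thread_id)


def _clear_session_env(tokens):
    chat_token, thread_token = tokens
    _CHAT_ID.reset(chat_token)
    _THREAD_ID.reset(thread_token)
```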
Changes:
- New gateway/session_context.py with ContextVar definitions, set/clear/get
helpers, and os.environ fallback for CLI/cron/test backward compatibility
- gateway/run.py: _set_session_env returns reset tokens, _clear_session_env
accepts them for proper cleanup in finally blocks
- All tool consumers updated: cronjob_tools, send_message_tool, skills_tool,
terminal_tool (both notify_on_complete AND check_interval blocks), tts_tool,
agent/skill_utils, agent/prompt_builder
- Tests updated for new contextvar-based API
Fixes #7358
Co-authored-by: teknium1 <127238744+teknium1@users.noreply.github.com>
Adds xAI as a first-class provider: ProviderConfig in auth.py,
HermesOverlay in providers.py, 11 curated Grok models, URL mapping
in model_metadata.py, aliases (x-ai, x.ai), and env var tests.
Uses standard OpenAI-compatible chat completions.
Closes #7050
- Remove sys.path.insert hack (leftover from standalone dev)
- Add token lock (acquire_scoped_lock/release_scoped_lock) in
connect()/disconnect() to prevent duplicate pollers across profiles
- Fix get_connected_platforms: WEIXIN check must precede generic
token/api_key check (requires both token AND account_id)
- Add WEIXIN_HOME_CHANNEL_NAME to _EXTRA_ENV_KEYS
- Add gateway setup wizard with QR login flow
- Add platform status check for partially configured state
- Add weixin.md docs page with full adapter documentation
- Update environment-variables.md reference with all 11 env vars
- Update sidebars.ts to include weixin docs page
- Wire all gateway integration points onto current main
Salvaged from PR #6747 by Zihan Huang.
Port from anomalyco/opencode#21355: Alibaba's DashScope API returns a
unique throttling message ('Request rate increased too quickly...') that
doesn't match standard rate-limit patterns ('rate limit', 'too many
requests'). This caused Alibaba errors to fall through to the 'unknown'
category rather than being properly classified as rate_limit with
appropriate backoff/rotation.
Add 'rate increased too quickly' to _RATE_LIMIT_PATTERNS and test with
the exact error message observed from the Alibaba provider.
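A sketch of the pattern check (list contents per this note; matching is
plain substring search):

```python
_RATE_LIMIT_PATTERNS = (
    "rate limit",
    "too many requests",
    "rate increased too quickly",  # Alibaba DashScope throttling message
)


def _looks_like_rate_limit(message: str) -> bool:
    lowered = message.lower()
    return any(pattern in lowered for pattern in _RATE_LIMIT_PATTERNS)
```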
_resolve_api_key_provider() now checks is_provider_explicitly_configured
before calling _try_anthropic(). Previously, any auxiliary fallback
(e.g. when kimi-coding key was invalid) would silently discover and use
Claude Code OAuth tokens — consuming the user's Claude Max subscription
without their knowledge.
This is the auxiliary-client counterpart of the setup-wizard gate in
PR #4210.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previously, removing a claude_code credential from the anthropic pool
only printed a note — the next load_pool() re-seeded it from
~/.claude/.credentials.json. Now writes a 'suppressed_sources' flag
to auth.json that _seed_from_singletons checks before seeding.
Follows the pattern of env: source removal (clears .env var) and
device_code removal (clears auth store state).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
_seed_from_singletons('anthropic') now checks
is_provider_explicitly_configured('anthropic') before reading
~/.claude/.credentials.json. Without this, the auxiliary client
fallback chain silently discovers and uses Claude Code tokens when
the user's primary provider key is invalid — consuming their Claude
Max subscription quota without consent.
Follows the same gating pattern as PR #4210 (setup wizard gate)
but applied to the credential pool seeding path.
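A sketch of the gate on the seeding path (the credentials path comes
from this note; the seeding helper is hypothetical):

```python
from pathlib import Path


def _seed_from_singletons(provider: str):
    if provider == "anthropic" and not is_provider_explicitly_configured("anthropic"):
        # Never silently adopt Claude Code OAuth tokens into the fallback chain
        return
    creds = Path.home() / ".claude" / ".credentials.json"
    if creds.exists():
        _seed_pool_from_file(provider, creds)  # hypothetical helper
```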
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Automated dead code audit using vulture + coverage.py + ast-grep intersection,
confirmed by Opus deep verification pass. Every symbol verified to have zero
production callers (test imports excluded from reachability analysis).
Removes ~1,534 lines of dead production code across 46 files and ~1,382 lines
of stale test code. 3 entire files deleted (agent/builtin_memory_provider.py,
hermes_cli/checklist.py, tests/hermes_cli/test_setup_model_selection.py).
Co-authored-by: alt-glitch <balyan.sid@gmail.com>
prompt_builder.py: The `hidden_div` detection pattern uses `.*` which does not
match newlines in Python regex (re.DOTALL is not passed). An attacker can bypass
detection by splitting the style attribute across lines:
`<div style="color:red;\ndisplay: none">injected content</div>`
Replace `.*` with `[\s\S]*?` to match across line boundaries.
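A self-contained demonstration of the fix (the production pattern may
differ; the point is that `[\s\S]*?` matches across newlines while `.`
does not without re.DOTALL):

```python
import re

# `.` does not cross newlines without re.DOTALL; `[\s\S]` always does.
hidden_div = re.compile(r'<div[^>]*style="[\s\S]*?display:\s*none[\s\S]*?"')

payload = '<div style="color:red;\ndisplay: none">injected content</div>'
assert hidden_div.search(payload)  # multi-line style attribute is now detected
```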
credential_files.py: `_load_config_files()` catches all exceptions at DEBUG level
(line 171), making YAML parse failures invisible in production logs. Users whose
credential files silently fail to mount into sandboxes have no diagnostic clue.
Promote to WARNING to match the severity pattern used by the path validation
warnings at lines 150 and 158 in the same function.
webhook.py: `_reload_dynamic_routes()` logs JSON parse failures at WARNING (line
265) but the impact — stale/corrupted dynamic routes persisting silently — warrants
ERROR level to ensure operator visibility in alerting pipelines.
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
error_classifier.py: Message-only auth errors ("invalid api key", "unauthorized",
etc.) were classified as retryable=True (line 707), inconsistent with the HTTP 401
path (line 432) which correctly uses retryable=False + should_fallback=True. The
mismatch causes 3 wasted retries with the same broken credential before fallback,
while 401 errors immediately attempt fallback. Align the message-based path to
match: retryable=False, should_fallback=True.
web_tools.py: The _PREFIX_RE secret-detection check in web_extract_tool() runs
against the raw URL string (line 1196). URL-encoded secrets like %73k-1234...
(sk-1234...) bypass the filter because the regex expects literal ASCII. Add
urllib.parse.unquote() before the check so percent-encoded variants are also caught.
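A sketch of the decode-then-check step (the regex and call site are
assumed):

```python
from urllib.parse import unquote


def _url_contains_secret(url: str) -> bool:
    decoded = unquote(url)  # "%73k-1234..." becomes "sk-1234..."
    return bool(_PREFIX_RE.search(decoded))
```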
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
xAI /v1/models does not return context_length metadata, so Hermes
probes down to the 128k default whenever a user configures a custom
provider pointing at https://api.x.ai/v1. This forces every xAI user
to manually override model.context_length in config.yaml (2M for
Grok 4.20 / 4.1-fast / 4-fast) or lose most of the usable context
window.
Add DEFAULT_CONTEXT_LENGTHS entries for the Grok family so the
fallback lookup returns the correct value via substring matching.
Values sourced from models.dev (2026-04) and cross-checked against
the xAI /v1/models listing:
- grok-4.20-* 2,000,000 (reasoning, non-reasoning, multi-agent)
- grok-4-1-fast-* 2,000,000
- grok-4-fast-* 2,000,000
- grok-4 / grok-4-0709 256,000
- grok-code-fast-1 256,000
- grok-3* 131,072
- grok-2 / latest 131,072
- grok-2-vision* 8,192
- grok (catch-all) 131,072
Keys are ordered longest-first so that specific variants match before
the catch-all, consistent with the existing Claude/Gemma/MiniMax entries.
Add TestDefaultContextLengths.test_grok_models_context_lengths and
test_grok_substring_matching to pin the values and verify the full
lookup path. All 77 tests in test_model_metadata.py pass.
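The table above translates to entries like these (values from this
message; key spelling follows the model IDs listed):

```python
# Longest keys first so specific variants win over the "grok" catch-all.
GROK_CONTEXT_LENGTHS = {
    "grok-4.20": 2_000_000,
    "grok-4-1-fast": 2_000_000,
    "grok-4-fast": 2_000_000,
    "grok-code-fast-1": 256_000,
    "grok-2-vision": 8_192,
    "grok-4": 256_000,
    "grok-3": 131_072,
    "grok-2": 131_072,
    "grok": 131_072,
}
```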
Auth errors matched by message pattern were incorrectly marked retryable=True, causing futile retry loops. Aligns with _classify_by_status() which already sets retryable=False for 401/403. Fixes #7026. Contributed by @kuishou68.
The hardcoded User-Agent 'KimiCLI/1.3' is outdated — Kimi CLI is now at
v1.30.0. The stale version string causes intermittent 403 errors from
Kimi's coding endpoint ('only available for Coding Agents').
Update all 8 occurrences across run_agent.py, auxiliary_client.py, and
doctor.py to 'KimiCLI/1.30.0' to match the current official Kimi CLI.
Extends the /fast command to support Anthropic's Fast Mode beta in addition
to OpenAI Priority Processing. When enabled on Claude Opus 4.6, adds
speed:"fast" and the fast-mode-2026-02-01 beta header to API requests for
~2.5x faster output token throughput.
Changes:
- hermes_cli/models.py: Add _ANTHROPIC_FAST_MODE_MODELS registry,
model_supports_fast_mode() now recognizes Claude Opus 4.6,
resolve_fast_mode_overrides() returns {speed: fast} for Anthropic
vs {service_tier: priority} for OpenAI
- agent/anthropic_adapter.py: Add _FAST_MODE_BETA constant,
build_anthropic_kwargs() accepts fast_mode=True which injects
speed:fast + beta header via extra_headers (skipped for third-party
Anthropic-compatible endpoints like MiniMax)
- run_agent.py: Pass fast_mode to build_anthropic_kwargs in the
anthropic_messages path of _build_api_kwargs()
- cli.py: Update _handle_fast_command with provider-aware messaging
(shows 'Anthropic Fast Mode' vs 'Priority Processing')
- hermes_cli/commands.py: Update /fast description to mention both
providers
- tests: 13 new tests covering Anthropic model detection, override
resolution, CLI availability, routing, adapter kwargs, and
third-party endpoint safety
When the model mentions <think> as literal text in its response (e.g.
"(/think not producing <think> tags)"), the streaming display treated it
as a reasoning block opener and suppressed everything after it. The
response box would close with truncated content and no error — the API
response was complete but the display ate it.
Root cause: _stream_delta() matched <think> anywhere in the text stream
regardless of position. Real reasoning blocks always start at the
beginning of a line; mentions in prose appear mid-sentence.
Fix: track line position across streaming deltas with a
_stream_last_was_newline flag. Only enter reasoning suppression when
the tag appears at a block boundary (start of stream, after a newline,
or after only whitespace on the current line). Add a _flush_stream()
safety net that recovers buffered content if no closing tag is found
by end-of-stream.
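A much-simplified sketch of the boundary condition (buffering and flush
logic elided; the flag name comes from this message and is assumed to
start as True at stream start):

```python
def _delta_opens_reasoning(self, text: str) -> bool:
    idx = text.find("<think>")
    if idx < 0:
        return False
    # Only treat the tag as a reasoning opener at a block boundary:
    # start of stream, right after a newline, or after only whitespace.
    return self._stream_last_was_newline and text[:idx].strip() == ""
```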
Also fixes three related issues discovered during investigation:
- anthropic_adapter: _get_anthropic_max_output() now normalizes dots to
hyphens so 'claude-opus-4.6' matches the 'claude-opus-4-6' table key
(was returning 32K instead of 128K)
- run_agent: send explicit max_tokens for Claude models on Nous Portal,
same as OpenRouter — both proxy to Anthropic's API which requires it.
Without it the backend defaults to a low limit that truncates responses.
- run_agent: reset truncated_tool_call_retries after successful tool
execution so a single truncation doesn't poison the entire conversation.
The Codex retry block and valid-token short-circuit in _refresh_entry()
both return early, bypassing the auth.json sync at the end of the method.
This adds _sync_device_code_entry_to_auth_store() calls on both paths
so refreshed/synced tokens are written back to auth.json regardless of
which code path succeeds.
MiniMax's Anthropic-compatible endpoints reject requests that include
the fine-grained-tool-streaming beta header — every tool-use message
triggers a connection error (~18s timeout). Regular chat works fine.
Add _common_betas_for_base_url() that filters out the tool-streaming
beta for Bearer-auth (MiniMax) endpoints while keeping all other betas.
All four client-construction branches now use the filtered list.
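A sketch of the filter (the Bearer-auth predicate is a placeholder; the
beta header name is assumed):

```python
def _common_betas_for_base_url(base_url: str, betas: list) -> list:
    # MiniMax's Anthropic-compatible (Bearer-auth) endpoints reject this beta
    if _is_bearer_auth_endpoint(base_url):  # hypothetical predicate
        return [b for b in betas if not b.startswith("fine-grained-tool-streaming")]
    return betas
```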
Based on #6528 by @HiddenPuppy.
Original cherry-picked from PR #6688 by kshitijk4poor.
Fixes #6510, fixes #6555.
_classify_by_message had no handling for _USAGE_LIMIT_PATTERNS, so
messages like 'usage limit exceeded, try again in 5 minutes' arriving
without an HTTP status code fell through to FailoverReason.unknown
instead of rate_limit.
Apply the same billing/rate-limit disambiguation that _classify_402
already uses: USAGE_LIMIT_PATTERNS + transient signal → rate_limit,
USAGE_LIMIT_PATTERNS alone → billing.
Add 4 tests covering the no-status-code usage-limit path.
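The added branch in sketch form (pattern list and FailoverReason values
come from this message; the transient-signal helper is hypothetical):

```python
def _classify_usage_limit(message: str):
    lowered = message.lower()
    if not any(p in lowered for p in _USAGE_LIMIT_PATTERNS):
        return None
    # Same disambiguation as _classify_402: a transient signal such as
    # "try again in 5 minutes" means throttling, otherwise billing.
    if _has_transient_signal(lowered):  # hypothetical helper
        return FailoverReason.rate_limit
    return FailoverReason.billing
```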
When _generate_summary() failed (no provider, timeout, model error),
the compressor silently dropped all middle turns with just a debug
log. The agent would then see head + tail with no explanation of the
gap, causing total context amnesia (generic greetings instead of
continuing the conversation).
Now generates a static fallback marker that tells the model context
was lost and to continue from the recent tail messages. The fallback
flows through the same role-alternation logic as a real summary so
message structure stays valid.
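A sketch of the fallback (the marker text is illustrative; the real
failure mode may be a None return rather than an exception):

```python
import logging

logger = logging.getLogger(__name__)

_FALLBACK_SUMMARY = (
    "[Summary unavailable: earlier conversation turns were dropped during "
    "compression. Continue the task from the recent messages below.]"
)


def _summary_or_fallback(self, middle_turns):
    try:
        return self._generate_summary(middle_turns)
    except Exception:
        logger.warning("Summary generation failed; inserting static fallback marker")
        # Flows through the same role-alternation logic as a real summary
        return _FALLBACK_SUMMARY
```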
Step 1 of _resolve_auto() explicitly excluded 'custom' providers,
forcing custom endpoint users through the fragile fallback chain
instead of using their known-working main model credentials.
This caused silent compression failures for users on local OpenAI-
compatible endpoints — the summary generation would fail, middle
turns would be silently dropped, and the agent would lose all
conversation context.
Remove 'custom' from the exclusion list so custom endpoint users
get the same main-model-first treatment as DeepSeek, Anthropic,
Gemini, and other direct providers.
When the API returns "max_tokens too large given prompt" (input tokens
are within the context window, but input + requested output > window),
the old code incorrectly routed through the same handler as "prompt too
long" errors, calling get_next_probe_tier() and permanently halving
context_length. This made things worse: the window was fine, only the
requested output size needed trimming for that one call.
Two distinct error classes now handled separately:
Prompt too long — input itself exceeds context window.
Fix: compress history + halve context_length (existing behaviour,
unchanged).
Output cap too large — input OK, but input + max_tokens > window.
Fix: parse available_tokens from the error message, set a one-shot
_ephemeral_max_output_tokens override for the retry, and leave
context_length completely untouched.
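In sketch form (the error format and 64-token margin come from this
message; the surrounding retry helper is an assumption):

```python
import re

_AVAILABLE_TOKENS_RE = re.compile(r"available_tokens:\s*(\d+)")


def parse_available_output_tokens_from_error(message: str):
    """Return Anthropic's reported output budget, or None for other errors."""
    match = _AVAILABLE_TOKENS_RE.search(message or "")
    return int(match.group(1)) if match else None


def apply_output_cap_retry(agent, err) -> bool:
    """Set a one-shot output override and signal a retry; window untouched."""
    available = parse_available_output_tokens_from_error(str(err))
    if available is None:
        return False
    agent._ephemeral_max_output_tokens = max(1, available - 64)  # safety margin
    return True  # retry without touching context_length
```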
Changes:
- agent/model_metadata.py: add parse_available_output_tokens_from_error()
that detects Anthropic's "available_tokens: N" error format and returns
the available output budget, or None for all other error types.
- run_agent.py: call the new parser first in the is_context_length_error
block; if it fires, set _ephemeral_max_output_tokens (with a 64-token
safety margin) and break to retry without touching context_length.
_build_api_kwargs consumes the ephemeral value exactly once then clears
it so subsequent calls use self.max_tokens normally.
- agent/anthropic_adapter.py: expand build_anthropic_kwargs docstring to
clearly document the max_tokens (output cap) vs context_length (total
window) distinction, which is a persistent source of confusion due to
the OpenAI-inherited "max_tokens" name.
- cli-config.yaml.example: add inline comments explaining both keys side
by side where users are most likely to look.
- website/docs/integrations/providers.md: add a callout box at the top
of "Context Length Detection" and clarify the troubleshooting entry.
- tests/test_ctx_halving_fix.py: 24 tests across four classes covering
the parser, build_anthropic_kwargs clamping, ephemeral one-shot
consumption, and the invariant that context_length is never mutated
on output-cap errors.
The error classifier's generic-400 heuristic only extracted err_body_msg from
the nested body structure (body['error']['message']), missing the flat body
format used by OpenAI's Responses API (body['message']). This caused
descriptive 400 errors like 'Invalid input[index].name: string does not match
pattern' to appear generic when the session was large, misclassifying them as
context overflow and triggering an infinite compression loop.
Added flat-body fallback in _classify_400() consistent with the parent
classify_api_error() function's existing handling at line 297-298.
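A sketch of the flat-body fallback:

```python
def _error_message_from_body(body) -> str:
    if not isinstance(body, dict):
        return ""
    err = body.get("error")
    if isinstance(err, dict) and err.get("message"):
        return err["message"]          # nested format: body["error"]["message"]
    return body.get("message") or ""   # flat Responses API format: body["message"]
```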
When the Kimi coding endpoint is explicitly configured as a custom
endpoint, the custom-endpoint path in the auxiliary client resolver
creates a plain client without provider-specific headers. This means
sync vision calls use the generic User-Agent and get rejected by
Kimi's coding endpoint with a 403:
'Kimi For Coding is currently only available for Coding Agents such as Kimi CLI...'
The async converter already injects the Kimi User-Agent header, and the
auto-detected API-key provider path also injects it, but the explicit custom
endpoint shortcut was missing it entirely.
This patch adds the same header injection to the custom endpoint
branch, and updates all existing Kimi header sites for consistency.
Fixes <issue number to be filled in>
The credential pool seeder (_seed_from_env) hardcoded the base URL
for API-key providers without running provider-specific auto-detection.
For kimi-coding, this caused sk-kimi- prefixed keys to be seeded with
the legacy api.moonshot.ai/v1 endpoint instead of api.kimi.com/coding/v1,
resulting in HTTP 401 on the first request.
Import and call _resolve_kimi_base_url for kimi-coding so the pool
uses the correct endpoint based on the key prefix, matching the
runtime credential resolver behavior.
Also fix a comment: sk-kimi- keys are issued by kimi.com/code,
not platform.kimi.ai.
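The key-prefix routing in sketch form (URLs from this message):

```python
def _resolve_kimi_base_url(api_key: str) -> str:
    if api_key.startswith("sk-kimi-"):
        # keys issued by kimi.com/code use the coding endpoint
        return "https://api.kimi.com/coding/v1"
    return "https://api.moonshot.ai/v1"  # legacy platform keys
```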
Fixes #5561
Two bugs in the model fallback system:
1. Nous login leaves stale model in config (provider=nous, model=opus
from previous OpenRouter setup). Fixed by deferring the config.yaml
provider write until AFTER model selection completes, and passing the
selected model atomically via _update_config_for_provider's
default_model parameter. Previously, _update_config_for_provider was
called before model selection — if selection failed (free tier, no
models, exception), config stayed as nous+opus permanently.
2. Codex/stale providers in auxiliary fallback can't connect but block
the auto-detection chain. Added _is_connection_error() detection
(APIConnectionError, APITimeoutError, DNS failures, connection
refused) alongside the existing _is_payment_error() check in
call_llm(). When a provider endpoint is unreachable, the system now
falls back to the next available provider instead of crashing.
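A sketch of the detection (the openai SDK exception types are real; the
string checks are assumptions):

```python
import openai


def _is_connection_error(err: Exception) -> bool:
    if isinstance(err, (openai.APIConnectionError, openai.APITimeoutError)):
        return True
    text = str(err).lower()
    return "connection refused" in text or "name resolution" in text
```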
Parse x-ratelimit-* headers from inference API responses (Nous Portal,
OpenRouter, OpenAI-compatible) and display them in the /usage command.
- New agent/rate_limit_tracker.py: parse 12 rate limit headers (RPM/RPH/
TPM/TPH limits, remaining, reset timers), format as progress bars (CLI)
or compact one-liner (gateway)
- Hook into streaming path in run_agent.py: stream.response.headers is
available on the OpenAI SDK Stream object before chunks are consumed
- CLI /usage: appends rate limit section with progress bars + warnings
when any bucket exceeds 80%
- Gateway /usage: appends compact rate limit summary
- 24 unit tests covering parsing, formatting, edge cases
Headers captured per response:
x-ratelimit-{limit,remaining,reset}-{requests,tokens}{,-1h}
Example CLI display:
Nous Rate Limits (captured just now):
Requests/min [░░░░░░░░░░░░░░░░░░░░] 0.1% 1/800 used (799 left, resets in 59s)
Tokens/hr [░░░░░░░░░░░░░░░░░░░░] 0.0% 49/336.0M (336.0M left, resets in 52m)
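The header capture in sketch form (header names per the pattern above;
value parsing simplified):

```python
_FIELDS = ("limit", "remaining", "reset")
_BUCKETS = ("requests", "tokens")
_WINDOWS = ("", "-1h")  # per-minute default plus hourly variants


def parse_rate_limit_headers(headers) -> dict:
    captured = {}
    for field in _FIELDS:
        for bucket in _BUCKETS:
            for window in _WINDOWS:
                name = f"x-ratelimit-{field}-{bucket}{window}"
                if name in headers:
                    captured[name] = headers[name]
    return captured
```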
Wrap is_dir() in _is_valid_subdir() and is_file() in
_load_hints_for_directory() with OSError handlers so that
inaccessible directories (e.g. /root from a non-root Daytona
host user) are silently skipped instead of crashing the agent.
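One of the two wrappers in sketch form (signature assumed):

```python
from pathlib import Path


def _is_valid_subdir(path: Path) -> bool:
    try:
        return path.is_dir()
    except OSError:
        # e.g. /root unreadable for a non-root Daytona host user
        return False
```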
The existing PermissionError PRs for prompt_builder.py (#6247,
#6321, #6355) do not cover subdirectory_hints.py, which was
identified as a separate crash path in the #6214 comments.
Ref: #6214
The 24-hour default cooldown for 402-exhausted credentials was far too
aggressive — if a user tops up credits or the 402 was caused by an
oversized max_tokens request rather than true billing exhaustion, they
shouldn't have to wait a full day. Reduce to 1 hour (matching the
existing 429 TTL).
Inspired by PR #6493 (michalkomar).
Two issues resolved:
1. Add opencode.ai to _URL_TO_PROVIDER mapping so base_url routes through
models.dev lookup (which has mimo-v2-pro at 1M context) instead of
falling back to probing /models (404) and defaulting to 128K.
2. Fix _format_context_length to round cleanly: 1048576 → '1M' instead
of '1.048576M'. Applies same rounding logic to K values.
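A sketch of the rounding behavior (the tolerance threshold is an
assumption; the goal is rendering power-of-two windows cleanly):

```python
def _format_context_length(tokens: int) -> str:
    for unit, size in (("M", 1_000_000), ("K", 1_000)):
        if tokens >= size:
            value = tokens / size
            nearest = round(value)
            # 1048576 is within ~5% of 1M, so render it as "1M"
            if nearest and abs(value - nearest) / nearest < 0.05:
                return f"{nearest}{unit}"
            return f"{value:g}{unit}"
    return str(tokens)
```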