Wire MiniMax-M2.7 and MiniMax-M2.7-highspeed into the model catalog,
CLI model picker, and agent auxiliary/metadata subsystems.
Changes:
- hermes_cli/models.py:
- Add 'minimax-oauth' to _PROVIDER_MODELS with MiniMax-M2.7 and
MiniMax-M2.7-highspeed
- Add ProviderEntry('minimax-oauth', 'MiniMax (OAuth)', ...) to
CANONICAL_PROVIDERS near existing minimax entries
- Add aliases: minimax-portal, minimax-global, minimax_oauth in
_PROVIDER_ALIASES
- hermes_cli/main.py:
- Add 'minimax-oauth' to provider_labels dict
- Insert 'minimax-oauth' into providers list in
select_provider_and_model() near the other minimax entries
- Add 'minimax-oauth' to --provider argparse choices
- Add _model_flow_minimax_oauth() function: ensures login via
_login_minimax_oauth(), resolves runtime credentials, prompts for
model selection, saves model choice and config
- Add dispatch elif branch for selected_provider == 'minimax-oauth'
- agent/auxiliary_client.py:
- Add 'minimax-oauth': 'MiniMax-M2.7-highspeed' to
_API_KEY_PROVIDER_AUX_MODELS
- Add 'minimax-oauth' to _ANTHROPIC_COMPAT_PROVIDERS set
- agent/model_metadata.py:
- Add 'minimax-oauth' to _PROVIDER_PREFIXES frozenset
- MiniMax-M2.7 context length (200_000) already covered by the
existing 'minimax' substring match in DEFAULT_CONTEXT_LENGTHS
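A rough sketch of the catalog shape these additions imply (the ProviderEntry stand-in and exact field names here are illustrative, not the repo's real definitions):
```python
from typing import NamedTuple

class ProviderEntry(NamedTuple):   # stand-in; the real class takes more fields
    key: str
    label: str

# Hypothetical shapes of the three structures named above.
_PROVIDER_MODELS = {
    "minimax-oauth": ["MiniMax-M2.7", "MiniMax-M2.7-highspeed"],
}
CANONICAL_PROVIDERS = [ProviderEntry("minimax-oauth", "MiniMax (OAuth)")]
_PROVIDER_ALIASES = {
    "minimax-portal": "minimax-oauth",
    "minimax-global": "minimax-oauth",
    "minimax_oauth": "minimax-oauth",
}

def canonical_provider(name: str) -> str:
    """Collapse an alias to its canonical provider key."""
    return _PROVIDER_ALIASES.get(name, name)

assert canonical_provider("minimax-portal") == "minimax-oauth"
```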
* docs(anthropic): correct OAuth scope to Max plan + extra usage credits only
The previous docs pass (#17399) overstated what Anthropic OAuth works
with. In practice Hermes can only route against a Claude Max plan that
has purchased extra usage credits — the base Max allowance is not
consumed, and Claude Pro is not supported at all. Without Max + extra
credits, users must fall back to an ANTHROPIC_API_KEY (pay-per-token).
Updates the four pages touched in #17399:
- integrations/providers.md
- user-guide/features/credential-pools.md
- reference/environment-variables.md
- getting-started/quickstart.md
* fix(aux): skip kimi-coding in vision auto-detect (closes #17076)
Kimi Coding Plan's /coding endpoint (Anthropic Messages wire) has no
image_in capability — Kimi's own docs confirm and suggest switching to
a vision-capable model. Vision lives on the separate Kimi Platform
(api.moonshot.ai, OpenAI-wire, pay-as-you-go). When the user has
kimi-coding as main provider and auxiliary.vision.provider=auto,
resolve_vision_provider_client was handing back an AnthropicAuxiliaryClient
wrapped around /coding which 404'd on every vision request.
Add a _PROVIDERS_WITHOUT_VISION frozenset ({kimi-coding, kimi-coding-cn})
and gate the main-provider vision branch on membership. On a skip the
auto-detect falls through to OpenRouter → Nous like any other
main-provider-unavailable case.
Explicit per-task overrides (auxiliary.vision.provider=kimi-coding) are
unaffected — the skip only applies when the caller is in auto mode.
Tests: 4 new targeted tests in TestVisionAutoSkipsKimiCoding covering
the skip path, CN variant, explicit-override passthrough, and a guard
against accidental skip-list widening.
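A minimal sketch of the gate, assuming a simplified resolution function (the real resolve_vision_provider_client carries much more context):
```python
_PROVIDERS_WITHOUT_VISION = frozenset({"kimi-coding", "kimi-coding-cn"})

def pick_vision_provider(main_provider: str, configured: str = "auto") -> str:
    if configured != "auto":
        # Explicit per-task overrides bypass the skip list entirely.
        return configured
    if main_provider in _PROVIDERS_WITHOUT_VISION:
        # /coding has no image_in capability, so treat the main provider as
        # unavailable and fall through to the aggregator chain.
        return "openrouter"   # then Nous, as in any other unavailable case
    return main_provider
```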
* perf(startup): lazy-import OpenAI, Anthropic, Firecrawl, account_usage
Four heavy SDK/module imports are now deferred off the hot startup path.
Net savings on cold module imports:
cli 1200 → 958 ms (-242)
run_agent 1220 → 901 ms (-319)
tools.web_tools 711 → 423 ms (-288)
agent.anthropic_adapter 230 → 15 ms (-215)
agent.auxiliary_client 253 → 68 ms (-185)
Four independent changes in one PR since they all use the same pattern
and share the same risk profile (heavy SDK import → lazy proxy or
function-local import):
1. tools/web_tools.py:
'from firecrawl import Firecrawl' moved into _get_firecrawl_client(),
which is only called when backend='firecrawl'. Users on Exa/Tavily/
Parallel pay zero firecrawl cost.
2. cli.py + gateway/run.py:
'from agent.account_usage import ...' moved into the /limits handlers.
account_usage transitively pulls the OpenAI SDK chain; only needed
when the user runs /limits.
3. agent/anthropic_adapter.py:
'try: import anthropic as _anthropic_sdk' replaced with a cached
'_get_anthropic_sdk()' accessor. The three usage sites
(build_anthropic_client, build_anthropic_bedrock_client,
read_claude_code_credentials_from_keychain) now resolve via the
accessor. All pre-existing test patches of
'agent.anthropic_adapter._anthropic_sdk' keep working because the
accessor respects any value already in module globals.
4. agent/auxiliary_client.py AND run_agent.py:
'from openai import OpenAI' replaced with an '_OpenAIProxy()' module-
level object that looks like the OpenAI class but imports the SDK on
first call/isinstance check. This preserves:
- 15+ in-module OpenAI(...) construction sites in auxiliary_client
and the single site in run_agent's _create_openai_client (Python's
function-scope name lookup finds the proxy, forwards the call);
- 'patch("agent.auxiliary_client.OpenAI", ...)' and
'patch("run_agent.OpenAI", ...)' test patterns used by 28+ test
files (patch replaces the module attribute as usual).
Tried two alternatives first:
- 'from openai._client import OpenAI' — doesn't skip openai/__init__.py
(the audit's hypothesis here was wrong).
- Module-level __getattr__ — works for external access but Python
function-scope name resolution skips __getattr__, so in-module
OpenAI(...) calls NameError.
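A minimal sketch of the proxy pattern from item 4 above (the real _OpenAIProxy has more surface; this only shows why in-module OpenAI(...) calls and test patches keep working):
```python
class _OpenAIProxy:
    """Stands in for the OpenAI class until the SDK is actually needed."""

    _cls = None

    def _load(self):
        if type(self)._cls is None:
            from openai import OpenAI   # deferred: paid only on first use
            type(self)._cls = OpenAI
        return type(self)._cls

    def __call__(self, *args, **kwargs):
        # In-module `OpenAI(...)` sites resolve this module attribute by normal
        # name lookup and get a real client back.
        return self._load()(*args, **kwargs)

    def __instancecheck__(self, obj):
        # isinstance(x, OpenAI) also forces the import, then defers to the SDK.
        return isinstance(obj, self._load())

OpenAI = _OpenAIProxy()   # patch("module.OpenAI", ...) still swaps this attribute
```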
Note: 'openai' still loads on 'import cli' because
cli.py -> neuter_async_httpx_del() -> openai._base_client, and
run_agent.py -> code_execution_tool.py (module-level
build_execute_code_schema) -> _load_config() -> 'from cli import
CLI_CONFIG'. Deferring those is a separate, larger change — out of scope
for this PR. The savings above all come from avoiding the openai/*,
anthropic/*, and firecrawl/* top-level type-tree imports on paths that
don't need them.
Verified:
- 302/302 tests in tests/agent/{test_anthropic_adapter,
test_bedrock_1m_context, test_minimax_provider, test_anthropic_keychain}
pass. Two pre-existing failures on main unchanged.
- 106/106 tests/agent/test_auxiliary_client.py pass (1 pre-existing fail).
- 97/97 tests/run_agent/test_create_openai_client_kwargs_isolation.py,
test_plugin_context_engine_init.py, test_invalid_context_length_warning.py,
test_api_max_retries_config.py,
tests/hermes_cli/test_gemini_provider.py, test_ollama_cloud_provider.py
pass (1 pre-existing fail).
- Live hermes chat smoke: 2 turns + /model switch + tool calls, zero
errors in the 57-line agent.log window.
- Module-level import of run_agent + auxiliary_client + anthropic_adapter
no longer pulls 'anthropic' or 'firecrawl' at all.
* fix(gateway): restore top-level account_usage import for test-patch surface
CI caught two failures in tests/gateway/test_usage_command.py that I
missed locally:
AttributeError: 'module' object at gateway.run has no attribute 'fetch_account_usage'
The test uses monkeypatch.setattr('gateway.run.fetch_account_usage', ...)
to inject a fake account-fetch call. Moving the import inside the
handler deleted that module-level attribute, breaking the patch surface.
Restoring the top-level import in gateway/run.py gives up the ~230 ms
gateway-boot savings from that one lazy import, but:
1. the gateway is a long-running daemon — boot cost is paid once per
install, not per turn;
2. the other four lazy-imports (firecrawl, openai, anthropic, cli's
account_usage) remain in place and still account for the bulk of
the savings reported in the PR body;
3. preserving the patch surface keeps the established
'gateway.run.fetch_account_usage' monkeypatch pattern working
without touching tests.
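For context, the patch surface the test relies on looks roughly like this (sketch; the fake payload is illustrative and the test only runs inside the repo):
```python
def test_limits_uses_patched_fetch(monkeypatch):
    import gateway.run as gw

    # monkeypatch.setattr with a dotted string requires the attribute to
    # already exist on the module; only a top-level import guarantees that.
    monkeypatch.setattr("gateway.run.fetch_account_usage",
                        lambda *a, **k: {"requests_used": 0})

    assert gw.fetch_account_usage() == {"requests_used": 0}
```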
Verified: tests/gateway/test_usage_command.py — 8 passed, 0 failed.
Full targeted sweep (2336 tests across agent/gateway/hermes_cli/run_agent):
2332 passed, 4 failed — all 4 pre-existing on main.
---------
Co-authored-by: teknium1 <teknium@users.noreply.github.com>
Auxiliary tasks (title_generation, vision, compression, web_extract,
session_search) now pick the correct wire protocol based on the
endpoint, not just on which resolve_provider_client branch built the
client. Fixes 404s on Kimi Coding Plan and any other named provider
whose endpoint speaks Anthropic Messages.
Root cause: the 'api_key' branch of resolve_provider_client (and the
Step 2 fallback chain inside _resolve_auto) always built a plain
OpenAI client regardless of what the endpoint actually spoke. For
provider=kimi-coding + model=kimi-for-coding, that meant:
POST https://api.kimi.com/coding/v1/chat/completions
{ "model": "kimi-for-coding", ... }
→ 404 resource_not_found_error
The /coding route only accepts the Anthropic Messages shape (the main
agent already uses api_mode=anthropic_messages for it). Earlier fixes
(#16819, #22ddac4b1) patched the anonymous-custom, named-custom, and
external-process branches — but the named api_key branch (kimi-coding,
minimax, zai, future /anthropic providers) was the fourth sibling and
never got the same treatment.
Fix: one module-level helper _maybe_wrap_anthropic() that rewraps a
plain OpenAI client in AnthropicAuxiliaryClient when:
- api_mode is explicitly 'anthropic_messages', OR
- the URL ends in '/anthropic', OR
- the host is api.kimi.com + path contains '/coding', OR
- the host is api.anthropic.com.
Wired into _wrap_if_needed (covers all resolve_provider_client
branches that already go through it) and into the Step 2 api_key
fallback chain inside _resolve_auto. Explicit api_mode still wins:
passing api_mode='chat_completions' forces OpenAI wire, and already-
wrapped specialized adapters (Codex, Gemini native, CopilotACP) pass
through unchanged.
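The detection half of that helper reduces to something like this (sketch; the actual rewrapping of the client object is omitted):
```python
from urllib.parse import urlparse

def _speaks_anthropic_messages(base_url: str, api_mode: str | None) -> bool:
    if api_mode == "anthropic_messages":
        return True
    if api_mode == "chat_completions":
        return False                      # explicit api_mode always wins
    parsed = urlparse(base_url)
    host = (parsed.hostname or "").lower()
    path = parsed.path or ""
    if path.rstrip("/").endswith("/anthropic"):
        return True
    if host == "api.kimi.com" and "/coding" in path:
        return True
    return host == "api.anthropic.com"
```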
E2E verified:
- resolve_provider_client('kimi-coding', 'kimi-for-coding')
→ AnthropicAuxiliaryClient (was plain OpenAI, which 404'd)
- _resolve_auto Step 1 for kimi-coding runtime → AnthropicAuxiliaryClient
- resolve_provider_client('openrouter', ...) → plain OpenAI (no regression)
- api_mode='chat_completions' override → plain OpenAI (explicit wins)
Tests:
- tests/agent/test_auxiliary_transport_autodetect.py (new): 21 tests
covering URL detection, wrap decisions, and integration.
- 204/205 existing auxiliary tests pass (1 pre-existing failure on
main, unrelated to this change).
Co-authored-by: teknium1 <teknium@users.noreply.github.com>
Auxiliary callers that configure reasoning via
auxiliary.<task>.extra_body.reasoning were having that config silently
dropped by the Codex Responses adapter — it only forwarded
messages/model/tools through to responses.stream(), never translating
chat.completions-shaped reasoning hints into the Responses API's
top-level reasoning + include fields.
Mirror the main-agent translation from agent/transports/codex.py:
- extra_body.reasoning.effort → resp_kwargs.reasoning.{effort, summary:"auto"}
- 'minimal' → 'low' clamp (Codex backend rejects 'minimal')
- Always include ['reasoning.encrypted_content'] when reasoning is enabled
- {'enabled': False} → omit reasoning and include entirely
- Non-dict reasoning values are ignored defensively
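The translation amounts to roughly the following (simplified sketch; the real adapter merges this into the kwargs it hands to responses.stream()):
```python
def translate_reasoning(extra_body: dict | None) -> dict:
    resp_kwargs: dict = {}
    reasoning = (extra_body or {}).get("reasoning")
    if not isinstance(reasoning, dict):      # wrong-shape values are ignored
        return resp_kwargs
    if reasoning.get("enabled") is False:    # disabled → omit reasoning and include
        return resp_kwargs
    effort = reasoning.get("effort")
    if effort == "minimal":                  # Codex backend rejects 'minimal'
        effort = "low"
    if effort is not None:
        resp_kwargs["reasoning"] = {"effort": effort, "summary": "auto"}
        resp_kwargs["include"] = ["reasoning.encrypted_content"]
    return resp_kwargs
```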
Reported by @OP (Apr 26 feedback bundle).
## Changes
- agent/auxiliary_client.py: _CodexCompletionsAdapter.create() now reads
and translates extra_body.reasoning before calling responses.stream()
- tests/agent/test_auxiliary_client.py: 9 new tests covering all effort
levels, the minimal→low clamp, the disabled path, the no-op paths,
and defensive handling of wrong-shape inputs
Co-authored-by: teknium1 <teknium@users.noreply.github.com>
* fix(anthropic): remove Claude Code fingerprinting from OAuth Messages API path
OAuth requests now identify as Hermes on the wire. Removed:
- "You are Claude Code, Anthropic's official CLI for Claude." system
prompt prepend
- Hermes Agent → Claude Code / Nous Research → Anthropic
system-prompt substitutions
- mcp_ tool-name prefix on outgoing tool schemas + message history
- Matching mcp_ strip on inbound tool_use blocks (strip_tool_prefix path
removed from AnthropicTransport.normalize_response, + all 5 call
sites in run_agent.py and auxiliary_client.py)
- user-agent: claude-cli/<v> (external, cli) and x-app: cli headers on
the Messages API client
Added:
- OAuth path strips context-1m-2025-08-07 — Anthropic rejects OAuth
requests carrying it with HTTP 400 'This authentication style is
incompatible with the long context beta header.'
Kept (auth plumbing, not identity spoofing):
- _is_oauth_token classifier and is_oauth flag threading
- Bearer vs x-api-key auth routing
- _OAUTH_ONLY_BETAS (claude-code-20250219, oauth-2025-04-20) — backend
requires these on the OAuth-gated Messages endpoint
- _OAUTH_CLIENT_ID (Claude Code's) — Anthropic doesn't issue OAuth
creds to third parties; this is the only way the login flow works
- claude-cli/<v> User-Agent on the OAuth token exchange + refresh
endpoints at platform.claude.com/v1/oauth/token — bare requests get
Cloudflare 1010 blocked
Verified live against api.anthropic.com with a fresh sk-ant-oat01-*
token:
- claude-haiku-4-5 simple message: HTTP 200, 'OK' response
- claude-haiku-4-5 tool call: HTTP 200, stop_reason=tool_use, tool
named 'terminal' (no mcp_ prefix) round-tripped correctly
- Outgoing wire: no user-agent, no x-app, real Hermes identity in
system prompt, real tool name in schema
Closes/supersedes #16820 (mcp_ PascalCase normalization patch — no longer
needed since the mcp_ round-trip is gone).
* fix(anthropic): resolve_anthropic_token() reads credential pool first
Close the gap where ~/.hermes/auth.json → credential_pool.anthropic
(where hermes login + dashboard PKCE flow write OAuth tokens) was not
in resolve_anthropic_token()'s source list.
Before: users who authed via hermes login got the token written into
the pool, but legacy fallback code paths (auxiliary_client, models
catalog fetch, explicit-runtime path) that call resolve_anthropic_token()
saw None and raised 'No Anthropic credentials found' — even though the
token was sitting in auth.json.
New priority 1: pool.select() with env-sourced entries skipped. Skipping
env:* entries preserves the existing env-var priority logic further
down the chain (static env OAuth → refreshable Claude Code upgrade via
_prefer_refreshable_claude_code_token).
Surfaced while writing the hermes-agent-dev skill playbook for
'finding a live OAuth token for an E2E test'.
---------
Co-authored-by: teknium1 <teknium@users.noreply.github.com>
Registers tencent-tokenhub (https://tokenhub.tencentmaas.com/v1) as a
new API-key provider with model tencent/hy3-preview (256K context).
- PROVIDER_REGISTRY entry + TOKENHUB_API_KEY / TOKENHUB_BASE_URL env vars
- Aliases: tencent, tokenhub, tencent-cloud, tencentmaas
- openai_chat transport with is_tokenhub branch for top-level
reasoning_effort (Hy3 is a reasoning model)
- tencent/hy3-preview:free added to OpenRouter curated list
- 60+ tests (provider registry, aliases, runtime resolution,
credentials, model catalog, URL mapping, context length)
- Docs: integrations/providers.md, environment-variables.md,
model-catalog.json
Author: simonweng <simonweng@tencent.com>
Salvaged from PR #16860 onto current main (resolved conflicts with
#16935 Azure Anthropic env-var hint tests and the --provider choices=
list removal in chat_parser).
Follow-up to PR #16819 applying the same treatment to the two sibling
fallback sites in resolve_provider_client() that carry the identical bug
class as the anonymous-custom branch:
- Named custom provider (providers: / custom_providers: config entries):
apply _to_openai_base_url() on the OpenAI-wire path (chat_completions /
codex_responses), leave custom_base untouched on the anthropic_messages
path where the /anthropic surface is intentional. Prefer
main_runtime.get('model') over _read_main_model() so the entry model
still wins first. The ImportError fallback for anthropic_messages now
redoes query-param extraction against the rewritten URL so the final
OpenAI client hits /v1.
- external_process branch (copilot-acp): same main_runtime.get('model')
fallback before _read_main_model() so auxiliary tasks on this provider
track live /model switches instead of stale config.yaml.
Keeps the fix consistent across all three custom-endpoint fallback sites
in resolve_provider_client().
- config.py: remove dead ENV_VARS_BY_VERSION[17] entry (current _config_version
is 22, so all users are past version 17 and would never be prompted for
GMI_API_KEY on upgrade — consistent with how arcee was added)
- auxiliary_client.py: use google/gemini-3.1-flash-lite-preview as GMI aux
model instead of anthropic/claude-opus-4.6 (matches cheap fast-model pattern
used by all other providers: zai→glm-4.5-flash, kimi→kimi-k2-turbo-preview,
stepfun→step-3.5-flash, kilocode→google/gemini-3-flash-preview)
- test_gmi_provider.py: fix malformed write_text() call in doctor test
(was: write_text("GMI_API_KEY=*** encoding="utf-8") → missing closing quote,
wrote literal string 'GMI_API_KEY=*** encoding=' to .env file)
- test_gmi_provider.py + test_auxiliary_client.py: update aux model assertions
to match new cheaper default
- docs/integrations/providers.md: add 'gmi' to inline 'Supported providers'
fallback list (was only in the table, not the inline list at line ~1181)
- docs/reference/cli-commands.md: add 'gmi' to --provider choices list
Thread a vision-request flag through auxiliary provider resolution so Copilot clients can include Copilot-Vision-Request only for vision tasks. This preserves normal text requests while ensuring Copilot vision payloads reach the vision-capable route.
Add regression coverage for Copilot vision routing and keep cached text and vision clients separate so a text client without the header is not reused for vision.
Co-authored-by: dhabibi <9087935+dhabibi@users.noreply.github.com>
Azure OpenAI requires an `api-version` query parameter on every request.
When users include it in the base_url (e.g. `?api-version=2025-04-01-preview`),
the OpenAI SDK silently drops it during URL construction, causing 404 errors.
Extract query params from base_url and pass them via `default_query` so the
SDK appends them to every request. This is a generic solution that works for
any custom endpoint requiring query parameters, not just Azure.
No-op for URLs without query params — fully backward compatible.
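The mechanism, sketched (default_query is a real openai-python client option; the Azure URL here is a placeholder):
```python
from urllib.parse import parse_qsl, urlsplit, urlunsplit
from openai import OpenAI

def split_query_params(base_url: str) -> tuple[str, dict]:
    """Return the base_url without its query string, plus the params."""
    parts = urlsplit(base_url)
    params = dict(parse_qsl(parts.query))
    clean = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return clean, params

base, params = split_query_params(
    "https://example.openai.azure.com/openai/deployments/gpt-4o"
    "?api-version=2025-04-01-preview"
)
# default_query makes the SDK append api-version to every request instead of
# silently dropping it during URL construction.
client = OpenAI(api_key="...", base_url=base, default_query=params or None)
```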
The AIAgent.flush_memories pre-compression save, the gateway
_flush_memories_for_session, and everything feeding them are
obsolete now that the background memory/skill review handles
persistent memory extraction.
Problems with flush_memories:
- Pre-dates the background review loop. It was the only memory-save
path when introduced; the background review now fires every 10 user
turns on CLI and gateway alike, which is far more frequent than
compression or session reset ever triggered flush.
- Blocking and synchronous. Pre-compression flush ran on the live agent
before compression, blocking the user-visible response.
- Cache-breaking. Flush built a temporary conversation prefix
(system prompt + memory-only tool list) that diverged from the live
conversation's cached prefix, invalidating prompt caching. The
gateway variant spawned a fresh AIAgent with its own clean prompt
for each finalized session — still cache-breaking, just in a
different process.
- Redundant. Background review runs in the live conversation's
session context, gets the same content, writes to the same memory
store, and doesn't break the cache. Everything flush_memories
claimed to preserve is already covered.
What this removes:
- AIAgent.flush_memories() method (~248 LOC in run_agent.py)
- Pre-compression flush call in _compress_context
- flush_memories call sites in cli.py (/new + exit)
- GatewayRunner._flush_memories_for_session + _async_flush_memories
(and the 3 call sites: session expiry watcher, /new, /resume)
- 'flush_memories' entry from DEFAULT_CONFIG auxiliary tasks,
hermes tools UI task list, auxiliary_client docstrings
- _memory_flush_min_turns config + init
- #15631's headroom-deduction math in
_check_compression_model_feasibility (headroom was only needed
because flush dragged the full main-agent system prompt along;
the compression summariser sends a single user-role prompt so
new_threshold = aux_context is safe again)
- The dedicated test files and assertions that exercised
flush-specific paths
What this renames (with read-time backcompat on sessions.json):
- SessionEntry.memory_flushed -> SessionEntry.expiry_finalized.
The session-expiry watcher still uses the flag to avoid re-running
finalize/eviction on the same expired session; the new name
reflects what it now actually gates. from_dict() reads
'expiry_finalized' first, falls back to the legacy 'memory_flushed'
key so existing sessions.json files upgrade seamlessly.
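The read-time backcompat reduces to a two-key lookup (sketch; the real SessionEntry has many more fields):
```python
from dataclasses import dataclass

@dataclass
class SessionEntry:                 # other fields omitted
    expiry_finalized: bool = False

    @classmethod
    def from_dict(cls, data: dict) -> "SessionEntry":
        # Prefer the new key, fall back to the legacy one so existing
        # sessions.json entries keep their "already finalized" state.
        flag = data.get("expiry_finalized", data.get("memory_flushed", False))
        return cls(expiry_finalized=bool(flag))
```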
Supersedes #15631 and #15638.
Tested: 383 targeted tests pass across run_agent/, agent/, cli/,
and gateway/ session-boundary suites. No behavior regressions —
background memory review continues to handle persistent memory
extraction on both CLI and gateway.
Generalize the temperature-specific 400 retry that shipped in PR #15621 so
the same reactive strategy covers any provider that rejects an arbitrary
request parameter — not just temperature.
- agent/auxiliary_client.py:
* New _is_unsupported_parameter_error(exc, param): matches the same six
phrasings the old temperature detector did plus 'unrecognized parameter'
and 'invalid parameter', against any named param.
* _is_unsupported_temperature_error is now a thin back-compat wrapper so
existing imports and tests keep working.
* The max_tokens → max_completion_tokens retry branch in call_llm and
async_call_llm now (a) gates on 'max_tokens is not None' so we do not
pop a key that was never set and silently substitute a None value on
the retry, and (b) also matches the generic helper in addition to the
legacy 'max_tokens' / 'unsupported_parameter' substring checks — picking
up phrasings like 'Unknown parameter: max_tokens' that previously slipped
through.
- tests/agent/test_unsupported_parameter_retry.py: 18 new tests covering
the generic detector across params, the back-compat wrapper, and the two
hardenings to the max_tokens retry branch (None gate + generic phrasing).
Credit: retry-generalization pattern from @nicholasrae's PR #15416. That PR
also proposed the reactive temperature retry which landed independently via
PR #15621 + #15623 (co-authored with @BlueBirdBack). This commit salvages
the remaining hardening ideas onto current main.
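Roughly, the detector and the hardened retry gate look like this (sketch; the phrase list is illustrative, not the exact six phrasings in the repo):
```python
_UNSUPPORTED_PHRASES = (
    "unsupported parameter", "unrecognized parameter", "invalid parameter",
    "unknown parameter", "is not supported", "does not support",
)

def _is_unsupported_parameter_error(exc: Exception, param: str) -> bool:
    """True when a 400 clearly blames the named request parameter."""
    msg = str(exc).lower()
    return param.lower() in msg and any(p in msg for p in _UNSUPPORTED_PHRASES)

def _is_unsupported_temperature_error(exc: Exception) -> bool:
    # Thin back-compat wrapper so existing imports and test patches keep working.
    return _is_unsupported_parameter_error(exc, "temperature")

def retry_kwargs_for_max_tokens(kwargs: dict) -> dict | None:
    """Swap max_tokens → max_completion_tokens, but only if it was actually set."""
    max_tokens = kwargs.get("max_tokens")
    if max_tokens is None:           # never pop a key that was never there
        return None
    retry = dict(kwargs)
    retry.pop("max_tokens")
    retry["max_completion_tokens"] = max_tokens
    return retry
```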
Universal reactive fix for 'HTTP 400: Unsupported parameter: temperature'
across all providers/models — not just Codex Responses.
The same backend can accept temperature for some models and reject it for
others (e.g. gpt-5.4 accepts but gpt-5.5 rejects on the same OpenAI
endpoint; similar patterns on Copilot, OpenRouter reasoning routes, and
Anthropic Opus 4.7+ via OAI-compat). An allow/deny-list by model name does
not scale.
call_llm / async_call_llm now detect the concrete 'unsupported parameter:
temperature' 400 and transparently retry once without temperature. Kimi's
server-managed omission and Opus 4.7+'s proactive strip stay in place —
this is the safety net for everything else.
Changes:
- agent/auxiliary_client.py: add _is_unsupported_temperature_error helper;
wire into both sync and async call_llm paths before the existing
max_tokens/payment/auth retry ladder
- tests/agent/test_unsupported_temperature_retry.py: 19 tests covering
detector phrasings, sync + async retry, no-retry-without-temperature,
and non-temperature 400s not triggering the retry
Builds on PR #15620 (codex_responses fallback) which stripped temperature
up front for that one api_mode. This PR closes the gap for every other
provider/model combo via reactive retry.
Credit: retry approach and detector originate from @BlueBirdBack's PR #15578.
Co-authored-by: BlueBirdBack <BlueBirdBack@users.noreply.github.com>
Bedrock's aws_sdk auth_type had no matching branch in
resolve_provider_client(), causing it to fall through to the
"unhandled auth_type" warning and return (None, None). This broke
all auxiliary tasks (compression, memory, summarization) for Bedrock
users — the main conversation loop worked fine, but background
context management silently failed.
Add an aws_sdk branch that creates an AnthropicAuxiliaryClient via
build_anthropic_bedrock_client(), using boto3's default credential
chain (IAM roles, SSO, env vars, instance metadata). Default
auxiliary model is Haiku for cost efficiency.
Closes #13919
Keep auxiliary provider resolution aligned with the switch and persisted main-provider paths when models.dev returns github-copilot slugs.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Auxiliary tasks (session_search, flush_memories, approvals, compression,
vision, etc.) that route to a named custom provider declared under
config.yaml 'providers:' with 'api_mode: anthropic_messages' were
silently building a plain OpenAI client and POSTing to
{base_url}/chat/completions, which returns 404 on Anthropic-compatible
gateways that only expose /v1/messages.
Two gaps caused this:
1. hermes_cli/runtime_provider.py::_get_named_custom_provider — the
providers-dict branch (new-style) returned only name/base_url/api_key/
model and dropped api_mode. The legacy custom_providers-list branch
already propagated it correctly. The dict branch now parses and
returns api_mode via _parse_api_mode() in both match paths.
2. agent/auxiliary_client.py::resolve_provider_client — the named
custom provider block at ~L1740 ignored custom_entry['api_mode']
and unconditionally built an OpenAI client (only wrapping for
Codex/Responses). It now mirrors _try_custom_endpoint()'s three-way
dispatch: anthropic_messages → AnthropicAuxiliaryClient (async wrapped
in AsyncAnthropicAuxiliaryClient), codex_responses → CodexAuxiliaryClient,
otherwise plain OpenAI. An explicit task-level api_mode override
still wins over the provider entry's declared api_mode.
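The three-way dispatch the named-custom branch now mirrors (sketch; constructor arguments are illustrative):
```python
def build_aux_client(entry: dict, task_api_mode: str | None = None):
    from agent.auxiliary_client import AnthropicAuxiliaryClient, CodexAuxiliaryClient
    from openai import OpenAI

    # An explicit task-level api_mode override still beats the provider entry.
    api_mode = task_api_mode or entry.get("api_mode") or "chat_completions"
    base_url, api_key = entry["base_url"], entry.get("api_key")

    if api_mode == "anthropic_messages":
        return AnthropicAuxiliaryClient(base_url=base_url, api_key=api_key)
    if api_mode == "codex_responses":
        return CodexAuxiliaryClient(base_url=base_url, api_key=api_key)
    return OpenAI(base_url=base_url, api_key=api_key)
```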
Fixes #15033
Tests: tests/agent/test_auxiliary_named_custom_providers.py gains a
TestProvidersDictApiModeAnthropicMessages class covering
- providers-dict preserves valid api_mode
- invalid api_mode values are dropped
- missing api_mode leaves the entry unchanged (no regression)
- resolve_provider_client returns (Async)AnthropicAuxiliaryClient for
api_mode=anthropic_messages
- full chain via get_text_auxiliary_client / get_async_text_auxiliary_client
with an auxiliary.<task> override
- providers without api_mode still use the OpenAI-wire path
## Merged
Adds MiMo v2.5-pro and v2.5 support to Xiaomi native provider, OpenCode Go, and setup wizard.
### Changes
- Context lengths: added v2.5-pro (1M) and v2.5 (1M), corrected existing MiMo entries to exact values (262144)
- Provider lists: xiaomi, opencode-go, setup wizard
- Vision: upgraded from mimo-v2-omni to mimo-v2.5 (omnimodal)
- Config description updated for XIAOMI_API_KEY
- Tests updated for new vision model preference
### Verification
- 4322 tests passed, 0 new regressions
- Live API tested on Xiaomi portal: basic, reasoning, tool calling, multi-tool, file ops, system prompt, vision — all pass
- Self-review found and fixed 2 issues (redundant vision check, stale HuggingFace context length)
NormalizedResponse and ToolCall now have backward-compat properties
so the agent loop can read them directly without the shim:
ToolCall: .type, .function (returns self), .call_id, .response_item_id
NormalizedResponse: .reasoning_content, .reasoning_details,
.codex_reasoning_items
This eliminates the 35-line shim and its 4 call sites in run_agent.py.
Also changes flush_memories guard from hasattr(response, 'choices')
to self.api_mode in ('chat_completions', 'bedrock_converse') so it
works with raw boto3 dicts too.
WS1 items 3+4 of Cycle 2 (#14418).
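The back-compat surface on ToolCall amounts to a few forwarding properties (sketch; field names and defaults in the real dataclass may differ):
```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    id: str
    name: str
    arguments: str
    type: str = "function"          # .response_item_id omitted for brevity

    @property
    def function(self) -> "ToolCall":
        # Legacy chat.completions-shaped readers do tc.function.name /
        # tc.function.arguments; returning self keeps both working.
        return self

    @property
    def call_id(self) -> str:
        return self.id
```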
Replace direct normalize_anthropic_response() call in
_AnthropicCompletionsAdapter.create() with
AnthropicTransport.normalize_response() via get_transport().
Before: auxiliary_client called adapter v1 directly, bypassing
the transport layer entirely.
After: auxiliary_client → get_transport('anthropic_messages') →
transport.normalize_response() → adapter v1 → NormalizedResponse.
The adapter v1 function (normalize_anthropic_response) now has
zero callers outside agent/anthropic_adapter.py and the transport.
This unblocks collapsing v1 to return NormalizedResponse directly
in a follow-up (the remaining 2-layer chain becomes 1-layer).
WS1 item 2 of Cycle 2 (#14418).
Adds a first-class 'stepfun' API-key provider surfaced as Step Plan:
- Support Step Plan setup for both International and China regions
- Discover Step Plan models live from /step_plan/v1/models, with a
small coding-focused fallback catalog when discovery is unavailable
- Thread StepFun through provider metadata, setup persistence, status
and doctor output, auxiliary routing, and model normalization
- Add tests for provider resolution, model validation, metadata
mapping, and StepFun region/model persistence
Based on #6005 by @hengm3467.
Co-authored-by: hengm3467 <100685635+hengm3467@users.noreply.github.com>
Wire the auxiliary client (compaction, vision, session search, web extract)
to the Nous Portal's curated recommended-models endpoint when running on
Nous Portal, with a TTL-cached fetch that mirrors how we pull /models for
pricing.
hermes_cli/models.py
- fetch_nous_recommended_models(portal_base_url, force_refresh=False)
10-minute TTL cache, keyed per portal URL (staging vs prod don't
collide). Public endpoint, no auth required. Returns {} on any
failure so callers always get a dict.
- get_nous_recommended_aux_model(vision, free_tier=None, ...)
Tier-aware pick from the payload:
- Paid tier → paidRecommended{Vision,Compaction}Model, falling back
to freeRecommended* when the paid field is null (common during
staged rollouts of new paid models).
- Free tier → freeRecommended* only, never leaks paid models.
When free_tier is None, auto-detects via the existing
check_nous_free_tier() helper (already cached 3 min against
/api/oauth/account). Detection errors default to paid so we never
silently downgrade a paying user.
agent/auxiliary_client.py — _try_nous()
- Replaces the hardcoded xiaomi/mimo free-tier branch with a single call
to get_nous_recommended_aux_model(vision=vision).
- Falls back to _NOUS_MODEL (google/gemini-3-flash-preview) when the
Portal is unreachable or returns a null recommendation.
- The Portal is now the source of truth for aux model selection; the
xiaomi allowlist we used to carry is effectively dead.
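The fetch-and-pick pair reduces to something like this (sketch; endpoint path, HTTP library, and payload nesting are assumptions, while the field names are the ones listed above):
```python
import time
import requests   # illustrative; the repo may use a different HTTP client

_CACHE: dict[str, tuple[float, dict]] = {}
_TTL = 600   # 10-minute TTL, keyed per portal URL

def fetch_nous_recommended_models(portal_base_url: str, force_refresh: bool = False) -> dict:
    now = time.time()
    hit = _CACHE.get(portal_base_url)
    if hit and not force_refresh and now - hit[0] < _TTL:
        return hit[1]
    try:
        # Path is a placeholder for the Portal's public recommended-models endpoint.
        resp = requests.get(f"{portal_base_url}/recommended-models", timeout=10)
        data = resp.json() if resp.ok else {}
    except Exception:
        data = {}                    # callers always get a dict
    _CACHE[portal_base_url] = (now, data)
    return data

def pick_aux_model(payload: dict, vision: bool, free_tier: bool) -> str | None:
    kind = "Vision" if vision else "Compaction"
    free = payload.get(f"freeRecommended{kind}Model") or None
    if free_tier:
        return free                  # free tier never leaks paid models
    paid = payload.get(f"paidRecommended{kind}Model") or None
    return paid or free              # paid field may be null during staged rollouts
```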
Tests (15 new)
- tests/hermes_cli/test_models.py::TestNousRecommendedModels
Fetch caching, per-portal keying, network failure, force_refresh;
paid-prefers-paid, paid-falls-to-free, free-never-leaks-paid,
auto-detect, detection-error → paid default, null/blank modelName
handling.
- tests/agent/test_auxiliary_client.py::TestNousAuxiliaryRefresh
_try_nous honors Portal recommendation for text + vision, falls
back to google/gemini-3-flash-preview on None or exception.
Behavior won't visibly change today — both tier recommendations currently
point at google/gemini-3-flash-preview — but the moment the Portal ships
a better paid recommendation, subscribers pick it up within 10 minutes
without a Hermes release.
Revert two overreaches from #13699 that forced paid Nous vision to
xiaomi/mimo-v2-omni instead of the tier-appropriate gemini-3-flash-preview:
1. Remove "nous": "xiaomi/mimo-v2-omni" from _PROVIDER_VISION_MODELS —
#13696 already routes nous main-provider vision through the strict
backend, and this entry caused any direct resolve_provider_client(
"nous", ...) aggregator-lookup path to pick the wrong model for paid.
2. Drop the 'elif vision' paid override in _try_nous() that forced
mimo-v2-omni on every Nous vision call regardless of tier. Paid
accounts now keep gemini-3-flash-preview for vision as well as text.
Free-tier behavior unchanged: still uses mimo-v2-omni for vision,
mimo-v2-pro for text (check_nous_free_tier() branch).
E2E verified:
paid vision → google/gemini-3-flash-preview
free vision → xiaomi/mimo-v2-omni
paid text → google/gemini-3-flash-preview
free text → xiaomi/mimo-v2-pro
Two changes:
1. _PROVIDER_VISION_MODELS: add 'nous' -> 'xiaomi/mimo-v2-omni' entry
so the vision auto-detect chain picks the correct multimodal model.
2. resolve_provider_client: detect when the requested model is a vision
model (from _PROVIDER_VISION_MODELS or known vision model names) and
pass vision=True to _try_nous(). Previously, _try_nous() was always
called without vision=True in resolve_provider_client(), causing it to
return the default text model (gemini-3-flash-preview or mimo-v2-pro)
instead of the vision-capable mimo-v2-omni.
The _try_nous() function already handled free-tier vision correctly, but
the resolve_provider_client() path (used by the auto-detect vision chain)
never signaled that a vision task was in progress.
Verified: xiaomi/mimo-v2-omni returns HTTP 200 with image inputs on Nous
inference API. google/gemini-3-flash-preview returns 404 with images.
Aslaaen's fix in the original PR covered _detect_api_mode_for_url and the
two openai/xai sites in run_agent.py. This finishes the sweep: the same
substring-match false-positive class (e.g. https://api.openai.com.evil/v1,
https://proxy/api.openai.com/v1, https://api.anthropic.com.example/v1)
existed in eight more call sites, and the hostname helper was duplicated
in two modules.
- utils: add shared base_url_hostname() (single source of truth).
- hermes_cli/runtime_provider, run_agent: drop local duplicates, import
from utils. Reuse the cached AIAgent._base_url_hostname attribute
everywhere it's already populated.
- agent/auxiliary_client: switch codex-wrap auto-detect, max_completion_tokens
gate (auxiliary_max_tokens_param), and custom-endpoint max_tokens kwarg
selection to hostname equality.
- run_agent: native-anthropic check in the Claude-style model branch
and in the AIAgent init provider-auto-detect branch.
- agent/model_metadata: Anthropic /v1/models context-length lookup.
- hermes_cli/providers.determine_api_mode: anthropic / openai URL
heuristics for custom/unknown providers (the /anthropic path-suffix
convention for third-party gateways is preserved).
- tools/delegate_tool: anthropic detection for delegated subagent
runtimes.
- hermes_cli/setup, hermes_cli/tools_config: setup-wizard vision-endpoint
native-OpenAI detection (paired with deduping the repeated check into
a single is_native_openai boolean per branch).
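The shared helper and the equality check it enables (sketch; normalization follows the cases the tests below exercise):
```python
from urllib.parse import urlsplit

def base_url_hostname(base_url: str) -> str:
    """Hostname of a base URL, lowercased, with port and trailing dot dropped."""
    host = urlsplit(base_url).hostname or ""
    return host.rstrip(".").lower()

# Substring checks are what let the spoofed forms through; equality does not.
for url in ("https://api.openai.com.evil/v1",
            "https://proxy/api.openai.com/v1",
            "https://api.openai.com/v1"):
    print(url, base_url_hostname(url) == "api.openai.com")   # only the last is True
```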
Tests:
- tests/test_base_url_hostname.py covers the helper directly
(path-containing-host, host-suffix, trailing dot, port, case).
- tests/hermes_cli/test_determine_api_mode_hostname.py adds the same
regression class for determine_api_mode, plus a test that the
/anthropic third-party gateway convention still wins.
Also: add asslaenn5@gmail.com → Aslaaen to scripts/release.py AUTHOR_MAP.
Requests through Vercel AI Gateway now carry referrerUrl / appName /
User-Agent attribution so traffic shows up in the gateway's analytics.
Adds _AI_GATEWAY_HEADERS in auxiliary_client and a new
ai-gateway.vercel.sh branch in _apply_client_headers_for_base_url.
Kimi's gateway selects the correct temperature server-side based on the
active mode (thinking -> 1.0, non-thinking -> 0.6). Sending any
temperature value — even the previously "correct" one — conflicts with
gateway-managed defaults.
Replaces the old approach of forcing specific temperature values (0.6
for non-thinking, 1.0 for thinking) with an OMIT_TEMPERATURE sentinel
that tells all call sites to strip the temperature key from API kwargs
entirely.
Changes:
- agent/auxiliary_client.py: OMIT_TEMPERATURE sentinel, _is_kimi_model()
prefix check (covers all kimi-* models), _fixed_temperature_for_model()
returns sentinel for kimi models. _build_call_kwargs() strips temp.
- run_agent.py: _build_api_kwargs, flush_memories, and summary generation
paths all handle the sentinel by popping/omitting temperature.
- trajectory_compressor.py: _effective_temperature_for_model returns None
for kimi (sentinel mapped), direct client calls use kwargs dict to
conditionally include temperature.
- mini_swe_runner.py: same sentinel handling via wrapper function.
- 6 test files updated: all 'forces temperature X' assertions replaced
with 'temperature not in kwargs' assertions.
Net: -76 lines (171 added, 247 removed).
Inspired by PR #13137 (@kshitijk4poor).
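The sentinel pattern, condensed (sketch; the real call sites are the ones listed above):
```python
OMIT_TEMPERATURE = object()   # "do not send a temperature key at all"

def _is_kimi_model(model: str) -> bool:
    bare = model.split("/", 1)[-1].lower()    # tolerate vendor prefixes
    return bare.startswith("kimi-")

def _fixed_temperature_for_model(model: str):
    # Kimi's gateway sets temperature server-side; anything we send conflicts.
    return OMIT_TEMPERATURE if _is_kimi_model(model) else None

def _build_call_kwargs(model: str, temperature: float | None = None, **extra) -> dict:
    kwargs = {"model": model, **extra}
    fixed = _fixed_temperature_for_model(model)
    if fixed is OMIT_TEMPERATURE:
        pass                                   # strip the key entirely
    elif fixed is not None:
        kwargs["temperature"] = fixed
    elif temperature is not None:
        kwargs["temperature"] = temperature
    return kwargs

assert "temperature" not in _build_call_kwargs("kimi-k2.5", temperature=0.7)
```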
* feat(security): URL query param + userinfo + form body redaction
Port from nearai/ironclaw#2529.
Hermes already has broad value-shape coverage in agent/redact.py
(30+ vendor prefixes, JWTs, DB connstrs, etc.) but missed three
key-name-based patterns that catch opaque tokens without recognizable
prefixes:
1. URL query params - OAuth callback codes (?code=...),
access_token, refresh_token, signature, etc. These are opaque and
won't match any prefix regex. Now redacted by parameter NAME.
2. URL userinfo (https://user:pass@host) - for non-DB schemes. DB
schemes were already handled by _DB_CONNSTR_RE.
3. Form-urlencoded body (k=v pairs joined by ampersands) -
conservative, only triggers on clean pure-form inputs with no
other text.
Sensitive key allowlist matches ironclaw's (exact case-insensitive,
NOT substring - so token_count and session_id pass through).
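The query-param piece, sketched (the key set here is illustrative; the real allowlist mirrors ironclaw's):
```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

_SENSITIVE_KEYS = {"code", "access_token", "refresh_token", "signature", "api_key"}

def redact_url_query(url: str) -> str:
    parts = urlsplit(url)
    pairs = [(k, "[REDACTED]" if k.lower() in _SENSITIVE_KEYS else v)
             for k, v in parse_qsl(parts.query, keep_blank_values=True)]
    return urlunsplit(parts._replace(query=urlencode(pairs)))

print(redact_url_query("https://app.example/cb?code=abc123&state=xyz"))
# https://app.example/cb?code=%5BREDACTED%5D&state=xyz
```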
Tests: +20 new test cases across 3 test classes. All 75 redact tests
pass; gateway/test_pii_redaction and tools/test_browser_secret_exfil
also green.
Known pre-existing limitation: _ENV_ASSIGN_RE greedy match swallows
whole all-caps ENV-style names + trailing text when followed by
another assignment. Left untouched here (out of scope); URL query
redaction handles the lowercase case.
* feat: replace kimi-k2.5 with kimi-k2.6 on OpenRouter and Nous Portal
Update model catalogs for OpenRouter (fallback snapshot), Nous Portal,
and NVIDIA NIM to reference moonshotai/kimi-k2.6. Add kimi-k2.6 to
the fixed-temperature frozenset in auxiliary_client.py so the 0.6
contract is enforced on aggregator routings.
Native Moonshot provider lists (kimi-coding, kimi-coding-cn, moonshot,
opencode-zen, opencode-go) are unchanged — those use Moonshot's own
model IDs which are unaffected.
Third-party gateways that speak the native Anthropic protocol (MiniMax,
Zhipu GLM, Alibaba DashScope, Kimi, LiteLLM proxies) now work end-to-end
with the same feature set as direct api.anthropic.com callers. Synthesizes
eight stale community PRs into one consolidated change.
Five fixes:
- URL detection: consolidate three inline `endswith("/anthropic")`
checks in runtime_provider.py into the shared _detect_api_mode_for_url
helper. Third-party /anthropic endpoints now auto-resolve to
api_mode=anthropic_messages via one code path instead of three.
- OAuth leak-guard: all five sites that assign `_is_anthropic_oauth`
(__init__, switch_model, _try_refresh_anthropic_client_credentials,
_swap_credential, _try_activate_fallback) now gate on
`provider == "anthropic"` so a stale ANTHROPIC_TOKEN never trips
Claude-Code identity injection on third-party endpoints. Previously
only 2 of 5 sites were guarded.
- Prompt caching: new method `_anthropic_prompt_cache_policy()` returns
`(should_cache, use_native_layout)` per endpoint. Replaces three
inline conditions and the `native_anthropic=(api_mode=='anthropic_messages')`
call-site flag. Native Anthropic and third-party Anthropic gateways
both get the native cache_control layout; OpenRouter gets envelope
layout. Layout is persisted in `_primary_runtime` so fallback
restoration preserves the per-endpoint choice.
- Auxiliary client: `_try_custom_endpoint` honors
`api_mode=anthropic_messages` and builds `AnthropicAuxiliaryClient`
instead of silently downgrading to an OpenAI-wire client. Degrades
gracefully to OpenAI-wire when the anthropic SDK isn't installed.
- Config hygiene: `_update_config_for_provider` (hermes_cli/auth.py)
clears stale `api_key`/`api_mode` when switching to a built-in
provider, so a previous MiniMax custom endpoint's credentials can't
leak into a later OpenRouter session.
- Truncation continuation: length-continuation and tool-call-truncation
retry now cover `anthropic_messages` in addition to `chat_completions`
and `bedrock_converse`. Reuses the existing `_build_assistant_message`
path via `normalize_anthropic_response()` so the interim message
shape is byte-identical to the non-truncated path.
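The cache-policy helper's contract, sketched (conditions simplified to the cases described above):
```python
def _anthropic_prompt_cache_policy(provider: str, api_mode: str) -> tuple[bool, bool]:
    """Return (should_cache, use_native_layout) for the current endpoint."""
    if api_mode == "anthropic_messages":
        # Native Anthropic and third-party Anthropic-protocol gateways both get
        # the native cache_control layout.
        return True, True
    if provider == "openrouter":
        return True, False            # envelope-style layout
    return False, False
```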
Tests: 6 new files, 42 test cases. Targeted run + tests/run_agent,
tests/agent, tests/hermes_cli all pass (4554 passed).
Synthesized from (credits preserved via Co-authored-by trailers):
#7410 @nocoo — URL detection helper
#7393 @keyuyuan — OAuth 5-site guard
#7367 @n-WN — OAuth guard (narrower cousin, kept comment)
#8636 @sgaofen — caching helper + native-vs-proxy layout split
#10954 @Only-Code-A — caching on anthropic_messages+Claude
#7648 @zhongyueming1121 — aux client anthropic_messages branch
#6096 @hansnow — /model switch clears stale api_mode
#9691 @TroyMitchell911 — anthropic_messages truncation continuation
Closes: #7366, #8294 (third-party Anthropic identity + caching).
Supersedes: #7410, #7367, #7393, #8636, #10954, #7648, #6096, #9691.
Rejects: #9621 (OpenAI-wire caching with incomplete blocklist — risky),
#7242 (superseded by #9691, stale branch),
#8321 (targets smart_model_routing which was removed in #12732).
Co-authored-by: nocoo <nocoo@users.noreply.github.com>
Co-authored-by: Keyu Yuan <leoyuan0099@gmail.com>
Co-authored-by: Zoee <30841158+n-WN@users.noreply.github.com>
Co-authored-by: sgaofen <135070653+sgaofen@users.noreply.github.com>
Co-authored-by: Only-Code-A <bxzt2006@163.com>
Co-authored-by: zhongyueming <mygamez@163.com>
Co-authored-by: Xiaohan Li <hansnow@users.noreply.github.com>
Co-authored-by: Troy Mitchell <i@troy-y.org>
Follow-up to #12144. That PR standardized the kimi-k2.* temperature lock
against the Coding Plan endpoint (api.kimi.com/coding/v1) docs, where
non-thinking models require 0.6. Verified empirically against Moonshot
(April 2026) that the public chat endpoint (api.moonshot.ai/v1) has a
different contract for kimi-k2.5: it only accepts temperature=1, and rejects
0.6 with:
HTTP 400 "invalid temperature: only 1 is allowed for this model"
Users hit the public endpoint when KIMI_API_KEY is a legacy sk-* key (the
sk-kimi-* prefix routes to Coding Plan — see hermes_cli/auth.py). So for
Coding Plan subscribers the fix from #12144 is correct, but for public-API
users it reintroduces the exact 400 reported in #9125.
Reproduction on api.moonshot.ai/v1 + kimi-k2.5:
temperature=1.0 → 200 OK
temperature=0.6 → 400 "only 1 is allowed" ← #12144 default
temperature=None → 200 OK
Other kimi-k2.* models are unaffected empirically — turbo-preview accepts
0.6 and thinking-turbo accepts 1.0 on both endpoints — so only kimi-k2.5
diverges.
Fix: thread the client's actual base_url through _build_call_kwargs (the
parameter already existed but callers passed config-level resolved_base_url;
for auto-detected routes that was often empty). _fixed_temperature_for_model
now checks api.moonshot.ai first via an explicit _KIMI_PUBLIC_API_OVERRIDES
map, then falls back to the Coding Plan defaults. Tests parametrize over
endpoint + model to lock both contracts.
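The endpoint-aware override, sketched (the Coding Plan defaults shown are an illustrative subset):
```python
from urllib.parse import urlsplit

_KIMI_PUBLIC_API_OVERRIDES = {"kimi-k2.5": 1.0}   # api.moonshot.ai contract
_CODING_PLAN_DEFAULTS = {"kimi-k2.5": 0.6}        # api.kimi.com/coding contract

def fixed_temperature(model: str, base_url: str):
    host = (urlsplit(base_url).hostname or "").lower()
    if host == "api.moonshot.ai" and model in _KIMI_PUBLIC_API_OVERRIDES:
        return _KIMI_PUBLIC_API_OVERRIDES[model]
    return _CODING_PLAN_DEFAULTS.get(model)

assert fixed_temperature("kimi-k2.5", "https://api.moonshot.ai/v1") == 1.0
assert fixed_temperature("kimi-k2.5", "https://api.kimi.com/coding/v1") == 0.6
```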
Closes #9125.
- only use the native adapter for the canonical Gemini native endpoint
- keep custom and /openai base URLs on the OpenAI-compatible path
- preserve Hermes keepalive transport injection for native Gemini clients
- stabilize streaming tool-call replay across repeated SSE events
- add follow-up tests for base_url precedence, async streaming, and duplicate tool-call chunks
- add a native Gemini adapter over generateContent/streamGenerateContent
- switch the built-in gemini provider off the OpenAI-compatible endpoint
- preserve thought signatures and native functionResponse replay
- route auxiliary Gemini clients through the same adapter
- add focused unit coverage plus native-provider integration checks
The cherry-picked salvage (admin28980's commit) added codex headers only on the
primary chat client path, with two inaccuracies:
- originator was 'hermes-agent' — Cloudflare whitelists codex_cli_rs,
codex_vscode, codex_sdk_ts, and Codex* prefixes. 'hermes-agent' isn't on
the list, so the header had no mitigating effect on the 403 (the
account-id header alone may have been carrying the fix).
- account-id header was 'ChatGPT-Account-Id' — upstream codex-rs auth.rs
uses canonical 'ChatGPT-Account-ID' (PascalCase, trailing -ID).
Also, the auxiliary client (_try_codex + resolve_provider_client raw_codex
branch) constructs OpenAI clients against the same chatgpt.com endpoint with
no default headers at all — so compression, title generation, vision, session
search, and web_extract all still 403 from VPS IPs.
Consolidate the header set into _codex_cloudflare_headers() in
agent/auxiliary_client.py (natural home next to _read_codex_access_token and
the existing JWT decode logic) and call it from all four insertion points:
- run_agent.py: AIAgent.__init__ (initial construction)
- run_agent.py: _apply_client_headers_for_base_url (credential rotation)
- agent/auxiliary_client.py: _try_codex (aux client)
- agent/auxiliary_client.py: resolve_provider_client raw_codex branch
Net: -36/+55 lines, -25 lines of duplicated inline JWT decode replaced by a
single helper. User-Agent switched to 'codex_cli_rs/0.0.0 (Hermes Agent)' to
match the codex-rs shape while keeping product attribution.
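The consolidated helper, roughly (header names and values from above; the JWT claim name is an assumption):
```python
import base64, json

def _codex_cloudflare_headers(access_token: str) -> dict:
    headers = {
        "originator": "codex_cli_rs",                        # whitelisted value
        "User-Agent": "codex_cli_rs/0.0.0 (Hermes Agent)",   # codex-rs shape + attribution
    }
    try:
        payload = access_token.split(".")[1]
        payload += "=" * (-len(payload) % 4)                 # JWTs strip base64 padding
        claims = json.loads(base64.urlsafe_b64decode(payload))
        account_id = claims.get("chatgpt_account_id")        # claim name assumed
        if account_id:
            headers["ChatGPT-Account-ID"] = account_id       # canonical trailing -ID
    except Exception:
        pass                                    # malformed token → no account header
    return headers
```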
Tests in tests/agent/test_codex_cloudflare_headers.py cover:
- originator value, User-Agent shape, canonical header casing
- account-ID extraction from a real JWT fixture
- graceful handling of malformed / non-string / claim-missing tokens
- wiring at all four insertion points (primary init, rotation, both aux paths)
- non-chatgpt base URLs (openrouter) do NOT get codex headers
- switching away from chatgpt.com drops the headers
* fix(kimi): force fixed temperature on kimi-k2.* models (k2.5, thinking, turbo)
The prior override only matched the literal model name "kimi-for-coding",
but Moonshot's coding endpoint is hit with real model IDs such as
`kimi-k2.5`, `kimi-k2-turbo-preview`, `kimi-k2-thinking`, etc. Those
requests bypassed the override and kept the caller's temperature, so
Moonshot returns HTTP 400 "invalid temperature: only 0.6 is allowed for
this model" (or 1.0 for thinking variants).
Match the whole kimi-k2.* family:
* kimi-k2-thinking / kimi-k2-thinking-turbo -> 1.0 (thinking mode)
* all other kimi-k2.* -> 0.6 (non-thinking / instant mode)
Also accept an optional vendor prefix (e.g. `moonshotai/kimi-k2.5`) so
aggregator routings are covered.
* refactor(kimi): whitelist-match kimi coding models instead of prefix
Addresses review feedback on PR #12144.
- Replace `startswith("kimi-k2")` with explicit frozensets sourced from
Moonshot's kimi-for-coding model list. The prefix match would have also
clamped `kimi-k2-instruct` / `kimi-k2-instruct-0905`, which are the
separate non-coding K2 family with variable temperature (recommended 0.6
but not enforced — see huggingface.co/moonshotai/Kimi-K2-Instruct).
- Confirmed via platform.kimi.ai docs that all five coding models
(k2.5, k2-turbo-preview, k2-0905-preview, k2-thinking, k2-thinking-turbo)
share the fixed-temperature lock, so the preview-model mapping is no
longer an assumption.
- Drop the fragile `"thinking" in bare` substring test for a set lookup.
- Log a debug line on each override so operators can see when Hermes
silently rewrites temperature.
- Update class docstring. Extend the negative test to parametrize over
kimi-k2-instruct, Kimi-K2-Instruct-0905, and a hypothetical future
kimi-k2-experimental name — all must keep the caller's temperature.
Before: aggregator users (OpenRouter / Nous Portal) running 'auto'
routing for auxiliary tasks — compression, vision, web extraction,
session search, etc. — got routed to a cheap provider-side default
model (Gemini Flash). Non-aggregator users already got their main
model. Behavior was inconsistent and surprising — users picked
Claude / GPT / their preferred model, but side tasks ran on
Gemini Flash.
After: 'auto' means "use my main chat model" for every user,
regardless of provider type. Only when the main provider has no
working client does the fallback chain run (OpenRouter → Nous →
custom → Codex → API-key providers). Explicit per-task overrides
in config.yaml (auxiliary.<task>.provider / .model) still win —
they are a hard constraint, not subject to the auto policy.
Vision auto-detection follows the same policy: try main provider +
main model first (with _PROVIDER_VISION_MODELS overrides preserved
for providers like xiaomi and zai that ship a dedicated multimodal
model distinct from their chat model). Aggregator strict vision
backends are fallbacks, not the primary path.
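The policy in miniature (sketch; client_for stands in for the repo's resolve_provider_client so the example is self-contained):
```python
_FALLBACK_CHAIN = ("openrouter", "nous", "custom", "codex")   # then API-key providers

def resolve_auto(main_provider, main_model, explicit_override=None, client_for=None):
    if explicit_override:                        # per-task config is a hard constraint
        return explicit_override
    if client_for(main_provider, main_model):    # aggregators included now
        return (main_provider, main_model)
    for provider in _FALLBACK_CHAIN:             # only when main has no working client
        if client_for(provider, None):
            return (provider, None)
    return (None, None)

# e.g. an OpenRouter user on a Claude main model keeps that model for
# compression and vision instead of being routed to a cheap Flash default.
```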
Changes:
- agent/auxiliary_client.py: _resolve_auto() drops the
`_AGGREGATOR_PROVIDERS` guard. resolve_vision_provider_client()
auto branch unifies aggregator and exotic-provider paths —
everyone goes through resolve_provider_client() with main_model.
Dead _AGGREGATOR_PROVIDERS constant removed (was only used by
the guard we just removed).
- hermes_cli/main.py: aux config menu copy updated to reflect
the new semantics ("'auto' means 'use my main model'").
- tests/agent/test_auxiliary_main_first.py: 12 regression tests
covering OpenRouter/Nous/DeepSeek main paths, runtime-override
wins, explicit-config wins, vision override preservation for
exotic providers, and fallback-chain activation when the main
provider has no working client.
Co-authored-by: teknium1 <teknium@nousresearch.com>