---
sidebar_position: 4
title: Provider Runtime Resolution
description: How Hermes resolves providers, credentials, API modes, and auxiliary models at runtime
---


Hermes has a shared provider runtime resolver used across:

- the CLI
- the gateway
- cron jobs
- ACP
- auxiliary model calls

Primary implementation:

- `hermes_cli/runtime_provider.py`: credential resolution, `_resolve_custom_runtime()`
- `hermes_cli/auth.py`: provider registry, `resolve_provider()`
- `hermes_cli/model_switch.py`: shared `/model` switch pipeline (CLI + gateway)
- `agent/auxiliary_client.py`: auxiliary model routing
- `providers/`: declarative source for `api_mode`, `base_url`, `env_vars`, `fallback_models` (auto-registered into `auth.py`'s `PROVIDER_REGISTRY` at startup)

`get_provider_profile()` in `providers/` returns a typed dict for a given provider id. `runtime_provider.py` calls it at resolution time to get the canonical `base_url`, `env_vars` priority list, `api_mode`, and `fallback_models` without duplicating that data across files. Adding a new `providers/*.py` file that calls `register_provider()` is enough for `runtime_provider.py` to pick it up; no branch is needed in the resolver itself.
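A new simple API-key provider might look like the sketch below. The field names come from the `ProviderProfile` declaration, but the exact subclass and registration shape shown here is an assumption, not the literal source:

```python
# providers/acme.py (hypothetical provider; illustrative shape only)
from providers import ProviderProfile, register_provider


class AcmeProfile(ProviderProfile):
    """Declarative profile for a fictional OpenAI-compatible provider."""

    name = "acme"
    display_name = "Acme AI"
    api_mode = "chat_completions"
    aliases = ("acme-ai",)
    env_vars = ("ACME_API_KEY",)              # checked in priority order
    base_url = "https://api.acme.example/v1"
    models_url = "https://api.acme.example/v1/models"
    auth_type = "api_key"
    fallback_models = ("acme-large",)
    hostname = "api.acme.example"             # used for URL-to-provider detection


# Registration auto-wires the runtime: auth.PROVIDER_REGISTRY,
# models.CANONICAL_PROVIDERS, doctor health checks, config.OPTIONAL_ENV_VARS.
register_provider(AcmeProfile())
```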

If you are trying to add a new first-class inference provider, read Adding Providers alongside this page.

## Resolution precedence

At a high level, provider resolution uses:

1. explicit CLI/runtime request
2. `config.yaml` model/provider config
3. environment variables
4. provider-specific defaults or auto resolution

That ordering matters because Hermes treats the saved model/provider choice as the source of truth for normal runs. This prevents a stale shell export from silently overriding the endpoint a user last selected in `hermes model`.
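As a condensed sketch of that precedence (helper name and shapes are hypothetical; the real logic lives in `hermes_cli/runtime_provider.py`):

```python
# Illustrative precedence only; not the actual resolver code.
def resolve_provider_sketch(cli_provider: str | None, config: dict, env: dict) -> str:
    # 1. An explicit CLI/runtime request always wins.
    if cli_provider:
        return cli_provider
    # 2. The saved config.yaml choice beats env vars, so a stale shell
    #    export cannot silently override the last `hermes model` pick.
    if config.get("provider"):
        return config["provider"]
    # 3. Environment variables.
    if env.get("OPENAI_BASE_URL"):
        return "custom"
    # 4. Provider-specific defaults / auto resolution (placeholder value).
    return "openrouter"
```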

## Providers

Current provider families include:

- AI Gateway (Vercel)
- OpenRouter
- Nous Portal
- OpenAI Codex
- Copilot / Copilot ACP
- Anthropic (native)
- Google / Gemini
- Alibaba / DashScope
- DeepSeek
- Z.AI
- Kimi / Moonshot
- MiniMax
- MiniMax China
- Kilo Code
- Hugging Face
- OpenCode Zen / OpenCode Go
- Custom (`provider: custom`): first-class provider for any OpenAI-compatible endpoint
- Named custom providers (`custom_providers` list in `config.yaml`)

## Output of runtime resolution

The runtime resolver returns data such as:

- `provider`
- `api_mode`
- `base_url`
- `api_key`
- `source`
- provider-specific metadata such as expiry/refresh info
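Written as a hedged `TypedDict`, the result looks roughly like this (field names match the list above; the concrete type used in the code is not specified here):

```python
from typing import TypedDict


class RuntimeResolution(TypedDict, total=False):
    """Hypothetical typing of the resolver output; illustrative only."""

    provider: str      # e.g. "openrouter", "anthropic", "custom"
    api_mode: str      # "chat_completions", "anthropic_messages", or "codex_responses"
    base_url: str
    api_key: str
    source: str        # where the credential came from (env var, config, auth store)
    expires_at: float  # provider-specific metadata, e.g. token expiry/refresh info
```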

## Why this matters

This resolver is the main reason Hermes can share auth/runtime logic between:

- `hermes chat`
- gateway message handling
- cron jobs running in fresh sessions
- ACP editor sessions
- auxiliary model tasks

## AI Gateway

Set `AI_GATEWAY_API_KEY` in `~/.hermes/.env` and run with `--provider ai-gateway`. Hermes fetches available models from the gateway's `/models` endpoint, filtering to language models with tool-use support.
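A sketch of that fetch-and-filter step; the exact endpoint path and the response fields (`type`, `capabilities`) are assumptions about the gateway's schema rather than verified values:

```python
import os

import requests


def fetch_ai_gateway_models() -> list[str]:
    """Hypothetical sketch: list tool-capable language models from AI Gateway."""
    resp = requests.get(
        "https://ai-gateway.vercel.sh/v1/models",  # assumed path under the gateway host
        headers={"Authorization": f"Bearer {os.environ['AI_GATEWAY_API_KEY']}"},
        timeout=30,
    )
    resp.raise_for_status()
    models = resp.json().get("data", [])
    # Keep language models that advertise tool-use support (assumed schema).
    return [
        m["id"]
        for m in models
        if m.get("type") == "language" and "tool-use" in m.get("capabilities", [])
    ]
```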

## OpenRouter, AI Gateway, and custom OpenAI-compatible base URLs

Hermes contains logic to avoid leaking the wrong API key to a custom endpoint when multiple provider keys exist (e.g. `OPENROUTER_API_KEY`, `AI_GATEWAY_API_KEY`, and `OPENAI_API_KEY`).

Each provider's API key is scoped to its own base URL:

- `OPENROUTER_API_KEY` is only sent to `openrouter.ai` endpoints
- `AI_GATEWAY_API_KEY` is only sent to `ai-gateway.vercel.sh` endpoints
- `OPENAI_API_KEY` is used for custom endpoints and as a fallback
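Conceptually the scoping looks like this sketch; the hostnames come from the list above, while the real checks in `hermes_cli/runtime_provider.py` may differ in detail:

```python
import os
from urllib.parse import urlparse


def api_key_for(base_url: str) -> str | None:
    """Illustrative only: pick the key whose scope matches the endpoint host."""
    host = urlparse(base_url).hostname or ""
    if host.endswith("openrouter.ai"):
        return os.environ.get("OPENROUTER_API_KEY")
    if host.endswith("ai-gateway.vercel.sh"):
        return os.environ.get("AI_GATEWAY_API_KEY")
    # Custom endpoints and the generic fallback use OPENAI_API_KEY, so
    # provider-scoped keys are never sent to unknown hosts.
    return os.environ.get("OPENAI_API_KEY")
```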

Hermes also distinguishes between:

- a real custom endpoint selected by the user
- the OpenRouter fallback path used when no custom endpoint is configured

That distinction is especially important for:

- local model servers
- non-OpenRouter/non-AI Gateway OpenAI-compatible APIs
- switching providers without re-running setup
- config-saved custom endpoints that should keep working even when `OPENAI_BASE_URL` is not exported in the current shell

## Native Anthropic path

Anthropic is not just "via OpenRouter" anymore.

When provider resolution selects `anthropic`, Hermes uses:

- `api_mode = anthropic_messages`
- the native Anthropic Messages API
- `agent/anthropic_adapter.py` for translation

Credential resolution for native Anthropic now prefers refreshable Claude Code credentials over copied env tokens when both are present. In practice that means:

- Claude Code credential files are treated as the preferred source when they include refreshable auth
- manual `ANTHROPIC_TOKEN` / `CLAUDE_CODE_OAUTH_TOKEN` values still work as explicit overrides
- Hermes preflights Anthropic credential refresh before native Messages API calls
- Hermes still retries once on a 401 after rebuilding the Anthropic client, as a fallback path
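Put together, the preference order reads roughly like the sketch below; the helper names and the credentials-file location are hypothetical:

```python
import json
import os
from pathlib import Path


def load_claude_code_credentials() -> dict | None:
    """Hypothetical helper; the real file location/format is not shown here."""
    path = Path.home() / ".claude" / ".credentials.json"  # assumed location
    return json.loads(path.read_text()) if path.exists() else None


def resolve_anthropic_credentials() -> tuple[str | None, str | None]:
    """Illustrative preference order: refreshable Claude Code creds, then env."""
    creds = load_claude_code_credentials()
    if creds and creds.get("refresh_token"):
        # Preferred source: refreshable auth, preflighted before API calls.
        return creds.get("access_token"), "claude-code"
    for var in ("ANTHROPIC_TOKEN", "CLAUDE_CODE_OAUTH_TOKEN"):
        if os.environ.get(var):
            return os.environ[var], var  # manual tokens as explicit overrides
    return None, None
```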

## OpenAI Codex path

Codex uses a separate Responses API path:

- `api_mode = codex_responses`
- dedicated credential resolution and auth store support

## Auxiliary model routing

Auxiliary tasks such as:

- vision
- web extraction summarization
- context compression summaries
- session search summarization
- skills hub operations
- MCP helper operations
- memory flushes

can use their own provider/model routing rather than the main conversational model.

When an auxiliary task is configured with provider `main`, Hermes resolves it through the same shared runtime path as normal chat (see the sketch after this list). In practice that means:

- env-driven custom endpoints still work
- custom endpoints saved via `hermes model` / `config.yaml` also work
- auxiliary routing can tell the difference between a real saved custom endpoint and the OpenRouter fallback
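A minimal sketch of that `main` passthrough; the function name and config shape are hypothetical:

```python
def resolve_auxiliary_runtime(task_config: dict, main_runtime: dict) -> dict:
    """Illustrative only: route an auxiliary task's provider/model choice."""
    if task_config.get("provider") == "main":
        # "main" reuses the shared runtime resolution, so env-driven and
        # config-saved custom endpoints behave exactly as in normal chat.
        return main_runtime
    # Otherwise the task carries its own provider/model pair.
    return {"provider": task_config["provider"], "model": task_config.get("model")}
```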

## Fallback models

Hermes supports a configured fallback model/provider pair, allowing runtime failover when the primary model encounters errors.

### How it works internally

1. **Storage:** `AIAgent.__init__` stores the `fallback_model` dict and sets `_fallback_activated = False`.
2. **Trigger points:** `_try_activate_fallback()` is called from three places in the main retry loop in `run_agent.py`:
   - after max retries on invalid API responses (`None` choices, missing content)
   - on non-retryable client errors (HTTP 401, 403, 404)
   - after max retries on transient errors (HTTP 429, 500, 502, 503)
3. **Activation flow** (`_try_activate_fallback`, sketched below):
   - returns `False` immediately if already activated or not configured
   - calls `resolve_provider_client()` from `auxiliary_client.py` to build a new client with proper auth
   - determines `api_mode`: `codex_responses` for `openai-codex`, `anthropic_messages` for `anthropic`, `chat_completions` for everything else
   - swaps in place: `self.model`, `self.provider`, `self.base_url`, `self.api_mode`, `self.client`, `self._client_kwargs`
   - for an `anthropic` fallback, builds a native Anthropic client instead of an OpenAI-compatible one
   - re-evaluates prompt caching (enabled for Claude models on OpenRouter)
   - sets `_fallback_activated = True`, which prevents it from firing again
   - resets the retry count to 0 and continues the loop
4. **Config flow:**
   - CLI: `cli.py` reads `CLI_CONFIG["fallback_model"]` and passes it to `AIAgent(fallback_model=...)`
   - Gateway: `gateway/run.py._load_fallback_model()` reads `config.yaml` and passes it to `AIAgent`
   - Validation: both `provider` and `model` keys must be non-empty, or fallback is disabled
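Condensed into a sketch (attribute and method names follow the list above; the `resolve_provider_client()` return shape is an assumption):

```python
from agent.auxiliary_client import resolve_provider_client  # return shape assumed


def try_activate_fallback(agent) -> bool:
    """Illustrative condensation of _try_activate_fallback; not the source."""
    if agent._fallback_activated or not agent.fallback_model:
        return False  # one-shot semantics: never fires twice, no-op if unset
    provider = agent.fallback_model["provider"]
    client, base_url = resolve_provider_client(provider)  # fresh, properly-authed client
    agent.api_mode = {
        "openai-codex": "codex_responses",
        "anthropic": "anthropic_messages",
    }.get(provider, "chat_completions")
    # Swap in place so the main retry loop resumes against the fallback.
    agent.model = agent.fallback_model["model"]
    agent.provider, agent.base_url, agent.client = provider, base_url, client
    agent._fallback_activated = True  # prevents a second activation
    return True
```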

### What does NOT support fallback

- Subagent delegation (`tools/delegate_tool.py`): subagents inherit the parent's provider but not the fallback config
- Auxiliary tasks: these use their own independent provider auto-detection chain (see Auxiliary model routing above)

Cron jobs *do* support fallback: `run_job()` reads `fallback_providers` (or the legacy `fallback_model`) from `config.yaml` and passes it to `AIAgent(fallback_model=...)`, matching the gateway's `_load_fallback_model()` pattern. See Cron Internals.
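The shared validation is simple enough to sketch; the helper name here is hypothetical, but the key names come from the validation rule above:

```python
def load_fallback_model(config: dict) -> dict | None:
    """Illustrative: both keys must be non-empty or fallback is disabled."""
    fb = config.get("fallback_model") or {}
    if fb.get("provider") and fb.get("model"):
        return {"provider": fb["provider"], "model": fb["model"]}
    return None  # disabled
```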

### Test coverage

See `tests/test_fallback_model.py` for comprehensive tests covering all supported providers, one-shot semantics, and edge cases.