Comprehensive audit of every reference/messaging/feature doc page against the
live code registries (PROVIDER_REGISTRY, OPTIONAL_ENV_VARS, COMMAND_REGISTRY,
TOOLSETS, tool registry, on-disk skills). Every fix was verified against code
before writing.
### Wrong values fixed (users would paste-and-fail)
- reference/environment-variables.md:
  - DASHSCOPE_BASE_URL default was `coding-intl.dashscope.aliyuncs.com/v1` →
    actual `dashscope-intl.aliyuncs.com/compatible-mode/v1`.
  - MINIMAX_BASE_URL and MINIMAX_CN_BASE_URL defaults were `/v1` → actual
    `/anthropic` (Hermes calls MiniMax via its Anthropic Messages endpoint).
- reference/toolsets-reference.md MCP example used the non-existent nested
`mcp: servers:` key → real key is the flat `mcp_servers:`.
- reference/skills-catalog.md listed ~20 bundled skills that no longer exist
on disk (all moved to `optional-skills/`). Regenerated the whole bundled
section from `skills/**/SKILL.md` — 79 skills, accurate paths and names.
- messaging/slack.md ":::info" callout claimed Slack has no
`free_response_channels` equivalent; both the env var and the yaml key are
in fact read.
- messaging/qqbot.md documented `QQ_MARKDOWN_SUPPORT` as an env var, but the
adapter only reads `extra.markdown_support` from config.yaml. Removed the
env var row and noted config-only nature.
- messaging/qqbot.md `hermes setup gateway` → `hermes gateway setup`.
### Missing coverage added
- Providers: AWS Bedrock and Qwen Portal (qwen-oauth) — both in
PROVIDER_REGISTRY but undocumented everywhere. Added sections to
integrations/providers.md, rows to quickstart.md and fallback-providers.md.
- integrations/providers.md "Fallback Model" provider list now includes
gemini, google-gemini-cli, qwen-oauth, xai, nvidia, ollama-cloud, bedrock.
- reference/cli-commands.md `--provider` enum and HERMES_INFERENCE_PROVIDER
enum in env-vars now include the same set.
- reference/slash-commands.md: added `/agents` (alias `/tasks`) and `/copy`.
Removed duplicate rows for `/snapshot`, `/fast` (×2), `/debug`.
- reference/tools-reference.md: fixed "47 built-in tools" → 52. Added
`feishu_doc` and `feishu_drive` toolset sections.
- reference/toolsets-reference.md: added `feishu_doc` / `feishu_drive` core
rows + all missing `hermes-<platform>` toolsets in the platform table
(bluebubbles, dingtalk, feishu, qqbot, wecom, wecom-callback, weixin,
homeassistant, webhook, gateway). Fixed the `debugging` composite to
describe the actual `includes=[...]` mechanism.
- reference/optional-skills-catalog.md: added `fitness-nutrition`.
- reference/environment-variables.md: added NOUS_BASE_URL,
NOUS_INFERENCE_BASE_URL, NVIDIA_API_KEY/BASE_URL, OLLAMA_API_KEY/BASE_URL,
XAI_API_KEY/BASE_URL, MISTRAL_API_KEY, AWS_REGION/AWS_PROFILE,
BEDROCK_BASE_URL, HERMES_QWEN_BASE_URL, DISCORD_ALLOWED_CHANNELS,
DISCORD_PROXY, TELEGRAM_REPLY_TO_MODE, MATRIX_DEVICE_ID, MATRIX_REACTIONS,
QQBOT_HOME_CHANNEL_NAME, QQ_SANDBOX.
- messaging/discord.md: documented DISCORD_ALLOWED_CHANNELS, DISCORD_PROXY,
HERMES_DISCORD_TEXT_BATCH_DELAY_SECONDS and
HERMES_DISCORD_TEXT_BATCH_SPLIT_DELAY_SECONDS (all actively read by the adapter).
- messaging/matrix.md: documented MATRIX_REACTIONS (default true).
- messaging/telegram.md: removed the redundant second Webhook Mode section
that invented a `telegram.webhook_mode: true` yaml key the adapter does
not read.
- user-guide/features/hooks.md: added `on_session_finalize` and
`on_session_reset` (both emitted via invoke_hook but undocumented).
- user-guide/features/api-server.md: documented GET /health/detailed, the
`/api/jobs/*` CRUD surface, POST /v1/runs, and GET /v1/runs/{id}/events
(10 routes that were live but undocumented).
- user-guide/features/fallback-providers.md: added `approval` and
`title_generation` auxiliary-task rows; added gemini, bedrock, qwen-oauth
to the supported-providers table.
- user-guide/features/tts.md: "seven providers" → "eight" (post-xAI add
oversight in #11942).
- user-guide/configuration.md: TTS provider enum gains `xai` and `gemini`;
yaml example block gains `mistral:`, `gemini:`, `xai:` subsections.
Auxiliary-provider enum now enumerates all real registry entries.
- reference/faq.md: stale AIAgent/config examples bumped from
`nous/hermes-3-llama-3.1-70b` and `claude-sonnet-4.6` to
`claude-opus-4.7`.
### Docs-site integrity
- guides/build-a-hermes-plugin.md referenced two nonexistent hooks
(`pre_api_request`, `post_api_request`). Replaced with the real
`on_session_finalize` / `on_session_reset` entries.
- messaging/open-webui.md and features/api-server.md had pre-existing
broken links to `/docs/user-guide/features/profiles` (actual path is
`/docs/user-guide/profiles`). Fixed.
- reference/skills-catalog.md had one `<1%` literal that MDX parsed as a
  JSX tag. Escaped it so MDX renders it as plain text.
### False positives filtered out (not changed, verified correct)
- `/set-home` is a registered alias of `/sethome` — docs were fine.
- `hermes setup gateway` is valid syntax (`hermes setup <section>`);
changed in qqbot.md for cross-doc consistency, not as a bug fix.
- Telegram reactions "disabled by default" matches code (default `"false"`).
- Matrix encryption "opt-in" matches code (empty env default → disabled).
- `pre_api_request` / `post_api_request` hooks do NOT exist in current code;
documented instead the real `on_session_finalize` / `on_session_reset`.
- SIGNAL_IGNORE_STORIES is already in env-vars.md (subagent missed it).
### Validation
- `docusaurus build` — passes (only pre-existing nix-setup anchor warning).
- `ascii-guard lint docs` \u2014 124 files, 0 errors.
- 22 files changed, +317 / −158.
---
title: Fallback Providers
description: Configure automatic failover to backup LLM providers when your primary model is unavailable.
sidebar_label: Fallback Providers
sidebar_position: 8
---

# Fallback Providers
Hermes Agent has three layers of resilience that keep your sessions running when providers hit issues:
- Credential pools — rotate across multiple API keys for the same provider (tried first)
- Primary model fallback — automatically switches to a different provider:model when your main model fails
- Auxiliary task fallback — independent provider resolution for side tasks like vision, compression, and web extraction
Credential pools handle same-provider rotation (e.g., multiple OpenRouter keys). This page covers cross-provider fallback. Both are optional and work independently.
## Primary Model Fallback
When your main LLM provider encounters errors — rate limits, server overload, auth failures, connection drops — Hermes can automatically switch to a backup provider:model pair mid-session without losing your conversation.
### Configuration

Add a `fallback_model` section to `~/.hermes/config.yaml`:

```yaml
fallback_model:
  provider: openrouter
  model: anthropic/claude-sonnet-4
```

Both `provider` and `model` are required. If either is missing, the fallback is disabled.
### Supported Providers

| Provider | Value | Requirements |
|---|---|---|
| AI Gateway | `ai-gateway` | `AI_GATEWAY_API_KEY` |
| OpenRouter | `openrouter` | `OPENROUTER_API_KEY` |
| Nous Portal | `nous` | `hermes auth` (OAuth) |
| OpenAI Codex | `openai-codex` | `hermes model` (ChatGPT OAuth) |
| GitHub Copilot | `copilot` | `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, or `GITHUB_TOKEN` |
| GitHub Copilot ACP | `copilot-acp` | External process (editor integration) |
| Anthropic | `anthropic` | `ANTHROPIC_API_KEY` or Claude Code credentials |
| z.ai / GLM | `zai` | `GLM_API_KEY` |
| Kimi / Moonshot | `kimi-coding` | `KIMI_API_KEY` |
| MiniMax | `minimax` | `MINIMAX_API_KEY` |
| MiniMax (China) | `minimax-cn` | `MINIMAX_CN_API_KEY` |
| DeepSeek | `deepseek` | `DEEPSEEK_API_KEY` |
| NVIDIA NIM | `nvidia` | `NVIDIA_API_KEY` (optional: `NVIDIA_BASE_URL`) |
| Ollama Cloud | `ollama-cloud` | `OLLAMA_API_KEY` |
| Google Gemini (OAuth) | `google-gemini-cli` | `hermes model` (Google OAuth; optional: `HERMES_GEMINI_PROJECT_ID`) |
| Google AI Studio | `gemini` | `GOOGLE_API_KEY` (alias: `GEMINI_API_KEY`) |
| xAI (Grok) | `xai` (alias `grok`) | `XAI_API_KEY` (optional: `XAI_BASE_URL`) |
| AWS Bedrock | `bedrock` | Standard boto3 auth (`AWS_REGION` + `AWS_PROFILE` or `AWS_ACCESS_KEY_ID`) |
| Qwen Portal (OAuth) | `qwen-oauth` | `hermes model` (Qwen Portal OAuth; optional: `HERMES_QWEN_BASE_URL`) |
| OpenCode Zen | `opencode-zen` | `OPENCODE_ZEN_API_KEY` |
| OpenCode Go | `opencode-go` | `OPENCODE_GO_API_KEY` |
| Kilo Code | `kilocode` | `KILOCODE_API_KEY` |
| Xiaomi MiMo | `xiaomi` | `XIAOMI_API_KEY` |
| Arcee AI | `arcee` | `ARCEEAI_API_KEY` |
| Alibaba / DashScope | `alibaba` | `DASHSCOPE_API_KEY` |
| Hugging Face | `huggingface` | `HF_TOKEN` |
| Custom endpoint | `custom` | `base_url` + `api_key_env` (see below) |
### Custom Endpoint Fallback

For a custom OpenAI-compatible endpoint, add `base_url` and optionally `api_key_env`:

```yaml
fallback_model:
  provider: custom
  model: my-local-model
  base_url: http://localhost:8000/v1
  api_key_env: MY_LOCAL_KEY  # env var name containing the API key
```
### When Fallback Triggers
The fallback activates automatically when the primary model fails with:
- Rate limits (HTTP 429) — after exhausting retry attempts
- Server errors (HTTP 500, 502, 503) — after exhausting retry attempts
- Auth failures (HTTP 401, 403) — immediately (no point retrying)
- Not found (HTTP 404) — immediately
- Invalid responses — when the API returns malformed or empty responses repeatedly
When triggered, Hermes:
1. Resolves credentials for the fallback provider
2. Builds a new API client
3. Swaps the model, provider, and client in place
4. Resets the retry counter and continues the conversation
The switch is seamless — your conversation history, tool calls, and context are preserved. The agent continues from exactly where it left off, just using a different model.
:::info One-Shot
Fallback activates at most once per session. If the fallback provider also fails, normal error handling takes over (retries, then error message). This prevents cascading failover loops.
:::
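The trigger-and-swap flow above can be sketched as a small retry loop. This is an illustrative sketch only, under the rules stated in this section; `Session`, `ApiError`, and `.complete()` are hypothetical names, not actual Hermes APIs:

```python
# Illustrative sketch of one-shot failover. Every name here (Session,
# ApiError, .complete) is hypothetical; this is not the Hermes implementation.

FATAL_STATUSES = {401, 403, 404}  # switch immediately; retrying is pointless


class ApiError(Exception):
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status


class Session:
    def __init__(self, primary_client, fallback_client=None, max_retries=3):
        self.client = primary_client
        self.fallback = fallback_client
        self.fallback_used = False
        self.max_retries = max_retries

    def send(self, messages):
        retries = 0
        while True:
            try:
                # History travels with `messages`, so a client swap is seamless.
                return self.client.complete(messages)
            except ApiError as err:
                exhausted = retries >= self.max_retries
                if err.status in FATAL_STATUSES or exhausted:
                    if self.fallback is not None and not self.fallback_used:
                        # One-shot: swap the client in place, reset retries.
                        self.client = self.fallback
                        self.fallback_used = True
                        retries = 0
                        continue
                    raise  # fallback spent or unconfigured: normal error path
                retries += 1  # 429 / 5xx: retry before falling back
```

Note how a 401 skips the retry loop entirely while a 429 only fails over once retries are exhausted, and how `fallback_used` guarantees the swap happens at most once per session.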
### Examples

OpenRouter as fallback for Anthropic native:

```yaml
model:
  provider: anthropic
  default: claude-sonnet-4-6

fallback_model:
  provider: openrouter
  model: anthropic/claude-sonnet-4
```

Nous Portal as fallback for OpenRouter:

```yaml
model:
  provider: openrouter
  default: anthropic/claude-opus-4

fallback_model:
  provider: nous
  model: nous-hermes-3
```

Local model as fallback for cloud:

```yaml
fallback_model:
  provider: custom
  model: llama-3.1-70b
  base_url: http://localhost:8000/v1
  api_key_env: LOCAL_API_KEY
```

Codex OAuth as fallback:

```yaml
fallback_model:
  provider: openai-codex
  model: gpt-5.3-codex
```
### Where Fallback Works
| Context | Fallback Supported |
|---|---|
| CLI sessions | ✔ |
| Messaging gateway (Telegram, Discord, etc.) | ✔ |
| Subagent delegation | ✘ (subagents do not inherit fallback config) |
| Cron jobs | ✘ (run with a fixed provider) |
| Auxiliary tasks (vision, compression) | ✘ (use their own provider chain — see below) |
:::tip
There are no environment variables for `fallback_model` — it is configured exclusively through `config.yaml`. This is intentional: fallback configuration is a deliberate choice, not something a stale shell export should override.
:::
## Auxiliary Task Fallback
Hermes uses separate lightweight models for side tasks. Each task has its own provider resolution chain that acts as a built-in fallback system.
### Tasks with Independent Provider Resolution
| Task | What It Does | Config Key |
|---|---|---|
| Vision | Image analysis, browser screenshots | `auxiliary.vision` |
| Web Extract | Web page summarization | `auxiliary.web_extract` |
| Compression | Context compression summaries | `auxiliary.compression` |
| Session Search | Past session summarization | `auxiliary.session_search` |
| Skills Hub | Skill search and discovery | `auxiliary.skills_hub` |
| MCP | MCP helper operations | `auxiliary.mcp` |
| Memory Flush | Memory consolidation | `auxiliary.flush_memories` |
| Approval | Smart command-approval classification | `auxiliary.approval` |
| Title Generation | Session title summaries | `auxiliary.title_generation` |
### Auto-Detection Chain

When a task's provider is set to `"auto"` (the default), Hermes tries providers in order until one works.

For text tasks (compression, web extract, etc.):

```
OpenRouter → Nous Portal → Custom endpoint → Codex OAuth →
API-key providers (z.ai, Kimi, MiniMax, Xiaomi MiMo, Hugging Face, Anthropic) → give up
```

For vision tasks:

```
Main provider (if vision-capable) → OpenRouter → Nous Portal →
Codex OAuth → Anthropic → Custom endpoint → give up
```
If the resolved provider fails at call time, Hermes also has an internal retry: if the provider is not OpenRouter and no explicit `base_url` is set, it tries OpenRouter as a last-resort fallback.
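The `"auto"` resolution above amounts to a first-match scan over a fixed candidate order. A minimal sketch, where `resolve_auto` and the `is_configured` credentials check are hypothetical helpers and the order mirrors the text-task chain listed above:

```python
# Sketch of "auto" provider resolution: scan a fixed chain, first match wins.
# Function and predicate names are hypothetical, not Hermes internals.
TEXT_CHAIN = [
    "openrouter", "nous", "custom", "codex",
    "zai", "kimi", "minimax", "xiaomi", "huggingface", "anthropic",
]

def resolve_auto(chain, is_configured):
    """Return the first provider in `chain` with working credentials, else None."""
    for provider in chain:
        if is_configured(provider):
            return provider
    return None  # give up: nothing in the chain is configured
```

With only Kimi and Anthropic credentials present, the scan would settle on `kimi`: everything earlier in the chain is skipped as unconfigured, and Anthropic is never reached.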
### Configuring Auxiliary Providers

Each task can be configured independently in `config.yaml`:

```yaml
auxiliary:
  vision:
    provider: "auto"   # auto | openrouter | nous | codex | main | anthropic
    model: ""          # e.g. "openai/gpt-4o"
    base_url: ""       # direct endpoint (takes precedence over provider)
    api_key: ""        # API key for base_url
  web_extract:
    provider: "auto"
    model: ""
  compression:
    provider: "auto"
    model: ""
  session_search:
    provider: "auto"
    model: ""
  skills_hub:
    provider: "auto"
    model: ""
  mcp:
    provider: "auto"
    model: ""
  flush_memories:
    provider: "auto"
    model: ""
```
Every task above follows the same `provider` / `model` / `base_url` pattern. Context compression is configured under `auxiliary.compression`:

```yaml
auxiliary:
  compression:
    provider: main   # same provider options as other auxiliary tasks
    model: google/gemini-3-flash-preview
    base_url: null   # custom OpenAI-compatible endpoint
```

And the fallback model uses:

```yaml
fallback_model:
  provider: openrouter
  model: anthropic/claude-sonnet-4
  # base_url: http://localhost:8000/v1  # optional custom endpoint
```
All three — auxiliary, compression, fallback — work the same way: set `provider` to pick who handles the request, `model` to pick which model, and `base_url` to point at a custom endpoint (overrides `provider`).
### Provider Options for Auxiliary Tasks

These options apply to `auxiliary:`, `compression:`, and `fallback_model:` configs only — `"main"` is not a valid value for your top-level `model.provider`. For custom endpoints, use `provider: custom` in your `model:` section (see AI Providers).

| Provider | Description | Requirements |
|---|---|---|
| `"auto"` | Try providers in order until one works (default) | At least one provider configured |
| `"openrouter"` | Force OpenRouter | `OPENROUTER_API_KEY` |
| `"nous"` | Force Nous Portal | `hermes auth` |
| `"codex"` | Force Codex OAuth | `hermes model` → Codex |
| `"main"` | Use whatever provider the main agent uses (auxiliary tasks only) | Active main provider configured |
| `"anthropic"` | Force Anthropic native | `ANTHROPIC_API_KEY` or Claude Code credentials |
### Direct Endpoint Override

For any auxiliary task, setting `base_url` bypasses provider resolution entirely and sends requests directly to that endpoint:

```yaml
auxiliary:
  vision:
    base_url: "http://localhost:1234/v1"
    api_key: "local-key"
    model: "qwen2.5-vl"
```
`base_url` takes precedence over `provider`. Hermes uses the configured `api_key` for authentication, falling back to `OPENAI_API_KEY` if not set. It does not reuse `OPENROUTER_API_KEY` for custom endpoints.
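The precedence just described can be condensed into a few lines. A sketch only, with hypothetical names (`resolve_endpoint`, the dict keys mirror the yaml above); the real resolution lives inside Hermes:

```python
import os

def resolve_endpoint(task_cfg):
    """Sketch of the direct-endpoint override (hypothetical helper).

    An explicit base_url short-circuits provider resolution; the API key
    comes from the config, else OPENAI_API_KEY. OPENROUTER_API_KEY is
    deliberately never reused for custom endpoints.
    Returns (base_url, api_key), or None to fall through to the provider chain.
    """
    if task_cfg.get("base_url"):
        key = task_cfg.get("api_key") or os.environ.get("OPENAI_API_KEY", "")
        return task_cfg["base_url"], key
    return None  # no override: run normal provider auto-detection
```

So a task with both `base_url` and `api_key` set never touches the provider chain, while one with only `provider` set ignores this path entirely.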
## Context Compression Fallback

Context compression uses the `auxiliary.compression` config block to control which model and provider handles summarization:

```yaml
auxiliary:
  compression:
    provider: "auto"   # auto | openrouter | nous | main
    model: "google/gemini-3-flash-preview"
```
:::info Legacy migration
Older configs with `compression.summary_model` / `compression.summary_provider` / `compression.summary_base_url` are automatically migrated to `auxiliary.compression.*` on first load (config version 17).
:::
If no provider is available for compression, Hermes drops middle conversation turns without generating a summary rather than failing the session.
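The degraded path can be pictured roughly like this. Purely illustrative: `compress_history`, `summarize`, and the head/tail window sizes are all hypothetical, chosen only to show the shape of "summarize the middle, or drop it":

```python
def compress_history(turns, summarize=None, keep_head=2, keep_tail=6):
    """Sketch of graceful degradation (hypothetical names and window sizes).

    With a summarizer, the middle of the conversation becomes one summary
    turn; with no provider available, the middle is simply dropped so the
    session keeps running instead of failing.
    """
    if len(turns) <= keep_head + keep_tail:
        return turns  # short enough: nothing to compress
    head = turns[:keep_head]
    middle = turns[keep_head:-keep_tail]
    tail = turns[-keep_tail:]
    if summarize is not None:
        return head + [{"role": "system", "content": summarize(middle)}] + tail
    return head + tail  # degrade: drop middle turns, no summary
```

Either way the recent tail of the conversation survives intact; only the middle is condensed or sacrificed.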
## Delegation Provider Override

Subagents spawned by `delegate_task` do not use the primary fallback model. However, they can be routed to a different provider:model pair for cost optimization:

```yaml
delegation:
  provider: "openrouter"                  # override provider for all subagents
  model: "google/gemini-3-flash-preview"  # override model
  # base_url: "http://localhost:1234/v1"  # or use a direct endpoint
  # api_key: "local-key"
```
See Subagent Delegation for full configuration details.
## Cron Job Providers

Cron jobs run with whatever provider is configured at execution time. They do not support a fallback model. To use a different provider for cron jobs, configure `provider` and `model` overrides on the cron job itself:

```python
cronjob(
    action="create",
    schedule="every 2h",
    prompt="Check server status",
    provider="openrouter",
    model="google/gemini-3-flash-preview"
)
```
See Scheduled Tasks (Cron) for full configuration details.
## Summary

| Feature | Fallback Mechanism | Config Location |
|---|---|---|
| Main agent model | `fallback_model` in `config.yaml` — one-shot failover on errors | `fallback_model:` (top-level) |
| Vision | Auto-detection chain + internal OpenRouter retry | `auxiliary.vision` |
| Web extraction | Auto-detection chain + internal OpenRouter retry | `auxiliary.web_extract` |
| Context compression | Auto-detection chain, degrades to no-summary if unavailable | `auxiliary.compression` |
| Session search | Auto-detection chain | `auxiliary.session_search` |
| Skills hub | Auto-detection chain | `auxiliary.skills_hub` |
| MCP helpers | Auto-detection chain | `auxiliary.mcp` |
| Memory flush | Auto-detection chain | `auxiliary.flush_memories` |
| Approval classification | Auto-detection chain | `auxiliary.approval` |
| Title generation | Auto-detection chain | `auxiliary.title_generation` |
| Delegation | Provider override only (no automatic fallback) | `delegation.provider` / `delegation.model` |
| Cron jobs | Per-job provider override only (no automatic fallback) | Per-job `provider` / `model` |