
---
sidebar_position: 4
title: Provider Runtime Resolution
description: How Hermes resolves providers, credentials, API modes, and auxiliary models at runtime
---

# Provider Runtime Resolution

Hermes has a shared provider runtime resolver used across:

- CLI
- gateway
- cron jobs
- ACP
- auxiliary model calls

Primary implementation:

- `hermes_cli/runtime_provider.py` — credential resolution, `_resolve_custom_runtime()`
- `hermes_cli/auth.py` — provider registry, `resolve_provider()`
- `hermes_cli/model_switch.py` — shared `/model` switch pipeline (CLI + gateway)
- `agent/auxiliary_client.py` — auxiliary model routing
- `providers/` — ABC + registry entry points (`ProviderProfile`, `register_provider`, `get_provider_profile`, `list_providers`)
- `plugins/model-providers/<name>/` — per-provider plugins (bundled) that declare `api_mode`, `base_url`, `env_vars`, and `fallback_models` and register themselves into the registry on first access. User plugins at `$HERMES_HOME/plugins/model-providers/<name>/` override bundled ones of the same name.

`get_provider_profile()` in `providers/` returns a `ProviderProfile` for a given provider id. `runtime_provider.py` calls this at resolution time to get the canonical `base_url`, `env_vars` priority list, `api_mode`, and `fallback_models` without needing to duplicate that data in multiple files. Adding a new plugin under `plugins/model-providers/<your-provider>/` (or `$HERMES_HOME/plugins/model-providers/<your-provider>/`) that calls `register_provider()` is enough for `runtime_provider.py` to pick it up — no branch needed in the resolver itself.
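The registry pattern can be sketched with stand-ins. This is not the real `providers/` code: the `acme` plugin is hypothetical, and any `ProviderProfile` field beyond `api_mode`, `base_url`, `env_vars`, and `fallback_models` is an assumption.

```python
from dataclasses import dataclass, field

# Stand-in for the real ProviderProfile in providers/ — field names beyond
# api_mode, base_url, env_vars, and fallback_models are illustrative only.
@dataclass
class ProviderProfile:
    provider_id: str
    api_mode: str                                         # e.g. "chat_completions"
    base_url: str = ""
    env_vars: list[str] = field(default_factory=list)     # checked in priority order
    fallback_models: list[str] = field(default_factory=list)

_REGISTRY: dict[str, ProviderProfile] = {}

def register_provider(profile: ProviderProfile) -> None:
    # Later registrations win, which is how a user plugin of the same
    # name can override a bundled one.
    _REGISTRY[profile.provider_id] = profile

def get_provider_profile(provider_id: str) -> ProviderProfile:
    return _REGISTRY[provider_id]

# What a hypothetical plugins/model-providers/acme/ would do at load time:
register_provider(ProviderProfile(
    provider_id="acme",
    api_mode="chat_completions",
    base_url="https://api.acme.example/v1",
    env_vars=["ACME_API_KEY", "OPENAI_API_KEY"],
    fallback_models=["acme-small"],
))
```

Because the resolver only ever asks the registry, a new provider needs no edits to `runtime_provider.py` itself.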

If you are trying to add a new first-class inference provider, read Adding Providers and the Model Provider Plugin guide alongside this page.

## Resolution precedence

At a high level, provider resolution uses:

1. explicit CLI/runtime request
2. `config.yaml` model/provider config
3. environment variables
4. provider-specific defaults or auto resolution

That ordering matters because Hermes treats the saved model/provider choice as the source of truth for normal runs. This prevents a stale shell export from silently overriding the endpoint a user last selected in `hermes model`.
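A toy version of that precedence chain, with hypothetical parameter names (the real resolver in `hermes_cli/runtime_provider.py` is far more involved; `OPENAI_BASE_URL` is just the obvious env var to illustrate with):

```python
def resolve_base_url(cli_arg, config, env, default):
    """Toy precedence chain for the ordering above; not the real resolver."""
    for candidate in (
        cli_arg,                        # 1. explicit CLI/runtime request
        config.get("base_url"),         # 2. config.yaml model/provider config
        env.get("OPENAI_BASE_URL"),     # 3. environment variables
    ):
        if candidate:
            return candidate
    return default                      # 4. provider-specific default
```

Note that the config-saved value beats the environment, which is exactly the stale-shell-export behavior described above.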

## Providers

Current provider families include:

- AI Gateway (Vercel)
- OpenRouter
- Nous Portal
- OpenAI Codex
- Copilot / Copilot ACP
- Anthropic (native)
- Google / Gemini
- Alibaba / DashScope
- DeepSeek
- Z.AI
- Kimi / Moonshot
- MiniMax
- MiniMax China
- Kilo Code
- Hugging Face
- OpenCode Zen / OpenCode Go
- Custom (`provider: custom`) — first-class provider for any OpenAI-compatible endpoint
- Named custom providers (`custom_providers` list in `config.yaml`)

## Output of runtime resolution

The runtime resolver returns data such as:

- `provider`
- `api_mode`
- `base_url`
- `api_key`
- `source`
- provider-specific metadata like expiry/refresh info
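As an illustrative shape only (the real return type and field set may differ):

```python
from dataclasses import dataclass, field

@dataclass
class RuntimeResolution:
    """Illustrative shape of the resolver's output; not the actual type."""
    provider: str
    api_mode: str
    base_url: str
    api_key: str
    source: str                                    # where the credential came from
    metadata: dict = field(default_factory=dict)   # e.g. token expiry/refresh info
```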

## Why this matters

This resolver is the main reason Hermes can share auth/runtime logic between:

- `hermes chat`
- gateway message handling
- cron jobs running in fresh sessions
- ACP editor sessions
- auxiliary model tasks

## AI Gateway

Set `AI_GATEWAY_API_KEY` in `~/.hermes/.env` and run with `--provider ai-gateway`. Hermes fetches available models from the gateway's `/models` endpoint, filtering to language models with tool-use support.

## OpenRouter, AI Gateway, and custom OpenAI-compatible base URLs

Hermes contains logic to avoid leaking the wrong API key to a custom endpoint when multiple provider keys exist (e.g. `OPENROUTER_API_KEY`, `AI_GATEWAY_API_KEY`, and `OPENAI_API_KEY`).

Each provider's API key is scoped to its own base URL:

- `OPENROUTER_API_KEY` is only sent to `openrouter.ai` endpoints
- `AI_GATEWAY_API_KEY` is only sent to `ai-gateway.vercel.sh` endpoints
- `OPENAI_API_KEY` is used for custom endpoints and as a fallback
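The scoping rule can be sketched like this (hypothetical helper, not the actual Hermes function):

```python
from urllib.parse import urlparse

def select_api_key(base_url, env):
    """Toy version of the scoping rule: each key is only sent to its own host."""
    host = urlparse(base_url).hostname or ""
    if host.endswith("openrouter.ai") and env.get("OPENROUTER_API_KEY"):
        return env["OPENROUTER_API_KEY"]
    if host.endswith("ai-gateway.vercel.sh") and env.get("AI_GATEWAY_API_KEY"):
        return env["AI_GATEWAY_API_KEY"]
    return env.get("OPENAI_API_KEY")   # custom endpoints and the fallback case
```

A local server at `http://localhost:8000/v1` matches neither host, so it only ever sees `OPENAI_API_KEY` — never a leaked OpenRouter or AI Gateway key.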

Hermes also distinguishes between:

- a real custom endpoint selected by the user
- the OpenRouter fallback path used when no custom endpoint is configured

That distinction is especially important for:

- local model servers
- non-OpenRouter/non-AI Gateway OpenAI-compatible APIs
- switching providers without re-running setup
- config-saved custom endpoints that should keep working even when `OPENAI_BASE_URL` is not exported in the current shell

## Native Anthropic path

Anthropic is not just "via OpenRouter" anymore.

When provider resolution selects anthropic, Hermes uses:

- `api_mode = anthropic_messages`
- the native Anthropic Messages API
- `agent/anthropic_adapter.py` for translation

Credential resolution for native Anthropic now prefers refreshable Claude Code credentials over copied env tokens when both are present. In practice that means:

- Claude Code credential files are treated as the preferred source when they include refreshable auth
- manual `ANTHROPIC_TOKEN` / `CLAUDE_CODE_OAUTH_TOKEN` values still work as explicit overrides
- Hermes preflights Anthropic credential refresh before native Messages API calls
- Hermes still retries once on a 401 after rebuilding the Anthropic client, as a fallback path
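One way to picture that ordering is the sketch below. It is an assumption-heavy toy, not the actual credential resolver: the function name, the dict shape of the credential file, and the returned source labels are all invented for illustration.

```python
def pick_anthropic_credential(claude_code_creds, env):
    """Toy ordering for the preference above; not the real resolver."""
    # Refreshable Claude Code credential files win when present
    if claude_code_creds and claude_code_creds.get("refresh_token"):
        return ("claude-code", claude_code_creds["access_token"])
    # Manual tokens still work when no refreshable credentials exist
    for var in ("ANTHROPIC_TOKEN", "CLAUDE_CODE_OAUTH_TOKEN"):
        if env.get(var):
            return ("env:" + var, env[var])
    return (None, None)
```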

## OpenAI Codex path

Codex uses a separate Responses API path:

- `api_mode = codex_responses`
- dedicated credential resolution and auth store support

## Auxiliary model routing

Auxiliary tasks such as:

- vision
- web extraction summarization
- context compression summaries
- session search summarization
- skills hub operations
- MCP helper operations
- memory flushes

can use their own provider/model routing rather than the main conversational model.

When an auxiliary task is configured with provider `main`, Hermes resolves that through the same shared runtime path as normal chat. In practice that means:

- env-driven custom endpoints still work
- custom endpoints saved via `hermes model` / `config.yaml` also work
- auxiliary routing can tell the difference between a real saved custom endpoint and the OpenRouter fallback
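A sketch of the `main` dispatch, with hypothetical names (`resolve_aux_provider`, `task_cfg`, and the callback are invented; only the "provider `main` reuses the shared path" behavior comes from the text):

```python
def resolve_aux_provider(task_cfg, resolve_main):
    """Toy dispatch: provider 'main' defers to the shared chat resolution."""
    if task_cfg.get("provider", "main") == "main":
        return resolve_main()              # same shared runtime path as chat
    return (task_cfg["provider"], task_cfg.get("model"))
```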

## Fallback models

Hermes supports a configured fallback model/provider pair, allowing runtime failover when the primary model encounters errors.

### How it works internally

1. **Storage:** `AIAgent.__init__` stores the `fallback_model` dict and sets `_fallback_activated = False`.
2. **Trigger points:** `_try_activate_fallback()` is called from three places in the main retry loop in `run_agent.py`:
   - After max retries on invalid API responses (`None` choices, missing content)
   - On non-retryable client errors (HTTP 401, 403, 404)
   - After max retries on transient errors (HTTP 429, 500, 502, 503)
3. **Activation flow** (`_try_activate_fallback`):
   - Returns `False` immediately if already activated or not configured
   - Calls `resolve_provider_client()` from `auxiliary_client.py` to build a new client with proper auth
   - Determines `api_mode`: `codex_responses` for `openai-codex`, `anthropic_messages` for `anthropic`, `chat_completions` for everything else
   - Swaps in-place: `self.model`, `self.provider`, `self.base_url`, `self.api_mode`, `self.client`, `self._client_kwargs`
   - For `anthropic` fallback: builds a native Anthropic client instead of an OpenAI-compatible one
   - Re-evaluates prompt caching (enabled for Claude models on OpenRouter)
   - Sets `_fallback_activated = True` — prevents firing again
   - Resets the retry count to 0 and continues the loop
4. **Config flow:**
   - CLI: `cli.py` reads `CLI_CONFIG["fallback_model"]` → passes to `AIAgent(fallback_model=...)`
   - Gateway: `gateway/run.py:_load_fallback_model()` reads `config.yaml` → passes to `AIAgent`
   - Validation: both `provider` and `model` keys must be non-empty, or fallback is disabled
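The one-shot semantics above can be condensed into a stand-in class (not the real `AIAgent` code; the client-building and in-place swap are elided as comments):

```python
class FallbackState:
    """Toy one-shot fallback mirroring the flow above; not the real AIAgent."""

    def __init__(self, fallback_model):
        # fallback_model is a {"provider": ..., "model": ...} dict or None
        self.fallback_model = fallback_model
        self._fallback_activated = False

    def try_activate(self):
        if self._fallback_activated or not self.fallback_model:
            return False
        # Validation: both keys must be non-empty, or fallback is disabled
        if not (self.fallback_model.get("provider") and self.fallback_model.get("model")):
            return False
        # The real code builds a new client via resolve_provider_client(),
        # picks an api_mode, and swaps model/provider/base_url in place here.
        self._fallback_activated = True    # one-shot: never fires again
        return True
```

The flag makes fallback fire at most once per agent lifetime, so a failing fallback provider cannot ping-pong with the primary.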

### What does NOT support fallback

- Subagent delegation (`tools/delegate_tool.py`): subagents inherit the parent's provider but not the fallback config
- Auxiliary tasks: use their own independent provider auto-detection chain (see Auxiliary model routing above)

Cron jobs **do** support fallback: `run_job()` reads `fallback_providers` (or legacy `fallback_model`) from `config.yaml` and passes it to `AIAgent(fallback_model=...)`, matching the gateway's `_load_fallback_model()` pattern. See Cron Internals.

## Test coverage

See `tests/test_fallback_model.py` for comprehensive tests covering all supported providers, one-shot semantics, and edge cases.