* docs(providers): add model-provider-plugin authoring guide + fix stale refs
New docs:
- website/docs/developer-guide/model-provider-plugin.md — full authoring
guide (directory layout, minimal example, ProviderProfile fields,
overridable hooks, user overrides, api_mode selection, auth types,
testing, pip distribution)
- Wired into website/sidebars.ts under 'Extending'
- Cross-references added in:
- guides/build-a-hermes-plugin.md (tip block)
- developer-guide/adding-providers.md
- developer-guide/provider-runtime.md
User guide:
- user-guide/features/plugins.md: Plugin types table grows from 3 to 4
with 'Model providers' row
Stale comment cleanup (providers/*.py → plugins/model-providers/<name>/):
- hermes_cli/main.py:_is_profile_api_key_provider docstring
- hermes_cli/doctor.py:_build_apikey_providers_list docstring
- hermes_cli/auth.py: PROVIDER_REGISTRY + alias auto-extension comments
- hermes_cli/models.py: CANONICAL_PROVIDERS auto-extension comment
AGENTS.md:
- Project-structure tree: added plugins/model-providers/ row
- New section: 'Model-provider plugins' explaining discovery, override
semantics, PluginManager integration, kind auto-coerce heuristic
Verified: docusaurus build succeeds, new page renders, all 3 cross-links
resolve. 347/347 targeted tests pass (tests/providers/,
tests/hermes_cli/test_plugins.py, tests/hermes_cli/test_runtime_provider_resolution.py,
tests/run_agent/test_provider_parity.py).
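For orientation, the guide's minimal example has roughly this shape (a sketch only — the field names are the ProviderProfile fields listed above, but the import path and class-attribute declaration style are assumptions, not the literal doc content):

```python
# Hypothetical plugins/model-providers/example/provider.py — illustrative
# only; the import path and declaration style are assumptions.
from providers import ProviderProfile

class ExampleProfile(ProviderProfile):
    name = "example"
    display_name = "Example AI"
    api_mode = "chat_completions"          # one of the supported api_modes
    aliases = ("ex",)
    env_vars = ("EXAMPLE_API_KEY",)        # where the API key is read from
    base_url = "https://api.example.com/v1"
    auth_type = "api_key"
    fallback_models = ("example-large",)   # used when live discovery fails
```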
* docs(plugins): add 'pluggable interfaces at a glance' maps to plugins.md + build-a-hermes-plugin
Devs landing on either the user-guide plugin page or the build-a-plugin
guide now get an upfront table of every distinct pluggable surface with
a link to the right authoring doc. Previously they'd have to read the
full general-plugin guide to discover that model providers / platforms
/ memory / context engines are separate systems.
user-guide/features/plugins.md:
- New 'Pluggable interfaces — where to go for each' section below the
existing 4-kinds table
- 10 rows covering every register_* surface (tool, hook, slash command,
CLI subcommand, skill, model provider, platform, memory, context
engine, image-gen)
- Explicit note: TTS/STT are NOT plugin-extensible yet — documented
with a pointer to the current config.yaml 'command providers' pattern
and a note that register_tts_provider()/register_stt_provider() may
come later
guides/build-a-hermes-plugin.md:
- New :::info 'Not sure which guide you need?' map at the top so devs
see all pluggable interfaces before investing in this 737-line
general-plugin walkthrough
- Existing bottom :::tip expanded to include platform adapters alongside
model/memory/context plugins
Verified:
- All 8 cross-doc links in the new plugins.md table resolve in a
docusaurus build (SUCCESS, no new broken links)
- TTS link corrected (features/voice → features/tts; latter exists)
- Pre-existing broken links/anchors (cron-script-only, llms.txt,
adding-platform-adapters#step-by-step-checklist) are unchanged
* docs(plugins): correct TTS/STT pluggability — they ARE plugins (command-providers)
Previous commit incorrectly said TTS/STT 'aren't plugin-extensible'. They
are, via the config-driven command-provider pattern — any CLI that reads
text and writes audio (or vice versa for STT) is automatically a plugin
with zero Python. The tts.md docs cover this extensively and I missed it.
plugins.md:
- TTS row: 'Config-driven (not a Python plugin)', points at
tts.md#custom-command-providers
- STT row: points at tts.md#voice-message-transcription-stt (STT docs
live in tts.md despite the filename)
- Expanded note: TTS/STT use config-driven shell-command templates as
their plugin surface (full tts.providers.<name> registry for TTS;
HERMES_LOCAL_STT_COMMAND escape hatch for STT)
- Any CLI that reads/writes files is automatically a plugin — no Python
register_* API needed
- Future register_tts_provider()/register_stt_provider() hooks mentioned
as nice-to-have for SDK/streaming cases, not as the primary story
build-a-hermes-plugin.md:
- Same map update: TTS/STT rows explicit, footer note corrected
Verified:
- tts.md anchors (custom-command-providers, voice-message-transcription-stt)
exist and resolve in docusaurus build (SUCCESS, no new broken links)
* docs(plugins): expand pluggable interfaces table with MCP / event hooks / shell hooks / skill taps
Broadened the scope beyond Python register_* hooks. Hermes has MULTIPLE
plugin-style extension surfaces; they're now all in one table instead of
being scattered across feature docs.
Added rows for:
- **MCP servers** — config.yaml mcp_servers.<name> auto-registers external
tools from any MCP server. Huge extensibility surface, previously not
linked from the plugin map.
- **Gateway event hooks** — drop HOOK.yaml + handler.py into
  ~/.hermes/hooks/<name>/ to fire on gateway:startup, session:*, agent:*,
  command:* events. Separate from Python plugin hooks (a handler sketch
  follows this list).
- **Shell hooks** — hooks: block in config.yaml runs shell commands on
events (notifications, auditing, etc.).
- **Skill sources (taps)** — hermes skills tap add <repo> to pull in new
skill registries beyond the built-in sources.
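To make the gateway-event-hook row concrete, a drop-in handler might look like the following (a sketch: the directory layout and event names come from the docs above; the entry-point name and payload shape are assumptions):

```python
# ~/.hermes/hooks/notify-on-startup/handler.py — hypothetical example.
# A sibling HOOK.yaml declares which events (gateway:startup, session:*,
# agent:*, command:*) this handler subscribes to.
def handle(event: str, payload: dict) -> None:
    # entry-point name and payload shape are assumptions
    if event == "gateway:startup":
        print(f"gateway started: {payload.get('version', 'unknown')}")
```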
Both docs updated:
- user-guide/features/plugins.md: table column renamed to 'How' (mixes
Python API + config-driven + drop-in-dir surfaces accurately)
- guides/build-a-hermes-plugin.md: :::info map at top mirrors the new
surfaces with a forward-link to the consolidated table
Note block rewritten: instead of singling out TTS/STT as the 'different
style' exception, now honestly describes that Hermes deliberately
supports three plugin styles — Python APIs, config-driven commands, and
drop-in manifest directories — and devs should pick the one that fits
their integration.
Not included (considered and rejected):
- Transport layer (register_transport) — internal, not user-facing
- Tool-call parsers — internal, VLLM phase-2 thing
- Cloud browser providers — hardcoded registry, not drop-in yet
- Terminal backends — hardcoded if/elif, not drop-in yet
- Skill sources (the ABC) — hardcoded list, only taps are user-extensible
Verified:
- All 5 new anchors resolve (gateway-event-hooks, shell-hooks, skills-hub,
custom-command-providers, voice-message-transcription-stt)
- Docusaurus build SUCCESS, zero new broken links
- Same 3 pre-existing broken links on main (cron-script-only, llms.txt,
adding-platform-adapters#step-by-step-checklist)
* docs(plugins): cover every pluggable surface in both the overview and how-to
Both plugins.md and build-a-hermes-plugin.md now cover every extension
surface end-to-end — general plugin APIs, specialized plugin types,
config-driven surfaces \u2014 with concrete authoring patterns for each.
plugins.md:
- 'What plugins can do' table grows from 9 rows (general ctx.register_*
only) to 14 rows covering register_platform, register_image_gen_provider,
register_context_engine, MemoryProvider subclass, register_provider
(model). Each row links to its full authoring guide.
- New 'Plugin sub-categories' section under Plugin Discovery explains
how plugins/platforms/, plugins/image_gen/, plugins/memory/,
plugins/context_engine/, plugins/model-providers/ are routed to
different loaders — PluginManager vs the per-category own-loader
systems.
- Explicit mention of user-override semantics at
~/.hermes/plugins/model-providers/ and ~/.hermes/plugins/memory/.
build-a-hermes-plugin.md:
- New '## Specialized plugin types' section (5 sub-sections):
  - Model provider plugins — ProviderProfile + plugin.yaml example,
    auto-wiring summary, link to full guide
  - Platform plugins — BasePlatformAdapter + register_platform() skeleton
  - Memory provider plugins — MemoryProvider subclass example
  - Context engine plugins — ContextEngine subclass example
  - Image-generation backends — ImageGenProvider + kind: backend example
- New '## Non-Python extension surfaces' section (5 sub-sections):
  - MCP servers — config.yaml mcp_servers.<name> example
  - Gateway event hooks — HOOK.yaml + handler.py example
  - Shell hooks — hooks: block in config.yaml example
  - Skill sources (taps) — hermes skills tap add example
  - TTS / STT command templates — tts.providers.<name> with type: command
- Distribute via pip / NixOS promoted from ### to ## (they were orphaned
after the reorganization)
Each specialized / non-Python section has a concrete, copy-pasteable
example plus a 'Full guide:' link to the authoritative doc. Devs arriving
at the build-a-hermes-plugin guide now see every extension surface at
their disposal, not just the general tool/hook/slash-command surface.
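As one illustration of the pattern those sub-sections follow, a platform-plugin skeleton might look like this (BasePlatformAdapter and register_platform() are named in the docs above; the import path, method names, and signatures are assumptions):

```python
# Hypothetical platform-plugin skeleton — import path and method
# signatures are assumptions.
from hermes_cli.plugins import BasePlatformAdapter, register_platform

class MattermostAdapter(BasePlatformAdapter):
    name = "mattermost"

    async def start(self) -> None:
        ...  # connect and begin forwarding inbound messages to the gateway

    async def send(self, channel: str, text: str) -> None:
        ...  # deliver an outbound message to the platform

register_platform("mattermost", MattermostAdapter)
```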
Verified:
- Docusaurus build SUCCESS, zero new broken links
- All new cross-links (developer-guide/model-provider-plugin,
adding-platform-adapters, memory-provider-plugin, context-engine-plugin,
user-guide/features/mcp, skills#skills-hub, hooks#gateway-event-hooks,
hooks#shell-hooks, tts#custom-command-providers,
tts#voice-message-transcription-stt) resolve
- Same 3 pre-existing broken links on main (cron-script-only, llms.txt,
adding-platform-adapters#step-by-step-checklist)
* docs(plugins): fix opt-in inconsistency — not every plugin is gated
The 'Every plugin is disabled by default' statement was wrong. Several
plugin categories intentionally bypass plugins.enabled:
- Bundled platform plugins (IRC, Teams) auto-load so shipped gateway
channels are available out of the box. Activation per channel is via
gateway.platforms.<name>.enabled.
- Bundled backends (plugins/image_gen/*) auto-load so the default
backend 'just works'. Selection via <category>.provider config.
- Memory providers are all discovered; one is active via memory.provider.
- Context engines are all discovered; one is active via context.engine.
- Model providers: all 33 discovered at first get_provider_profile();
user picks via --provider / config.
The plugins.enabled allow-list specifically gates:
- Standalone plugins (general tools/hooks/slash commands)
- User-installed backends
- User-installed platforms (third-party gateway adapters)
- Pip entry-point backends
This matches the actual code in hermes_cli/plugins.py:737, where the
bundled+backend/platform check bypasses the allow-list.
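The shape of that check, illustratively (not the literal code at plugins.py:737):

```python
from dataclasses import dataclass

@dataclass
class PluginInfo:
    name: str
    category: str   # "platform", "backend", "standalone", ...
    bundled: bool

def should_load(p: PluginInfo, allowlist: set[str]) -> bool:
    # Bundled platform/backend plugins bypass the plugins.enabled gate;
    # their activation lives in gateway.platforms.<name>.enabled or
    # <category>.provider instead (see the lists above).
    if p.bundled and p.category in ("platform", "backend"):
        return True
    return p.name in allowlist
```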
Rewrote '## Plugins are opt-in' to:
- Retitle to 'Plugins are opt-in (with a few exceptions)'
- Narrow the opening claim to 'General plugins and user-installed backends
  are disabled by default'
- Add a 'What the allow-list does NOT gate' subsection with a full table
  of the categories that bypass the gate and how each is activated instead
- Fix the migration-section wording (bundled platform/backend plugins
  never needed grandfathering)
Verified: docusaurus build SUCCESS, zero new broken links.
Endpoint validated over 6 conversational turns with tool calls (9 API
calls, 3 tool calls, 0 failures) and an 8-request burst (8/8 ok,
0 rate limits). Latency ~5-10s/call — slower than grok-4.20 but
expected for a reasoning model.
- hermes_cli/models.py: add to OPENROUTER_MODELS and _PROVIDER_MODELS['nous']
- website/static/api/model-catalog.json: regenerated
Endpoint re-tested over 6 conversational turns (9 API calls, 3 tool calls)
and an 8-request burst — no rate limits, no errors, ~2-3s latency. The
historical rate-limit issues that caused its removal are gone.
- hermes_cli/models.py: add to OPENROUTER_MODELS and _PROVIDER_MODELS['nous']
- website/static/api/model-catalog.json: regenerated via build_model_catalog.py
Introduces providers/ package — single source of truth for every
inference provider. Adding a simple api-key provider now requires one
providers/<name>.py file with zero edits anywhere else.
What this PR ships:
- providers/ package (ProviderProfile ABC + 33 profiles across 4 api_modes)
- ProviderProfile declarative fields: name, api_mode, aliases, display_name,
env_vars, base_url, models_url, auth_type, fallback_models, hostname,
default_headers, fixed_temperature, default_max_tokens, default_aux_model
- 4 overridable hooks: prepare_messages, build_extra_body,
  build_api_kwargs_extras, fetch_models (one override sketched at the
  end of this entry)
- chat_completions.build_kwargs: profile path via _build_kwargs_from_profile,
legacy flag path retained for lmstudio/tencent-tokenhub (which have
session-aware reasoning probing that doesn't map cleanly to hooks yet)
- run_agent.py: profile path for all registered providers; legacy path
variable scoping fixed (all flags defined before branching)
- Auto-wires: auth.PROVIDER_REGISTRY, models.CANONICAL_PROVIDERS,
doctor health checks, config.OPTIONAL_ENV_VARS, model_metadata._URL_TO_PROVIDER
- GeminiProfile: thinking_config translation (native + openai-compat nested)
- New tests/providers/ (79 tests covering profile declarations, transport
parity, hook overrides, e2e kwargs assembly)
Deltas vs original PR (salvaged onto current main):
- Added profiles: alibaba-coding-plan, azure-foundry, minimax-oauth
(were added to main since original PR)
- Skipped profiles: lmstudio, tencent-tokenhub stay on legacy path (their
reasoning_effort probing has no clean hook equivalent yet)
- Removed lmstudio alias from custom profile (it's a separate provider now)
- Skipped openrouter/custom from PROVIDER_REGISTRY auto-extension
(resolve_provider special-cases them; adding breaks runtime resolution)
- runtime_provider: profile.api_mode only as fallback when URL detection
finds nothing (was breaking minimax /v1 override)
- Preserved main's legacy-path improvements: deepseek reasoning_content
preserve, gemini Gemma skip, OpenRouter response caching, Anthropic 1M
beta recovery, etc.
- Kept agent/copilot_acp_client.py in place (rejected PR's relocation —
main has 7 fixes landed since; relocation would revert them)
- _API_KEY_PROVIDER_AUX_MODELS alias kept for backward compat with existing
test imports
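A hook override on a profile might look like this (a sketch: the four hook names are from the list above; the signature and body are assumptions):

```python
from providers import ProviderProfile  # assumed import path

class ReasoningProfile(ProviderProfile):
    name = "reasoning-example"
    api_mode = "chat_completions"

    def build_extra_body(self, model: str) -> dict:
        # hypothetical override: surface a provider-specific reasoning knob
        if "reasoner" in model:
            return {"reasoning_effort": "high"}
        return {}
```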
Co-authored-by: kshitijk4poor <82637225+kshitijk4poor@users.noreply.github.com>
Closes #14418
models.dev appends :cloud and -cloud suffixes to Ollama Cloud model IDs
(e.g. kimi-k2.6:cloud, qwen3-coder:480b-cloud) that the live Ollama Cloud
API does not use. Without normalisation, these suffixed IDs bypass the
dedup check and appear alongside the correct clean IDs, causing 400/404
errors when users select them in /model or hermes model.
Add _strip_ollama_cloud_suffix() and apply it to mdev entries before the
dedup merge in fetch_ollama_cloud_models() so all model IDs stored in the
disk cache use the canonical form the API accepts.
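The normaliser reduces to something like this (suffixes taken from the examples above; the real helper may differ in detail):

```python
import re

def _strip_ollama_cloud_suffix(model_id: str) -> str:
    # kimi-k2.6:cloud        -> kimi-k2.6
    # qwen3-coder:480b-cloud -> qwen3-coder:480b
    return re.sub(r"(?::cloud|-cloud)$", "", model_id)
```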
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Per https://platform.claude.com/docs/en/build-with-claude/fast-mode:
"Fast mode is currently supported on Opus 4.6 only. Sending speed: fast
with an unsupported model returns an error."
Pre-fix, _is_anthropic_fast_model() returned True for any claude-* model,
so /fast on Opus 4.7 (or Sonnet/Haiku) would persist agent.service_tier=fast
in config.yaml and the adapter would inject extra_body["speed"] = "fast"
on every subsequent request. Opus 4.7 returns:
HTTP 400: 'claude-opus-4-7' does not support the `speed` parameter.
This wedged sessions across model upgrades (a user who ran /fast on Opus 4.6
and later switched the default model to 4.7 hit a hard 400 on every turn
until they manually edited config.yaml).
Changes:
- _is_anthropic_fast_model: gate on "opus-4-6" / "opus-4.6" only
- anthropic_adapter: add _supports_fast_mode predicate as defensive guard
so stale request_overrides on an unsupported model are dropped silently
instead of 400'ing
- Tests: flip the assertions that mirrored the bug (Sonnet/Haiku/Opus 4.7
asserting fast-mode support) to match the documented API contract
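The narrowed gate is essentially a two-spelling substring check (per the doc quote above, only Opus 4.6 supports speed: fast):

```python
def _is_anthropic_fast_model(model: str) -> bool:
    # sketch of the post-fix predicate; the real one may normalise further
    m = model.lower()
    return "opus-4-6" in m or "opus-4.6" in m
```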
Two related fixes for custom_providers model switching:
1. validate_requested_model() now recognizes custom:<name> slugs
(e.g. custom:volcengine) as custom endpoints, not generic providers.
Previously only the bare 'custom' slug matched the relaxed validation
branch, causing model validation to fail with 'not found in provider
listing' for all named custom providers.
2. switch_model() now consults the custom_providers list when deciding
whether to override a validation rejection. If the requested model
matches the entry's 'model' field or any key in its 'models' dict,
the switch is accepted even when the remote /v1/models endpoint does
not list it.
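Illustratively, the two checks reduce to (sketches, not the literal code):

```python
def is_custom_endpoint(provider_slug: str) -> bool:
    # fix 1: named custom providers count as custom endpoints too
    return provider_slug == "custom" or provider_slug.startswith("custom:")

def matches_custom_entry(entry: dict, requested: str) -> bool:
    # fix 2: accept the switch when the model appears in the saved entry,
    # even if the remote /v1/models listing omits it
    return requested == entry.get("model") or requested in entry.get("models", {})
```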
Both changes are covered by existing tests (86 passed).
The Vercel AI Gateway entry was sitting at position 4 of the `hermes model` list, ahead of Anthropic,
OpenAI, Xiaomi, and other first-class API providers. Move it to the end of
CANONICAL_PROVIDERS and drop the "(200+ models, $5 free credit, no markup)"
parenthetical so the entry just reads "Vercel AI Gateway".
Keep context-1m-2025-08-07 in OAuth requests by default so 1M-capable
subscriptions retain full context. When Anthropic rejects a request with
400 'long context beta is not yet available for this subscription',
disable the beta for the rest of the session, rebuild the client, and
retry once.
Addresses #17680 (thanks @JayGwod for the clean reproduction) without
forcing every OAuth user off the 1M context window.
Changes:
- agent/error_classifier.py: new FailoverReason.oauth_long_context_beta_forbidden;
pattern matches 400 + 'long context beta' + 'not yet available'. Narrow
enough that the existing 429 tier-gate pattern keeps its own reason.
- agent/anthropic_adapter.py: _common_betas_for_base_url,
build_anthropic_client, build_anthropic_kwargs gain drop_context_1m_beta
kwarg. Default=False (1M stays). OAuth OAUTH_ONLY_BETAS unchanged.
- agent/transports/anthropic.py: build_kwargs forwards the flag.
- run_agent.py: self._oauth_1m_beta_disabled flag, retry-once guard,
recovery branch next to the image-shrink path. _rebuild_anthropic_client
honors the flag. The main build_kwargs call site threads it through for
fast-mode extra_headers.
- hermes_cli/doctor.py, hermes_cli/models.py: sibling OAuth /v1/models
probes get the same reactive retry — previously they'd falsely report
the Anthropic API as unreachable for affected subscriptions.
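The retry-once guard has roughly this shape (a sketch — the flag and reason names mirror the changes above; the surrounding client plumbing is elided):

```python
class OAuthBetaRecovery:
    def __init__(self, rebuild_client):
        self._oauth_1m_beta_disabled = False
        self._rebuild = rebuild_client

    def should_retry(self, failover_reason: str) -> bool:
        if (failover_reason == "oauth_long_context_beta_forbidden"
                and not self._oauth_1m_beta_disabled):
            self._oauth_1m_beta_disabled = True  # sticky for the session
            self._rebuild(drop_context_1m_beta=True)
            return True                          # retry exactly once
        return False
```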
Tests: 2190 tests/agent/ + 94 adjacent integration tests pass. New unit
tests cover the classifier pattern (including the collision guard against
the 429 tier-gate) and the drop_context_1m_beta adapter behavior (default
keeps 1M, flag strips only 1M while preserving every other beta).
* feat(gateway): refine Platform._missing_ and platform-connected dispatch
Restricts plugin-name acceptance to the bundled plugin scan + registry
(arbitrary strings can no longer pollute the enum), pulls per-platform connectivity
checks into a _PLATFORM_CONNECTED_CHECKERS lambda map with a clean
_is_platform_connected method, and adds tests covering the checker map,
plugin platform interface, and IRC setup wizard.
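The checker map replaces per-platform branches with a table lookup, roughly (platform names and checker bodies here are illustrative):

```python
_PLATFORM_CONNECTED_CHECKERS = {
    "irc":   lambda gw: getattr(gw, "irc_client", None) is not None,
    "teams": lambda gw: getattr(gw, "teams_app", None) is not None,
}

def _is_platform_connected(gateway, platform: str) -> bool:
    checker = _PLATFORM_CONNECTED_CHECKERS.get(platform)
    return bool(checker and checker(gateway))
```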
Wire MiniMax-M2.7 and MiniMax-M2.7-highspeed into the model catalog,
CLI model picker, and agent auxiliary/metadata subsystems.
Changes:
- hermes_cli/models.py:
- Add 'minimax-oauth' to _PROVIDER_MODELS with MiniMax-M2.7 and
MiniMax-M2.7-highspeed
- Add ProviderEntry('minimax-oauth', 'MiniMax (OAuth)', ...) to
CANONICAL_PROVIDERS near existing minimax entries
- Add aliases: minimax-portal, minimax-global, minimax_oauth in
_PROVIDER_ALIASES
- hermes_cli/main.py:
- Add 'minimax-oauth' to provider_labels dict
- Insert 'minimax-oauth' into providers list in
select_provider_and_model() near the other minimax entries
- Add 'minimax-oauth' to --provider argparse choices
- Add _model_flow_minimax_oauth() function: ensures login via
_login_minimax_oauth(), resolves runtime credentials, prompts for
model selection, saves model choice and config
- Add dispatch elif branch for selected_provider == 'minimax-oauth'
- agent/auxiliary_client.py:
- Add 'minimax-oauth': 'MiniMax-M2.7-highspeed' to
_API_KEY_PROVIDER_AUX_MODELS
- Add 'minimax-oauth' to _ANTHROPIC_COMPAT_PROVIDERS set
- agent/model_metadata.py:
- Add 'minimax-oauth' to _PROVIDER_PREFIXES frozenset
- MiniMax-M2.7 context length (200_000) already covered by the
existing 'minimax' substring match in DEFAULT_CONTEXT_LENGTHS
- Remove dead _lmstudio_loaded_context attribute from run_agent.py (set
but never read — the loaded context is pushed to context_compressor.update_model
which is the actual consumer)
- Cache empty reasoning options with a 60s TTL to avoid a per-turn HTTP
  probe for non-reasoning LM Studio models; non-empty results are cached
  permanently (see the sketch after this list)
- Extract _lmstudio_server_root(), _lmstudio_request_headers(), and
_lmstudio_fetch_raw_models() shared helpers in models.py — eliminates
URL-strip + auth-header + HTTP-call duplication across probe_lmstudio_models,
ensure_lmstudio_model_loaded, and lmstudio_model_reasoning_options
- Revert runtime_provider.py base_url precedence change: preserve the
established contract (saved config.base_url > env var > default) for all
api_key providers
- Remove unnecessary config version bump 22→23
- Fix TUI test: relax target_model assertion to avoid module-cache flake
- AUTHOR_MAP: added rugved@lmstudio.ai → rugvedS07
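The asymmetric cache from the bullet above might look like this (a sketch; the real keying and probe call differ):

```python
import time

_reasoning_cache: dict[str, tuple[float, list]] = {}

def cached_reasoning_options(model: str, probe) -> list:
    now = time.monotonic()
    hit = _reasoning_cache.get(model)
    if hit is not None:
        ts, options = hit
        if options or now - ts < 60:   # non-empty: permanent; empty: 60s TTL
            return options
    options = probe(model)             # the per-turn HTTP probe being avoided
    _reasoning_cache[model] = (now, options)
    return options
```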
provider_model_ids("bedrock") fell through to a static _PROVIDER_MODELS
table containing only hardcoded us.* model IDs. Users configured for
non-US AWS regions (eu-central-1, ap-northeast-1, etc.) saw wrong or no
models in /model and autocomplete.
Root causes fixed:
1. models.py: provider_model_ids() now calls discover_bedrock_models()
keyed by the resolved region before falling back to the static table.
A new bedrock_model_ids_or_none() helper in bedrock_adapter.py
consolidates the discover -> extract IDs -> fallback pattern used by
all three call sites.
2. providers.py: registers bedrock in HERMES_OVERLAYS with
transport=bedrock_converse and auth_type=aws_sdk so
get_provider("bedrock") and resolve_provider_full("bedrock") work.
3. model_switch.py: list_authenticated_providers() sections 2 and 3
detect AWS credentials via has_aws_credentials() for aws_sdk
overlays and use live discovery for the model list.
4. bedrock_adapter.py: resolve_bedrock_region() reads the configured
region from botocore.session before falling back to us-east-1,
covering users who set their region in ~/.aws/config via a named
profile rather than env vars.
5. tui_gateway/server.py: passes provider= to get_model_context_length()
so context window lookups work correctly for the Bedrock provider.
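Fix 4 amounts to asking botocore for the effective region before defaulting (sketch; the real helper may consult more sources):

```python
import botocore.session

def resolve_bedrock_region() -> str:
    # honours env vars, ~/.aws/config, and named profiles via botocore
    region = botocore.session.get_session().get_config_variable("region")
    return region or "us-east-1"
```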
Registers tencent-tokenhub (https://tokenhub.tencentmaas.com/v1) as a
new API-key provider with model tencent/hy3-preview (256K context).
- PROVIDER_REGISTRY entry + TOKENHUB_API_KEY / TOKENHUB_BASE_URL env vars
- Aliases: tencent, tokenhub, tencent-cloud, tencentmaas
- openai_chat transport with is_tokenhub branch for top-level
reasoning_effort (Hy3 is a reasoning model)
- tencent/hy3-preview:free added to OpenRouter curated list
- 60+ tests (provider registry, aliases, runtime resolution,
credentials, model catalog, URL mapping, context length)
- Docs: integrations/providers.md, environment-variables.md,
model-catalog.json
Author: simonweng <simonweng@tencent.com>
Salvaged from PR #16860 onto current main (resolved conflicts with
#16935 Azure Anthropic env-var hint tests and the --provider choices=
list removal in chat_parser).
Follow-up to the static list refresh: replace the hardcoded xAI entries
with _xai_curated_models(), mirroring the _codex_curated_models()
pattern from PR #7844. The helper reads $HERMES_HOME/models_dev_cache.json
at import time (no network call) and falls back to a small static list
when the cache is missing or malformed.
Why: _PROVIDER_MODELS["xai"] has drifted once already (issue #16699) and
will drift again next time xAI renames a model. Hermes already maintains
the models.dev cache and uses it for context-length lookups; pointing
_PROVIDER_MODELS at the same source means the /model picker self-heals on
the next cache refresh instead of requiring a PR.
Behavior:
- With cache populated (normal user): shows every current xAI model ID,
picks up renames automatically on next refresh.
- Without cache (fresh install, offline): falls back to a static snapshot
of the 9 current flagship IDs.
- Malformed cache / unexpected shape: same static fallback, no crash.
Import time verified <20ms — disk read only, no HTTP.
Addresses the structural piece of #16699 ("consider a single
_provider_models(provider) resolver") for xAI. Other per-provider lists
can adopt the same pattern as drift is observed.
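The cache-first helper is roughly the following (a sketch — the internal shape of models_dev_cache.json is an assumption, and the fallback list is abbreviated):

```python
import json
import os

_STATIC_FALLBACK = ["grok-4", "grok-4-fast", "grok-code-fast-1"]  # abbreviated

def _xai_curated_models() -> list[str]:
    home = os.environ.get("HERMES_HOME", os.path.expanduser("~/.hermes"))
    try:
        with open(os.path.join(home, "models_dev_cache.json"), encoding="utf-8") as f:
            data = json.load(f)
        ids = list(data.get("xai", {}).get("models", {}))  # assumed cache shape
        return ids or list(_STATIC_FALLBACK)
    except (OSError, ValueError):
        return list(_STATIC_FALLBACK)  # missing/malformed cache: no crash
```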
_PROVIDER_MODELS["xai"] was pointing at model IDs the xAI direct API
no longer accepts:
- grok-4.20-reasoning
- grok-4-1-fast-reasoning
Replaced with the actual current xAI catalog IDs from models.dev
($HERMES_HOME/models_dev_cache.json, mirror of https://models.dev/api.json):
grok-4.20-0309-reasoning
grok-4.20-0309-non-reasoning
grok-4.20-multi-agent-0309
grok-4-1-fast
grok-4-1-fast-non-reasoning
grok-4-fast
grok-4-fast-non-reasoning
grok-4
grok-code-fast-1
The xAI-direct API (https://api.x.ai/v1) serves the dated IDs shown
above; the bare aliases (grok-4.20, grok-4.1-fast, etc.) are
OpenRouter/Vercel-gateway normalizations and are not accepted on
xAI-direct. Those gateways remain unaffected.
Fixes #16699
Address Copilot review on #16868:
1. Tighten pool iteration. ``validate_copilot_token`` only rejects empty
strings and classic PATs (``ghp_*``); a malformed/unsupported ``gho_*``
token at ``credential_pool.copilot[0]`` would pass the gate and short-
circuit the loop, hiding a later valid entry. Switch to calling
``exchange_copilot_token`` directly: only entries that actually exchange
into a live Copilot API token are returned. Bad/expired entries fall
through to the next, and an exhausted pool returns ``""`` so the picker
falls back to the curated list (existing behaviour).
2. Reword the docstring + test module docstring to describe the pool seed
path accurately — ``hermes auth add copilot`` adds an api-key-typed
credential whose ``access_token`` field stores the pasted token, and
``_seed_from_env`` mirrors ``COPILOT_GITHUB_TOKEN`` from
``~/.hermes/.env`` into the pool. The previous wording implied
``auth add copilot`` itself ran the device-code flow, which it does
not (the device-code flow lives in ``hermes model``).
Two new tests cover the iteration change:
- ``test_skips_pool_entry_that_fails_to_exchange`` — pool[0] raises,
pool[1] succeeds, picker uses pool[1].
- ``test_all_pool_entries_fail_exchange_returns_empty`` — every entry
raises, return ``""``.
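The tightened loop from point 1 reduces to (sketch; ``exchange`` stands in for ``exchange_copilot_token``):

```python
def pick_copilot_token(pool: list[dict], exchange) -> str:
    for entry in pool:
        try:
            # only entries that actually exchange into a live Copilot
            # API token are accepted
            return exchange(entry.get("access_token", ""))
        except Exception:
            continue   # bad/expired entry: fall through to the next
    return ""          # exhausted pool: caller falls back to the curated list
```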
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Users whose only Copilot credential is the OAuth `access_token` saved by
`hermes auth add copilot` (device-code flow) saw the `/model` picker drop
back to a stale hardcoded list. Reason: `_resolve_copilot_catalog_api_key`
only consulted env vars (`COPILOT_GITHUB_TOKEN` / `GH_TOKEN` /
`GITHUB_TOKEN`) and the `gh auth token` CLI fallback, never the credential
pool that Hermes's own login flow writes into `auth.json`. With no token,
the live catalog fetch silently 401s and the picker hides current models
(claude-opus-4.7, claude-sonnet-4.6, gpt-5.5, grok-code-fast-1) — even
though `/model <id>` works fine because runtime inference reads the pool
through a different code path.
Mirror the Codex catalog resolver pattern: env-var first (unchanged), then
walk `read_credential_pool("copilot")` for the first entry with a
supported `access_token` (`gho_*` / `github_pat_*` / `ghu_*`). Run it
through `get_copilot_api_token()` so the catalog request uses the same
exchanged token the runtime path uses. Classic PATs (`ghp_*`) are still
rejected up-front via `validate_copilot_token` since the Copilot API
doesn't accept them.
Strictly additive: env still wins, and a missing/locked auth.json (or any
exception during pool read) still returns "" so the caller falls through
to the curated catalog.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Switch _PRIORITY_PROCESSING_MODELS and _ANTHROPIC_FAST_MODE_MODELS from
hardcoded frozensets to prefix-based matching. Any gpt-*, o1*, o3*, o4*
(OpenAI) and any claude-* (Anthropic) model now exposes /fast.
Fixes the case where gpt-5.5 and other post-catalog models silently
skipped Priority Processing because they weren't in the frozenset.
Future OpenAI/Anthropic releases will work without a catalog bump.
Safety:
- Codex-series (*codex*) still excluded — they route through the Codex
Responses API which doesn't take service_tier.
- Anthropic adapter already gates speed=fast on native endpoints only
(_is_third_party_anthropic_endpoint), so claude-sonnet-4.6 on
OpenRouter/Bedrock/opencode-zen won't leak the unknown beta.
- service_tier=priority is silently dropped by non-OpenAI proxies, so
false positives are harmless.
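The new predicates are plain prefix checks, roughly (sketches; the prefixes and the codex exclusion are from the text above):

```python
def supports_priority_processing(model: str) -> bool:
    m = model.lower()
    if "codex" in m:   # Codex Responses API doesn't take service_tier
        return False
    return m.startswith(("gpt-", "o1", "o3", "o4"))

def supports_anthropic_fast_mode(model: str) -> bool:
    return model.lower().startswith("claude-")
```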
Azure Foundry deploys GPT-5.x, codex-*, and o1/o3/o4 reasoning models as
Responses-API-only. Calling /chat/completions against these deployments
returns 400 'The requested operation is unsupported.', which broke any
user who ran 'hermes model' on Azure, picked a gpt-5/codex deployment,
and kept the default api_mode: chat_completions. Verified in a user
debug bundle on 2026-04-26: gpt-5.3-codex failed on synopsisse.openai.azure.com
with that exact payload while gpt-4o-pure on the same endpoint worked.
Adds azure_foundry_model_api_mode(model_name) that returns
codex_responses when the model name starts with gpt-5, codex, o1, o3,
or o4 — otherwise None so chat_completions / anthropic_messages stay
untouched for gpt-4o, Llama, Claude-via-Anthropic, etc.
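That helper is essentially a prefix table (sketch; prefixes and return values are from the description above):

```python
_RESPONSES_ONLY_PREFIXES = ("gpt-5", "codex", "o1", "o3", "o4")

def azure_foundry_model_api_mode(model_name: str) -> str | None:
    if model_name.lower().startswith(_RESPONSES_ONLY_PREFIXES):
        return "codex_responses"
    return None  # chat_completions / anthropic_messages stay untouched
```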
Resolver (both the direct Azure Foundry path and the pool-entry path)
consults it and upgrades api_mode unless the user explicitly picked
anthropic_messages. target_model (from /model mid-session switch)
takes precedence over the persisted default so switching from gpt-4o
to gpt-5.3-codex routes correctly before the next request.
Docs: correct the azure-foundry guide which previously claimed Azure
keeps gpt-5.x on chat completions — that was only true for early Azure
OpenAI, not Azure Foundry codex/o-series deployments.
Tests: 14 unit tests for azure_foundry_model_api_mode + 6 integration
tests in TestAzureFoundryResolution covering the reported user's exact scenario,
target_model override, anthropic_messages guard, and o3-mini.
Removes deepseek/deepseek-v4-pro and deepseek/deepseek-v4-flash from
OPENROUTER_MODELS and _PROVIDER_MODELS['nous'], then regenerates
website/static/api/model-catalog.json so the hosted picker JSON drops
them too. Direct-API deepseek provider support is unchanged.
OpenRouter and Nous Portal curated picker lists now resolve via a JSON
manifest served by the docs site, falling back to the in-repo snapshot
when unreachable. Lets us update model lists without shipping a release.
Live URL: https://hermes-agent.nousresearch.com/docs/api/model-catalog.json
(source at website/static/api/model-catalog.json; auto-deploys via the
existing deploy-site.yml GitHub Pages pipeline on every merge to main).
Schema (v1) carries id + optional description + free-form metadata at
manifest, provider, and model levels. Pricing and context length stay
live-fetched via existing machinery (/v1/models endpoints, models.dev).
Config (new model_catalog section, default enabled):
model_catalog.url master manifest URL
model_catalog.ttl_hours disk cache TTL (default 24h)
model_catalog.providers.<name>.url optional per-provider override
Fetch pipeline: in-process cache -> disk cache (fresh < TTL) -> HTTP
fetch -> disk-cache-on-failure fallback -> in-repo snapshot as last
resort. Never raises to callers; at worst returns the bundled list.
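The cascade, sketched end-to-end (helper names and the cache path here are illustrative; only the ordering and the never-raises contract come from the description above):

```python
import json
import time
import urllib.request
from pathlib import Path

BUNDLED_SNAPSHOT: dict = {"providers": {}}  # stands in for the in-repo list
CACHE = Path.home() / ".hermes" / "model_catalog_cache.json"
_mem: dict = {}

def get_catalog(url: str, ttl_hours: float = 24.0) -> dict:
    if _mem:                                                   # 1. in-process cache
        return _mem
    disk = None
    try:
        disk = json.loads(CACHE.read_text())
        if time.time() - CACHE.stat().st_mtime < ttl_hours * 3600:
            _mem.update(disk)                                  # 2. fresh disk cache
            return disk
    except (OSError, ValueError):
        pass
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:  # 3. HTTP fetch
            data = json.loads(resp.read())
        CACHE.parent.mkdir(parents=True, exist_ok=True)
        CACHE.write_text(json.dumps(data))                     # refresh disk cache
        _mem.update(data)
        return data
    except Exception:
        return disk if disk else BUNDLED_SNAPSHOT              # 4./5. fallbacks
```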
Changes:
- website/static/api/model-catalog.json initial manifest (35 OR + 31 Nous)
- scripts/build_model_catalog.py regenerator from in-repo lists
- hermes_cli/model_catalog.py fetch + validate + cache module
- hermes_cli/models.py fetch_openrouter_models() +
new get_curated_nous_model_ids()
- hermes_cli/main.py, hermes_cli/auth.py Nous flows use the helper
- hermes_cli/config.py model_catalog defaults
- website/docs/reference/model-catalog.md + sidebars.ts
- tests/hermes_cli/test_model_catalog.py 21 tests (validation, fetch
  success/failure, accessors, disabled, overrides, integration)
Add support for Azure Foundry as a new inference provider. Azure Foundry
endpoints can use either OpenAI-style (/v1/chat/completions) or
Anthropic-style (/v1/messages) API formats.
Changes:
- Add azure-foundry to PROVIDER_REGISTRY (auth.py)
- Add azure-foundry overlay in HERMES_OVERLAYS (providers.py)
- Add empty model list for azure-foundry (models.py)
- Add _model_flow_azure_foundry() interactive setup (main.py)
- Add azure-foundry runtime resolution with api_mode support (runtime_provider.py)
- Add AZURE_FOUNDRY_API_KEY and AZURE_FOUNDRY_BASE_URL env vars (config.py)
Usage:
hermes model -> More providers -> Azure Foundry
The setup wizard prompts for:
- Endpoint URL
- API format (OpenAI or Anthropic-style)
- API key
- Model name
Configuration is saved to config.yaml (model.provider, model.base_url,
model.api_mode, model.default) and ~/.hermes/.env (AZURE_FOUNDRY_API_KEY).
When switching models on a custom endpoint (ollama-launch):
- Same-provider switches no longer re-resolve credentials (fixes base_url
being lost for 'custom' provider on subsequent switches)
- Named providers (ollama-launch) are resolved via user_providers so
switch_model can find their base_url from config
- Models not in the /v1/models probe but present in the user's saved
provider config are accepted with a warning instead of rejected
- CLI /model and TUI /model both pass user_providers/custom_providers
to switch_model so the config model list is available for validation
Closes #15088
- expand short model aliases like sonnet/opus via static catalogs during startup runtime resolution
- keep startup alias resolution network-free and add regression tests in models and tui gateway suites
Replaces gpt-5.4 / gpt-5.4-pro entries in the OpenRouter fallback snapshot
and the Nous Portal curated list. Other aggregators (Vercel AI Gateway)
and provider-native lists are unchanged.
Salvage of the Gemini-specific piece from PR #12585 by @briandevans.
Gemini's OpenAI-compat /v1beta/openai/models endpoint returns IDs prefixed
with 'models/' (native Gemini-API convention), so set-membership against
curated bare IDs drops every model. Strip the prefix before comparison.
The Anthropic static-catalog piece of #12585 was subsumed by #12618's
_fetch_anthropic_models() branch landing earlier in the same salvage PR.
Full branch cherry-pick was skipped because it also carried unrelated
catalog-version regressions.
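The fix is a one-line normalisation before the set-membership check:

```python
def _bare_gemini_id(model_id: str) -> str:
    # 'models/gemini-2.0-flash' -> 'gemini-2.0-flash' (sketch; Python 3.9+)
    return model_id.removeprefix("models/")
```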
The generic /v1/models probe in validate_requested_model() sent a plain
'Authorization: Bearer <key>' header, which works for OpenAI-compatible
endpoints but results in a 401 Unauthorized from Anthropic's API.
Anthropic requires x-api-key + anthropic-version headers (or Bearer for
OAuth tokens from Claude Code).
Add a provider-specific branch for normalized == 'anthropic' that calls
the existing _fetch_anthropic_models() helper, which already handles
both regular API keys and Claude Code OAuth tokens correctly. This
mirrors the pattern already used for openai-codex, copilot, and bedrock.
The branch also includes:
- fuzzy auto-correct (cutoff 0.9) for near-exact model ID typos
- fuzzy suggestions (cutoff 0.5) when the model is not listed
- graceful fall-through when the token cannot be resolved or the
network is unreachable (accepts with a warning rather than hard-fail)
- a note that newer/preview/snapshot model IDs can be gate-listed
and may still work even if not returned by /v1/models
Fixes Anthropic provider users seeing 'service unreachable' errors
when running /model <claude-model> because every probe 401'd.
- probe_api_models: add api_mode param; use x-api-key + anthropic-version
headers for anthropic_messages mode (Anthropic's native Models API auth)
- probe_api_models: add User-Agent header to avoid Cloudflare 403 blocks
on third-party OpenAI-compatible endpoints
- validate_requested_model: pass api_mode through from switch_model
- validate_requested_model: for anthropic_messages mode, attempt probe with
correct auth; if probe fails (many proxies don't implement /v1/models),
accept the model with an informational warning instead of rejecting
- fetch_api_models: propagate api_mode to probe_api_models
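Header selection per api_mode reduces to (sketch; the header names are Anthropic's documented Models-API auth, the version date is illustrative):

```python
def _probe_headers(api_key: str, api_mode: str) -> dict[str, str]:
    headers = {"User-Agent": "hermes-cli"}  # avoids Cloudflare 403 blocks
    if api_mode == "anthropic_messages":
        headers["x-api-key"] = api_key
        headers["anthropic-version"] = "2023-06-01"
    else:
        headers["Authorization"] = f"Bearer {api_key}"
    return headers
```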
The Copilot provider resolved context windows via models.dev static data,
which does not include account-specific models (e.g. claude-opus-4.6-1m
with 1M context). This adds the live Copilot /models API as a higher-
priority source for copilot/copilot-acp/github-copilot providers.
New helper get_copilot_model_context() in hermes_cli/models.py extracts
capabilities.limits.max_prompt_tokens from the cached catalog. Results
are cached in-process for 1 hour.
In agent/model_metadata.py, step 5a queries the live API before falling
through to models.dev (step 5b). This ensures account-specific models
get correct context windows while standard models still have a fallback.
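The extraction plus one-hour cache might look like this (a sketch; the catalog dict path comes from the description above):

```python
import time

_ctx_cache: dict[str, tuple[float, int | None]] = {}

def get_copilot_model_context(model_id: str, catalog: dict) -> int | None:
    hit = _ctx_cache.get(model_id)
    if hit and time.monotonic() - hit[0] < 3600:   # 1h in-process cache
        return hit[1]
    limits = catalog.get(model_id, {}).get("capabilities", {}).get("limits", {})
    tokens = limits.get("max_prompt_tokens")
    _ctx_cache[model_id] = (time.monotonic(), tokens)
    return tokens
```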
Part 1 of #7731.
Refs: #7272
Keep Discord Copilot model switching responsive and current by refreshing picker data from the live catalog when possible, correcting the curated fallback list, and clearing stale controls before the switch completes.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
OpenAI launched GPT-5.5 on Codex today (Apr 23 2026). Adds it to the static
catalog and pipes the user's OAuth access token into the openai-codex path of
provider_model_ids() so /model mid-session and the gateway picker hit the
live ChatGPT codex/models endpoint — new models appear for each user
according to what ChatGPT actually lists for their account, without a Hermes
release.
Verified live: 'gpt-5.5' returns priority 0 (featured) from the endpoint,
400k context per OpenAI's launch article. 'hermes chat --provider
openai-codex --model gpt-5.5' completes end-to-end.
Changes:
- hermes_cli/codex_models.py: add gpt-5.5 to DEFAULT_CODEX_MODELS + forward-compat
- agent/model_metadata.py: 400k context length entry
- hermes_cli/models.py: resolve codex OAuth token before calling
get_codex_model_ids() in provider_model_ids('openai-codex')
Adds MiMo v2.5-pro and v2.5 support to Xiaomi native provider, OpenCode Go, and setup wizard.
### Changes
- Context lengths: added v2.5-pro (1M) and v2.5 (1M), corrected existing MiMo entries to exact values (262144)
- Provider lists: xiaomi, opencode-go, setup wizard
- Vision: upgraded from mimo-v2-omni to mimo-v2.5 (omnimodal)
- Config description updated for XIAOMI_API_KEY
- Tests updated for new vision model preference
### Verification
- 4322 tests passed, 0 new regressions
- Live API tested on Xiaomi portal: basic, reasoning, tool calling, multi-tool, file ops, system prompt, vision — all pass
- Self-review found and fixed 2 issues (redundant vision check, stale HuggingFace context length)
New and newer models from models.dev now surface automatically in
/model (both hermes model CLI and the gateway Telegram/Discord picker)
for a curated set of secondary providers — no Hermes release required
when the registry publishes a new model.
Primary user-visible fix: on OpenCode Go, typing '/model mimo-v2.5-pro'
no longer silently fuzzy-corrects to 'mimo-v2-pro'. The exact match
against the merged models.dev catalog wins.
Scope (opt-in frozenset _MODELS_DEV_PREFERRED in hermes_cli/models.py):
opencode-go, opencode-zen, deepseek, kilocode, fireworks, mistral,
togetherai, cohere, perplexity, groq, nvidia, huggingface, zai,
gemini, google.
Explicitly NOT merged:
- openrouter and nous (never): curated list is already a hand-picked
subset / Portal is source of truth.
- xai, xiaomi, minimax, minimax-cn, kimi-coding, kimi-coding-cn,
alibaba, qwen-oauth (per-project decision to keep curated-only).
- providers with dedicated live-endpoint paths (copilot, anthropic,
ai-gateway, ollama-cloud, custom, stepfun, openai-codex) — those
paths already handle freshness themselves.
Changes:
- hermes_cli/models.py: add _MODELS_DEV_PREFERRED + _merge_with_models_dev
  helper. provider_model_ids() branches on the set at its curated-fallback
  return. Merge is models.dev-first, curated-only extras appended,
  case-insensitive dedup, graceful fallback when models.dev is offline
  (sketched after this list).
- hermes_cli/model_switch.py: list_authenticated_providers() calls the
same merge in both its code paths (PROVIDER_TO_MODELS_DEV loop +
HERMES_OVERLAYS loop). Picker AND validation-fallback both see
fresh entries.
- tests/hermes_cli/test_models_dev_preferred_merge.py (new): 13 tests —
merge-helper unit tests (empty/raise/order/dedup), opencode-go/zen
behavior, openrouter+nous explicitly guarded from merge.
- tests/hermes_cli/test_opencode_go_in_model_list.py: converted from
snapshot-style assertion to a behavior-based floor check, so it
doesn't break when models.dev publishes additional opencode-go
entries.
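The merge helper from the first bullet reduces to (sketch mirroring the stated semantics):

```python
def _merge_with_models_dev(mdev_ids: list[str], curated: list[str]) -> list[str]:
    if not mdev_ids:
        return list(curated)   # graceful fallback when models.dev is offline
    seen = {m.lower() for m in mdev_ids}
    extras = [m for m in curated if m.lower() not in seen]
    return list(mdev_ids) + extras   # models.dev-first, curated extras appended
```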
Addresses a report from @pfanis via Telegram: newer Xiaomi variants
on OpenCode Go weren't appearing in the /model picker, and /model
was silently routing requests for new variants to older ones.
Replace xiaomi/mimo-v2-pro with xiaomi/mimo-v2.5-pro and xiaomi/mimo-v2.5
in the OpenRouter fallback catalog and the nous provider model list.
Add matching DEFAULT_CONTEXT_LENGTHS entries (1M tokens each).
Adds a first-class 'stepfun' API-key provider surfaced as Step Plan:
- Support Step Plan setup for both International and China regions
- Discover Step Plan models live from /step_plan/v1/models, with a
small coding-focused fallback catalog when discovery is unavailable
- Thread StepFun through provider metadata, setup persistence, status
and doctor output, auxiliary routing, and model normalization
- Add tests for provider resolution, model validation, metadata
mapping, and StepFun region/model persistence
Based on #6005 by @hengm3467.
Co-authored-by: hengm3467 <100685635+hengm3467@users.noreply.github.com>
Surfaces the free variant alongside the paid minimax-m2.5 entry in
both the OPENROUTER_MODELS fallback snapshot and the nous/openrouter
provider model list.
Remove nvidia/nemotron-3-super-120b-a12b:free, arcee-ai/trinity-large-preview:free,
and openrouter/elephant-alpha from _PROVIDER_MODELS['nous']. The paid nemotron and
arcee-thinking variants remain.