docs: update all docs for /model command overhaul and custom provider support

Documents the full /model command overhaul across 6 files:

AGENTS.md:
- Add model_switch.py to project structure tree

configuration.md:
- Rewrite General Setup with 3 config methods (interactive, config.yaml, env vars)
- Add new 'Switching Models with /model' section documenting all syntax variants
- Add 'Named Custom Providers' section with config.yaml examples and
  custom:name:model triple syntax
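
A sketch of what the new section documents; only the `custom_providers` key and the `custom:name:model` triple syntax are confirmed by this commit, while the field names and model names below are illustrative assumptions:

```yaml
# Hypothetical config.yaml entry for a named custom provider.
# Only `custom_providers` is confirmed; other keys are assumptions.
custom_providers:
  - name: local
    base_url: http://localhost:8080/v1   # any OpenAI-compatible endpoint
    api_key: none
```

A provider named this way would then be selectable with the triple syntax, e.g. `/model custom:local:llama-3-8b` (model name illustrative).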

slash-commands.md:
- Update /model descriptions in both CLI and messaging tables with
  full syntax examples (provider:model, custom:model, custom:name:model,
  bare custom auto-detect)
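
The syntax variants listed above, sketched with illustrative provider and model names (the forms are from this commit; the specific names are not):

```
/model openai:gpt-4o           # provider:model
/model custom:my-local-model   # custom:model
/model custom:local:llama-3    # custom:name:model (named custom provider)
/model custom                  # bare custom, auto-detect
```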

cli-commands.md:
- Add /model slash command subsection under hermes model with syntax table
- Add custom endpoint config to hermes model use cases

faq.md:
- Add config.yaml example for offline/local model setup
- Note that provider: custom is a first-class provider
- Document /model custom auto-detect
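
A minimal sketch of the kind of offline/local setup the FAQ example covers; only `provider: custom` is confirmed as first-class by this commit, and the remaining keys and values are assumptions:

```yaml
# Illustrative config.yaml for a local OpenAI-compatible server.
# Keys other than `provider: custom` are assumptions, not the documented schema.
provider: custom
model: llama-3-8b-instruct
base_url: http://localhost:11434/v1
```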

provider-runtime.md:
- Add model_switch.py to implementation file list
- Update provider families to show Custom as first-class with named variants
Teknium, 2026-03-24 07:19:26 -07:00 (committed by GitHub)
parent a312ee7b4c
commit 773d3bb4df
6 changed files with 105 additions and 12 deletions


@@ -16,9 +16,10 @@ Hermes has a shared provider runtime resolver used across:
 Primary implementation:
-- `hermes_cli/runtime_provider.py`
-- `hermes_cli/auth.py`
-- `agent/auxiliary_client.py`
+- `hermes_cli/runtime_provider.py` — credential resolution, `_resolve_custom_runtime()`
+- `hermes_cli/auth.py` — provider registry, `resolve_provider()`
+- `hermes_cli/model_switch.py` — shared `/model` switch pipeline (CLI + gateway)
+- `agent/auxiliary_client.py` — auxiliary model routing
 
 If you are trying to add a new first-class inference provider, read [Adding Providers](./adding-providers.md) alongside this page.
@@ -46,7 +47,8 @@ Current provider families include:
 - Kimi / Moonshot
 - MiniMax
 - MiniMax China
-- custom OpenAI-compatible endpoints
+- Custom (`provider: custom`) — first-class provider for any OpenAI-compatible endpoint
+- Named custom providers (`custom_providers` list in config.yaml)
 
 ## Output of runtime resolution