Three related issues prevented user-defined providers in `providers:` and
`model_aliases:` from being reachable through standard CLI flags. Requests
silently routed to the configured `model.base_url` instead of the user-
intended endpoint.
* hermes_cli/model_switch.py — root cause of the silent misrouting:
`_ensure_direct_aliases()` rebound `DIRECT_ALIASES` to a freshly-loaded
dict, leaving every `from hermes_cli.model_switch import DIRECT_ALIASES`
caller stuck on the stale empty original. Switched to `.update()` so
module attribute references stay valid.
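This is the classic module-attribute pitfall: `from module import NAME` binds the importer's name to the current object, not to the module attribute, so a later rebinding inside the module is invisible to importers. A minimal standalone repro (illustrative names, not the actual hermes_cli code):

```python
ALIASES = {}
imported_ref = ALIASES          # stands in for `from module import ALIASES`

# Bug: rebinding the module attribute orphans existing references.
ALIASES = {"fast": "haiku-4.5"}
assert imported_ref == {}       # importer still sees the stale empty dict

# Fix: mutate the original dict in place so every reference stays live.
ALIASES = {}
imported_ref = ALIASES
ALIASES.update({"fast": "haiku-4.5"})
assert imported_ref == {"fast": "haiku-4.5"}
```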
* hermes_cli/main.py — chat subcommand `--provider` had `choices=[...]`
hardcoded to built-in providers, rejecting valid keys from user
`providers:` config. Dropped the choices list; runtime resolution
validates correctly downstream.
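A sketch of the change, assuming standard argparse (the flag name comes from the commit; everything else is illustrative):

```python
import argparse

parser = argparse.ArgumentParser(prog="hermes chat")
# Before (paraphrased): choices=[...] listing only built-in providers made
# argparse reject valid user-defined `providers:` keys at parse time.
# After: accept any string and let runtime resolution validate it against
# the merged built-in + user registry, where a better error is possible.
parser.add_argument("--provider", default=None)

args = parser.parse_args(["--provider", "my-local-vllm"])
assert args.provider == "my-local-vllm"
```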
* hermes_cli/oneshot.py — `-m <alias>` only resolved the model name; the
alias's base_url was never propagated. Now consults `DIRECT_ALIASES`
before falling through to `detect_provider_for_model`, and threads the
alias's base_url to `resolve_runtime_provider(explicit_base_url=...)`.
* hermes_cli/runtime_provider.py — `_resolve_named_custom_runtime` now
honors `(provider="custom", explicit_base_url=...)` so a base_url
propagated from a direct-alias resolution actually builds a runtime
instead of falling through to provider-registry handlers that don't
know about ad-hoc local endpoints.
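The two changes compose roughly like this — a hedged sketch where the names mirror the commit but the signatures and data shapes are assumptions, not the real hermes_cli code:

```python
from dataclasses import dataclass

# Hypothetical alias entry shape; the real `model_aliases:` schema may differ.
DIRECT_ALIASES = {
    "local-llama": {"model": "llama-3.3-70b",
                    "base_url": "http://localhost:8000/v1"},
}

@dataclass
class Runtime:
    model: str
    base_url: str

def resolve_for_oneshot(model_arg, detect):
    # Consult DIRECT_ALIASES first (the new behavior) ...
    alias = DIRECT_ALIASES.get(model_arg)
    if alias:
        # ... threading the alias's base_url through explicit_base_url.
        return _resolve_named_custom_runtime(
            "custom", alias["model"], explicit_base_url=alias["base_url"])
    # ... before falling through to provider detection for bare model names.
    return _resolve_named_custom_runtime(detect(model_arg), model_arg)

def _resolve_named_custom_runtime(provider, model, explicit_base_url=None):
    if provider == "custom" and explicit_base_url:
        # Ad-hoc endpoint: build a runtime directly instead of deferring to
        # registry handlers that don't know it.
        return Runtime(model=model, base_url=explicit_base_url)
    raise LookupError(f"no registry handler for provider {provider!r}")
```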
Verified: `hermes chat --provider <user-key> -m <model> -q "..."` and
`hermes -m <user-alias> -z "..."` both route to the user-intended
endpoint, observable via the target server's request log.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Makes `hermes -z` usable by sweeper without mutating user config. Adds:
- Top-level -m/--model and --provider flags that apply to -z/--oneshot
(mirrors hermes chat's plumbing).
- HERMES_INFERENCE_MODEL env var as the parallel to HERMES_INFERENCE_PROVIDER
for CI / scripted invocations.
- resolve_runtime_provider() gets the requested provider; when --model is
given without --provider, detect_provider_for_model() auto-selects the
provider that serves it (same semantics as /model in an interactive session).
- --provider without --model errors out with exit 2 — carrying a config
model across to a different provider is usually wrong, and silently
picking the provider's catalog default hides the mismatch.
Config defaults still used when both flags are omitted (existing behavior).
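The precedence rules above can be sketched as a single hypothetical helper, where `detect` stands in for `detect_provider_for_model` and the real hermes_cli signatures may differ:

```python
import os
import sys

def resolve_flags(model, provider, config, detect):
    # Env-var parallel for CI: HERMES_INFERENCE_MODEL fills in --model.
    model = model or os.environ.get("HERMES_INFERENCE_MODEL")
    # --provider without --model: refuse loudly rather than silently
    # pairing the provider with the config model or its catalog default.
    if provider and not model:
        print("error: --provider requires --model", file=sys.stderr)
        sys.exit(2)
    # --model without --provider: auto-select, like /model interactively.
    if model and not provider:
        provider = detect(model)
    # Both omitted: config defaults (existing behavior).
    if not model:
        model, provider = config["model"], config["provider"]
    return model, provider
```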
Validation (all live against OpenRouter):
-z 'x' ....................... uses config default (opus-4.7)
-z 'x' --model haiku-4.5 ..... haiku-4.5 via auto-detected openrouter
-z 'x' --model ... --provider  pair as given
HERMES_INFERENCE_MODEL=... -z  haiku-4.5 via env var
-z 'x' --provider anthropic .. exits 2 with error to stderr
* feat: add `hermes -z <prompt>` one-shot mode
Top-level flag that runs a single prompt and prints ONLY the final
response text to stdout. No banner, no spinner, no tool previews, no
session_id line — stdout is machine-readable, stderr is silent.
Tools, memory, rules, and AGENTS.md in the CWD are loaded as normal.
Approvals are auto-bypassed (sets HERMES_YOLO_MODE=1 for the call).
Bypasses cli.py entirely — goes straight to AIAgent.chat().
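The contract can be sketched as follows (hypothetical function; the real path calls AIAgent.chat() directly, and `agent` here is any object with a `.chat(prompt) -> str` method):

```python
import os
import sys

def run_oneshot(prompt, agent):
    old = os.environ.get("HERMES_YOLO_MODE")
    os.environ["HERMES_YOLO_MODE"] = "1"     # auto-bypass approvals
    try:
        reply = agent.chat(prompt)
    finally:
        # Restore the caller's environment after the single call.
        if old is None:
            os.environ.pop("HERMES_YOLO_MODE", None)
        else:
            os.environ["HERMES_YOLO_MODE"] = old
    sys.stdout.write(reply + "\n")           # final response text only
```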
* feat(oneshot): handle interactive-callback gaps explicitly
Document (and where needed, patch) the interactive surfaces that have
no user to answer in oneshot mode:
- clarify — inject a callback that tells the agent to pick the
best default and continue (previously returned a
generic 'not available in this execution context'
error that wasted a tool call)
- sudo password — terminal_tool already gates on HERMES_INTERACTIVE
(we don't set it); sudo fails gracefully
- shell hooks — HERMES_ACCEPT_HOOKS=1 auto-approves; also falls
back to deny on non-tty stdin
- dangerous cmd — HERMES_YOLO_MODE=1 short-circuits before input()
- secret capture — tool returns gracefully when no callback is wired
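The clarify fallback above might look like this (illustrative; the real callback wiring in hermes_cli may differ):

```python
def oneshot_clarify(question, options=None):
    # No user to ask in one-shot mode: steer the agent toward a default
    # instead of burning a tool call on a generic "not available" error.
    hint = f" Options offered: {options!r}." if options else ""
    return ("No interactive user is available in one-shot mode; pick the "
            "best default yourself and continue." + hint)
```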
Live-tested: the agent called clarify(['red','blue']), got 'red' back,
and replied with only 'red'.