mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-04-26 01:01:40 +00:00
fix(opencode): derive api_mode from target model, not stale config default (#15106)
/model kimi-k2.6 on opencode-zen (or glm-5.1 on opencode-go) returned OpenCode's website 404 HTML page when the user's persisted model.default was a Claude or MiniMax model. The switched-to chat_completions request hit https://opencode.ai/zen (or /zen/go) with no /v1 suffix.

Root cause: resolve_runtime_provider() computed api_mode from model_cfg.get('default') instead of the model being requested. With a Claude default, it resolved api_mode=anthropic_messages, stripped /v1 from base_url (required for the Anthropic SDK), then switch_model()'s opencode_model_api_mode override flipped api_mode back to chat_completions without restoring /v1.

Fix: thread an optional target_model kwarg through resolve_runtime_provider and _resolve_runtime_from_pool_entry. When the caller is performing an explicit mid-session model switch (i.e. switch_model()), the target model drives both api_mode selection and the conditional /v1 strip. Other callers (CLI init, gateway init, cron, ACP, aux client, delegate, account_usage, tui_gateway) pass nothing and preserve the existing config-default behavior.

Regression tests added in test_model_switch_opencode_anthropic.py use the REAL resolver (not a mock) to guard the exact Quentin-repro scenario. Existing tests that mocked resolve_runtime_provider with 'lambda requested:' had their mock signatures widened to '**kwargs' to accept the new kwarg.
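A minimal sketch of the resolution logic the commit message describes. The function body, the prefix heuristic, and the config shape here are assumptions for illustration, not the actual hermes_cli implementation; only the behavior (target model overrides the persisted default, /v1 stripped only in anthropic_messages mode) follows the commit message.

```python
from typing import Optional

# Assumed heuristic: which model names resolve to the Anthropic Messages API.
ANTHROPIC_PREFIXES = ("claude", "minimax")


def _api_mode_for(model: str) -> str:
    if model.lower().startswith(ANTHROPIC_PREFIXES):
        return "anthropic_messages"
    return "chat_completions"


def resolve_runtime_provider(model_cfg: dict, base_url: str,
                             target_model: Optional[str] = None) -> dict:
    # An explicit mid-session switch target wins over the (possibly stale)
    # persisted default; all other callers pass no target_model and keep
    # the config-default behavior.
    model = target_model or model_cfg.get("default", "")
    api_mode = _api_mode_for(model)
    # The Anthropic SDK requires base_url without the /v1 suffix, so strip
    # it only when we actually resolved to anthropic_messages.
    if api_mode == "anthropic_messages" and base_url.endswith("/v1"):
        base_url = base_url[: -len("/v1")]
    return {"api_mode": api_mode, "base_url": base_url}
```

With a Claude default and no target, /v1 is stripped; with an explicit chat_completions target (the Quentin repro), the suffix survives and the request no longer hits the bare website URL.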
parent 7634c1386f
commit 1eb29e6452
5 changed files with 160 additions and 8 deletions
@@ -518,7 +518,7 @@ class TestSwitchModelDirectAliasOverride:
     monkeypatch.setattr(
         "hermes_cli.runtime_provider.resolve_runtime_provider",
-        lambda requested: {"api_key": "", "base_url": "", "api_mode": "openai_compat", "provider": "custom"},
+        lambda **kwargs: {"api_key": "", "base_url": "", "api_mode": "openai_compat", "provider": "custom"},
     )

     monkeypatch.setattr("hermes_cli.models.validate_requested_model",

@@ -544,7 +544,7 @@ class TestSwitchModelDirectAliasOverride:
         lambda raw, prov: ("custom", "local-model", "local"))
     monkeypatch.setattr(
         "hermes_cli.runtime_provider.resolve_runtime_provider",
-        lambda requested: {"api_key": "", "base_url": "", "api_mode": "openai_compat", "provider": "custom"},
+        lambda **kwargs: {"api_key": "", "base_url": "", "api_mode": "openai_compat", "provider": "custom"},
     )
     monkeypatch.setattr("hermes_cli.models.validate_requested_model",
         lambda *a, **kw: {"accepted": True, "persist": True, "recognized": True, "message": None})
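Why the mock signatures in the hunks above had to be widened: a lambda that only accepts the old positional/keyword argument raises TypeError as soon as the production code starts passing the new target_model kwarg. A minimal illustration (the names `narrow` and `wide` are ours, not from the repo):

```python
# Old-style mock: accepts only the original `requested` argument.
narrow = lambda requested: {"provider": "custom"}
# Widened mock: tolerates any current or future keyword arguments.
wide = lambda **kwargs: {"provider": "custom"}

try:
    narrow(requested="kimi-k2.6", target_model="kimi-k2.6")
except TypeError:
    print("narrow mock rejects the new target_model kwarg")

# The widened mock keeps working when the resolver gains new kwargs.
print(wide(requested="kimi-k2.6", target_model="kimi-k2.6"))
```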