mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-05-02 02:01:47 +00:00
The `_CODEX_AUX_MODEL` constant had already rotated twice in 6 weeks (gpt-5.3-codex -> gpt-5.2-codex -> now broken again at gpt-5.2-codex) because ChatGPT-account Codex gates which models it accepts via an undocumented, shifting allow-list that OpenAI publishes no changelog for. Any pinned default will keep going stale. Issue #17533 reports the current breakage: every ChatGPT-account auxiliary fallback fails with HTTP 400 "model is not supported", and the 60s pause loop degrades long sessions.

Rather than reset the clock with another stale pin (PR #17544 proposes gpt-5.2-codex -> gpt-5.4), remove the hardcoded second-order Codex fallback entirely:

- Delete `_CODEX_AUX_MODEL`.
- Drop `_try_codex` from `_get_provider_chain()` (the auto chain now ends at api-key providers; 4 rungs instead of 5).
- Rename `_try_codex()` -> `_build_codex_client(model)` and require an explicit model from the caller. No more guessing.
- `resolve_provider_client("openai-codex", model=None)` now warns and returns `(None, None)` instead of silently guessing a stale model ID.
- Remove `_try_codex` from the `provider="custom"` fallback ladder (same stale-constant trap).
- `_resolve_strict_vision_backend("openai-codex")` routes through `resolve_provider_client` so the caller's explicit model is honored.

Codex-main users are unaffected: Step 1 of `_resolve_auto` already uses `main_provider` + `main_model` directly and passes the user's configured Codex model through `resolve_provider_client`, which never touched `_CODEX_AUX_MODEL`. Per-task overrides (`auxiliary.<task>.provider/model`) continue to work and are the supported way to route specific aux tasks through Codex. Users whose main provider fails with a payment/connection error and who have only ChatGPT-account Codex auth will now see the 60s pause without a stale-model-rejection noise line in between -- same outcome, cleaner failure.

Closes #17533. Supersedes #17544 (which resets the clock on the same stale-constant problem).
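The changed `resolve_provider_client` contract above can be sketched as follows. This is a minimal, hypothetical illustration, not the repository's actual code: the real function lives in the auxiliary-client module, takes more parameters, and handles many providers; `_build_codex_client` is stubbed out here, and only the `openai-codex` branch described in the bullet list is shown.

```python
import logging
from typing import Optional, Tuple

logger = logging.getLogger(__name__)


def _build_codex_client(model: str) -> object:
    # Stub standing in for the renamed _try_codex() -> _build_codex_client(model).
    # In the real code this would construct a Codex responses-API client for
    # the explicitly requested model.
    return {"provider": "openai-codex", "model": model}


def resolve_provider_client(
    provider: str, model: Optional[str] = None
) -> Tuple[Optional[object], Optional[str]]:
    """Sketch: resolve a provider name to a (client, model) pair."""
    if provider == "openai-codex":
        if model is None:
            # _CODEX_AUX_MODEL is deleted, so there is no pinned default to
            # fall back on: warn and return (None, None) instead of silently
            # guessing a stale model ID.
            logger.warning(
                "openai-codex requires an explicit model; "
                "set auxiliary.<task>.model to route this task through Codex"
            )
            return (None, None)
        return (_build_codex_client(model), model)
    # Other providers elided in this sketch.
    raise NotImplementedError(provider)
```

With this shape, callers such as `_resolve_strict_vision_backend` that pass an explicit model get a client back, while the auto chain simply skips the Codex rung when no model is configured.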
| File |
|---|
| transports |
| __init__.py |
| account_usage.py |
| anthropic_adapter.py |
| auxiliary_client.py |
| bedrock_adapter.py |
| codex_responses_adapter.py |
| context_compressor.py |
| context_engine.py |
| context_references.py |
| copilot_acp_client.py |
| credential_pool.py |
| credential_sources.py |
| curator.py |
| display.py |
| error_classifier.py |
| file_safety.py |
| gemini_cloudcode_adapter.py |
| gemini_native_adapter.py |
| gemini_schema.py |
| google_code_assist.py |
| google_oauth.py |
| image_gen_provider.py |
| image_gen_registry.py |
| image_routing.py |
| insights.py |
| lmstudio_reasoning.py |
| manual_compression_feedback.py |
| memory_manager.py |
| memory_provider.py |
| model_metadata.py |
| models_dev.py |
| moonshot_schema.py |
| nous_rate_guard.py |
| onboarding.py |
| prompt_builder.py |
| prompt_caching.py |
| rate_limit_tracker.py |
| redact.py |
| retry_utils.py |
| shell_hooks.py |
| skill_commands.py |
| skill_preprocessing.py |
| skill_utils.py |
| subdirectory_hints.py |
| title_generator.py |
| trajectory.py |
| usage_pricing.py |