fix(model-normalize): pass DeepSeek V-series IDs through instead of folding to deepseek-chat

`_normalize_for_deepseek` was mapping every non-reasoner input into
`deepseek-chat` on the assumption that DeepSeek's API accepts only two
model IDs. That assumption no longer holds — `deepseek-v4-pro` and
`deepseek-v4-flash` are first-class IDs accepted by the direct API,
and on aggregators `deepseek-chat` routes explicitly to V3 (DeepInfra
backend returns `deepseek-chat-v3`). So a user picking V4 Pro through
the model picker was being silently downgraded to V3.

Verified 2026-04-24 against Nous portal's OpenAI-compat surface:
  - `deepseek/deepseek-v4-flash` → provider: DeepSeek,
    model: deepseek-v4-flash-20260423
  - `deepseek/deepseek-chat`     → provider: DeepInfra,
    model: deepseek/deepseek-chat-v3

Fix:
- Add `deepseek-v4-pro` and `deepseek-v4-flash` to
  `_DEEPSEEK_CANONICAL_MODELS` so exact matches pass through.
- Add `_DEEPSEEK_V_SERIES_RE` (`^deepseek-v\d+(...)?$`) so future
  V-series IDs (`deepseek-v5-*`, dated variants) keep passing through
  without another code change.
- Update docstring + module header to reflect the new rule.

Tests:
- New `TestDeepseekVSeriesPassThrough` — 8 parametrized cases covering
  bare, vendor-prefixed, case-variant, dated, and future V-series IDs
  plus end-to-end `normalize_model_for_provider(..., "deepseek")`.
- New `TestDeepseekCanonicalAndReasonerMapping` — regression coverage
  for canonical pass-through, reasoner-keyword folding, and
  fall-back-to-chat behaviour.
- 77/77 pass.

Reported on Discord (Ufonik, Don Piedro): `/model > Deepseek >
deepseek-v4-pro` surfaced
`Normalized 'deepseek-v4-pro' to 'deepseek-chat'`. The picker listing
showed the V4 names, so validation then rejected the post-normalize
`deepseek-chat` as "not in provider listing", which is the
contradiction users saw. The normalizer now respects the picker's
choice.
Authored by 0xbyt4 on 2026-04-24 15:14:53 +03:00; committed by Teknium
parent acd78a457e
commit 4ac731c841
2 changed files with 100 additions and 8 deletions


@@ -9,6 +9,7 @@ from hermes_cli.model_normalize import (
    normalize_model_for_provider,
    _DOT_TO_HYPHEN_PROVIDERS,
    _AGGREGATOR_PROVIDERS,
    _normalize_for_deepseek,
    detect_vendor,
)
@@ -191,3 +192,72 @@ class TestDetectVendor:
    ])
    def test_detects_known_vendors(self, model, expected):
        assert detect_vendor(model) == expected
# ── DeepSeek V-series pass-through (bug: V4 models silently folded to V3) ──
class TestDeepseekVSeriesPassThrough:
    """DeepSeek's V-series IDs (``deepseek-v4-pro``, ``deepseek-v4-flash``,
    and future ``deepseek-v<N>-*`` variants) are first-class model IDs
    accepted directly by DeepSeek's Chat Completions API. Earlier code
    folded every non-reasoner name into ``deepseek-chat``, which on
    aggregators (Nous portal, OpenRouter via DeepInfra) routes to V3,
    silently downgrading users who picked V4.
    """

    @pytest.mark.parametrize("model", [
        "deepseek-v4-pro",
        "deepseek-v4-flash",
        "deepseek/deepseek-v4-pro",      # vendor-prefixed
        "deepseek/deepseek-v4-flash",
        "DeepSeek-V4-Pro",               # case-insensitive
        "deepseek-v4-flash-20260423",    # dated variant
        "deepseek-v5-pro",               # future V-series
        "deepseek-v10-ultra",            # double-digit future
    ])
    def test_v_series_passes_through(self, model):
        expected = model.split("/", 1)[-1].lower()
        assert _normalize_for_deepseek(model) == expected

    def test_deepseek_provider_preserves_v4_pro(self):
        """End-to-end via normalize_model_for_provider — a user selecting
        V4 Pro must reach DeepSeek's API as V4 Pro, not the V3 alias."""
        result = normalize_model_for_provider("deepseek-v4-pro", "deepseek")
        assert result == "deepseek-v4-pro"

    def test_deepseek_provider_preserves_v4_flash(self):
        result = normalize_model_for_provider("deepseek-v4-flash", "deepseek")
        assert result == "deepseek-v4-flash"
# ── DeepSeek regressions (existing behaviour still holds) ──────────────
class TestDeepseekCanonicalAndReasonerMapping:
    """Canonical pass-through and reasoner-keyword folding stay intact."""

    @pytest.mark.parametrize("model,expected", [
        ("deepseek-chat", "deepseek-chat"),
        ("deepseek-reasoner", "deepseek-reasoner"),
        ("DEEPSEEK-CHAT", "deepseek-chat"),
    ])
    def test_canonical_models_pass_through(self, model, expected):
        assert _normalize_for_deepseek(model) == expected

    @pytest.mark.parametrize("model", [
        "deepseek-r1",
        "deepseek-r1-0528",
        "deepseek-think-v3",
        "deepseek-reasoning-preview",
        "deepseek-cot-experimental",
    ])
    def test_reasoner_keywords_map_to_reasoner(self, model):
        assert _normalize_for_deepseek(model) == "deepseek-reasoner"

    @pytest.mark.parametrize("model", [
        "deepseek-chat-v3.1",    # 'chat' prefix, not V-series pattern
        "unknown-model",
        "something-random",
        "gpt-5",                 # non-DeepSeek names still fall through
    ])
    def test_unknown_names_fall_back_to_chat(self, model):
        assert _normalize_for_deepseek(model) == "deepseek-chat"