fix(model_metadata): skip endpoint probe for known providers (Copilot context bug) (#2507)

The context length resolver was querying the /models endpoint for known
providers like GitHub Copilot, which returns a provider-imposed limit
(128k) instead of the model's actual context window (400k for gpt-5.4).
Since this check happened before the models.dev lookup, the wrong value
won every time.

Fix:
- Add api.githubcopilot.com and models.github.ai to _URL_TO_PROVIDER
- Skip the endpoint metadata probe for known providers — their /models
  data is unreliable for context length. models.dev has the correct
  per-provider values.

Reported by danny [DUMB] — gpt-5.4 via Copilot was resolving to 128k
instead of the correct 400k from models.dev.
Committed by Teknium on 2026-03-22 08:15:06 -07:00 (via GitHub)
parent afe2f0abe1
commit 72a6d7dffe


@@ -164,6 +164,8 @@ _URL_TO_PROVIDER: Dict[str, str] = {
     "openrouter.ai": "openrouter",
     "inference-api.nousresearch.com": "nous",
     "api.deepseek.com": "deepseek",
+    "api.githubcopilot.com": "copilot",
+    "models.github.ai": "copilot",
 }
@@ -788,8 +790,12 @@ def get_model_context_length(
     if cached is not None:
         return cached
-    # 2. Active endpoint metadata for explicit custom routes
-    if _is_custom_endpoint(base_url):
+    # 2. Active endpoint metadata for truly custom/unknown endpoints.
+    # Known providers (Copilot, OpenAI, Anthropic, etc.) skip this — their
+    # /models endpoint may report a provider-imposed limit (e.g. Copilot
+    # returns 128k) instead of the model's full context (400k). models.dev
+    # has the correct per-provider values and is checked at step 5+.
+    if _is_custom_endpoint(base_url) and not _is_known_provider_base_url(base_url):
         endpoint_metadata = fetch_endpoint_model_metadata(base_url, api_key=api_key)
         matched = endpoint_metadata.get(model)
         if not matched: