Keep context-1m-2025-08-07 in OAuth requests by default so 1M-capable
subscriptions retain full context. When Anthropic rejects a request with
400 'long context beta is not yet available for this subscription',
disable the beta for the rest of the session, rebuild the client, and
retry once.
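The retry-once flow described above can be sketched roughly as follows. This is illustrative only; the real names (FailoverReason, build_anthropic_client, the session flag) live in agent/error_classifier.py, agent/anthropic_adapter.py, and run_agent.py, and ApiError/Session here are stand-ins.

```python
# Stand-ins for the real error and session types (illustrative only).
class ApiError(Exception):
    def __init__(self, status, message):
        super().__init__(message)
        self.status, self.message = status, message

class Session:
    oauth_1m_beta_disabled = False

def send_with_long_context_fallback(send, rebuild_client, session):
    """Try with the 1M beta; on the specific 400, drop it and retry once."""
    try:
        return send()
    except ApiError as err:
        gated = (
            err.status == 400
            and "long context beta" in err.message
            and "not yet available" in err.message
        )
        if not gated or session.oauth_1m_beta_disabled:
            raise
        session.oauth_1m_beta_disabled = True      # sticky for the session
        rebuild_client(drop_context_1m_beta=True)  # client without the beta
        return send()                              # retry exactly once
```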
Addresses #17680 (thanks @JayGwod for the clean reproduction) without
forcing every OAuth user off the 1M context window.
Changes:
- agent/error_classifier.py: new FailoverReason.oauth_long_context_beta_forbidden;
pattern matches 400 + 'long context beta' + 'not yet available'. Narrow
enough that the existing 429 tier-gate pattern keeps its own reason.
- agent/anthropic_adapter.py: _common_betas_for_base_url,
build_anthropic_client, build_anthropic_kwargs gain drop_context_1m_beta
kwarg. Default=False (1M stays). OAuth OAUTH_ONLY_BETAS unchanged.
- agent/transports/anthropic.py: build_kwargs forwards the flag.
- run_agent.py: self._oauth_1m_beta_disabled flag, retry-once guard,
recovery branch next to the image-shrink path. _rebuild_anthropic_client
honors the flag. The main build_kwargs call site threads it through for
fast-mode extra_headers.
- hermes_cli/doctor.py, hermes_cli/models.py: sibling OAuth /v1/models
probes get the same reactive retry — previously they'd falsely report
the Anthropic API as unreachable for affected subscriptions.
Tests: 2190 tests in tests/agent/ plus 94 adjacent integration tests pass. New unit
tests cover the classifier pattern (including the collision guard against
the 429 tier-gate) and the drop_context_1m_beta adapter behavior (default
keeps 1M, flag strips only 1M while preserving every other beta).
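A minimal sketch of the classifier pattern described above. The exact regex in agent/error_classifier.py is not reproduced here; this version only illustrates how anchoring on status 400 plus both phrases keeps the 429 tier-gate out of this reason.

```python
import re

# Hypothetical pattern; the real one lives in agent/error_classifier.py.
_LONG_CONTEXT_GATE = re.compile(
    r"long context beta.*not yet available", re.IGNORECASE
)

def classify(status, message):
    # Narrow on status 400 so the existing 429 tier-gate pattern
    # keeps its own FailoverReason.
    if status == 400 and _LONG_CONTEXT_GATE.search(message):
        return "oauth_long_context_beta_forbidden"
    return None
```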
* fix(anthropic): remove Claude Code fingerprinting from OAuth Messages API path
OAuth requests now identify as Hermes on the wire. Removed:
- "You are Claude Code, Anthropic's official CLI for Claude." system
prompt prepend
- Hermes Agent → Claude Code / Nous Research → Anthropic
system-prompt substitutions
- mcp_ tool-name prefix on outgoing tool schemas + message history
- Matching mcp_ strip on inbound tool_use blocks (strip_tool_prefix path
removed from AnthropicTransport.normalize_response, + all 5 call
sites in run_agent.py and auxiliary_client.py)
- user-agent: claude-cli/<v> (external, cli) and x-app: cli headers on
the Messages API client
Added:
- OAuth path strips context-1m-2025-08-07 — Anthropic rejects OAuth
requests carrying it with HTTP 400 'This authentication style is
incompatible with the long context beta header.'
Kept (auth plumbing, not identity spoofing):
- _is_oauth_token classifier and is_oauth flag threading
- Bearer vs x-api-key auth routing
- _OAUTH_ONLY_BETAS (claude-code-20250219, oauth-2025-04-20) — backend
requires these on the OAuth-gated Messages endpoint
- _OAUTH_CLIENT_ID (Claude Code's) — Anthropic doesn't issue OAuth
creds to third parties; this is the only way the login flow works
- claude-cli/<v> User-Agent on the OAuth token exchange + refresh
endpoints at platform.claude.com/v1/oauth/token — bare requests get
Cloudflare 1010 blocked
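The beta-header assembly implied by the lists above can be sketched like this. Constant names mirror the ones in agent/anthropic_adapter.py, but the function shape is illustrative, not the real signature.

```python
# Illustrative sketch of beta assembly; not the real adapter code.
CONTEXT_1M_BETA = "context-1m-2025-08-07"
OAUTH_ONLY_BETAS = ["claude-code-20250219", "oauth-2025-04-20"]

def betas_for_request(is_oauth, base_betas, drop_context_1m_beta=False):
    betas = list(base_betas)
    if drop_context_1m_beta:
        # Strip only the 1M beta; every other beta is preserved.
        betas = [b for b in betas if b != CONTEXT_1M_BETA]
    if is_oauth:
        # The OAuth-gated Messages endpoint requires these two betas.
        betas += [b for b in OAUTH_ONLY_BETAS if b not in betas]
    return betas
```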
Verified live against api.anthropic.com with a fresh sk-ant-oat01-*
token:
- claude-haiku-4-5 simple message: HTTP 200, 'OK' response
- claude-haiku-4-5 tool call: HTTP 200, stop_reason=tool_use, tool
named 'terminal' (no mcp_ prefix) round-tripped correctly
- Outgoing wire: no user-agent, no x-app, real Hermes identity in
system prompt, real tool name in schema
Closes/supersedes #16820 (mcp_ PascalCase normalization patch — no longer
needed since the mcp_ round-trip is gone).
* fix(anthropic): resolve_anthropic_token() reads credential pool first
Close the gap where ~/.hermes/auth.json → credential_pool.anthropic
(where hermes login + dashboard PKCE flow write OAuth tokens) was not
in resolve_anthropic_token()'s source list.
Before: users who authed via hermes login got the token written into
the pool, but legacy fallback code paths (auxiliary_client, models
catalog fetch, explicit-runtime path) that call resolve_anthropic_token()
saw None and raised 'No Anthropic credentials found' — even though the
token was sitting in auth.json.
New priority 1: pool.select() with env-sourced entries skipped. Skipping
env:* entries preserves the existing env-var priority logic further
down the chain (static env OAuth → refreshable Claude Code upgrade via
_prefer_refreshable_claude_code_token).
Surfaced while writing the hermes-agent-dev skill playbook for
'finding a live OAuth token for an E2E test'.
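The new resolution order can be sketched as below. Pool, PoolEntry, and the env fallback are simplified stand-ins; the real chain in resolve_anthropic_token() has more steps (including _prefer_refreshable_claude_code_token).

```python
# Simplified stand-ins for the credential pool (illustrative only).
class PoolEntry:
    def __init__(self, token, source):
        self.token, self.source = token, source

class Pool:
    def __init__(self, entries):
        self.entries = entries
    def select(self, skip_sources=()):
        for e in self.entries:
            if skip_sources and e.source.startswith(skip_sources):
                continue
            return e
        return None

def resolve_anthropic_token(pool, env_token=None):
    # New priority 1: pool entries written by `hermes login` / the PKCE
    # flow, skipping env:* entries so env-var priority below still wins
    # for env-sourced credentials.
    entry = pool.select(skip_sources=("env",))
    if entry is not None:
        return entry.token
    # Later priorities (simplified): static env OAuth token, etc.
    if env_token:
        return env_token
    raise LookupError("No Anthropic credentials found")
```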
---------
Co-authored-by: teknium1 <teknium@users.noreply.github.com>
NormalizedResponse and ToolCall now have backward-compat properties
so the agent loop can read them directly without the shim:
ToolCall: .type, .function (returns self), .call_id, .response_item_id
NormalizedResponse: .reasoning_content, .reasoning_details,
.codex_reasoning_items
This eliminates the 35-line shim and its 4 call sites in run_agent.py.
Also changes flush_memories guard from hasattr(response, 'choices')
to self.api_mode in ('chat_completions', 'bedrock_converse') so it
works with raw boto3 dicts too.
WS1 items 3+4 of Cycle 2 (#14418).
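The backward-compat surface on ToolCall can be sketched as follows. Only the property names come from the commit text; the field layout and provider_data plumbing here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    # Field layout is illustrative; only the property names below
    # are taken from the commit description.
    id: str
    name: str
    arguments: str
    provider_data: dict = field(default_factory=dict)

    # Legacy accessors so the agent loop can read the old shape
    # without the SimpleNamespace shim.
    @property
    def type(self):
        return "function"

    @property
    def function(self):
        # Old callers read .function.name / .function.arguments;
        # returning self keeps those paths working.
        return self

    @property
    def call_id(self):
        return self.provider_data.get("call_id", self.id)

    @property
    def response_item_id(self):
        return self.provider_data.get("response_item_id")
```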
The 3-layer chain (transport → v2 → v1) was collapsed to 2 layers in PR 7.
This collapses the remaining 2-layer chain (transport → v1 → NormalizedResponse
mapping in the transport) to 1 layer: v1 now returns NormalizedResponse directly.
Before: adapter returns (SimpleNamespace, finish_reason) tuple,
transport unpacks and maps to NormalizedResponse (22 lines).
After: adapter returns NormalizedResponse, transport is a
1-line passthrough.
Also updates ToolCall construction — adapter now creates ToolCall
dataclass directly instead of SimpleNamespace(id, type, function).
WS1 item 1 of Cycle 2 (#14418).
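The before/after shape of that boundary, as a sketch. The NormalizedResponse fields and raw-payload keys here are illustrative, not the real schema.

```python
from dataclasses import dataclass, field
from types import SimpleNamespace

@dataclass
class NormalizedResponse:
    content: str
    finish_reason: str
    tool_calls: list = field(default_factory=list)

# Before: the adapter returned a loose tuple the transport had to re-map
# into NormalizedResponse (~22 lines in the real code).
def adapter_before(raw):
    return SimpleNamespace(text=raw.get("text", "")), raw.get("stop_reason")

# After: the adapter builds NormalizedResponse itself ...
def adapter_after(raw):
    return NormalizedResponse(
        content=raw.get("text", ""),
        finish_reason=raw.get("stop_reason", "stop"),
    )

# ... so the transport becomes a one-line passthrough.
def transport_normalize(raw):
    return adapter_after(raw)
```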
Consolidate 4 per-transport lazy singleton helpers (_get_anthropic_transport,
_get_codex_transport, _get_chat_completions_transport, _get_bedrock_transport)
into one generic _get_transport(api_mode) with a shared dict cache.
Collapse the 65-line main normalize block (3 api_mode branches, each with
its own SimpleNamespace shim) into 7 lines: one _get_transport() call +
one _nr_to_assistant_message() shared shim. The shim extracts provider_data
fields (codex_reasoning_items, reasoning_details, call_id, response_item_id)
into the SimpleNamespace shape downstream code expects.
Wire chat_completions and bedrock_converse normalize through their transports
for the first time — these were previously falling into the raw
response.choices[0].message else branch.
Remove 8 dead codex adapter imports that have zero callers after PRs 1-6.
Transport lifecycle improvements:
- Eagerly warm transport cache at __init__ (surfaces import errors early)
- Invalidate transport cache on api_mode change (switch_model, fallback
activation, fallback restore, transport recovery) — prevents stale
transport after mid-session provider switch
run_agent.py: -32 net lines (11,988 -> 11,956).
PR 7 of the provider transport refactor.
Anthropic's API can legitimately return content=[] with stop_reason="end_turn"
when the model has nothing more to add after a turn that already delivered the
user-facing text alongside a trivial tool call (e.g. memory write). The transport
validator was treating that as an invalid response, triggering 3 retries that
each returned the same valid-but-empty response, then failing the run with
"Invalid API response after 3 retries."
The downstream normalizer already handles empty content correctly (empty loop
over response.content, content=None, finish_reason="stop"), so the only fix
needed is at the validator boundary.
Tests:
- Empty content + stop_reason="end_turn" → valid (the fix)
- Empty content + stop_reason="tool_use" → still invalid (regression guard)
- Empty content without stop_reason → still invalid (existing behavior preserved)
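The relaxed validator check amounts to a predicate like the following sketch; the real logic sits at the transport validator boundary and this function shape is an assumption.

```python
def is_valid_anthropic_response(content, stop_reason):
    """Accept empty content only for a clean end_turn (illustrative)."""
    if content:
        return True
    # Empty content is legitimate only when the model cleanly ended the
    # turn; an empty tool_use turn, or a missing stop_reason, is still
    # treated as invalid and retried.
    return stop_reason == "end_turn"
```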