When `hermes update` stashes local changes and the restore hits merge
conflicts, the old code prompted the user to reset or keep conflict
markers. If the user declined the reset, git conflict markers
(<<<<<<< Updated upstream) were left in source files, making hermes
completely unrunnable with a SyntaxError on the next invocation.
Additionally, the interactive path called sys.exit(1), which killed
the entire update process before pip dependency install, skill sync,
and gateway restart could finish — even though the code pull itself
had succeeded.
Changes:
- Always auto-reset to clean state when stash restore conflicts
- Remove the "Reset working tree?" prompt (footgun)
- Remove sys.exit(1) — return False so cmd_update continues normally
- User's changes remain safely in the stash for manual recovery
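A minimal sketch of the new conflict path, assuming a subprocess-based git helper (function name, messages, and exact git commands are illustrative, not the actual implementation):

    # Illustrative only: rough shape of the auto-reset on stash-restore conflict.
    import subprocess

    def _restore_stash(repo_dir: str, print_fn=print) -> bool:
        pop = subprocess.run(["git", "stash", "pop"], cwd=repo_dir,
                             capture_output=True, text=True)
        if pop.returncode == 0:
            return True
        # A failed pop keeps the stash entry, so the user's changes are safe.
        # Reset the working tree instead of leaving conflict markers behind.
        subprocess.run(["git", "reset", "--hard", "HEAD"], cwd=repo_dir, check=False)
        print_fn("Stash restore hit conflicts; working tree was reset. "
                 "Your changes are still in the stash (git stash list).")
        return False  # cmd_update keeps going: pip install, skill sync, restart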
Also fixes a secondary bug where the conflict handling prompt used
bare input() instead of the input_fn parameter, which would hang
in gateway mode.
Tests updated: replaced prompt/sys.exit assertions with auto-reset
behavior checks; removed the "user declines reset" test (path no
longer exists).
After mid-loop compression (triggered by 413, context_overflow, or Anthropic
long-context tier errors), _compress_context() creates a new session in SQLite
and resets _last_flushed_db_idx to 0. However, conversation_history was not cleared,
so _flush_messages_to_session_db() computed:
flush_from = max(len(conversation_history), _last_flushed_db_idx) = max(200, 0) = 200
messages[200:] → empty (the compressed history has fewer than 200 messages)
This resulted in zero messages being written to the new session's SQLite store.
On resume, the user would see 'Session found but has no messages.'
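For concreteness, a simplified model of the buggy flush (helper names other than _flush_messages_to_session_db and the fields named above are illustrative):

    def _flush_messages_to_session_db(self) -> None:
        # Stale pre-compression history still reports e.g. 200 persisted messages.
        already_persisted = len(self.conversation_history) if self.conversation_history else 0
        flush_from = max(already_persisted, self._last_flushed_db_idx)
        new_messages = self.messages[flush_from:]   # empty: compressed history < 200 messages
        self._write_messages(new_messages)          # nothing reaches the new session's store
        self._last_flushed_db_idx = len(self.messages)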
The preflight compression path (line 7311) already had the fix:
conversation_history = None
This commit adds the same clearing to the three mid-loop compression sites:
- Anthropic long-context tier overflow
- HTTP 413 payload too large
- Generic context_overflow error
Reported by Aaryan (Nous community).
The text batching feature routes TEXT messages through
asyncio.create_task() + asyncio.sleep(delay). Even with delay=0,
the task fires asynchronously and won't complete before the synchronous
test assertions run. This broke 33 tests across Discord, Matrix, and
WeCom adapters.
When _text_batch_delay_seconds is 0 (the test fixture setting),
dispatch directly to handle_message() instead of going through
the async batching path. This preserves the pre-batching behavior
for tests while keeping batching active in production (default
delay 0.6s).
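Roughly, the bypass looks like this (assuming handle_message() is a coroutine; the other method and attribute names are invented for illustration):

    import asyncio

    async def _dispatch_text(self, event) -> None:
        if self._text_batch_delay_seconds <= 0:
            await self.handle_message(event)   # pre-batching behavior, used by tests
            return
        key = self._batch_key(event)
        self._pending_text[key] = event
        asyncio.create_task(self._flush_batch_later(key))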
Add streaming timeout documentation to three pages:
- guides/local-llm-on-mac.md: New 'Timeouts' section with table of all
three timeouts, their defaults, local auto-adjustments, and env var
overrides
- reference/faq.md: Tip box in the local models FAQ section
- user-guide/configuration.md: 'Streaming Timeouts' subsection under
the agent config section
Follow-up to #6967.
Set _text_batch_delay_seconds = 0 on test adapter fixtures so messages
dispatch immediately (bypassing async batching). This preserves the
existing synchronous assertion patterns while the batching logic is
tested separately in test_text_batching.py.
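A hypothetical fixture illustrating the change (fixture and factory names are invented):

    import pytest

    @pytest.fixture
    def adapter(build_test_adapter):
        a = build_test_adapter()
        a._text_batch_delay_seconds = 0   # messages dispatch immediately in tests
        return a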
22 tests covering:
- Single message dispatch after delay
- Split message aggregation (2-way and 3-way)
- Different chats/rooms not merged
- Adaptive delay for near-limit chunks
- State cleanup after flush
- Split continuation merging
All 5 platform adapters tested.
Feishu already had text batching with a static 0.6s delay. This adds
adaptive delay: waits 2.0s when a chunk is near the ~4096-char split
point since a continuation is almost certain.
Tracks _last_chunk_len on each queued event to determine the delay.
Configurable via HERMES_FEISHU_TEXT_BATCH_SPLIT_DELAY_SECONDS (default 2.0).
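The delay choice amounts to something like the following (the threshold margin and attribute names other than _last_chunk_len are assumptions):

    FEISHU_SPLIT_POINT = 4096        # approximate client split length
    NEAR_LIMIT_MARGIN = 64           # "near the split point" window (assumed)

    def _batch_delay_for(self, last_chunk_len: int) -> float:
        if last_chunk_len >= FEISHU_SPLIT_POINT - NEAR_LIMIT_MARGIN:
            # A continuation chunk is almost certain; wait longer before flushing.
            return self._split_delay_seconds       # from the env var above, default 2.0
        return self._text_batch_delay_seconds      # default 0.6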
Ref #6892
Ports the adaptive batching pattern from the Telegram adapter.
WeCom clients split messages around 4000 chars. Adaptive delay waits
2.0s when a chunk is near the limit, 0.6s otherwise. Only text messages
are batched; commands/media dispatch immediately.
Ref #6892
Ports the adaptive batching pattern from the Telegram adapter.
Matrix clients split messages around 4000 chars. Adaptive delay waits
2.0s when a chunk is near the limit, 0.6s otherwise. Only text messages
are batched; commands dispatch immediately.
Ref #6892
Cherry-picked from PR #6894 by SHL0MS with fixes:
- Only batch TEXT messages; commands/media dispatch immediately
- Use build_session_key() for proper session-scoped batch keys
- Consistent naming (_text_batch_delay_seconds)
- Proper Dict[str, MessageEvent] typing
Discord splits at 2000 chars (lowest of all platforms). Adaptive delay
waits 2.0s when a chunk is near the limit, 0.6s otherwise.
Cherry-picked from PR #6891 by SHL0MS.
When a chunk is near the 4096-char split point, wait 2.0s instead of 0.6s
since a continuation is almost certain.
Raise the default httpx stream read timeout from 60s to 120s for all
providers. Additionally, auto-detect local LLM endpoints (Ollama,
llama.cpp, vLLM) and raise the read timeout to HERMES_API_TIMEOUT
(1800s) since local models can take minutes for prefill on large
contexts before producing the first token.
The stale stream timeout already had this local auto-detection pattern;
the httpx read timeout did not, leaving a hard 60s wall that users could
neither diagnose nor raise (HERMES_STREAM_READ_TIMEOUT was undocumented).
Changes:
- Default HERMES_STREAM_READ_TIMEOUT: 60s -> 120s
- Auto-detect local endpoints -> raise to 1800s (user override respected)
- Document HERMES_STREAM_READ_TIMEOUT and HERMES_STREAM_STALE_TIMEOUT
- Add 10 parametrized tests
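A sketch of the resolution order, assuming a simple substring check for local endpoints (the real detection and helper names may differ):

    import os
    import httpx

    LOCAL_HINTS = ("localhost", "127.0.0.1", ":11434")   # e.g. Ollama's default port

    def stream_read_timeout(base_url: str) -> float:
        override = os.environ.get("HERMES_STREAM_READ_TIMEOUT")
        if override:
            return float(override)                 # explicit user setting always wins
        if any(hint in base_url for hint in LOCAL_HINTS):
            return 1800.0                          # local prefill can take minutes
        return 120.0                               # remote default, raised from 60s

    def make_client(base_url: str) -> httpx.Client:
        timeout = httpx.Timeout(connect=10.0, read=stream_read_timeout(base_url),
                                write=30.0, pool=10.0)
        return httpx.Client(base_url=base_url, timeout=timeout)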
Reported-by: Pavan Srinivas (@pavanandums)
When the model mentions <think> as literal text in its response (e.g.
"(/think not producing <think> tags)"), the streaming display treated it
as a reasoning block opener and suppressed everything after it. The
response box would close with truncated content and no error — the API
response was complete but the display ate it.
Root cause: _stream_delta() matched <think> anywhere in the text stream
regardless of position. Real reasoning blocks always start at the
beginning of a line; mentions in prose appear mid-sentence.
Fix: track line position across streaming deltas with a
_stream_last_was_newline flag. Only enter reasoning suppression when
the tag appears at a block boundary (start of stream, after a newline,
or after only whitespace on the current line). Add a _flush_stream()
safety net that recovers buffered content if no closing tag is found
by end-of-stream.
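A condensed sketch of the boundary check (the flag name comes from this commit; the method name and exact handling are illustrative):

    def _think_tag_opens_block(self, delta: str, tag_idx: int) -> bool:
        """Return True only when '<think>' sits at a block boundary."""
        before = delta[:tag_idx]
        if "\n" in before:
            # Tag starts a new line within this delta (leading whitespace allowed).
            return before.rsplit("\n", 1)[1].strip() == ""
        # Only whitespace before the tag in this delta: defer to whether the
        # previous delta ended at a newline (the flag starts True at stream start).
        return before.strip() == "" and self._stream_last_was_newline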
Also fixes three related issues discovered during investigation:
- anthropic_adapter: _get_anthropic_max_output() now normalizes dots to
hyphens so 'claude-opus-4.6' matches the 'claude-opus-4-6' table key
(was returning 32K instead of 128K)
- run_agent: send explicit max_tokens for Claude models on Nous Portal,
same as OpenRouter — both proxy to Anthropic's API which requires it.
Without it the backend defaults to a low limit that truncates responses.
- run_agent: reset truncated_tool_call_retries after successful tool
execution so a single truncation doesn't poison the entire conversation.
Previously /fast only supported gpt-5.4 and forced a provider switch to
openai-codex. Now supports all 13 models from OpenAI's Priority Processing
pricing table (gpt-5.4, gpt-5.4-mini, gpt-5.2, gpt-5.1, gpt-5, gpt-5-mini,
gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, o3, o4-mini).
Key changes:
- Replaced _FAST_MODE_BACKEND_CONFIG with _PRIORITY_PROCESSING_MODELS frozenset
- Removed provider-forcing logic — service_tier is now injected into whatever
API path the user is already on (Codex Responses, Chat Completions, or
OpenRouter passthrough)
- Added request_overrides support to chat_completions path in run_agent.py
- Updated messaging from 'Codex inference tier' to 'Priority Processing'
- Expanded test coverage for all supported models
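Roughly, the injection reduces to the following (the frozenset name is from this commit; the tier value and helper function are assumptions):

    _PRIORITY_PROCESSING_MODELS = frozenset({
        "gpt-5.4", "gpt-5.4-mini", "gpt-5.2", "gpt-5.1", "gpt-5", "gpt-5-mini",
        "gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano", "gpt-4o", "gpt-4o-mini",
        "o3", "o4-mini",
    })

    def apply_fast_mode(model: str, request_overrides: dict, fast_enabled: bool) -> dict:
        # No provider switch: the tier rides along on whatever API path is active
        # (Codex Responses, Chat Completions, or OpenRouter passthrough).
        if fast_enabled and model in _PRIORITY_PROCESSING_MODELS:
            return {**request_overrides, "service_tier": "priority"}
        return request_overrides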
Add /fast slash command to toggle OpenAI Codex service_tier between
normal and priority ('fast') inference. Only exposed for models
registered in _FAST_MODE_BACKEND_CONFIG (currently gpt-5.4).
- Registry-based backend config for extensibility
- Dynamic command visibility (hidden from help/autocomplete for
non-supported models) via command_filter on SlashCommandCompleter
- service_tier flows through request_overrides from route resolution
- Omit max_output_tokens for Codex backend (rejects it)
- Persists to config.yaml under agent.service_tier
Salvage cleanup: removed simple_term_menu/input() menu (banned),
bare /fast now shows status like /reasoning. Removed redundant
override resolution in _build_api_kwargs — single source of truth
via request_overrides from route.
Co-authored-by: Hermes Agent <hermes@nousresearch.com>
The Codex retry block and valid-token short-circuit in _refresh_entry()
both return early, bypassing the auth.json sync at the end of the method.
This adds _sync_device_code_entry_to_auth_store() calls on both paths
so refreshed/synced tokens are written back to auth.json regardless of
which code path succeeds.
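The shape of the change, heavily simplified (only _refresh_entry and _sync_device_code_entry_to_auth_store are real names; the rest is illustrative):

    def _refresh_entry(self, entry):
        if self._token_still_valid(entry):
            self._sync_device_code_entry_to_auth_store(entry)   # added: was bypassed
            return entry
        if entry.is_codex and self._retry_codex_refresh(entry):
            self._sync_device_code_entry_to_auth_store(entry)   # added: was bypassed
            return entry
        entry = self._full_refresh(entry)
        self._sync_device_code_entry_to_auth_store(entry)       # existing end-of-method sync
        return entry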
Adds opt-in creative thinking frameworks to ascii-video, p5js, and
manim-video skills, based on Lluminate (joelsimon.net/lluminate).
Only engaged when the user explicitly asks for creative, experimental,
or unconventional output. Straightforward requests are unaffected.
Each skill gets 2-3 strategies matched to its domain:
- ascii-video: Forced Connections, Conceptual Blending, Oblique Strategies
- p5js: Conceptual Blending, SCAMPER, Distance Association
- manim-video: SCAMPER, Assumption Reversal
Strategies sourced from creativity research (Boden, Eno, de Bono,
Koestler, Fauconnier & Turner, Osborn), formalized for LLM prompting
by Lluminate.
When OpenRouter returns 'No endpoints found that support tool use'
(HTTP 404), display a hint explaining that provider routing restrictions
may be filtering out tool-capable providers. Links the user directly
to the model's OpenRouter page to check which providers support tools.
The hint fires in the error display block that runs regardless of whether
fallback succeeds — so the user always understands WHY the model failed,
not just that it fell back.
Reported via Discord: running GLM-5.1 on OpenRouter, a US-based provider
restriction eliminated all 4 tool-supporting endpoints (DeepInfra,
Z.AI, Friendli, Venice), leaving only 7 non-tool providers.
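The hint amounts to a check along these lines (the error string is from this commit; the URL shape, function name, and parameters are assumptions):

    NO_TOOL_ENDPOINTS = "No endpoints found that support tool use"

    def maybe_print_tool_routing_hint(status: int, message: str, model_slug: str, print_fn) -> None:
        if status == 404 and NO_TOOL_ENDPOINTS in message:
            print_fn(
                "Hint: provider routing restrictions may be filtering out tool-capable "
                f"providers. Check provider tool support at https://openrouter.ai/{model_slug}"
            )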
MiniMax's Anthropic-compatible endpoints reject requests that include
the fine-grained-tool-streaming beta header — every tool-use message
triggers a connection error (~18s timeout). Regular chat works fine.
Add _common_betas_for_base_url() that filters out the tool-streaming
beta for Bearer-auth (MiniMax) endpoints while keeping all other betas.
All four client-construction branches now use the filtered list.
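A sketch of the filter (the beta identifier and the _uses_bearer_auth helper are assumptions; only _common_betas_for_base_url is from this commit):

    _TOOL_STREAMING_BETA = "fine-grained-tool-streaming-2025-05-14"   # assumed identifier

    def _common_betas_for_base_url(base_url: str, common_betas: list[str]) -> list[str]:
        if _uses_bearer_auth(base_url):   # True for MiniMax-style endpoints (assumed helper)
            # These Anthropic-compatible endpoints reject the tool-streaming beta
            # header; drop only it and keep every other beta.
            return [b for b in common_betas if b != _TOOL_STREAMING_BETA]
        return list(common_betas)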
Based on #6528 by @HiddenPuppy.
Original cherry-picked from PR #6688 by kshitijk4poor.
Fixes #6510, fixes #6555.
* feat: API server model name derived from profile name
For multi-user setups (e.g. OpenWebUI), each profile's API server now
advertises a distinct model name on /v1/models:
- Profile 'lucas' -> model ID 'lucas'
- Profile 'admin' -> model ID 'admin'
- Default profile -> 'hermes-agent' (unchanged)
Explicit override via API_SERVER_MODEL_NAME env var or
platforms.api_server.model_name config for custom names.
Resolves friction where OpenWebUI couldn't distinguish multiple
hermes-agent connections all advertising the same model name.
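The resolution order, sketched in Python (config access and the helper name are assumptions):

    import os

    def api_server_model_name(profile_name: str, config: dict) -> str:
        explicit = (os.environ.get("API_SERVER_MODEL_NAME")
                    or config.get("platforms", {}).get("api_server", {}).get("model_name"))
        if explicit:
            return explicit              # explicit override for custom names
        if profile_name and profile_name != "default":
            return profile_name          # e.g. profile 'lucas' -> model ID 'lucas'
        return "hermes-agent"            # default profile, unchanged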
* docs: multi-user setup with profiles for API server + Open WebUI
- api-server.md: added Multi-User Setup section, API_SERVER_MODEL_NAME
to config table, updated /v1/models description
- open-webui.md: added Multi-User Setup with Profiles section with
step-by-step guide, updated model name references
- environment-variables.md: added API_SERVER_MODEL_NAME entry
When a streaming response is cut mid-tool-call (connection drop, timeout),
the accumulated function.arguments is invalid JSON. The mock response
builder defaulted finish_reason to 'stop', so the agent loop treated it
as a valid completed turn and tried to execute tools with broken args.
Fix: validate tool call arguments with json.loads() during mock response
reconstruction. If any are invalid JSON, override finish_reason to
'length'. In the main loop's length handler, if tool calls are present,
refuse to execute and return partial=True with a clear error instead of
silently failing or wasting retries.
Also fixes _thinking_exhausted to not short-circuit when tool calls are
present — truncated tool calls are not thinking exhaustion.
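A sketch of the validation step (object shapes and the function name are assumptions):

    import json

    def finish_reason_for_rebuilt_response(tool_calls, finish_reason: str) -> str:
        for call in tool_calls or []:
            try:
                json.loads(call.function.arguments or "")
            except json.JSONDecodeError:
                # Stream was cut mid-tool-call: surface it as truncation,
                # never as a cleanly completed turn.
                return "length"
        return finish_reason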
Original cherry-picked from PR #6776 by AIandI0x1.
Closes #6638.
The test was mocking _vprint entirely, bypassing the suppress guard.
Switch to capturing _print_fn output so the real _vprint runs and
the guard suppresses retry noise as intended.
Replace 6 identical copies of the Termux detection function across
cli.py, browser_tool.py, voice_mode.py, status.py, doctor.py, and
gateway.py with a single shared implementation in hermes_constants.py.
Each call site imports the shared function under its original local name,
preserving all existing callers (internal references and test monkeypatches).
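Roughly (the detection heuristic and the aliased local name are assumptions; module and file names are from this commit):

    # hermes_constants.py
    import os

    def is_termux() -> bool:
        # Termux sets PREFIX to a path under the com.termux app sandbox.
        return "com.termux" in os.environ.get("PREFIX", "")

    # cli.py (and the other five modules): keep the original local name so
    # internal callers and test monkeypatches are untouched.
    from hermes_constants import is_termux as _is_termux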