feat(gateway): refine Platform._missing_ and platform-connected dispatch
Restricts plugin-name acceptance to the bundled plugin scan + registry
(no arbitrary strings polluting the enum), pulls per-platform connectivity
checks into a _PLATFORM_CONNECTED_CHECKERS lambda map with a clean
_is_platform_connected method, and adds tests covering the checker map,
plugin platform interface, and IRC setup wizard.
Closes remaining functional gaps and adds documentation.
webhook.py: Cross-platform delivery now checks the plugin registry
for unknown platform names instead of hardcoding 15 names in a tuple.
Plugin platforms can receive webhook-routed deliveries.
prompt_builder: Platform hints (system prompt LLM guidance) now fall
back to the plugin registry's platform_hint field. Plugin platforms
can tell the LLM 'you're on IRC, no markdown.'
PlatformEntry: Added platform_hint field for LLM guidance injection.
IRC adapter: Added acquire_scoped_lock/release_scoped_lock in
connect/disconnect to prevent two profiles from using the same IRC
identity. Added platform_hint for IRC-specific LLM guidance.
Removed dead token-empty-warning extension for plugin platforms
(plugin adapters handle their own env vars via check_fn).
website/docs/developer-guide/adding-platform-adapters.md:
- Added 'Plugin Path (Recommended)' section with full code examples,
PLUGIN.yaml template, config.yaml examples, and a table showing all
18 integration points the plugin system handles automatically
- Renamed built-in checklist to clarify it's for core contributors
gateway/platforms/ADDING_A_PLATFORM.md:
- Added Plugin Path section pointing to the reference implementation
and full docs guide
- Clarified built-in path is for core contributors only
Reloading MCP servers rebuilds the tool set for the active session, which
invalidates the provider prompt cache (tool schemas are baked into the
system prompt). The next message re-sends full input tokens — can be
expensive on long-context or high-reasoning models.
To surface that cost, /reload-mcp now routes through a new slash-confirm
primitive with three options: Approve Once / Always Approve / Cancel.
'Always Approve' persists approvals.mcp_reload_confirm: false so future
reloads run silently.
Coverage:
* Classic CLI (cli.py) — interactive numbered prompt.
* TUI (tui_gateway + Ink ops.ts) — text warning on first call; `now` /
`always` args skip the gate; `always` also persists the opt-out.
* Messenger gateway — button UI on Telegram (inline keyboard), Discord
(discord.ui.View), Slack (Block Kit actions); text fallback on every
other platform via /approve /always /cancel replies intercepted in
gateway/run.py _handle_message.
* Config key: approvals.mcp_reload_confirm (default true).
* Auto-reload paths (CLI file watcher, TUI config-sync mtime poll) pass
confirm=true so they do NOT prompt.
Implementation:
* tools/slash_confirm.py — module-level pending-state store used by all
adapters and by the CLI prompt. Thread-safe register/resolve/clear.
* gateway/platforms/base.py — send_slash_confirm hook (default 'Not
supported' → text fallback).
* gateway/run.py — _request_slash_confirm helper + text intercept in
_handle_message (yields to in-progress tool-exec approvals so
dangerous-command /approve still unblocks the tool thread first).
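A minimal sketch of the thread-safe pending-state store described above, assuming the register/resolve/clear API named in the commit (field names and token scheme are illustrative):

```python
import threading
import uuid

_lock = threading.Lock()
_pending: dict[str, dict] = {}


def register(prompt: str) -> str:
    """Register a pending confirmation and return its token."""
    token = uuid.uuid4().hex
    with _lock:
        _pending[token] = {"prompt": prompt}
    return token


def resolve(token: str, approved: bool) -> bool:
    """Resolve once; returns False if already resolved or cleared,
    which is what makes double-clicks on a platform button atomic."""
    with _lock:
        entry = _pending.pop(token, None)
    if entry is None:
        return False
    entry["approved"] = approved
    return True


def clear(token: str) -> None:
    with _lock:
        _pending.pop(token, None)
```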
Tests:
* tests/tools/test_slash_confirm.py — primitive lifecycle + async
resolution + double-click atomicity (16 tests).
* tests/hermes_cli/test_mcp_reload_confirm_gate.py — default-config
shape + deep-merge preserves user opt-out (5 tests).
Targeted runs (hermetic): 89 passed (slash-confirm, config gate,
existing agent cache, existing telegram approval buttons).
Adds a public reload path for the in-process skill caches so newly
installed (or removed) skills become visible mid-session without a
gateway restart. Mirrors the shape of /reload-mcp.
Three surfaces:
* /reload-skills slash command — CLI (cli.py) and gateway (gateway/run.py),
with /reload_skills alias for Telegram autocomplete and an explicit
Discord registration.
* skills_reload agent tool (tools/skills_tool.py) — lets agents/subagents
pick up freshly-installed skills via tool call.
* agent.skill_commands.reload_skills() — shared helper that clears
_skill_commands, _SKILLS_PROMPT_CACHE (in-process LRU), and the
on-disk .skills_prompt_snapshot.json, then returns an added/removed
diff plus the new total count.
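The added/removed diff the helper returns can be sketched as a simple set comparison (the return-shape field names are assumptions, not the actual API):

```python
def reload_diff(old: set[str], new: set[str]) -> dict:
    """Compare skill names before/after a cache clear and report what
    changed plus the new total count."""
    return {
        "added": sorted(new - old),
        "removed": sorted(old - new),
        "total": len(new),
    }
```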
Tested:
* tests/agent/test_skill_commands_reload.py (9 cases)
* tests/cli/test_cli_reload_skills.py (3 cases)
* tests/gateway/test_reload_skills_command.py (4 cases)
Use case: NemoClaw / OpenShell-style sandboxed orchestrators that drop
skills into ~/.hermes/skills mid-session, plus agentic flows where the
agent itself installs a skill via the shell tool and needs it bound
without a gateway restart. The Python helper
clear_skills_system_prompt_cache(clear_snapshot=True) already exists
internally — this PR just exposes it via slash command and tool.
CI Tests workflow has been red on main for 40+ consecutive runs. This
commit recovers every failure visible in run 25130722163 (most recent
completed run prior to this PR).
Root causes, by group:
Test-mock drift after product landed (fix: update mocks)
- test_mcp_structured_content / test_mcp_dynamic_discovery (6 tests):
product added _rpc_lock (#02ae15222) and _schedule_tools_refresh
(#1350d12b0) without updating sibling test files. Install a real
asyncio.Lock inside the fake run-loop and patch at _schedule_tools_refresh.
- test_session.py: renamed normalize_whatsapp_identifier → canonical_
whatsapp_identifier upstream; keep a local alias so the legacy tests
keep working.
- test_run_progress_topics Slack DM test: PR #8006 made Slack default
tool_progress=off; explicitly set it to 'all' in the test fixture so
the progress-callback path still runs. Also read tool_progress_callback
at call time rather than freezing it in FakeAgent.__init__ — production
assigns it AFTER construction.
- test_tui_gateway_server session-create/close race: session.create now
defers _start_agent_build behind a 50ms timer — wait for the build
thread to enter _make_agent before closing, otherwise the orphan-
cleanup path never runs.
- test_protocol session.resume: product get_messages_as_conversation now
takes include_ancestors kwarg; accept **_kwargs in the test stub.
- test_copilot_acp_client redaction: redactor is OFF by default (snapshots
HERMES_REDACT_SECRETS at import); patch agent.redact._REDACT_ENABLED=True
for the duration of the test.
- test_minimax_provider: after #17171, dots in non-Anthropic model names
stay dots even with preserve_dots=False. Assert the new invariant
rather than the old 'broken for MiniMax' behavior.
- test_update_autostash: updater now scans `ps -A` for dashboard PIDs;
the test's catch-all subprocess.run stub needed stdout/stderr fields.
- test_accretion_caps: read_timestamps dict is populated lazily when
os.path.getmtime succeeds. Use .get("read_timestamps", {}) to tolerate
CI filesystems where the stat races file creation.
Change-detector tests (fix: rewrite as structural invariants)
- test_credential_sources_registry_has_expected_steps: was a frozen set
comparison that broke when minimax-oauth was added. Rewrite as an
invariant check (every step has description, no dupes, core steps
present) per AGENTS.md 'don't write change-detector tests'.
xdist ordering / test pollution (fix: reset state, use module-local patches)
- test_setup vercel: sibling test saved VERCEL_PROJECT_ID='project' to
os.environ via save_env_value() and never cleared it. monkeypatch.delenv
the VERCEL_* vars in the link-file test.
- test_clipboard TestIsWsl: GitHub Actions is on Azure VMs whose real
/proc/version often contains 'microsoft'. Patching builtins.open with
mock_open didn't reliably intercept hermes_constants.is_wsl's call in
xdist workers that had already cached _wsl_detected=True from an
earlier test. Patch hermes_constants.open directly and add
teardown_method to reset the cache after each test.
Pytest-asyncio cancellation hangs (fix: bound product await with timeout)
- test_session_split_brain_11016 (3 params) + test_gateway_shutdown
cancel-inflight: under pytest-asyncio 1.3.0, 'await task' and
'asyncio.gather(cancelled_tasks)' can stall for 30s when the cancelled
task's finally block awaits typing-task cleanup. Bound both with
asyncio.wait_for(..., timeout=5.0) and asyncio.shield — the stragglers
are released from adapter tracking and allowed to finish unwinding in
the background. This is also a legitimate hardening: a wedged finally
shouldn't stall the caller's dispatch or a gateway shutdown.
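The bounded-await pattern can be sketched as follows; this is the general `wait_for` + `shield` shape, not the exact fixture code:

```python
import asyncio


async def cancel_and_bound(task: asyncio.Task, timeout: float = 5.0) -> None:
    """Cancel a task without letting a wedged finally-block stall the
    caller past `timeout`. shield() stops wait_for's timeout from
    re-cancelling the task's cleanup; stragglers keep unwinding in the
    background."""
    task.cancel()
    try:
        await asyncio.wait_for(asyncio.shield(task), timeout=timeout)
    except (asyncio.CancelledError, asyncio.TimeoutError):
        pass
```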
Orphan UI config (fix: merge tiny tab into messaging category)
- test_web_server test_no_single_field_categories: the telegram.reactions
config field lived in its own 'telegram' schema category with no
siblings. Fold it under 'discord' via _CATEGORY_MERGE so the dashboard
doesn't render an orphan single-field tab.
Local verification: 38/38 originally-failing tests pass; 4044/4044
gateway tests pass; 684/684 targeted subset (all 16 touched test files)
passes.
Address Copilot review on PR #16666:
1. **Duplicate event on every tool start** — both ``tool_progress_callback``
and ``tool_start_callback`` fire side-by-side in ``run_agent.py``, so
wiring both into chat completions emitted *two* ``hermes.tool.progress``
events per real tool call. Drop the legacy ``_on_tool_progress`` emit
entirely; ``_on_tool_start`` now produces a single unified event that
carries the legacy ``tool``/``emoji``/``label`` fields plus the new
``toolCallId``/``status`` correlation fields. Label is computed inline
via ``build_tool_preview`` so callers do not need to pre-format it.
2. **Weak per-event correlation in the regression test** — the previous
assertion checked that a ``toolCallId`` appeared *somewhere* in the
aggregate, which would have passed even if ``running`` lacked the id.
Collect ``(status, toolCallId)`` per event and assert each event
carries the correct pair, plus exactly two events on the wire (no
silent duplication regression).
The two existing chat-completions tool-progress tests are updated to fire
``tool_start_callback`` instead of ``tool_progress_callback``, matching
production reality where ``run_agent`` always pairs them.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds two API server endpoints for external UIs and orchestrators:
- GET /v1/capabilities — machine-readable feature discovery so clients
can detect which Runs API / SSE / auth features this Hermes version
supports before depending on them.
- GET /v1/runs/{run_id} — pollable run status so dashboards can check
queued/running/completed/failed/cancelled/stopping state without
holding an SSE connection open.
Also moves request validation ahead of run allocation so invalid
payloads no longer leave orphaned entries in _run_streams waiting for
the TTL sweep.
task_id is intentionally kept as "default" for the Runs API to
preserve the shared-sandbox model used by CLI, gateway, and the
existing _run_agent_with_callbacks path. session_id is surfaced in
run status for external-UI correlation only.
Salvage of PR #17085 by @Magaav.
QR-login connects an iLink bot identity (...@im.bot), not a scriptable
personal WeChat account. iLink typically does not deliver ordinary WeChat
group events to these bots, so WEIXIN_GROUP_POLICY / WEIXIN_GROUP_ALLOWED_USERS
often have no effect regardless of value.
- Setup wizard: print iLink-bot caveat before the group-policy prompt; relabel
the allowlist input as 'group chat IDs (not member user IDs)'; note that
'open' / 'allowlist' only take effect if iLink delivers group events.
- Adapter: log a WARNING at connect() when WEIXIN_GROUP_POLICY is non-disabled
so the limitation is surfaced in gateway logs, not just docs.
- Docs: add a top-of-page warning callout to weixin.md explaining the iLink
bot identity, narrow the 'DM and group messaging' feature line to DM-only
with a group caveat, tighten the Group Policy section and troubleshooting
row, and clarify WEIXIN_GROUP_ALLOWED_USERS as group IDs (not user IDs)
in weixin.md and environment-variables.md.
Closes #17094
The Weixin adapter only recognized errcode=-14 as a session-expired
signal. However, iLink also returns ret=-2 with errmsg="unknown error"
for the same underlying condition (stale session). The adapter treated
ret=-2 as a rate-limit, exhausting retries with the same stale
context_token instead of refreshing the session.
Added _is_stale_session_ret() helper that distinguishes ret=-2 with
"unknown error" from genuine rate limits. Updated both the poll loop
and _send_text_chunk to use the helper.
Fixes NousResearch/hermes-agent#17228
- _markdown_to_signal docstring claimed SPOILER support but the regex list
never handled ``||...||``. Correct the docstring to match the four
actually-supported styles (BOLD / ITALIC / STRIKETHROUGH / MONOSPACE).
Signal's SPOILER bodyRange would need dedicated ``||spoiler||`` parsing
and is left for a follow-up.
- scripts/release.py: add exiao's noreply email to AUTHOR_MAP so the
contributor-attribution gate accepts their cherry-picked commit.
Three Signal adapter improvements that depend on the no-edit-mode
plumbing from the previous commit.
1. Native formatting (markdown -> Signal bodyRanges)
Signal renders markdown as literal characters (**bold**, `code`, #
heading), which looks broken. Added _markdown_to_signal(text) that
strips markdown syntax and emits Signal-native bodyRanges as
start:length:STYLE entries. Offsets are computed in UTF-16 code
units so non-BMP emoji stay aligned. Supports BOLD, ITALIC, STRIKE,
MONO, and headings mapped to BOLD. Fenced code and inline code are
handled; link syntax is unwrapped to visible text + URL.
Includes edge-case fixes reported previously:
- Bullet lists ("* item") no longer misidentified as italics
- URLs containing underscores no longer italicized around the dot
2. Reply-quote context
Parses dataMessage.quote on inbound messages and populates
MessageEvent.raw_message with sender + timestamp_ms. This lets the
gateway's existing [Replying to: "..."] injector (gateway/run.py)
work on Signal, matching Telegram/Matrix behavior.
3. Processing reactions
Overrides on_processing_start -> hourglass and on_processing_complete
-> checkmark via the sendReaction JSON-RPC using targetAuthor and
targetTimestamp pulled from raw_message. Uses the ProcessingOutcome
enum introduced in the previous commit.
Also sets SUPPORTS_MESSAGE_EDITING = False on SignalAdapter so the
no-edit streaming path activates.
Tests: 40+ new tests in tests/gateway/test_signal_format.py covering
markdown conversion, UTF-16 offset correctness with non-BMP emoji,
bullet-list and URL false-positive regressions, reply-quote extraction,
and reaction payload shape. Regression extensions to test_signal.py.
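The UTF-16 code-unit accounting described in point 1 can be sketched as a single helper; non-BMP characters (emoji above U+FFFF) are surrogate pairs in UTF-16 and count as two units:

```python
def utf16_len(s: str) -> int:
    """Length of `s` in UTF-16 code units, the unit Signal bodyRange
    offsets are expressed in."""
    return sum(2 if ord(ch) > 0xFFFF else 1 for ch in s)
```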
Discord's per-app command-management bucket is ~5 writes / 20 s. A
mass-prune-plus-upsert reconcile (77 orphans + 30 desired = 107 writes
in the reported case) can't finish under the old flat 30 s budget, and
the subsequent reconnect retries inside the rate-limit cooldown also
time out — leaving slash commands broken for ~60 min until the bucket
fully recovers.
Bump the timeout to 600 s so realistic bursts drain, and update the
warning message to point at the saturated bucket instead of a hardcoded
30 s.
The 600 s cap still guards against a true hang.
Credit to @Tranquil-Flow for PR #16739 and @davidbordenwi for reporting
#16713 with the bucket-math diagnosis.
Closes #16713.
Co-authored-by: Teknium <teknium@nousresearch.com>
Mechanical cleanup across 43 files — removes 46 unused imports
(F401) and 14 unused local variables (F841) detected by
`ruff check --select F401,F841`. Net: -49 lines.
Also fixes a latent NameError in rl_cli.py where `get_hermes_home()`
was called at module line 32 before its import at line 65 — the
module never imported successfully on main. The ruff audit surfaced
this because it correctly saw the symbol as imported-but-unused
(the call happened before the import ran); the fix moves the import
to the top of the file alongside other stdlib imports.
One `# noqa: F401` kept in hermes_cli/status.py for `subprocess`:
tests monkeypatch `hermes_cli.status.subprocess` as a regression
guard that systemctl isn't called on Termux, so the name must
exist at module scope even though the module body doesn't reference
it. Docstring explains the reason.
Also fixes an invalid `# noqa:` directive in
gateway/platforms/discord.py:308 that lacked a rule code.
Co-authored-by: teknium1 <teknium@users.noreply.github.com>
Network errors through proxies (e.g. sing-box) can leave httpx
connections in a half-closed state occupying pool slots. After enough
reconnect cycles the 256-connection default fills up entirely, causing
Pool timeout: All connections in the connection pool are occupied.
Fix: cycle only the getUpdates request object (_request[0]) via
shut-down + re-initialize before restarting polling. This drains stale
connections without touching the general request (_request[1]) that
concurrent send_message / edit_message calls rely on.
The drain is applied to both _handle_polling_network_error and
_handle_polling_conflict reconnect paths via a shared
_drain_polling_connections() helper. Failures in the drain are
swallowed so reconnect always proceeds.
Based on #16466 by @Mirac1eSky.
The rate-limit branch added by the original PR did sleep+continue with
no attempt to record the last error, so persistent iLink -2 responses
exhausted the retry loop and hit 'assert last_error is not None',
raising AssertionError instead of a descriptive RuntimeError.
Record last_error = RuntimeError(...) before continuing, and break out
of the loop on the final attempt instead of sleeping uselessly.
- Change MAX_MESSAGE_LENGTH from 4000 to 2000 to match Weixin iLink API limit
- Add RATE_LIMIT_ERRCODE = -2 handling with 3x backoff retry
- Increase default send_chunk_delay_seconds from 0.35 to 1.5 to avoid rate limits
- Increase default send_chunk_retries from 2 to 4 for better reliability
- Use _split_text() in send() to chunk long messages before delivery
Fixes #16411
Extract the islink/realpath guard from the 16743 fix into a single
atomic_replace() helper in utils.py, then migrate every os.replace()
call site in the codebase to use it.
The original PR #16777 correctly identified and fixed the bug, but
only patched 9 of ~24 call sites. The same bug class (managed
deployments that symlink state files silently losing the link on
every write) still existed at auth.json, sessions file, gateway
config, env_loader, webhook subscriptions, debug store, model
catalog, pairing, google OAuth, nous rate guard, and more.
Rather than add another 10+ copies of the same three-line guard,
consolidate into atomic_replace(tmp, target) which:
- resolves symlinks via os.path.realpath before os.replace
- returns the resolved real path so callers can re-apply permissions
- is a drop-in replacement for os.replace at the use sites
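The consolidated guard is essentially three lines; a minimal sketch:

```python
import os


def atomic_replace(tmp: str, target: str) -> str:
    """Replace `target` with `tmp`, following symlinks first so a
    symlinked state file keeps its link instead of being clobbered by
    a plain file on every write. Returns the resolved real path so
    callers can re-apply permissions."""
    real = os.path.realpath(target)
    os.replace(tmp, real)
    return real
```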
Changes:
- utils.py: new atomic_replace() helper + atomic_json_write /
atomic_yaml_write now call it instead of inlining the guard
- 16 files: all os.replace() call sites migrated to atomic_replace()
- agent/{google_oauth, nous_rate_guard, shell_hooks}.py
- cron/jobs.py
- gateway/{pairing, session, platforms/telegram}.py
- hermes_cli/{auth, config, debug, env_loader, model_catalog, webhook}.py
- tools/{memory_tool, skill_manager_tool, skills_sync}.py
Tests: tests/test_atomic_replace_symlinks.py pins the invariant for
atomic_replace + atomic_json_write + atomic_yaml_write, covers plain
files, first-time creates, broken symlinks, and permission preservation.
Refs #16743
Builds on #16777 by @vominh1919.
The mention_user_id injection from #38a6bada9 unconditionally attached an
@user:server mention pill + MSC3952 m.mentions.user_ids payload to every
outbound reply and every tool-progress status update. The stated intent
was push notifications in muted rooms, but shipped as always-on in every
room, DM or group, muted or not — so every reply pinged the user.
- gateway/platforms/base.py: stop injecting mention_user_id into send
metadata on every reply; restore the original _thread_metadata passthrough.
- gateway/run.py: drop mention_user_id from status-thread metadata.
- gateway/platforms/matrix.py: drop the mention-pill append block in
_send_text that consumed the metadata. Keep the reaction-based exec
approval half of #38a6bada9 and the inbound/outbound m.mentions
handling (unrelated to the per-reply ping).
Reported by Elkim [NOUS] on Discord.
Co-authored-by: teknium1 <teknium@users.noreply.github.com>
Narrow plaintext shortcut that rewrites a tiny set of admin phrases
("restart gateway", "restart the gateway", "restart hermes") into the
/restart slash command, but only in DMs. Scope is intentionally tight:
- DM text messages only — group chats keep natural-language semantics
- Exact restart-style phrases only
- Skips anything already starting with "/"
Without this, the LLM can receive "restart gateway" as a user turn and
try to satisfy it via the terminal tool (systemctl restart ...). That
kills the gateway while the originating agent is still running, which
leaves systemd in "draining" state waiting on a process it's about to
kill. Routing the phrase to the slash-command dispatcher bypasses the
agent loop and uses the existing restart machinery (request_restart).
Called once, at the adapter level in BasePlatformAdapter.handle_message,
so every platform gets it for free and pending-message reinjection is
covered by the same call site.
Adds 2 Telegram-parametrized e2e tests: DM routes to request_restart,
group chats fall through to the normal agent path.
Without this, every Matrix bot started under hermes-agent shows the
"Encrypted by a device not verified by its owner" badge in Element
indefinitely, because the cross-signing chain (master → SSK → device)
was never published. Operators currently have to write their own
bootstrap script and remember to run it once per bot — and it's easy
to get wrong (the obvious base64.b64encode().decode() produces padded
keyids that matrix-rust-sdk silently rejects in /keys/query, so even
correctly-signed keys fail to load identity in Element).
mautrix already has the right primitive: generate_recovery_key() does
the full flow — generate seeds, upload privates to SSSS, publish
publics to the homeserver, sign the current device with the new SSK,
and return the human-readable recovery key. We invoke it once on
startup if the bot has no existing cross-signing identity, and log
the recovery key with a clear instruction to save it for future
restarts via MATRIX_RECOVERY_KEY (which the existing recovery-key
path already consumes).
Skipped when MATRIX_RECOVERY_KEY is set (existing path takes over)
or when the bot already has cross-signing keys on the homeserver
(get_own_cross_signing_public_keys returns non-None).
Bootstrap failure is non-fatal — logged with hint about UIA; the bot
continues without cross-signing and Element will show the warning
that prompted this PR. That matches the existing soft-fail pattern
for verify_with_recovery_key.
Tested against Continuwuity 0.5.7 (no UIA required). Synapse with
UIA enabled will need a follow-up PR to thread MATRIX_PASSWORD
through to /keys/device_signing/upload.
Five ``except Exception as exc:`` blocks in the Matrix adapter logged
only ``str(exc)`` without ``exc_info=True``:
- _reverify_keys_after_upload → post-upload key verification failure
- _upload_keys_if_needed → initial device-key query failure
- _upload_keys_if_needed → re-upload device keys failure
- _upload_keys_if_needed → initial device key upload failure
- connect → whoami / access-token validation failure
The E2EE key paths here are security-critical: a silent traceback-
less failure during device-key verification or upload makes it
hard for operators to tell whether their Matrix bot is failing
because of a stale token, a federation timeout, or an olm state
mismatch — all three fail with different tracebacks, which
``str(exc)`` alone flattens.
The contributing guide asks for ``exc_info=True`` on error logs.
Append it to each of the five call sites. Pure logging enrichment.
- Wrap _sync_loop sync() call with asyncio.wait_for(timeout=45s) to guard
against TCP-level hangs that the Matrix long-poll timeout cannot catch
- Add logger.debug at the top of _on_room_message so LOG_LEVEL=DEBUG
confirms whether callbacks fire at all (diagnoses #5819, #7914, #12614)
- Add logger.debug when MATRIX_REQUIRE_MENTION silently drops a message,
pointing users to the env var to disable the filter
Adapted for current mautrix-python adapter (PR was written against the
legacy matrix-nio adapter).
Closes #5819
The typing-indicator refresh loop in BasePlatformAdapter._keep_typing
awaited each send_typing call unconditionally. Each call is an HTTP
round-trip to the platform API (Telegram/Discord), normally ~100ms. When
the same network instability that causes upstream provider timeouts
(e.g. Anthropic capacity blips slowing first-token latency past the
120s stream-read timeout) also slows the platform typing API to
multi-second response times, the refresh loop stalls inside the await.
Platform-side typing expires at ~5s, so the bubble dies and stays dead
until the stuck send_typing call returns — right when the user most
needs the 'still working' signal and instead sees a bot that looks
dead, then asks 'wtf are you doing' which itself interrupts the
eventually-recovering turn.
Bound each send_typing with asyncio.wait_for (1.5s cap, derived from
interval so it's always below the 2s cadence). Slow calls get abandoned
so the next scheduled tick fires a fresh send_typing on schedule. As
long as any one of them reaches the platform within its ~5s
typing-expiry window, the bubble stays visible across the stall.
Also catches non-timeout send_typing exceptions (transient HTTP errors)
so one bad tick doesn't terminate the whole loop.
Tests: 4 new in tests/gateway/test_keep_typing_timeout.py covering
slow-send non-blocking, fast-send still-awaited, exception resilience,
and paused-chat regression guard.
Handle queued-title ValueError cleanup during session init, harden
Discord message source building for test stubs, and fix the Dockerfile
contract test syntax error. Also refresh the TUI lockfile and Nix build
flags so nix ubuntu-latest no longer fails on npm lock/peer resolution
drift.
Telegram groups emit a single bot_command entity covering the whole
/cmd@botname span with no accompanying mention entity, so the existing
mention gate in _message_mentions_bot dropped slash commands sent via
the bot-menu autocomplete whenever require_mention is enabled.
Recognise bot_command entities whose @botname suffix matches the bot
username (case-insensitive) as a direct mention, and keep rejecting
commands addressed at other bots. Fixes #15415.
Harden the Matrix adapter's sender-drop guards so bot-self events and
appservice/bridge identities never reach the gateway's pairing flow or
the agent loop.
Two filters, applied as early as possible in _on_room_message (and
_on_reaction for the self-filter):
1. _is_self_sender(sender) — case-insensitive + whitespace-trimmed
equality with self._user_id. When self._user_id is still empty
(whoami has not resolved, or login failed), returns True
defensively: an unidentified bot dropping its own events is always
preferable to falling into an echo loop. The previous byte-for-byte
equality check let differently-cased copies of the bot's MXID slip
through, and an unresolved self-ID silently disabled the guard.
2. _is_system_or_bridge_sender(sender) — drops appservice namespace
puppets (conventional @_bridge_...:server form) and malformed
senders with an empty localpart. These identities used to fall
through to the gateway's unauthorized-user path, trigger a pairing
code, and — once an operator approved the bridge — every outbound
message the bridge relayed would loop back as an authorized user
message. This was the root of the 'hall of mirrors' symptom.
Fixes #15763
Test plan
---------
scripts/run_tests.sh tests/gateway/test_matrix.py
scripts/run_tests.sh tests/gateway/test_matrix_mention.py tests/gateway/test_matrix_voice.py
All 182 tests pass. 14 new regression tests cover exact / case-insensitive
/ whitespace / unresolved-self-id matches, bridge prefix detection, empty
sender, and the full _on_room_message drop path.
Extends the existing channel_skill_bindings mechanism (previously
Discord-only) to Slack, so a channel or DM can auto-load one or more
skills at session start without relying on the model's skill selector
for every short reply.
Motivation: Mats's German flashcards DM pushes a cron-driven card
5x/day; he responds with one-word guesses like 'work'. Previously each
reply required the main agent to decide whether to load german-flashcards
(full opus turn just to pick a skill). With the binding configured per
Slack channel, the skill is injected at session start and grading runs
directly.
Changes:
- Extract resolve_channel_skills() from DiscordAdapter._resolve_channel_skills
into gateway.platforms.base (now shared across adapters).
- DiscordAdapter._resolve_channel_skills delegates to the shared helper
(behavior preserved — existing test suite still passes unchanged).
- SlackAdapter: resolve channel_skill_bindings on each message and attach
auto_skill to MessageEvent. gateway/run.py already handles auto-skill
injection on new sessions; this just wires Slack through it.
- gateway/config.py: accept channel_skill_bindings in slack: block of
config.yaml (was Discord-only).
- Tests: new tests/gateway/test_slack_channel_skills.py with 11 cases
covering DM/thread/parent resolution, single-vs-list skills, dedup,
malformed entries. Discord suite unchanged.
- Docs: add 'Per-Channel Skill Bindings' section to Slack user guide.
Config example:
slack:
channel_skill_bindings:
- id: "D0ATH9TQ0G6"
skills: ["german-flashcards"]
Multiple overlapping Slack attachment improvements:
1. Upload retry with backoff on transient errors (429, 5xx, connection
reset, rate_limited, service unavailable). New _is_retryable_upload_error
helper covers three upload paths: _upload_file, send_video,
send_document. Up to 3 attempts with 1.5s * attempt backoff.
2. Thread participation tracking: successful file uploads now add the
thread_ts to _bot_message_ts, mirroring how text replies are tracked.
This lets follow-up thread messages auto-trigger the bot (same
engagement rules as replied threads).
3. Thread metadata preservation in the image redirect-guard fallback
(send_image → send text fallback) and in two gateway.run.py send
paths (image + document fallback calls).
4. HTML response rejection in _download_slack_file_bytes. Parallels
the existing check in _download_slack_file. Guards against Slack
returning a sign-in / redirect page as document bytes when scopes
are missing, so the agent doesn't get HTML-as-a-PDF.
5. File lifecycle event acks (file_shared / file_created / file_change).
These events arrive around snippet uploads. Acking them silences the
slack_bolt 'Unhandled request' 404 warnings without changing behavior.
6. Post-loop message type classification so a mixed image+document upload
classifies as PHOTO (or VOICE if no image), falling back to DOCUMENT.
Previously, the per-file classification in the inbound loop could be
overwritten unpredictably.
7. Expanded text-inject whitelist in inbound document handling to cover
.csv, .json, .xml, .yaml, .yml, .toml, .ini, .cfg (up to 100KB) so
snippets and config files are directly visible to the agent, not just
cached as opaque uploads. Paired with new MIME entries in
SUPPORTED_DOCUMENT_TYPES in base.py.
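The retry policy in point 1 can be sketched as below; `upload` and `is_retryable` stand in for the real upload paths and the `_is_retryable_upload_error` helper:

```python
import time


def upload_with_retry(upload, is_retryable, attempts: int = 3,
                      base_delay: float = 1.5):
    """Retry a transiently-failing upload with linear backoff
    (base_delay * attempt), re-raising fatal or final errors."""
    for attempt in range(1, attempts + 1):
        try:
            return upload()
        except Exception as exc:
            if attempt == attempts or not is_retryable(exc):
                raise
            time.sleep(base_delay * attempt)
```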
Squashed from two commits in #11819 so the single commit carries the
contributor's GitHub attribution (the original commits were authored
under a local dev hostname).
Ports openclaw/openclaw#72038 to hermes-agent.
Telegram's `editMessageText` preserves the original message timestamp,
so a long-running streamed reply (reasoning models that take 60+ seconds
to finish) would keep the first-token timestamp even after completion.
Users can't tell how long a task actually took.
When a preview message has been visible for >= 60s (configurable via
`streaming.fresh_final_after_seconds`), finalize by sending a fresh
message instead of editing in place, then best-effort delete the stale
preview. Short previews still edit in place (the existing fast path).
Implementation notes adapted from OpenClaw's TypeScript original:
- `StreamConsumerConfig` gains `fresh_final_after_seconds` (default 0 =
legacy edit-in-place). Gateway-level `StreamingConfig` defaults to 60.
- `GatewayStreamConsumer` tracks `_message_created_ts` at first-send and
checks it in `_send_or_edit` on `finalize=True`. New helpers
`_should_send_fresh_final` + `_try_fresh_final`.
- `BasePlatformAdapter` gains optional `delete_message(chat_id, message_id)`
returning False by default. `TelegramAdapter` implements it via
`_bot.delete_message`.
- `gateway/run.py` only enables fresh-final for `Platform.TELEGRAM`;
other platforms ignore the setting (they don't have the stale-edit
timestamp problem or edit-then-read works cheaply).
- Fallback to normal edit on any fresh-send failure — no user-visible
regression if Telegram rate-limits a send or the message is gone.
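The decision logic in the notes above can be sketched like this (a simplified stand-in, not the real `GatewayStreamConsumer`; method names and the monotonic-clock choice are assumptions):

```python
import time

class StreamConsumerSketch:
    """Sketch of the fresh-final decision: finalize with a fresh message
    once the preview has been visible longer than the threshold."""

    def __init__(self, fresh_final_after_seconds: float = 0.0):
        # 0 keeps the legacy edit-in-place behavior.
        self.fresh_final_after_seconds = fresh_final_after_seconds
        self._message_created_ts: float | None = None

    def on_first_send(self) -> None:
        self._message_created_ts = time.monotonic()

    def should_send_fresh_final(self) -> bool:
        if self.fresh_final_after_seconds <= 0 or self._message_created_ts is None:
            return False
        age = time.monotonic() - self._message_created_ts
        return age >= self.fresh_final_after_seconds
```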
Tests: 15 new cases in tests/gateway/test_stream_consumer_fresh_final.py
covering short/long previews, config plumbing, delete-support absent,
send-failure fallback, __no_edit__ sentinel safety, and StreamingConfig
round-trip.
Co-authored-by: Hermes Agent <agent@nousresearch.com>
Slack's modern composer sends messages with a 'blocks' array that
contains rich_text elements. When a user forwards or quotes another
message, the quoted content shows up in the rich_text_quote children
of that array — and is NOT included in the plain 'text' field. The
agent saw only the lossy plain text and was blind to forwarded /
quoted content. Same story for link unfurl previews (Notion, docs,
GitHub, etc.) which Slack puts in the 'attachments' array.
Two fixes in the inbound handler:
1. _extract_text_from_slack_blocks walks rich_text / rich_text_quote /
rich_text_list / rich_text_preformatted trees and renders readable
text ('> quoted', '• bullet', code fences), dedupes against the
plain text field, and appends the extracted content so the agent
sees everything.
2. Link unfurl / attachment preview extraction reads title, url,
body, and footer from the 'attachments' array and appends a
'📎 [title](url)\n body\n _footer_' section per preview.
Skips is_msg_unfurl to avoid echoing our own Slack replies back.
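The rich_text walk can be sketched as below. This is a simplified stand-in for `_extract_text_from_slack_blocks`: the real helper also dedupes against the plain 'text' field and emits code fences for `rich_text_preformatted`, both omitted here, and the nesting of Slack's block payloads is flattened:

```python
def extract_text_from_slack_blocks(blocks: list) -> list[str]:
    """Walk rich_text elements and render readable text per the rules above."""
    rendered: list[str] = []
    for block in blocks or []:
        if block.get("type") != "rich_text":
            continue
        for element in block.get("elements", []):
            # Flatten all 'text' leaves under this element, depth-first,
            # covering both direct leaves and nested sections.
            parts, stack = [], list(element.get("elements", []))
            while stack:
                node = stack.pop(0)
                if "text" in node:
                    parts.append(node["text"])
                stack = list(node.get("elements", [])) + stack
            text = "".join(parts)
            kind = element.get("type")
            if kind == "rich_text_quote":
                rendered.append("> " + text)
            elif kind == "rich_text_list":
                rendered.append("• " + text)
            elif text:
                rendered.append(text)
    return rendered
```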
Routing is careful not to trust augmented text: mention gating
(is_mentioned) and slash-command detection both run against the
original 'text' field, so forwarded content containing '<@bot>' or
'/deploy' in a quote can't trick the bot into responding in a
channel it shouldn't or classifying a normal message as a command.
Adjustment from original PR: dropped _serialize_slack_blocks_for_agent,
which inlined a redacted JSON dump of non-rich_text blocks (section,
accessory, actions, etc.) — the agent would see the raw Block Kit
structure for UI-heavy alerts. It added up to 6000 characters to the
prompt context on every qualifying message with no opt-out. The
rich_text extraction and attachment unfurls cover the common bug-fix
case (quoted/forwarded content + link previews) without the prefill
tax. If a user needs block inspection later, it can return as a
config opt-in.
Also updates the Slack platform notes in session.py to accurately
describe what the gateway inlines.
Translate Slack attachment failures into actionable user-facing notices
instead of generic download errors. When a scope/auth/permission issue
breaks attachment processing, the user sees:
[Slack attachment notice]
- Slack attachment access failed for photo.jpg. Missing scope:
files:read. Update the Slack app scopes/settings and reinstall
the app to the workspace.
Two helpers do the translation:
_describe_slack_api_error — handles SlackApiError responses
(missing_scope, invalid_auth, file_not_found, access_denied, etc.)
_describe_slack_download_failure — handles httpx.HTTPStatusError
(401/403/404) and Slack-returns-HTML-sign-in fallbacks
Wired into three existing call sites:
- the Slack Connect files.info path (PR #11111) so scope errors
surface instead of being logged as generic "files.info failed"
- the image, audio, and document download paths so 401/403 and
HTML-body responses translate into actionable notices
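The translation can be sketched as a code-to-notice mapping (a stand-in for `_describe_slack_api_error`; the real helper reads the error code from the `SlackApiError` response, and the exact notice wording here is an assumption apart from the missing_scope example above):

```python
def describe_slack_api_error(error_code: str, filename: str) -> str:
    """Map a Slack API error code to an actionable user-facing notice."""
    if error_code == "missing_scope":
        return (f"Slack attachment access failed for {filename}. "
                "Missing scope: files:read. Update the Slack app "
                "scopes/settings and reinstall the app to the workspace.")
    if error_code in ("invalid_auth", "access_denied"):
        return (f"Slack attachment access failed for {filename}. "
                "The bot token is invalid or lacks permission.")
    if error_code == "file_not_found":
        return f"Slack reports {filename} no longer exists."
    return f"Slack attachment download failed for {filename} ({error_code})."
```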
Adjustment from original PR: dropped _probe_slack_file_access_issue,
the proactive pre-download files.info probe. It added one extra
Slack API call per attachment even on healthy ones, and overlapped
with the existing files.info call from PR #11111. The post-failure
translation path covers the same user-facing diagnostic value
without the per-message tax.
Also documents files:read scope more prominently in the Slack setup
guide and troubleshooting table.
Contributed back from https://github.com/xinbenlv/zn-hermes-agent.
Closes #7015.
Co-authored-by: xinbenlv <zzn+pa@zzn.im>
Slack Connect channels return file objects with file_access="check_file_info"
and no url_private_download field (see
https://docs.slack.dev/reference/objects/file-object/#slack_connect_files).
These stub objects must be resolved via files.info before download can
proceed. Without this the agent silently skips attachments posted in
Slack Connect channels.
Call files.info on every file whose file_access is check_file_info,
replace the stub with the full file object, and let the existing
download path continue. Warn and skip on files.info failures.
Closes #11095.
The Slack thread-context fetcher used to drop every message with a
bot_id, which silently erased the thread parent whenever a cron job (or
any other bot) had posted it. As a result, replies to a cron-posted
summary lost all context and the agent answered as if from a blank
thread.
Changes:
1. gateway/platforms/slack.py::_fetch_thread_context
- Keep the thread parent even when it was posted by a bot
(e.g. cron summaries, third-party integrations).
- Only skip *our own* prior bot replies to avoid circular context,
matching the per-workspace bot user id via _team_bot_user_ids so
multi-workspace deployments stay correct.
- Keep non-self bot children (useful third-party context).
2. gateway/platforms/slack.py::_handle_slack_message
- Populate MessageEvent.reply_to_text for thread replies (parity
with Telegram/Discord/Feishu/WeCom). gateway.run uses this field
to inject a [Replying to: "..."] prefix when the parent is not
already in the session history, which is exactly the scenario
triggered by cron-generated thread parents.
- New helper _fetch_thread_parent_text reuses the existing thread-
context cache (and its 60s TTL) to avoid duplicate
conversations.replies calls; falls back to a cheap limit=1 fetch
when the cache is cold.
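The filtering rule in change 1 can be sketched as a predicate (message shape and the id-map shape are assumptions; the real logic lives inside `_fetch_thread_context` with `_team_bot_user_ids`):

```python
def keep_in_thread_context(msg: dict, parent_ts: str,
                           team_bot_user_ids: dict) -> bool:
    """Keep bot-posted thread parents and third-party bot replies;
    drop only our own prior replies to avoid circular context."""
    our_bot_id = team_bot_user_ids.get(msg.get("team"))
    is_self = our_bot_id is not None and (
        msg.get("user") == our_bot_id or msg.get("bot_id") == our_bot_id
    )
    if msg.get("ts") == parent_ts:
        return True  # parent stays even when bot-posted (e.g. cron summaries)
    return not is_self
```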
Tests:
- Updated TestSlackThreadContext::test_skips_bot_messages to reflect
the new behaviour (self-bot child dropped, third-party bot kept).
- Added:
* test_fetch_thread_context_includes_bot_parent
* test_fetch_thread_context_excludes_self_bot_replies
* test_fetch_thread_context_multi_workspace
* test_fetch_thread_context_current_ts_excluded (regression guard)
* test_fetch_thread_parent_text_from_cache
* test_slack_reply_to_text_set_on_thread_reply
* test_slack_reply_to_text_none_for_top_level_message
Full Slack suite: 176 passed (was 169).
Extends the strict_mention feature so an @mention in strict mode no
longer persistently tags the thread as 'mentioned'. Without this, the
thread's first mention would permanently auto-trigger the bot on every
subsequent message — which is exactly what strict_mention is designed
to prevent. Closes the agent-to-agent ack-loop hole hhhonzik identified
in #14117.
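The behavioral change can be sketched as a pure decision function (all names here are illustrative, not the gateway's real API): in strict mode a mention triggers a response without persisting the thread-engagement flag.

```python
def should_trigger(is_mentioned: bool, thread_engaged: bool,
                   strict_mention: bool) -> tuple[bool, bool]:
    """Return (respond_now, thread_engaged_after)."""
    if is_mentioned:
        # Strict mode: respond to this message, but don't tag the thread
        # as 'mentioned' — later messages won't auto-trigger the bot.
        return True, thread_engaged if strict_mention else True
    return thread_engaged, thread_engaged
```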
Co-authored-by: hhhonzik <me@janstepanovsky.cz>