Mirror of https://github.com/NousResearch/hermes-agent.git (synced 2026-04-25 00:51:20 +00:00)

Commit 1126284c97: Merge branch 'main' into rewbs/tool-use-charge-to-subscription
160 changed files with 5545 additions and 664 deletions

New file: RELEASE_v0.6.0.md (+249 lines)
# Hermes Agent v0.6.0 (v2026.3.30)

**Release Date:** March 30, 2026

> The multi-instance release — Profiles for running isolated agent instances, MCP server mode, Docker container, fallback provider chains, two new messaging platforms (Feishu/Lark and WeCom), Telegram webhook mode, Slack multi-workspace OAuth, 95 PRs and 16 resolved issues in 2 days.

---

## ✨ Highlights

- **Profiles — Multi-Instance Hermes** — Run multiple isolated Hermes instances from the same installation. Each profile gets its own config, memory, sessions, skills, and gateway service. Create with `hermes profile create`, switch with `hermes -p <name>`, export/import for sharing. Full token-lock isolation prevents two profiles from using the same bot credential. ([#3681](https://github.com/NousResearch/hermes-agent/pull/3681))

- **MCP Server Mode** — Expose Hermes conversations and sessions to any MCP-compatible client (Claude Desktop, Cursor, VS Code, etc.) via `hermes mcp serve`. Browse conversations, read messages, search across sessions, and manage attachments — all through the Model Context Protocol. Supports both stdio and Streamable HTTP transports. ([#3795](https://github.com/NousResearch/hermes-agent/pull/3795))

- **Docker Container** — Official Dockerfile for running Hermes Agent in a container. Supports both CLI and gateway modes with volume-mounted config. ([#3668](https://github.com/NousResearch/hermes-agent/pull/3668), closes [#850](https://github.com/NousResearch/hermes-agent/issues/850))

- **Ordered Fallback Provider Chain** — Configure multiple inference providers with automatic failover. When your primary provider returns errors or is unreachable, Hermes automatically tries the next provider in the chain. Configure via `fallback_providers` in config.yaml. ([#3813](https://github.com/NousResearch/hermes-agent/pull/3813), closes [#1734](https://github.com/NousResearch/hermes-agent/issues/1734))

- **Feishu/Lark Platform Support** — Full gateway adapter for Feishu (飞书) and Lark with event subscriptions, message cards, group chat, image/file attachments, and interactive card callbacks. ([#3799](https://github.com/NousResearch/hermes-agent/pull/3799), [#3817](https://github.com/NousResearch/hermes-agent/pull/3817), closes [#1788](https://github.com/NousResearch/hermes-agent/issues/1788))

- **WeCom (Enterprise WeChat) Platform Support** — New gateway adapter for WeCom (企业微信) with text/image/voice messages, group chats, and callback verification. ([#3847](https://github.com/NousResearch/hermes-agent/pull/3847))

- **Slack Multi-Workspace OAuth** — Connect a single Hermes gateway to multiple Slack workspaces via OAuth token file. Each workspace gets its own bot token, resolved dynamically per incoming event. ([#3903](https://github.com/NousResearch/hermes-agent/pull/3903))

- **Telegram Webhook Mode & Group Controls** — Run the Telegram adapter in webhook mode as an alternative to polling — faster response times and better for production deployments behind a reverse proxy. New group mention gating controls when the bot responds: always, only when @mentioned, or via regex triggers. ([#3880](https://github.com/NousResearch/hermes-agent/pull/3880), [#3870](https://github.com/NousResearch/hermes-agent/pull/3870))

- **Exa Search Backend** — Add Exa as an alternative web search and content extraction backend alongside Firecrawl and DuckDuckGo. Set `EXA_API_KEY` and configure as preferred backend. ([#3648](https://github.com/NousResearch/hermes-agent/pull/3648))

- **Skills & Credentials on Remote Backends** — Mount skill directories and credential files into Modal and Docker containers, so remote terminal sessions have access to the same skills and secrets as local execution. ([#3890](https://github.com/NousResearch/hermes-agent/pull/3890), [#3671](https://github.com/NousResearch/hermes-agent/pull/3671), closes [#3665](https://github.com/NousResearch/hermes-agent/issues/3665), [#3433](https://github.com/NousResearch/hermes-agent/issues/3433))

---
## 🏗️ Core Agent & Architecture

### Provider & Model Support

- **Ordered fallback provider chain** — automatic failover across multiple configured providers ([#3813](https://github.com/NousResearch/hermes-agent/pull/3813))
- **Fix api_mode on provider switch** — switching providers via `hermes model` now correctly clears stale `api_mode` instead of hardcoding `chat_completions`, fixing 404s for providers with Anthropic-compatible endpoints ([#3726](https://github.com/NousResearch/hermes-agent/pull/3726), [#3857](https://github.com/NousResearch/hermes-agent/pull/3857), closes [#3685](https://github.com/NousResearch/hermes-agent/issues/3685))
- **Stop silent OpenRouter fallback** — when no provider is configured, Hermes now raises a clear error instead of silently routing to OpenRouter ([#3807](https://github.com/NousResearch/hermes-agent/pull/3807), [#3862](https://github.com/NousResearch/hermes-agent/pull/3862))
- **Gemini 3.1 preview models** — added to OpenRouter and Nous Portal catalogs ([#3803](https://github.com/NousResearch/hermes-agent/pull/3803), closes [#3753](https://github.com/NousResearch/hermes-agent/issues/3753))
- **Gemini direct API context length** — full context length resolution for direct Google AI endpoints ([#3876](https://github.com/NousResearch/hermes-agent/pull/3876))
- **gpt-5.4-mini** added to Codex fallback catalog ([#3855](https://github.com/NousResearch/hermes-agent/pull/3855))
- **Curated model lists preferred** over live API probe when the probe returns fewer models ([#3856](https://github.com/NousResearch/hermes-agent/pull/3856), [#3867](https://github.com/NousResearch/hermes-agent/pull/3867))
- **User-friendly 429 rate limit messages** with Retry-After countdown ([#3809](https://github.com/NousResearch/hermes-agent/pull/3809))
- **Auxiliary client placeholder key** for local servers without auth requirements ([#3842](https://github.com/NousResearch/hermes-agent/pull/3842))
- **INFO-level logging** for auxiliary provider resolution ([#3866](https://github.com/NousResearch/hermes-agent/pull/3866))
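Conceptually, the ordered fallback chain described above works like this (an illustrative sketch, not Hermes' actual provider code; `ProviderError` and the provider callables are assumptions):

```python
class ProviderError(Exception):
    """Raised when a provider is unreachable or returns an error."""

def complete_with_fallback(prompt, providers):
    """Try each configured provider in order; return the first success.

    `providers` is an ordered list of (name, call) pairs, mirroring the
    order of the `fallback_providers` list in config.yaml.
    """
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except ProviderError as exc:
            errors.append(f"{name}: {exc}")
    raise ProviderError("all providers failed: " + "; ".join(errors))

# Example: the primary provider errors out, the fallback answers.
def primary(prompt):
    raise ProviderError("503 upstream unavailable")

def fallback(prompt):
    return f"echo: {prompt}"

result = complete_with_fallback("hi", [("primary", primary), ("fallback", fallback)])
# result == "echo: hi"
```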
### Agent Loop & Conversation

- **Subagent status reporting** — reports `completed` status when summary exists instead of generic failure ([#3829](https://github.com/NousResearch/hermes-agent/pull/3829))
- **Session log file updated during compression** — prevents stale file references after context compression ([#3835](https://github.com/NousResearch/hermes-agent/pull/3835))
- **Omit empty tools param** — sends no `tools` parameter when empty instead of `None`, fixing compatibility with strict providers ([#3820](https://github.com/NousResearch/hermes-agent/pull/3820))
### Profiles & Multi-Instance

- **Profiles system** — `hermes profile create/list/switch/delete/export/import/rename`. Each profile gets isolated HERMES_HOME, gateway service, CLI wrapper. Token locks prevent credential collisions. Tab completion for profile names. ([#3681](https://github.com/NousResearch/hermes-agent/pull/3681))
- **Profile-aware display paths** — all user-facing `~/.hermes` paths replaced with `display_hermes_home()` to show the correct profile directory ([#3623](https://github.com/NousResearch/hermes-agent/pull/3623))
- **Lazy display_hermes_home imports** — prevents `ImportError` during `hermes update` when modules cache stale bytecode ([#3776](https://github.com/NousResearch/hermes-agent/pull/3776))
- **HERMES_HOME for protected paths** — `.env` write-deny path now respects HERMES_HOME instead of hardcoded `~/.hermes` ([#3840](https://github.com/NousResearch/hermes-agent/pull/3840))

---
## 📱 Messaging Platforms (Gateway)

### New Platforms

- **Feishu/Lark** — Full adapter with event subscriptions, message cards, group chat, image/file attachments, interactive card callbacks ([#3799](https://github.com/NousResearch/hermes-agent/pull/3799), [#3817](https://github.com/NousResearch/hermes-agent/pull/3817))
- **WeCom (Enterprise WeChat)** — Text/image/voice messages, group chats, callback verification ([#3847](https://github.com/NousResearch/hermes-agent/pull/3847))

### Telegram

- **Webhook mode** — run as webhook endpoint instead of polling for production deployments ([#3880](https://github.com/NousResearch/hermes-agent/pull/3880))
- **Group mention gating & regex triggers** — configurable bot response behavior in groups: always, @mention-only, or regex-matched ([#3870](https://github.com/NousResearch/hermes-agent/pull/3870))
- **Gracefully handle deleted reply targets** — no more crashes when the message being replied to was deleted ([#3858](https://github.com/NousResearch/hermes-agent/pull/3858), closes [#3229](https://github.com/NousResearch/hermes-agent/issues/3229))
### Discord

- **Message processing reactions** — adds a reaction emoji while processing and removes it when done, giving visual feedback in channels ([#3871](https://github.com/NousResearch/hermes-agent/pull/3871))
- **DISCORD_IGNORE_NO_MENTION** — skip messages that @mention other users/bots but not Hermes ([#3640](https://github.com/NousResearch/hermes-agent/pull/3640))
- **Clean up deferred "thinking..."** — properly removes the "thinking..." indicator after slash commands complete ([#3674](https://github.com/NousResearch/hermes-agent/pull/3674), closes [#3595](https://github.com/NousResearch/hermes-agent/issues/3595))

### Slack

- **Multi-workspace OAuth** — connect to multiple Slack workspaces from a single gateway via OAuth token file ([#3903](https://github.com/NousResearch/hermes-agent/pull/3903))

### WhatsApp

- **Persistent aiohttp session** — reuse HTTP sessions across requests instead of creating new ones per message ([#3818](https://github.com/NousResearch/hermes-agent/pull/3818))
- **LID↔phone alias resolution** — correctly match Linked ID and phone number formats in allowlists ([#3830](https://github.com/NousResearch/hermes-agent/pull/3830))
- **Skip reply prefix in bot mode** — cleaner message formatting when running as a WhatsApp bot ([#3931](https://github.com/NousResearch/hermes-agent/pull/3931))

### Matrix

- **Native voice messages via MSC3245** — send voice messages as proper Matrix voice events instead of file attachments ([#3877](https://github.com/NousResearch/hermes-agent/pull/3877))

### Mattermost

- **Configurable mention behavior** — respond to messages without requiring @mention ([#3664](https://github.com/NousResearch/hermes-agent/pull/3664))

### Signal

- **URL-encode phone numbers** and correct attachment RPC parameter — fixes delivery failures with certain phone number formats ([#3670](https://github.com/NousResearch/hermes-agent/pull/3670)) — @kshitijk4poor

### Email

- **Close SMTP/IMAP connections on failure** — prevents connection leaks during error scenarios ([#3804](https://github.com/NousResearch/hermes-agent/pull/3804))

### Gateway Core

- **Atomic config writes** — use atomic file writes for config.yaml to prevent data loss during crashes ([#3800](https://github.com/NousResearch/hermes-agent/pull/3800))
- **Home channel env overrides** — apply environment variable overrides for home channels consistently ([#3796](https://github.com/NousResearch/hermes-agent/pull/3796), [#3808](https://github.com/NousResearch/hermes-agent/pull/3808))
- **Replace print() with logger** — BasePlatformAdapter now uses proper logging instead of print statements ([#3669](https://github.com/NousResearch/hermes-agent/pull/3669))
- **Cron delivery labels** — resolve human-friendly delivery labels via channel directory ([#3860](https://github.com/NousResearch/hermes-agent/pull/3860), closes [#1945](https://github.com/NousResearch/hermes-agent/issues/1945))
- **Cron [SILENT] tightening** — prevent agents from prefixing reports with [SILENT] to suppress delivery ([#3901](https://github.com/NousResearch/hermes-agent/pull/3901))
- **Background task media delivery** and vision download timeout fixes ([#3919](https://github.com/NousResearch/hermes-agent/pull/3919))
- **Boot-md hook** — example built-in hook to run a BOOT.md file on gateway startup ([#3733](https://github.com/NousResearch/hermes-agent/pull/3733))
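The atomic config write mentioned above follows a standard pattern: write to a temp file in the same directory, fsync, then rename over the target. A generic sketch of that pattern (not Hermes' exact implementation):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write `data` to `path` atomically: readers see either the old
    contents or the new contents, never a partially written file."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, prefix=".config-")
    try:
        with os.fdopen(fd, "w") as fh:
            fh.write(data)
            fh.flush()
            os.fsync(fh.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Writing the temp file in the target's own directory matters: `os.replace` is only atomic within a single filesystem.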
---

## 🖥️ CLI & User Experience

### Interactive CLI

- **Configurable tool preview length** — show full file paths by default instead of truncating at 40 chars ([#3841](https://github.com/NousResearch/hermes-agent/pull/3841))
- **Tool token context display** — `hermes tools` checklist now shows estimated token cost per toolset ([#3805](https://github.com/NousResearch/hermes-agent/pull/3805))
- **/bg spinner TUI fix** — route background task spinner through the TUI widget to prevent status bar collision ([#3643](https://github.com/NousResearch/hermes-agent/pull/3643))
- **Prevent status bar wrapping** into duplicate rows ([#3883](https://github.com/NousResearch/hermes-agent/pull/3883)) — @kshitijk4poor
- **Handle closed stdout ValueError** in safe print paths — fixes crashes when stdout is closed during gateway thread shutdown ([#3843](https://github.com/NousResearch/hermes-agent/pull/3843), closes [#3534](https://github.com/NousResearch/hermes-agent/issues/3534))
- **Remove input() from /tools disable** — eliminates freeze in terminal when disabling tools ([#3918](https://github.com/NousResearch/hermes-agent/pull/3918))
- **TTY guard for interactive CLI commands** — prevent CPU spin when launched without a terminal ([#3933](https://github.com/NousResearch/hermes-agent/pull/3933))
- **Argparse entrypoint** — use argparse in the top-level launcher for cleaner error handling ([#3874](https://github.com/NousResearch/hermes-agent/pull/3874))
- **Lazy-initialized tools show yellow** in banner instead of red, reducing false alarm about "missing" tools ([#3822](https://github.com/NousResearch/hermes-agent/pull/3822))
- **Honcho tools shown in banner** when configured ([#3810](https://github.com/NousResearch/hermes-agent/pull/3810))

### Setup & Configuration

- **Auto-install matrix-nio** during `hermes setup` when Matrix is selected ([#3802](https://github.com/NousResearch/hermes-agent/pull/3802), [#3873](https://github.com/NousResearch/hermes-agent/pull/3873))
- **Session export stdout support** — export sessions to stdout with `-` for piping ([#3641](https://github.com/NousResearch/hermes-agent/pull/3641), closes [#3609](https://github.com/NousResearch/hermes-agent/issues/3609))
- **Configurable approval timeouts** — set how long dangerous command approval prompts wait before auto-denying ([#3886](https://github.com/NousResearch/hermes-agent/pull/3886), closes [#3765](https://github.com/NousResearch/hermes-agent/issues/3765))
- **Clear __pycache__ during update** — prevents stale bytecode ImportError after `hermes update` ([#3819](https://github.com/NousResearch/hermes-agent/pull/3819))
---

## 🔧 Tool System

### MCP

- **MCP Server Mode** — `hermes mcp serve` exposes conversations, sessions, and attachments to MCP clients via stdio or Streamable HTTP ([#3795](https://github.com/NousResearch/hermes-agent/pull/3795))
- **Dynamic tool discovery** — respond to `notifications/tools/list_changed` events to pick up new tools from MCP servers without reconnecting ([#3812](https://github.com/NousResearch/hermes-agent/pull/3812))
- **Non-deprecated HTTP transport** — switched from `sse_client` to `streamable_http_client` ([#3646](https://github.com/NousResearch/hermes-agent/pull/3646))

### Web Tools

- **Exa search backend** — alternative to Firecrawl and DuckDuckGo for web search and extraction ([#3648](https://github.com/NousResearch/hermes-agent/pull/3648))

### Browser

- **Guard against None LLM responses** in browser snapshot and vision tools ([#3642](https://github.com/NousResearch/hermes-agent/pull/3642))

### Terminal & Remote Backends

- **Mount skill directories** into Modal and Docker containers ([#3890](https://github.com/NousResearch/hermes-agent/pull/3890))
- **Mount credential files** into remote backends with mtime+size caching ([#3671](https://github.com/NousResearch/hermes-agent/pull/3671))
- **Preserve partial output** when commands time out instead of losing everything ([#3868](https://github.com/NousResearch/hermes-agent/pull/3868))
- **Stop marking persisted env vars as missing** on remote backends ([#3650](https://github.com/NousResearch/hermes-agent/pull/3650))

### Audio

- **.aac format support** in transcription tool ([#3865](https://github.com/NousResearch/hermes-agent/pull/3865), closes [#1963](https://github.com/NousResearch/hermes-agent/issues/1963))
- **Audio download retry** — retry logic for `cache_audio_from_url` matching the existing image download pattern ([#3401](https://github.com/NousResearch/hermes-agent/pull/3401)) — @binhnt92

### Vision

- **Reject non-image files** and enforce website-only policy for vision analysis ([#3845](https://github.com/NousResearch/hermes-agent/pull/3845))

### Tool Schema

- **Ensure name field** always present in tool definitions, fixing `KeyError: 'name'` crashes ([#3811](https://github.com/NousResearch/hermes-agent/pull/3811), closes [#3729](https://github.com/NousResearch/hermes-agent/issues/3729))

### ACP (Editor Integration)

- **Complete session management surface** for VS Code/Zed/JetBrains clients — proper task lifecycle, cancel support, session persistence ([#3675](https://github.com/NousResearch/hermes-agent/pull/3675))
---

## 🧩 Skills & Plugins

### Skills System

- **External skill directories** — configure additional skill directories via `skills.external_dirs` in config.yaml ([#3678](https://github.com/NousResearch/hermes-agent/pull/3678))
- **Category path traversal blocked** — prevents `../` attacks in skill category names ([#3844](https://github.com/NousResearch/hermes-agent/pull/3844))
- **parallel-cli moved to optional-skills** — reduces default skill footprint ([#3673](https://github.com/NousResearch/hermes-agent/pull/3673)) — @kshitijk4poor
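The traversal guard above is a common pattern: resolve the candidate path and verify it stays under the skills root. A typical shape for this check (illustrative sketch; `resolve_category_dir` is not the actual Hermes function name):

```python
from pathlib import Path

def resolve_category_dir(skills_root, category):
    """Resolve a skill category name to a directory strictly inside
    `skills_root`, rejecting `..` traversal and absolute components."""
    root = Path(skills_root).resolve()
    candidate = (root / category).resolve()
    # A malicious name like "../../etc" resolves outside the root.
    if root != candidate and root not in candidate.parents:
        raise ValueError(f"invalid skill category: {category!r}")
    return candidate
```

Comparing resolved paths (rather than scanning the raw string for `..`) also catches tricks like `a/../../b` and symlink-free absolute names.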
### New Skills

- **memento-flashcards** — spaced repetition flashcard system ([#3827](https://github.com/NousResearch/hermes-agent/pull/3827))
- **songwriting-and-ai-music** — songwriting craft and AI music generation prompts ([#3834](https://github.com/NousResearch/hermes-agent/pull/3834))
- **SiYuan Note** — integration with SiYuan note-taking app ([#3742](https://github.com/NousResearch/hermes-agent/pull/3742))
- **Scrapling** — web scraping skill using Scrapling library ([#3742](https://github.com/NousResearch/hermes-agent/pull/3742))
- **one-three-one-rule** — communication framework skill ([#3797](https://github.com/NousResearch/hermes-agent/pull/3797))

### Plugin System

- **Plugin enable/disable commands** — `hermes plugins enable/disable <name>` for managing plugin state without removing them ([#3747](https://github.com/NousResearch/hermes-agent/pull/3747))
- **Plugin message injection** — plugins can now inject messages into the conversation stream on behalf of the user via `ctx.inject_message()` ([#3778](https://github.com/NousResearch/hermes-agent/pull/3778)) — @winglian
- **Honcho self-hosted support** — allow local Honcho instances without requiring an API key ([#3644](https://github.com/NousResearch/hermes-agent/pull/3644))
---

## 🔒 Security & Reliability

### Security Hardening

- **Hardened dangerous command detection** — expanded pattern matching for risky shell commands and added file tool path guards for sensitive locations (`/etc/`, `/boot/`, docker.sock) ([#3872](https://github.com/NousResearch/hermes-agent/pull/3872))
- **Sensitive path write checks** in approval system — catch writes to system config files through file tools, not just terminal ([#3859](https://github.com/NousResearch/hermes-agent/pull/3859))
- **Secret redaction expansion** — now covers ElevenLabs, Tavily, and Exa API keys ([#3920](https://github.com/NousResearch/hermes-agent/pull/3920))
- **Vision file rejection** — reject non-image files passed to vision analysis to prevent information disclosure ([#3845](https://github.com/NousResearch/hermes-agent/pull/3845))
- **Category path traversal blocking** — prevent directory traversal in skill category names ([#3844](https://github.com/NousResearch/hermes-agent/pull/3844))

### Reliability

- **Atomic config.yaml writes** — prevent data loss during gateway crashes ([#3800](https://github.com/NousResearch/hermes-agent/pull/3800))
- **Clear __pycache__ on update** — prevent stale bytecode from causing ImportError after updates ([#3819](https://github.com/NousResearch/hermes-agent/pull/3819))
- **Lazy imports for update safety** — prevent ImportError chains during `hermes update` when modules reference new functions ([#3776](https://github.com/NousResearch/hermes-agent/pull/3776))
- **Restore terminalbench2 from patch corruption** — recovered file damaged by patch tool's secret redaction ([#3801](https://github.com/NousResearch/hermes-agent/pull/3801))
- **Terminal timeout preserves partial output** — no more lost command output on timeout ([#3868](https://github.com/NousResearch/hermes-agent/pull/3868))

---
## 🐛 Notable Bug Fixes

- **OpenClaw migration model config overwrite** — migration no longer overwrites model config dict with a string ([#3924](https://github.com/NousResearch/hermes-agent/pull/3924)) — @0xbyt4
- **OpenClaw migration expanded** — covers full data footprint including sessions, cron, memory ([#3869](https://github.com/NousResearch/hermes-agent/pull/3869))
- **Telegram deleted reply targets** — gracefully handle replies to deleted messages instead of crashing ([#3858](https://github.com/NousResearch/hermes-agent/pull/3858))
- **Discord "thinking..." persistence** — properly cleans up deferred response indicators ([#3674](https://github.com/NousResearch/hermes-agent/pull/3674))
- **WhatsApp LID↔phone aliases** — fixes allowlist matching failures with Linked ID format ([#3830](https://github.com/NousResearch/hermes-agent/pull/3830))
- **Signal URL-encoded phone numbers** — fixes delivery failures with certain formats ([#3670](https://github.com/NousResearch/hermes-agent/pull/3670))
- **Email connection leaks** — properly close SMTP/IMAP connections on error ([#3804](https://github.com/NousResearch/hermes-agent/pull/3804))
- **_safe_print ValueError** — no more gateway thread crashes on closed stdout ([#3843](https://github.com/NousResearch/hermes-agent/pull/3843))
- **Tool schema KeyError 'name'** — ensure name field always present in tool definitions ([#3811](https://github.com/NousResearch/hermes-agent/pull/3811))
- **api_mode stale on provider switch** — correctly clear when switching providers via `hermes model` ([#3857](https://github.com/NousResearch/hermes-agent/pull/3857))

---
## 🧪 Testing

- Resolved 10+ CI failures across hooks, tiktoken, plugins, and skill tests ([#3848](https://github.com/NousResearch/hermes-agent/pull/3848), [#3721](https://github.com/NousResearch/hermes-agent/pull/3721), [#3936](https://github.com/NousResearch/hermes-agent/pull/3936))

---

## 📚 Documentation

- **Comprehensive OpenClaw migration guide** — step-by-step guide for migrating from OpenClaw/Claw3D to Hermes Agent ([#3864](https://github.com/NousResearch/hermes-agent/pull/3864), [#3900](https://github.com/NousResearch/hermes-agent/pull/3900))
- **Credential file passthrough docs** — document how to forward credential files and env vars to remote backends ([#3677](https://github.com/NousResearch/hermes-agent/pull/3677))
- **DuckDuckGo requirements clarified** — note runtime dependency on duckduckgo-search package ([#3680](https://github.com/NousResearch/hermes-agent/pull/3680))
- **Skills catalog updated** — added red-teaming category and optional skills listing ([#3745](https://github.com/NousResearch/hermes-agent/pull/3745))
- **Feishu docs MDX fix** — escape angle-bracket URLs that break Docusaurus build ([#3902](https://github.com/NousResearch/hermes-agent/pull/3902))

---
## 👥 Contributors

### Core

- **@teknium1** — 90 PRs across all subsystems

### Community Contributors

- **@kshitijk4poor** — 3 PRs: Signal phone number fix ([#3670](https://github.com/NousResearch/hermes-agent/pull/3670)), parallel-cli to optional-skills ([#3673](https://github.com/NousResearch/hermes-agent/pull/3673)), status bar wrapping fix ([#3883](https://github.com/NousResearch/hermes-agent/pull/3883))
- **@winglian** — 1 PR: Plugin message injection interface ([#3778](https://github.com/NousResearch/hermes-agent/pull/3778))
- **@binhnt92** — 1 PR: Audio download retry logic ([#3401](https://github.com/NousResearch/hermes-agent/pull/3401))
- **@0xbyt4** — 1 PR: OpenClaw migration model config fix ([#3924](https://github.com/NousResearch/hermes-agent/pull/3924))

### Issues Resolved from Community

@Material-Scientist ([#850](https://github.com/NousResearch/hermes-agent/issues/850)), @hanxu98121 ([#1734](https://github.com/NousResearch/hermes-agent/issues/1734)), @penwyp ([#1788](https://github.com/NousResearch/hermes-agent/issues/1788)), @dan-and ([#1945](https://github.com/NousResearch/hermes-agent/issues/1945)), @AdrianScott ([#1963](https://github.com/NousResearch/hermes-agent/issues/1963)), @clawdbot47 ([#3229](https://github.com/NousResearch/hermes-agent/issues/3229)), @alanfwilliams ([#3404](https://github.com/NousResearch/hermes-agent/issues/3404)), @kentimsit ([#3433](https://github.com/NousResearch/hermes-agent/issues/3433)), @hayka-pacha ([#3534](https://github.com/NousResearch/hermes-agent/issues/3534)), @primmer ([#3595](https://github.com/NousResearch/hermes-agent/issues/3595)), @dagelf ([#3609](https://github.com/NousResearch/hermes-agent/issues/3609)), @HenkDz ([#3685](https://github.com/NousResearch/hermes-agent/issues/3685)), @tmdgusya ([#3729](https://github.com/NousResearch/hermes-agent/issues/3729)), @TypQxQ ([#3753](https://github.com/NousResearch/hermes-agent/issues/3753)), @acsezen ([#3765](https://github.com/NousResearch/hermes-agent/issues/3765))

---

**Full Changelog**: [v2026.3.28...v2026.3.30](https://github.com/NousResearch/hermes-agent/compare/v2026.3.28...v2026.3.30)
@@ -162,6 +162,21 @@ def _is_oauth_token(key: str) -> bool:

```python
    return True


def _requires_bearer_auth(base_url: str | None) -> bool:
    """Return True for Anthropic-compatible providers that require Bearer auth.

    Some third-party /anthropic endpoints implement Anthropic's Messages API but
    require Authorization: Bearer instead of Anthropic's native x-api-key header.
    MiniMax's global and China Anthropic-compatible endpoints follow this pattern.
    """
    if not base_url:
        return False
    normalized = base_url.rstrip("/").lower()
    return normalized.startswith("https://api.minimax.io/anthropic") or normalized.startswith(
        "https://api.minimaxi.com/anthropic"
    )


def build_anthropic_client(api_key: str, base_url: str = None):
    """Create an Anthropic client, auto-detecting setup-tokens vs API keys.
```
@@ -180,7 +195,17 @@ def build_anthropic_client(api_key: str, base_url: str = None):

```diff
     if base_url:
         kwargs["base_url"] = base_url
 
-    if _is_oauth_token(api_key):
+    if _requires_bearer_auth(base_url):
+        # Some Anthropic-compatible providers (e.g. MiniMax) expect the API key in
+        # Authorization: Bearer even for regular API keys. Route those endpoints
+        # through auth_token so the SDK sends Bearer auth instead of x-api-key.
+        # Check this before OAuth token shape detection because MiniMax secrets do
+        # not use Anthropic's sk-ant-api prefix and would otherwise be misread as
+        # Anthropic OAuth/setup tokens.
+        kwargs["auth_token"] = api_key
+        if _COMMON_BETAS:
+            kwargs["default_headers"] = {"anthropic-beta": ",".join(_COMMON_BETAS)}
+    elif _is_oauth_token(api_key):
         # OAuth access token / setup-token → Bearer auth + Claude Code identity.
         # Anthropic routes OAuth requests based on user-agent and headers;
         # without Claude Code's fingerprint, requests get intermittent 500s.
```
@@ -171,6 +171,7 @@ _URL_TO_PROVIDER: Dict[str, str] = {

```python
    "dashscope.aliyuncs.com": "alibaba",
    "dashscope-intl.aliyuncs.com": "alibaba",
    "openrouter.ai": "openrouter",
    "generativelanguage.googleapis.com": "google",
    "inference-api.nousresearch.com": "nous",
    "api.deepseek.com": "deepseek",
    "api.githubcopilot.com": "copilot",
```
@@ -37,6 +37,9 @@ _PREFIX_PATTERNS = [

```diff
     r"dop_v1_[A-Za-z0-9]{10,}",  # DigitalOcean PAT
     r"doo_v1_[A-Za-z0-9]{10,}",  # DigitalOcean OAuth
     r"am_[A-Za-z0-9_-]{10,}",    # AgentMail API key
+    r"sk_[A-Za-z0-9_]{10,}",     # ElevenLabs TTS key (sk_ underscore, not sk- dash)
+    r"tvly-[A-Za-z0-9]{10,}",    # Tavily search API key
+    r"exa_[A-Za-z0-9]{10,}",     # Exa search API key
 ]
 
 # ENV assignment patterns: KEY=value where KEY contains a secret-like name
```
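Prefix patterns like the ones above are typically compiled into one alternation and substituted with a placeholder. A simplified sketch of how this redaction step works (`redact` and the placeholder text are illustrative, not the actual Hermes helper):

```python
import re

# A few of the prefix patterns above, compiled into one alternation.
_SECRET_RE = re.compile(
    r"sk_[A-Za-z0-9_]{10,}"    # ElevenLabs-style key (underscore, not dash)
    r"|tvly-[A-Za-z0-9]{10,}"  # Tavily search API key
    r"|exa_[A-Za-z0-9]{10,}"   # Exa search API key
)

def redact(text: str) -> str:
    """Replace any matching secret with a fixed placeholder."""
    return _SECRET_RE.sub("[REDACTED]", text)
```

The `{10,}` minimum length keeps short, innocent strings such as `sk_test` from being swallowed by the pattern.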
@@ -324,6 +324,9 @@ compression:

```yaml
# vision:
#   provider: "auto"
#   model: ""             # e.g. "google/gemini-2.5-flash", "openai/gpt-4o"
#   timeout: 30           # LLM API call timeout (seconds)
#   download_timeout: 30  # Image HTTP download timeout (seconds)
#                         # Increase for slow connections or self-hosted image servers

# # Web page scraping / summarization + browser page text extraction
# web_extract:
```
cli.py (191 changes)

@@ -1355,6 +1355,49 @@ class HermesCLI:

```python
        return snapshot

    @staticmethod
    def _status_bar_display_width(text: str) -> int:
        """Return terminal cell width for status-bar text.

        len() is not enough for prompt_toolkit layout decisions because some
        glyphs can render wider than one Python codepoint. Keeping the status
        bar within the real display width prevents it from wrapping onto a
        second line and leaving behind duplicate rows.
        """
        try:
            from prompt_toolkit.utils import get_cwidth

            return get_cwidth(text or "")
        except Exception:
            return len(text or "")

    @classmethod
    def _trim_status_bar_text(cls, text: str, max_width: int) -> str:
        """Trim status-bar text to a single terminal row."""
        if max_width <= 0:
            return ""
        try:
            from prompt_toolkit.utils import get_cwidth
        except Exception:
            get_cwidth = None

        if cls._status_bar_display_width(text) <= max_width:
            return text

        ellipsis = "..."
        ellipsis_width = cls._status_bar_display_width(ellipsis)
        if max_width <= ellipsis_width:
            return ellipsis[:max_width]

        out = []
        width = 0
        for ch in text:
            ch_width = get_cwidth(ch) if get_cwidth else len(ch)
            if width + ch_width + ellipsis_width > max_width:
                break
            out.append(ch)
            width += ch_width
        return "".join(out).rstrip() + ellipsis

    def _build_status_bar_text(self, width: Optional[int] = None) -> str:
        try:
            snapshot = self._get_status_bar_snapshot()
```
@ -1369,11 +1412,12 @@ class HermesCLI:
duration_label = snapshot["duration"]

if width < 52:
return f"⚕ {snapshot['model_short']} · {duration_label}"
text = f"⚕ {snapshot['model_short']} · {duration_label}"
return self._trim_status_bar_text(text, width)
if width < 76:
parts = [f"⚕ {snapshot['model_short']}", percent_label]
parts.append(duration_label)
return " · ".join(parts)
return self._trim_status_bar_text(" · ".join(parts), width)

if snapshot["context_length"]:
ctx_total = _format_context_length(snapshot["context_length"])

@ -1384,7 +1428,7 @@ class HermesCLI:

parts = [f"⚕ {snapshot['model_short']}", context_label, percent_label]
parts.append(duration_label)
return " │ ".join(parts)
return self._trim_status_bar_text(" │ ".join(parts), width)
except Exception:
return f"⚕ {self.model if getattr(self, 'model', None) else 'Hermes'}"

@ -1406,53 +1450,54 @@ class HermesCLI:
duration_label = snapshot["duration"]

if width < 52:
return [
("class:status-bar", " ⚕ "),
("class:status-bar-strong", snapshot["model_short"]),
("class:status-bar-dim", " · "),
("class:status-bar-dim", duration_label),
("class:status-bar", " "),
]

percent = snapshot["context_percent"]
percent_label = f"{percent}%" if percent is not None else "--"
if width < 76:
frags = [
("class:status-bar", " ⚕ "),
("class:status-bar-strong", snapshot["model_short"]),
("class:status-bar-dim", " · "),
(self._status_bar_context_style(percent), percent_label),
]
frags.extend([
("class:status-bar-dim", " · "),
("class:status-bar-dim", duration_label),
("class:status-bar", " "),
])
return frags

if snapshot["context_length"]:
ctx_total = _format_context_length(snapshot["context_length"])
ctx_used = format_token_count_compact(snapshot["context_tokens"])
context_label = f"{ctx_used}/{ctx_total}"
]
else:
context_label = "ctx --"
percent = snapshot["context_percent"]
percent_label = f"{percent}%" if percent is not None else "--"
if width < 76:
frags = [
("class:status-bar", " ⚕ "),
("class:status-bar-strong", snapshot["model_short"]),
("class:status-bar-dim", " · "),
(self._status_bar_context_style(percent), percent_label),
("class:status-bar-dim", " · "),
("class:status-bar-dim", duration_label),
("class:status-bar", " "),
]
else:
if snapshot["context_length"]:
ctx_total = _format_context_length(snapshot["context_length"])
ctx_used = format_token_count_compact(snapshot["context_tokens"])
context_label = f"{ctx_used}/{ctx_total}"
else:
context_label = "ctx --"

bar_style = self._status_bar_context_style(percent)
frags = [
("class:status-bar", " ⚕ "),
("class:status-bar-strong", snapshot["model_short"]),
("class:status-bar-dim", " │ "),
("class:status-bar-dim", context_label),
("class:status-bar-dim", " │ "),
(bar_style, self._build_context_bar(percent)),
("class:status-bar-dim", " "),
(bar_style, percent_label),
]
frags.extend([
("class:status-bar-dim", " │ "),
("class:status-bar-dim", duration_label),
("class:status-bar", " "),
])
bar_style = self._status_bar_context_style(percent)
frags = [
("class:status-bar", " ⚕ "),
("class:status-bar-strong", snapshot["model_short"]),
("class:status-bar-dim", " │ "),
("class:status-bar-dim", context_label),
("class:status-bar-dim", " │ "),
(bar_style, self._build_context_bar(percent)),
("class:status-bar-dim", " "),
(bar_style, percent_label),
("class:status-bar-dim", " │ "),
("class:status-bar-dim", duration_label),
("class:status-bar", " "),
]

total_width = sum(self._status_bar_display_width(text) for _, text in frags)
if total_width > width:
plain_text = "".join(text for _, text in frags)
trimmed = self._trim_status_bar_text(plain_text, width)
return [("class:status-bar", trimmed)]
return frags
except Exception:
return [("class:status-bar", f" {self._build_status_bar_text()} ")]

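The width-aware trimming in the hunks above can be sketched standalone. This is an illustrative re-sketch, not the shipped code: the real implementation measures cells with prompt_toolkit's `get_cwidth`, while this version assumes one cell per character, and the names `display_width`/`trim_to_width` are stand-ins:

```python
def display_width(text: str) -> int:
    """Illustrative cell width; real code uses prompt_toolkit's get_cwidth."""
    return len(text or "")

def trim_to_width(text: str, max_width: int) -> str:
    """Trim text so it fits one terminal row, appending '...' when cut."""
    if max_width <= 0:
        return ""
    if display_width(text) <= max_width:
        return text
    ellipsis = "..."
    if max_width <= len(ellipsis):
        return ellipsis[:max_width]
    out, width = [], 0
    for ch in text:
        # stop before the character that would push text + ellipsis past the limit
        if width + 1 + len(ellipsis) > max_width:
            break
        out.append(ch)
        width += 1
    return "".join(out).rstrip() + ellipsis
```

The point of measuring per character is that trimming happens at a cell boundary, so the status bar never wraps to a second row.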
@ -2744,22 +2789,12 @@ class HermesCLI:
print(f" MCP tool: /tools {subcommand} github:create_issue")
return

# Confirm session reset before applying
verb = "Disable" if subcommand == "disable" else "Enable"
# Apply the change directly — the user typing the command is implicit
# consent. Do NOT use input() here; it hangs inside prompt_toolkit's
# TUI event loop (known pitfall).
verb = "Disabling" if subcommand == "disable" else "Enabling"
label = ", ".join(names)
_cprint(f"{_GOLD}{verb} {label}?{_RST}")
_cprint(f"{_DIM}This will save to config and reset your session so the "
f"change takes effect cleanly.{_RST}")
try:
answer = input(" Continue? [y/N] ").strip().lower()
except (EOFError, KeyboardInterrupt):
print()
_cprint(f"{_DIM}Cancelled.{_RST}")
return

if answer not in ("y", "yes"):
_cprint(f"{_DIM}Cancelled.{_RST}")
return
_cprint(f"{_GOLD}{verb} {label}...{_RST}")

tools_disable_enable_command(
Namespace(tools_action=subcommand, names=names, platform="cli"))

@ -2802,6 +2837,28 @@ class HermesCLI:
print(" Example: python cli.py --toolsets web,terminal")
print()

def _handle_profile_command(self):
"""Display active profile name and home directory."""
from hermes_constants import get_hermes_home, display_hermes_home

home = get_hermes_home()
display = display_hermes_home()

profiles_parent = Path.home() / ".hermes" / "profiles"
try:
rel = home.relative_to(profiles_parent)
profile_name = str(rel).split("/")[0]
except ValueError:
profile_name = None

print()
if profile_name:
print(f" Profile: {profile_name}")
else:
print(" Profile: default")
print(f" Home: {display}")
print()

def show_config(self):
"""Display current configuration with kawaii ASCII art."""
# Get terminal config from environment (which was set from cli-config.yaml)

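The profile-name derivation above hinges on `Path.relative_to` raising `ValueError` when the home directory is outside the profiles tree, which is how the default profile is detected. A minimal sketch of that logic, assuming illustrative paths (it uses `rel.parts[0]` instead of the string split, which behaves the same on POSIX):

```python
from pathlib import Path

def profile_name_for(home: Path, profiles_parent: Path) -> str:
    """First path component under profiles_parent, else 'default'."""
    try:
        rel = home.relative_to(profiles_parent)
        # rel.parts is empty when home == profiles_parent
        return rel.parts[0] if rel.parts else "default"
    except ValueError:
        # home is not inside the profiles tree -> default profile
        return "default"
```

This is a pure path computation: nothing touches the filesystem, so it is cheap to call on every `/profile` invocation.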
@ -3644,6 +3701,8 @@ class HermesCLI:
return False
elif canonical == "help":
self.show_help()
elif canonical == "profile":
self._handle_profile_command()
elif canonical == "tools":
self._handle_tools_command(cmd_original)
elif canonical == "toolsets":

@ -3801,6 +3860,8 @@ class HermesCLI:
self.console.print(f" Status bar {state}")
elif canonical == "verbose":
self._toggle_verbose()
elif canonical == "yolo":
self._toggle_yolo()
elif canonical == "reasoning":
self._handle_reasoning_command(cmd_original)
elif canonical == "compress":

@ -4399,6 +4460,17 @@ class HermesCLI:
}
_cprint(labels.get(self.tool_progress_mode, ""))

def _toggle_yolo(self):
"""Toggle YOLO mode — skip all dangerous command approval prompts."""
import os
current = bool(os.environ.get("HERMES_YOLO_MODE"))
if current:
os.environ.pop("HERMES_YOLO_MODE", None)
self.console.print(" ⚠ YOLO mode [bold red]OFF[/] — dangerous commands will require approval.")
else:
os.environ["HERMES_YOLO_MODE"] = "1"
self.console.print(" ⚡ YOLO mode [bold green]ON[/] — all commands auto-approved. Use with caution.")

def _handle_reasoning_command(self, cmd: str):
"""Handle /reasoning — manage effort level and display toggle.

@ -6165,6 +6237,11 @@ class HermesCLI:
self._interrupt_queue = queue.Queue()  # For messages typed while agent is running
self._should_exit = False
self._last_ctrl_c_time = 0  # Track double Ctrl+C for force exit

# Give plugin manager a CLI reference so plugins can inject messages
from hermes_cli.plugins import get_plugin_manager
get_plugin_manager()._cli_ref = self

# Config file watcher — detect mcp_servers changes and auto-reload
from hermes_cli.config import get_config_path as _get_config_path
_cfg_path = _get_config_path()

@ -236,11 +236,12 @@ def _build_job_prompt(job: dict) -> str:
# Always prepend [SILENT] guidance so the cron agent can suppress
# delivery when it has nothing new or noteworthy to report.
silent_hint = (
"[SYSTEM: If you have nothing new or noteworthy to report, respond "
"with exactly \"[SILENT]\" (optionally followed by a brief internal "
"note). This suppresses delivery to the user while still saving "
"output locally. Only use [SILENT] when there are genuinely no "
"changes worth reporting.]\n\n"
"[SYSTEM: If you have a meaningful status report or findings, "
"send them — that is the whole point of this job. Only respond "
"with exactly \"[SILENT]\" (nothing else) when there is genuinely "
"nothing new to report. [SILENT] suppresses delivery to the user. "
"Never combine [SILENT] with content — either report your "
"findings normally, or say [SILENT] and nothing more.]\n\n"
)
prompt = silent_hint + prompt
if skills is None:

@ -24,6 +24,15 @@ logger = logging.getLogger(__name__)

def _coerce_bool(value: Any, default: bool = True) -> bool:
"""Coerce bool-ish config values, preserving a caller-provided default."""
if value is None:
return default
if isinstance(value, str):
lowered = value.strip().lower()
if lowered in ("true", "1", "yes", "on"):
return True
if lowered in ("false", "0", "no", "off"):
return False
return default
return is_truthy_value(value, default=default)

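`_coerce_bool` above accepts the common string spellings and keeps the caller's default for anything it does not recognize. A standalone illustration of that behavior, with the final non-string branch simplified to plain `bool()` since `is_truthy_value` lives elsewhere in the gateway code:

```python
def coerce_bool(value, default=True):
    """Best-effort bool coercion for YAML/env config values."""
    if value is None:
        return default
    if isinstance(value, str):
        lowered = value.strip().lower()
        if lowered in ("true", "1", "yes", "on"):
            return True
        if lowered in ("false", "0", "no", "off"):
            return False
        return default  # unrecognized spelling keeps the caller's default
    return bool(value)  # simplified stand-in for is_truthy_value()
```

Preserving the caller-provided default matters for settings like `require_mention`, where "unset" and "false" must not collapse into the same value.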
@ -510,6 +519,10 @@ def load_gateway_config() -> GatewayConfig:
)
if "reply_prefix" in platform_cfg:
bridged["reply_prefix"] = platform_cfg["reply_prefix"]
if "require_mention" in platform_cfg:
bridged["require_mention"] = platform_cfg["require_mention"]
if "mention_patterns" in platform_cfg:
bridged["mention_patterns"] = platform_cfg["mention_patterns"]
if not bridged:
continue
plat_data = platforms_data.setdefault(plat.value, {})

@ -534,6 +547,20 @@ def load_gateway_config() -> GatewayConfig:
os.environ["DISCORD_FREE_RESPONSE_CHANNELS"] = str(frc)
if "auto_thread" in discord_cfg and not os.getenv("DISCORD_AUTO_THREAD"):
os.environ["DISCORD_AUTO_THREAD"] = str(discord_cfg["auto_thread"]).lower()

# Telegram settings → env vars (env vars take precedence)
telegram_cfg = yaml_cfg.get("telegram", {})
if isinstance(telegram_cfg, dict):
if "require_mention" in telegram_cfg and not os.getenv("TELEGRAM_REQUIRE_MENTION"):
os.environ["TELEGRAM_REQUIRE_MENTION"] = str(telegram_cfg["require_mention"]).lower()
if "mention_patterns" in telegram_cfg and not os.getenv("TELEGRAM_MENTION_PATTERNS"):
import json as _json
os.environ["TELEGRAM_MENTION_PATTERNS"] = _json.dumps(telegram_cfg["mention_patterns"])
frc = telegram_cfg.get("free_response_chats")
if frc is not None and not os.getenv("TELEGRAM_FREE_RESPONSE_CHATS"):
if isinstance(frc, list):
frc = ",".join(str(v) for v in frc)
os.environ["TELEGRAM_FREE_RESPONSE_CHATS"] = str(frc)
except Exception as e:
logger.warning(
"Failed to process config.yaml — falling back to .env / gateway.json values. "

@ -876,4 +903,3 @@ def _apply_env_overrides(config: GatewayConfig) -> None:
config.default_reset_policy.at_hour = int(reset_hour)
except ValueError:
pass

@ -898,6 +898,26 @@ class BasePlatformAdapter(ABC):
except Exception:
pass

# ── Processing lifecycle hooks ──────────────────────────────────────────
# Subclasses override these to react to message processing events
# (e.g. Discord adds 👀/✅/❌ reactions).

async def on_processing_start(self, event: MessageEvent) -> None:
"""Hook called when background processing begins."""

async def on_processing_complete(self, event: MessageEvent, success: bool) -> None:
"""Hook called when background processing completes."""

async def _run_processing_hook(self, hook_name: str, *args: Any, **kwargs: Any) -> None:
"""Run a lifecycle hook without letting failures break message flow."""
hook = getattr(self, hook_name, None)
if not callable(hook):
return
try:
await hook(*args, **kwargs)
except Exception as e:
logger.warning("[%s] %s hook failed: %s", self.name, hook_name, e)

@staticmethod
def _is_retryable_error(error: Optional[str]) -> bool:
"""Return True if the error string looks like a transient network failure."""

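The hook-runner pattern above, looking up an optionally overridden coroutine by name and swallowing its failures so a broken hook can never break message delivery, can be exercised in isolation. The `Adapter`/`Failing` classes and the `calls` list here are illustrative test doubles, not gateway code:

```python
import asyncio
import logging

logger = logging.getLogger("hooks")
calls = []  # records which hooks actually ran

class Adapter:
    name = "demo"

    async def on_processing_start(self, event):
        calls.append(event)

    async def _run_processing_hook(self, hook_name, *args, **kwargs):
        """Run a lifecycle hook without letting failures break message flow."""
        hook = getattr(self, hook_name, None)
        if not callable(hook):
            return
        try:
            await hook(*args, **kwargs)
        except Exception as e:
            logger.warning("[%s] %s hook failed: %s", self.name, hook_name, e)

class Failing(Adapter):
    async def on_processing_start(self, event):
        raise RuntimeError("boom")

async def main():
    await Adapter()._run_processing_hook("on_processing_start", "msg-1")
    await Failing()._run_processing_hook("on_processing_start", "msg-2")  # warning logged, not raised
    await Adapter()._run_processing_hook("no_such_hook")  # missing hooks are skipped

asyncio.run(main())
```

The base class can therefore call `_run_processing_hook` unconditionally; adapters that do not care about lifecycle events simply inherit the empty defaults.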
@ -1060,6 +1080,18 @@ class BasePlatformAdapter(ABC):

async def _process_message_background(self, event: MessageEvent, session_key: str) -> None:
"""Background task that actually processes the message."""
# Track delivery outcomes for the processing-complete hook
delivery_attempted = False
delivery_succeeded = False

def _record_delivery(result):
nonlocal delivery_attempted, delivery_succeeded
if result is None:
return
delivery_attempted = True
if getattr(result, "success", False):
delivery_succeeded = True

# Create interrupt event for this session
interrupt_event = asyncio.Event()
self._active_sessions[session_key] = interrupt_event

@ -1069,6 +1101,8 @@ class BasePlatformAdapter(ABC):
typing_task = asyncio.create_task(self._keep_typing(event.source.chat_id, metadata=_thread_metadata))

try:
await self._run_processing_hook("on_processing_start", event)

# Call the handler (this can take a while with tool calls)
response = await self._message_handler(event)

@ -1138,6 +1172,7 @@ class BasePlatformAdapter(ABC):
reply_to=event.message_id,
metadata=_thread_metadata,
)
_record_delivery(result)

# Human-like pacing delay between text and media
human_delay = self._get_human_delay()

@ -1237,6 +1272,10 @@ class BasePlatformAdapter(ABC):
except Exception as file_err:
logger.error("[%s] Error sending local file %s: %s", self.name, file_path, file_err)

# Determine overall success for the processing hook
processing_ok = delivery_succeeded if delivery_attempted else not bool(response)
await self._run_processing_hook("on_processing_complete", event, processing_ok)

# Check if there's a pending message that was queued during our processing
if session_key in self._pending_messages:
pending_event = self._pending_messages.pop(session_key)

@ -1253,7 +1292,11 @@ class BasePlatformAdapter(ABC):
await self._process_message_background(pending_event, session_key)
return  # Already cleaned up

except asyncio.CancelledError:
await self._run_processing_hook("on_processing_complete", event, False)
raise
except Exception as e:
await self._run_processing_hook("on_processing_complete", event, False)
logger.error("[%s] Error handling message: %s", self.name, e, exc_info=True)
# Send the error to the user so they aren't left with radio silence
try:

@ -660,6 +660,41 @@ class DiscordAdapter(BasePlatformAdapter):
pass

logger.info("[%s] Disconnected", self.name)

async def _add_reaction(self, message: Any, emoji: str) -> bool:
"""Add an emoji reaction to a Discord message."""
if not message or not hasattr(message, "add_reaction"):
return False
try:
await message.add_reaction(emoji)
return True
except Exception as e:
logger.debug("[%s] add_reaction failed (%s): %s", self.name, emoji, e)
return False

async def _remove_reaction(self, message: Any, emoji: str) -> bool:
"""Remove the bot's own emoji reaction from a Discord message."""
if not message or not hasattr(message, "remove_reaction") or not self._client or not self._client.user:
return False
try:
await message.remove_reaction(emoji, self._client.user)
return True
except Exception as e:
logger.debug("[%s] remove_reaction failed (%s): %s", self.name, emoji, e)
return False

async def on_processing_start(self, event: MessageEvent) -> None:
"""Add an in-progress reaction for normal Discord message events."""
message = event.raw_message
if hasattr(message, "add_reaction"):
await self._add_reaction(message, "👀")

async def on_processing_complete(self, event: MessageEvent, success: bool) -> None:
"""Swap the in-progress reaction for a final success/failure reaction."""
message = event.raw_message
if hasattr(message, "add_reaction"):
await self._remove_reaction(message, "👀")
await self._add_reaction(message, "✅" if success else "❌")

async def send(
self,

@ -17,6 +17,8 @@ Environment variables:
from __future__ import annotations

import asyncio
import io
import json
import logging
import mimetypes
import os

@ -512,8 +514,11 @@ class MatrixAdapter(BasePlatformAdapter):
reply_to: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None,
) -> SendResult:
"""Upload an audio file as a voice message."""
return await self._send_local_file(chat_id, audio_path, "m.audio", caption, reply_to, metadata=metadata)
"""Upload an audio file as a voice message (MSC3245 native voice)."""
return await self._send_local_file(
chat_id, audio_path, "m.audio", caption, reply_to,
metadata=metadata, is_voice=True
)

async def send_video(
self,

@ -546,13 +551,16 @@ class MatrixAdapter(BasePlatformAdapter):
caption: Optional[str] = None,
reply_to: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None,
is_voice: bool = False,
) -> SendResult:
"""Upload bytes to Matrix and send as a media message."""
import nio

# Upload to homeserver.
resp = await self._client.upload(
data,
# nio expects a DataProvider (callable) or file-like object, not raw bytes.
# nio.upload() returns a tuple (UploadResponse|UploadError, Optional[Dict])
resp, maybe_encryption_info = await self._client.upload(
io.BytesIO(data),
content_type=content_type,
filename=filename,
)

@ -574,6 +582,10 @@ class MatrixAdapter(BasePlatformAdapter):
},
}

# Add MSC3245 voice flag for native voice messages.
if is_voice:
msg_content["org.matrix.msc3245.voice"] = {}

if reply_to:
msg_content["m.relates_to"] = {
"m.in_reply_to": {"event_id": reply_to}

@ -601,6 +613,7 @@ class MatrixAdapter(BasePlatformAdapter):
reply_to: Optional[str] = None,
file_name: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None,
is_voice: bool = False,
) -> SendResult:
"""Read a local file and upload it."""
p = Path(file_path)

@ -613,7 +626,7 @@ class MatrixAdapter(BasePlatformAdapter):
ct = mimetypes.guess_type(fname)[0] or "application/octet-stream"
data = p.read_bytes()

return await self._upload_and_send(room_id, data, fname, ct, msgtype, caption, reply_to, metadata)
return await self._upload_and_send(room_id, data, fname, ct, msgtype, caption, reply_to, metadata, is_voice)

# ------------------------------------------------------------------
# Sync loop

@ -808,11 +821,19 @@ class MatrixAdapter(BasePlatformAdapter):
event_mimetype = (content_info.get("info") or {}).get("mimetype", "")
media_type = "application/octet-stream"
msg_type = MessageType.DOCUMENT
is_voice_message = False

if isinstance(event, nio.RoomMessageImage):
msg_type = MessageType.PHOTO
media_type = event_mimetype or "image/png"
elif isinstance(event, nio.RoomMessageAudio):
msg_type = MessageType.AUDIO
# Check for MSC3245 voice flag: org.matrix.msc3245.voice: {}
source_content = getattr(event, "source", {}).get("content", {})
if source_content.get("org.matrix.msc3245.voice") is not None:
is_voice_message = True
msg_type = MessageType.VOICE
else:
msg_type = MessageType.AUDIO
media_type = event_mimetype or "audio/ogg"
elif isinstance(event, nio.RoomMessageVideo):
msg_type = MessageType.VIDEO

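As the hunk above shows, MSC3245 marks a native voice message by the mere presence of the `org.matrix.msc3245.voice` key in the event content, typically as an empty object. A minimal classifier sketch (the function name `classify_audio` is illustrative):

```python
def classify_audio(content: dict) -> str:
    """Return 'voice' for MSC3245 native voice messages, else 'audio'."""
    # Presence of the key (even as an empty dict {}) flags a voice message,
    # so compare against None rather than testing truthiness.
    if content.get("org.matrix.msc3245.voice") is not None:
        return "voice"
    return "audio"
```

Using `is not None` instead of a truthiness check is the important detail: the flag's value is usually `{}`, which is falsy.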
@ -850,6 +871,31 @@ class MatrixAdapter(BasePlatformAdapter):
if relates_to.get("rel_type") == "m.thread":
thread_id = relates_to.get("event_id")

# For voice messages, cache audio locally for transcription tools.
# Use the authenticated nio client to download (Matrix requires auth for media).
media_urls = [http_url] if http_url else None
media_types = [media_type] if http_url else None

if is_voice_message and url and url.startswith("mxc://"):
try:
import nio
from gateway.platforms.base import cache_audio_from_bytes

resp = await self._client.download(mxc=url)
if isinstance(resp, nio.MemoryDownloadResponse):
# Extract extension from mimetype or default to .ogg
ext = ".ogg"
if media_type and "/" in media_type:
subtype = media_type.split("/")[1]
ext = f".{subtype}" if subtype else ".ogg"
local_path = cache_audio_from_bytes(resp.body, ext)
media_urls = [local_path]
logger.debug("Matrix: cached voice message to %s", local_path)
else:
logger.warning("Matrix: failed to download voice: %s", getattr(resp, "message", resp))
except Exception as e:
logger.warning("Matrix: failed to cache voice message, using HTTP URL: %s", e)

source = self.build_source(
chat_id=room.room_id,
chat_type=chat_type,

@ -858,8 +904,9 @@ class MatrixAdapter(BasePlatformAdapter):
thread_id=thread_id,
)

# Use cached local path for images, HTTP URL for other media types
media_urls = [cached_path] if cached_path else ([http_url] if http_url else None)
# Use cached local path for images (voice messages already handled above).
if cached_path:
media_urls = [cached_path]
media_types = [media_type] if media_urls else None

msg_event = MessageEvent(

@ -9,6 +9,7 @@ Uses slack-bolt (Python) with Socket Mode for:
"""

import asyncio
import json
import logging
import os
import re

@ -73,6 +74,10 @@ class SlackAdapter(BasePlatformAdapter):
self._bot_user_id: Optional[str] = None
self._user_name_cache: Dict[str, str] = {}  # user_id → display name
self._socket_mode_task: Optional[asyncio.Task] = None
# Multi-workspace support
self._team_clients: Dict[str, AsyncWebClient] = {}  # team_id → WebClient
self._team_bot_user_ids: Dict[str, str] = {}  # team_id → bot_user_id
self._channel_team: Dict[str, str] = {}  # channel_id → team_id

async def connect(self) -> bool:
"""Connect to Slack via Socket Mode."""

@ -82,16 +87,34 @@ class SlackAdapter(BasePlatformAdapter):
)
return False

bot_token = self.config.token
raw_token = self.config.token
app_token = os.getenv("SLACK_APP_TOKEN")

if not bot_token:
if not raw_token:
logger.error("[Slack] SLACK_BOT_TOKEN not set")
return False
if not app_token:
logger.error("[Slack] SLACK_APP_TOKEN not set")
return False

# Support comma-separated bot tokens for multi-workspace
bot_tokens = [t.strip() for t in raw_token.split(",") if t.strip()]

# Also load tokens from OAuth token file
from hermes_constants import get_hermes_home
tokens_file = get_hermes_home() / "slack_tokens.json"
if tokens_file.exists():
try:
saved = json.loads(tokens_file.read_text(encoding="utf-8"))
for team_id, entry in saved.items():
tok = entry.get("token", "") if isinstance(entry, dict) else ""
if tok and tok not in bot_tokens:
bot_tokens.append(tok)
team_label = entry.get("team_name", team_id) if isinstance(entry, dict) else team_id
logger.info("[Slack] Loaded saved token for workspace %s", team_label)
except Exception as e:
logger.warning("[Slack] Failed to read %s: %s", tokens_file, e)

try:
# Acquire scoped lock to prevent duplicate app token usage
from gateway.status import acquire_scoped_lock

@ -104,12 +127,30 @@ class SlackAdapter(BasePlatformAdapter):
self._set_fatal_error('slack_token_lock', message, retryable=False)
return False

self._app = AsyncApp(token=bot_token)
# First token is the primary — used for AsyncApp / Socket Mode
primary_token = bot_tokens[0]
self._app = AsyncApp(token=primary_token)

# Get our own bot user ID for mention detection
auth_response = await self._app.client.auth_test()
self._bot_user_id = auth_response.get("user_id")
bot_name = auth_response.get("user", "unknown")
# Register each bot token and map team_id → client
for token in bot_tokens:
client = AsyncWebClient(token=token)
auth_response = await client.auth_test()
team_id = auth_response.get("team_id", "")
bot_user_id = auth_response.get("user_id", "")
bot_name = auth_response.get("user", "unknown")
team_name = auth_response.get("team", "unknown")

self._team_clients[team_id] = client
self._team_bot_user_ids[team_id] = bot_user_id

# First token sets the primary bot_user_id (backward compat)
if self._bot_user_id is None:
self._bot_user_id = bot_user_id

logger.info(
"[Slack] Authenticated as @%s in workspace %s (team: %s)",
bot_name, team_name, team_id,
)

# Register message event handler
@self._app.event("message")

@ -134,7 +175,10 @@ class SlackAdapter(BasePlatformAdapter):
self._socket_mode_task = asyncio.create_task(self._handler.start_async())

self._running = True
logger.info("[Slack] Connected as @%s (Socket Mode)", bot_name)
logger.info(
"[Slack] Socket Mode connected (%d workspace(s))",
len(self._team_clients),
)
return True

except Exception as e:  # pragma: no cover - defensive logging

@ -161,6 +205,13 @@ class SlackAdapter(BasePlatformAdapter):

logger.info("[Slack] Disconnected")

def _get_client(self, chat_id: str) -> AsyncWebClient:
"""Return the workspace-specific WebClient for a channel."""
team_id = self._channel_team.get(chat_id)
if team_id and team_id in self._team_clients:
return self._team_clients[team_id]
return self._app.client  # fallback to primary

async def send(
self,
chat_id: str,

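The channel-to-workspace routing introduced by `_get_client` above is a plain two-dict lookup with a primary-client fallback. It can be sketched with stand-in string "clients" (the `Router` class and its names are illustrative, not the adapter's API):

```python
class Router:
    def __init__(self, primary, team_clients, channel_team):
        self.primary = primary              # client used when routing fails
        self.team_clients = team_clients    # team_id -> client
        self.channel_team = channel_team    # channel_id -> team_id

    def get_client(self, chat_id):
        """Workspace-specific client for a channel, else the primary."""
        team_id = self.channel_team.get(chat_id)
        if team_id and team_id in self.team_clients:
            return self.team_clients[team_id]
        return self.primary
```

Falling back to the primary client keeps single-workspace installs working unchanged: their `channel_team` map is simply never populated.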
@ -197,7 +248,7 @@ class SlackAdapter(BasePlatformAdapter):
if broadcast and i == 0:
kwargs["reply_broadcast"] = True

last_result = await self._app.client.chat_postMessage(**kwargs)
last_result = await self._get_client(chat_id).chat_postMessage(**kwargs)

return SendResult(
success=True,

@ -219,7 +270,7 @@ class SlackAdapter(BasePlatformAdapter):
if not self._app:
return SendResult(success=False, error="Not connected")
try:
await self._app.client.chat_update(
await self._get_client(chat_id).chat_update(
channel=chat_id,
ts=message_id,
text=content,

@ -253,7 +304,7 @@ class SlackAdapter(BasePlatformAdapter):
return  # Can only set status in a thread context

try:
await self._app.client.assistant_threads_setStatus(
await self._get_client(chat_id).assistant_threads_setStatus(
channel_id=chat_id,
thread_ts=thread_ts,
status="is thinking...",

@ -295,7 +346,7 @@ class SlackAdapter(BasePlatformAdapter):
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")

result = await self._app.client.files_upload_v2(
result = await self._get_client(chat_id).files_upload_v2(
channel=chat_id,
file=file_path,
filename=os.path.basename(file_path),

@ -397,7 +448,7 @@ class SlackAdapter(BasePlatformAdapter):
if not self._app:
return False
try:
await self._app.client.reactions_add(
await self._get_client(channel).reactions_add(
channel=channel, timestamp=timestamp, name=emoji
)
return True

@ -413,7 +464,7 @@ class SlackAdapter(BasePlatformAdapter):
if not self._app:
return False
try:
await self._app.client.reactions_remove(
await self._get_client(channel).reactions_remove(
channel=channel, timestamp=timestamp, name=emoji
)
return True

@ -423,7 +474,7 @@ class SlackAdapter(BasePlatformAdapter):

# ----- User identity resolution -----

async def _resolve_user_name(self, user_id: str) -> str:
async def _resolve_user_name(self, user_id: str, chat_id: str = "") -> str:
"""Resolve a Slack user ID to a display name, with caching."""
if not user_id:
return ""

@ -434,7 +485,8 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
return user_id
|
||||
|
||||
try:
|
||||
result = await self._app.client.users_info(user=user_id)
|
||||
client = self._get_client(chat_id) if chat_id else self._app.client
|
||||
result = await client.users_info(user=user_id)
|
||||
user = result.get("user", {})
|
||||
# Prefer display_name → real_name → user_id
|
||||
profile = user.get("profile", {})
|
||||
|
|
@ -498,7 +550,7 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
response = await client.get(image_url)
|
||||
response.raise_for_status()
|
||||
|
||||
result = await self._app.client.files_upload_v2(
|
||||
result = await self._get_client(chat_id).files_upload_v2(
|
||||
channel=chat_id,
|
||||
content=response.content,
|
||||
filename="image.png",
|
||||
|
|
@ -558,7 +610,7 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
return SendResult(success=False, error=f"Video file not found: {video_path}")
|
||||
|
||||
try:
|
||||
result = await self._app.client.files_upload_v2(
|
||||
result = await self._get_client(chat_id).files_upload_v2(
|
||||
channel=chat_id,
|
||||
file=video_path,
|
||||
filename=os.path.basename(video_path),
|
||||
|
|
@ -599,7 +651,7 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
display_name = file_name or os.path.basename(file_path)
|
||||
|
||||
try:
|
||||
result = await self._app.client.files_upload_v2(
|
||||
result = await self._get_client(chat_id).files_upload_v2(
|
||||
channel=chat_id,
|
||||
file=file_path,
|
||||
filename=display_name,
|
||||
|
|
@ -627,7 +679,7 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
return {"name": chat_id, "type": "unknown"}
|
||||
|
||||
try:
|
||||
result = await self._app.client.conversations_info(channel=chat_id)
|
||||
result = await self._get_client(chat_id).conversations_info(channel=chat_id)
|
||||
channel = result.get("channel", {})
|
||||
is_dm = channel.get("is_im", False)
|
||||
return {
|
||||
|
|
@ -660,6 +712,11 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
user_id = event.get("user", "")
|
||||
channel_id = event.get("channel", "")
|
||||
ts = event.get("ts", "")
|
||||
team_id = event.get("team", "")
|
||||
|
||||
# Track which workspace owns this channel
|
||||
if team_id and channel_id:
|
||||
self._channel_team[channel_id] = team_id
|
||||
|
||||
# Determine if this is a DM or channel message
|
||||
channel_type = event.get("channel_type", "")
|
||||
|
|
@ -676,11 +733,12 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
thread_ts = event.get("thread_ts") or ts # ts fallback for channels
|
||||
|
||||
# In channels, only respond if bot is mentioned
|
||||
if not is_dm and self._bot_user_id:
|
||||
if f"<@{self._bot_user_id}>" not in text:
|
||||
bot_uid = self._team_bot_user_ids.get(team_id, self._bot_user_id)
|
||||
if not is_dm and bot_uid:
|
||||
if f"<@{bot_uid}>" not in text:
|
||||
return
|
||||
# Strip the bot mention from the text
|
||||
text = text.replace(f"<@{self._bot_user_id}>", "").strip()
|
||||
text = text.replace(f"<@{bot_uid}>", "").strip()
|
||||
|
||||
# Determine message type
|
||||
msg_type = MessageType.TEXT
|
||||
|
|
@ -700,7 +758,7 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
if ext not in (".jpg", ".jpeg", ".png", ".gif", ".webp"):
|
||||
ext = ".jpg"
|
||||
# Slack private URLs require the bot token as auth header
|
||||
cached = await self._download_slack_file(url, ext)
|
||||
cached = await self._download_slack_file(url, ext, team_id=team_id)
|
||||
media_urls.append(cached)
|
||||
media_types.append(mimetype)
|
||||
msg_type = MessageType.PHOTO
|
||||
|
|
@ -711,7 +769,7 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
ext = "." + mimetype.split("/")[-1].split(";")[0]
|
||||
if ext not in (".ogg", ".mp3", ".wav", ".webm", ".m4a"):
|
||||
ext = ".ogg"
|
||||
cached = await self._download_slack_file(url, ext, audio=True)
|
||||
cached = await self._download_slack_file(url, ext, audio=True, team_id=team_id)
|
||||
media_urls.append(cached)
|
||||
media_types.append(mimetype)
|
||||
msg_type = MessageType.VOICE
|
||||
|
|
@ -742,7 +800,7 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
continue
|
||||
|
||||
# Download and cache
|
||||
raw_bytes = await self._download_slack_file_bytes(url)
|
||||
raw_bytes = await self._download_slack_file_bytes(url, team_id=team_id)
|
||||
cached_path = cache_document_from_bytes(
|
||||
raw_bytes, original_filename or f"document{ext}"
|
||||
)
|
||||
|
|
@ -771,7 +829,7 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
logger.warning("[Slack] Failed to cache document from %s: %s", url, e, exc_info=True)
|
||||
|
||||
# Resolve user display name (cached after first lookup)
|
||||
user_name = await self._resolve_user_name(user_id)
|
||||
user_name = await self._resolve_user_name(user_id, chat_id=channel_id)
|
||||
|
||||
# Build source
|
||||
source = self.build_source(
|
||||
|
|
@ -808,6 +866,11 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
text = command.get("text", "").strip()
|
||||
user_id = command.get("user_id", "")
|
||||
channel_id = command.get("channel_id", "")
|
||||
team_id = command.get("team_id", "")
|
||||
|
||||
# Track which workspace owns this channel
|
||||
if team_id and channel_id:
|
||||
self._channel_team[channel_id] = team_id
|
||||
|
||||
# Map subcommands to gateway commands — derived from central registry.
|
||||
# Also keep "compact" as a Slack-specific alias for /compress.
|
||||
|
|
@ -839,12 +902,12 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
|
||||
await self.handle_message(event)
|
||||
|
||||
async def _download_slack_file(self, url: str, ext: str, audio: bool = False) -> str:
|
||||
async def _download_slack_file(self, url: str, ext: str, audio: bool = False, team_id: str = "") -> str:
|
||||
"""Download a Slack file using the bot token for auth, with retry."""
|
||||
import asyncio
|
||||
import httpx
|
||||
|
||||
bot_token = self.config.token
|
||||
bot_token = self._team_clients[team_id].token if team_id and team_id in self._team_clients else self.config.token
|
||||
last_exc = None
|
||||
|
||||
async with httpx.AsyncClient(timeout=30.0, follow_redirects=True) as client:
|
||||
|
|
@ -874,12 +937,12 @@ class SlackAdapter(BasePlatformAdapter):
|
|||
raise
|
||||
raise last_exc
|
||||
|
||||
async def _download_slack_file_bytes(self, url: str) -> bytes:
|
||||
async def _download_slack_file_bytes(self, url: str, team_id: str = "") -> bytes:
|
||||
"""Download a Slack file and return raw bytes, with retry."""
|
||||
import asyncio
|
||||
import httpx
|
||||
|
||||
bot_token = self.config.token
|
||||
bot_token = self._team_clients[team_id].token if team_id and team_id in self._team_clients else self.config.token
|
||||
last_exc = None
|
||||
|
||||
async with httpx.AsyncClient(timeout=30.0, follow_redirects=True) as client:
|
||||
|
|
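The multi-workspace changes above route every Slack API call through a per-team client lookup keyed by the channel's owning team. A minimal standalone sketch of that dispatch, assuming a `_team_clients` map keyed by Slack team ID and a `_channel_team` map populated from inbound events (both names taken from the diff; the `FakeClient` class here is hypothetical, standing in for slack_sdk's web client):

```python
class FakeClient:
    """Stand-in for a Slack web client; hypothetical, for illustration only."""
    def __init__(self, token: str) -> None:
        self.token = token


class MultiWorkspaceRouter:
    def __init__(self, default_client: FakeClient) -> None:
        self._default = default_client
        self._team_clients: dict[str, FakeClient] = {}  # team_id -> client
        self._channel_team: dict[str, str] = {}         # channel_id -> team_id

    def track(self, channel_id: str, team_id: str) -> None:
        # Called on every inbound event so later sends pick the right workspace.
        if team_id and channel_id:
            self._channel_team[channel_id] = team_id

    def get_client(self, channel_id: str) -> FakeClient:
        # Unknown channels fall back to the single-workspace default client.
        team_id = self._channel_team.get(channel_id, "")
        return self._team_clients.get(team_id, self._default)


router = MultiWorkspaceRouter(FakeClient("xoxb-default"))
router._team_clients["T1"] = FakeClient("xoxb-team1")
router.track("C100", "T1")
```

The fallback to the default client is what keeps existing single-workspace installs working unchanged.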
@@ -8,6 +8,7 @@ Uses python-telegram-bot library for:
"""

import asyncio
+import json
import logging
import os
import re

@@ -122,6 +123,8 @@ class TelegramAdapter(BasePlatformAdapter):
        super().__init__(config, Platform.TELEGRAM)
        self._app: Optional[Application] = None
        self._bot: Optional[Bot] = None
+        self._webhook_mode: bool = False
+        self._mention_patterns = self._compile_mention_patterns()
        self._reply_to_mode: str = getattr(config, 'reply_to_mode', 'first') or 'first'
        # Buffer rapid/album photo updates so Telegram image bursts are handled
        # as a single MessageEvent instead of self-interrupting multiple turns.

@@ -456,7 +459,19 @@ class TelegramAdapter(BasePlatformAdapter):
            self._persist_dm_topic_thread_id(int(chat_id), topic_name, thread_id)

    async def connect(self) -> bool:
-        """Connect to Telegram and start polling for updates."""
+        """Connect to Telegram via polling or webhook.
+
+        By default, uses long polling (outbound connection to Telegram).
+        If ``TELEGRAM_WEBHOOK_URL`` is set, starts an HTTP webhook server
+        instead. Webhook mode is useful for cloud deployments (Fly.io,
+        Railway) where inbound HTTP can wake a suspended machine.
+
+        Env vars for webhook mode::
+
+            TELEGRAM_WEBHOOK_URL     Public HTTPS URL (e.g. https://app.fly.dev/telegram)
+            TELEGRAM_WEBHOOK_PORT    Local listen port (default 8443)
+            TELEGRAM_WEBHOOK_SECRET  Secret token for update verification
+        """
        if not TELEGRAM_AVAILABLE:
            logger.error(
                "[%s] python-telegram-bot not installed. Run: pip install python-telegram-bot",
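The webhook configuration above derives the local listen path from the public webhook URL. A small sketch of that derivation using the same `urlparse` call and env-var names the diff uses (the `/telegram` default and 8443 port come from the diff; the helper function name is invented):

```python
import os
from urllib.parse import urlparse


def resolve_webhook_settings(env: dict[str, str]) -> tuple[str, int, str]:
    """Return (url, port, path) from webhook env vars, mirroring the diff's logic."""
    webhook_url = env.get("TELEGRAM_WEBHOOK_URL", "").strip()
    webhook_port = int(env.get("TELEGRAM_WEBHOOK_PORT", "8443"))
    # The path component of the public URL becomes the local listen path.
    webhook_path = urlparse(webhook_url).path or "/telegram"
    return webhook_url, webhook_port, webhook_path


url, port, path = resolve_webhook_settings(
    {"TELEGRAM_WEBHOOK_URL": "https://app.fly.dev/telegram"}
)
```

Because the same URL is both registered with Telegram and parsed for the local route, the two can never drift apart.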
@@ -550,37 +565,76 @@ class TelegramAdapter(BasePlatformAdapter):
            else:
                raise
        await self._app.start()
-        loop = asyncio.get_running_loop()
-
-        def _polling_error_callback(error: Exception) -> None:
-            if self._polling_error_task and not self._polling_error_task.done():
-                return
-            if self._looks_like_polling_conflict(error):
-                self._polling_error_task = loop.create_task(self._handle_polling_conflict(error))
-            elif self._looks_like_network_error(error):
-                logger.warning("[%s] Telegram network error, scheduling reconnect: %s", self.name, error)
-                self._polling_error_task = loop.create_task(self._handle_polling_network_error(error))
-            else:
-                logger.error("[%s] Telegram polling error: %s", self.name, error, exc_info=True)
-
-        # Store reference for retry use in _handle_polling_conflict
-        self._polling_error_callback_ref = _polling_error_callback
+        # Decide between webhook and polling mode
+        webhook_url = os.getenv("TELEGRAM_WEBHOOK_URL", "").strip()

-        await self._app.updater.start_polling(
-            allowed_updates=Update.ALL_TYPES,
-            drop_pending_updates=True,
-            error_callback=_polling_error_callback,
-        )
+        if webhook_url:
+            # ── Webhook mode ─────────────────────────────────────
+            # Telegram pushes updates to our HTTP endpoint. This
+            # enables cloud platforms (Fly.io, Railway) to auto-wake
+            # suspended machines on inbound HTTP traffic.
+            webhook_port = int(os.getenv("TELEGRAM_WEBHOOK_PORT", "8443"))
+            webhook_secret = os.getenv("TELEGRAM_WEBHOOK_SECRET", "").strip() or None
+            from urllib.parse import urlparse
+            webhook_path = urlparse(webhook_url).path or "/telegram"
+
+            await self._app.updater.start_webhook(
+                listen="0.0.0.0",
+                port=webhook_port,
+                url_path=webhook_path,
+                webhook_url=webhook_url,
+                secret_token=webhook_secret,
+                allowed_updates=Update.ALL_TYPES,
+                drop_pending_updates=True,
+            )
+            self._webhook_mode = True
+            logger.info(
+                "[%s] Webhook server listening on 0.0.0.0:%d%s",
+                self.name, webhook_port, webhook_path,
+            )
+        else:
+            # ── Polling mode (default) ───────────────────────────
+            loop = asyncio.get_running_loop()
+
+            def _polling_error_callback(error: Exception) -> None:
+                if self._polling_error_task and not self._polling_error_task.done():
+                    return
+                if self._looks_like_polling_conflict(error):
+                    self._polling_error_task = loop.create_task(self._handle_polling_conflict(error))
+                elif self._looks_like_network_error(error):
+                    logger.warning("[%s] Telegram network error, scheduling reconnect: %s", self.name, error)
+                    self._polling_error_task = loop.create_task(self._handle_polling_network_error(error))
+                else:
+                    logger.error("[%s] Telegram polling error: %s", self.name, error, exc_info=True)
+
+            # Store reference for retry use in _handle_polling_conflict
+            self._polling_error_callback_ref = _polling_error_callback
+
+            await self._app.updater.start_polling(
+                allowed_updates=Update.ALL_TYPES,
+                drop_pending_updates=True,
+                error_callback=_polling_error_callback,
+            )

        # Register bot commands so Telegram shows a hint menu when users type /
        # List is derived from the central COMMAND_REGISTRY — adding a new
        # gateway command there automatically adds it to the Telegram menu.
        try:
            from telegram import BotCommand
-            from hermes_cli.commands import telegram_bot_commands
+            from hermes_cli.commands import telegram_menu_commands
+            # Telegram allows up to 100 commands but has an undocumented
+            # payload size limit. Skill descriptions are truncated to 40
+            # chars in telegram_menu_commands() to fit 100 commands safely.
+            menu_commands, hidden_count = telegram_menu_commands(max_commands=100)
            await self._bot.set_my_commands([
-                BotCommand(name, desc) for name, desc in telegram_bot_commands()
+                BotCommand(name, desc) for name, desc in menu_commands
            ])
+            if hidden_count:
+                logger.info(
+                    "[%s] Telegram menu: %d commands registered, %d hidden (over 100 limit). Use /commands for full list.",
+                    self.name, len(menu_commands), hidden_count,
+                )
        except Exception as e:
            logger.warning(
                "[%s] Could not register Telegram command menu: %s",
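The menu registration above caps the command list at 100 entries and truncates skill descriptions to 40 characters. A hypothetical re-creation of that cap-and-clip behavior (the diff only describes `telegram_menu_commands()` in comments, so this helper is an assumption, not the actual implementation):

```python
def build_menu_commands(
    commands: list[tuple[str, str]],
    max_commands: int = 100,
    desc_limit: int = 40,
) -> tuple[list[tuple[str, str]], int]:
    """Cap the Telegram command menu and truncate long descriptions.

    Returns the visible (name, description) pairs and how many commands
    were hidden because they exceeded the menu limit.
    """
    visible = commands[:max_commands]
    hidden = max(0, len(commands) - max_commands)
    clipped = [(name, desc[:desc_limit]) for name, desc in visible]
    return clipped, hidden


cmds = [(f"skill{i}", "d" * 60) for i in range(120)]
menu, hidden_count = build_menu_commands(cmds)
```

Reporting the hidden count instead of silently dropping entries is what lets the adapter log a pointer to `/commands` for the rest.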
@@ -590,7 +644,8 @@ class TelegramAdapter(BasePlatformAdapter):
            )

        self._mark_connected()
-        logger.info("[%s] Connected and polling for Telegram updates", self.name)
+        mode = "webhook" if self._webhook_mode else "polling"
+        logger.info("[%s] Connected to Telegram (%s mode)", self.name, mode)

        # Set up DM topics (Bot API 9.4 — Private Chat Topics)
        # Runs after connection is established so the bot can call createForumTopic.

@@ -618,7 +673,7 @@ class TelegramAdapter(BasePlatformAdapter):
        return False

    async def disconnect(self) -> None:
-        """Stop polling, cancel pending album flushes, and disconnect."""
+        """Stop polling/webhook, cancel pending album flushes, and disconnect."""
        pending_media_group_tasks = list(self._media_group_tasks.values())
        for task in pending_media_group_tasks:
            task.cancel()
@@ -1325,6 +1380,148 @@ class TelegramAdapter(BasePlatformAdapter):

        return text

+    # ── Group mention gating ──────────────────────────────────────────────
+
+    def _telegram_require_mention(self) -> bool:
+        """Return whether group chats should require an explicit bot trigger."""
+        configured = self.config.extra.get("require_mention")
+        if configured is not None:
+            if isinstance(configured, str):
+                return configured.lower() in ("true", "1", "yes", "on")
+            return bool(configured)
+        return os.getenv("TELEGRAM_REQUIRE_MENTION", "false").lower() in ("true", "1", "yes", "on")
+
+    def _telegram_free_response_chats(self) -> set[str]:
+        raw = self.config.extra.get("free_response_chats")
+        if raw is None:
+            raw = os.getenv("TELEGRAM_FREE_RESPONSE_CHATS", "")
+        if isinstance(raw, list):
+            return {str(part).strip() for part in raw if str(part).strip()}
+        return {part.strip() for part in str(raw).split(",") if part.strip()}
+
+    def _compile_mention_patterns(self) -> List[re.Pattern]:
+        """Compile optional regex wake-word patterns for group triggers."""
+        patterns = self.config.extra.get("mention_patterns")
+        if patterns is None:
+            raw = os.getenv("TELEGRAM_MENTION_PATTERNS", "").strip()
+            if raw:
+                try:
+                    loaded = json.loads(raw)
+                except Exception:
+                    loaded = [part.strip() for part in raw.splitlines() if part.strip()]
+                    if not loaded:
+                        loaded = [part.strip() for part in raw.split(",") if part.strip()]
+                patterns = loaded
+
+        if patterns is None:
+            return []
+        if isinstance(patterns, str):
+            patterns = [patterns]
+        if not isinstance(patterns, list):
+            logger.warning(
+                "[%s] telegram mention_patterns must be a list or string; got %s",
+                self.name,
+                type(patterns).__name__,
+            )
+            return []
+
+        compiled: List[re.Pattern] = []
+        for pattern in patterns:
+            if not isinstance(pattern, str) or not pattern.strip():
+                continue
+            try:
+                compiled.append(re.compile(pattern, re.IGNORECASE))
+            except re.error as exc:
+                logger.warning("[%s] Invalid Telegram mention pattern %r: %s", self.name, pattern, exc)
+        if compiled:
+            logger.info("[%s] Loaded %d Telegram mention pattern(s)", self.name, len(compiled))
+        return compiled
+
+    def _is_group_chat(self, message: Message) -> bool:
+        chat = getattr(message, "chat", None)
+        if not chat:
+            return False
+        chat_type = str(getattr(chat, "type", "")).split(".")[-1].lower()
+        return chat_type in ("group", "supergroup")
+
+    def _is_reply_to_bot(self, message: Message) -> bool:
+        if not self._bot or not getattr(message, "reply_to_message", None):
+            return False
+        reply_user = getattr(message.reply_to_message, "from_user", None)
+        return bool(reply_user and getattr(reply_user, "id", None) == getattr(self._bot, "id", None))
+
+    def _message_mentions_bot(self, message: Message) -> bool:
+        if not self._bot:
+            return False
+
+        bot_username = (getattr(self._bot, "username", None) or "").lstrip("@").lower()
+        bot_id = getattr(self._bot, "id", None)
+
+        def _iter_sources():
+            yield getattr(message, "text", None) or "", getattr(message, "entities", None) or []
+            yield getattr(message, "caption", None) or "", getattr(message, "caption_entities", None) or []
+
+        for source_text, entities in _iter_sources():
+            if bot_username and f"@{bot_username}" in source_text.lower():
+                return True
+            for entity in entities:
+                entity_type = str(getattr(entity, "type", "")).split(".")[-1].lower()
+                if entity_type == "mention" and bot_username:
+                    offset = int(getattr(entity, "offset", -1))
+                    length = int(getattr(entity, "length", 0))
+                    if offset < 0 or length <= 0:
+                        continue
+                    if source_text[offset:offset + length].strip().lower() == f"@{bot_username}":
+                        return True
+                elif entity_type == "text_mention":
+                    user = getattr(entity, "user", None)
+                    if user and getattr(user, "id", None) == bot_id:
+                        return True
+        return False
+
+    def _message_matches_mention_patterns(self, message: Message) -> bool:
+        if not self._mention_patterns:
+            return False
+        for candidate in (getattr(message, "text", None), getattr(message, "caption", None)):
+            if not candidate:
+                continue
+            for pattern in self._mention_patterns:
+                if pattern.search(candidate):
+                    return True
+        return False
+
+    def _clean_bot_trigger_text(self, text: Optional[str]) -> Optional[str]:
+        if not text or not self._bot or not getattr(self._bot, "username", None):
+            return text
+        username = re.escape(self._bot.username)
+        cleaned = re.sub(rf"(?i)@{username}\b[,:\-]*\s*", "", text).strip()
+        return cleaned or text
+
+    def _should_process_message(self, message: Message, *, is_command: bool = False) -> bool:
+        """Apply Telegram group trigger rules.
+
+        DMs remain unrestricted. Group/supergroup messages are accepted when:
+        - the chat is explicitly allowlisted in ``free_response_chats``
+        - ``require_mention`` is disabled
+        - the message is a command
+        - the message replies to the bot
+        - the bot is @mentioned
+        - the text/caption matches a configured regex wake-word pattern
+        """
+        if not self._is_group_chat(message):
+            return True
+        if str(getattr(getattr(message, "chat", None), "id", "")) in self._telegram_free_response_chats():
+            return True
+        if not self._telegram_require_mention():
+            return True
+        if is_command:
+            return True
+        if self._is_reply_to_bot(message):
+            return True
+        if self._message_mentions_bot(message):
+            return True
+        return self._message_matches_mention_patterns(message)
+
    async def _handle_text_message(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
        """Handle incoming text messages.
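The group trigger rules form an ordered accept chain. A free-standing sketch of that precedence, with plain booleans in place of the adapter's `Message` object (function and argument names are invented for illustration; the ordering matches the docstring in the diff):

```python
def should_process(
    is_group: bool,
    chat_id: str,
    free_chats: set[str],
    require_mention: bool,
    is_command: bool,
    replies_to_bot: bool,
    mentions_bot: bool,
    matches_pattern: bool,
) -> bool:
    # Mirrors the ordered checks: DMs always pass, then the allowlist,
    # then the require_mention gate, then the explicit triggers.
    if not is_group:
        return True
    if chat_id in free_chats:
        return True
    if not require_mention:
        return True
    return is_command or replies_to_bot or mentions_bot or matches_pattern
```

Putting the allowlist check before the `require_mention` gate means an allowlisted chat responds freely even when mention gating is on globally.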
@@ -1334,14 +1531,19 @@ class TelegramAdapter(BasePlatformAdapter):
        """
        if not update.message or not update.message.text:
            return
+        if not self._should_process_message(update.message):
+            return

        event = self._build_message_event(update.message, MessageType.TEXT)
+        event.text = self._clean_bot_trigger_text(event.text)
        self._enqueue_text_event(event)

    async def _handle_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
        """Handle incoming command messages."""
        if not update.message or not update.message.text:
            return
+        if not self._should_process_message(update.message, is_command=True):
+            return

        event = self._build_message_event(update.message, MessageType.COMMAND)
        await self.handle_message(event)

@@ -1350,6 +1552,8 @@ class TelegramAdapter(BasePlatformAdapter):
        """Handle incoming location/venue pin messages."""
        if not update.message:
            return
+        if not self._should_process_message(update.message):
+            return

        msg = update.message
        venue = getattr(msg, "venue", None)

@@ -1493,6 +1697,8 @@ class TelegramAdapter(BasePlatformAdapter):
        """Handle incoming media messages, downloading images to local cache."""
        if not update.message:
            return
+        if not self._should_process_message(update.message):
+            return

        msg = update.message

@@ -1516,7 +1722,7 @@ class TelegramAdapter(BasePlatformAdapter):

        # Add caption as text
        if msg.caption:
-            event.text = msg.caption
+            event.text = self._clean_bot_trigger_text(msg.caption)

        # Handle stickers: describe via vision tool with caching
        if msg.sticker:
188 gateway/run.py
@@ -301,6 +301,50 @@ def _resolve_runtime_agent_kwargs() -> dict:
    }


+def _check_unavailable_skill(command_name: str) -> str | None:
+    """Check if a command matches a known-but-inactive skill.
+
+    Returns a helpful message if the skill exists but is disabled or only
+    available as an optional install. Returns None if no match found.
+    """
+    # Normalize: command uses hyphens, skill names may use hyphens or underscores
+    normalized = command_name.lower().replace("_", "-")
+    try:
+        from tools.skills_tool import SKILLS_DIR, _get_disabled_skill_names
+        disabled = _get_disabled_skill_names()
+
+        # Check disabled built-in skills
+        for skill_md in SKILLS_DIR.rglob("SKILL.md"):
+            if any(part in ('.git', '.github', '.hub') for part in skill_md.parts):
+                continue
+            name = skill_md.parent.name.lower().replace("_", "-")
+            if name == normalized and name in disabled:
+                return (
+                    f"The **{command_name}** skill is installed but disabled.\n"
+                    f"Enable it with: `hermes skills config`"
+                )
+
+        # Check optional skills (shipped with repo but not installed)
+        from hermes_constants import get_hermes_home
+        repo_root = Path(__file__).resolve().parent.parent
+        optional_dir = repo_root / "optional-skills"
+        if optional_dir.exists():
+            for skill_md in optional_dir.rglob("SKILL.md"):
+                name = skill_md.parent.name.lower().replace("_", "-")
+                if name == normalized:
+                    # Build install path: official/<category>/<name>
+                    rel = skill_md.parent.relative_to(optional_dir)
+                    parts = list(rel.parts)
+                    install_path = f"official/{'/'.join(parts)}"
+                    return (
+                        f"The **{command_name}** skill is available but not installed.\n"
+                        f"Install it with: `hermes skills install {install_path}`"
+                    )
+    except Exception:
+        pass
+    return None
+
+
def _platform_config_key(platform: "Platform") -> str:
    """Map a Platform enum to its config.yaml key (LOCAL→"cli", rest→enum value)."""
    return "cli" if platform == Platform.LOCAL else platform.value
@@ -432,6 +476,13 @@ class GatewayRunner:
        self._honcho_managers: Dict[str, Any] = {}
        self._honcho_configs: Dict[str, Any] = {}

+        # Rate-limit compression warning messages sent to users.
+        # Keyed by chat_id — value is the timestamp of the last warning sent.
+        # Prevents the warning from firing on every message when a session
+        # remains above the threshold after compression.
+        self._compression_warn_sent: Dict[str, float] = {}
+        self._compression_warn_cooldown: int = 3600  # seconds (1 hour)
+
        # Ensure tirith security scanner is available (downloads if needed)
        try:
            from tools.tirith_security import ensure_installed
@@ -1651,6 +1702,11 @@ class GatewayRunner:
        # In DMs: offer pairing code. In groups: silently ignore.
        if source.chat_type == "dm" and self._get_unauthorized_dm_behavior(source.platform) == "pair":
            platform_name = source.platform.value if source.platform else "unknown"
+            # Rate-limit ALL pairing responses (code or rejection) to
+            # prevent spamming the user with repeated messages when
+            # multiple DMs arrive in quick succession.
+            if self.pairing_store._is_rate_limited(platform_name, source.user_id):
+                return None
            code = self.pairing_store.generate_code(
                platform_name, source.user_id, source.user_name or ""
            )

@@ -1672,6 +1728,8 @@ class GatewayRunner:
                    "Too many pairing requests right now~ "
                    "Please try again later!"
                )
+                # Record rate limit so subsequent messages are silently ignored
+                self.pairing_store._record_rate_limit(platform_name, source.user_id)
                return None

        # PRIORITY handling when an agent is already running for this session.
@@ -1817,7 +1875,13 @@ class GatewayRunner:

        if canonical == "help":
            return await self._handle_help_command(event)

+        if canonical == "commands":
+            return await self._handle_commands_command(event)
+
+        if canonical == "profile":
+            return await self._handle_profile_command(event)
+
        if canonical == "status":
            return await self._handle_status_command(event)

@@ -1830,6 +1894,9 @@ class GatewayRunner:
        if canonical == "verbose":
            return await self._handle_verbose_command(event)

+        if canonical == "yolo":
+            return await self._handle_yolo_command(event)
+
        if canonical == "provider":
            return await self._handle_provider_command(event)

@@ -1974,6 +2041,12 @@ class GatewayRunner:
                if msg:
                    event.text = msg
                    # Fall through to normal message processing with skill content
+                else:
+                    # Not an active skill — check if it's a known-but-disabled or
+                    # uninstalled skill and give actionable guidance.
+                    _unavail_msg = _check_unavailable_skill(command)
+                    if _unavail_msg:
+                        return _unavail_msg
            except Exception as e:
                logger.debug("Skill command check failed (non-fatal): %s", e)
@@ -2344,13 +2417,18 @@ class GatewayRunner:
                            pass

                        # Still too large after compression — warn user
+                        # Rate-limited to once per cooldown period per
+                        # chat to avoid spamming on every message.
                        if _new_tokens >= _warn_token_threshold:
                            logger.warning(
                                "Session hygiene: still ~%s tokens after "
                                "compression — suggesting /reset",
                                f"{_new_tokens:,}",
                            )
-                            if _hyg_adapter:
+                            _now = time.time()
+                            _last_warn = self._compression_warn_sent.get(source.chat_id, 0)
+                            if _hyg_adapter and _now - _last_warn >= self._compression_warn_cooldown:
+                                self._compression_warn_sent[source.chat_id] = _now
                                try:
                                    await _hyg_adapter.send(
                                        source.chat_id,

@@ -2372,7 +2450,10 @@ class GatewayRunner:
                    if _approx_tokens >= _warn_token_threshold:
                        _hyg_adapter = self.adapters.get(source.platform)
                        _hyg_meta = {"thread_id": source.thread_id} if source.thread_id else None
-                        if _hyg_adapter:
+                        _now = time.time()
+                        _last_warn = self._compression_warn_sent.get(source.chat_id, 0)
+                        if _hyg_adapter and _now - _last_warn >= self._compression_warn_cooldown:
+                            self._compression_warn_sent[source.chat_id] = _now
                            try:
                                await _hyg_adapter.send(
                                    source.chat_id,
@@ -2999,6 +3080,36 @@ class GatewayRunner:
            return f"{header}\n\n{session_info}"
        return header

+    async def _handle_profile_command(self, event: MessageEvent) -> str:
+        """Handle /profile — show active profile name and home directory."""
+        from hermes_constants import get_hermes_home, display_hermes_home
+        from pathlib import Path
+
+        home = get_hermes_home()
+        display = display_hermes_home()
+
+        # Detect profile name from HERMES_HOME path
+        # Profile paths look like: ~/.hermes/profiles/<name>
+        profiles_parent = Path.home() / ".hermes" / "profiles"
+        try:
+            rel = home.relative_to(profiles_parent)
+            profile_name = str(rel).split("/")[0]
+        except ValueError:
+            profile_name = None
+
+        if profile_name:
+            lines = [
+                f"👤 **Profile:** `{profile_name}`",
+                f"📂 **Home:** `{display}`",
+            ]
+        else:
+            lines = [
+                "👤 **Profile:** default",
+                f"📂 **Home:** `{display}`",
+            ]
+
+        return "\n".join(lines)
+
    async def _handle_status_command(self, event: MessageEvent) -> str:
        """Handle /status command."""
        source = event.source
@ -3065,12 +3176,69 @@ class GatewayRunner:
|
|||
from agent.skill_commands import get_skill_commands
|
||||
skill_cmds = get_skill_commands()
|
||||
if skill_cmds:
|
||||
lines.append(f"\n⚡ **Skill Commands** ({len(skill_cmds)} installed):")
|
||||
for cmd in sorted(skill_cmds):
|
||||
lines.append(f"\n⚡ **Skill Commands** ({len(skill_cmds)} active):")
|
||||
# Show first 10, then point to /commands for the rest
|
||||
sorted_cmds = sorted(skill_cmds)
|
||||
for cmd in sorted_cmds[:10]:
|
||||
lines.append(f"`{cmd}` — {skill_cmds[cmd]['description']}")
|
||||
if len(sorted_cmds) > 10:
|
||||
lines.append(f"\n... and {len(sorted_cmds) - 10} more. Use `/commands` for the full paginated list.")
|
||||
except Exception:
|
||||
pass
|
||||
return "\n".join(lines)
|
||||
|
||||
    async def _handle_commands_command(self, event: MessageEvent) -> str:
        """Handle /commands [page] - paginated list of all commands and skills."""
        from hermes_cli.commands import gateway_help_lines

        raw_args = event.get_command_args().strip()
        if raw_args:
            try:
                requested_page = int(raw_args)
            except ValueError:
                return "Usage: `/commands [page]`"
        else:
            requested_page = 1

        # Build combined entry list: built-in commands + skill commands
        entries = list(gateway_help_lines())
        try:
            from agent.skill_commands import get_skill_commands
            skill_cmds = get_skill_commands()
            if skill_cmds:
                entries.append("")
                entries.append("⚡ **Skill Commands**:")
                for cmd in sorted(skill_cmds):
                    desc = skill_cmds[cmd].get("description", "").strip() or "Skill command"
                    entries.append(f"`{cmd}` — {desc}")
        except Exception:
            pass

        if not entries:
            return "No commands available."

        from gateway.config import Platform
        page_size = 15 if event.source.platform == Platform.TELEGRAM else 20
        total_pages = max(1, (len(entries) + page_size - 1) // page_size)
        page = max(1, min(requested_page, total_pages))
        start = (page - 1) * page_size
        page_entries = entries[start:start + page_size]

        lines = [
            f"📚 **Commands** ({len(entries)} total, page {page}/{total_pages})",
            "",
            *page_entries,
        ]
        if total_pages > 1:
            nav_parts = []
            if page > 1:
                nav_parts.append(f"`/commands {page - 1}` ← prev")
            if page < total_pages:
                nav_parts.append(f"next → `/commands {page + 1}`")
            lines.extend(["", " | ".join(nav_parts)])
        if page != requested_page:
            lines.append(f"_(Requested page {requested_page} was out of range, showing page {page}.)_")
        return "\n".join(lines)

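The page-clamping arithmetic in `_handle_commands_command` is easy to get wrong at the boundaries (empty lists, out-of-range requests). A standalone sketch of the same logic — the function name and signature here are illustrative, not the project's actual helper:

```python
def paginate(entries, requested_page, page_size):
    """Clamp a requested page into range and slice out that page's entries."""
    # Ceiling division; an empty list still yields one (empty) page.
    total_pages = max(1, (len(entries) + page_size - 1) // page_size)
    # Clamp into [1, total_pages] so page 0 or page 999 never crashes.
    page = max(1, min(requested_page, total_pages))
    start = (page - 1) * page_size
    return page, total_pages, entries[start:start + page_size]

# 35 entries at 15 per page -> 3 pages; page 99 clamps to the last page
page, total, chunk = paginate(list(range(35)), 99, 15)
```

The same clamp is what lets the handler report "requested page was out of range" instead of returning an empty message.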
    async def _handle_provider_command(self, event: MessageEvent) -> str:
        """Handle /provider command - show available providers."""

@@ -3891,7 +4059,7 @@ class GatewayRunner:
     # Send media files
     for media_path in (media_files or []):
         try:
-            await adapter.send_file(
+            await adapter.send_document(
                 chat_id=source.chat_id,
                 file_path=media_path,
             )
@@ -3999,6 +4167,16 @@ class GatewayRunner:
         else:
             return f"🧠 ✓ Reasoning effort set to `{effort}` (this session only)"

+    async def _handle_yolo_command(self, event: MessageEvent) -> str:
+        """Handle /yolo — toggle dangerous command approval bypass."""
+        current = bool(os.environ.get("HERMES_YOLO_MODE"))
+        if current:
+            os.environ.pop("HERMES_YOLO_MODE", None)
+            return "⚠️ YOLO mode **OFF** — dangerous commands will require approval."
+        else:
+            os.environ["HERMES_YOLO_MODE"] = "1"
+            return "⚡ YOLO mode **ON** — all commands auto-approved. Use with caution."
+
     async def _handle_verbose_command(self, event: MessageEvent) -> str:
         """Handle /verbose command — cycle tool progress display mode.

hermes (11 changed lines)
@@ -1,12 +1,11 @@
 #!/usr/bin/env python3
 """
-Hermes Agent CLI Launcher
+Hermes Agent CLI launcher.

-This is a convenience wrapper to launch the Hermes CLI.
-Usage: ./hermes [options]
+This wrapper should behave like the installed `hermes` command, including
+subcommands such as `gateway`, `cron`, and `doctor`.
 """

 if __name__ == "__main__":
-    from cli import main
-    import fire
-    fire.Fire(main)
+    from hermes_cli.main import main
+    main()

@@ -11,5 +11,5 @@ Provides subcommands for:
 - hermes cron - Manage cron jobs
 """

-__version__ = "0.5.0"
-__release_date__ = "2026.3.28"
+__version__ = "0.6.0"
+__release_date__ = "2026.3.30"

@@ -2393,21 +2393,20 @@ def _login_nous(args, pconfig: ProviderConfig) -> None:
             raise AuthError("No runtime API key available to fetch models",
                             provider="nous", code="invalid_token")

-        model_ids = fetch_nous_models(
-            inference_base_url=runtime_base_url,
-            api_key=runtime_key,
-            timeout_seconds=timeout_seconds,
-            verify=verify,
-        )
+        # Use curated model list (same as OpenRouter defaults) instead
+        # of the full /models dump which returns hundreds of models.
+        from hermes_cli.models import _PROVIDER_MODELS
+        model_ids = _PROVIDER_MODELS.get("nous", [])

         print()
         if model_ids:
+            print(f"Showing {len(model_ids)} curated models — use \"Enter custom model name\" for others.")
             selected_model = _prompt_model_selection(model_ids)
             if selected_model:
                 _save_model_choice(selected_model)
                 print(f"Default model set to: {selected_model}")
         else:
-            print("No models were returned by the inference API.")
+            print("No curated models available for Nous Portal.")
     except Exception as exc:
         message = format_auth_error(exc) if isinstance(exc, AuthError) else str(exc)
         print()

@@ -241,7 +241,8 @@ def approval_callback(cli, command: str, description: str) -> str:
     lock = cli._approval_lock

     with lock:
-        timeout = 60
+        from cli import CLI_CONFIG
+        timeout = CLI_CONFIG.get("approvals", {}).get("timeout", 60)
         response_queue = queue.Queue()
         choices = ["once", "session", "always", "deny"]
         if len(command) > 70:

@@ -5,6 +5,7 @@ toggleable list of items. Falls back to a numbered text UI when
 curses is unavailable (Windows without curses, piped stdin, etc.).
 """

+import sys
 from typing import List, Set

 from hermes_cli.colors import Colors, color

@@ -26,6 +27,10 @@ def curses_checklist(
         The indices the user confirmed as checked. On cancel (ESC/q),
         returns ``pre_selected`` unchanged.
     """
+    # Safety: return defaults when stdin is not a terminal.
+    if not sys.stdin.isatty():
+        return set(pre_selected)
+
     try:
         import curses
         selected = set(pre_selected)

@@ -88,7 +88,19 @@ def claw_command(args):

 def _cmd_migrate(args):
     """Run the OpenClaw → Hermes migration."""
-    source_dir = Path(getattr(args, "source", None) or Path.home() / ".openclaw")
+    # Check current and legacy OpenClaw directories
+    explicit_source = getattr(args, "source", None)
+    if explicit_source:
+        source_dir = Path(explicit_source)
+    else:
+        source_dir = Path.home() / ".openclaw"
+        if not source_dir.is_dir():
+            # Try legacy directory names
+            for legacy in (".clawdbot", ".moldbot"):
+                candidate = Path.home() / legacy
+                if candidate.is_dir():
+                    source_dir = candidate
+                    break
     dry_run = getattr(args, "dry_run", False)
     preset = getattr(args, "preset", "full")
     overwrite = getattr(args, "overwrite", False)

@@ -71,6 +71,7 @@ COMMAND_REGISTRY: list[CommandDef] = [
                aliases=("q",), args_hint="<prompt>"),
     CommandDef("status", "Show session info", "Session",
                gateway_only=True),
+    CommandDef("profile", "Show active profile name and home directory", "Info"),
     CommandDef("sethome", "Set this chat as the home channel", "Session",
                gateway_only=True, aliases=("set-home",)),
     CommandDef("resume", "Resume a previously-named session", "Session",

@@ -90,6 +91,8 @@ COMMAND_REGISTRY: list[CommandDef] = [
     CommandDef("verbose", "Cycle tool progress display: off -> new -> all -> verbose",
                "Configuration", cli_only=True,
                gateway_config_gate="display.tool_progress_command"),
+    CommandDef("yolo", "Toggle YOLO mode (skip all dangerous command approvals)",
+               "Configuration"),
     CommandDef("reasoning", "Manage reasoning effort and display", "Configuration",
                args_hint="[level|show|hide]",
                subcommands=("none", "low", "minimal", "medium", "high", "xhigh", "show", "hide", "on", "off")),

@@ -118,6 +121,8 @@ COMMAND_REGISTRY: list[CommandDef] = [
                "Tools & Skills", cli_only=True),

     # Info
+    CommandDef("commands", "Browse all commands and skills (paginated)", "Info",
+               gateway_only=True, args_hint="[page]"),
     CommandDef("help", "Show available commands", "Info"),
     CommandDef("usage", "Show token usage for the current session", "Info"),
     CommandDef("insights", "Show usage insights and analytics", "Info",

@@ -361,6 +366,69 @@ def telegram_bot_commands() -> list[tuple[str, str]]:
     return result


+def telegram_menu_commands(max_commands: int = 100) -> tuple[list[tuple[str, str]], int]:
+    """Return Telegram menu commands capped to the Bot API limit.
+
+    Priority order (higher priority = never bumped by overflow):
+      1. Core CommandDef commands (always included)
+      2. Plugin slash commands (take precedence over skills)
+      3. Built-in skill commands (fill remaining slots, alphabetical)
+
+    Skills are the only tier that gets trimmed when the cap is hit.
+    User-installed hub skills are excluded — accessible via /skills.
+
+    Returns:
+        (menu_commands, hidden_count) where hidden_count is the number of
+        skill commands omitted due to the cap.
+    """
+    all_commands = list(telegram_bot_commands())
+
+    # Plugin slash commands get priority over skills
+    try:
+        from hermes_cli.plugins import get_plugin_manager
+        pm = get_plugin_manager()
+        plugin_cmds = getattr(pm, "_plugin_commands", {})
+        for cmd_name in sorted(plugin_cmds):
+            tg_name = cmd_name.replace("-", "_")
+            desc = "Plugin command"
+            if len(desc) > 40:
+                desc = desc[:37] + "..."
+            all_commands.append((tg_name, desc))
+    except Exception:
+        pass
+
+    # Remaining slots go to built-in skill commands (not hub-installed).
+    skill_entries: list[tuple[str, str]] = []
+    try:
+        from agent.skill_commands import get_skill_commands
+        from tools.skills_tool import SKILLS_DIR
+        _skills_dir = str(SKILLS_DIR.resolve())
+        _hub_dir = str((SKILLS_DIR / ".hub").resolve())
+        skill_cmds = get_skill_commands()
+        for cmd_key in sorted(skill_cmds):
+            info = skill_cmds[cmd_key]
+            skill_path = info.get("skill_md_path", "")
+            if not skill_path.startswith(_skills_dir):
+                continue
+            if skill_path.startswith(_hub_dir):
+                continue
+            name = cmd_key.lstrip("/").replace("-", "_")
+            desc = info.get("description", "")
+            # Keep descriptions short — setMyCommands has an undocumented
+            # total payload limit. 40 chars fits 100 commands safely.
+            if len(desc) > 40:
+                desc = desc[:37] + "..."
+            skill_entries.append((name, desc))
+    except Exception:
+        pass
+
+    # Skills fill remaining slots — they're the only tier that gets trimmed
+    remaining_slots = max(0, max_commands - len(all_commands))
+    hidden_count = max(0, len(skill_entries) - remaining_slots)
+    all_commands.extend(skill_entries[:remaining_slots])
+    return all_commands[:max_commands], hidden_count
+
+
 def slack_subcommand_map() -> dict[str, str]:
     """Return subcommand -> /command mapping for Slack /hermes handler.

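Stripped of the plugin and skill discovery, the tiered capping in `telegram_menu_commands` reduces to a little list arithmetic. A minimal sketch with hypothetical data (the function and sample entries are illustrative, not the actual implementation):

```python
def cap_commands(core, plugins, skills, max_commands=100):
    """Core and plugin entries are always kept; skills fill what's left.

    Returns (menu, hidden) where hidden counts the skill entries trimmed
    to stay under max_commands.
    """
    kept = list(core) + list(plugins)
    remaining = max(0, max_commands - len(kept))  # slots left for skills
    hidden = max(0, len(skills) - remaining)      # skills that don't fit
    kept.extend(skills[:remaining])
    return kept[:max_commands], hidden

# 90 core + 5 plugin commands leave 5 slots for 10 skills -> 5 hidden
menu, hidden = cap_commands(
    core=[("help", "Show commands")] * 90,
    plugins=[("deploy", "Plugin command")] * 5,
    skills=[(f"skill_{i}", "Skill") for i in range(10)],
)
```

Keeping the trim confined to the lowest tier means a plugin can never knock a core command out of Telegram's 100-entry menu.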
@@ -225,7 +225,8 @@ DEFAULT_CONFIG = {
         "model": "",  # e.g. "google/gemini-2.5-flash", "gpt-4o"
         "base_url": "",  # direct OpenAI-compatible endpoint (takes precedence over provider)
         "api_key": "",  # API key for base_url (falls back to OPENAI_API_KEY)
-        "timeout": 30,  # seconds — increase for slow local vision models
+        "timeout": 30,  # seconds — LLM API call timeout; increase for slow local vision models
+        "download_timeout": 30,  # seconds — image HTTP download timeout; increase for slow connections
     },
     "web_extract": {
         "provider": "auto",

@@ -409,6 +410,7 @@ DEFAULT_CONFIG = {
     # off — skip all approval prompts (equivalent to --yolo)
     "approvals": {
         "mode": "manual",
+        "timeout": 60,
     },

     # Permanently allowed dangerous command patterns (added via "always" approval)

@@ -739,6 +741,14 @@ OPTIONAL_ENV_VARS = {
         "password": True,
         "category": "tool",
     },
+    "CAMOFOX_URL": {
+        "description": "Camofox browser server URL for local anti-detection browsing (e.g. http://localhost:9377)",
+        "prompt": "Camofox server URL",
+        "url": "https://github.com/jo-inc/camofox-browser",
+        "tools": ["browser_navigate", "browser_click"],
+        "password": False,
+        "category": "tool",
+    },
     "FAL_KEY": {
         "description": "FAL API key for image generation",
         "prompt": "FAL API key",

@@ -4,6 +4,7 @@ Used by `hermes tools` and `hermes skills` for interactive checklists.
 Provides a curses multi-select with keyboard navigation, plus a
 text-based numbered fallback for terminals without curses support.
 """
+import sys
 from typing import Callable, List, Optional, Set

 from hermes_cli.colors import Colors, color

@@ -31,6 +32,11 @@ def curses_checklist(
     if cancel_returns is None:
         cancel_returns = set(selected)

+    # Safety: curses and input() both hang or spin when stdin is not a
+    # terminal (e.g. subprocess pipe). Return defaults immediately.
+    if not sys.stdin.isatty():
+        return cancel_returns
+
     try:
         import curses
         chosen = set(selected)

@@ -406,8 +406,11 @@ def run_doctor(args):
     if terminal_env == "docker":
         if shutil.which("docker"):
             # Check if docker daemon is running
-            result = subprocess.run(["docker", "info"], capture_output=True)
-            if result.returncode == 0:
+            try:
+                result = subprocess.run(["docker", "info"], capture_output=True, timeout=10)
+            except subprocess.TimeoutExpired:
+                result = None
+            if result is not None and result.returncode == 0:
                 check_ok("docker", "(daemon running)")
             else:
                 check_fail("docker daemon not running")

@@ -426,12 +429,16 @@ def run_doctor(args):
     ssh_host = os.getenv("TERMINAL_SSH_HOST")
     if ssh_host:
         # Try to connect
-        result = subprocess.run(
-            ["ssh", "-o", "ConnectTimeout=5", "-o", "BatchMode=yes", ssh_host, "echo ok"],
-            capture_output=True,
-            text=True
-        )
-        if result.returncode == 0:
+        try:
+            result = subprocess.run(
+                ["ssh", "-o", "ConnectTimeout=5", "-o", "BatchMode=yes", ssh_host, "echo ok"],
+                capture_output=True,
+                text=True,
+                timeout=15
+            )
+        except subprocess.TimeoutExpired:
+            result = None
+        if result is not None and result.returncode == 0:
             check_ok(f"SSH connection to {ssh_host}")
         else:
             check_fail(f"SSH connection to {ssh_host}")

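Both doctor checks above apply the same pattern: wrap `subprocess.run` with a `timeout` and treat `TimeoutExpired` the same as a non-zero exit, so a wedged daemon can't hang the whole diagnostic. A generalized sketch (the helper name and the command used for demonstration are stand-ins):

```python
import subprocess
import sys

def run_check(cmd, timeout):
    """Run a health-check command; a hang counts as a failure."""
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        # The child is killed by subprocess.run on timeout; report failure.
        return False
    return result.returncode == 0

# A trivially fast command succeeds well within the timeout.
ok = run_check([sys.executable, "-c", "print('ok')"], timeout=30)
```

On timeout, `subprocess.run` kills the child before raising, so the `None`-sentinel style used in `run_doctor` and this boolean style are equivalent; the important part is that the exception path reaches `check_fail` instead of propagating.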
@@ -50,6 +50,23 @@ import sys
 from pathlib import Path
 from typing import Optional

+def _require_tty(command_name: str) -> None:
+    """Exit with a clear error if stdin is not a terminal.
+
+    Interactive TUI commands (hermes tools, hermes setup, hermes model) use
+    curses or input() prompts that spin at 100% CPU when stdin is a pipe.
+    This guard prevents accidental non-interactive invocation.
+    """
+    if not sys.stdin.isatty():
+        print(
+            f"Error: 'hermes {command_name}' requires an interactive terminal.\n"
+            f"It cannot be run through a pipe or non-interactive subprocess.\n"
+            f"Run it directly in your terminal instead.",
+            file=sys.stderr,
+        )
+        sys.exit(1)
+
+
 # Add project root to path
 PROJECT_ROOT = Path(__file__).parent.parent.resolve()
 sys.path.insert(0, str(PROJECT_ROOT))

@@ -617,6 +634,7 @@ def cmd_gateway(args):

 def cmd_whatsapp(args):
     """Set up WhatsApp: choose mode, configure, install bridge, pair via QR."""
+    _require_tty("whatsapp")
     import subprocess
     from pathlib import Path
     from hermes_cli.config import get_env_value, save_env_value

@@ -803,12 +821,14 @@ def cmd_whatsapp(args):

 def cmd_setup(args):
     """Interactive setup wizard."""
+    _require_tty("setup")
     from hermes_cli.setup import run_setup_wizard
     run_setup_wizard(args)


 def cmd_model(args):
     """Select default model — starts with provider selection, then model picker."""
+    _require_tty("model")
     from hermes_cli.auth import (
         resolve_provider, AuthError, format_auth_error,
     )

@@ -1096,14 +1116,20 @@ def _model_flow_nous(config, current_model="", args=None):
         # login_nous already handles model selection + config update
         return

-    # Already logged in — fetch models and select
-    print("Fetching models from Nous Portal...")
+    # Already logged in — use curated model list (same as OpenRouter defaults).
+    # The live /models endpoint returns hundreds of models; the curated list
+    # shows only agentic models users recognize from OpenRouter.
+    from hermes_cli.models import _PROVIDER_MODELS
+    model_ids = _PROVIDER_MODELS.get("nous", [])
+    if not model_ids:
+        print("No curated models available for Nous Portal.")
+        return
+
+    print(f"Showing {len(model_ids)} curated models — use \"Enter custom model name\" for others.")
+
+    # Verify credentials are still valid (catches expired sessions early)
     try:
         creds = resolve_nous_runtime_credentials(min_key_ttl_seconds=5 * 60)
-        model_ids = fetch_nous_models(
-            inference_base_url=creds.get("base_url", ""),
-            api_key=creds.get("api_key", ""),
-        )
     except Exception as exc:
         relogin = isinstance(exc, AuthError) and exc.relogin_required
         msg = format_auth_error(exc) if isinstance(exc, AuthError) else str(exc)

@@ -1120,11 +1146,7 @@ def _model_flow_nous(config, current_model="", args=None):
         except Exception as login_exc:
             print(f"Re-login failed: {login_exc}")
             return
-        print(f"Could not fetch models: {msg}")
-        return
-
-    if not model_ids:
-        print("No models returned by the inference API.")
+        print(f"Could not verify credentials: {msg}")
         return

     selected = _prompt_model_selection(model_ids, current_model=current_model)

@@ -2494,6 +2516,7 @@ def cmd_version(args):

 def cmd_uninstall(args):
     """Uninstall Hermes Agent."""
+    _require_tty("uninstall")
     from hermes_cli.uninstall import run_uninstall
     run_uninstall(args)


@@ -4204,6 +4227,7 @@ For more help on a command:
 def cmd_skills(args):
     # Route 'config' action to skills_config module
     if getattr(args, 'skills_action', None) == 'config':
+        _require_tty("skills config")
         from hermes_cli.skills_config import skills_command as skills_config_command
         skills_config_command(args)
     else:

@@ -4414,6 +4438,7 @@ For more help on a command:
         from hermes_cli.tools_config import tools_disable_enable_command
         tools_disable_enable_command(args)
     else:
+        _require_tty("tools")
         from hermes_cli.tools_config import tools_command
         tools_command(args)

@@ -511,6 +511,10 @@ def _interpolate_value(value: str) -> str:

 def cmd_mcp_configure(args):
     """Reconfigure which tools are enabled for an existing MCP server."""
+    import sys as _sys
+    if not _sys.stdin.isatty():
+        print("Error: 'hermes mcp configure' requires an interactive terminal.", file=_sys.stderr)
+        _sys.exit(1)
     name = args.name
     servers = _get_mcp_servers()

@@ -131,6 +131,7 @@ def _browser_label(current_provider: str) -> str:
     mapping = {
         "browserbase": "Browserbase",
         "browser-use": "Browser Use",
+        "camofox": "Camofox",
         "local": "Local browser",
     }
     return mapping.get(current_provider or "local", current_provider or "Local browser")

@@ -144,6 +145,64 @@ def _tts_label(current_provider: str) -> str:
         "neutts": "NeuTTS",
     }
     return mapping.get(current_provider or "edge", current_provider or "Edge TTS")
+
+
+def _resolve_browser_feature_state(
+    *,
+    browser_tool_enabled: bool,
+    browser_provider: str,
+    browser_provider_explicit: bool,
+    browser_local_available: bool,
+    direct_camofox: bool,
+    direct_browserbase: bool,
+    direct_browser_use: bool,
+    managed_browser_available: bool,
+) -> tuple[str, bool, bool, bool]:
+    """Resolve browser availability using the same precedence as runtime."""
+    if direct_camofox:
+        return "camofox", True, bool(browser_tool_enabled), False
+
+    if browser_provider_explicit:
+        current_provider = browser_provider or "local"
+        if current_provider == "browserbase":
+            provider_available = managed_browser_available or direct_browserbase
+            available = bool(browser_local_available and provider_available)
+            managed = bool(
+                browser_tool_enabled
+                and browser_local_available
+                and managed_browser_available
+                and not direct_browserbase
+            )
+            active = bool(browser_tool_enabled and available)
+            return current_provider, available, active, managed
+        if current_provider == "browser-use":
+            available = bool(browser_local_available and direct_browser_use)
+            active = bool(browser_tool_enabled and available)
+            return current_provider, available, active, False
+        if current_provider == "camofox":
+            return current_provider, False, False, False
+
+        current_provider = "local"
+        available = bool(browser_local_available)
+        active = bool(browser_tool_enabled and available)
+        return current_provider, available, active, False
+
+    if managed_browser_available or direct_browserbase:
+        available = bool(browser_local_available)
+        managed = bool(
+            browser_tool_enabled
+            and browser_local_available
+            and managed_browser_available
+            and not direct_browserbase
+        )
+        active = bool(browser_tool_enabled and available)
+        return "browserbase", available, active, managed
+
+    available = bool(browser_local_available)
+    active = bool(browser_tool_enabled and available)
+    return "local", available, active, False
+
+
 def get_nous_subscription_features(
     config: Optional[Dict[str, object]] = None,
 ) -> NousSubscriptionFeatures:

@@ -168,22 +227,22 @@ def get_nous_subscription_features(
     browser_tool_enabled = _toolset_enabled(config, "browser")
     modal_tool_enabled = _toolset_enabled(config, "terminal")

-    web_backend = str(config.get("web", {}).get("backend") or "").strip().lower() if isinstance(config.get("web"), dict) else ""
-    tts_provider = str(config.get("tts", {}).get("provider") or "edge").strip().lower() if isinstance(config.get("tts"), dict) else "edge"
+    web_cfg = config.get("web") if isinstance(config.get("web"), dict) else {}
+    tts_cfg = config.get("tts") if isinstance(config.get("tts"), dict) else {}
+    browser_cfg = config.get("browser") if isinstance(config.get("browser"), dict) else {}
+    terminal_cfg = config.get("terminal") if isinstance(config.get("terminal"), dict) else {}
+
+    web_backend = str(web_cfg.get("backend") or "").strip().lower()
+    tts_provider = str(tts_cfg.get("provider") or "edge").strip().lower()
+    browser_provider_explicit = "cloud_provider" in browser_cfg
     browser_provider = normalize_browser_cloud_provider(
-        config.get("browser", {}).get("cloud_provider")
-        if isinstance(config.get("browser"), dict)
-        else None
+        browser_cfg.get("cloud_provider") if browser_provider_explicit else None
     )
     terminal_backend = (
-        str(config.get("terminal", {}).get("backend") or "local").strip().lower()
-        if isinstance(config.get("terminal"), dict)
-        else "local"
+        str(terminal_cfg.get("backend") or "local").strip().lower()
     )
     modal_mode = normalize_modal_mode(
-        config.get("terminal", {}).get("modal_mode")
-        if isinstance(config.get("terminal"), dict)
-        else None
+        terminal_cfg.get("modal_mode")
     )

     direct_exa = bool(get_env_value("EXA_API_KEY"))

@@ -193,6 +252,7 @@ def get_nous_subscription_features(
     direct_fal = bool(get_env_value("FAL_KEY"))
     direct_openai_tts = bool(resolve_openai_audio_api_key())
     direct_elevenlabs = bool(get_env_value("ELEVENLABS_API_KEY"))
+    direct_camofox = bool(get_env_value("CAMOFOX_URL"))
    direct_browserbase = bool(get_env_value("BROWSERBASE_API_KEY") and get_env_value("BROWSERBASE_PROJECT_ID"))
     direct_browser_use = bool(get_env_value("BROWSER_USE_API_KEY"))
     direct_modal = has_direct_modal_credentials()

@@ -241,26 +301,21 @@ def get_nous_subscription_features(
     )
     tts_active = bool(tts_tool_enabled and tts_available)

-    browser_current_provider = browser_provider or "local"
     browser_local_available = _has_agent_browser()
-    browser_managed = (
-        browser_tool_enabled
-        and browser_current_provider == "browserbase"
-        and managed_browser_available
-        and not direct_browserbase
-    )
-    browser_available = bool(
-        browser_local_available
-        or (browser_current_provider == "browserbase" and (managed_browser_available or direct_browserbase))
-        or (browser_current_provider == "browser-use" and direct_browser_use)
-    )
-    browser_active = bool(
-        browser_tool_enabled
-        and (
-            (browser_current_provider == "local" and browser_local_available)
-            or (browser_current_provider == "browserbase" and (managed_browser_available or direct_browserbase))
-            or (browser_current_provider == "browser-use" and direct_browser_use)
-        )
+    (
+        browser_current_provider,
+        browser_available,
+        browser_active,
+        browser_managed,
+    ) = _resolve_browser_feature_state(
+        browser_tool_enabled=browser_tool_enabled,
+        browser_provider=browser_provider,
+        browser_provider_explicit=browser_provider_explicit,
+        browser_local_available=browser_local_available,
+        direct_camofox=direct_camofox,
+        direct_browserbase=direct_browserbase,
+        direct_browser_use=direct_browser_use,
+        managed_browser_available=managed_browser_available,
     )

     if terminal_backend != "modal":

@@ -346,7 +401,7 @@ def get_nous_subscription_features(
         direct_override=browser_active and not browser_managed,
         toolset_enabled=browser_tool_enabled,
         current_provider=_browser_label(browser_current_provider),
-        explicit_configured=isinstance(config.get("browser"), dict) and "cloud_provider" in config.get("browser", {}),
+        explicit_configured=browser_provider_explicit,
     ),
     "modal": NousFeatureState(
         key="modal",

@@ -154,6 +154,34 @@ class PluginContext:
         self._manager._plugin_tool_names.add(name)
         logger.debug("Plugin %s registered tool: %s", self.manifest.name, name)

+    # -- message injection --------------------------------------------------
+
+    def inject_message(self, content: str, role: str = "user") -> bool:
+        """Inject a message into the active conversation.
+
+        If the agent is idle (waiting for user input), this starts a new turn.
+        If the agent is running, this interrupts and injects the message.
+
+        This enables plugins (e.g. remote control viewers, messaging bridges)
+        to send messages into the conversation from external sources.
+
+        Returns True if the message was queued successfully.
+        """
+        cli = self._manager._cli_ref
+        if cli is None:
+            logger.warning("inject_message: no CLI reference (not available in gateway mode)")
+            return False
+
+        msg = content if role == "user" else f"[{role}] {content}"
+
+        if getattr(cli, "_agent_running", False):
+            # Agent is mid-turn — interrupt with the message
+            cli._interrupt_queue.put(msg)
+        else:
+            # Agent is idle — queue as next input
+            cli._pending_input.put(msg)
+        return True
+
     # -- hook registration --------------------------------------------------

     def register_hook(self, hook_name: str, callback: Callable) -> None:

@@ -186,6 +214,7 @@ class PluginManager:
         self._hooks: Dict[str, List[Callable]] = {}
         self._plugin_tool_names: Set[str] = set()
         self._discovered: bool = False
+        self._cli_ref = None  # Set by CLI after plugin discovery

     # -----------------------------------------------------------------------
     # Public

@@ -616,24 +616,32 @@ def _print_setup_summary(config: dict, hermes_home):
     else:
         tool_status.append(("Web Search & Extract", False, "EXA_API_KEY, PARALLEL_API_KEY, FIRECRAWL_API_KEY/FIRECRAWL_API_URL, or TAVILY_API_KEY"))

-    # Browser tools (local Chromium or Browserbase cloud)
-    import shutil
-
-    _ab_found = (
-        shutil.which("agent-browser")
-        or (
-            Path(__file__).parent.parent / "node_modules" / ".bin" / "agent-browser"
-        ).exists()
-    )
+    # Browser tools (local Chromium, Camofox, Browserbase, or Browser Use)
+    browser_provider = subscription_features.browser.current_provider
     if subscription_features.browser.managed_by_nous:
         tool_status.append(("Browser Automation (Nous Browserbase)", True, None))
-    elif subscription_features.browser.current_provider == "Browserbase" and subscription_features.browser.available:
-        tool_status.append(("Browser Automation (Browserbase)", True, None))
-    elif _ab_found:
-        tool_status.append(("Browser Automation (local)", True, None))
+    elif subscription_features.browser.available:
+        label = "Browser Automation"
+        if browser_provider:
+            label = f"Browser Automation ({browser_provider})"
+        tool_status.append((label, True, None))
     else:
+        missing_browser_hint = "npm install -g agent-browser, set CAMOFOX_URL, or configure Browserbase"
+        if browser_provider == "Browserbase":
+            missing_browser_hint = (
+                "npm install -g agent-browser and set "
+                "BROWSERBASE_API_KEY/BROWSERBASE_PROJECT_ID"
+            )
+        elif browser_provider == "Browser Use":
+            missing_browser_hint = (
+                "npm install -g agent-browser and set BROWSER_USE_API_KEY"
+            )
+        elif browser_provider == "Camofox":
+            missing_browser_hint = "CAMOFOX_URL"
+        elif browser_provider == "Local browser":
+            missing_browser_hint = "npm install -g agent-browser"
         tool_status.append(
-            ("Browser Automation", False, "npm install -g agent-browser")
+            ("Browser Automation", False, missing_browser_hint)
         )

     # FAL (image generation)

@@ -1045,10 +1053,9 @@ def setup_model_provider(config: dict):
             min_key_ttl_seconds=5 * 60,
             timeout_seconds=15.0,
         )
-        nous_models = fetch_nous_models(
-            inference_base_url=creds.get("base_url", ""),
-            api_key=creds.get("api_key", ""),
-        )
+        # Use curated model list instead of full /models dump
+        from hermes_cli.models import _PROVIDER_MODELS
+        nous_models = _PROVIDER_MODELS.get("nous", [])
     except Exception as e:
         logger.debug("Could not fetch Nous models after login: %s", e)

@@ -2821,10 +2828,38 @@ def setup_gateway(config: dict):
     if token or get_env_value("MATRIX_PASSWORD"):
         # E2EE
         print()
-        if prompt_yes_no("Enable end-to-end encryption (E2EE)?", False):
+        want_e2ee = prompt_yes_no("Enable end-to-end encryption (E2EE)?", False)
+        if want_e2ee:
             save_env_value("MATRIX_ENCRYPTION", "true")
             print_success("E2EE enabled")
-            print_info("  Requires: pip install 'matrix-nio[e2e]'")
+
+        # Auto-install matrix-nio
+        matrix_pkg = "matrix-nio[e2e]" if want_e2ee else "matrix-nio"
+        try:
+            __import__("nio")
+        except ImportError:
+            print_info(f"Installing {matrix_pkg}...")
+            import subprocess
+
+            uv_bin = shutil.which("uv")
+            if uv_bin:
+                result = subprocess.run(
+                    [uv_bin, "pip", "install", "--python", sys.executable, matrix_pkg],
+                    capture_output=True,
+                    text=True,
+                )
+            else:
+                result = subprocess.run(
+                    [sys.executable, "-m", "pip", "install", matrix_pkg],
+                    capture_output=True,
+                    text=True,
+                )
+            if result.returncode == 0:
+                print_success(f"{matrix_pkg} installed")
+            else:
+                print_warning(f"Install failed — run manually: pip install '{matrix_pkg}'")
+                if result.stderr:
+                    print_info(f"  Error: {result.stderr.strip().splitlines()[-1]}")

     # Allowed users
     print()

@@ -354,7 +354,14 @@ def do_install(identifier: str, category: str = "", force: bool = False,
     extra_metadata.update(getattr(bundle, "metadata", {}) or {})
 
     # Quarantine the bundle
-    q_path = quarantine_bundle(bundle)
+    try:
+        q_path = quarantine_bundle(bundle)
+    except ValueError as exc:
+        c.print(f"[bold red]Installation blocked:[/] {exc}\n")
+        from tools.skills_hub import append_audit_log
+        append_audit_log("BLOCKED", bundle.name, bundle.source,
+                         bundle.trust_level, "invalid_path", str(exc))
+        return
     c.print(f"[dim]Quarantined to {q_path.relative_to(q_path.parent.parent.parent)}[/]")
 
     # Scan
@@ -414,7 +421,15 @@ def do_install(identifier: str, category: str = "", force: bool = False,
         return
 
     # Install
-    install_dir = install_from_quarantine(q_path, bundle.name, category, bundle, result)
+    try:
+        install_dir = install_from_quarantine(q_path, bundle.name, category, bundle, result)
+    except ValueError as exc:
+        c.print(f"[bold red]Installation blocked:[/] {exc}\n")
+        shutil.rmtree(q_path, ignore_errors=True)
+        from tools.skills_hub import append_audit_log
+        append_audit_log("BLOCKED", bundle.name, bundle.source,
+                         bundle.trust_level, "invalid_path", str(exc))
+        return
     from tools.skills_hub import SKILLS_DIR
     c.print(f"[bold green]Installed:[/] {install_dir.relative_to(SKILLS_DIR)}")
     c.print(f"[dim]Files: {', '.join(bundle.files.keys())}[/]\n")
@@ -312,23 +312,31 @@ def show_status(args):
                 _gw_svc = get_service_name()
             except Exception:
                 _gw_svc = "hermes-gateway"
-            result = subprocess.run(
-                ["systemctl", "--user", "is-active", _gw_svc],
-                capture_output=True,
-                text=True
-            )
-            is_active = result.stdout.strip() == "active"
+            try:
+                result = subprocess.run(
+                    ["systemctl", "--user", "is-active", _gw_svc],
+                    capture_output=True,
+                    text=True,
+                    timeout=5
+                )
+                is_active = result.stdout.strip() == "active"
+            except subprocess.TimeoutExpired:
+                is_active = False
             print(f" Status: {check_mark(is_active)} {'running' if is_active else 'stopped'}")
             print(" Manager: systemd (user)")
 
         elif sys.platform == 'darwin':
             from hermes_cli.gateway import get_launchd_label
-            result = subprocess.run(
-                ["launchctl", "list", get_launchd_label()],
-                capture_output=True,
-                text=True
-            )
-            is_loaded = result.returncode == 0
+            try:
+                result = subprocess.run(
+                    ["launchctl", "list", get_launchd_label()],
+                    capture_output=True,
+                    text=True,
+                    timeout=5
+                )
+                is_loaded = result.returncode == 0
+            except subprocess.TimeoutExpired:
+                is_loaded = False
             print(f" Status: {check_mark(is_loaded)} {'loaded' if is_loaded else 'not loaded'}")
             print(" Manager: launchd")
         else:
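The timeout guard added around the service-manager probes above can be sketched in isolation. This is a hedged re-creation, not Hermes code: the helper name and the probe commands below are illustrative stand-ins for the `systemctl`/`launchctl` calls.

```python
import subprocess
import sys

def probe_is_active(cmd: list, timeout: float = 5.0) -> bool:
    """Return True if the command prints 'active'; treat a hang as inactive."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        # A hung service manager should read as "stopped", not crash the CLI
        return False
    return result.stdout.strip() == "active"

# A fast command that answers, and a slow one that trips the timeout
print(probe_is_active([sys.executable, "-c", "print('active')"]))                            # True
print(probe_is_active([sys.executable, "-c", "import time; time.sleep(10)"], timeout=0.5))   # False
```

Without the `timeout=` argument, `subprocess.run` blocks indefinitely, which is exactly the hang the patched `show_status` avoids.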
@@ -314,6 +314,16 @@ TOOL_CATEGORIES = {
                 "browser_provider": "browser-use",
                 "post_setup": "browserbase",
             },
+            {
+                "name": "Camofox",
+                "tag": "Local anti-detection browser (Firefox/Camoufox)",
+                "env_vars": [
+                    {"key": "CAMOFOX_URL", "prompt": "Camofox server URL", "default": "http://localhost:9377",
+                     "url": "https://github.com/jo-inc/camofox-browser"},
+                ],
+                "browser_provider": "camofox",
+                "post_setup": "camofox",
+            },
         ],
     },
     "homeassistant": {
@@ -378,6 +388,28 @@ def _run_post_setup(post_setup_key: str):
     elif not node_modules.exists():
         _print_warning(" Node.js not found - browser tools require: npm install (in hermes-agent directory)")
 
+    elif post_setup_key == "camofox":
+        camofox_dir = PROJECT_ROOT / "node_modules" / "@askjo" / "camoufox-browser"
+        if not camofox_dir.exists() and shutil.which("npm"):
+            _print_info(" Installing Camofox browser server...")
+            import subprocess
+            result = subprocess.run(
+                ["npm", "install", "--silent"],
+                capture_output=True, text=True, cwd=str(PROJECT_ROOT)
+            )
+            if result.returncode == 0:
+                _print_success(" Camofox installed")
+            else:
+                _print_warning(" npm install failed - run manually: npm install")
+        if camofox_dir.exists():
+            _print_info(" Start the Camofox server:")
+            _print_info("   npx @askjo/camoufox-browser")
+            _print_info(" First run downloads the Camoufox engine (~300MB)")
+            _print_info(" Or use Docker: docker run -p 9377:9377 jo-inc/camofox-browser")
+        elif not shutil.which("npm"):
+            _print_warning(" Node.js not found. Install Camofox via Docker:")
+            _print_info("   docker run -p 9377:9377 jo-inc/camofox-browser")
+
     elif post_setup_key == "rl_training":
         try:
             __import__("tinker_atropos")
@@ -615,7 +647,9 @@ def _toolset_has_keys(ts_key: str, config: dict = None) -> bool:
     if cat:
         for provider in _visible_providers(cat, config):
             env_vars = provider.get("env_vars", [])
-            if env_vars and all(get_env_value(e["key"]) for e in env_vars):
+            if not env_vars:
+                return True  # No-key provider (e.g. Local Browser, Edge TTS)
+            if all(get_env_value(e["key"]) for e in env_vars):
                 return True
     return False
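The change above makes a provider that requires no env vars count as configured. A minimal standalone version of that check (the provider dicts and `get_env` callable here are made-up examples, not the real `_visible_providers`/`get_env_value` helpers):

```python
def toolset_has_keys(providers, get_env) -> bool:
    """A toolset is usable if any provider is fully configured or needs no keys."""
    for provider in providers:
        env_vars = provider.get("env_vars", [])
        if not env_vars:
            return True  # no-key provider (e.g. a local browser) always counts
        if all(get_env(e["key"]) for e in env_vars):
            return True  # every required env var is set
    return False

env = {"CAMOFOX_URL": "http://localhost:9377"}
print(toolset_has_keys([{"name": "Camofox", "env_vars": [{"key": "CAMOFOX_URL"}]}], env.get))  # True
print(toolset_has_keys([{"name": "Local browser", "env_vars": []}], env.get))                  # True
print(toolset_has_keys([{"name": "Browserbase", "env_vars": [{"key": "BB_KEY"}]}], env.get))   # False
```

The early `return True` for an empty `env_vars` list is the whole fix: previously `env_vars and all(...)` made a key-less provider fail the check.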
@@ -10,16 +10,27 @@ import os
 import sys
 from pathlib import Path
 
+from hermes_constants import get_hermes_home
 from honcho_integration.client import resolve_config_path, GLOBAL_CONFIG_PATH
 
 HOST = "hermes"
 
 
 def _config_path() -> Path:
-    """Return the active Honcho config path (instance-local or global)."""
+    """Return the active Honcho config path for reading (instance-local or global)."""
     return resolve_config_path()
 
 
+def _local_config_path() -> Path:
+    """Return the instance-local Honcho config path for writing.
+
+    Always returns $HERMES_HOME/honcho.json so each profile/instance gets
+    its own config file. The global ~/.honcho/config.json is only used as
+    a read fallback (via resolve_config_path) for cross-app interop.
+    """
+    return get_hermes_home() / "honcho.json"
+
+
 def _read_config() -> dict:
     path = _config_path()
     if path.exists():
@@ -31,7 +42,7 @@ def _read_config() -> dict:
 
 
 def _write_config(cfg: dict, path: Path | None = None) -> None:
-    path = path or _config_path()
+    path = path or _local_config_path()
     path.parent.mkdir(parents=True, exist_ok=True)
     path.write_text(
         json.dumps(cfg, indent=2, ensure_ascii=False) + "\n",
@@ -95,13 +106,13 @@ def cmd_setup(args) -> None:
     """Interactive Honcho setup wizard."""
     cfg = _read_config()
 
-    active_path = _config_path()
+    write_path = _local_config_path()
+    read_path = _config_path()
     print("\nHoncho memory setup\n" + "─" * 40)
     print(" Honcho gives Hermes persistent cross-session memory.")
-    if active_path != GLOBAL_CONFIG_PATH:
-        print(f" Instance config: {active_path}")
-    else:
-        print(" Config is shared with other hosts at ~/.honcho/config.json")
+    print(f" Config: {write_path}")
+    if read_path != write_path and read_path.exists():
+        print(f" (seeding from existing config at {read_path})")
     print()
 
     if not _ensure_sdk_installed():
@@ -189,7 +200,7 @@ def cmd_setup(args) -> None:
     hermes_host.setdefault("saveMessages", True)
 
     _write_config(cfg)
-    print(f"\n Config written to {active_path}")
+    print(f"\n Config written to {write_path}")
 
     # Test connection
     print(" Testing connection... ", end="", flush=True)
@@ -237,6 +248,7 @@ def cmd_status(args) -> None:
     cfg = _read_config()
 
     active_path = _config_path()
+    write_path = _local_config_path()
 
     if not cfg:
         print(f" No Honcho config found at {active_path}")
@@ -259,6 +271,8 @@ def cmd_status(args) -> None:
     print(f" Workspace: {hcfg.workspace_id}")
     print(f" Host: {hcfg.host}")
     print(f" Config path: {active_path}")
+    if write_path != active_path:
+        print(f" Write path: {write_path} (instance-local)")
     print(f" AI peer: {hcfg.ai_peer}")
     print(f" User peer: {hcfg.peer_name or 'not set'}")
     print(f" Session key: {hcfg.resolve_session_name()}")
@@ -304,6 +304,29 @@ def ensure_parent(path: Path) -> None:
     path.parent.mkdir(parents=True, exist_ok=True)
 
 
+def resolve_secret_input(value: Any, env: Optional[Dict[str, str]] = None) -> Optional[str]:
+    """Resolve an OpenClaw SecretInput value to a plain string.
+
+    SecretInput can be:
+    - A plain string: "sk-..."
+    - An env template: "${OPENROUTER_API_KEY}"
+    - A SecretRef object: {"source": "env", "id": "OPENROUTER_API_KEY"}
+    """
+    if isinstance(value, str):
+        # Check for env template: "${VAR_NAME}"
+        m = re.match(r"^\$\{(\w+)\}$", value.strip())
+        if m and env:
+            return env.get(m.group(1), "").strip() or None
+        return value.strip() or None
+    if isinstance(value, dict):
+        source = value.get("source", "")
+        ref_id = value.get("id", "")
+        if source == "env" and ref_id and env:
+            return env.get(ref_id, "").strip() or None
+        # File/exec sources can't be resolved here — return None
+        return None
+
+
 def load_yaml_file(path: Path) -> Dict[str, Any]:
     if yaml is None or not path.exists():
         return {}
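The three SecretInput shapes handled by `resolve_secret_input` can be exercised like this. The function body is re-stated standalone (mirroring the logic in the hunk above) so the example runs on its own; the key names and values are invented:

```python
import re
from typing import Any, Dict, Optional

def resolve_secret_input(value: Any, env: Optional[Dict[str, str]] = None) -> Optional[str]:
    """Resolve a plain string, "${VAR}" env template, or env SecretRef to a string."""
    if isinstance(value, str):
        m = re.match(r"^\$\{(\w+)\}$", value.strip())
        if m and env:
            return env.get(m.group(1), "").strip() or None
        return value.strip() or None
    if isinstance(value, dict):
        if value.get("source", "") == "env" and value.get("id", "") and env:
            return env.get(value["id"], "").strip() or None
        return None  # file/exec sources can't be resolved here
    return None

env = {"OPENROUTER_API_KEY": "sk-test"}
print(resolve_secret_input("sk-plain", env))                                     # sk-plain
print(resolve_secret_input("${OPENROUTER_API_KEY}", env))                        # sk-test
print(resolve_secret_input({"source": "env", "id": "OPENROUTER_API_KEY"}, env))  # sk-test
print(resolve_secret_input({"source": "file", "id": "x"}, env))                  # None
```

Note the "empty string becomes `None`" convention: callers can then use a plain `if not api_key: continue` guard regardless of which shape the secret took.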
@@ -890,14 +913,20 @@ class Migrator:
         self.record("command-allowlist", source, destination, "migrated", "Would merge patterns", added_patterns=added)
 
     def load_openclaw_config(self) -> Dict[str, Any]:
-        config_path = self.source_root / "openclaw.json"
-        if not config_path.exists():
-            return {}
-        try:
-            data = json.loads(config_path.read_text(encoding="utf-8"))
-            return data if isinstance(data, dict) else {}
-        except json.JSONDecodeError:
-            return {}
+        # Check current name and legacy config filenames
+        for name in ("openclaw.json", "clawdbot.json", "moldbot.json"):
+            config_path = self.source_root / name
+            if config_path.exists():
+                try:
+                    data = json.loads(config_path.read_text(encoding="utf-8"))
+                    return data if isinstance(data, dict) else {}
+                except json.JSONDecodeError:
+                    continue
+        return {}
+
+    def load_openclaw_env(self) -> Dict[str, str]:
+        """Load the OpenClaw .env file for secrets that live there instead of config."""
+        return parse_env_file(self.source_root / ".env")
 
     def merge_env_values(self, additions: Dict[str, str], kind: str, source: Path) -> None:
         destination = self.target_root / ".env"
@@ -1024,6 +1053,10 @@ class Migrator:
             supported_targets=sorted(SUPPORTED_SECRET_TARGETS),
         )
 
+    def _resolve_channel_secret(self, value: Any) -> Optional[str]:
+        """Resolve a channel config value that may be a SecretRef."""
+        return resolve_secret_input(value, self.load_openclaw_env())
+
     def migrate_discord_settings(self, config: Optional[Dict[str, Any]] = None) -> None:
         config = config or self.load_openclaw_config()
         additions: Dict[str, str] = {}
@@ -1118,15 +1151,17 @@ class Migrator:
         secret_additions: Dict[str, str] = {}
 
         # Extract provider API keys from models.providers
+        # Note: apiKey values can be strings, env templates, or SecretRef objects
+        openclaw_env = self.load_openclaw_env()
         providers = config.get("models", {}).get("providers", {})
         if isinstance(providers, dict):
             for provider_name, provider_cfg in providers.items():
                 if not isinstance(provider_cfg, dict):
                     continue
-                api_key = provider_cfg.get("apiKey")
-                if not isinstance(api_key, str) or not api_key.strip():
+                raw_key = provider_cfg.get("apiKey")
+                api_key = resolve_secret_input(raw_key, openclaw_env)
+                if not api_key:
                     continue
-                api_key = api_key.strip()
 
                 base_url = provider_cfg.get("baseUrl", "")
                 api_type = provider_cfg.get("api", "")
@@ -1170,6 +1205,50 @@ class Migrator:
         if isinstance(oai_key, str) and oai_key.strip():
             secret_additions["VOICE_TOOLS_OPENAI_KEY"] = oai_key.strip()
 
+        # Also check the OpenClaw .env file — many users store keys there
+        # instead of inline in openclaw.json
+        openclaw_env = self.load_openclaw_env()
+        env_key_mapping = {
+            "OPENROUTER_API_KEY": "OPENROUTER_API_KEY",
+            "OPENAI_API_KEY": "OPENAI_API_KEY",
+            "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY",
+            "ELEVENLABS_API_KEY": "ELEVENLABS_API_KEY",
+            "TELEGRAM_BOT_TOKEN": "TELEGRAM_BOT_TOKEN",
+            "DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY",
+            "GEMINI_API_KEY": "GEMINI_API_KEY",
+            "ZAI_API_KEY": "ZAI_API_KEY",
+            "MINIMAX_API_KEY": "MINIMAX_API_KEY",
+        }
+        for oc_key, hermes_key in env_key_mapping.items():
+            val = openclaw_env.get(oc_key, "").strip()
+            if val and hermes_key not in secret_additions:
+                secret_additions[hermes_key] = val
+
+        # Check per-agent auth-profiles.json for additional credentials
+        auth_profiles_path = self.source_root / "agents" / "main" / "agent" / "auth-profiles.json"
+        if auth_profiles_path.exists():
+            try:
+                profiles = json.loads(auth_profiles_path.read_text(encoding="utf-8"))
+                if isinstance(profiles, dict):
+                    # auth-profiles.json wraps profiles in a "profiles" key
+                    profile_entries = profiles.get("profiles", profiles) if isinstance(profiles.get("profiles"), dict) else profiles
+                    for profile_name, profile_data in profile_entries.items():
+                        if not isinstance(profile_data, dict):
+                            continue
+                        # Canonical field is "key", "apiKey" is accepted as alias
+                        api_key = profile_data.get("key", "") or profile_data.get("apiKey", "")
+                        if not isinstance(api_key, str) or not api_key.strip():
+                            continue
+                        name_lower = profile_name.lower()
+                        if "openrouter" in name_lower and "OPENROUTER_API_KEY" not in secret_additions:
+                            secret_additions["OPENROUTER_API_KEY"] = api_key.strip()
+                        elif "openai" in name_lower and "OPENAI_API_KEY" not in secret_additions:
+                            secret_additions["OPENAI_API_KEY"] = api_key.strip()
+                        elif "anthropic" in name_lower and "ANTHROPIC_API_KEY" not in secret_additions:
+                            secret_additions["ANTHROPIC_API_KEY"] = api_key.strip()
            except (json.JSONDecodeError, OSError):
+                pass
+
         if secret_additions:
             self.merge_env_values(secret_additions, "provider-keys", self.source_root / "openclaw.json")
         else:
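Both new fallbacks above guard every write with `hermes_key not in secret_additions`, so earlier, higher-priority sources (inline config keys) win over `.env` and auth-profile values. A standalone sketch of that first-wins merge (the key names and values are invented):

```python
def merge_first_wins(secret_additions: dict, env_values: dict, mapping: dict) -> dict:
    """Copy env values in only where no higher-priority source already set the key."""
    for src_key, dst_key in mapping.items():
        val = env_values.get(src_key, "").strip()
        if val and dst_key not in secret_additions:
            secret_additions[dst_key] = val
    return secret_additions

found = {"OPENAI_API_KEY": "sk-from-config"}  # already extracted from openclaw.json
env = {"OPENAI_API_KEY": "sk-from-env", "GEMINI_API_KEY": "g-123"}
merge_first_wins(found, env, {"OPENAI_API_KEY": "OPENAI_API_KEY",
                              "GEMINI_API_KEY": "GEMINI_API_KEY"})
print(found)  # {'OPENAI_API_KEY': 'sk-from-config', 'GEMINI_API_KEY': 'g-123'}
```

The same ordering applies to the auth-profiles pass: it only fills keys the config and `.env` scans left empty.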
@@ -1218,7 +1297,11 @@ class Migrator:
 
         if self.execute:
             backup_path = self.maybe_backup(destination)
-            hermes_config["model"] = model_str
+            existing_model = hermes_config.get("model")
+            if isinstance(existing_model, dict):
+                existing_model["default"] = model_str
+            else:
+                hermes_config["model"] = {"default": model_str}
             dump_yaml_file(destination, hermes_config)
             self.record("model-config", source_path, destination, "migrated", backup=str(backup_path) if backup_path else "", model=model_str)
         else:
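The fix above stops the migrator from flattening a structured `model:` mapping to a bare string. A minimal sketch of the preserved-siblings behaviour (the `small` key is a hypothetical sibling entry, used only to show it survives):

```python
def set_default_model(hermes_config: dict, model_str: str) -> dict:
    """Set model.default without clobbering an existing model mapping."""
    existing = hermes_config.get("model")
    if isinstance(existing, dict):
        existing["default"] = model_str  # preserve sibling keys in the mapping
    else:
        hermes_config["model"] = {"default": model_str}
    return hermes_config

cfg = {"model": {"default": "old-model", "small": "mini-model"}}
set_default_model(cfg, "hermes-4")
print(cfg)  # {'model': {'default': 'hermes-4', 'small': 'mini-model'}}

set_default_model(cfg2 := {"model": "legacy-string"}, "hermes-4")
print(cfg2)  # {'model': {'default': 'hermes-4'}}
```

Both a legacy string value and a structured mapping end up in the same nested shape, which is what a consumer reading `config["model"]["default"]` expects.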
@@ -1244,22 +1327,44 @@ class Migrator:
         if isinstance(provider, str) and provider in ("elevenlabs", "openai", "edge"):
             tts_data["provider"] = provider
 
-        elevenlabs = tts.get("elevenlabs", {})
+        # TTS provider settings live under messages.tts.providers.{provider}
+        # in OpenClaw (not messages.tts.elevenlabs directly)
+        providers = tts.get("providers") or {}
+
+        # Also check the top-level "talk" config which has provider settings too
+        talk_cfg = (config or self.load_openclaw_config()).get("talk") or {}
+        talk_providers = talk_cfg.get("providers") or {}
+
+        # Merge: messages.tts.providers takes priority, then talk.providers,
+        # then legacy flat keys (messages.tts.elevenlabs, etc.)
+        elevenlabs = (
+            (providers.get("elevenlabs") or {})
+            if isinstance(providers.get("elevenlabs"), dict) else
+            (talk_providers.get("elevenlabs") or {})
+            if isinstance(talk_providers.get("elevenlabs"), dict) else
+            (tts.get("elevenlabs") or {})
+        )
         if isinstance(elevenlabs, dict):
             el_settings: Dict[str, str] = {}
-            voice_id = elevenlabs.get("voiceId")
+            voice_id = elevenlabs.get("voiceId") or talk_cfg.get("voiceId")
             if isinstance(voice_id, str) and voice_id.strip():
                 el_settings["voice_id"] = voice_id.strip()
-            model_id = elevenlabs.get("modelId")
+            model_id = elevenlabs.get("modelId") or talk_cfg.get("modelId")
             if isinstance(model_id, str) and model_id.strip():
                 el_settings["model_id"] = model_id.strip()
             if el_settings:
                 tts_data["elevenlabs"] = el_settings
 
-        openai_tts = tts.get("openai", {})
+        openai_tts = (
+            (providers.get("openai") or {})
+            if isinstance(providers.get("openai"), dict) else
+            (talk_providers.get("openai") or {})
+            if isinstance(talk_providers.get("openai"), dict) else
+            (tts.get("openai") or {})
+        )
         if isinstance(openai_tts, dict):
             oai_settings: Dict[str, str] = {}
-            oai_model = openai_tts.get("model")
+            oai_model = openai_tts.get("model") or openai_tts.get("modelId")
             if isinstance(oai_model, str) and oai_model.strip():
                 oai_settings["model"] = oai_model.strip()
             oai_voice = openai_tts.get("voice")
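The chained conditional expressions above encode a three-level precedence: `messages.tts.providers.{name}` first, then `talk.providers.{name}`, then the legacy flat `messages.tts.{name}` key. That intent (stated in the hunk's comments) reads more clearly as a helper; this is a sketch of the precedence rule, not the migrator's actual function:

```python
def pick_provider_cfg(tts: dict, talk_providers: dict, name: str) -> dict:
    """First dict wins: tts['providers'][name], then talk providers, then legacy tts[name]."""
    providers = tts.get("providers") or {}
    if isinstance(providers.get(name), dict):
        return providers[name]
    if isinstance(talk_providers.get(name), dict):
        return talk_providers[name]
    return tts.get(name) or {}

tts = {"elevenlabs": {"voiceId": "legacy-voice"},
       "providers": {"elevenlabs": {"voiceId": "new-voice"}}}
print(pick_provider_cfg(tts, {}, "elevenlabs"))                                # {'voiceId': 'new-voice'}
print(pick_provider_cfg({"elevenlabs": {"voiceId": "v1"}}, {}, "elevenlabs"))  # {'voiceId': 'v1'}
```

Nested `x if cond else y if cond2 else z` chains like the ones in the diff are easy to misread because `else` binds to the nearest `if`; a small helper with early returns makes the fallback order explicit.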
@@ -1268,7 +1373,11 @@ class Migrator:
             if oai_settings:
                 tts_data["openai"] = oai_settings
 
-        edge_tts = tts.get("edge", {})
+        edge_tts = (
+            (providers.get("edge") or {})
+            if isinstance(providers.get("edge"), dict) else
+            (tts.get("edge") or {})
+        )
         if isinstance(edge_tts, dict):
             edge_voice = edge_tts.get("voice")
             if isinstance(edge_voice, str) and edge_voice.strip():
@@ -1298,15 +1407,29 @@ class Migrator:
             self.record("tts-config", source_path, destination, "migrated", "Would set TTS config", settings=list(tts_data.keys()))
 
     def migrate_shared_skills(self) -> None:
-        source_root = self.source_root / "skills"
+        # Check all OpenClaw skill sources: managed, personal, project-level
+        skill_sources = [
+            (self.source_root / "skills", "shared-skills", "managed skills"),
+            (Path.home() / ".agents" / "skills", "personal-skills", "personal cross-project skills"),
+            (self.source_root / "workspace" / ".agents" / "skills", "project-skills", "project-level shared skills"),
+            (self.source_root / "workspace.default" / ".agents" / "skills", "project-skills", "project-level shared skills"),
+        ]
+        found_any = False
+        for source_root, kind_label, desc in skill_sources:
+            if source_root.exists():
+                found_any = True
+                self._import_skill_directory(source_root, kind_label, desc)
+        if not found_any:
+            destination_root = self.target_root / "skills" / SKILL_CATEGORY_DIRNAME
+            self.record("shared-skills", None, destination_root, "skipped", "No shared OpenClaw skills directories found")
+
+    def _import_skill_directory(self, source_root: Path, kind_label: str, desc: str) -> None:
+        """Import skills from a single source directory into openclaw-imports."""
         destination_root = self.target_root / "skills" / SKILL_CATEGORY_DIRNAME
         if not source_root.exists():
             self.record("shared-skills", None, destination_root, "skipped", "No shared OpenClaw skills directory found")
             return
 
         skill_dirs = [p for p in sorted(source_root.iterdir()) if p.is_dir() and (p / "SKILL.md").exists()]
         if not skill_dirs:
-            self.record("shared-skills", source_root, destination_root, "skipped", "No shared skills with SKILL.md found")
+            self.record(kind_label, source_root, destination_root, "skipped", f"No skills with SKILL.md found in {desc}")
             return
 
         for skill_dir in skill_dirs:
@@ -1314,7 +1437,7 @@ class Migrator:
             final_destination = destination
             if destination.exists():
                 if self.skill_conflict_mode == "skip":
-                    self.record("shared-skill", skill_dir, destination, "conflict", "Destination skill already exists")
+                    self.record(kind_label, skill_dir, destination, "conflict", "Destination skill already exists")
                     continue
                 if self.skill_conflict_mode == "rename":
                     final_destination = self.resolve_skill_destination(destination)
@@ -1329,19 +1452,19 @@ class Migrator:
                 details: Dict[str, Any] = {"backup": str(backup_path) if backup_path else ""}
                 if final_destination != destination:
                     details["renamed_from"] = str(destination)
-                self.record("shared-skill", skill_dir, final_destination, "migrated", **details)
+                self.record(kind_label, skill_dir, final_destination, "migrated", **details)
             else:
                 if final_destination != destination:
                     self.record(
-                        "shared-skill",
+                        kind_label,
                         skill_dir,
                         final_destination,
                         "migrated",
-                        "Would copy shared skill directory under a renamed folder",
+                        f"Would copy {desc} directory under a renamed folder",
                         renamed_from=str(destination),
                     )
                 else:
-                    self.record("shared-skill", skill_dir, final_destination, "migrated", "Would copy shared skill directory")
+                    self.record(kind_label, skill_dir, final_destination, "migrated", f"Would copy {desc} directory")
 
         desc_path = destination_root / "DESCRIPTION.md"
         if self.execute:
@@ -1518,6 +1641,7 @@ class Migrator:
             self.source_candidate("workspace/IDENTITY.md", "workspace.default/IDENTITY.md"),
             self.source_candidate("workspace/TOOLS.md", "workspace.default/TOOLS.md"),
             self.source_candidate("workspace/HEARTBEAT.md", "workspace.default/HEARTBEAT.md"),
+            self.source_candidate("workspace/BOOTSTRAP.md", "workspace.default/BOOTSTRAP.md"),
         ]
         for candidate in candidates:
             if candidate:
@@ -1789,8 +1913,9 @@ class Migrator:
         human_delay = defaults.get("humanDelay") or {}
         if human_delay:
             hd = hermes_cfg.get("human_delay") or {}
-            if human_delay.get("enabled"):
-                hd["mode"] = "natural"
+            hd_mode = human_delay.get("mode") or ("natural" if human_delay.get("enabled") else None)
+            if hd_mode and hd_mode != "off":
+                hd["mode"] = hd_mode
             if human_delay.get("minMs"):
                 hd["min_ms"] = human_delay["minMs"]
             if human_delay.get("maxMs"):
@@ -1804,11 +1929,11 @@ class Migrator:
             changes = True
 
         # Map terminal/exec settings
-        exec_cfg = defaults.get("exec") or (config.get("tools") or {}).get("exec") or {}
+        exec_cfg = (config.get("tools") or {}).get("exec") or {}
         if exec_cfg:
             terminal_cfg = hermes_cfg.get("terminal") or {}
-            if exec_cfg.get("timeout"):
-                terminal_cfg["timeout"] = exec_cfg["timeout"]
+            if exec_cfg.get("timeoutSec") or exec_cfg.get("timeout"):
+                terminal_cfg["timeout"] = exec_cfg.get("timeoutSec") or exec_cfg.get("timeout")
                 changes = True
             hermes_cfg["terminal"] = terminal_cfg
 
@@ -1883,24 +2008,34 @@ class Migrator:
         sr = hermes_cfg.get("session_reset") or {}
         changes = False
 
-        reset_triggers = session.get("resetTriggers") or session.get("reset_triggers") or {}
-        if reset_triggers:
-            daily = reset_triggers.get("daily") or {}
-            idle = reset_triggers.get("idle") or {}
+        # OpenClaw uses session.reset (structured) and session.resetTriggers (string array)
+        reset = session.get("reset") or {}
+        reset_triggers = session.get("resetTriggers") or session.get("reset_triggers") or []
 
-            if daily.get("enabled") and idle.get("enabled"):
-                sr["mode"] = "both"
-            elif daily.get("enabled"):
+        if reset:
+            # Structured reset config: has mode, atHour, idleMinutes
+            mode = reset.get("mode", "")
+            if mode == "daily":
                 sr["mode"] = "daily"
-            elif idle.get("enabled"):
+            elif mode == "idle":
                 sr["mode"] = "idle"
             else:
-                sr["mode"] = "none"
-
-            if daily.get("hour") is not None:
-                sr["at_hour"] = daily["hour"]
-            if idle.get("minutes") or idle.get("timeoutMinutes"):
-                sr["idle_minutes"] = idle.get("minutes") or idle.get("timeoutMinutes")
+                sr["mode"] = mode or "none"
+            if reset.get("atHour") is not None:
+                sr["at_hour"] = reset["atHour"]
+            if reset.get("idleMinutes"):
+                sr["idle_minutes"] = reset["idleMinutes"]
             changes = True
+        elif isinstance(reset_triggers, list) and reset_triggers:
+            # Simple string triggers: ["daily", "idle"]
+            has_daily = "daily" in reset_triggers
+            has_idle = "idle" in reset_triggers
+            if has_daily and has_idle:
+                sr["mode"] = "both"
+            elif has_daily:
+                sr["mode"] = "daily"
+            elif has_idle:
+                sr["mode"] = "idle"
+            changes = True
 
         if changes:
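The two session-reset shapes handled above (structured `session.reset` vs the string-array `session.resetTriggers`) can be sketched as one pure mapping function. This is a hedged condensation of the branch logic in the hunk, with the same field names; the function itself is illustrative:

```python
def map_session_reset(session: dict) -> dict:
    """Map OpenClaw session reset config to a Hermes session_reset dict (sketch)."""
    sr = {}
    reset = session.get("reset") or {}
    triggers = session.get("resetTriggers") or []
    if reset:
        # Structured form: {"mode": ..., "atHour": ..., "idleMinutes": ...}
        mode = reset.get("mode", "")
        sr["mode"] = mode if mode in ("daily", "idle") else (mode or "none")
        if reset.get("atHour") is not None:
            sr["at_hour"] = reset["atHour"]
        if reset.get("idleMinutes"):
            sr["idle_minutes"] = reset["idleMinutes"]
    elif isinstance(triggers, list) and triggers:
        # Simple string triggers: ["daily", "idle"]
        has_daily, has_idle = "daily" in triggers, "idle" in triggers
        if has_daily and has_idle:
            sr["mode"] = "both"
        elif has_daily:
            sr["mode"] = "daily"
        elif has_idle:
            sr["mode"] = "idle"
    return sr

print(map_session_reset({"reset": {"mode": "daily", "atHour": 4}}))  # {'mode': 'daily', 'at_hour': 4}
print(map_session_reset({"resetTriggers": ["daily", "idle"]}))       # {'mode': 'both'}
```

Note the structured form takes priority: if both `reset` and `resetTriggers` are present, the trigger array is ignored, matching the `if`/`elif` ordering in the diff.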
@@ -2092,11 +2227,12 @@ class Migrator:
         browser_hermes = hermes_cfg.get("browser") or {}
         changed = False
 
-        if browser.get("inactivityTimeoutMs"):
-            browser_hermes["inactivity_timeout"] = browser["inactivityTimeoutMs"] // 1000
+        # Map fields that have Hermes equivalents
+        if browser.get("cdpUrl"):
+            browser_hermes["cdp_url"] = browser["cdpUrl"]
             changed = True
-        if browser.get("commandTimeoutMs"):
-            browser_hermes["command_timeout"] = browser["commandTimeoutMs"] // 1000
+        if browser.get("headless") is not None:
+            browser_hermes["headless"] = browser["headless"]
             changed = True
 
         if changed:
@@ -2107,9 +2243,9 @@ class Migrator:
             self.record("browser-config", "openclaw.json browser.*", "config.yaml browser",
                         "migrated")
 
-        # Archive advanced browser settings
+        # Archive remaining browser settings
         advanced = {k: v for k, v in browser.items()
-                    if k not in ("inactivityTimeoutMs", "commandTimeoutMs") and v}
+                    if k not in ("cdpUrl", "headless") and v}
         if advanced and self.archive_dir:
             if self.execute:
                 self.archive_dir.mkdir(parents=True, exist_ok=True)
@@ -2130,18 +2266,22 @@ class Migrator:
         hermes_cfg = load_yaml_file(hermes_cfg_path)
         changed = False
 
-        # Map exec timeout -> terminal timeout
+        # Map exec timeout -> terminal timeout (field is timeoutSec in OpenClaw)
         exec_cfg = tools.get("exec") or {}
-        if exec_cfg.get("timeout"):
+        timeout_val = exec_cfg.get("timeoutSec") or exec_cfg.get("timeout")
+        if timeout_val:
             terminal_cfg = hermes_cfg.get("terminal") or {}
-            terminal_cfg["timeout"] = exec_cfg["timeout"]
+            terminal_cfg["timeout"] = timeout_val
             hermes_cfg["terminal"] = terminal_cfg
             changed = True
 
-        # Map web search API key
-        web_cfg = tools.get("webSearch") or tools.get("web") or {}
-        if web_cfg.get("braveApiKey") and self.migrate_secrets:
-            self._set_env_var("BRAVE_API_KEY", web_cfg["braveApiKey"], "tools.webSearch.braveApiKey")
+        # Map web search API key (path: tools.web.search.brave.apiKey in OpenClaw)
+        web_cfg = tools.get("web") or tools.get("webSearch") or {}
+        search_cfg = web_cfg.get("search") or web_cfg if not web_cfg.get("search") else web_cfg["search"]
+        brave_cfg = search_cfg.get("brave") or {}
+        brave_key = brave_cfg.get("apiKey") or search_cfg.get("braveApiKey") or web_cfg.get("braveApiKey")
+        if brave_key and isinstance(brave_key, str) and self.migrate_secrets:
+            self._set_env_var("BRAVE_API_KEY", brave_key, "tools.web.search.brave.apiKey")
 
         if changed and self.execute:
             self.maybe_backup(hermes_cfg_path)
@@ -2169,8 +2309,9 @@ class Migrator:
         hermes_cfg_path = self.target_root / "config.yaml"
         hermes_cfg = load_yaml_file(hermes_cfg_path)
 
-        # Map approval mode
-        mode = approvals.get("mode") or approvals.get("defaultMode")
+        # Map approval mode (nested under approvals.exec.mode in OpenClaw)
+        exec_approvals = approvals.get("exec") or {}
+        mode = (exec_approvals.get("mode") if isinstance(exec_approvals, dict) else None) or approvals.get("mode") or approvals.get("defaultMode")
         if mode:
             mode_map = {"auto": "off", "always": "manual", "smart": "smart", "manual": "manual"}
             hermes_mode = mode_map.get(mode, "manual")
Some files were not shown because too many files have changed in this diff.