merge: resolve conflict with main (i18n refactor)

Main moved StatusPage constants/functions inside the component and added
i18n support. Resolved by keeping the i18n structure and adding the
runningRemote key to en.ts, zh.ts, and types.ts for remote gateway
display.
Hermes Agent 2026-04-14 22:30:50 +00:00
commit 5ad28a2dbe
158 changed files with 10933 additions and 1378 deletions


@@ -919,7 +919,7 @@ display:
slack: 'off' # quiet in shared Slack workspace
```
-Platforms without an override fall back to the global `tool_progress` value. Valid platform keys: `telegram`, `discord`, `slack`, `signal`, `whatsapp`, `matrix`, `mattermost`, `email`, `sms`, `homeassistant`, `dingtalk`, `feishu`, `wecom`, `weixin`, `bluebubbles`.
+Platforms without an override fall back to the global `tool_progress` value. Valid platform keys: `telegram`, `discord`, `slack`, `signal`, `whatsapp`, `matrix`, `mattermost`, `email`, `sms`, `homeassistant`, `dingtalk`, `feishu`, `wecom`, `weixin`, `bluebubbles`, `qqbot`.
`interim_assistant_messages` is gateway-only. When enabled, Hermes sends completed mid-turn assistant updates as separate chat messages. This is independent of `tool_progress` and does not require gateway streaming.


@@ -277,6 +277,6 @@ docker restart hermes
```sh
docker logs --tail 50 hermes # Recent logs
-docker exec hermes hermes version # Verify version
+docker run -it --rm nousresearch/hermes-agent:latest version # Verify version
docker stats hermes # Resource usage
```


@@ -278,3 +278,9 @@ In Open WebUI, add each as a separate connection. The model dropdown shows `alic
- **Response storage** — stored responses (for `previous_response_id`) are persisted in SQLite and survive gateway restarts. Max 100 stored responses (LRU eviction).
- **No file upload** — vision/document analysis via uploaded files is not yet supported through the API.
- **Model field is cosmetic** — the `model` field in requests is accepted but the actual LLM model used is configured server-side in config.yaml.
## Proxy Mode
The API server also serves as the backend for **gateway proxy mode**. When another Hermes gateway instance is configured with `GATEWAY_PROXY_URL` pointing at this API server, it forwards all messages here instead of running its own agent. This enables split deployments — for example, a Docker container handling Matrix E2EE that relays to a host-side agent.
See [Matrix Proxy Mode](/docs/user-guide/messaging/matrix#proxy-mode-e2ee-on-macos) for the full setup guide.
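Concretely, a minimal container-side configuration might look like the following sketch (the IP address and key value are placeholders):

```shell
# Thin gateway (container side) — forward all messages to the remote agent
GATEWAY_PROXY_URL=http://192.168.1.100:8642   # remote Hermes API server
GATEWAY_PROXY_KEY=your-secret-key-here        # must match API_SERVER_KEY on the host
```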


@@ -204,6 +204,7 @@ When scheduling jobs, you specify where the output goes:
| `"wecom"` | WeCom | |
| `"weixin"` | Weixin (WeChat) | |
| `"bluebubbles"` | BlueBubbles (iMessage) | |
| `"qqbot"` | QQ Bot (Tencent QQ) | |
The agent's final response is automatically delivered. You do not need to call `send_message` in the cron prompt.


@@ -86,7 +86,7 @@ Project-local plugins under `./.hermes/plugins/` are disabled by default. Enable
| Add CLI commands | `ctx.register_cli_command(name, help, setup_fn, handler_fn)` — adds `hermes <plugin> <subcommand>` |
| Inject messages | `ctx.inject_message(content, role="user")` — see [Injecting Messages](#injecting-messages) |
| Ship data files | `Path(__file__).parent / "data" / "file.yaml"` |
-| Bundle skills | Copy `skill.md` to `~/.hermes/skills/` at load time |
+| Bundle skills | `ctx.register_skill(name, path)` — namespaced as `plugin:skill`, loaded via `skill_view("plugin:skill")` |
| Gate on env vars | `requires_env: [API_KEY]` in plugin.yaml — prompted during `hermes plugins install` |
| Distribute via pip | `[project.entry-points."hermes_agent.plugins"]` |


@@ -36,6 +36,8 @@ display:
| `ares` | War-god theme — crimson and bronze | `Ares Agent` | Deep crimson borders with bronze accents. Aggressive spinner verbs ("forging", "marching", "tempering steel"). Custom sword-and-shield ASCII art banner. |
| `mono` | Monochrome — clean grayscale | `Hermes Agent` | All grays — no color. Borders are `#555555`, text is `#c9d1d9`. Ideal for minimal terminal setups or screen recordings. |
| `slate` | Cool blue — developer-focused | `Hermes Agent` | Royal blue borders (`#4169e1`), soft blue text. Calm and professional. No custom spinner — uses default faces. |
| `daylight` | Light theme for bright terminals with dark text and cool blue accents | `Hermes Agent` | Designed for white or bright terminals. Dark slate text with blue borders, pale status surfaces, and a light completion menu that stays readable in light terminal profiles. |
| `warm-lightmode` | Warm brown/gold text for light terminal backgrounds | `Hermes Agent` | Warm parchment tones for light terminals. Dark brown text with saddle-brown accents, cream-colored status surfaces. An earthy alternative to the cooler daylight theme. |
| `poseidon` | Ocean-god theme — deep blue and seafoam | `Poseidon Agent` | Deep blue to seafoam gradient. Ocean-themed spinners ("charting currents", "sounding the depth"). Trident ASCII art banner. |
| `sisyphus` | Sisyphean theme — austere grayscale with persistence | `Sisyphus Agent` | Light grays with stark contrast. Boulder-themed spinners ("pushing uphill", "resetting the boulder", "enduring the loop"). Boulder-and-hill ASCII art banner. |
| `charizard` | Volcanic theme — burnt orange and ember | `Charizard Agent` | Warm burnt orange to ember gradient. Fire-themed spinners ("banking into the draft", "measuring burn"). Dragon-silhouette ASCII art banner. |
@@ -63,6 +65,12 @@ Controls all color values throughout the CLI. Values are hex color strings.
| `response_border` | Border around the agent's response box (ANSI escape) | `#FFD700` |
| `session_label` | Session label color | `#DAA520` |
| `session_border` | Session ID dim border color | `#8B8682` |
| `status_bar_bg` | Background color for the TUI status / usage bar | `#1a1a2e` |
| `voice_status_bg` | Background color for the voice-mode status badge | `#1a1a2e` |
| `completion_menu_bg` | Background color for the completion menu list | `#1a1a2e` |
| `completion_menu_current_bg` | Background color for the active completion row | `#333355` |
| `completion_menu_meta_bg` | Background color for the completion meta column | `#1a1a2e` |
| `completion_menu_meta_current_bg` | Background color for the active completion meta column | `#333355` |
### Spinner (`spinner:`)
@@ -129,6 +137,12 @@ colors:
response_border: "#FFD700"
session_label: "#DAA520"
session_border: "#8B8682"
status_bar_bg: "#1a1a2e"
voice_status_bg: "#1a1a2e"
completion_menu_bg: "#1a1a2e"
completion_menu_current_bg: "#333355"
completion_menu_meta_bg: "#1a1a2e"
completion_menu_meta_current_bg: "#333355"
spinner:
waiting_faces:


@@ -6,7 +6,7 @@ description: "Chat with Hermes from Telegram, Discord, Slack, WhatsApp, Signal,
# Messaging Gateway
-Chat with Hermes from Telegram, Discord, Slack, WhatsApp, Signal, SMS, Email, Home Assistant, Mattermost, Matrix, DingTalk, Feishu/Lark, WeCom, Weixin, BlueBubbles (iMessage), or your browser. The gateway is a single background process that connects to all your configured platforms, handles sessions, runs cron jobs, and delivers voice messages.
+Chat with Hermes from Telegram, Discord, Slack, WhatsApp, Signal, SMS, Email, Home Assistant, Mattermost, Matrix, DingTalk, Feishu/Lark, WeCom, Weixin, BlueBubbles (iMessage), QQ, or your browser. The gateway is a single background process that connects to all your configured platforms, handles sessions, runs cron jobs, and delivers voice messages.
For the full voice feature set — including CLI microphone mode, spoken replies in messaging, and Discord voice-channel conversations — see [Voice Mode](/docs/user-guide/features/voice-mode) and [Use Voice Mode with Hermes](/docs/guides/use-voice-mode-with-hermes).
@@ -30,6 +30,7 @@ For the full voice feature set — including CLI microphone mode, spoken replies
| WeCom Callback | — | — | — | — | — | — | — |
| Weixin | ✅ | ✅ | ✅ | — | — | ✅ | ✅ |
| BlueBubbles | — | ✅ | ✅ | — | ✅ | ✅ | — |
| QQ | ✅ | ✅ | ✅ | — | — | ✅ | — |
**Voice** = TTS audio replies and/or voice message transcription. **Images** = send/receive images. **Files** = send/receive file attachments. **Threads** = threaded conversations. **Reactions** = emoji reactions on messages. **Typing** = typing indicator while processing. **Streaming** = progressive message updates via editing.
@@ -55,6 +56,7 @@ flowchart TB
wcb[WeCom Callback]
wx[Weixin]
bb[BlueBubbles]
qq[QQ]
api["API Server<br/>(OpenAI-compatible)"]
wh[Webhooks]
end
@@ -80,6 +82,7 @@ flowchart TB
wcb --> store
wx --> store
bb --> store
qq --> store
api --> store
wh --> store
store --> agent
@@ -369,6 +372,7 @@ Each platform has its own toolset:
| WeCom Callback | `hermes-wecom-callback` | Full tools including terminal |
| Weixin | `hermes-weixin` | Full tools including terminal |
| BlueBubbles | `hermes-bluebubbles` | Full tools including terminal |
| QQBot | `hermes-qqbot` | Full tools including terminal |
| API Server | `hermes` (default) | Full tools including terminal |
| Webhooks | `hermes-webhook` | Full tools including terminal |
@@ -390,5 +394,6 @@ Each platform has its own toolset:
- [WeCom Callback Setup](wecom-callback.md)
- [Weixin Setup (WeChat)](weixin.md)
- [BlueBubbles Setup (iMessage)](bluebubbles.md)
- [QQBot Setup](qqbot.md)
- [Open WebUI + API Server](open-webui.md)
- [Webhooks](webhooks.md)


@@ -439,6 +439,141 @@ security breach). A new access token gets a new device ID with no stale key
history, so other clients trust it immediately.
:::
## Proxy Mode (E2EE on macOS)
Matrix E2EE requires `libolm`, which doesn't compile on macOS ARM64 (Apple Silicon). The `hermes-agent[matrix]` extra is gated to Linux only. If you're on macOS, proxy mode lets you run E2EE in a Docker container on a Linux VM while the actual agent runs natively on macOS with full access to your local files, memory, and skills.
### How It Works
```
macOS (Host):
└─ hermes gateway
├─ api_server adapter ← listens on 0.0.0.0:8642
├─ AIAgent ← single source of truth
├─ Sessions, memory, skills
└─ Local file access (Obsidian, projects, etc.)
Linux VM (Docker):
└─ hermes gateway (proxy mode)
├─ Matrix adapter ← E2EE decryption/encryption
└─ HTTP forward → macOS:8642/v1/chat/completions
(no LLM API keys, no agent, no inference)
```
The Docker container only handles Matrix protocol + E2EE. When a message arrives, it decrypts it and forwards the text to the host via a standard HTTP request. The host runs the agent, calls tools, generates a response, and streams it back. The container encrypts and sends the response to Matrix. All sessions are unified — CLI, Matrix, Telegram, and any other platform share the same memory and conversation history.
### Step 1: Configure the Host (macOS)
Enable the API server so the host accepts incoming requests from the Docker container.
Add to `~/.hermes/.env`:
```bash
API_SERVER_ENABLED=true
API_SERVER_KEY=your-secret-key-here
API_SERVER_HOST=0.0.0.0
```
- `API_SERVER_HOST=0.0.0.0` binds to all interfaces so the Docker container can reach it.
- `API_SERVER_KEY` is required for non-loopback binding. Pick a strong random string.
- The API server runs on port 8642 by default (change with `API_SERVER_PORT` if needed).
Start the gateway:
```bash
hermes gateway
```
You should see the API server start alongside any other platforms you have configured. Verify it's reachable from the VM:
```bash
# From the Linux VM
curl http://<mac-ip>:8642/health
```
### Step 2: Configure the Docker Container (Linux VM)
The container needs Matrix credentials and the proxy URL. It does NOT need LLM API keys.
**`docker-compose.yml`:**
```yaml
services:
hermes-matrix:
build: .
environment:
# Matrix credentials
MATRIX_HOMESERVER: "https://matrix.example.org"
MATRIX_ACCESS_TOKEN: "syt_..."
MATRIX_ALLOWED_USERS: "@you:matrix.example.org"
MATRIX_ENCRYPTION: "true"
MATRIX_DEVICE_ID: "HERMES_BOT"
# Proxy mode — forward to host agent
GATEWAY_PROXY_URL: "http://192.168.1.100:8642"
GATEWAY_PROXY_KEY: "your-secret-key-here"
volumes:
- ./matrix-store:/root/.hermes/platforms/matrix/store
```
**`Dockerfile`:**
```dockerfile
FROM python:3.11-slim
RUN apt-get update && apt-get install -y libolm-dev && rm -rf /var/lib/apt/lists/*
RUN pip install 'hermes-agent[matrix]'
CMD ["hermes", "gateway"]
```
That's the entire container. No API keys for OpenRouter, Anthropic, or any inference provider.
### Step 3: Start Both
1. Start the host gateway first:
```bash
hermes gateway
```
2. Start the Docker container:
```bash
docker compose up -d
```
3. Send a message in an encrypted Matrix room. The container decrypts it, forwards it to the host, and streams the response back.
### Configuration Reference
Proxy mode is configured on the **container side** (the thin gateway):
| Setting | Description |
|---------|-------------|
| `GATEWAY_PROXY_URL` | URL of the remote Hermes API server (e.g., `http://192.168.1.100:8642`) |
| `GATEWAY_PROXY_KEY` | Bearer token for authentication (must match `API_SERVER_KEY` on the host) |
| `gateway.proxy_url` | Same as `GATEWAY_PROXY_URL` but in `config.yaml` |
The host side needs:
| Setting | Description |
|---------|-------------|
| `API_SERVER_ENABLED` | Set to `true` |
| `API_SERVER_KEY` | Bearer token (shared with the container) |
| `API_SERVER_HOST` | Set to `0.0.0.0` for network access |
| `API_SERVER_PORT` | Port number (default: `8642`) |
### Works for Any Platform
Proxy mode is not limited to Matrix. Any platform adapter can use it — set `GATEWAY_PROXY_URL` on any gateway instance and it will forward to the remote agent instead of running one locally. This is useful for any deployment where the platform adapter needs to run in a different environment from the agent (network isolation, E2EE requirements, resource constraints).
:::tip
Session continuity is maintained via the `X-Hermes-Session-Id` header. The host's API server tracks sessions by this ID, so conversations persist across messages just like they would with a local agent.
:::
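For example, a client pinning a session explicitly might send a request like this sketch (the endpoint path appears in the architecture diagram above; the host, key, and session ID are placeholders, and the `model` field is cosmetic as noted earlier):

```shell
curl http://192.168.1.100:8642/v1/chat/completions \
  -H "Authorization: Bearer your-secret-key-here" \
  -H "X-Hermes-Session-Id: matrix-room-abc123" \
  -H "Content-Type: application/json" \
  -d '{"model": "hermes", "messages": [{"role": "user", "content": "hello"}]}'
```

Subsequent requests with the same `X-Hermes-Session-Id` continue the same conversation on the host.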
:::note
**Limitations (v1):** Tool progress messages from the remote agent are not relayed back — the user sees the streamed final response only, not individual tool calls. Dangerous command approval prompts are handled on the host side, not relayed to the Matrix user. These can be addressed in future updates.
:::
### Sync issues / bot falls behind
**Cause**: Long-running tool executions can delay the sync loop, or the homeserver itself may be slow to respond.


@@ -0,0 +1,122 @@
# QQ Bot
Connect Hermes to QQ via the **Official QQ Bot API (v2)**, supporting private (C2C) chats, group @-mentions, guild channels, and direct messages, with voice transcription.
## Overview
The QQ Bot adapter uses the [Official QQ Bot API](https://bot.q.qq.com/wiki/develop/api-v2/) to:
- Receive messages via a persistent **WebSocket** connection to the QQ Gateway
- Send text and markdown replies via the **REST API**
- Download and process images, voice messages, and file attachments
- Transcribe voice messages using Tencent's built-in ASR or a configurable STT provider
## Prerequisites
1. **QQ Bot Application** — Register at [q.qq.com](https://q.qq.com):
- Create a new application and note your **App ID** and **App Secret**
- Enable the required intents: C2C messages, Group @-messages, Guild messages
- Configure your bot in sandbox mode for testing, or publish for production
2. **Dependencies** — The adapter requires `aiohttp` and `httpx`:
```bash
pip install aiohttp httpx
```
## Configuration
### Interactive setup
```bash
hermes setup gateway
```
Select **QQ Bot** from the platform list and follow the prompts.
### Manual configuration
Set the required environment variables in `~/.hermes/.env`:
```bash
QQ_APP_ID=your-app-id
QQ_CLIENT_SECRET=your-app-secret
```
## Environment Variables
| Variable | Description | Default |
|---|---|---|
| `QQ_APP_ID` | QQ Bot App ID (required) | — |
| `QQ_CLIENT_SECRET` | QQ Bot App Secret (required) | — |
| `QQ_HOME_CHANNEL` | OpenID for cron/notification delivery | — |
| `QQ_HOME_CHANNEL_NAME` | Display name for home channel | `Home` |
| `QQ_ALLOWED_USERS` | Comma-separated user OpenIDs for DM access | open (all users) |
| `QQ_ALLOW_ALL_USERS` | Set to `true` to allow all DMs | `false` |
| `QQ_MARKDOWN_SUPPORT` | Enable QQ markdown (msg_type 2) | `true` |
| `QQ_STT_API_KEY` | API key for voice-to-text provider | — |
| `QQ_STT_BASE_URL` | Base URL for STT provider | `https://open.bigmodel.cn/api/coding/paas/v4` |
| `QQ_STT_MODEL` | STT model name | `glm-asr` |
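For instance, a restricted setup that limits DMs to specific users and routes cron output to a home channel might look like this sketch (all OpenIDs are placeholders):

```shell
QQ_APP_ID=your-app-id
QQ_CLIENT_SECRET=your-app-secret
QQ_ALLOWED_USERS=user_openid_1,user_openid_2   # DM allowlist
QQ_HOME_CHANNEL=home_channel_openid            # cron/notification target
QQ_HOME_CHANNEL_NAME=Home
```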
## Advanced Configuration
For fine-grained control, add platform settings to `~/.hermes/config.yaml`:
```yaml
platforms:
qq:
enabled: true
extra:
app_id: "your-app-id"
client_secret: "your-secret"
markdown_support: true
dm_policy: "open" # open | allowlist | disabled
allow_from:
- "user_openid_1"
group_policy: "open" # open | allowlist | disabled
group_allow_from:
- "group_openid_1"
stt:
provider: "zai" # zai (GLM-ASR), openai (Whisper), etc.
baseUrl: "https://open.bigmodel.cn/api/coding/paas/v4"
apiKey: "your-stt-key"
model: "glm-asr"
```
## Voice Messages (STT)
Voice transcription works in two stages:
1. **QQ built-in ASR** (free, always tried first) — QQ provides `asr_refer_text` in voice message attachments, which uses Tencent's own speech recognition
2. **Configured STT provider** (fallback) — If QQ's ASR doesn't return text, the adapter calls an OpenAI-compatible STT API:
- **Zhipu/GLM (zai)**: Default provider, uses `glm-asr` model
- **OpenAI Whisper**: Set `QQ_STT_BASE_URL` and `QQ_STT_MODEL`
- Any OpenAI-compatible STT endpoint
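As an illustration, pointing the fallback at OpenAI's Whisper endpoint might look like the following (the base URL and model name are the standard OpenAI values, not defaults of this adapter):

```shell
QQ_STT_API_KEY=sk-...                          # placeholder OpenAI key
QQ_STT_BASE_URL=https://api.openai.com/v1
QQ_STT_MODEL=whisper-1
```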
## Troubleshooting
### Bot disconnects immediately (quick disconnect)
This usually means:
- **Invalid App ID / Secret** — Double-check your credentials at q.qq.com
- **Missing permissions** — Ensure the bot has the required intents enabled
- **Sandbox-only bot** — If the bot is in sandbox mode, it can only receive messages from QQ's sandbox test channel
### Voice messages not transcribed
1. Check if QQ's built-in `asr_refer_text` is present in the attachment data
2. If using a custom STT provider, verify `QQ_STT_API_KEY` is set correctly
3. Check gateway logs for STT error messages
### Messages not delivered
- Verify the bot's **intents** are enabled at q.qq.com
- Check `QQ_ALLOWED_USERS` if DM access is restricted
- For group messages, ensure the bot is **@mentioned** (group policy may require allowlisting)
- Check `QQ_HOME_CHANNEL` for cron/notification delivery
### Connection errors
- Ensure `aiohttp` and `httpx` are installed: `pip install aiohttp httpx`
- Check network connectivity to `api.sgroup.qq.com` and the WebSocket gateway
- Review gateway logs for detailed error messages and reconnect behavior


@@ -70,7 +70,7 @@ Routes define how different webhook sources are handled. Each route is a named e
| `secret` | **Yes** | HMAC secret for signature validation. Falls back to the global `secret` if not set on the route. Set to `"INSECURE_NO_AUTH"` for testing only (skips validation). |
| `prompt` | No | Template string with dot-notation payload access (e.g. `{pull_request.title}`). If omitted, the full JSON payload is dumped into the prompt. |
| `skills` | No | List of skill names to load for the agent run. |
-| `deliver` | No | Where to send the response: `github_comment`, `telegram`, `discord`, `slack`, `signal`, `sms`, `whatsapp`, `matrix`, `mattermost`, `homeassistant`, `email`, `dingtalk`, `feishu`, `wecom`, `weixin`, `bluebubbles`, or `log` (default). |
+| `deliver` | No | Where to send the response: `github_comment`, `telegram`, `discord`, `slack`, `signal`, `sms`, `whatsapp`, `matrix`, `mattermost`, `homeassistant`, `email`, `dingtalk`, `feishu`, `wecom`, `weixin`, `bluebubbles`, `qqbot`, or `log` (default). |
| `deliver_extra` | No | Additional delivery config — keys depend on `deliver` type (e.g. `repo`, `pr_number`, `chat_id`). Values support the same `{dot.notation}` templates as `prompt`. |
### Full example


@@ -46,6 +46,7 @@ Each session is tagged with its source platform:
| `wecom` | WeCom (WeChat Work) |
| `weixin` | Weixin (personal WeChat) |
| `bluebubbles` | Apple iMessage via BlueBubbles macOS server |
| `qqbot` | QQ Bot (Tencent QQ) via Official API v2 |
| `homeassistant` | Home Assistant conversation |
| `webhook` | Incoming webhooks |
| `api-server` | API server requests |