mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-04-28 01:21:43 +00:00
feat: add supports_parallel_tool_calls for MCP servers
Port from openai/codex#17667: MCP servers can now opt in to parallel tool execution by setting `supports_parallel_tool_calls: true` in their config. This allows tools from the same server to run concurrently within a single tool-call batch, matching the behavior already available for built-in tools like web_search and read_file.

Previously all MCP tools were forced sequential because they weren't in the _PARALLEL_SAFE_TOOLS set. Now _should_parallelize_tool_batch checks is_mcp_tool_parallel_safe(), which looks up the server's config flag.

Config example:

    mcp_servers:
      docs:
        command: "docs-server"
        supports_parallel_tool_calls: true

Changes:
- tools/mcp_tool.py: Track parallel-safe servers in a _parallel_safe_servers set, populated during register_mcp_servers(). Add is_mcp_tool_parallel_safe() as a public API.
- run_agent.py: Add an _is_mcp_tool_parallel_safe() lazy-import wrapper. Update _should_parallelize_tool_batch() to check MCP tools against the server config.
- 11 new tests covering the feature end-to-end.
- Updated MCP docs and config reference.
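The gating described above can be sketched roughly as follows. The names mirror those in the commit message (`_parallel_safe_servers`, `register_mcp_servers()`, `is_mcp_tool_parallel_safe()`), but the signatures and the `"<server>__<tool>"` naming scheme are assumptions for illustration, not the actual hermes-agent code:

```python
# Hypothetical sketch of the parallel-safety lookup; not the real implementation.
_parallel_safe_servers: set[str] = set()

def register_mcp_servers(config: dict) -> None:
    """Record which configured servers opted in to parallel tool calls."""
    for name, server_cfg in config.get("mcp_servers", {}).items():
        if server_cfg.get("supports_parallel_tool_calls", False):
            _parallel_safe_servers.add(name)

def is_mcp_tool_parallel_safe(tool_name: str) -> bool:
    """True if the tool belongs to a server whose config flag opted in."""
    # Assume MCP tool names are namespaced as "<server>__<tool>".
    server, sep, _ = tool_name.partition("__")
    return bool(sep) and server in _parallel_safe_servers

register_mcp_servers({
    "mcp_servers": {
        "docs": {"command": "docs-server", "supports_parallel_tool_calls": True},
        "db": {"command": "db-server"},  # no flag: stays sequential
    }
})
```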
This commit is contained in:
parent
847d7cbea5
commit
64e7226068
6 changed files with 263 additions and 2 deletions
@@ -105,6 +105,7 @@ Hermes reads MCP config from `~/.hermes/config.yaml` under `mcp_servers`.

| `timeout` | number | Tool call timeout |
| `connect_timeout` | number | Initial connection timeout |
| `enabled` | bool | If `false`, Hermes skips the server entirely |
| `supports_parallel_tool_calls` | bool | If `true`, tools from this server may run concurrently |
| `tools` | mapping | Per-server tool filtering and utility policy |
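Taken together, a server entry using the options above might look like this (values are illustrative, not documented defaults):

```yaml
mcp_servers:
  docs:
    command: "docs-server"
    timeout: 60
    connect_timeout: 10
    enabled: true
    supports_parallel_tool_calls: true
```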
### Minimal stdio example
@@ -409,6 +410,23 @@ Because Hermes now only registers those wrappers when both are true:

This is intentional and keeps the tool list honest.
## Parallel Tool Calls

By default, MCP tools run sequentially, one at a time. If your MCP server exposes tools that are safe to run concurrently (e.g. read-only queries, independent API calls), you can opt in to parallel execution:

```yaml
mcp_servers:
  docs:
    command: "docs-server"
    supports_parallel_tool_calls: true
```

When `supports_parallel_tool_calls` is `true`, Hermes may execute multiple tools from that server at the same time within a single tool-call batch, just like it does for built-in read-only tools (web_search, read_file, etc.).
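The batching behavior can be sketched with asyncio. This is a hypothetical illustration, not Hermes's actual dispatcher: `execute` is a stand-in for the real tool invocation, and the rule that every call in the batch must be parallel-safe is an assumption based on the description above:

```python
import asyncio

async def execute(call: dict) -> str:
    """Placeholder for the actual tool call."""
    await asyncio.sleep(0)
    return f"{call['name']}: ok"

async def run_tool_batch(calls: list[dict], is_parallel_safe) -> list[str]:
    if calls and all(is_parallel_safe(c["name"]) for c in calls):
        # Every call in the batch is parallel-safe: dispatch concurrently.
        return list(await asyncio.gather(*(execute(c) for c in calls)))
    # Default: preserve sequential, one-at-a-time execution.
    return [await execute(c) for c in calls]

batch = [{"name": "docs__search"}, {"name": "docs__lookup"}]
results = asyncio.run(run_tool_batch(batch, lambda n: n.startswith("docs__")))
```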

:::caution
Only enable parallel calls for MCP servers whose tools are safe to run at the same time. If tools read and write shared state, files, databases, or external resources, review the read/write race conditions before enabling this setting.
:::

## MCP Sampling Support

MCP servers can request LLM inference from Hermes via the `sampling/createMessage` protocol. This allows an MCP server to ask Hermes to generate text on its behalf, which is useful for servers that need LLM capabilities but don't have their own model access.