Broad drift audit against origin/main (b52b63396).
Reference pages (most user-visible drift):
- slash-commands: add /busy, /curator, /footer, /indicator, /redraw, /steer
that were missing; drop non-existent /terminal-setup; fix /q footnote
(resolves to /queue, not /quit); extend CLI-only list with all 24
CLI-only commands in the registry
- cli-commands: add dedicated sections for hermes curator / fallback /
hooks (new subcommands not previously documented); remove stale
hermes honcho standalone section (the plugin registers dynamically
via hermes memory); list curator/fallback/hooks in top-level table;
fix completion to include fish
- toolsets-reference: document the real 52-toolset count; split browser
vs browser-cdp; add discord / discord_admin / spotify / yuanbao;
correct hermes-cli tool count from 36 to 38; fix misleading claim
that hermes-homeassistant adds tools (it's identical to hermes-cli)
- tools-reference: bump tool count 55 -> 68; add 7 Spotify, 5 Yuanbao,
2 Discord toolsets; move browser_cdp/browser_dialog to their own
browser-cdp toolset section
- environment-variables: add 40+ user-facing HERMES_* vars that were
undocumented (--yolo, --accept-hooks, --ignore-*, inference model
override, agent/stream/checkpoint timeouts, OAuth trace, per-platform
batch tuning for Telegram/Discord/Matrix/Feishu/WeCom, cron knobs,
gateway restart/connect timeouts); dedupe the Cron Scheduler section;
replace stale QQ_SANDBOX with QQ_PORTAL_HOST
User-guide (top level):
- cli.md: compression preserves last 20 turns, not 4 (protect_last_n: 20)
- configuration.md: display.platforms is the canonical per-platform
override key; tool_progress_overrides is deprecated and auto-migrated
- profiles.md: model.default is the config key, not model.model
- sessions.md: CLI/TUI session IDs use 6-char hex, gateway uses 8
- checkpoints-and-rollback.md: destructive-command list now matches
_DESTRUCTIVE_PATTERNS (adds rmdir, cp, install, dd)
- docker.md: the container runs as non-root hermes (UID 10000) via
gosu; fix install command (uv pip); add missing --insecure on the
dashboard compose example (required for non-loopback bind)
- security.md: systemctl danger pattern also matches 'restart'
- index.md: built-in tool count 47 -> 68
- integrations/index.md: 6 STT providers, 8 memory providers
- integrations/providers.md: drop fictional dashscope/qwen aliases
Features:
- overview.md: 9 image models (not 8), 9 TTS providers (not 5),
8 memory providers (Supermemory was missing)
- tool-gateway.md: 9 image models
- tools.md: extend common-toolsets list with search / messaging /
spotify / discord / debugging / safe
- fallback-providers.md: add 6 real providers from PROVIDER_REGISTRY
(lmstudio, kimi-coding-cn, stepfun, alibaba-coding-plan,
tencent-tokenhub, azure-foundry)
- plugins.md: Available Hooks table now includes on_session_finalize,
on_session_reset, subagent_stop
- built-in-plugins.md: add the 7 bundled plugins the page didn't
mention (spotify, google_meet, three image_gen providers, two
dashboard examples)
- web-dashboard.md: add --insecure and --tui flags
- cron.md: hermes cron create takes positional schedule/prompt, not
flags
Messaging:
- telegram.md: TELEGRAM_WEBHOOK_SECRET is now REQUIRED when
TELEGRAM_WEBHOOK_URL is set (gateway refuses to start without it
per GHSA-3vpc-7q5r-276h). Biggest user-visible drift in the batch.
- discord.md: HERMES_DISCORD_TEXT_BATCH_SPLIT_DELAY_SECONDS default
is 2.0, not 0.1
- dingtalk.md: document DINGTALK_REQUIRE_MENTION /
FREE_RESPONSE_CHATS / MENTION_PATTERNS / HOME_CHANNEL /
ALLOW_ALL_USERS that the adapter supports
- bluebubbles.md: drop fictional BLUEBUBBLES_SEND_READ_RECEIPTS env
var; the setting lives in platforms.bluebubbles.extra only
- qqbot.md: drop dead QQ_SANDBOX; add real QQ_PORTAL_HOST and
QQ_GROUP_ALLOWED_USERS
- wecom-callback.md: replace 'hermes gateway start' (service-only)
with 'hermes gateway' for first-time setup
Developer-guide:
- architecture.md: refresh tool/toolset counts (61/52), terminal
backend count (7), line counts for run_agent.py (~13.7k), cli.py
(~11.5k), main.py (~10.4k), setup.py (~3.5k), gateway/run.py
(~12.2k), mcp_tool.py (~3.1k); add yuanbao adapter, bump platform
adapter count 18 -> 20
- agent-loop.md: run_agent.py line count 10.7k -> 13.7k
- tools-runtime.md: add vercel_sandbox backend
- adding-tools.md: remove stale 'Discovery import added to
model_tools.py' checklist item (registry auto-discovery)
- adding-platform-adapters.md: mark send_typing / get_chat_info as
concrete base methods; only connect/disconnect/send are abstract
- acp-internals.md: ACP sessions now persist to SessionDB
(~/.hermes/state.db); acp.run_agent call uses
use_unstable_protocol=True
- cron-internals.md: gateway runs scheduler in a dedicated background
thread via _start_cron_ticker, not on a maintenance cycle; locking
is cross-process via fcntl.flock (Unix) / msvcrt.locking (Windows)
- gateway-internals.md: gateway/run.py ~12k lines
- provider-runtime.md: cron DOES support fallback (run_job reads
fallback_providers from config)
- session-storage.md: SCHEMA_VERSION = 11 (not 9); add migrations
10 and 11 (trigram FTS, inline-mode FTS5 re-index); add
api_call_count column to Sessions DDL; document messages_fts_trigram
and state_meta in the architecture tree
- context-compression-and-caching.md: remove the obsolete 'context
pressure warnings' section (warnings were removed for causing
models to give up early)
- context-engine-plugin.md: compress() signature now includes
focus_topic param
- extending-the-cli.md: _build_tui_layout_children signature now
includes model_picker_widget; add to default layout
Also fixed three pre-existing broken links/anchors the build warned
about (docker.md -> api-server.md, yuanbao.md -> cron-jobs.md and
tips#background-tasks, nix-setup.md -> #container-aware-cli).
Regenerated per-skill pages via website/scripts/generate-skill-docs.py
so catalog tables and sidebar are consistent with current SKILL.md
frontmatter.
docusaurus build: clean, no broken links or anchors.
| title | sidebar_label | description |
|---|---|---|
| Parallel Cli | Parallel Cli | Optional vendor skill for Parallel CLI — agent-native web search, extraction, deep research, enrichment, FindAll, and monitoring |
{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}
# Parallel Cli
Optional vendor skill for Parallel CLI — agent-native web search, extraction, deep research, enrichment, FindAll, and monitoring. Prefer JSON output and non-interactive flows.
## Skill metadata
| Field | Value |
|---|---|
| Source | Optional — install with `hermes skills install official/research/parallel-cli` |
| Path | `optional-skills/research/parallel-cli` |
| Version | 1.1.0 |
| Author | Hermes Agent |
| License | MIT |
| Tags | Research, Web, Search, Deep-Research, Enrichment, CLI |
| Related skills | duckduckgo-search, mcporter |
## Reference: full SKILL.md
:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::
## Parallel CLI
Use parallel-cli when the user explicitly wants Parallel, or when a terminal-native workflow would benefit from Parallel's vendor-specific stack for web search, extraction, deep research, enrichment, entity discovery, or monitoring.
This is an optional third-party workflow, not a Hermes core capability.
Important expectations:
- Parallel is a paid service with a free tier, not a fully free local tool.
- It overlaps with Hermes native `web_search`/`web_extract`, so do not prefer it by default for ordinary lookups.
- Prefer this skill when the user mentions Parallel specifically or needs capabilities like Parallel's enrichment, FindAll, or monitor workflows.
`parallel-cli` is designed for agents:

- JSON output via `--json`
- Non-interactive command execution
- Async long-running jobs with `--no-wait`, `status`, and `poll`
- Context chaining with `--previous-interaction-id`
- Search, extract, research, enrichment, entity discovery, and monitoring in one CLI
### When to use it

Prefer this skill when:

- The user explicitly mentions Parallel or `parallel-cli`
- The task needs richer workflows than a simple one-shot search/extract pass
- You need async deep research jobs that can be launched and polled later
- You need structured enrichment, FindAll entity discovery, or monitoring

Prefer Hermes native `web_search` / `web_extract` for quick one-off lookups when Parallel is not specifically requested.
### Installation

Try the least invasive install path available for the environment.

#### Homebrew

```bash
brew install parallel-web/tap/parallel-cli
```

#### npm

```bash
npm install -g parallel-web-cli
```

#### Python package

```bash
pip install "parallel-web-tools[cli]"
```

#### Standalone installer

```bash
curl -fsSL https://parallel.ai/install.sh | bash
```

If you want an isolated Python install, pipx can also work:

```bash
pipx install "parallel-web-tools[cli]"
pipx ensurepath
```
### Authentication

Interactive login:

```bash
parallel-cli login
```

Headless / SSH / CI:

```bash
parallel-cli login --device
```

API key environment variable:

```bash
export PARALLEL_API_KEY="***"
```

Verify current auth status:

```bash
parallel-cli auth
```

If auth requires browser interaction, run with `pty=true`.
### Core rule set

- Always prefer `--json` when you need machine-readable output.
- Prefer explicit arguments and non-interactive flows.
- For long-running jobs, use `--no-wait` and then `status`/`poll`.
- Cite only URLs returned by the CLI output.
- Save large JSON outputs to a temp file when follow-up questions are likely.
- Use background processes only for genuinely long-running workflows; otherwise run in foreground.
- Prefer Hermes native tools unless the user wants Parallel specifically or needs Parallel-only workflows.
### Quick reference

```text
parallel-cli
├── auth
├── login
├── logout
├── search
├── extract / fetch
├── research run|status|poll|processors
├── enrich run|status|poll|plan|suggest|deploy
├── findall run|ingest|status|poll|result|enrich|extend|schema|cancel
└── monitor create|list|get|update|delete|events|event-group|simulate
```
### Common flags and patterns

Commonly useful flags:

- `--json` for structured output
- `--no-wait` for async jobs
- `--previous-interaction-id <id>` for follow-up tasks that reuse earlier context
- `--max-results <n>` for search result count
- `--mode one-shot|agentic` for search behavior
- `--include-domains domain1.com,domain2.com`
- `--exclude-domains domain1.com,domain2.com`
- `--after-date YYYY-MM-DD`

Read from stdin when convenient:

```bash
echo "What is the latest funding for Anthropic?" | parallel-cli search - --json
echo "Research question" | parallel-cli research run - --json
```
### Search

Use for current web lookups with structured results.

```bash
parallel-cli search "What is Anthropic's latest AI model?" --json
parallel-cli search "SEC filings for Apple" --include-domains sec.gov --json
parallel-cli search "bitcoin price" --after-date 2026-01-01 --max-results 10 --json
parallel-cli search "latest browser benchmarks" --mode one-shot --json
parallel-cli search "AI coding agent enterprise reviews" --mode agentic --json
```

Useful constraints:

- `--include-domains` to narrow trusted sources
- `--exclude-domains` to strip noisy domains
- `--after-date` for recency filtering
- `--max-results` when you need broader coverage

If you expect follow-up questions, save output:

```bash
parallel-cli search "latest React 19 changes" --json -o /tmp/react-19-search.json
```

When summarizing results:

- lead with the answer
- include dates, names, and concrete facts
- cite only returned sources
- avoid inventing URLs or source titles
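Once the output is saved to a file, the summary step can be scripted. A minimal sketch in Python, assuming a hypothetical result schema (`results` entries with `title`, `url`, and `date` keys) — inspect the real `--json` output before relying on any field names:

```python
import json

def cite_results(raw: str, limit: int = 5) -> list[str]:
    """Build citation lines from saved search JSON.

    Assumes a hypothetical schema: {"results": [{"title", "url", "date", ...}]}.
    Only URLs present in the output are cited; nothing is invented.
    """
    data = json.loads(raw)
    lines = []
    for item in data.get("results", [])[:limit]:
        date = item.get("date", "n.d.")
        lines.append(f"- {item.get('title', 'untitled')} ({date}): {item.get('url', '')}")
    return lines

# Stand-in for the contents of /tmp/react-19-search.json:
sample = '{"results": [{"title": "React 19 release notes", "url": "https://react.dev/blog", "date": "2024-12-05"}]}'
for line in cite_results(sample):
    print(line)
```

This keeps the "cite only returned sources" rule mechanical: every emitted URL comes straight from the parsed output.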
### Extraction

Use to pull clean content or markdown from a URL.

```bash
parallel-cli extract https://example.com --json
parallel-cli extract https://company.com --objective "Find pricing info" --json
parallel-cli extract https://example.com --full-content --json
parallel-cli fetch https://example.com --json
```

Use `--objective` when the page is broad and you only need one slice of information.
### Deep research

Use for deeper multi-step research tasks that may take time.

Common processor tiers:

- `lite`/`base` for faster, cheaper passes
- `core`/`pro` for more thorough synthesis
- `ultra` for the heaviest research jobs

#### Synchronous

```bash
parallel-cli research run \
  "Compare the leading AI coding agents by pricing, model support, and enterprise controls" \
  --processor core \
  --json
```

#### Async launch + poll

```bash
parallel-cli research run \
  "Compare the leading AI coding agents by pricing, model support, and enterprise controls" \
  --processor ultra \
  --no-wait \
  --json

parallel-cli research status trun_xxx --json
parallel-cli research poll trun_xxx --json
parallel-cli research processors --json
```

#### Context chaining / follow-up

```bash
parallel-cli research run "What are the top AI coding agents?" --json

parallel-cli research run \
  "What enterprise controls does the top-ranked one offer?" \
  --previous-interaction-id trun_xxx \
  --json
```
Recommended Hermes workflow:

- launch with `--no-wait --json`
- capture the returned run/task ID
- if the user wants to continue other work, keep moving
- later call `status` or `poll`
- summarize the final report with citations from the returned sources
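The launch-and-poll steps above can be sketched as a generic helper. This is a hypothetical sketch, not part of `parallel-cli`: in real use `get_status` would wrap `parallel-cli research status <id> --json` via subprocess and parse the JSON, and the terminal state names (`completed`, `failed`) are assumptions — check the actual status values first:

```python
import time

def poll_until_done(get_status, interval: float = 10.0, timeout: float = 600.0,
                    done_states=("completed", "failed")):
    """Call get_status() until it reports a terminal state or the timeout expires.

    get_status is any zero-arg callable returning a dict with a "status" key
    (assumed shape; verify against real parallel-cli output).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status.get("status") in done_states:
            return status
        time.sleep(interval)
    raise TimeoutError("research run did not finish in time")

# Demo with a fake status source standing in for the CLI call:
states = iter([{"status": "running"}, {"status": "running"}, {"status": "completed"}])
result = poll_until_done(lambda: next(states), interval=0.01)
print(result["status"])  # completed
```

Keeping the status source injectable is what lets the agent continue other work between polls instead of blocking on one long-running subprocess.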
### Enrichment

Use when the user has CSV/JSON/tabular inputs and wants additional columns inferred from web research.

#### Suggest columns

```bash
parallel-cli enrich suggest "Find the CEO and annual revenue" --json
```

#### Plan a config

```bash
parallel-cli enrich plan -o config.yaml
```

#### Inline data

```bash
parallel-cli enrich run \
  --data '[{"company": "Anthropic"}, {"company": "Mistral"}]' \
  --intent "Find headquarters and employee count" \
  --json
```

#### Non-interactive file run

```bash
parallel-cli enrich run \
  --source-type csv \
  --source companies.csv \
  --target enriched.csv \
  --source-columns '[{"name": "company", "description": "Company name"}]' \
  --intent "Find the CEO and annual revenue"
```

#### YAML config run

```bash
parallel-cli enrich run config.yaml
```

#### Status / polling

```bash
parallel-cli enrich status <task_group_id> --json
parallel-cli enrich poll <task_group_id> --json
```

Use explicit JSON arrays for column definitions when operating non-interactively. Validate the output file before reporting success.
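The validation step can itself be scripted rather than eyeballed. A minimal sketch; the column names (`ceo`, `annual_revenue`) are hypothetical and would come from whatever your `--intent` asked for:

```python
import csv
import io

def validate_enriched(text: str, required: list[str]) -> list[str]:
    """Return a list of problems found in enriched CSV text (empty list means OK)."""
    rows = list(csv.DictReader(io.StringIO(text)))
    problems = []
    if not rows:
        problems.append("no data rows")
        return problems
    missing = [c for c in required if c not in rows[0]]
    if missing:
        problems.append(f"missing columns: {missing}")
        return problems
    for i, row in enumerate(rows, start=2):  # line 1 is the header
        for col in required:
            if not (row.get(col) or "").strip():
                problems.append(f"line {i}: empty {col!r}")
    return problems

# Stand-in for the contents of enriched.csv:
sample = "company,ceo,annual_revenue\nAnthropic,Dario Amodei,\n"
print(validate_enriched(sample, ["ceo", "annual_revenue"]))
```

Reporting success only when this returns an empty list keeps the "validate before reporting" rule honest.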
### FindAll

Use for web-scale entity discovery when the user wants a discovered dataset rather than a short answer.

```bash
parallel-cli findall run "Find AI coding agent startups with enterprise offerings" --json
parallel-cli findall run "AI startups in healthcare" -n 25 --json

parallel-cli findall status <run_id> --json
parallel-cli findall poll <run_id> --json
parallel-cli findall result <run_id> --json
parallel-cli findall schema <run_id> --json
```

This is a better fit than ordinary search when the user wants a discovered set of entities that can be reviewed, filtered, or enriched later.
### Monitor

Use for ongoing change detection over time.

```bash
parallel-cli monitor list --json
parallel-cli monitor get <monitor_id> --json
parallel-cli monitor events <monitor_id> --json
parallel-cli monitor delete <monitor_id> --json
```

Creation is usually the sensitive part because cadence and delivery matter:

```bash
parallel-cli monitor create --help
```

Use this when the user wants recurring tracking of a page or source rather than a one-time fetch.
### Recommended Hermes usage patterns

#### Fast answer with citations

- Run `parallel-cli search ... --json`
- Parse titles, URLs, dates, excerpts
- Summarize with inline citations from the returned URLs only

#### URL investigation

- Run `parallel-cli extract URL --json`
- If needed, rerun with `--objective` or `--full-content`
- Quote or summarize the extracted markdown

#### Long research workflow

- Run `parallel-cli research run ... --no-wait --json`
- Store the returned ID
- Continue other work or periodically poll
- Summarize the final report with citations

#### Structured enrichment workflow

- Inspect the input file and columns
- Use `enrich suggest` or provide explicit enriched columns
- Run `enrich run`
- Poll for completion if needed
- Validate the output file before reporting success
### Error handling and exit codes

The CLI documents these exit codes:

| Code | Meaning |
|---|---|
| 0 | success |
| 2 | bad input |
| 3 | auth error |
| 4 | API error |
| 5 | timeout |

If you hit auth errors:

- check `parallel-cli auth`
- confirm `PARALLEL_API_KEY` or run `parallel-cli login` / `parallel-cli login --device`
- verify `parallel-cli` is on `PATH`
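A wrapper can translate return codes into the documented meanings when deciding whether to retry, re-auth, or give up. A sketch assuming only the exit-code table above; `EXIT_CODES` and `run_parallel` are hypothetical helper names, not part of the CLI:

```python
import subprocess

# Mapping taken directly from the documented exit codes.
EXIT_CODES = {0: "success", 2: "bad input", 3: "auth error", 4: "API error", 5: "timeout"}

def run_parallel(args: list[str]) -> tuple[int, str]:
    """Run parallel-cli and pair its exit code with the documented meaning."""
    proc = subprocess.run(["parallel-cli", *args], capture_output=True, text=True)
    return proc.returncode, EXIT_CODES.get(proc.returncode, "unknown")

print(EXIT_CODES[3])  # auth error
```

An exit code of 3 is the cue to walk the auth checklist above before retrying anything else.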
### Maintenance

Check current auth / install state:

```bash
parallel-cli auth
parallel-cli --help
```

Update commands:

```bash
parallel-cli update
pip install --upgrade parallel-web-tools
parallel-cli config auto-update-check off
```
### Pitfalls

- Do not omit `--json` unless the user explicitly wants human-formatted output.
- Do not cite sources not present in the CLI output.
- `login` may require PTY/browser interaction.
- Prefer foreground execution for short tasks; do not overuse background processes.
- For large result sets, save JSON to `/tmp/*.json` instead of stuffing everything into context.
- Do not silently choose Parallel when Hermes native tools are already sufficient.
- Remember this is a vendor workflow that usually requires account auth and paid usage beyond the free tier.