
---
slug: /
sidebar_position: 0
title: Hermes Agent Documentation
description: The self-improving AI agent built by Nous Research. A built-in learning loop that creates skills from experience, improves them during use, and remembers across sessions.
hide_table_of_contents: true
displayed_sidebar: docs
---

# Hermes Agent

The self-improving AI agent built by Nous Research. The only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, and builds a deepening model of who you are across sessions.

## Install

**Linux / macOS / WSL2**

```bash
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
```

**Windows (native, PowerShell)**

```powershell
irm https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.ps1 | iex
```

**Android (Termux)** — same `curl` one-liner as Linux; the installer auto-detects Termux.

See the full Installation Guide for what the installer does, the per-user vs root layout, and Windows-specific notes.
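The Termux auto-detection mentioned above can be approximated like this — a hypothetical sketch, not the actual logic in `install.sh`, which may use different checks:

```shell
# Hypothetical sketch of Termux detection (illustrative; the real
# install.sh may differ). Termux sets TERMUX_VERSION and installs
# under /data/data/com.termux/files.
detect_platform() {
  if [ -n "${TERMUX_VERSION:-}" ] || [ -d "/data/data/com.termux/files" ]; then
    echo "termux"
  else
    echo "standard"
  fi
}

detect_platform
```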

## What is Hermes Agent?

It's not a coding copilot tethered to an IDE or a chatbot wrapper around a single API. It's an autonomous agent that gets more capable the longer it runs. It lives wherever you put it — a $5 VPS, a GPU cluster, or serverless infrastructure (Daytona, Modal) that costs nearly nothing when idle. Talk to it from Telegram while it works on a cloud VM you never SSH into yourself. It's not tied to your laptop.

- 🚀 **Installation**: Install in 60 seconds on Linux, macOS, WSL2, or native Windows
- 📖 **Quickstart Tutorial**: Your first conversation and key features to try
- 🗺️ **Learning Path**: Find the right docs for your experience level
- ⚙️ **Configuration**: Config file, providers, models, and options
- 💬 **Messaging Gateway**: Set up Telegram, Discord, Slack, WhatsApp, Teams, or more
- 🔧 **Tools & Toolsets**: 68 built-in tools and how to configure them
- 🧠 **Memory System**: Persistent memory that grows across sessions
- 📚 **Skills System**: Procedural memory the agent creates and reuses
- 🔌 **MCP Integration**: Connect to MCP servers, filter their tools, and extend Hermes safely
- 🧭 **Use MCP with Hermes**: Practical MCP setup patterns, examples, and tutorials
- 🎙️ **Voice Mode**: Real-time voice interaction in CLI, Telegram, Discord, and Discord VC
- 🗣️ **Use Voice Mode with Hermes**: Hands-on setup and usage patterns for Hermes voice workflows
- 🎭 **Personality & SOUL.md**: Define Hermes' default voice with a global SOUL.md
- 📄 **Context Files**: Project context files that shape every conversation
- 🔒 **Security**: Command approval, authorization, container isolation
- 💡 **Tips & Best Practices**: Quick wins to get the most out of Hermes
- 🏗️ **Architecture**: How it works under the hood
- **FAQ & Troubleshooting**: Common questions and solutions

## Key Features

- **A closed learning loop** — Agent-curated memory with periodic nudges, autonomous skill creation, skill self-improvement during use, FTS5 cross-session recall with LLM summarization, and Honcho dialectic user modeling
- **Runs anywhere, not just your laptop** — 6 terminal backends: local, Docker, SSH, Daytona, Singularity, Modal. Daytona and Modal offer serverless persistence — your environment hibernates when idle, costing nearly nothing
- **Lives where you do** — CLI, Telegram, Discord, Slack, WhatsApp, Signal, Matrix, Mattermost, Email, SMS, DingTalk, Feishu, WeCom, BlueBubbles, Home Assistant, Microsoft Teams — 15+ platforms from one gateway
- **Built by model trainers** — Created by Nous Research, the lab behind Hermes, Nomos, and Psyche. Works with Nous Portal, OpenRouter, OpenAI, or any endpoint
- **Scheduled automations** — Built-in cron with delivery to any platform
- **Delegates & parallelizes** — Spawn isolated subagents for parallel workstreams. Programmatic Tool Calling via execute_code collapses multi-step pipelines into single inference calls
- **Open standard skills** — Compatible with agentskills.io. Skills are portable, shareable, and community-contributed via the Skills Hub
- **Full web control** — Search, extract, browse, vision, image generation, TTS
- **MCP support** — Connect to any MCP server for extended tool capabilities
- **Research-ready** — Batch processing, trajectory export, RL training with Atropos
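FTS5 cross-session recall, mentioned in the learning-loop feature above, can be sketched with SQLite's built-in FTS5 extension. The schema and data here are hypothetical illustrations, not Hermes' actual tables:

```python
# Minimal sketch of FTS5-backed cross-session recall (hypothetical
# schema; not Hermes' real storage). FTS5 gives ranked full-text
# search over stored memories; an LLM would then summarize the hits.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(session, content)")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [
        ("2026-05-01", "User prefers dark mode and vim keybindings"),
        ("2026-05-03", "Deployed the gateway to a $5 VPS via SSH backend"),
        ("2026-05-07", "User timezone is US/Pacific; schedule reports at 9am"),
    ],
)

# ORDER BY rank sorts by bm25 relevance, best match first.
rows = conn.execute(
    "SELECT session, content FROM memories WHERE memories MATCH ? ORDER BY rank",
    ("gateway OR vps",),
).fetchall()
print(rows)
```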

## For LLMs and coding agents

Machine-readable entry points to this documentation:

- `/llms.txt` — curated index of every doc page with short descriptions. ~17 KB, safe to load into an LLM context.
- `/llms-full.txt` — every doc page concatenated into a single markdown file for one-shot ingestion. ~1.8 MB.

Both files also resolve at `/docs/llms.txt` and `/docs/llms-full.txt`. Generated fresh on every deploy.
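The "safe to load" claim can be sanity-checked with the common ~4-characters-per-token rule of thumb (an approximation, not a Hermes constant, and the 128k context window below is just an assumed budget):

```python
# Rough context-budget check using the ~4 chars/token heuristic.
# Both the ratio and the 128k window are illustrative assumptions.
def fits_context(size_bytes: int, context_tokens: int = 128_000) -> bool:
    est_tokens = size_bytes / 4
    return est_tokens <= context_tokens

print(fits_context(17 * 1024))   # llms.txt, ~17 KB -> True
print(fits_context(1_800_000))   # llms-full.txt, ~1.8 MB -> False
```

In other words, the curated index fits comfortably in a single context window, while the full concatenation is better suited to chunked or retrieval-based ingestion.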