hermes-agent/website/docs
teknium1 68fbae5692 docs: add Custom & Self-Hosted LLM Providers guide
Comprehensive guide for using Hermes Agent with alternative LLM backends:
- Ollama (local models, zero config)
- vLLM (high-performance GPU inference)
- SGLang (RadixAttention, prefix caching)
- llama.cpp / llama-server (CPU & Metal inference)
- LiteLLM Proxy (multi-provider gateway)
- ClawRouter (cost-optimized routing with complexity scoring)
- Table of 10+ other compatible providers (Together, Groq, DeepSeek, etc.)
- Choosing the Right Setup decision table
- General custom endpoint setup instructions

All of these work via the existing OPENAI_BASE_URL + OPENAI_API_KEY
custom endpoint support — no code changes needed.
2026-03-06 14:16:06 -08:00
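As a sketch of the custom-endpoint setup the commit describes, assuming Ollama's default OpenAI-compatible endpoint (the URL, key value, and model name below are illustrative, not confirmed by the guide itself):

```shell
# Hypothetical example: point Hermes Agent at a local Ollama server.
# Ollama's OpenAI-compatible API is served at /v1 on port 11434 by default.
export OPENAI_BASE_URL="http://localhost:11434/v1"
# Ollama does not validate the key, but the variable must be set to something.
export OPENAI_API_KEY="ollama"
```

The same two variables should work for any of the listed backends (vLLM, SGLang, llama-server, LiteLLM, etc.) by swapping in that server's base URL and, where required, a real API key.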
developer-guide Merge PR #451: feat: Add Daytona environment backend 2026-03-06 03:32:40 -08:00
getting-started fix: allow self-hosted Firecrawl without API key + add self-hosting docs 2026-03-05 16:44:21 -08:00
reference docs: complete Daytona backend documentation coverage 2026-03-06 03:37:05 -08:00
user-guide docs: add Custom & Self-Hosted LLM Providers guide 2026-03-06 14:16:06 -08:00
index.md docs: rebrand messaging — 'the self-improving AI agent' 2026-03-06 04:34:06 -08:00