Mirror of https://github.com/NousResearch/hermes-agent.git
Synced 2026-04-25 00:51:20 +00:00
docs: add context length detection references to FAQ and quickstart (#2179)
- quickstart.md: mention context length prompt for custom endpoints, link to configuration docs, add Ollama to provider table
- faq.md: rewrite local models section with `hermes model` flow and context length prompt example, add Ollama `num_ctx` tip, expand context-length-exceeded troubleshooting with detection override options and config.yaml examples

Co-authored-by: Test <test@test.com>
This commit is contained in:
parent: c52353cf8a
commit: 80e578d3e3

2 changed files with 44 additions and 8 deletions
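The commit message above mentions config.yaml examples for overriding context length detection. As a rough illustration of what such an override might look like (the key names below are assumptions for illustration, not the documented Hermes schema):

```yaml
# Hypothetical sketch of a custom-endpoint entry with a manual
# context-length override; key names are assumed, not confirmed.
model:
  provider: custom
  base_url: http://localhost:8000/v1   # e.g. a local vLLM or SGLang server
  context_length: 32768                # skip auto-detection, use this value
```

A fixed `context_length` like this is the kind of override the troubleshooting section reportedly covers for endpoints where auto-detection fails.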
quickstart.md

```diff
@@ -54,10 +54,10 @@ hermes setup # Or configure everything at once
 | **OpenCode Zen** | Pay-as-you-go access to curated models | Set `OPENCODE_ZEN_API_KEY` |
 | **OpenCode Go** | $10/month subscription for open models | Set `OPENCODE_GO_API_KEY` |
 | **Vercel AI Gateway** | Vercel AI Gateway routing | Set `AI_GATEWAY_API_KEY` |
-| **Custom Endpoint** | VLLM, SGLang, or any OpenAI-compatible API | Set base URL + API key |
+| **Custom Endpoint** | VLLM, SGLang, Ollama, or any OpenAI-compatible API | Set base URL + API key |
 
 :::tip
-You can switch providers at any time with `hermes model` — no code changes, no lock-in.
+You can switch providers at any time with `hermes model` — no code changes, no lock-in. When configuring a custom endpoint, Hermes will prompt for the context window size and auto-detect it when possible. See [Context Length Detection](../user-guide/configuration.md#context-length-detection) for details.
 :::
 
 ## 3. Start Chatting
```
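The commit message also mentions an Ollama `num_ctx` tip. Ollama's default context window is small, so when pointing Hermes at an Ollama endpoint it can help to build a model variant with a larger `num_ctx`. A minimal Modelfile sketch (the base model name here is just an example):

```
FROM qwen2.5:7b
PARAMETER num_ctx 32768
```

Built with something like `ollama create qwen2.5-32k -f Modelfile`, then the larger window can be reported to Hermes at the context length prompt.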