Mirror of https://github.com/NousResearch/hermes-agent.git, synced 2026-04-29 01:31:41 +00:00
fix: remove dead LLM_MODEL env var — add migration to clear stale .env entries (#6543)
The old setup wizard (pre-March 2026) wrote LLM_MODEL to ~/.hermes/.env
across 12 provider flows. Commit 9302690e removed the writes but never
cleaned up existing .env files, leaving a dead variable that:
- Nothing in the codebase reads (zero os.getenv calls)
- The docs incorrectly claimed the gateway still used it as a fallback
- Caused user confusion when debugging model resolution issues
Changes:
- config.py: Bump _config_version 12 → 13, add migration to clear
LLM_MODEL and OPENAI_MODEL from .env (both dead since March 2026)
- environment-variables.md: Remove LLM_MODEL row, fix HERMES_MODEL
description to stop referencing it
- providers.md: Update deprecation notice from 'deprecated' to 'removed'
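
The config.py migration described above could be sketched roughly as follows. This is a hypothetical illustration only: the function name, the `.env` handling, and the assumption of a simple line-oriented `KEY=value` file are mine, not taken from the Hermes codebase.

```python
# Hypothetical sketch of the v12 -> v13 migration: strip dead variables
# from ~/.hermes/.env while leaving every other entry untouched.
from pathlib import Path

DEAD_VARS = {"LLM_MODEL", "OPENAI_MODEL"}  # both dead since March 2026

def migrate_v12_to_v13(env_path: Path) -> None:
    """Remove stale DEAD_VARS entries from a KEY=value .env file."""
    if not env_path.exists():
        return  # nothing to migrate
    kept = []
    for line in env_path.read_text().splitlines():
        key = line.split("=", 1)[0].strip()
        if key in DEAD_VARS:
            continue  # drop the stale entry
        kept.append(line)
    env_path.write_text("\n".join(kept) + ("\n" if kept else ""))
```

Note the idempotence: running the migration twice is harmless, which matters because users may re-run `hermes setup` after the version bump.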
Parent: 8dfc96dbbb
Commit: ad06bfccf0
3 changed files with 18 additions and 4 deletions
providers.md
@@ -230,7 +230,7 @@ model:
 ```
 
 :::warning Legacy env vars
 
-`OPENAI_BASE_URL` and `LLM_MODEL` in `.env` are **deprecated**. `OPENAI_BASE_URL` is no longer consulted for endpoint resolution — `config.yaml` is the single source of truth. The CLI ignores `LLM_MODEL` entirely (only the gateway reads it as a fallback). Use `hermes model` or edit `config.yaml` directly — both persist correctly across restarts and Docker containers.
+`OPENAI_BASE_URL` and `LLM_MODEL` in `.env` are **removed**. Neither is read by any part of Hermes — `config.yaml` is the single source of truth for model and endpoint configuration. If you have stale entries in your `.env`, they are automatically cleared on the next `hermes setup` or config migration. Use `hermes model` or edit `config.yaml` directly.
 
 :::
 
 Both approaches persist to `config.yaml`, which is the source of truth for model, provider, and base URL.
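
As a loose illustration of the "single source of truth" layout the warning above describes, a minimal `config.yaml` might look like the fragment below. Only the `model:` key appears in the hunk context; the other key names are assumptions for illustration, not taken from the Hermes docs.

```yaml
# Hypothetical shape only — actual Hermes config.yaml keys may differ.
model: gpt-4o
provider: openai
base_url: https://api.openai.com/v1
```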
environment-variables.md
@@ -53,8 +53,7 @@ All variables go in `~/.hermes/.env`. You can also set them with `hermes config
 | `OPENCODE_GO_API_KEY` | OpenCode Go API key — $10/month subscription for open models ([opencode.ai](https://opencode.ai/auth)) |
 | `OPENCODE_GO_BASE_URL` | Override OpenCode Go base URL |
 | `CLAUDE_CODE_OAUTH_TOKEN` | Explicit Claude Code token override if you export one manually |
-| `HERMES_MODEL` | Preferred model name (checked before `LLM_MODEL`, used by gateway) |
-| `LLM_MODEL` | Default model name (fallback when not set in config.yaml) |
+| `HERMES_MODEL` | Override model name at process level (used by cron scheduler; prefer `config.yaml` for normal use) |
 | `VOICE_TOOLS_OPENAI_KEY` | Preferred OpenAI key for OpenAI speech-to-text and text-to-speech providers |
 | `HERMES_LOCAL_STT_COMMAND` | Optional local speech-to-text command template. Supports `{input_path}`, `{output_dir}`, `{language}`, and `{model}` placeholders |
 | `HERMES_LOCAL_STT_LANGUAGE` | Default language passed to `HERMES_LOCAL_STT_COMMAND` or auto-detected local `whisper` CLI fallback (default: `en`) |
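
The placeholder-based `HERMES_LOCAL_STT_COMMAND` template documented in the table above could be expanded along these lines. This is a minimal sketch under stated assumptions: the helper function and its defaults are hypothetical, not Hermes code.

```python
# Hedged sketch: fill the documented {input_path}/{output_dir}/{language}/
# {model} placeholders in an STT command template, then tokenize to argv.
import shlex

def expand_stt_command(template: str, input_path: str, output_dir: str,
                       language: str = "en", model: str = "base") -> list[str]:
    """Substitute the documented placeholders and split into an argv list."""
    cmd = template.format(input_path=input_path, output_dir=output_dir,
                          language=language, model=model)
    return shlex.split(cmd)
```

Using `shlex.split` rather than `str.split` keeps quoted arguments (e.g. paths with spaces) intact when the expanded command is handed to a subprocess.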