Smoke tests working; fixing up toolserver. Switched to llama.cpp; ollama is too unreliable

This commit is contained in:
Shannon Sands 2026-02-03 11:41:34 +10:00
parent 4939130485
commit 16fb41f9cc
18 changed files with 822 additions and 238 deletions

@@ -22,12 +22,13 @@ HERMES_BACKEND=openai
# of OpenRouter.
#
# Local server convenience (base URL without /v1):
# ATROPOS_SERVER_BASE_URL=http://localhost:11434
# llama.cpp example (see `Hermes-Agent/scripts/launch_llama_cpp_glm47_flash.sh`):
# ATROPOS_SERVER_BASE_URL=http://127.0.0.1:8080
# ATROPOS_SERVER_MODEL=glm-4.7-flash
# ATROPOS_SERVER_API_KEY=local
#
# Generic OpenAI-compatible (base URL should include /v1):
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_BASE_URL=http://127.0.0.1:8080/v1
# OPENAI_API_KEY=local
# =============================================================================
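The two URL styles in the config above differ only in whether `/v1` is included: `ATROPOS_SERVER_BASE_URL` is given without it, while `OPENAI_BASE_URL` already carries it. A minimal sketch of how a client might normalize either form into one OpenAI-compatible endpoint (the helper name and default port are assumptions, not part of this repo):

```python
import os

def resolve_openai_base_url(env=None):
    """Resolve an OpenAI-compatible base URL from the env vars above.

    OPENAI_BASE_URL already includes /v1 and wins if set;
    ATROPOS_SERVER_BASE_URL is given without /v1, so append it.
    Falls back to the llama.cpp default shown in the config.
    (Hypothetical helper for illustration only.)
    """
    if env is None:
        env = os.environ
    url = env.get("OPENAI_BASE_URL")
    if url:
        return url.rstrip("/")
    url = env.get("ATROPOS_SERVER_BASE_URL", "http://127.0.0.1:8080")
    return url.rstrip("/") + "/v1"
```

With the llama.cpp example values, both `ATROPOS_SERVER_BASE_URL=http://127.0.0.1:8080` and `OPENAI_BASE_URL=http://127.0.0.1:8080/v1` resolve to the same endpoint.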