mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-05-03 02:11:48 +00:00
Merge origin/main into atropos-integrations
Merged main's latest changes including:
- New hermes_cli/ unified CLI commands
- File operations tools, fuzzy match, patch parser
- RL training tools and tinker-atropos submodule
- Enhanced batch_runner and run_agent
- Gateway improvements (Telegram, Discord)
- Cron job management
- Installation scripts

Preserved our branch-specific features:
- Modal backend (atropos/backends/modal_backend.py)
- Modal terminal tool integration (ModalProfile, _ModalSandboxPool, etc.)
- Singularity/Apptainer support
- Atropos AgentEnv Modal config fields
- Combined pyproject.toml extras (atropos + messaging + cron + cli)

Conflict resolution:
- cli.py, model_tools.py, README.md: accepted main (newer features)
- pyproject.toml: combined both extras and package lists
- tools/terminal_tool.py: accepted main's base + re-inserted Modal integration
commit 36ea883d45
79 changed files with 22673 additions and 2082 deletions
.env.example (40 changed lines)

@@ -66,8 +66,8 @@ OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
 OPENROUTER_API_KEY=
 
 # Default model to use (OpenRouter format: provider/model)
-# Examples: anthropic/claude-sonnet-4, openai/gpt-4o, google/gemini-2.0-flash, zhipuai/glm-4-plus
-LLM_MODEL=anthropic/claude-sonnet-4
+# Examples: anthropic/claude-opus-4.6, openai/gpt-4o, google/gemini-2.0-flash, zhipuai/glm-4-plus
+LLM_MODEL=anthropic/claude-opus-4.6
 
 # =============================================================================
 # TOOL API KEYS
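The hunk above only swaps default values; LLM_MODEL is a plain environment variable. A minimal sketch of how an agent process might read it — the function name and hardcoded fallback are illustrative assumptions, not the repo's actual code:

```python
import os

def resolve_model(env=os.environ):
    """Return the configured model, falling back to the diff's new default."""
    # Variable name is from .env.example; the fallback literal is an assumption.
    return env.get("LLM_MODEL") or "anthropic/claude-opus-4.6"

print(resolve_model({"LLM_MODEL": "openai/gpt-4o"}))  # openai/gpt-4o
print(resolve_model({}))                              # anthropic/claude-opus-4.6
```

With an empty environment the call falls back to the default the commit introduces, matching the `LLM_MODEL=anthropic/claude-opus-4.6` line.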
@@ -96,13 +96,17 @@ FAL_KEY=
 # - modal: Runs in Modal cloud sandboxes (scalable, requires Modal account)
 TERMINAL_ENV=local
 
 # Container images (for singularity/docker/modal backends)
 TERMINAL_DOCKER_IMAGE=python:3.11
 TERMINAL_SINGULARITY_IMAGE=docker://python:3.11
 TERMINAL_MODAL_IMAGE=python:3.11
 
-# Working directory inside the container
-TERMINAL_CWD=/tmp
+# Working directory for terminal commands
+# For CLI: "." means current directory (resolved automatically from config.yaml)
+# For containers (docker/singularity/modal): absolute path inside the container
+# Usually managed by config.yaml (terminal.cwd) — uncomment to override
+# TERMINAL_CWD=.
 
 # Default command timeout in seconds
 TERMINAL_TIMEOUT=60
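The new TERMINAL_CWD comments describe backend-dependent semantics: "." resolves to the local working directory for the CLI, while container backends take the value as an absolute path inside the container. A hedged sketch of that resolution logic — the function name, signature, and the `/tmp` container fallback (the diff's old default) are assumptions, not the repo's implementation:

```python
from pathlib import Path

def resolve_terminal_cwd(value: str, backend: str, local_root: str = ".") -> str:
    """Hypothetical resolver mirroring the comment semantics in the diff."""
    if backend == "local":
        # CLI case: "." (or any relative path) resolves against the host cwd.
        return str(Path(local_root if value == "." else value).resolve())
    # Container backends (docker/singularity/modal) use the path verbatim
    # inside the container; "/tmp" fallback is an assumption from the old default.
    return value if value != "." else "/tmp"
```

Usage: `resolve_terminal_cwd("/workspace", "docker")` returns `/workspace` untouched, while `resolve_terminal_cwd(".", "local")` yields an absolute host path.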
@@ -285,3 +289,31 @@ WEB_TOOLS_DEBUG=false
 VISION_TOOLS_DEBUG=false
 MOA_TOOLS_DEBUG=false
 IMAGE_TOOLS_DEBUG=false
+
+# =============================================================================
+# CONTEXT COMPRESSION (Auto-shrinks long conversations)
+# =============================================================================
+# When conversation approaches model's context limit, middle turns are
+# automatically summarized to free up space.
+#
+# CONTEXT_COMPRESSION_ENABLED=true # Enable auto-compression (default: true)
+# CONTEXT_COMPRESSION_THRESHOLD=0.85 # Compress at 85% of context limit
+# CONTEXT_COMPRESSION_MODEL=google/gemini-2.0-flash-001 # Fast model for summaries
+
+# =============================================================================
+# RL TRAINING (Tinker + Atropos)
+# =============================================================================
+# Run reinforcement learning training on language models using the Tinker API.
+# Requires the rl-server to be running (from tinker-atropos package).
+
+# Tinker API Key - RL training service
+# Get at: https://tinker-console.thinkingmachines.ai/keys
+TINKER_API_KEY=
+
+# Weights & Biases API Key - Experiment tracking and metrics
+# Get at: https://wandb.ai/authorize
+WANDB_API_KEY=
+
+# RL API Server URL (default: http://localhost:8080)
+# Change if running the rl-server on a different host/port
+# RL_API_URL=http://localhost:8080