| File | Latest commit | Date |
|------|---------------|------|
| `__init__.py` | Refactor Terminal and AIAgent cleanup | 2026-02-21 22:31:43 -08:00 |
| `anthropic_adapter.py` | fix(anthropic): use model-native output limits instead of hardcoded 16K (#3426) | 2026-03-27 13:02:52 -07:00 |
| `auxiliary_client.py` | feat(agent): configurable timeouts for auxiliary LLM calls via config.yaml (#3597) | 2026-03-28 14:35:28 -07:00 |
| `context_compressor.py` | fix: cap percentage displays at 100% in stats, gateway, and memory tool (#3599) | 2026-03-28 14:55:18 -07:00 |
| `context_references.py` | fix: add timeout to subprocess calls in context_references (#3469) | 2026-03-27 17:51:14 -07:00 |
| `copilot_acp_client.py` | fix(acp): preserve leading whitespace in streaming chunks | 2026-03-20 09:38:13 -07:00 |
| `display.py` | fix: cap context pressure percentage at 100% in display (#3480) | 2026-03-27 21:42:09 -07:00 |
| `insights.py` | chore: fix 154 f-strings, simplify getattr/URL patterns, remove dead code (#3119) | 2026-03-25 19:47:58 -07:00 |
| `model_metadata.py` | feat: curate HF model picker with OpenRouter analogues (#3440) | 2026-03-27 13:54:46 -07:00 |
| `models_dev.py` | fix: write models.dev disk cache atomically (#3588) | 2026-03-28 14:20:30 -07:00 |
| `prompt_builder.py` | feat: tool-use enforcement + strip budget warnings from history (#3528) | 2026-03-28 07:38:36 -07:00 |
| `prompt_caching.py` | fix(prompt-caching): skip top-level cache_control on role:tool for OpenRouter | 2026-03-21 16:54:43 -07:00 |
| `redact.py` | fix(redact): safely handle non-string inputs | 2026-03-21 16:55:02 -07:00 |
| `skill_commands.py` | fix: disabled skills respected across banner, system prompt, slash commands, and skill_view (#1897) | 2026-03-18 03:17:37 -07:00 |
| `skill_utils.py` | perf(ttft): cache skills prompt with shared skill_utils module (salvage #3366) (#3421) | 2026-03-27 10:54:02 -07:00 |
| `smart_model_routing.py` | feat: integrate GitHub Copilot providers across Hermes | 2026-03-17 23:40:22 -07:00 |
| `title_generator.py` | feat(agent): configurable timeouts for auxiliary LLM calls via config.yaml (#3597) | 2026-03-28 14:35:28 -07:00 |
| `trajectory.py` | Refactor Terminal and AIAgent cleanup | 2026-02-21 22:31:43 -08:00 |
| `usage_pricing.py` | fix: status bar shows 26K instead of 260K for token counts with trailing zeros (#3024) | 2026-03-25 12:45:58 -07:00 |