refactor: route ad-hoc LLM consumers through centralized provider router

Route all remaining ad-hoc auxiliary LLM call sites through
resolve_provider_client() so auth, headers, and API format (Chat
Completions vs Responses API) are handled consistently in one place.
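From the call sites in this diff, resolve_provider_client(provider, async_mode=False) returns a (client, default_model) tuple and yields a None client when the provider cannot be resolved. A minimal sketch of that contract and the consumer pattern the commit standardizes on (the registry, env-var lookup, and model name below are assumptions, not the actual router internals):

```python
import os
from typing import Optional, Tuple

def resolve_provider_client(
    provider: str, async_mode: bool = False
) -> Tuple[Optional[object], Optional[str]]:
    """Illustrative stand-in for agent.auxiliary_client.resolve_provider_client.

    Contract inferred from the diff: returns (client, default_model);
    client is None when resolution fails, so callers degrade gracefully.
    """
    env_keys = {"openrouter": "OPENROUTER_API_KEY"}  # hypothetical registry
    key_var = env_keys.get(provider)
    if key_var is None or not os.getenv(key_var):
        return None, None
    # The real router would construct an (Async)OpenAI client here with the
    # provider's base_url, auth headers, and API-format selection.
    return object(), "example/default-model"

def llm_audit(static_result: str) -> str:
    # The consumer pattern this commit routes every call site through:
    client, _default_model = resolve_provider_client("openrouter")
    if client is None:
        return static_result  # no key / unknown provider: keep static result
    return "llm-audited"      # placeholder for the actual chat completion
```

The key property is the None-client escape hatch: consumers never raise on missing credentials, they just skip the LLM step.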

Files changed:

- tools/openrouter_client.py: Replace manual AsyncOpenAI construction
  with resolve_provider_client('openrouter', async_mode=True). The
  shared client module now delegates entirely to the router.

- tools/skills_guard.py: Replace inline OpenAI client construction
  (hardcoded OpenRouter base_url, manual api_key lookup, manual
  headers) with resolve_provider_client('openrouter'). Remove unused
  OPENROUTER_BASE_URL import.

- trajectory_compressor.py: Add _detect_provider() to map config
  base_url to a provider name, then route through
  resolve_provider_client. Falls back to raw construction for
  unrecognized custom endpoints.

- mini_swe_runner.py: Route default case (no explicit api_key/base_url)
  through resolve_provider_client('openrouter') with auto-detection
  fallback. Preserves direct construction when explicit creds are
  passed via CLI args.
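The selection logic described above can be sketched as follows; build_client, _resolve_via_router, and the return shapes are hypothetical names for illustration, not the actual mini_swe_runner.py code.

```python
from typing import Optional

def _resolve_via_router(provider: str):
    # Stand-in for resolve_provider_client; pretend resolution failed so the
    # example exercises the fallback branch.
    return None, None

def build_client(api_key: Optional[str] = None,
                 base_url: Optional[str] = None) -> dict:
    """Hypothetical version of the client-selection logic."""
    if api_key or base_url:
        # Explicit creds from CLI args keep the original direct path.
        return {"mode": "direct", "api_key": api_key, "base_url": base_url}
    # Default case: route through the centralized router...
    client, _default_model = _resolve_via_router("openrouter")
    if client is not None:
        return {"mode": "router"}
    # ...with auto-detection as the fallback when resolution fails.
    return {"mode": "auto-detect"}
```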

- agent/auxiliary_client.py: Fix stale module docstring — vision auto
  mode now correctly documents that Codex and custom endpoints are
  tried (not skipped).
Author: teknium1, 2026-03-11 20:02:36 -07:00
Commit: 07f09ecd83 (parent 8805e705a7)
5 changed files with 97 additions and 89 deletions

tools/skills_guard.py:
@@ -29,7 +29,7 @@ from datetime import datetime, timezone
 from pathlib import Path
 from typing import List, Tuple
-from hermes_constants import OPENROUTER_BASE_URL
 # ---------------------------------------------------------------------------
@@ -934,24 +934,14 @@ def llm_audit_skill(skill_path: Path, static_result: ScanResult,
     if not model:
         return static_result
-    # Call the LLM via the OpenAI SDK (same pattern as run_agent.py)
+    # Call the LLM via the centralized provider router
     try:
-        from openai import OpenAI
-        import os
+        from agent.auxiliary_client import resolve_provider_client
-        api_key = os.getenv("OPENROUTER_API_KEY", "")
-        if not api_key:
+        client, _default_model = resolve_provider_client("openrouter")
+        if client is None:
             return static_result
-        client = OpenAI(
-            base_url=OPENROUTER_BASE_URL,
-            api_key=api_key,
-            default_headers={
-                "HTTP-Referer": "https://github.com/NousResearch/hermes-agent",
-                "X-OpenRouter-Title": "Hermes Agent",
-                "X-OpenRouter-Categories": "productivity,cli-agent",
-            },
-        )
         response = client.chat.completions.create(
             model=model,
             messages=[{