fix(types): batch P1 ty hotfixes + run_agent.py annotation pass

Fixes 15 P1 ship-stopper runtime bugs from the ty triage, plus the
cross-bucket cleanup in run_agent.py. Net: -138 ty diagnostics
(1953 -> 1815). Major wins: not-subscriptable (-34),
unresolved-attribute (-29), invalid-argument-type (-26),
invalid-type-form (-20), unsupported-operator (-18), invalid-key (-9).

Missing refs (structural):
- tools/rl_training_tool.py: RunState dataclass gains api_log_file,
  trainer_log_file, env_log_file fields; stop-run was closing undeclared
  handles.
- agent/credential_pool.py: remove_entry(entry_id) added, symmetric with
  add_entry; used by hermes_cli/web_server.py OAuth dashboard cleanup.
- hermes_cli/config.py: _CamofoxConfig TypedDict defined (was referenced
  by _BrowserConfig but never declared).
- hermes_cli/gateway.py: _setup_wecom_callback() added, mirroring
  _setup_wecom().
- tui_gateway/server.py: skills_hub imports corrected from
  hermes_cli.skills_hub -> tools.skills_hub.
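
A minimal sketch of the RunState fix described above: the log-file field
names come from this commit, but `run_id` and `close_logs` are illustrative
assumptions, not the real tool's API. Declaring the handles as dataclass
fields means stop-run closes real attributes instead of undeclared ones.

```python
import io
from dataclasses import dataclass
from typing import IO, Optional

@dataclass
class RunState:
    run_id: str  # assumed field; the real dataclass carries more state
    api_log_file: Optional[IO[str]] = None
    trainer_log_file: Optional[IO[str]] = None
    env_log_file: Optional[IO[str]] = None

    def close_logs(self) -> None:
        # stop-run can now iterate declared handles; None means never opened
        for handle in (self.api_log_file, self.trainer_log_file, self.env_log_file):
            if handle is not None and not handle.closed:
                handle.close()

state = RunState(run_id="run-1", api_log_file=io.StringIO())
state.close_logs()
```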

Typo / deprecation:
- tools/transcription_tools.py: os.sys.modules -> sys.modules.
- gateway/platforms/bluebubbles.py: datetime.utcnow() ->
  datetime.now(timezone.utc).
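
For reference, the deprecation swap in one line: `datetime.utcnow()` is
deprecated since Python 3.12 and returns a *naive* datetime, while
`datetime.now(timezone.utc)` returns a timezone-aware one.

```python
from datetime import datetime, timezone

# Aware replacement for the deprecated naive datetime.utcnow()
now = datetime.now(timezone.utc)
```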

None-guards:
- gateway/platforms/telegram.py:~2798 - msg.sticker None guard.
- gateway/platforms/discord.py:3602/3637 - interaction.data None +
  SelectMenu narrowing; :3009 - thread_id None before `in`; :1893 -
  guild.member_count None.
- gateway/platforms/matrix.py:2174/2185 - walrus-narrow
  re.search().group().
- agent/display.py:732 - start_time None before elapsed subtraction.
- gateway/run.py:10334 - assert _agent_timeout is not None before `//
  60`.

Platform override signature match:
- gateway/platforms/email.py: send_image accepts metadata kwarg;
  send_document accepts **kwargs (matches base class).
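
A hypothetical base/override pair showing the signature rule behind the
email.py change (class and method shapes are assumptions, not the real
gateway classes): the override must accept at least what the base accepts,
so `**kwargs` keeps the subclass Liskov-compatible.

```python
from typing import Any

class BasePlatform:
    def send_document(self, path: str, **kwargs: Any) -> str:
        return f"base sent {path}"

class EmailPlatform(BasePlatform):
    def send_document(self, path: str, **kwargs: Any) -> str:
        # Extra keywords other platforms pass (e.g. metadata=) are
        # accepted here instead of raising TypeError at runtime.
        return f"email sent {path} with {sorted(kwargs)}"
```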

run_agent.py annotation pass:
- callable/any -> Callable/Any in annotation position (15 sites in
  run_agent.py + 5 in cli.py, toolset_distributions.py,
  tools/delegate_tool.py, hermes_cli/dingtalk_auth.py,
  tui_gateway/server.py).
- conversation_history param widened to list[dict[str, Any]] | None.
- OMIT_TEMPERATURE sentinel guarded from leaking into
  call_llm(temperature): kwargs-dict pattern at run_agent.py:7337 +
  scripts/trajectory_compressor.py:618/688.
- build_anthropic_client(timeout) widened to Optional[float].
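
The callable/any fix in miniature (function and parameter names are
illustrative): `callable` and `any` are builtins, not types, so ty rejects
them in annotation position; the fix is `typing.Callable` / `typing.Any`.

```python
from typing import Any, Callable

# Before (rejected as invalid-type-form):
#   def run(handler: callable, payload: any) -> str: ...
def run(handler: Callable[[dict[str, Any]], str],
        payload: dict[str, Any]) -> str:
    return handler(payload)
```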

Tests:
- tests/agent/test_credential_pool.py: remove_entry (id match,
  unknown-id, priority renumbering).
- tests/hermes_cli/test_config_shapes.py: _CamofoxConfig shape +
  nesting.
- tests/tools/test_rl_training_tool.py: RunState log_file fields.
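
A minimal model of the remove_entry contract the new tests exercise; the
real CredentialPool carries more state, this sketch only captures the three
tested behaviours: id match, unknown-id no-op, and priority renumbering.

```python
class CredentialPool:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def add_entry(self, entry_id: str) -> None:
        self.entries.append({"id": entry_id, "priority": len(self.entries)})

    def remove_entry(self, entry_id: str) -> bool:
        before = len(self.entries)
        self.entries = [e for e in self.entries if e["id"] != entry_id]
        for priority, entry in enumerate(self.entries):
            entry["priority"] = priority  # renumber remaining entries
        return len(self.entries) < before  # False when the id was unknown
```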
alt-glitch 2026-04-21 20:20:13 +05:30
parent b11e53e34f
commit 527ca7d238
24 changed files with 1725 additions and 254 deletions

@@ -611,11 +611,14 @@ Write only the summary, starting with "[CONTEXT SUMMARY]:" prefix."""
         if getattr(self, '_use_call_llm', False):
             from agent.auxiliary_client import call_llm
+            _call_llm_kwargs: dict = {}
+            if summary_temperature is not None:
+                _call_llm_kwargs["temperature"] = summary_temperature
             response = call_llm(
                 provider=self._llm_provider,
                 model=self.config.summarization_model,
                 messages=[{"role": "user", "content": prompt}],
-                temperature=summary_temperature,
+                **_call_llm_kwargs,
                 max_tokens=self.config.summary_target_tokens * 2,
             )
         else:
@@ -627,14 +630,14 @@ Write only the summary, starting with "[CONTEXT SUMMARY]:" prefix."""
             if summary_temperature is not None:
                 _create_kwargs["temperature"] = summary_temperature
             response = self.client.chat.completions.create(**_create_kwargs)
             summary = self._coerce_summary_content(response.choices[0].message.content)
             return self._ensure_summary_prefix(summary)
         except Exception as e:
             metrics.summarization_errors += 1
             self.logger.warning(f"Summarization attempt {attempt + 1} failed: {e}")
             if attempt < self.config.max_retries - 1:
                 time.sleep(jittered_backoff(attempt + 1, base_delay=self.config.retry_delay, max_delay=30.0))
             else:
@@ -681,11 +684,14 @@ Write only the summary, starting with "[CONTEXT SUMMARY]:" prefix."""
         if getattr(self, '_use_call_llm', False):
             from agent.auxiliary_client import async_call_llm
+            _async_llm_kwargs: dict = {}
+            if summary_temperature is not None:
+                _async_llm_kwargs["temperature"] = summary_temperature
             response = await async_call_llm(
                 provider=self._llm_provider,
                 model=self.config.summarization_model,
                 messages=[{"role": "user", "content": prompt}],
-                temperature=summary_temperature,
+                **_async_llm_kwargs,
                 max_tokens=self.config.summary_target_tokens * 2,
             )
         else:
@@ -697,14 +703,14 @@ Write only the summary, starting with "[CONTEXT SUMMARY]:" prefix."""
             if summary_temperature is not None:
                 _create_kwargs["temperature"] = summary_temperature
             response = await self._get_async_client().chat.completions.create(**_create_kwargs)
             summary = self._coerce_summary_content(response.choices[0].message.content)
             return self._ensure_summary_prefix(summary)
         except Exception as e:
             metrics.summarization_errors += 1
             self.logger.warning(f"Summarization attempt {attempt + 1} failed: {e}")
             if attempt < self.config.max_retries - 1:
                 await asyncio.sleep(jittered_backoff(attempt + 1, base_delay=self.config.retry_delay, max_delay=30.0))
             else: