mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-04-25 00:51:20 +00:00
fix(gateway): STT config resolution, stream consumer flood control fallback
Three targeted fixes from user-reported issues:

1. STT config resolution (transcription_tools.py): _has_openai_audio_backend() and _resolve_openai_audio_client_config() now check stt.openai.api_key/base_url in config.yaml FIRST, before falling back to env vars. Fixes voice transcription breaking when using a custom OpenAI-compatible endpoint via config.yaml.

2. Stream consumer flood control fallback (stream_consumer.py): when an edit fails mid-stream (e.g. Telegram flood control returns failure for waits >5s), reset _already_sent to False so the normal final-send path delivers the complete response. Previously, a truncated partial was left as the final message.

3. Telegram edit_message comment alignment (telegram.py): clarify that long flood waits return failure so streaming can fall back to a normal final send.
parent 970042deab
commit 28380e7aed
3 changed files with 20 additions and 16 deletions
@@ -901,9 +901,8 @@ class TelegramAdapter(BasePlatformAdapter):
                 pass  # best-effort truncation
             return SendResult(success=True, message_id=message_id)
             # Flood control / RetryAfter — short waits are retried inline,
-            # long waits (>5s) return a failure so the caller can decide
-            # whether to wait or degrade gracefully. (grammY auto-retry
-            # pattern: maxDelaySeconds threshold.)
+            # long waits return a failure immediately so streaming can fall back
+            # to a normal final send instead of leaving a truncated partial.
             retry_after = getattr(e, "retry_after", None)
             if retry_after is not None or "retry after" in err_str:
                 wait = retry_after if retry_after else 1.0
@@ -912,12 +911,7 @@ class TelegramAdapter(BasePlatformAdapter):
                     self.name, wait,
                 )
                 if wait > 5.0:
-                    # Long wait — return failure immediately so callers
-                    # (progress edits, stream consumer) aren't blocked.
-                    return SendResult(
-                        success=False,
-                        error=f"flood_control:{wait}",
-                    )
+                    return SendResult(success=False, error=f"flood_control:{wait}")
                 await asyncio.sleep(wait)
                 try:
                     await self._bot.edit_message_text(