feat(steer): /steer <prompt> injects a mid-run note after the next tool call (#12116)

* feat(steer): /steer <prompt> injects a mid-run note after the next tool call

Adds a new slash command that sits between /queue (turn boundary) and
interrupt. /steer <text> stashes the message on the running agent and
the agent loop appends it to the LAST tool result's content once the
current tool batch finishes. The model sees it as part of the tool
output on its next iteration.

No interrupt is fired, no new user turn is inserted, and no prompt
cache invalidation happens beyond the normal per-turn tool-result
churn. Message-role alternation is preserved — we only modify an
existing role:"tool" message's content.
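The mechanics above can be sketched as a pure function over an OpenAI-style messages list. This is an illustrative reduction, not the diff's code: `apply_steer` and the sample messages are hypothetical, though the marker format mirrors the one the patch uses.

```python
from typing import Optional

def apply_steer(messages: list, steer_text: Optional[str]) -> None:
    """Append steer text to the LAST role:"tool" message, in place.

    No new message is inserted, so role alternation
    (assistant -> tool -> assistant ...) is untouched.
    """
    if not steer_text:
        return
    for msg in reversed(messages):
        if isinstance(msg, dict) and msg.get("role") == "tool":
            marker = (
                f"\n\n[USER STEER (injected mid-run, not tool output): {steer_text}]"
            )
            msg["content"] = msg.get("content", "") + marker
            return

messages = [
    {"role": "user", "content": "run the build"},
    {"role": "assistant", "content": "", "tool_calls": ["..."]},
    {"role": "tool", "content": "build ok"},
]
apply_steer(messages, "also run the linter")
```

The model then sees the steer inside the tool output on its next iteration, with no extra turn in the transcript.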

Wiring
------
- hermes_cli/commands.py: register /steer + add to ACTIVE_SESSION_BYPASS_COMMANDS.
- run_agent.py: add _pending_steer state, AIAgent.steer(), _drain_pending_steer(),
  _apply_pending_steer_to_tool_results(); drain at end of both parallel and
  sequential tool executors; clear on interrupt; return leftover as
  result['pending_steer'] if the agent exits before another tool batch.
- cli.py: /steer handler — route to agent.steer() when running, fall back to
  the regular queue otherwise; deliver result['pending_steer'] as next turn.
- gateway/run.py: running-agent intercept calls running_agent.steer(); idle-agent
  path strips the prefix and forwards as a regular user message.
- tui_gateway/server.py: new session.steer JSON-RPC method.
- ui-tui: SessionSteerResponse type + local /steer slash command that calls
  session.steer when ui.busy, otherwise enqueues for the next turn.
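For the TUI wiring, a `session.steer` call might look like the following JSON-RPC exchange. The method name comes from the commit; the parameter names (`session_id`, `text`) and the result shape are assumptions for illustration only.

```python
import json

# Hypothetical session.steer request/response pair. "session.steer" is the
# method added in tui_gateway/server.py; the param and result field names
# below are illustrative assumptions, not the server's actual schema.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "session.steer",
    "params": {"session_id": "abc123", "text": "focus on the failing test"},
}
response = {
    "jsonrpc": "2.0",
    "id": 7,  # echoes the request id per JSON-RPC 2.0
    "result": {"accepted": True},  # False when the payload is empty/rejected
}
wire = json.dumps(request)
```

When `ui.busy` is false, the ui-tui client skips this RPC entirely and enqueues the text for the next turn.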

Fallbacks
---------
- Agent exits mid-steer → surfaces in run_conversation result as pending_steer
  so CLI/gateway deliver it as the next user turn instead of silently dropping it.
- All tools skipped after interrupt → re-stashes pending_steer for the caller.
- No active agent → /steer reduces to sending the text as a normal message.
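The first fallback reduces to a small caller-side check. A minimal sketch, assuming a list stands in for whatever next-turn delivery mechanism the CLI/gateway uses (`deliver_result` and `queue` are hypothetical names):

```python
def deliver_result(result: dict, queue: list) -> None:
    """If the agent exited before another tool batch could drain the steer,
    hand the leftover text back as the next user turn instead of dropping it.
    Sketch of the caller-side behavior described above."""
    leftover = result.get("pending_steer")
    if leftover:
        queue.append({"role": "user", "content": leftover})

queue: list = []
deliver_result({"response": "done", "pending_steer": "check the logs"}, queue)
```

A result without `pending_steer` leaves the queue untouched, so the common path costs one dict lookup.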

Tests
-----
- tests/run_agent/test_steer.py — accept/reject, concatenation, drain,
  last-tool-result injection, multimodal list content, thread safety,
  cleared-on-interrupt, registry membership, bypass-set membership.
- tests/gateway/test_steer_command.py — running agent, pending sentinel,
  missing steer() method, rejected payload, empty payload.
- tests/gateway/test_command_bypass_active_session.py — /steer bypasses
  the Level-1 base adapter guard.
- tests/test_tui_gateway_server.py — session.steer RPC paths.

72/72 targeted tests pass under scripts/run_tests.sh.
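The accept/reject and concatenation behavior those tests cover can be reproduced standalone. `StubSteer` below is a hypothetical re-implementation mirroring the `steer()`/`_drain_pending_steer()` semantics from the diff, not the class under test:

```python
import threading
from typing import Optional

class StubSteer:
    """Standalone model of the pending-steer slot, for illustration only."""

    def __init__(self) -> None:
        self._pending: Optional[str] = None
        self._lock = threading.Lock()

    def steer(self, text: str) -> bool:
        # Reject empty or whitespace-only input, as steer() does.
        if not text or not text.strip():
            return False
        cleaned = text.strip()
        with self._lock:
            # Multiple steers before the drain point concatenate with newlines.
            self._pending = f"{self._pending}\n{cleaned}" if self._pending else cleaned
        return True

    def drain(self) -> Optional[str]:
        # Return-and-clear under the lock, like _drain_pending_steer().
        with self._lock:
            text, self._pending = self._pending, None
            return text

s = StubSteer()
```

Driving it: two accepted steers drain as one newline-joined string, and a second drain returns None.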

* feat(steer): register /steer in Discord's native slash tree

Discord's app_commands tree is a curated subset of slash commands (not
derived from COMMAND_REGISTRY like Telegram/Slack). /steer already
works there as plain text (routes through handle_message → base
adapter bypass → runner), but registering it here adds Discord's
native autocomplete + argument hint UI so users can discover and
type it like any other first-class command.
Teknium 2026-04-18 04:17:18 -07:00 committed by GitHub
parent f9667331e5
commit 2edebedc9e
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
12 changed files with 826 additions and 2 deletions


@@ -832,6 +832,16 @@ class AIAgent:
self._interrupt_thread_signal_pending = False
self._client_lock = threading.RLock()
# /steer mechanism — inject a user note into the next tool result
# without interrupting the agent. Unlike interrupt(), steer() does
# NOT set _interrupt_requested; it waits for the current tool batch
# to finish naturally, then the drain hook appends the text to the
# last tool result's content so the model sees it on its next
# iteration. Message-role alternation is preserved (we modify an
# existing tool message rather than inserting a new user turn).
self._pending_steer: Optional[str] = None
self._pending_steer_lock = threading.Lock()
# Concurrent-tool worker thread tracking. `_execute_tool_calls_concurrent`
# runs each tool on its own ThreadPoolExecutor worker — those worker
# threads have tids distinct from `_execution_thread_id`, so
@@ -3265,6 +3275,129 @@ class AIAgent:
_set_interrupt(False, _wtid)
except Exception:
pass
# A hard interrupt supersedes any pending /steer — the steer was
# meant for the agent's next tool-call iteration, which will no
# longer happen. Drop it instead of surprising the user with a
# late injection on the post-interrupt turn.
_steer_lock = getattr(self, "_pending_steer_lock", None)
if _steer_lock is not None:
with _steer_lock:
self._pending_steer = None
def steer(self, text: str) -> bool:
"""
Inject a user message into the next tool result without interrupting.
Unlike interrupt(), this does NOT stop the current tool call. The
text is stashed and the agent loop appends it to the LAST tool
result's content once the current tool batch finishes. The model
sees the steer as part of the tool output on its next iteration.
Thread-safe: callable from gateway/CLI/TUI threads. Multiple calls
before the drain point concatenate with newlines.
Args:
text: The user text to inject. Empty strings are ignored.
Returns:
True if the steer was accepted, False if the text was empty.
"""
if not text or not text.strip():
return False
cleaned = text.strip()
_lock = getattr(self, "_pending_steer_lock", None)
if _lock is None:
# Test stubs that built AIAgent via object.__new__ skip __init__.
# Fall back to direct attribute set; no concurrent callers expected
# in those stubs.
existing = getattr(self, "_pending_steer", None)
self._pending_steer = (existing + "\n" + cleaned) if existing else cleaned
return True
with _lock:
if self._pending_steer:
self._pending_steer = self._pending_steer + "\n" + cleaned
else:
self._pending_steer = cleaned
return True
def _drain_pending_steer(self) -> Optional[str]:
"""Return the pending steer text (if any) and clear the slot.
Safe to call from the agent execution thread after appending tool
results. Returns None when no steer is pending.
"""
_lock = getattr(self, "_pending_steer_lock", None)
if _lock is None:
text = getattr(self, "_pending_steer", None)
self._pending_steer = None
return text
with _lock:
text = self._pending_steer
self._pending_steer = None
return text
def _apply_pending_steer_to_tool_results(self, messages: list, num_tool_msgs: int) -> None:
"""Append any pending /steer text to the last tool result in this turn.
Called at the end of a tool-call batch, before the next API call.
The steer is appended to the last ``role:"tool"`` message's content
with a clear marker so the model understands it came from the user
and NOT from the tool itself. Role alternation is preserved:
nothing new is inserted; we only modify existing content.
Args:
messages: The running messages list.
num_tool_msgs: Number of tool results appended in this batch;
used to locate the tail slice safely.
"""
if num_tool_msgs <= 0 or not messages:
return
steer_text = self._drain_pending_steer()
if not steer_text:
return
# Find the last tool-role message in the recent tail. Skipping
# non-tool messages defends against future code appending
# something else at the boundary.
target_idx = None
for j in range(len(messages) - 1, max(len(messages) - num_tool_msgs - 1, -1), -1):
msg = messages[j]
if isinstance(msg, dict) and msg.get("role") == "tool":
target_idx = j
break
if target_idx is None:
# No tool result in this batch (e.g. all skipped by interrupt);
# put the steer back so the caller's fallback path can deliver
# it as a normal next-turn user message.
_lock = getattr(self, "_pending_steer_lock", None)
if _lock is not None:
with _lock:
if self._pending_steer:
self._pending_steer = self._pending_steer + "\n" + steer_text
else:
self._pending_steer = steer_text
else:
existing = getattr(self, "_pending_steer", None)
self._pending_steer = (existing + "\n" + steer_text) if existing else steer_text
return
marker = f"\n\n[USER STEER (injected mid-run, not tool output): {steer_text}]"
existing_content = messages[target_idx].get("content", "")
if not isinstance(existing_content, str):
# Anthropic multimodal content blocks — preserve them and append
# a text block at the end.
try:
blocks = list(existing_content) if existing_content else []
blocks.append({"type": "text", "text": marker.lstrip()})
messages[target_idx]["content"] = blocks
except Exception:
# Fall back to string replacement if content shape is unexpected.
messages[target_idx]["content"] = f"{existing_content}{marker}"
else:
messages[target_idx]["content"] = existing_content + marker
logger.info(
"Delivered /steer to agent after tool batch (%d chars): %s",
len(steer_text),
steer_text[:120] + ("..." if len(steer_text) > 120 else ""),
)
def _touch_activity(self, desc: str) -> None:
"""Update the last-activity timestamp and description (thread-safe)."""
@@ -7951,6 +8084,13 @@ class AIAgent:
turn_tool_msgs = messages[-num_tools:]
enforce_turn_budget(turn_tool_msgs, env=get_active_env(effective_task_id))
# ── /steer injection ──────────────────────────────────────────────
# Append any pending user steer text to the last tool result so the
# agent sees it on its next iteration. Runs AFTER budget enforcement
# so the steer marker is never truncated. See steer() for details.
if num_tools > 0:
self._apply_pending_steer_to_tool_results(messages, num_tools)
def _execute_tool_calls_sequential(self, assistant_message, messages: list, effective_task_id: str, api_call_count: int = 0) -> None:
"""Execute tool calls sequentially (original behavior). Used for single calls or interactive tools."""
for i, tool_call in enumerate(assistant_message.tool_calls, 1):
@@ -8330,6 +8470,12 @@ class AIAgent:
if num_tools_seq > 0:
enforce_turn_budget(messages[-num_tools_seq:], env=get_active_env(effective_task_id))
# ── /steer injection ──────────────────────────────────────────────
# See _execute_tool_calls_parallel for the rationale. Same hook,
# applied to sequential execution as well.
if num_tools_seq > 0:
self._apply_pending_steer_to_tool_results(messages, num_tools_seq)
def _handle_max_iterations(self, messages: list, api_call_count: int) -> str:
@@ -11610,6 +11756,12 @@ class AIAgent:
"cost_status": self.session_cost_status,
"cost_source": self.session_cost_source,
}
# If a /steer landed after the final assistant turn (no more tool
# batches to drain into), hand it back to the caller so it can be
# delivered as the next user turn instead of being silently lost.
_leftover_steer = self._drain_pending_steer()
if _leftover_steer:
result["pending_steer"] = _leftover_steer
self._response_was_previewed = False
# Include interrupt message if one triggered the interrupt