feat: add transform_llm_output plugin hook

Enables plugins to transform LLM output text after generation,
useful for vocabulary/personality transformation without burning
inference tokens.

Follows the same pattern as transform_tool_result and transform_terminal_output:
- First non-empty string result wins
- Fail-open: exceptions logged as warnings, agent continues
- Signature: (response_text, session_id, model, platform)
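The dispatch behavior described above can be sketched as follows. This is a minimal illustration, not the actual implementation; the `run_transform_llm_output` function and the shape of the `plugins` iterable are assumptions for the example.

```python
import logging

logger = logging.getLogger(__name__)

def run_transform_llm_output(plugins, response_text, session_id, model, platform):
    """Sketch of the hook dispatch: first non-empty string result wins,
    and exceptions fail open (logged as warnings, agent continues)."""
    for plugin in plugins:
        # Hypothetical convention: plugins expose the hook as a method of the same name.
        hook = getattr(plugin, "transform_llm_output", None)
        if hook is None:
            continue
        try:
            result = hook(response_text, session_id, model, platform)
        except Exception:
            # Fail-open: log a warning and continue with the original text.
            logger.warning("transform_llm_output hook failed in %r", plugin, exc_info=True)
            continue
        if isinstance(result, str) and result:
            return result  # first non-empty string wins
    return response_text  # no plugin transformed the output
```

Plugins that return `None` or an empty string are skipped, so a plugin can decline to transform a given response without affecting later plugins in the chain.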
BarnacleBoy 2026-05-06 15:44:19 +00:00 committed by Teknium
parent 6e250a55de
commit c3be6ec184
2 changed files with 25 additions and 0 deletions


@@ -80,6 +80,10 @@ VALID_HOOKS: Set[str] = {
     "post_tool_call",
     "transform_terminal_output",
     "transform_tool_result",
+    # Transform LLM output before it's returned to the user.
+    # Plugins return a string to replace the response text, or None/empty to leave unchanged.
+    # First non-None string wins. Useful for vocabulary/personality transformation.
+    "transform_llm_output",
     "pre_llm_call",
     "post_llm_call",
     "pre_api_request",
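A plugin using this hook might look like the sketch below. The class name and the substitution rule are hypothetical; the example only shows the hook signature from the commit message and the return convention from the diff comments (return a string to replace the response, or `None` to leave it unchanged).

```python
class PirateSpeakPlugin:
    """Hypothetical plugin: personality transformation applied after
    generation, so it costs no inference tokens."""

    def transform_llm_output(self, response_text, session_id, model, platform):
        transformed = response_text.replace("Hello", "Ahoy")
        # Return None when nothing changed so later plugins still get a chance.
        return transformed if transformed != response_text else None
```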