feat(computer-use): cua-driver backend, universal any-model schema

Background macOS desktop control via cua-driver MCP — it does NOT steal the
user's cursor or keyboard focus, and it works with any tool-capable model.

Replaces the Anthropic-native `computer_20251124` approach from the
abandoned #4562 with a generic OpenAI function-calling schema plus SOM
(set-of-mark) captures so Claude, GPT, Gemini, and open models can all
drive the desktop via numbered element indices.

## What this adds

- `tools/computer_use/` package — swappable ComputerUseBackend ABC +
  CuaDriverBackend (stdio MCP client to trycua/cua's cua-driver binary).
- Universal `computer_use` tool with one schema for all providers.
  Actions: capture (som/vision/ax), click, double_click, right_click,
  middle_click, drag, scroll, type, key, wait, list_apps, focus_app.
- Multimodal tool-result envelope (`_multimodal=True`, OpenAI-style
  `content: [text, image_url]` parts) that flows through
  handle_function_call into the tool message. Anthropic adapter converts
  into native `tool_result` image blocks; OpenAI-compatible providers
  get the parts list directly.
- Image eviction in convert_messages_to_anthropic: only the 3 most
  recent screenshots carry real image data; older ones become text
  placeholders to cap per-turn token cost.
- Context compressor image pruning: old multimodal tool results have
  their image parts stripped instead of being skipped.
- Image-aware token estimation: each image counts as a flat 1500 tokens
  instead of its base64 char length (~1MB would have registered as
  ~250K tokens before).
- COMPUTER_USE_GUIDANCE system-prompt block — injected when the toolset
  is active.
- Session DB persistence strips base64 from multimodal tool messages.
- Trajectory saver normalises multimodal messages to text-only.
- `hermes tools` post-setup installs cua-driver via the upstream script
  and prints permission-grant instructions.
- CLI approval callback wired so destructive computer_use actions go
  through the same prompt_toolkit approval dialog as terminal commands.
- Hard safety guards at the tool level: blocked type patterns
  (curl|bash, sudo rm -rf, fork bomb), blocked key combos (empty trash,
  force delete, lock screen, log out).
- Skill `apple/macos-computer-use/SKILL.md` — universal (model-agnostic)
  workflow guide.
- Docs: `user-guide/features/computer-use.md` plus reference catalog
  entries.
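
The image-aware estimation above can be sketched as follows (a rough illustration; the function and constant names here are assumptions, not the actual Hermes helpers):

```python
import math

# Flat cost charged per image, per the estimator change described above.
IMAGE_TOKEN_COST = 1500

def estimate_message_tokens(message):
    """Rough token estimate for one chat message. Images are charged a
    flat IMAGE_TOKEN_COST instead of their base64 character length.
    Illustrative sketch only; names do not match the Hermes internals."""
    content = message.get("content", "")
    if isinstance(content, str):
        return math.ceil(len(content) / 4)  # ~4 chars/token heuristic
    total = 0
    for part in content:  # OpenAI-style list of content parts
        if part.get("type") == "image_url":
            total += IMAGE_TOKEN_COST  # was ~len(base64)/4 before the fix
        else:
            total += math.ceil(len(part.get("text", "")) / 4)
    return total
```

A ~1MB base64 image thus costs a fixed 1500 tokens instead of ~250K.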

## Tests

44 new tests in tests/tools/test_computer_use.py covering schema
shape (universal, not Anthropic-native), dispatch routing, safety
guards, multimodal envelope, Anthropic adapter conversion, screenshot
eviction, context compressor pruning, image-aware token estimation,
run_agent helpers, and universality guarantees.

469/469 pass across tests/tools/test_computer_use.py + the affected
agent/ test suites.
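
A schema-shape check in this spirit might look like the following (a sketch; the real test file may structure this differently, and `check_universal_shape` is a hypothetical name):

```python
EXPECTED_ACTIONS = {
    "capture", "click", "double_click", "right_click", "middle_click",
    "drag", "scroll", "type", "key", "wait", "list_apps", "focus_app",
}

def check_universal_shape(schema):
    """Assert the tool uses the generic OpenAI function-calling shape
    rather than an Anthropic-native type-tagged tool definition."""
    assert schema["name"] == "computer_use"
    assert "type" not in schema  # Anthropic-native tools carry a top-level `type`
    params = schema["parameters"]
    assert params["type"] == "object"
    assert params["required"] == ["action"]
    assert set(params["properties"]["action"]["enum"]) == EXPECTED_ACTIONS
```

In the real suite this would run against `get_computer_use_schema()`.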

## Not in this PR

- `model_tools.py` provider-gating: the tool is available to every
  provider. Providers without multi-part tool message support will see
  text-only tool results (graceful degradation via `text_summary`).
- Anthropic server-side `clear_tool_uses_20250919` — deferred;
  client-side eviction + compressor pruning cover the same cost ceiling
  without a beta header.
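
The graceful degradation mentioned above could look roughly like this (hypothetical helper; the real fallback path keys off `text_summary` as noted, but its shape is an assumption here):

```python
def degrade_to_text(parts, text_summary=""):
    """Collapse an OpenAI-style multi-part tool result into plain text
    for providers without multi-part tool message support. Sketch only."""
    texts = [p["text"] for p in parts if p.get("type") == "text"]
    if not texts and text_summary:
        texts = [text_summary]
    dropped = sum(1 for p in parts if p.get("type") == "image_url")
    if dropped:
        texts.append(f"[{dropped} screenshot(s) omitted: provider lacks image input]")
    return "\n".join(texts)
```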

## Caveats

- macOS only. cua-driver uses private SkyLight SPIs
  (SLEventPostToPid, SLPSPostEventRecordTo,
  _AXObserverAddNotificationAndCheckRemote) that can break on any macOS
  update. Pin with HERMES_CUA_DRIVER_VERSION.
- Requires Accessibility + Screen Recording permissions — the post-setup
  prints the Settings path.

Supersedes PR #4562 (pyautogui/Quartz foreground backend, Anthropic-
native schema). Credit @0xbyt4 for the original #3816 groundwork whose
context/eviction/token design is preserved here in generic form.
Teknium 2026-04-23 16:44:24 -07:00
parent 24f139e16a
commit b07791db05
23 changed files with 2861 additions and 27 deletions

"""Schema for the generic `computer_use` tool.
Model-agnostic. Any tool-calling model can drive this. Vision-capable models
should prefer `capture(mode='som')` then `click(element=N)` much more
reliable than pixel coordinates. Pixel coordinates remain supported for
models that were trained on them (e.g. Claude's computer-use RL).
"""
from __future__ import annotations

from typing import Any, Dict

# One consolidated tool with an `action` discriminator. Keeps the schema
# compact and the per-turn token cost low.
COMPUTER_USE_SCHEMA: Dict[str, Any] = {
    "name": "computer_use",
    "description": (
        "Drive the macOS desktop in the background — screenshots, mouse, "
        "keyboard, scroll, drag — without stealing the user's cursor, "
        "keyboard focus, or Space. Preferred workflow: call with "
        "action='capture' (mode='som' gives numbered element overlays), "
        "then click by `element` index for reliability. Pixel coordinates "
        "are supported for models trained on them. Works on any window — "
        "hidden, minimized, on another Space, or behind another app. "
        "macOS only; requires cua-driver to be installed."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "action": {
                "type": "string",
                "enum": [
                    "capture",
                    "click",
                    "double_click",
                    "right_click",
                    "middle_click",
                    "drag",
                    "scroll",
                    "type",
                    "key",
                    "wait",
                    "list_apps",
                    "focus_app",
                ],
                "description": (
                    "Which action to perform. `capture` is free (no side "
                    "effects). All other actions require approval unless "
                    "auto-approved."
                ),
            },
            # ── capture ────────────────────────────────────────────
            "mode": {
                "type": "string",
                "enum": ["som", "vision", "ax"],
                "description": (
                    "Capture mode. `som` (default) is a screenshot with "
                    "numbered overlays on every interactable element plus "
                    "the AX tree — best for vision models, lets you click "
                    "by element index. `vision` is a plain screenshot. "
                    "`ax` is the accessibility tree only (no image; useful "
                    "for text-only models)."
                ),
            },
            "app": {
                "type": "string",
                "description": (
                    "Optional. Limit capture/action to a specific app "
                    "(by name, e.g. 'Safari', or bundle ID, "
                    "'com.apple.Safari'). If omitted, operates on the "
                    "frontmost app's window or the whole screen."
                ),
            },
            # ── click / drag / scroll targeting ────────────────────
            "element": {
                "type": "integer",
                "description": (
                    "The 1-based SOM index returned by the last "
                    "`capture(mode='som')` call. Strongly preferred over "
                    "raw coordinates."
                ),
            },
            "coordinate": {
                "type": "array",
                "items": {"type": "integer"},
                "minItems": 2,
                "maxItems": 2,
                "description": (
                    "Pixel coordinates [x, y] in logical screen space (as "
                    "returned by capture width/height). Only use this if "
                    "no element index is available."
                ),
            },
            "button": {
                "type": "string",
                "enum": ["left", "right", "middle"],
                "description": "Mouse button. Defaults to left.",
            },
            "modifiers": {
                "type": "array",
                "items": {
                    "type": "string",
                    "enum": ["cmd", "shift", "option", "alt", "ctrl", "fn"],
                },
                "description": "Modifier keys held during the action.",
            },
            # ── drag ───────────────────────────────────────────────
            "from_element": {
                "type": "integer",
                "description": "Source element index (drag).",
            },
            "to_element": {
                "type": "integer",
                "description": "Target element index (drag).",
            },
            "from_coordinate": {
                "type": "array",
                "items": {"type": "integer"},
                "minItems": 2,
                "maxItems": 2,
                "description": "Source [x,y] (drag; use when no element available).",
            },
            "to_coordinate": {
                "type": "array",
                "items": {"type": "integer"},
                "minItems": 2,
                "maxItems": 2,
                "description": "Target [x,y] (drag; use when no element available).",
            },
            # ── scroll ─────────────────────────────────────────────
            "direction": {
                "type": "string",
                "enum": ["up", "down", "left", "right"],
                "description": "Scroll direction.",
            },
            "amount": {
                "type": "integer",
                "description": "Scroll wheel ticks. Default 3.",
            },
            # ── type / key / wait ──────────────────────────────────
            "text": {
                "type": "string",
                "description": "Text to type (respects the current layout).",
            },
            "keys": {
                "type": "string",
                "description": (
                    "Key combo, e.g. 'cmd+s', 'ctrl+alt+t', 'return', "
                    "'escape', 'tab'. Use '+' to combine."
                ),
            },
            "seconds": {
                "type": "number",
                "description": "Seconds to wait. Max 30.",
            },
            # ── focus_app ──────────────────────────────────────────
            "raise_window": {
                "type": "boolean",
                "description": (
                    "Only for action='focus_app'. If true, brings the "
                    "window to front (DISRUPTS the user). Default false "
                    "— input is routed to the app without raising, "
                    "matching the background co-work model."
                ),
            },
            # ── return shape ───────────────────────────────────────
            "capture_after": {
                "type": "boolean",
                "description": (
                    "If true, take a follow-up capture after the action "
                    "and include it in the response. Saves a round-trip "
                    "when you need to verify an action's effect."
                ),
            },
        },
        "required": ["action"],
    },
}


def get_computer_use_schema() -> Dict[str, Any]:
    """Return the generic OpenAI function-calling schema."""
    return COMPUTER_USE_SCHEMA
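

# The schema above can be wrapped and routed like this. Illustrative
# sketch only: `to_openai_tool` and `dispatch` are hypothetical names,
# and real code would forward the call to a ComputerUseBackend rather
# than echo the parsed arguments back.
import json


def to_openai_tool(schema: Dict[str, Any]) -> Dict[str, Any]:
    """Wrap the flat schema in the OpenAI chat-completions `tools` envelope."""
    return {"type": "function", "function": schema}


def dispatch(schema: Dict[str, Any], arguments_json: str) -> Dict[str, Any]:
    """Validate the `action` discriminator before routing a tool call."""
    args = json.loads(arguments_json)
    action = args.get("action")
    allowed = schema["parameters"]["properties"]["action"]["enum"]
    if action not in allowed:
        return {"error": f"unknown action: {action}"}
    return {"action": action, "args": args}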