Mirror of https://github.com/NousResearch/hermes-agent.git, synced 2026-04-26 01:01:40 +00:00
feat(computer-use): cua-driver backend, universal any-model schema
Background macOS desktop control via cua-driver MCP — does NOT steal the user's cursor or keyboard focus, and works with any tool-capable model. Replaces the Anthropic-native `computer_20251124` approach from the abandoned #4562 with a generic OpenAI function-calling schema plus SOM (set-of-mark) captures, so Claude, GPT, Gemini, and open models can all drive the desktop via numbered element indices.

## What this adds

- `tools/computer_use/` package — swappable ComputerUseBackend ABC + CuaDriverBackend (stdio MCP client to trycua/cua's cua-driver binary).
- Universal `computer_use` tool with one schema for all providers. Actions: capture (som/vision/ax), click, double_click, right_click, middle_click, drag, scroll, type, key, wait, list_apps, focus_app.
- Multimodal tool-result envelope (`_multimodal=True`, OpenAI-style `content: [text, image_url]` parts) that flows through handle_function_call into the tool message. The Anthropic adapter converts it into native `tool_result` image blocks; OpenAI-compatible providers get the parts list directly.
- Image eviction in convert_messages_to_anthropic: only the 3 most recent screenshots carry real image data; older ones become text placeholders to cap per-turn token cost.
- Context compressor image pruning: old multimodal tool results have their image parts stripped instead of being skipped.
- Image-aware token estimation: each image counts as a flat 1500 tokens instead of its base64 character length (a ~1MB image would previously have registered as ~250K tokens).
- COMPUTER_USE_GUIDANCE system-prompt block — injected when the toolset is active.
- Session DB persistence strips base64 from multimodal tool messages.
- Trajectory saver normalises multimodal messages to text-only.
- `hermes tools` post-setup installs cua-driver via the upstream script and prints permission-grant instructions.
- CLI approval callback wired so destructive computer_use actions go through the same prompt_toolkit approval dialog as terminal commands.
- Hard safety guards at the tool level: blocked type patterns (curl|bash, sudo rm -rf, fork bomb), blocked key combos (empty trash, force delete, lock screen, log out).
- Skill `apple/macos-computer-use/SKILL.md` — a universal (model-agnostic) workflow guide.
- Docs: `user-guide/features/computer-use.md` plus reference catalog entries.

## Tests

44 new tests in tests/tools/test_computer_use.py covering schema shape (universal, not Anthropic-native), dispatch routing, safety guards, the multimodal envelope, Anthropic adapter conversion, screenshot eviction, context compressor pruning, image-aware token estimation, run_agent helpers, and universality guarantees. 469/469 pass across tests/tools/test_computer_use.py plus the affected agent/ test suites.

## Not in this PR

- `model_tools.py` provider-gating: the tool is available to every provider. Providers without multi-part tool-message support see text-only tool results (graceful degradation via `text_summary`).
- Anthropic server-side `clear_tool_uses_20250919` — deferred; client-side eviction plus compressor pruning cover the same cost ceiling without a beta header.

## Caveats

- macOS only. cua-driver uses private SkyLight SPIs (SLEventPostToPid, SLPSPostEventRecordTo, _AXObserverAddNotificationAndCheckRemote) that can break on any macOS update. Pin with HERMES_CUA_DRIVER_VERSION.
- Requires Accessibility + Screen Recording permissions — the post-setup prints the Settings path.

Supersedes PR #4562 (pyautogui/Quartz foreground backend, Anthropic-native schema). Credit @0xbyt4 for the original #3816 groundwork whose context/eviction/token design is preserved here in generic form.
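The eviction and token-estimation design described above can be sketched roughly as follows. All names here (`evict_old_images`, `estimate_tokens`, `IMAGE_TOKEN_COST`, `KEEP_LAST`) are illustrative stand-ins, not the actual Hermes identifiers, and the message shape is the generic OpenAI-style parts list:

```python
IMAGE_TOKEN_COST = 1500  # flat per-image estimate, vs. counting base64 chars
KEEP_LAST = 3            # only the newest N screenshots keep real image data


def estimate_tokens(part: dict) -> int:
    """Count an image part as a flat 1500 tokens; text as ~4 chars/token."""
    if part.get("type") == "image_url":
        return IMAGE_TOKEN_COST
    return max(1, len(part.get("text", "")) // 4)


def evict_old_images(messages: list[dict]) -> list[dict]:
    """Replace every image part except the KEEP_LAST newest with a text stub."""
    seen = 0
    out = []
    for msg in reversed(messages):  # walk newest-first
        parts = msg.get("content")
        if isinstance(parts, list):
            new_parts = []
            for part in reversed(parts):
                if part.get("type") == "image_url":
                    seen += 1
                    if seen > KEEP_LAST:
                        part = {"type": "text",
                                "text": "[older screenshot evicted]"}
                new_parts.append(part)
            msg = {**msg, "content": list(reversed(new_parts))}
        out.append(msg)
    return list(reversed(out))


history = [
    {"role": "tool",
     "content": [{"type": "image_url", "image_url": {"url": str(i)}}]}
    for i in range(5)
]
pruned = evict_old_images(history)
print([m["content"][0]["type"] for m in pruned])
# → ['text', 'text', 'image_url', 'image_url', 'image_url']
```

The same newest-first walk lets the context compressor strip image parts from old multimodal results without discarding their text summaries.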
parent 24f139e16a · commit b07791db05
23 changed files with 2861 additions and 27 deletions
## tools/computer_use/backend.py (new file, 150 lines)

```python
"""Abstract backend interface for computer use.

Any implementation (cua-driver over MCP, pyautogui, noop, future Linux/Windows)
must return the shape described below. All methods synchronous; async is
handled inside the backend implementation if needed.
"""

from __future__ import annotations

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple


@dataclass
class UIElement:
    """One interactable element on the current screen."""

    index: int                  # 1-based SOM index
    role: str                   # AX role (AXButton, AXTextField, ...)
    label: str = ""             # AXTitle / AXDescription / AXValue snippet
    bounds: Tuple[int, int, int, int] = (0, 0, 0, 0)  # x, y, w, h (logical px)
    app: str = ""               # owning bundle ID or app name
    pid: int = 0                # owning process PID
    window_id: int = 0          # SkyLight / CG window ID
    attributes: Dict[str, Any] = field(default_factory=dict)

    def center(self) -> Tuple[int, int]:
        x, y, w, h = self.bounds
        return x + w // 2, y + h // 2


@dataclass
class CaptureResult:
    """Result of a screen capture call.

    At least one of png_b64 / elements is populated depending on capture mode:
      * mode="vision" → png_b64 only
      * mode="ax"     → elements only
      * mode="som"    → both (default): PNG already has numbered overlays
                        drawn by the backend, and `elements` holds the
                        matching index → element mapping.
    """

    mode: str
    width: int                  # screenshot width (logical px, pre-Anthropic-scale)
    height: int
    png_b64: Optional[str] = None
    elements: List[UIElement] = field(default_factory=list)
    # Optional: the target app/window the elements were captured for.
    app: str = ""
    window_title: str = ""
    # Raw bytes we sent to Anthropic, for token estimation.
    png_bytes_len: int = 0


@dataclass
class ActionResult:
    """Result of any action (click / type / scroll / drag / key / wait)."""

    ok: bool
    action: str
    message: str = ""           # human-readable summary
    # Optional trailing screenshot — set when the caller asked for a
    # post-action capture or the backend always returns one.
    capture: Optional[CaptureResult] = None
    # Arbitrary extra fields for debugging / telemetry.
    meta: Dict[str, Any] = field(default_factory=dict)


class ComputerUseBackend(ABC):
    """Lifecycle: `start()` before first use, `stop()` at shutdown."""

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

    @abstractmethod
    def is_available(self) -> bool:
        """Return True if the backend can be used on this host right now.

        Used by check_fn gating and by the post-setup wizard.
        """

    # ── Capture ─────────────────────────────────────────────────────
    @abstractmethod
    def capture(self, mode: str = "som", app: Optional[str] = None) -> CaptureResult: ...

    # ── Pointer actions ─────────────────────────────────────────────
    @abstractmethod
    def click(
        self,
        *,
        element: Optional[int] = None,
        x: Optional[int] = None,
        y: Optional[int] = None,
        button: str = "left",  # left | right | middle
        click_count: int = 1,
        modifiers: Optional[List[str]] = None,
    ) -> ActionResult: ...

    @abstractmethod
    def drag(
        self,
        *,
        from_element: Optional[int] = None,
        to_element: Optional[int] = None,
        from_xy: Optional[Tuple[int, int]] = None,
        to_xy: Optional[Tuple[int, int]] = None,
        button: str = "left",
        modifiers: Optional[List[str]] = None,
    ) -> ActionResult: ...

    @abstractmethod
    def scroll(
        self,
        *,
        direction: str,  # up | down | left | right
        amount: int = 3,  # wheel ticks
        element: Optional[int] = None,
        x: Optional[int] = None,
        y: Optional[int] = None,
        modifiers: Optional[List[str]] = None,
    ) -> ActionResult: ...

    # ── Keyboard ────────────────────────────────────────────────────
    @abstractmethod
    def type_text(self, text: str) -> ActionResult: ...

    @abstractmethod
    def key(self, keys: str) -> ActionResult:
        """Send a key combo, e.g. 'cmd+s', 'ctrl+alt+t', 'return'."""

    # ── Introspection ───────────────────────────────────────────────
    @abstractmethod
    def list_apps(self) -> List[Dict[str, Any]]:
        """Return running apps with bundle IDs, PIDs, window counts."""

    @abstractmethod
    def focus_app(self, app: str, raise_window: bool = False) -> ActionResult:
        """Route input to `app` (by name or bundle ID). Default: focus without raise."""

    # ── Timing ──────────────────────────────────────────────────────
    def wait(self, seconds: float) -> ActionResult:
        """Default implementation: time.sleep, clamped to 30 s."""
        import time

        time.sleep(max(0.0, min(seconds, 30.0)))
        return ActionResult(ok=True, action="wait", message=f"waited {seconds:.2f}s")
```
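A minimal sketch of satisfying the ComputerUseBackend lifecycle contract. The real ABC lives in tools/computer_use/backend.py; it is restated here in trimmed form so the sketch runs standalone, and `NoopBackend` itself is hypothetical (a do-nothing backend of the kind useful for tests or non-macOS hosts), not part of this PR:

```python
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ActionResult:
    ok: bool
    action: str
    message: str = ""


class ComputerUseBackend(ABC):
    """Trimmed ABC: lifecycle hooks plus the concrete wait() default."""

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

    @abstractmethod
    def is_available(self) -> bool: ...

    def wait(self, seconds: float) -> ActionResult:
        # Inherited default: clamp to [0, 30] seconds and sleep.
        time.sleep(max(0.0, min(seconds, 30.0)))
        return ActionResult(ok=True, action="wait", message=f"waited {seconds:.2f}s")


class NoopBackend(ComputerUseBackend):
    """Accepts every call and does nothing."""

    def start(self) -> None:
        pass

    def stop(self) -> None:
        pass

    def is_available(self) -> bool:
        return True


backend = NoopBackend()
backend.start()
result = backend.wait(0.01)
print(result.ok, result.action)  # → True wait
backend.stop()
```

Because `wait` is the only non-abstract method, every concrete backend gets the same clamped-sleep behaviour for free while remaining forced to declare the full capture/pointer/keyboard surface.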