mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-05-08 03:01:47 +00:00
feat(kanban): generic diagnostics engine for task distress signals (#20332)
* feat(kanban): generic diagnostics engine for task distress signals

Replaces the hallucination-specific ``warnings`` / ``RecoverySection``
surface (shipped in PR #20232) with a reusable diagnostic-rule engine
that covers five distress kinds in v1 and can be extended without
touching UI code. The "something's wrong with this task" signal is no
longer limited to phantom card ids. Closes the follow-up from the
#20232 discussion.

New module
----------
``hermes_cli/kanban_diagnostics.py`` — stateless, no-side-effect rule
engine. Each rule is a pure function of
``(task, events, runs, now, config) -> list[Diagnostic]``. The registry
is a simple list; adding a new distress kind is one function + one
import, with no UI or API changes required.

v1 rule set
-----------
* ``hallucinated_cards`` (error) — folds the existing
  ``completion_blocked_hallucination`` event into the new surface.
* ``prose_phantom_refs`` (warning) — folds
  ``suspected_hallucinated_references``.
* ``repeated_spawn_failures`` (error → critical at 2x threshold) —
  fires when ``tasks.spawn_failures >= 3``; suggests
  ``hermes -p <profile> doctor`` / ``auth``.
* ``repeated_crashes`` (error → critical) — fires after N consecutive
  ``crashed`` run outcomes with no successful completion between;
  suggests ``hermes kanban log <id>``.
* ``stuck_in_blocked`` (warning) — fires after 24h in the ``blocked``
  state with no comments / unblock attempts; suggests commenting.

Every diagnostic carries structured ``actions`` (reclaim, reassign,
unblock, cli_hint, comment, open_docs) that render consistently in
both CLI and dashboard. Suggested actions are highlighted; generic
recovery actions (reclaim / reassign) are available on every kind as
fallbacks.

Diagnostics auto-clear when the underlying failure resolves — a clean
``completed``/``edited`` event drops hallucination diagnostics, a
successful run drops crash diagnostics, a comment drops stuck-blocked
diagnostics. Audit events persist; only the badge goes away.
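The rule-engine shape described above — pure rule functions, a plain-list registry, and broken-rule isolation — might be sketched as follows. All names here are illustrative assumptions, not the shipped module; only the ``(task, events, runs, now, config) -> list[Diagnostic]`` signature, the ``repeated_spawn_failures`` thresholds, and the severity escalation come from the commit message.

```python
from dataclasses import dataclass, field

@dataclass
class Diagnostic:
    kind: str
    severity: str                      # "warning" | "error" | "critical"
    title: str
    data: dict = field(default_factory=dict)

def rule_repeated_spawn_failures(task, events, runs, now, config):
    # Fires at >= 3 spawn failures; escalates to critical at 2x threshold.
    threshold = config.get("spawn_failure_threshold", 3)
    failures = task.get("spawn_failures", 0)
    if failures < threshold:
        return []
    severity = "critical" if failures >= 2 * threshold else "error"
    return [Diagnostic(
        kind="repeated_spawn_failures",
        severity=severity,
        title=f"Agent spawn failed {failures}x",
        data={"spawn_failures": failures},
    )]

# Registry is a simple list: extending the engine is one function + one entry.
RULES = [rule_repeated_spawn_failures]

def compute_task_diagnostics(task, events, runs, now=None, config=None):
    config = config or {}
    out = []
    for rule in RULES:
        try:
            out.extend(rule(task, events, runs, now, config))
        except Exception:
            # Broken-rule isolation: one faulty rule must not suppress the rest.
            continue
    return out
```

A dict-shaped ``task`` stands in for the real row object here; the point is the pure-function contract and the list registry, not the storage layer.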
API
---
``plugin_api.py``:

* ``/board`` now attaches ``diagnostics`` (full list) and ``warnings``
  (compact summary with ``highest_severity``) per task.
* ``/tasks/{id}`` attaches diagnostics so the drawer's Diagnostics
  section auto-opens on flagged tasks.
* NEW ``/diagnostics`` endpoint — fleet-wide listing, filterable by
  severity, sorted critical-first.

CLI
---
* NEW ``hermes kanban diagnostics [--severity X] [--task id] [--json]``
  — fleet view or single-task view, matching the dashboard rule output
  so CLI users see the same picture.
* ``hermes kanban show <id>`` now renders a Diagnostics section near
  the top with severity markers + suggested actions.

Dashboard
---------
* Card badge is severity-coloured (⚠ amber warning, !! orange error,
  !!! red critical) using ``warnings.highest_severity``.
* Attention strip above the toolbar counts EVERY task with active
  diagnostics (not just hallucinations), is severity-coloured, and
  lists affected tasks with Open buttons when expanded.
* Drawer's old ``RecoverySection`` is replaced with a generic
  ``DiagnosticsSection`` rendering a card per active diagnostic:
  title + detail + structured data (task-id chips when payload keys
  look like id lists) + action buttons. The reassign profile picker is
  inline per-diagnostic. Clipboard fallback uses ``.catch()`` for
  environments where writeText rejects.
* Three-rung severity palette: amber for warning, orange for error,
  red for critical. Uses CSS variables so theming is straightforward.

Tests
-----
* NEW ``tests/hermes_cli/test_kanban_diagnostics.py`` — 14 unit tests
  covering each rule's positive/negative/threshold paths, severity
  sorting, broken-rule isolation, and sqlite3.Row integration.
* Dashboard plugin tests extended: ``/diagnostics`` endpoint (empty,
  populated, severity-filtered); ``/board`` exposes both the
  diagnostic list and the compact summary with ``highest_severity``.
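On the consumer side, the badge colouring described above only needs the compact summary. A minimal sketch of how a client might map ``warnings.highest_severity`` to a badge marker, assuming the ⚠ / !! / !!! markers from this PR (``badge_for`` and ``BADGES`` are hypothetical names, not the dashboard's actual code):

```python
from typing import Optional

# Severity markers as described in the Dashboard section of this PR.
BADGES = {"warning": "⚠", "error": "!!", "critical": "!!!"}

def badge_for(task: dict) -> Optional[str]:
    """Pick a badge marker from a /board task payload, or None for clean tasks."""
    w = task.get("warnings")
    if not w:
        return None  # no active diagnostics: no badge
    return BADGES.get(w["highest_severity"])
```

The key point for external consumers is that the badge derives solely from ``highest_severity``; the full ``diagnostics`` list is only needed by the drawer.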
* Existing hallucination-specific test
  (``test_board_surfaces_warnings_field_for_hallucinated_completions``)
  updated to reflect the new contract: the warning summary keys by
  diagnostic kind (``hallucinated_cards``), not event kind.

379 kanban-suite tests pass (+16 net from this PR).

Live verification
-----------------
Seeded all 5 diagnostic kinds + one clean + one plain-running task
(7 total) into an isolated HERMES_HOME, spun up the dashboard, and
verified:

* Attention strip: shows ``!! 5 tasks need attention`` in the
  error-severity orange; Show expands to a list of 5 rows ordered
  critical > error > warning.
* Card badges: error tasks render ``!!`` orange, warning tasks render
  ``⚠`` amber, clean and plain-running tasks render no badge.
* Each of the 5 rules opens a correctly-coloured, correctly-styled
  diagnostic card in the drawer with its specific suggested action.
* Live reassign from a diagnostic card flipped
  ``broken-ml-worker → alice`` and the drawer refreshed with the new
  assignee + the same diagnostic still firing (correct: the
  spawn_failures counter hasn't reset yet).
* CLI ``hermes kanban diagnostics`` prints all 5 in severity order;
  ``--severity error`` narrows to 3; ``kanban show <id>`` includes the
  Diagnostics block at the top with the suggested-action hint.

Migration note
--------------
The old ``warnings`` shape (``{count, kinds, latest_at}``) is
preserved on the API, but ``kinds`` now keys by diagnostic kind
(``hallucinated_cards``) instead of event kind
(``completion_blocked_hallucination``). ``highest_severity`` is a new
required field. The dashboard was the only consumer and has been
updated in the same commit; external API consumers of the ``warnings``
field will need to update their kind-match logic.

* feat(kanban/diagnostics): lead titles with the actual error text

The generic 'Worker crashed N runs in a row' / 'Worker failed to spawn
N times' titles buried the actual cause in the data section.
Operators had to open logs or expand the diagnostic to see WHY the
worker is stuck — rate-limit vs insufficient quota vs bad auth vs
context overflow vs network blip all looked identical at a glance.

New titles:

  Agent crashed 3x: openai: 429 Too Many Requests - rate limit reached
  Agent crashed 3x: anthropic: 402 insufficient_quota - credit balance
  Agent crashed 3x: provider auth error: 401 Unauthorized
  Agent spawn failed 4x: insufficient_quota: You exceeded your current

Detail keeps the full error snippet (capped at 500 chars + ellipsis
for tracebacks). The title takes the first line, capped at 160 chars.
The fallback title when no error was recorded stays honest
('no error recorded').

Tests: 4 new cases covering 429/billing/spawn/truncation. 383 total
pass (+4).

Live-verified on the dashboard with 6 seeded scenarios (rate-limit,
billing, auth, context, network, spawn-billing) — each card title
leads with the actionable error text.
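The title/detail derivation described above can be sketched like this. The 160/500-char caps, the first-line rule, and the 'no error recorded' fallback come from the commit message; the function and constant names are assumptions for illustration.

```python
from typing import Optional

TITLE_CAP = 160   # title keeps only the first line, capped
DETAIL_CAP = 500  # detail keeps the full snippet, capped + ellipsis

def crash_title(error_text: Optional[str], crash_count: int) -> str:
    """Lead the diagnostic title with the actual error text."""
    lead = f"Agent crashed {crash_count}x"
    if not error_text:
        # Stay honest when nothing was captured.
        return f"{lead}: no error recorded"
    first_line = error_text.strip().splitlines()[0]
    return f"{lead}: {first_line[:TITLE_CAP]}"

def crash_detail(error_text: Optional[str]) -> Optional[str]:
    """Full error snippet for the detail section, truncated for tracebacks."""
    if not error_text:
        return None
    if len(error_text) <= DETAIL_CAP:
        return error_text
    return error_text[:DETAIL_CAP] + "…"
```

Under these assumptions, a multi-line provider error collapses to its first line in the title while the detail preserves up to 500 chars of the traceback.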
This commit is contained in:
parent
ec7f2f249e
commit
f67063ba81
7 changed files with 1895 additions and 289 deletions
plugin_api.py:

@@ -187,63 +187,109 @@ _WARNING_EVENT_KINDS = (
 )
 
 
-def _compute_warnings_for_tasks(
+def _compute_task_diagnostics(
     conn: sqlite3.Connection,
     task_ids: Optional[list[str]] = None,
-) -> dict[str, dict]:
-    """Return {task_id: {count, kinds, latest_at}} for tasks with
-    hallucination warnings that occurred AFTER the most recent clean
-    completion event (completed / edited). An empty dict means no tasks
-    on the board have active warnings.
+) -> dict[str, list[dict]]:
+    """Run the diagnostic rule engine against every task (or a subset)
+    and return ``{task_id: [diagnostic_dict, ...]}``.
+
+    ``task_ids`` narrows the query; pass ``None`` to scan the whole DB
+    (matches board-level rollup). Used by both the /board aggregate and
+    per-task /tasks/:id endpoints.
+    Tasks with no active diagnostics are omitted from the result.
+    Uses ``hermes_cli.kanban_diagnostics`` — see that module for the
+    rule definitions.
     """
-    params: tuple = ()
+    from hermes_cli import kanban_diagnostics as kd
+
+    # Build the candidate task list. We need each task's row + its
+    # events + its runs. Doing N separate queries works but scales
+    # poorly; do three aggregate queries instead.
     if task_ids is not None:
         if not task_ids:
             return {}
         placeholders = ",".join(["?"] * len(task_ids))
-        sql = (
-            "SELECT task_id, kind, created_at FROM task_events "
-            f"WHERE task_id IN ({placeholders}) AND kind IN "
-            "('completion_blocked_hallucination', "
-            " 'suspected_hallucinated_references', "
-            " 'completed', 'edited') "
-            "ORDER BY task_id, id"
-        )
-        params = tuple(task_ids)
+        rows = conn.execute(
+            f"SELECT * FROM tasks WHERE id IN ({placeholders})",
+            tuple(task_ids),
+        ).fetchall()
     else:
-        sql = (
-            "SELECT task_id, kind, created_at FROM task_events "
-            "WHERE kind IN "
-            "('completion_blocked_hallucination', "
-            " 'suspected_hallucinated_references', "
-            " 'completed', 'edited') "
-            "ORDER BY task_id, id"
-        )
+        rows = conn.execute(
+            "SELECT * FROM tasks WHERE status != 'archived'",
+        ).fetchall()
 
-    out: dict[str, dict] = {}
-    for row in conn.execute(sql, params).fetchall():
-        tid = row["task_id"]
-        kind = row["kind"]
-        created_at = row["created_at"]
-        if kind in ("completed", "edited"):
-            # Clean event wipes prior warning counters; only events after
-            # this timestamp count.
-            out.pop(tid, None)
-            continue
-        bucket = out.setdefault(
-            tid, {"count": 0, "kinds": {}, "latest_at": 0}
-        )
-        bucket["count"] += 1
-        bucket["kinds"][kind] = bucket["kinds"].get(kind, 0) + 1
-        if created_at > bucket["latest_at"]:
-            bucket["latest_at"] = created_at
+    if not rows:
+        return {}
+
+    # Index events + runs by task id. For very large boards this will
+    # slurp a lot — acceptable on the dashboard's typical working set
+    # (hundreds of tasks), but we can add pagination / filtering later
+    # if profiling shows it's a hotspot.
+    row_ids = [r["id"] for r in rows]
+    placeholders = ",".join(["?"] * len(row_ids))
+    events_by_task: dict[str, list] = {tid: [] for tid in row_ids}
+    for ev_row in conn.execute(
+        f"SELECT * FROM task_events WHERE task_id IN ({placeholders}) ORDER BY id",
+        tuple(row_ids),
+    ).fetchall():
+        events_by_task.setdefault(ev_row["task_id"], []).append(ev_row)
+    runs_by_task: dict[str, list] = {tid: [] for tid in row_ids}
+    for run_row in conn.execute(
+        f"SELECT * FROM task_runs WHERE task_id IN ({placeholders}) ORDER BY id",
+        tuple(row_ids),
+    ).fetchall():
+        runs_by_task.setdefault(run_row["task_id"], []).append(run_row)
+
+    out: dict[str, list[dict]] = {}
+    for r in rows:
+        tid = r["id"]
+        diags = kd.compute_task_diagnostics(
+            r,
+            events_by_task.get(tid, []),
+            runs_by_task.get(tid, []),
+        )
+        if diags:
+            out[tid] = [d.to_dict() for d in diags]
     return out
+
+
+def _warnings_summary_from_diagnostics(
+    diagnostics: list[dict],
+) -> Optional[dict]:
+    """Compact summary for cards: {count, highest_severity, kinds,
+    latest_at}. Replaces the old hallucination-only ``warnings`` object
+    — same shape additions plus ``highest_severity`` so the UI can color
+    badges per diagnostic severity.
+
+    Returns None when ``diagnostics`` is empty.
+    """
+    if not diagnostics:
+        return None
+    from hermes_cli.kanban_diagnostics import SEVERITY_ORDER
+
+    kinds: dict[str, int] = {}
+    latest = 0
+    highest_idx = -1
+    highest_sev: Optional[str] = None
+    count = 0
+    for d in diagnostics:
+        kinds[d["kind"]] = kinds.get(d["kind"], 0) + d.get("count", 1)
+        count += d.get("count", 1)
+        la = d.get("last_seen_at") or 0
+        if la > latest:
+            latest = la
+        sev = d.get("severity")
+        if sev in SEVERITY_ORDER:
+            idx = SEVERITY_ORDER.index(sev)
+            if idx > highest_idx:
+                highest_idx = idx
+                highest_sev = sev
+    return {
+        "count": count,
+        "kinds": kinds,
+        "latest_at": latest,
+        "highest_severity": highest_sev,
+    }
 
 
 def _links_for(conn: sqlite3.Connection, task_id: str) -> dict[str, list[str]]:
     """Return {'parents': [...], 'children': [...]} for a task."""
     parents = [
@@ -321,10 +367,11 @@ def get_board(
             if row["cstatus"] == "done":
                 p["done"] += 1
 
-    # Hallucination-warning rollup for this board (all tasks).
-    # Delegated to _compute_warnings_for_tasks so the per-task
-    # /tasks/:id endpoint can reuse the same rule.
-    warnings_per_task = _compute_warnings_for_tasks(conn, task_ids=None)
+    # Diagnostics rollup for this board — see kanban_diagnostics.
+    # We get the full structured list per task AND a compact
+    # summary for the card badge (so cards don't carry the detail
+    # text; the drawer fetches that via /tasks/:id or /diagnostics).
+    diagnostics_per_task = _compute_task_diagnostics(conn, task_ids=None)
 
     latest_event_id = conn.execute(
         "SELECT COALESCE(MAX(id), 0) AS m FROM task_events"
@@ -339,9 +386,13 @@ def get_board(
         d["link_counts"] = link_counts.get(t.id, {"parents": 0, "children": 0})
         d["comment_count"] = comment_counts.get(t.id, 0)
         d["progress"] = progress.get(t.id)  # None when the task has no children
-        w = warnings_per_task.get(t.id)
-        if w:
-            d["warnings"] = w
+        diags = diagnostics_per_task.get(t.id)
+        if diags:
+            # Full list goes into the payload so the drawer can render
+            # without a second round-trip. The board-level badge only
+            # needs the summary.
+            d["diagnostics"] = diags
+            d["warnings"] = _warnings_summary_from_diagnostics(diags)
         col = t.status if t.status in columns else "todo"
         columns[col].append(d)
@@ -390,11 +441,13 @@ def get_task(task_id: str, board: Optional[str] = Query(None)):
     if task is None:
         raise HTTPException(status_code=404, detail=f"task {task_id} not found")
     task_d = _task_dict(task)
-    # Attach warnings metadata so the drawer's Recovery section can
-    # auto-open when a hallucination is unresolved.
-    warnings = _compute_warnings_for_tasks(conn, task_ids=[task_id])
-    if warnings.get(task_id):
-        task_d["warnings"] = warnings[task_id]
+    # Attach diagnostics so the drawer's Diagnostics section can
+    # render recovery actions without a second round-trip.
+    diags = _compute_task_diagnostics(conn, task_ids=[task_id])
+    diag_list = diags.get(task_id) or []
+    if diag_list:
+        task_d["diagnostics"] = diag_list
+        task_d["warnings"] = _warnings_summary_from_diagnostics(diag_list)
     return {
         "task": task_d,
         "comments": [_comment_dict(c) for c in kanban_db.list_comments(conn, task_id)],
@@ -795,6 +848,89 @@ def bulk_update(payload: BulkTaskBody, board: Optional[str] = Query(None)):
         conn.close()
 
 
+# ---------------------------------------------------------------------------
+# Diagnostics — fleet-wide distress signals (hallucinations, crashes,
+# spawn failures, stuck-blocked). See hermes_cli.kanban_diagnostics for
+# the rule engine.
+# ---------------------------------------------------------------------------
+
+@router.get("/diagnostics")
+def list_diagnostics(
+    board: Optional[str] = Query(None, description="Kanban board slug (omit for current)"),
+    severity: Optional[str] = Query(
+        None,
+        description="Filter by severity: warning|error|critical",
+    ),
+):
+    """Return ``[{task_id, task_title, task_status, task_assignee,
+    diagnostics: [...]}, ...]`` for every task on the board with at
+    least one active diagnostic.
+
+    Severity-filterable so the UI can render "just the critical ones"
+    or the CLI can grep. Useful for the board-header attention strip
+    AND for ``hermes kanban diagnostics`` which shells to this
+    endpoint when the dashboard's running, or invokes the engine
+    directly when it isn't.
+    """
+    board = _resolve_board(board)
+    conn = _conn(board=board)
+    try:
+        diags_by_task = _compute_task_diagnostics(conn, task_ids=None)
+        if not diags_by_task:
+            return {"diagnostics": [], "count": 0}
+
+        # Narrow by severity if asked.
+        if severity:
+            filtered: dict[str, list[dict]] = {}
+            for tid, dl in diags_by_task.items():
+                keep = [d for d in dl if d.get("severity") == severity]
+                if keep:
+                    filtered[tid] = keep
+            diags_by_task = filtered
+            if not diags_by_task:
+                return {"diagnostics": [], "count": 0}
+
+        # Pull the task rows we need in one query so we can include
+        # titles/statuses without a per-task lookup.
+        ids = list(diags_by_task.keys())
+        placeholders = ",".join(["?"] * len(ids))
+        rows = {
+            r["id"]: r
+            for r in conn.execute(
+                f"SELECT id, title, status, assignee FROM tasks WHERE id IN ({placeholders})",
+                tuple(ids),
+            ).fetchall()
+        }
+
+        out = []
+        for tid, dl in diags_by_task.items():
+            r = rows.get(tid)
+            out.append({
+                "task_id": tid,
+                "task_title": r["title"] if r else None,
+                "task_status": r["status"] if r else None,
+                "task_assignee": r["assignee"] if r else None,
+                "diagnostics": dl,
+            })
+        # Sort: highest severity first, then most recent.
+        from hermes_cli.kanban_diagnostics import SEVERITY_ORDER
+        sev_idx = {s: i for i, s in enumerate(SEVERITY_ORDER)}
+        def _sort_key(row):
+            top = row["diagnostics"][0]
+            return (
+                -sev_idx.get(top.get("severity"), -1),
+                -(top.get("last_seen_at") or 0),
+            )
+        out.sort(key=_sort_key)
+
+        return {
+            "diagnostics": out,
+            "count": sum(len(d["diagnostics"]) for d in out),
+        }
+    finally:
+        conn.close()
+
+
 # ---------------------------------------------------------------------------
 # Recovery actions — reclaim a running claim, reassign to a new profile
 # ---------------------------------------------------------------------------
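The critical-first ordering used by the ``/diagnostics`` endpoint can be reproduced client-side over response rows of the same shape. A sketch, assuming ``SEVERITY_ORDER`` ascends from least to most severe (consistent with the negated index in the endpoint's sort key); the sample rows and the ``sort_critical_first`` name are fabricated for illustration:

```python
# Assumption: SEVERITY_ORDER lists severities in ascending order.
SEVERITY_ORDER = ("warning", "error", "critical")
_SEV_IDX = {s: i for i, s in enumerate(SEVERITY_ORDER)}

def sort_critical_first(rows: list[dict]) -> list[dict]:
    """Order /diagnostics-shaped rows: highest severity first, then most recent."""
    def key(row):
        top = row["diagnostics"][0]
        return (
            -_SEV_IDX.get(top.get("severity"), -1),  # critical < error < warning
            -(top.get("last_seen_at") or 0),          # newer first within a tier
        )
    return sorted(rows, key=key)
```

Unknown severities fall to the end (the ``-1`` default negates to a positive sort key), matching the server-side behaviour.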