Commit graph

119 commits

Author · SHA1 · Message · Date
Brooklyn Nicholson
ded011c5a5 fix(tui): tighten SGR fragment matching 2026-04-30 17:50:49 -05:00
Brooklyn Nicholson
71b685aee0 fix(tui): recover fragmented SGR mouse reports 2026-04-30 17:43:21 -05:00
brooklyn!
285e9efb3f
Merge pull request #17701 from NousResearch/bb/mouse-mode-self-heal
fix(cli): recover leaked mouse tracking terminal state
2026-04-30 10:09:39 -07:00
Brooklyn Nicholson
cad7944b92 fix(tui): reset extended keyboard modes 2026-04-30 12:05:15 -05:00
Teknium
71c8ca17dc chore(salvage): strip duplicated/merge-corrupted blocks from PR #17664
Removes drive-by duplication that accumulated during the contributor
branch's multiple rebases. All of it is runtime-benign (dict last-wins,
redefinition last-wins), but it left dead source that would confuse
reviewers and maintainers.

Surgical in-place de-duplication (kept PR's intentional additions,
removed only the doubled copy):

* hermes_cli/auth.py: duplicate "gmi" + "azure-foundry" ProviderConfig
* hermes_cli/models.py: duplicate "gmi" entry in _PROVIDER_MODELS
* hermes_cli/config.py: duplicate NOTION/LINEAR/AIRTABLE/TENOR skill env
  block + duplicate get_custom_provider_context_length definition
* hermes_cli/gateway.py: duplicate _setup_yuanbao
* gateway/platforms/base.py: duplicate is_host_excluded_by_no_proxy
* gateway/platforms/telegram.py: duplicate delete_message
* gateway/stream_consumer.py: duplicate _should_send_fresh_final and
  _try_fresh_final
* gateway/run.py: duplicate _parse_reasoning_command_args /
  _resolve_session_reasoning_config / _set_session_reasoning_override,
  duplicate "Drain silently when interrupted" interrupt check
* run_agent.py: duplicate HERMES_AGENT_HELP_GUIDANCE append, duplicate
  codex_message_items capture, duplicate custom_providers resolution
* tools/approval.py: duplicate HARDLINE_PATTERNS section and duplicate
  hardline call in check_dangerous_command
* tools/mcp_tool.py: duplicate _orphan_stdio_pids module-level decl
* cron/scheduler.py: duplicate "not configured/enabled" check — kept
  the new early-rejection, removed the stale late-path copy

Full-file resets to origin/main (all PR additions were duplicates of
content already on main):

* ui-tui/packages/hermes-ink/index.d.ts
* ui-tui/packages/hermes-ink/src/entry-exports.ts
* ui-tui/packages/hermes-ink/src/ink/selection.ts
* ui-tui/src/app/interfaces.ts
* ui-tui/src/app/slash/commands/core.ts
* ui-tui/src/components/thinking.tsx
* ui-tui/src/lib/memoryMonitor.ts
* ui-tui/src/types.ts
* ui-tui/src/types/hermes-ink.d.ts
* tests/hermes_cli/test_doctor.py
* tests/hermes_cli/test_api_key_providers.py
* tests/hermes_cli/test_model_validation.py
* tests/plugins/memory/test_hindsight_provider.py
* tests/run_agent/test_run_agent.py
* tests/gateway/test_email.py
* tests/tools/test_dockerfile_pid1_reaping.py
* hermes_cli/commands.py (slack_native_slashes block — full duplicate)
2026-04-29 21:56:51 -07:00
Ari Lotter
868bc1c242 feat(irc): add interactive setup
feat(gateway): refine Platform._missing_ and platform-connected dispatch

Restricts plugin-name acceptance to bundled plugin scan + registry
(no arbitrary string -> enum-pollution), pulls per-platform connectivity
checks into a _PLATFORM_CONNECTED_CHECKERS lambda map with a clean
_is_platform_connected method, and adds tests covering the checker map,
plugin platform interface, and IRC setup wizard.
2026-04-29 21:56:51 -07:00
brooklyn!
4cc6da84a1
fix(tui): normalize legacy Terminal.app colors (#17695)
Keep light Terminal.app TUI colors readable by normalizing non-banner theme tokens into ANSI256-safe buckets while preserving truecolor terminals.
2026-04-29 20:13:49 -07:00
Brooklyn Nicholson
d05497f812 fix(tui): reset terminal modes on startup and exit
Reset sticky mouse/focus/paste terminal modes before the TUI starts and during graceful shutdown paths so stale tab state from prior crashes cannot poison the next session.
2026-04-29 21:41:51 -05:00
brooklyn!
98f5be13fa
fix(tui): word-wrap composer input (#17651)
* fix(tui): word-wrap composer input

Wrap composer input at word boundaries and anchor the good-vibes heart to the full composer row.

* test(tui): cover composer word wrap edge

Add regression coverage for moving the next word instead of splitting it at the composer edge.
2026-04-29 16:55:49 -07:00
Brooklyn Nicholson
d3ab2b2e13 fix(tui): share composer prompt gap metric
Use one exported prompt gap constant for both composer width math and prompt prefix rendering.
2026-04-29 15:50:54 -05:00
Brooklyn Nicholson
10fcd620d2 fix(tui): render explicit prompt gap
Reserve the composer prompt gap as layout instead of relying on terminal handling of trailing spaces.
2026-04-29 15:25:06 -05:00
Austin Pickett
430302c197
Merge pull request #17175 from NousResearch/fix/markdown
feat(latex): latex in tui
2026-04-29 10:18:17 -07:00
brooklyn!
5e68503d2f
Merge pull request #17190 from NousResearch/bb/tui-cold-start-profiling
perf(tui): cut visible cold start ~57% with lazy agent init
2026-04-28 22:45:14 -07:00
brooklyn!
22cc7492ff
Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-04-28 22:44:58 -07:00
Brooklyn Nicholson
c2fd0fa684 fix(tui): preserve memory monitor in-flight guard
Copilot caught that clearing inFlight on a transient normal-memory tick could
allow a second dump/eviction to start before the first async tick completed.
Only clear `dumped` on a normal tick; let the in-flight tick's `finally`
remove its own level.

Tests:
- cd ui-tui && npm run type-check && npm run build
2026-04-29 00:44:04 -05:00
Brooklyn Nicholson
88a9efdb1a fix(tui): tighten cold-start edge cases after review
Clean up the remaining review nits:

- let the deferred @hermes/ink import retry after a transient failure instead
  of memoizing a rejected promise forever
- keep memory-monitor in-flight state inside a finally so future exceptions
  cannot suppress that memory level indefinitely
- use read_raw_config for the TUI MCP cold-start probe instead of full
  load_config()
- keep input.detect_drop for explicit relative path prefixes (./ and ../)
  while preserving the no-RPC fast path for ordinary plain prompts

Tests:
- python -m py_compile tui_gateway/server.py tui_gateway/entry.py
- cd ui-tui && npm run type-check && npm run build
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
2026-04-29 00:08:34 -05:00
Brooklyn Nicholson
a2819e1820 fix(tui): address lazy startup review races
Copilot correctly flagged two concurrency windows:

- memoryMonitor could re-enter while awaiting the lazy @hermes/ink import or
  heap dump, producing duplicate imports/dumps under sustained pressure.
- _start_agent_build used a check-then-set guard without synchronization, so
  concurrent agent-backed RPCs could start duplicate agent builders.

Fix both with single-flight guards: cache the dynamic import promise and track
per-level dump in-flight state in memoryMonitor, and protect the TUI agent build
flag with a per-session lock.

Tests:
- python -m py_compile tui_gateway/server.py
- cd ui-tui && npm run type-check && npm run build
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
2026-04-28 23:54:33 -05:00
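Both fixes above are instances of the single-flight pattern: concurrent callers share one in-flight task instead of each starting their own, and the guard clears itself when the work completes. A minimal asyncio sketch (names like `SingleFlight` and `heap_dump` are illustrative, not the shipped code):

```python
import asyncio

class SingleFlight:
    """Share one in-flight async operation among concurrent callers (illustrative sketch)."""

    def __init__(self):
        self._inflight = {}  # key -> asyncio.Task

    async def do(self, key, coro_factory):
        task = self._inflight.get(key)
        if task is None:
            # First caller starts the work; everyone else awaits the same task.
            task = asyncio.ensure_future(coro_factory())
            self._inflight[key] = task
            # Clear the guard when done so the next trigger can run again.
            task.add_done_callback(lambda _t: self._inflight.pop(key, None))
        return await task


async def _demo():
    calls = 0

    async def heap_dump():  # stands in for the expensive dump / dynamic import
        nonlocal calls
        calls += 1
        await asyncio.sleep(0.01)
        return "done"

    sf = SingleFlight()
    results = await asyncio.gather(*(sf.do("high", heap_dump) for _ in range(5)))
    return calls, results

calls_made, results = asyncio.run(_demo())
```

Five concurrent triggers produce a single `heap_dump` call; the same idea applies whether the guard protects a memory-level dump or a per-session agent build behind a lock.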
Brooklyn Nicholson
f542d17b00 style(tui): apply npm run fix
Run the TUI lint autofix and formatter on the PR branch after the sticky prompt and paste recovery changes.
2026-04-28 22:18:26 -05:00
Brooklyn Nicholson
ce2cc7302e fix(tui): stabilize sticky prompt tracking
Keep the latest prompt sticky while the viewport is in live assistant output beyond history, and clear stale sticky state at the real bottom using fresh scroll height.
2026-04-28 22:10:40 -05:00
Austin Pickett
e4120d1e6d Merge remote-tracking branch 'origin/main' into fix/markdown
Made-with: Cursor

# Conflicts:
#	ui-tui/src/components/markdown.tsx
2026-04-28 22:01:02 -04:00
Austin Pickett
3379f88ea4 docs: clarify wrapForFrac and streaming math-fence rationale
Address two Copilot review comments on PR #17175.

- `wrapForFrac` doc said "additive operators or whitespace" but the
  implementation also matches `*` and `/`. The wider behaviour is the
  one we want (nested products and fractions need parens to disambiguate
  inline `/`), so the doc is updated to match instead of tightening the
  regex.

- `fenceOpenAt` was flagged as "overly conservative" vs. `markdown.tsx`,
  which falls back to paragraph rendering for unclosed `$$` openers.
  Mirroring that fallback in the streaming chunker would prematurely
  commit a paragraph rendering of the unclosed opener to the monotonic
  stable prefix, where it would be frozen and become wrong the moment
  the closer streams in. The asymmetry is deliberate; document why so
  it isn't "fixed" again later.

Made-with: Cursor
2026-04-28 21:43:32 -04:00
Austin Pickett
cb039ac000 fix: account for latex 2026-04-28 21:20:43 -04:00
Brooklyn Nicholson
0399d4b976 perf(tui): shave ~190ms off hermes --tui cold start
Two targeted fixes on the critical path from `hermes --tui` launch to
`gateway.ready`:

1. **Defer `@hermes/ink` import in memoryMonitor.ts.** The static top-level
   import dragged the full ~414KB Ink bundle (React + renderer + all
   components/hooks) onto the critical path *before* `gw.start()` could
   spawn the Python gateway — serialising ~155ms of Node work in front of
   it on every launch. `evictInkCaches` only runs inside the 10-second
   tick under heap pressure, so it moves to a lazy dynamic import. First
   tick hits the ESM cache because the app entry has long since imported
   `@hermes/ink`.

2. **Gate `tools.mcp_tool` import on config in tui_gateway/entry.py.**
   Importing the module transitively pulls the MCP SDK + pydantic + httpx
   + jsonschema + starlette formparsers (~200ms). The overwhelming
   majority of users have no `mcp_servers` configured, so this runs for
   nothing. A cheap `load_config()` check (~25ms) skips the 200ms import
   when no servers are declared, with a conservative fallback to the old
   behaviour if the config probe itself fails.

## Measurements (macOS Terminal.app, Apple Silicon, n=12)

| Metric                     | Before (p50) | After (p50) | Δ        |
|----------------------------|--------------|-------------|----------|
| Python gateway boot alone  | 252–365ms    | 105–151ms   | −180ms   |
| `hermes --tui` banner paint | 686ms        | 665ms       | −21ms    |
| `hermes --tui` → ready      | **1843ms**   | **1655ms**  | **−188ms (−10.2%)** |
| `hermes --tui` → ready p90  | 1932ms       | 1778ms      | −154ms   |
| stdev (ready)              | 126ms        | 83ms        | −43ms (more consistent) |

## Tests

- `scripts/run_tests.sh tests/tui_gateway/ tests/tools/test_mcp_tool.py`:
  195 passed.  (The one pre-existing failure in
  `test_session_resume_returns_hydrated_messages` reproduces on main —
  unrelated, it's a mock-DB kwarg mismatch.)
- `ui-tui` vitest: 430 tests, all pass.
- `npm run type-check` in ui-tui: clean.

## Notes

- Node-side first paint ("banner") didn't move meaningfully because that
  latency is dominated by Ink's render pipeline + React mount, not by
  which imports load first.
- The win shows up entirely in the time from banner to `gateway.ready`
  — exactly where we expected it, since both fixes shorten the Python
  gateway's boot path or let it overlap more with Node startup.
- No user-visible behaviour change. Memory monitoring still fires every
  10s; MCP still works when `mcp_servers` is configured.
2026-04-28 19:42:31 -05:00
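The config-gated import in fix 2 can be sketched as below. `load_mcp_tool` and the stand-in module name are hypothetical, but the shape follows the commit message: a cheap config probe, skip the heavy import when no servers are declared, and a conservative fallback to the old always-import path if the probe itself fails.

```python
import importlib
import json
import pathlib

# Stand-in module name so this sketch actually imports something from the
# stdlib; the real code gates `tools.mcp_tool`, which drags in the MCP SDK.
_HEAVY_MODULE = "json"

def load_mcp_tool(config_path):
    """Import the heavy module only when the config declares `mcp_servers`."""
    try:
        cfg = json.loads(pathlib.Path(config_path).read_text())
        if not cfg.get("mcp_servers"):
            return None  # no servers configured: skip the expensive import entirely
    except Exception:
        pass  # probe failed: conservatively fall through to the old always-import path
    return importlib.import_module(_HEAVY_MODULE)
```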
brooklyn!
6b09df39be
fix(tui): restore macOS copy behavior and theme polish (#17131)
This PR groups the TUI fixes that restore macOS Terminal usability and clean up the theme/composer regressions:

- copy transcript selections on macOS drag-release so Terminal.app users can copy while mouse tracking is enabled
- copy composer selections on macOS drag-release; composer selection is internal to TextInput and does not use the global Ink selection bus
- keep IDE Cmd+C forwarding setup macOS-only, and make keybinding conflict checks respect simple when-clause overlap/negation
- force truecolor before chalk initializes (unless NO_COLOR / FORCE_COLOR / HERMES_TUI_TRUECOLOR opt-outs apply) so the default banner keeps its gold/amber/bronze gradient in Terminal.app
- move TUI surfaces onto semantic theme tokens and preserve skin prompt symbols as bare tokens with renderer-owned spacing
- render focused placeholders as dim hint text in TTY mode instead of inverse/selected-looking synthetic cursor text
2026-04-28 18:47:14 -05:00
Austin Pickett
c3d39feb3a feat(latex): latex in tui 2026-04-28 19:08:11 -04:00
Brooklyn Nicholson
d81b1cd86c chore: uptick 2026-04-26 22:22:31 -05:00
Brooklyn Nicholson
ffa33e53f6 chore(tui): remove dead branch cleanup code
- drop unused TUI helpers, test-only layout scaffolding, and stale public debug exports
- remove an unused profiler import and trim test-only coverage for deleted helpers
2026-04-26 21:54:24 -05:00
Brooklyn Nicholson
b51c528613 fix(tui): address virtual row and perf log review notes
Keep transcript row keys stable across capped-history trims and rename React Profiler timestamp fields so JSONL consumers don't confuse absolute timestamps with durations.
2026-04-26 21:37:43 -05:00
Brooklyn Nicholson
b1c49d5e73 chore(tui): /clean recent perf work — KISS/DRY pass
24 files, -319 LoC. Behaviour preserved, 369/369 tests green.

- hermes-ink caches: shared lruEvict helper for the four parallel LRU
  caches (stringWidth, wrapText, sliceAnsi, lineWidth); touch-on-read
  stays inlined per cache; tightened output.ts skip-slice fast path.
- wheelAccel: trimmed provenance header, collapsed env parsing, ternary
  dispatch in computeWheelStep.
- perfPane: folded ensureLogDir into once-flag, spread-with-overrides
  for fastPath/phases instead of full rebuilds.
- env: extracted truthy() (used 4×).
- virtualHeights: collapsed user/diff/slash height bumps; trail+todos
  estimate.
- useInputHandlers: scrollIdleTimer cleanup on unmount, ?? undefined
  shorthand.
- useMainApp: dropped dead liveTailVisible IIFE and liveProgress
  indirection.
- appLayout, markdown, messageLine, entry: vertical rhythm, dropped
  narration comments, inlined one-shot vars.
- fix: empty catch blocks → /* best-effort */ for no-empty lint.
2026-04-26 20:38:47 -05:00
Brooklyn Nicholson
527ac351b4 fix(tui): address Copilot review comments
- stringWidth: true LRU on cache hit (touch-on-read via delete+set) so
  hot strings stay resident under long sessions; was insertion-order
  FIFO before
- virtualHeights: include todos, panel sections, and intro version in
  messageHeightKey so height-cache reuse correctly invalidates when
  todo content / panel sections change
- virtualHeights: estimate trail+todos rows at todos.length+2 (or 2
  collapsed) instead of the generic ~1-line fallback, so initial
  virtualization offsets are closer to reality
- useInputHandlers: clearTimeout on unmount for scrollIdleTimer so
  pending relaxStreaming() never fires after teardown
- render-node-to-output: drop unused declined.noHint counter from
  scrollFastPathStats; it was always 0 (the "hint missing" branch is
  outside the diagnostics block)
- perfPane / hermes-ink.d.ts: follow the noHint removal
- wheelAccel: replace ~/claude-code path comment with generic
  attribution that doesn't reference a developer-local checkout
2026-04-26 20:07:41 -05:00
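The delete+set idiom that turns an insertion-ordered map into a true LRU (versus the FIFO it was before) looks like this in Python, whose dicts preserve insertion order just like JS Maps; a minimal sketch:

```python
class TouchOnReadLRU:
    """Tiny LRU built on dict insertion order (sketch of the delete+set idiom)."""

    def __init__(self, max_size):
        self.max_size = max_size
        self._d = {}

    def get(self, key):
        if key not in self._d:
            return None
        value = self._d.pop(key)  # delete...
        self._d[key] = value      # ...and re-insert: key is now the newest entry
        return value

    def set(self, key, value):
        self._d.pop(key, None)
        self._d[key] = value
        if len(self._d) > self.max_size:
            # Evict the true least-recently-used entry, not merely the
            # first-inserted one: reads above already moved hot keys to the end.
            self._d.pop(next(iter(self._d)))
```

Without the touch-on-read, a hot string read every frame would still be evicted on schedule; with it, hot strings stay resident for the life of a long session.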
Brooklyn Nicholson
25767513f2 perf(tui): unified Ink cache eviction on memory pressure + session reset
Adds an `evictInkCaches(level)` API that prunes the four hot module-level
caches (`widthCache`, `wrapCache`, `sliceCache`, `lineWidthCache`) with
either a half-keep LRU pass or a full clear. Wired into:

- memoryMonitor: half-prune on 'high', full drop on 'critical', before
  the heap dump / auto-restart path. Gives long sessions a shot at
  recovering RSS instead of hard-exiting.
- useSessionLifecycle.resetSession: half-prune so a /new session starts
  with a half-warm pool and the prior session can resume cheaply.

Also: lineWidthCache now uses LRU half-eviction on overflow instead of a
full `cache.clear()`, matching the other three caches.

Comparison vs claude-code: both forks now share the same `prevScreen`
blit + dirty-cascade machinery in render-node-to-output. Their smoothness
came from sibling-memo discipline (every chrome pane memo'd so dirty
cascade doesn't disable transcript blit) — already in place in our
appLayout.tsx (TranscriptPane / ComposerPane / StatusRulePane all memo'd).
Alt-screen is not the cause; both use it. The remaining gap was per-row
CPU on width/wrap/slice, which the previous commit closed.
2026-04-26 19:41:53 -05:00
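The half-keep prune versus full clear split can be sketched over plain insertion-ordered dicts; function names here are illustrative, not the exported `evictInkCaches` API:

```python
def evict_half(cache: dict) -> None:
    """Drop the oldest half of an insertion-ordered cache (half-keep LRU prune)."""
    drop = len(cache) // 2
    for key in list(cache)[:drop]:
        del cache[key]

def evict_caches(caches, level: str) -> None:
    """'critical' -> full clear before the dump/restart path;
    anything else ('high', session reset) -> half-prune, leaving a half-warm pool."""
    for cache in caches:
        if level == "critical":
            cache.clear()
        else:
            evict_half(cache)
```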
Brooklyn Nicholson
c370e2e1e5 perf(tui): cache stringWidth/wrapText/sliceAnsi + skip-slice when line fits clip
CPU profile (Apr 2026, real-user scroll on 11k-line session) showed three
hot loops in the per-frame render path:

  Output.get() per-frame walk:                 24% total
  └─ sliceAnsi(line, from, to) per write:     18% total
  stringWidth(line) chain (cached + JS):      14% total

All three were re-doing identical work every frame: same string → same
clipped slice → same width.

Fixes:

1. Memoize stringWidth (8k-entry LRU) for non-ASCII strings; ASCII fast-path
   skips the cache (inline scan beats Map.get for short ASCII, the >90%
   case). String.charCodeAt scan up to 64 chars is cheaper than the regex
   fallback.

2. Memoize wrapText (4k-entry LRU keyed by maxWidth|wrapType|text) — wrapAnsi
   is pure and the same content reflows identically every frame.

3. Memoize sliceAnsi (4k-entry LRU keyed by start|end|str) for the
   end-defined hot path used by Output.get().

4. Skip the slice entirely in Output.get() when the line already fits the
   clip box (startsBefore=false && endsAfter=false). Most transcript lines
   never exceed their container width, and tokenizing them just to slice
   (line, 0, width) was pure overhead. This single fast-path drops
   sliceAnsi from 18% → ~0% in the profile.

Also tighten virtualization constants (MAX_MOUNTED 260→120, OVERSCAN 40→20,
SLIDE_STEP 25→12) and cap historical-message render at 800 chars / 16
lines via HISTORY_RENDER_MAX_*; messages inside the FULL_RENDER_TAIL_ITEMS
window still render in full so reading-zone behavior is unchanged.

Validation, real-user CPU profile, page-up scroll on 11k-line session:

  Output.get() self-time:     24%   →   0.3%
  sliceAnsi total:            18%   →   not in top 25
  stringWidth family:         14%   →   ~3%
  idle:                     60.7%   →  77.3%

Frame timings (synthetic page-up profile harness):
  dur p95:   ~10ms   →  4.87ms
  dur p99:   25ms+   → 12.80ms
  yoga p99:  ~20ms   →  1.87ms

The remaining CPU in the profile is Yoga layoutNode + React commit,
which is the irreducible work for this UI tree size.
2026-04-26 19:28:09 -05:00
Brooklyn Nicholson
85e9a23efb feat(tui): HERMES_TUI_FPS=1 shows live fps counter
Adds a corner-overlay FPS readout gated on HERMES_TUI_FPS, fed by
ink's onFrame callback (so it's the REAL render rate, not a timer).
Displays fps, last-frame duration, and total frame count, colored by
threshold (green ≥50, yellow ≥30, red below).

Implementation:
  * lib/fpsStore.ts — nanostore atom updated from a trackFrame()
    sink.  Ring buffer of last 30 frame timestamps; fps = 29/elapsed.
    trackFrame is undefined when SHOW_FPS is off so ink's onFrame
    short-circuits at the optional chain.
  * components/fpsOverlay.tsx — tiny <Text> subscriber; returns null
    when SHOW_FPS is off (React skips the subtree entirely).
  * entry.tsx — composes onFrame from logFrameEvent (dev-perf) and
    trackFrame (fps) so both flags can coexist.  When both are off,
    onFrame is undefined and ink never attaches the handler.
  * appLayout.tsx — mounts the overlay as a flex-shrink=0 right-
    aligned Box below the composer, conditional on SHOW_FPS.

Usage:
  HERMES_TUI_FPS=1 hermes --tui
  # bottom right: "  62.3fps ·   0.8ms · #1234" (green/yellow/red)

Intended as a user-facing diagnostic during the scroll-perf tuning
pass — watch the counter drop while holding PageUp to see where
frames go silent, without having to run scripts/profile-tui.py in a
side terminal.

126 files post-compile with React Compiler; 352 tests still pass.
2026-04-26 17:20:47 -05:00
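The ring-buffer fps computation described above (last 30 frame timestamps, fps = 29/elapsed) can be sketched as follows; `FpsMeter` is an illustrative name, not the fpsStore API:

```python
from collections import deque

class FpsMeter:
    """Ring buffer of the last N frame timestamps; fps = (N-1) / elapsed."""

    def __init__(self, size: int = 30):
        self.stamps = deque(maxlen=size)  # old timestamps fall off automatically

    def track_frame(self, now: float) -> None:
        self.stamps.append(now)

    def fps(self) -> float:
        if len(self.stamps) < 2:
            return 0.0
        elapsed = self.stamps[-1] - self.stamps[0]
        # 30 timestamps bracket 29 frame intervals, hence 29/elapsed.
        return (len(self.stamps) - 1) / elapsed if elapsed > 0 else 0.0
```

Because it is fed from the renderer's frame callback rather than a timer, the reading reflects the real render rate, dropping when frames go silent.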
Brooklyn Nicholson
4395c2b007 feat(tui): port claude-code's wheel accel state machine
Replaces the static WHEEL_SCROLL_STEP=1 multiplier on wheel events
with an adaptive accel state machine that infers user intent from
inter-event timing.

Algorithm ported straight from claude-code's
src/components/ScrollKeybindingHandler.tsx.  All tuning constants,
the native/xterm.js path split, the encoder-bounce detection, the
trackpad-burst signature → all theirs.  This file is a mechanical
port into our module structure.

What it does:

  precision click (>500ms gap)   1 row/event   (deliberate scan)
  sustained mouse (40-200ms)     2-6 rows      (decay curve)
  detected wheel bounce          ramps to 15   (sticky wheel-mode)
  trackpad flick (5+ <5ms)       1 row/event   (burst detect)
  direction reversal             reset to base

Two implementation paths:

  * native terminals (ghostty, iTerm2, Kitty, WezTerm) — linear
    window-ramp + optional wheel-mode curve triggered by detected
    encoder bounce.  SGR proportional reporting handled via the
    burst-count guard.

  * xterm.js (VS Code / Cursor / browser terminals) — pure
    exponential-decay curve with fractional carry.  Events arrive
    1-per-notch with no pre-amplification, so the curve is more
    aggressive.

Selected at construction via isXtermJs() from @hermes/ink (now
exported).  Per-user tune via HERMES_TUI_SCROLL_SPEED (alias
CLAUDE_CODE_SCROLL_SPEED for portability).

13 unit tests covering direction flip/bounce/reversal, idle
disengage, trackpad-burst disengage, frac invariants, and the
native vs xterm.js branches.

Profiled under --rate 30 (stress test) and --rate 10 (realistic
sustained scroll): accel ramps to cap=6 at 30Hz burst, decays to
1-3 rows at sparse 10Hz clicks.  Perf is comparable to baseline
because accel IS multiplying step — the win is perceptual (fast
flicks cover distance, slow clicks keep precision), not raw fps.

Companion to the earlier WHEEL_SCROLL_STEP=1 change: that set the
base; this modulates around it.
2026-04-26 17:16:11 -05:00
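The xterm.js-side idea, an exponential ramp keyed to inter-event gaps plus a fractional carry so sub-row amounts accumulate instead of vanishing, can be sketched as below. The constants and curve are illustrative, not the ported claude-code tuning values:

```python
class WheelAccel:
    """Illustrative accel: slow clicks stay at base, rapid events ramp toward cap,
    direction reversal resets, and fractional remainders carry across events."""

    def __init__(self, base: float = 1.0, cap: float = 15.0):
        self.base, self.cap = base, cap
        self.speed = base
        self.frac = 0.0
        self.last_t = None
        self.last_dir = 0

    def step(self, t: float, direction: int) -> int:
        if direction != self.last_dir:
            self.speed, self.frac = self.base, 0.0  # reversal resets to base
        elif self.last_t is not None:
            gap = t - self.last_t
            if gap > 0.5:
                self.speed = self.base  # deliberate click: keep precision
            else:
                # Shrinking gaps grow the step multiplicatively, up to the cap.
                self.speed = min(self.cap, self.speed * (1.0 + max(0.0, 0.2 - gap)))
        self.last_t, self.last_dir = t, direction
        total = self.speed + self.frac
        rows = int(total)
        self.frac = total - rows  # carry the fractional remainder forward
        return rows * direction
```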
Brooklyn Nicholson
f823535db2 perf(tui): instrument stdout drain — rule out terminal parse bottleneck
Adds four fields to FrameEvent.phases and the matching profile
summary:

  optimizedPatches  post-optimize patch count (what's actually
                    written to stdout; the .patches field is
                    pre-optimize)
  writeBytes        UTF-8 byte count of the write this frame
  backpressure      true when Node's stdout.write returned false
                    (Writable buffer full — outer terminal can't
                    keep up)
  prevFrameDrainMs  end-to-end drain time of the PREVIOUS frame's
                    write, captured from stdout.write's 2-arg
                    callback.  Reported on the next frame so the
                    measurement reflects "time until OS flushed
                    the bytes to the terminal fd", not "time until
                    queued in Node".

writeDiffToTerminal() now returns { bytes, backpressure } and
accepts an optional onDrain callback.  Only attached on TTY with
diff; piped/non-TTY stdout bypasses flow control so the callback
would fire synchronously anyway.

Initial measurements under hold-wheel_up against 1106-msg session
(30Hz for 6s):

  patches total    28,888
  optimized total  16,700   (ratio 0.58 — optimizer cuts ~42%)
  writeBytes       42 KB / 10s = 4.2 KB/s throughput
  drainMs p50      0.14 ms   terminal accepts bytes instantly
  drainMs p99      0.85 ms
  backpressure     0% of frames

This rules out the terminal-parse hypothesis — Cursor's xterm.js
drains our output in sub-millisecond time at only 4 KB/s.  The
remaining lag has to be in the render pipeline, not the wire.
Profile output now includes the bytes+drain+backpressure lines to
keep this visible on every subsequent iteration.
2026-04-26 17:06:22 -05:00

Brooklyn Nicholson
cd7a200e6c perf(tui): instrument scroll fast-path decline reasons
Adds scrollFastPathStats counters to render-node-to-output.ts: captures
every time a ScrollBox's DECSTBM scroll hint is generated, records
whether the fast path took it (blit+shift from prevScreen) or declined,
and why. Exposed through hermes-ink's public exports and snapshotted on
every FrameEvent so the profiler harness can correlate decline reasons
with the actual patch/renderer cost per frame.

This is pure observation — no behaviour change. Preparing for the
virtual-history rewrite: the hypothesis was that our topSpacer/
bottomSpacer scheme disqualifies every scroll via heightDelta
mismatch, but the data shows the fast path is actually taken on most
scrolls (19/23 over a 6s PageUp hold through 1100 messages) — the
remaining steady-state renderer cost is Yoga tree traversal, not
the per-frame full redraw I initially suspected.

Declines that do happen correlate with React commits that changed the
mounted range mid-scroll (heightDelta=±3 to ±35). Those are the rarer
cases the virtualization rewrite still needs to address.

No test diffs — instrumentation-only.  Build verified: `tsc --noEmit`
and the full `npm run build` compiler post-pass both complete cleanly.
2026-04-26 16:45:53 -05:00
Brooklyn Nicholson
71eee26640 perf(tui): full-pipeline instrumentation + profiling harness
Extends HERMES_DEV_PERF to capture the complete render pipeline, not
just React commits. Adds scripts/profile-tui.py to drive repeatable
hold-PageUp stress tests against a real long session.

perfPane.tsx:
  Wires ink's onFrame callback (already plumbed through the fork) into
  the same perf.log as the React.Profiler samples. Captures per-phase
  timing (yoga calculateLayout, renderNodeToOutput, screen diff, patch
  optimize, stdout write) plus yoga counters (visited/measured/cache-
  Hits/live) and patch counts per frame.  Events are tagged
  {src: 'react'|'frame'} so jq can split them.  logFrameEvent is
  undefined when HERMES_DEV_PERF is unset, so ink doesn't even attach
  the callback.

entry.tsx:
  Passes logFrameEvent into render().

types/hermes-ink.d.ts:
  Declares FrameEvent + onFrame on RenderOptions so the ui-tui side
  type-checks against the plumbed-through ink option.

scripts/profile-tui.py:
  New harness. Launches the built TUI under a PTY with the longest
  session in state.db resumed, holds PageUp/PageDown/etc at a
  configurable Hz for N seconds, then parses perf.log and prints
  per-phase p50/p95/p99/max plus yoga-counter summaries. Zero deps
  beyond stdlib. Exit 2 if nothing was captured (wiring broken).

Initial findings (1106-msg session, 6s PageUp hold at 30Hz):
  - Steady state: 10 fps; renderer phase p99=63ms, write p99=0.2ms
  - 4/107 heavy frames (>=16ms), all dominated by renderNodeToOutput
  - One pathological 97ms frame with yoga measuring 70,415 text cells
    and Yoga visiting 225k nodes — the cold-unmeasured-region hit
  - Ink's scroll fast-path (DECSTBM blit from prevScreen) is
    disqualified because our spacer-based virtual history doesn't
    keep heightDelta in sync with scroll.delta, so every PageUp step
    falls through to a full 2000-4800 patch re-render instead of ~40
2026-04-26 16:36:25 -05:00
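The per-phase p50/p95/p99/max summary the harness prints can be computed with a simple nearest-rank percentile; a minimal sketch (function names are illustrative, not the harness's actual API):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample with at least p% of
    the samples at or below it."""
    s = sorted(samples)
    k = max(1, math.ceil(p / 100.0 * len(s)))
    return s[k - 1]

def summarize(durations_ms):
    """Per-phase summary in the shape the harness prints (p50/p95/p99/max)."""
    return {
        "p50": percentile(durations_ms, 50),
        "p95": percentile(durations_ms, 95),
        "p99": percentile(durations_ms, 99),
        "max": max(durations_ms),
    }
```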
Brooklyn Nicholson
bde89c169b fix(cli): -c picks the most recently used session 2026-04-26 16:17:39 -05:00
Brooklyn Nicholson
c78b528125 feat(tui): archive todos at turn end with incomplete hint 2026-04-26 16:14:58 -05:00
Brooklyn Nicholson
319c1c1691 fix(tui): inline todo in transcript, group across thinking 2026-04-26 16:09:28 -05:00
Brooklyn Nicholson
4943ea2a7c fix(tui): merge tools into contextual shelves 2026-04-26 16:00:38 -05:00
Brooklyn Nicholson
a5319fb7af test(tui): cover live todo completion flow 2026-04-26 15:56:08 -05:00
Brooklyn Nicholson
f5552f92e2 fix(tui): stabilize live todo progress 2026-04-26 15:55:38 -05:00
Brooklyn Nicholson
6a3873942f fix(tui): format thinking paragraphs 2026-04-26 15:38:18 -05:00
Brooklyn Nicholson
cf8439263a fix(tui): keep todo pinned outside transcript 2026-04-26 15:33:01 -05:00
Brooklyn Nicholson
3271ffbd80 fix(tui): pin todo panel above live output 2026-04-26 15:27:31 -05:00
Brooklyn Nicholson
a7831b63db fix(tui): stabilize live progress rendering 2026-04-26 15:23:43 -05:00
Brooklyn Nicholson
d4dde6b5f2 fix(tui): restore resumed transcript lineage 2026-04-26 15:16:12 -05:00
Brooklyn Nicholson
7b5b524fc7 refactor(tui): clean thinking and viewport helpers 2026-04-26 14:03:36 -05:00
Brooklyn Nicholson
c9f7b703dd fix(tui): filter thinking status noise 2026-04-26 13:59:56 -05:00