Copilot caught that clearing `inFlight` on a transient normal-memory tick could
allow a second dump/eviction to start before the first async tick completed.
Only clear `dumped` on normal; let the in-flight tick's `finally` remove its own
level.
Tests:
- cd ui-tui && npm run type-check && npm run build
Clean up the remaining review nits:
- let the deferred @hermes/ink import retry after a transient failure instead
of memoizing a rejected promise forever
- keep memory-monitor in-flight state inside a finally so future exceptions
cannot suppress that memory level indefinitely
- use read_raw_config for the TUI MCP cold-start probe instead of full
load_config()
- keep input.detect_drop for explicit relative path prefixes (./ and ../)
while preserving the no-RPC fast path for ordinary plain prompts
Tests:
- python -m py_compile tui_gateway/server.py tui_gateway/entry.py
- cd ui-tui && npm run type-check && npm run build
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
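The import-retry nit above can be sketched as a memoized async loader that forgets a rejected promise so the next call tries again (a minimal sketch; `retryableOnce` and the loader body are illustrative names, not the actual memoryMonitor code):

```typescript
// Memoize a dynamic-import-style loader, but drop the cached promise on
// rejection so a transient failure doesn't poison every future call.
function retryableOnce<T>(load: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => {
    if (!cached) {
      cached = load().catch((err) => {
        cached = undefined; // forget the rejected promise so we retry
        throw err; // still surface the failure to this caller
      });
    }
    return cached;
  };
}
```

In the real code the loader would be something like `() => import('@hermes/ink')`; the same pattern applies to any deferred import.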
Copilot correctly flagged two concurrency windows:
- memoryMonitor could re-enter while awaiting the lazy @hermes/ink import or
heap dump, producing duplicate imports/dumps under sustained pressure.
- _start_agent_build used a check-then-set guard without synchronization, so
concurrent agent-backed RPCs could start duplicate agent builders.
Fix both with single-flight guards: cache the dynamic import promise and track
per-level dump in-flight state in memoryMonitor, and protect the TUI agent build
flag with a per-session lock.
Tests:
- python -m py_compile tui_gateway/server.py
- cd ui-tui && npm run type-check && npm run build
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
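The per-level single-flight guard can be sketched like this (illustrative names; the real monitor tracks dump state per memory level and clears it in `finally` so a thrown dump can't wedge the level):

```typescript
const inFlight = new Set<string>();

// Single-flight per level: a second trigger while a dump for the same
// level is still awaiting is skipped instead of starting a duplicate.
async function dumpOnce(level: string, dump: () => Promise<void>): Promise<boolean> {
  if (inFlight.has(level)) return false;
  inFlight.add(level);
  try {
    await dump();
    return true;
  } finally {
    inFlight.delete(level); // always release, even if dump() throws
  }
}
```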
Keep the latest prompt sticky while the viewport is in live assistant output beyond history, and clear stale sticky state at the real bottom using fresh scroll height.
Two targeted fixes on the critical path from `hermes --tui` launch to
`gateway.ready`:
1. **Defer `@hermes/ink` import in memoryMonitor.ts.** The static top-level
import dragged the full ~414KB Ink bundle (React + renderer + all
components/hooks) onto the critical path *before* `gw.start()` could
spawn the Python gateway — serialising ~155ms of Node work in front of
it on every launch. `evictInkCaches` only runs inside the 10-second
tick under heap pressure, so it moves to a lazy dynamic import. First
tick hits the ESM cache because the app entry has long since imported
`@hermes/ink`.
2. **Gate `tools.mcp_tool` import on config in tui_gateway/entry.py.**
Importing the module transitively pulls the MCP SDK + pydantic + httpx
+ jsonschema + starlette formparsers (~200ms). The overwhelming
majority of users have no `mcp_servers` configured, so this runs for
nothing. A cheap `load_config()` check (~25ms) skips the 200ms import
when no servers are declared, with a conservative fallback to the old
behaviour if the config probe itself fails.
## Measurements (macOS Terminal.app, Apple Silicon, n=12)
| Metric | Before (p50) | After (p50) | Δ |
|----------------------------|--------------|-------------|----------|
| Python gateway boot alone | 252–365ms | 105–151ms | −180ms |
| `hermes --tui` banner paint | 686ms | 665ms | −21ms |
| `hermes --tui` → ready | **1843ms** | **1655ms** | **−188ms (−10.2%)** |
| `hermes --tui` → ready p90 | 1932ms | 1778ms | −154ms |
| stdev (ready)              | 126ms        | 83ms        | −43ms (more consistent) |
## Tests
- `scripts/run_tests.sh tests/tui_gateway/ tests/tools/test_mcp_tool.py`:
195 passed. (The one pre-existing failure in
`test_session_resume_returns_hydrated_messages` reproduces on main —
unrelated, it's a mock-DB kwarg mismatch.)
- `ui-tui` vitest: 430 tests, all pass.
- `npm run type-check` in ui-tui: clean.
## Notes
- Node-side first paint ("banner") didn't move meaningfully because that
latency is dominated by Ink's render pipeline + React mount, not by
which imports load first.
- The win shows up entirely in the time from banner to `gateway.ready`
— exactly where we expected it, since both fixes shorten the Python
gateway's boot path or let it overlap more with Node startup.
- No user-visible behaviour change. Memory monitoring still fires every
10s; MCP still works when `mcp_servers` is configured.
This PR groups the TUI fixes that restore macOS Terminal usability and clean up the theme/composer regressions:
- copy transcript selections on macOS drag-release so Terminal.app users can copy while mouse tracking is enabled
- copy composer selections on macOS drag-release; composer selection is internal to TextInput and does not use the global Ink selection bus
- keep IDE Cmd+C forwarding setup macOS-only, and make keybinding conflict checks respect simple when-clause overlap/negation
- force truecolor before chalk initializes (unless NO_COLOR / FORCE_COLOR / HERMES_TUI_TRUECOLOR opt-outs apply) so the default banner keeps its gold/amber/bronze gradient in Terminal.app
- move TUI surfaces onto semantic theme tokens and preserve skin prompt symbols as bare tokens with renderer-owned spacing
- render focused placeholders as dim hint text in TTY mode instead of inverse/selected-looking synthetic cursor text
- drop unused TUI helpers, test-only layout scaffolding, and stale public debug exports
- remove an unused profiler import and trim test-only coverage for deleted helpers
- stringWidth: true LRU on cache hit (touch-on-read via delete+set) so
hot strings stay resident under long sessions; was insertion-order
FIFO before
- virtualHeights: include todos, panel sections, and intro version in
messageHeightKey so height-cache reuse correctly invalidates when
todo content / panel sections change
- virtualHeights: estimate trail+todos rows at todos.length+2 (or 2
collapsed) instead of the generic ~1-line fallback, so initial
virtualization offsets are closer to reality
- useInputHandlers: clearTimeout on unmount for scrollIdleTimer so
pending relaxStreaming() never fires after teardown
- render-node-to-output: drop unused declined.noHint counter from
scrollFastPathStats; it was always 0 (the "hint missing" branch is
outside the diagnostics block)
- perfPane / hermes-ink.d.ts: follow the noHint removal
- wheelAccel: replace ~/claude-code path comment with generic
attribution that doesn't reference a developer-local checkout
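The touch-on-read trick relies on `Map` preserving insertion order: deleting and re-inserting a hit moves it to the newest end, so evicting the first key removes the least recently used entry. A minimal sketch (generic helpers, not the actual stringWidth cache):

```typescript
// Touch-on-read: re-insert a hit so it becomes the newest entry.
function lruGet<K, V>(cache: Map<K, V>, key: K): V | undefined {
  if (!cache.has(key)) return undefined;
  const value = cache.get(key)!;
  cache.delete(key);
  cache.set(key, value); // now the most recently used entry
  return value;
}

// Insert with a size cap; evict the oldest (first) key on overflow.
function lruSet<K, V>(cache: Map<K, V>, key: K, value: V, max: number): void {
  if (cache.has(key)) cache.delete(key);
  cache.set(key, value);
  if (cache.size > max) {
    cache.delete(cache.keys().next().value!); // least recently used
  }
}
```

Without the touch-on-read step, a capped `Map` degrades to insertion-order FIFO, which is exactly what the fix replaces.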
Adds an `evictInkCaches(level)` API that prunes the four hot module-level
caches (`widthCache`, `wrapCache`, `sliceCache`, `lineWidthCache`) with
either a half-keep LRU pass or a full clear. Wired into:
- memoryMonitor: half-prune on 'high', full drop on 'critical', before
the heap dump / auto-restart path. Gives long sessions a shot at
recovering RSS instead of hard-exiting.
- useSessionLifecycle.resetSession: half-prune so a /new session starts
with a half-warm pool and the prior session can resume cheaply.
Also: lineWidthCache now uses LRU half-eviction on overflow instead of a
full `cache.clear()`, matching the other three caches.
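The half-keep pass can be sketched as dropping the older half of a `Map` in iteration order (insertion order approximates recency once reads re-insert on hit; the helper name is illustrative):

```typescript
// Drop the older half of the cache, keep the newer (hotter) half.
// Deleting the current key while iterating Map.keys() is safe per spec.
function halfPrune<K, V>(cache: Map<K, V>): void {
  const drop = Math.floor(cache.size / 2);
  let i = 0;
  for (const key of cache.keys()) {
    if (i++ >= drop) break;
    cache.delete(key);
  }
}
```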
Comparison vs claude-code: both forks now share the same `prevScreen`
blit + dirty-cascade machinery in render-node-to-output. Their smoothness
came from sibling-memo discipline (every chrome pane memo'd so dirty
cascade doesn't disable transcript blit) — already in place in our
appLayout.tsx (TranscriptPane / ComposerPane / StatusRulePane all memo'd).
Alt-screen is not the cause; both use it. The remaining gap was per-row
CPU on width/wrap/slice, which the previous commit closed.
CPU profile (Apr 2026, real-user scroll on 11k-line session) showed three
hot loops in the per-frame render path:
Output.get() per-frame walk: 24% total
└─ sliceAnsi(line, from, to) per write: 18% total
stringWidth(line) chain (cached + JS): 14% total
All three were re-doing identical work every frame: same string → same
clipped slice → same width.
Fixes:
1. Memoize stringWidth (8k-entry LRU) for non-ASCII strings; ASCII fast-path
skips the cache (inline scan beats Map.get for short ASCII, the >90%
case). String.charCodeAt scan up to 64 chars is cheaper than the regex
fallback.
2. Memoize wrapText (4k-entry LRU keyed by maxWidth|wrapType|text) — wrapAnsi
is pure and the same content reflows identically every frame.
3. Memoize sliceAnsi (4k-entry LRU keyed by start|end|str) for the
end-defined hot path used by Output.get().
4. Skip the slice entirely in Output.get() when the line already fits the
clip box (startsBefore=false && endsAfter=false). Most transcript lines
never exceed their container width, and tokenizing them just to slice
(line, 0, width) was pure overhead. This single fast-path drops
sliceAnsi from 18% → ~0% in the profile.
Also tighten virtualization constants (MAX_MOUNTED 260→120, OVERSCAN 40→20,
SLIDE_STEP 25→12) and cap historical-message render at 800 chars / 16
lines via HISTORY_RENDER_MAX_*; messages inside the FULL_RENDER_TAIL_ITEMS
window still render in full so reading-zone behavior is unchanged.
Validation, real-user CPU profile, page-up scroll on 11k-line session:
Output.get() self-time: 24% → 0.3%
sliceAnsi total: 18% → not in top 25
stringWidth family: 14% → ~3%
idle: 60.7% → 77.3%
Frame timings (synthetic page-up profile harness):
dur p95: ~10ms → 4.87ms
dur p99: 25ms+ → 12.80ms
yoga p99: ~20ms → 1.87ms
The remaining CPU in the profile is Yoga layoutNode + React commit,
which is the irreducible work for this UI tree size.
Adds a corner-overlay FPS readout gated on HERMES_TUI_FPS, fed by
ink's onFrame callback (so it's the REAL render rate, not a timer).
Displays fps, last-frame duration, and total frame count, colored by
threshold (green ≥50, yellow ≥30, red below).
Implementation:
* lib/fpsStore.ts — nanostore atom updated from a trackFrame()
sink. Ring buffer of last 30 frame timestamps; fps = 29/elapsed.
trackFrame is undefined when SHOW_FPS is off so ink's onFrame
short-circuits at the optional chain.
* components/fpsOverlay.tsx — tiny <Text> subscriber; returns null
when SHOW_FPS is off (React skips the subtree entirely).
* entry.tsx — composes onFrame from logFrameEvent (dev-perf) and
trackFrame (fps) so both flags can coexist. When both are off,
onFrame is undefined and ink never attaches the handler.
* appLayout.tsx — mounts the overlay as a flex-shrink=0 right-
aligned Box below the composer, conditional on SHOW_FPS.
Usage:
HERMES_TUI_FPS=1 hermes --tui
# bottom right: " 62.3fps · 0.8ms · #1234" (green/yellow/red)
Intended as a user-facing diagnostic during the scroll-perf tuning
pass — watch the counter drop while holding PageUp to see where
frames go silent, without having to run scripts/profile-tui.py in a
side terminal.
126 files post-compile with React Compiler; 352 tests still pass.
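The ring-buffer math can be sketched as: keep the last 30 frame timestamps and divide the 29 intervals by the elapsed window (a sketch of the described fpsStore mechanics; names and the injected timestamp are illustrative):

```typescript
const WINDOW = 30;
const stamps: number[] = [];

// Record one frame timestamp; return the fps over the current window,
// or undefined until there are at least two stamps.
function trackFrame(nowMs: number): number | undefined {
  stamps.push(nowMs);
  if (stamps.length > WINDOW) stamps.shift(); // ring of last 30
  if (stamps.length < 2) return undefined;
  const elapsedMs = stamps[stamps.length - 1] - stamps[0];
  if (elapsedMs <= 0) return undefined;
  // (count - 1) intervals span the window; at 30 stamps this is 29/elapsed
  return ((stamps.length - 1) * 1000) / elapsedMs;
}
```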
Replaces the static WHEEL_SCROLL_STEP=1 multiplier on wheel events
with an adaptive accel state machine that infers user intent from
inter-event timing.
Algorithm ported straight from claude-code's
src/components/ScrollKeybindingHandler.tsx. All tuning constants,
the native/xterm.js path split, the encoder-bounce detection, the
trackpad-burst signature → all theirs. This file is a mechanical
port into our module structure.
What it does:
precision click (>500ms gap) 1 row/event (deliberate scan)
sustained mouse (40-200ms) 2-6 rows (decay curve)
detected wheel bounce ramps to 15 (sticky wheel-mode)
trackpad flick (5+ <5ms) 1 row/event (burst detect)
direction reversal reset to base
Two implementation paths:
* native terminals (ghostty, iTerm2, Kitty, WezTerm) — linear
window-ramp + optional wheel-mode curve triggered by detected
encoder bounce. SGR proportional reporting handled via the
burst-count guard.
* xterm.js (VS Code / Cursor / browser terminals) — pure
exponential-decay curve with fractional carry. Events arrive
1-per-notch with no pre-amplification, so the curve is more
aggressive.
Selected at construction via isXtermJs() from @hermes/ink (now
exported). Per-user tune via HERMES_TUI_SCROLL_SPEED (alias
CLAUDE_CODE_SCROLL_SPEED for portability).
13 unit tests covering direction flip/bounce/reversal, idle
disengage, trackpad-burst disengage, frac invariants, and the
native vs xterm.js branches.
Profiled under --rate 30 (stress test) and --rate 10 (realistic
sustained scroll): accel ramps to cap=6 at 30Hz burst, decays to
1-3 rows at sparse 10Hz clicks. Perf is comparable to baseline
because accel IS multiplying step — the win is perceptual (fast
flicks cover distance, slow clicks keep precision), not raw fps.
Companion to the earlier WHEEL_SCROLL_STEP=1 change: that set the
base; this modulates around it.
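The xterm.js-side curve can be sketched as exponential decay on the inter-event gap plus a fractional carry so sub-row amounts accumulate instead of being truncated away (constants here are illustrative, not the ported tuning; bounce detection and direction-reversal reset are omitted):

```typescript
const BASE = 1;       // rows per deliberate, widely spaced click
const MAX_STEP = 15;  // accel ceiling
const DECAY_MS = 80;  // gap at which the multiplier neither grows nor decays

let lastEventMs = -Infinity;
let accel = BASE;
let carry = 0;

function wheelRows(nowMs: number): number {
  const gap = nowMs - lastEventMs;
  lastEventMs = nowMs;
  if (gap > 500) {
    accel = BASE; // long pause: back to precision scrolling
  } else {
    // tighter gaps grow the multiplier, sparser gaps shrink it
    accel = Math.min(MAX_STEP, accel * Math.exp((DECAY_MS - gap) / DECAY_MS));
  }
  const exact = accel + carry;
  const rows = Math.floor(exact);
  carry = exact - rows; // fractional carry: keep the sub-row remainder
  return rows;
}
```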
Adds four fields to FrameEvent.phases and the matching profile
summary:
optimizedPatches post-optimize patch count (what's actually
written to stdout; the .patches field is
pre-optimize)
writeBytes UTF-8 byte count of the write this frame
backpressure true when Node's stdout.write returned false
(Writable buffer full — outer terminal can't
keep up)
prevFrameDrainMs end-to-end drain time of the PREVIOUS frame's
write, captured from stdout.write's 2-arg
callback. Reported on the next frame so the
measurement reflects "time until OS flushed
the bytes to the terminal fd", not "time until
queued in Node".
writeDiffToTerminal() now returns { bytes, backpressure } and
accepts an optional onDrain callback. Only attached on TTY with
diff; piped/non-TTY stdout bypasses flow control so the callback
would fire synchronously anyway.
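The measurement shape can be sketched with an injected writer so the sketch is testable (the real code attaches this to `process.stdout`, TTY-with-diff only):

```typescript
type WriteFn = (chunk: string, cb: () => void) => boolean;

// stdout.write's boolean return signals backpressure (Writable buffer
// full); its completion callback marks when the bytes were flushed, so
// callback time minus start time is the drain duration for this write.
function writeWithDrain(
  write: WriteFn,
  chunk: string,
  onDrain: (ms: number) => void,
): { bytes: number; backpressure: boolean } {
  const start = Date.now();
  const accepted = write(chunk, () => onDrain(Date.now() - start));
  return {
    bytes: new TextEncoder().encode(chunk).length, // UTF-8 byte count
    backpressure: !accepted,
  };
}
```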
Initial measurements under hold-wheel_up against 1106-msg session
(30Hz for 6s):
patches total 28,888
optimized total 16,700 (ratio 0.58 — optimizer cuts ~42%)
writeBytes 42 KB / 10s = 4.2 KB/s throughput
drainMs p50 0.14 ms terminal accepts bytes instantly
drainMs p99 0.85 ms
backpressure 0% of frames
This rules out the terminal-parse hypothesis — Cursor's xterm.js
drains our output in sub-millisecond time at only 4 KB/s. The
remaining lag has to be in the render pipeline, not the wire.
Profile output now includes the bytes+drain+backpressure lines to
keep this visible on every subsequent iteration.
Adds scrollFastPathStats counters to render-node-to-output.ts: captures
every time a ScrollBox's DECSTBM scroll hint is generated, records
whether the fast path took it (blit+shift from prevScreen) or declined,
and why. Exposed through hermes-ink's public exports and snapshotted on
every FrameEvent so the profiler harness can correlate decline reasons
with the actual patch/renderer cost per frame.
This is pure observation — no behaviour change. Preparing for the
virtual-history rewrite: the hypothesis was that our topSpacer/
bottomSpacer scheme disqualifies every scroll via heightDelta
mismatch, but the data shows the fast path is actually taken on most
scrolls (19/23 over a 6s PageUp hold through 1100 messages) — the
remaining steady-state renderer cost is Yoga tree traversal, not
the per-frame full redraw I initially suspected.
Declines that do happen correlate with React commits that changed the
mounted range mid-scroll (heightDelta=±3 to ±35). Those are the rarer
cases the virtualization rewrite still needs to address.
No test diffs — instrumentation-only. Build verified: `tsc --noEmit` and the
full `npm run build` compiler post-pass both run cleanly.
Extends HERMES_DEV_PERF to capture the complete render pipeline, not
just React commits. Adds scripts/profile-tui.py to drive repeatable
hold-PageUp stress tests against a real long session.
perfPane.tsx:
Wires ink's onFrame callback (already plumbed through the fork) into
the same perf.log as the React.Profiler samples. Captures per-phase
timing (yoga calculateLayout, renderNodeToOutput, screen diff, patch
optimize, stdout write) plus yoga counters (visited/measured/cache-
Hits/live) and patch counts per frame. Events are tagged
{src: 'react'|'frame'} so jq can split them. logFrameEvent is
undefined when HERMES_DEV_PERF is unset, so ink doesn't even attach
the callback.
entry.tsx:
Passes logFrameEvent into render().
types/hermes-ink.d.ts:
Declares FrameEvent + onFrame on RenderOptions so the ui-tui side
type-checks against the plumbed-through ink option.
scripts/profile-tui.py:
New harness. Launches the built TUI under a PTY with the longest
session in state.db resumed, holds PageUp/PageDown/etc at a
configurable Hz for N seconds, then parses perf.log and prints
per-phase p50/p95/p99/max plus yoga-counter summaries. Zero deps
beyond stdlib. Exit 2 if nothing was captured (wiring broken).
Initial findings (1106-msg session, 6s PageUp hold at 30Hz):
- Steady state: 10 fps; renderer phase p99=63ms, write p99=0.2ms
- 4/107 heavy frames (>=16ms), all dominated by renderNodeToOutput
- One pathological 97ms frame with yoga measuring 70,415 text cells
and Yoga visiting 225k nodes — the cold-unmeasured-region hit
- Ink's scroll fast-path (DECSTBM blit from prevScreen) is
disqualified because our spacer-based virtual history doesn't
keep heightDelta in sync with scroll.delta, so every PageUp step
falls through to a full 2000-4800 patch re-render instead of ~40
- resolveEditor() now returns argv (string[]) so EDITOR='code --wait'
and VISUAL='emacsclient -t' tokenize correctly into spawnSync's
separate command + args. Previously the whole string was passed as
argv[0] and would ENOENT.
- Skip the POSIX X_OK PATH walk on Windows; return ['notepad.exe']
there since fs.constants.X_OK is not meaningful and PATHEXT-based
resolution would need its own implementation.
- Surface openEditor() rejections via actions.sys instead of letting
them become unhandled promise rejections in the useInput callback.
- Hotkey docs/comment now say Cmd/Ctrl+G to match isAction()'s
platform-action-modifier behavior (Cmd on macOS, Ctrl elsewhere).
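The tokenization fix can be sketched as a whitespace split into argv (a sketch only: full shell quoting, e.g. paths containing spaces, is out of scope here and may differ from the shipped resolver):

```typescript
// Split an $EDITOR/$VISUAL value like "code --wait" into command + args
// so spawnSync receives them separately instead of one bogus argv[0].
function editorArgv(value: string): string[] {
  return value.trim().split(/\s+/).filter(Boolean);
}
```

A caller would then run `spawnSync(argv[0], argv.slice(1), { stdio: 'inherit' })` rather than passing the whole string as the command.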
- editor.ts: collapse two private helpers into one flatMap-driven lookup,
keep `isExecutable` as the only named primitive, document the fallback
chain with prompt_toolkit parity
- editor.test.ts: hoist the `exe` helper out of `describe`, drop the
empty afterEach + dead mkdir branch, materialize expected paths before
the resolveEditor call so argument evaluation order doesn't bite
- useComposerState.openEditor: rmSync the mkdtemp dir (was leaking),
early-return on bad exit / empty buffer, run cleanup in finally
- useInputHandlers: cheap `ch.toLowerCase() === 'g'` guard before the
modifier check
- hermes-ink/screen.ts: pick up `npm run fix` import-sort cleanup so
lint passes
Base CLI's editor UX was better because prompt_toolkit picks the system
editor first, then friendly terminal editors before vi. Do not override
that with a vim-first chain.
Keep the CLI on prompt_toolkit's picker and only set tempfile_suffix='.md'
to avoid the complex-tempfile EEXIST path. Update the TUI resolver to
match prompt_toolkit's fallback order: $VISUAL, $EDITOR, editor, nano,
pico, vi, emacs.
Setting buffer.tempfile = 'prompt.md' pushed prompt_toolkit into its
complex-tempfile path, which creates a temp dir and then calls
os.makedirs() on that same path when no subdirectory is present. That
raises EEXIST before the editor can launch.
Keep prompt_toolkit on the simple tempfile path with .md suffix, and
make the editor fallback chain explicit on both surfaces:
$VISUAL -> $EDITOR -> nvim -> vim -> vi -> nano.
prompt_toolkit's default editor list is: $VISUAL, $EDITOR, /usr/bin/editor,
/usr/bin/nano, /usr/bin/pico, /usr/bin/vi, /usr/bin/emacs — so when
neither env var is set, the base CLI launched nano. The TUI fell back
to a literal 'vi'. Same Ctrl+G keystroke, two different editors.
Pick the same chain on both surfaces:
$VISUAL → $EDITOR → vim → vi → nano
CLI: override input_area.buffer._open_file_in_editor on the TextArea
once at app build time. Local to that buffer; doesn't touch
os.environ or affect other subprocesses.
TUI: extract resolveEditor() into ui-tui/src/lib/editor.ts. PATH walk
with accessSync(X_OK), no shelling out. Six-line unit test verifies
the priority order and the multi-entry PATH walk.
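The PATH walk's priority semantics can be sketched with an injected executable check (deterministic for the test; the real resolver calls `accessSync(path, X_OK)`):

```typescript
// Walk candidates in priority order, then PATH dirs in order, returning
// the first executable hit. Candidate order outranks PATH order.
function resolveFromPath(
  candidates: string[],
  pathDirs: string[],
  isExecutable: (p: string) => boolean,
): string | undefined {
  for (const name of candidates) {
    for (const dir of pathDirs) {
      const full = `${dir}/${name}`;
      if (isExecutable(full)) return full;
    }
  }
  return undefined;
}
```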
- accept forwarded Cmd+C for selection copy in SSH sessions even when Hermes runs on Linux
- keep local Linux Alt+C from acting as copy and update TUI hotkey hints for remote shells
- run the requested ui-tui lint+format pass and include resulting formatting updates
- guard text-measure cache eviction key in hermes-ink so ui-tui type-check stays green
When the user runs /voice and then presses Ctrl+B in the TUI, three
handlers collaborate to consume the chord and none of them dispatch
voice.record:
- isAction() is platform-aware — on macOS it requires Cmd (meta/super),
so Ctrl+B fails the match in useInputHandlers and never triggers
voiceStart/voiceStop.
- TextInput's Ctrl+B pass-through list doesn't include 'b', so the
keystroke falls through to the wordMod backward-word branch on Linux
and to the printable-char insertion branch on macOS — the latter is
exactly what timmie reported ("enters a b into the tui").
- /voice emits "voice: on" with no hint, so the user has no way to
know Ctrl+B is the recording toggle.
Introduces isVoiceToggleKey(key, ch) in lib/platform.ts that matches
raw Ctrl+B on every platform (mirrors tips.py and config.yaml's
voice.record_key default) and additionally accepts Cmd+B on macOS so
existing muscle memory keeps working. Wires it into useInputHandlers,
adds Ctrl+B to TextInput's pass-through list so the global handler
actually receives the chord, and appends "press Ctrl+B to record" to
the /voice on message.
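The chord match can be sketched as follows (the key shape mirrors Ink's `useInput` ctrl/meta booleans; `platform` is injected for testability and defaults to `process.platform` — an assumption of the sketch, not the shipped signature):

```typescript
interface KeyLike { ctrl?: boolean; meta?: boolean; }

// Raw Ctrl+B on every platform, plus Cmd+B on macOS for muscle memory.
function isVoiceToggleKey(
  key: KeyLike,
  ch: string,
  platform: string = process.platform,
): boolean {
  if (ch.toLowerCase() !== "b") return false;
  if (key.ctrl) return true;                   // Ctrl+B everywhere
  return platform === "darwin" && !!key.meta;  // Cmd+B only on macOS
}
```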
Empirically verified with hermes --tui: Ctrl+B no longer leaks 'b'
into the composer and now dispatches the voice.record RPC (the
downstream ImportError for hermes_cli.voice is a separate upstream
bug — follow-up patch).
Pull duplicated rules into ui-tui/src/lib/subagentTree so the live overlay,
disk snapshot label, and diff pane all speak one dialect:
- export fmtDuration(seconds) — was a private helper in subagentTree;
agentsOverlay's local secLabel/fmtDur/fmtElapsedLabel now wrap the same
core (with UI-only empty-string policy).
- export topLevelSubagents(items) — matches buildSubagentTree's orphan
semantics (no parent OR parent not in snapshot). Replaces three hand-
rolled copies across createGatewayEventHandler (disk label), agentsOverlay
DiffPane, and prior inline filters.
Also collapse agentsOverlay boilerplate:
- replace IIFE title + inner `delta` helper with straight expressions;
- introduce module-level diffMetricLine for replay-diff rows;
- tighten OverlayScrollbar (single thumbColor expression, vBar/thumbBody).
Adds unit coverage for the new exports (fmtDuration + topLevelSubagents).
No behaviour change; 221 tests pass.
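The orphan semantics can be sketched as: an item is top-level when it has no parent or its parent is absent from the snapshot (illustrative shapes, not the real subagentTree types):

```typescript
interface AgentItem {
  id: string;
  parentId?: string;
}

// Top-level = no parent, or parent not present in this snapshot (orphan).
function topLevelSubagents(items: AgentItem[]): AgentItem[] {
  const ids = new Set(items.map((i) => i.id));
  return items.filter((i) => !i.parentId || !ids.has(i.parentId));
}
```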