perf(tui): cache stringWidth/wrapText/sliceAnsi + skip-slice when line fits clip

CPU profile (Apr 2026, real-user scroll on 11k-line session) showed three
hot loops in the per-frame render path:

  Output.get() per-frame walk:                 24% total
  └─ sliceAnsi(line, from, to) per write:     18% total
  stringWidth(line) chain (cached + JS):      14% total

All three were re-doing identical work every frame: same string → same
clipped slice → same width.

Fixes:

1. Memoize stringWidth (8k-entry LRU) for non-ASCII strings; the ASCII fast
   path skips the cache entirely (an inline scan beats Map.get for short
   ASCII, the >90% case). A String.charCodeAt scan of up to 64 chars is
   cheaper than the regex fallback.

2. Memoize wrapText (4k-entry LRU keyed by maxWidth|wrapType|text) — wrapAnsi
   is pure and the same content reflows identically every frame.

3. Memoize sliceAnsi (4k-entry LRU keyed by start|end|str) for the
   end-defined hot path used by Output.get().

4. Skip the slice entirely in Output.get() when the line already fits the
   clip box (startsBefore=false && endsAfter=false). Most transcript lines
   never exceed their container width, and tokenizing them just to slice
   (line, 0, width) was pure overhead. This single fast-path drops
   sliceAnsi from 18% → ~0% in the profile.

Also tighten virtualization constants (MAX_MOUNTED 260→120, OVERSCAN 40→20,
SLIDE_STEP 25→12) and cap historical-message render at 800 chars / 16
lines via HISTORY_RENDER_MAX_*; messages inside the FULL_RENDER_TAIL_ITEMS
window still render in full so reading-zone behavior is unchanged.
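The history cap can be sketched like this (the `capHistoryText` helper is a hypothetical name; the constants mirror the HISTORY_RENDER_MAX_* values named above):

```typescript
const HISTORY_RENDER_MAX_CHARS = 800
const HISTORY_RENDER_MAX_LINES = 16

// Truncate a historical message to both a char and a line budget; messages
// inside the reading zone bypass this and render in full.
function capHistoryText(text: string): string {
  let out = text.length > HISTORY_RENDER_MAX_CHARS
    ? text.slice(0, HISTORY_RENDER_MAX_CHARS)
    : text
  const lines = out.split('\n')
  if (lines.length > HISTORY_RENDER_MAX_LINES) {
    out = lines.slice(0, HISTORY_RENDER_MAX_LINES).join('\n')
  }
  return out
}
```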

Validation (real-user CPU profile, page-up scroll on an 11k-line session):

  Output.get() self-time:     24%   →   0.3%
  sliceAnsi total:            18%   →   not in top 25
  stringWidth family:         14%   →   ~3%
  idle:                     60.7%   →  77.3%

Frame timings (synthetic page-up profile harness):
  dur p95:   ~10ms   →  4.87ms
  dur p99:   25ms+   → 12.80ms
  yoga p99:  ~20ms   →  1.87ms

The remaining CPU in the profile is Yoga layoutNode + React commit,
which is the irreducible work for this UI tree size.
Brooklyn Nicholson 2026-04-26 19:28:09 -05:00
parent 85e9a23efb
commit c370e2e1e5
14 changed files with 450 additions and 42 deletions


@@ -10,7 +10,42 @@ function filterStartCodes(codes: AnsiCode[]): AnsiCode[] {
  return codes.filter(c => !isEndCode(c))
}

// LRU cache: same (string, start, end) → same output. Output.get() re-emits
// identical writes every frame for stable transcript content; this avoids
// re-tokenizing them. CPU profile (Apr 2026) showed sliceAnsi at 18% total
// time during scroll. Bounded at 4096 entries — entries are short clipped
// lines so memory cost is small.
const sliceCache = new Map<string, string>()
const SLICE_CACHE_LIMIT = 4096

export default function sliceAnsi(str: string, start: number, end?: number): string {
  if (!str) return ''
  // Hot-path: only cache when end is defined (the Output.get() use-case).
  if (end !== undefined) {
    const key = `${start}|${end}|${str}`
    const cached = sliceCache.get(key)
    if (cached !== undefined) {
      sliceCache.delete(key)
      sliceCache.set(key, cached)
      return cached
    }
    const result = computeSlice(str, start, end)
    if (sliceCache.size >= SLICE_CACHE_LIMIT) {
      sliceCache.delete(sliceCache.keys().next().value!)
    }
    sliceCache.set(key, result)
    return result
  }
  return computeSlice(str, start, end)
}

function computeSlice(str: string, start: number, end?: number): string {
  const tokens = tokenize(str)
  let activeCodes: AnsiCode[] = []
  let position = 0