Lab

Architecture decisions from production systems. No proprietary code – just the problem, the options, what I picked, and what happened.


Building a vim editor as a pure state machine

How I implemented a vim-style editor in 867 lines of TypeScript using pure functions and immutable state. Every operation – motions, operators, undo, dot repeat – is a state machine transition with no side effects.
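The core shape is easy to sketch. A minimal, illustrative version (the names `EditorState` and `step` are mine, not from the article): the whole editor is one immutable value, and every keystroke is a pure function from (state, key) to a new state.

```typescript
// The editor as a single immutable state; keystrokes are pure transitions.
interface EditorState {
  readonly lines: readonly string[];
  readonly cursor: { readonly row: number; readonly col: number };
  readonly mode: "normal" | "insert";
}

function step(state: EditorState, key: string): EditorState {
  if (state.mode === "normal") {
    switch (key) {
      case "h": // move left, clamped at column 0
        return { ...state, cursor: { ...state.cursor, col: Math.max(0, state.cursor.col - 1) } };
      case "l": { // move right, clamped at end of line
        const max = state.lines[state.cursor.row].length - 1;
        return { ...state, cursor: { ...state.cursor, col: Math.min(max, state.cursor.col + 1) } };
      }
      case "i": // enter insert mode; buffer untouched
        return { ...state, mode: "insert" };
    }
  }
  return state; // unknown keys are no-ops, never exceptions
}

const s0: EditorState = { lines: ["hello"], cursor: { row: 0, col: 0 }, mode: "normal" };
const s1 = step(step(s0, "l"), "l");
```

Because transitions never mutate, undo is just holding on to earlier states, and dot repeat is replaying a recorded key sequence through `step`.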


Orchestrating multi-model AI pipelines

How I use one LLM as an orchestrator that plans which tools to invoke, runs them in parallel, and synthesizes results through streaming. Why different models handle different jobs, and why rigid function-calling schemas weren’t flexible enough.
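The orchestration shape, reduced to a sketch: a planner (in the real pipeline, an LLM) emits a list of tool calls, they run concurrently, and the results are handed to a synthesis step. The tool registry and plan format here are illustrative assumptions, not the article's actual schema.

```typescript
// A tool is anything that takes input and asynchronously returns text.
type Tool = (input: string) => Promise<string>;

// Stand-in tools; in the real system these wrap APIs and model calls.
const tools: Record<string, Tool> = {
  search: async (q) => `results for ${q}`,
  calc: async (q) => `computed ${q}`,
};

interface PlanStep {
  tool: string;
  input: string;
}

// All planned calls run in parallel, not one after another.
async function runPlan(plan: PlanStep[]): Promise<string[]> {
  return Promise.all(plan.map(({ tool, input }) => tools[tool](input)));
}

// Synthesis stub: the real pipeline streams a model over these results.
function synthesize(results: string[]): string {
  return results.join("\n");
}
```

Keeping the plan as plain data rather than a fixed function-calling schema is what leaves room for the planner to improvise.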


Designing a memory layer for AI agents

Vector search alone doesn’t work for agent memory. Semantic similarity doesn’t understand importance or recency. Here’s how I built a hybrid storage system with per-agent isolation, importance scoring, and background summarization – and what’s still unsolved.
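The hybrid idea in miniature: blend the vector-search similarity with an importance score and a recency decay, so a stale-but-trivial match stops outranking a fresh, important one. The weights and the decay constant below are illustrative assumptions, not the tuned values.

```typescript
interface Memory {
  similarity: number; // cosine similarity from vector search, 0..1
  importance: number; // 0..1, assigned when the memory is written
  ageHours: number;   // time since the memory was created
}

// Weighted blend: similarity still dominates, but importance and
// recency can break ties. Weights here are placeholders.
function score(
  m: Memory,
  w = { sim: 0.6, imp: 0.25, rec: 0.15 }
): number {
  const recency = Math.exp(-m.ageHours / 72); // decays over roughly 3 days
  return w.sim * m.similarity + w.imp * m.importance + w.rec * recency;
}

// An old, low-importance memory with better raw similarity...
const stale: Memory = { similarity: 0.9, importance: 0.1, ageHours: 500 };
// ...loses to a fresh, important one.
const fresh: Memory = { similarity: 0.7, importance: 0.9, ageHours: 1 };
```

Per-agent isolation and background summarization sit on top of this: the scorer only ever sees one agent's memories, and summarization rewrites clusters of low-scoring entries into a single higher-importance one.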


From reinforcement learning to LLM tool-calling

I built a PPO trading agent and shelved it. LLM tool-calling shipped the same features in a fraction of the time. Here’s when each approach makes sense.