Notes

Short thoughts, things to remember, ideas to explore.

engineering

Smallwork is shaping up. The idea of a PHP AI framework sounds odd until you remember how much of the web still runs on PHP. If you can give those developers clean abstractions for tool-calling and agent loops without making them switch stacks, that’s genuinely useful.

random

The EU’s MiCA framework gives crypto builders one license across 27 countries. The US has 50 separate state regimes and regulates by lawsuit. Interesting contrast worth keeping in mind for the regulatory section of the Zeig writeup.

engineering

Want to explore the idea that the real shift in software isn’t “software is dead” — it’s the move from static SaaS to adaptive systems. The tools aren’t going away, they’re changing shape. There’s a nuanced argument here that gets lost in the hype.

tools

Working on Vio and keep coming back to the same problem — LLMs are terrible at generating JSX. Mismatched brackets, hooks in conditionals, invented props. It’s not an intelligence problem, it’s a format problem. What if the component format was just JSON? No special syntax, nothing to get wrong. If a model can produce valid JSON, it can produce valid UI.
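A rough sketch of what I mean. The node shape here (`type` / `props` / `children`) is hypothetical, not Vio's actual format, and the renderer is just enough to show that a plain JSON tree carries everything JSX does:

```python
# Sketch: UI components as plain JSON. Each node is a dict with
# "type", optional "props", and optional "children"; strings are text nodes.
# Nothing a model can mismatch the way it mismatches JSX brackets.

def render(node) -> str:
    """Render a JSON component node to an HTML string."""
    if isinstance(node, str):
        return node  # text node
    props = " ".join(f'{k}="{v}"' for k, v in node.get("props", {}).items())
    open_tag = f"<{node['type']} {props}>" if props else f"<{node['type']}>"
    children = "".join(render(c) for c in node.get("children", []))
    return f"{open_tag}{children}</{node['type']}>"

button = {
    "type": "button",
    "props": {"class": "primary"},
    "children": ["Save"],
}
print(render(button))  # <button class="primary">Save</button>
```

Validation becomes trivial too: if it parses as JSON and the `type` is in the allowed set, it's a valid component.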

ai

Still stuck on the memory consolidation question. If no one queries a memory in weeks, should it fade? Human memory does this naturally but there’s no obvious analog for agents. Decay functions feel too mechanical. Maybe importance scoring that evolves over time?
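For reference, the mechanical baseline I keep dismissing, written out: exponential decay since last access, with the half-life as a pure guess. Worth having concrete so I can argue against it:

```python
import math
import time

# Half-life is an invented parameter: memories lose half their
# weight every two weeks unless something queries them again.
HALF_LIFE = 14 * 24 * 3600  # seconds

def retention(last_accessed: float, now: float) -> float:
    """Exponential decay since last access -- the 'too mechanical' baseline."""
    age = now - last_accessed
    return 0.5 ** (age / HALF_LIFE)

now = time.time()
fresh = retention(now - 24 * 3600, now)        # queried yesterday
stale = retention(now - 21 * 24 * 3600, now)   # untouched for three weeks
assert fresh > 0.9 and stale < 0.4
```

The problem is visible immediately: the curve only knows *when*, not *what*. An evolving importance score would need another input besides the clock.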

ai

Reminder: per-agent Qdrant collections, not a shared collection with metadata filtering. One misconfigured filter and Agent A starts pulling Agent B’s memories. The storage cost is worth the isolation. Write this up as a concrete recommendation.
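The shape of the recommendation, as a sketch. The naming scheme and wrapper are hypothetical; the point is that the agent id picks the collection, so there's no filter to misconfigure:

```python
# Isolation by construction: the collection name itself carries the agent id.
# A wrong agent id can only miss, never leak another agent's memories.
# The naming convention and MemoryStore interface are made up for illustration.

def collection_for(agent_id: str) -> str:
    """One collection per agent, derived from the agent id."""
    return f"agent_{agent_id}_memories"

class MemoryStore:
    def __init__(self, client):
        self.client = client  # e.g. a Qdrant client; any vector DB works

    def search(self, agent_id: str, query_vector, limit: int = 5):
        # No metadata filter anywhere -- the collection boundary is the isolation.
        return self.client.search(
            collection_name=collection_for(agent_id),
            query_vector=query_vector,
            limit=limit,
        )

assert collection_for("a1") != collection_for("a2")
```

Contrast with the shared-collection version, where every query needs a correct `agent_id` filter and one forgotten filter is a cross-agent leak.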

ai

Need to write up the memory architecture stuff. The key insight worth capturing: vector similarity alone doesn’t cut it for agent memory. There’s no sense of time or importance — an offhand remark from three weeks ago gets the same weight as a critical decision from yesterday. Should explore how to layer recency and relevance on top.
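One way to do the layering, with placeholder weights (the 0.6/0.25/0.15 split and the half-life are guesses, not tuned values):

```python
def memory_score(similarity: float, age_days: float, importance: float,
                 half_life_days: float = 7.0) -> float:
    """Blend vector similarity with recency and importance.
    Weights and half-life are placeholder guesses, not tuned values."""
    recency = 0.5 ** (age_days / half_life_days)
    return 0.6 * similarity + 0.25 * recency + 0.15 * importance

# The failure mode from the note: a critical decision from yesterday
# should outrank an offhand remark from three weeks ago, even when the
# remark has slightly higher raw similarity.
decision = memory_score(similarity=0.78, age_days=1, importance=0.9)
remark = memory_score(similarity=0.82, age_days=21, importance=0.2)
assert decision > remark
```

Where the importance value comes from is the open question above; here it's just an input.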

tools

Been thinking about vim’s modal model as a state machine. There’s a clean way to formalize this — each mode is a state, each keypress is a transition. Could be a useful mental model for anyone building editor-like interfaces. Might make a good lab article.
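The formalization really is this small. A handful of transitions, nowhere near all of vim, but enough to show the shape:

```python
# Modes as states, keypresses as transitions. A tiny subset of vim's
# actual transition table, just to make the state-machine framing concrete.

TRANSITIONS = {
    ("normal", "i"): "insert",
    ("normal", "v"): "visual",
    ("normal", ":"): "command",
    ("insert", "<esc>"): "normal",
    ("visual", "<esc>"): "normal",
    ("command", "<esc>"): "normal",
}

def step(mode: str, key: str) -> str:
    """Follow a transition if one exists; unknown keys keep the current mode."""
    return TRANSITIONS.get((mode, key), mode)

mode = "normal"
for key in ["i", "<esc>", "v", "<esc>"]:
    mode = step(mode, key)
assert mode == "normal"
```

The nice property for editor-like interfaces: the table *is* the documentation. Every reachable mode and every way in and out of it is one dict away.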

ai

Thinking about the RL vs LLM tradeoff for Zeig. The RL agent actually learned something interesting — it learned to hold, almost always. But we don’t have the infrastructure to train it properly. Realistic slippage, fees, market simulation — that’s a whole research project. LLMs with tool-calling got us to something shippable in a week. Worth writing about the decision to pivot.

tools

Building ObsiTUI and realizing how much of Obsidian’s value is in the vault structure, not the UI. A terminal interface that respects the same markdown files and folder conventions could work well for quick capture. Vim keybindings feel natural here.

engineering

There’s something worth articulating about the gap between a prototype that impresses in a demo and something that actually works at 3am with no one watching. Every project I’ve shipped has had that moment where the “cool part” is 20% of the work and the remaining 80% is error handling, edge cases, and monitoring.

engineering

Traditional autoscaling is so crude. CPU hits 70%, spin up more servers, half of them sit idle. There has to be a better way — predict the spikes instead of reacting to them. ML-based scaling? Worth digging into.
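The contrast in toy form. Numbers are invented, and linear extrapolation from two samples is the crudest possible "prediction," but it shows why acting on the trend beats acting on the threshold:

```python
# Toy contrast: a reactive scaler waits for CPU to cross the threshold;
# a predictive one extrapolates the recent trend and acts one step early.
# All numbers invented for illustration.

def reactive(cpu: float, threshold: float = 0.7) -> bool:
    """Scale when utilization has already crossed the line."""
    return cpu >= threshold

def predictive(history: list, threshold: float = 0.7) -> bool:
    """Linear extrapolation from the last two samples -- the crudest 'ML'."""
    if len(history) < 2:
        return reactive(history[-1], threshold)
    trend = history[-1] - history[-2]
    return history[-1] + trend >= threshold

cpu_samples = [0.40, 0.52, 0.64]  # climbing toward the threshold
assert not reactive(cpu_samples[-1])  # reactive scaler still idle at 64%
assert predictive(cpu_samples)        # predictive one sees 76% coming
```

A real version would forecast from seasonality (daily and weekly traffic patterns), which is where the ML angle gets interesting.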
