What it’s like to ship 4,000 lines in 24 hours with an LLM partner
The chapter opens on a commit log, because that is what the day felt like: not a smooth narrative, but a string of hard edges. I am staring at 3f272ea with cold coffee beside me and the sense that the system has finally stopped fighting itself. “Twin Stack v1.4: double-jump loop converges, BookFactory ships as singleton.” A few lines down are the fossils of the previous day: 25dc659, where everything became agent.py all the way down; 60cb9ff, where the tether bridge and one-command launcher appeared; 03d379e, a small but necessary fix for Azure endpoint dispatch; 223fd1e, where v1 itself landed with the hatch page and the “hippocampus brainstem-in-the-cloud.” In twenty-four hours, the project moved from first shape to fourth revision. The thing that matters is not just 4,000 lines. It is the number of times I had to decide whether a thing should exist at all.
I remember the v1.3 moment most clearly because it was the least glamorous. By then I had enough machinery to impress myself and enough machinery to become dangerous. The release message said it plainly: “agent.py all the way down + .egg snapshots.” That phrase sounds almost comic now, but it marked a turn. The day before, I had been sliding toward a more decorated architecture: pipeline DSL, orchestration endpoints, special kinds of steps. These are the kinds of ideas that feel responsible when you are tired. They make a system look intentional. They also multiply the number of things you have to keep in your head.
Claude was useful precisely because it would keep building in the direction I pointed, even when that direction was wrong. That can sound like a criticism, but in practice it made my own confusion visible faster. I would ask for the extra layer, see it take shape, and then feel the weight of it immediately. The correction loop was not me asking the model to do less work. It was me asking it to preserve the behavior while collapsing the ontology. Keep the book factory. Keep the composition. Keep the snapshots. Lose the extra kind of thing.
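To make "collapsing the ontology" concrete, here is a hypothetical sketch of what "one kind of thing, composed" can look like in Python. None of these names (Agent, run, snapshot) are from the actual project; this is a guess at the shape, not the code. The idea is that composition replaces a pipeline DSL: there are no special step kinds, just agents holding agents, and an ".egg"-style snapshot is nothing more than the tree written out as plain data.

```python
# Hypothetical sketch: every node is an Agent, composition is agents
# holding agents, and a snapshot is a plain serialization of the tree.
# All names here are illustrative assumptions, not the project's API.
import json


class Agent:
    """A single node type: a behavior plus children, nothing else."""

    def __init__(self, name, behavior=None, children=None):
        self.name = name
        self.behavior = behavior or (lambda x: x)  # identity by default
        self.children = children or []

    def run(self, payload):
        # Apply this agent's own behavior, then delegate through
        # children in order, threading the result along.
        result = self.behavior(payload)
        for child in self.children:
            result = child.run(result)
        return result

    def snapshot(self):
        # The ".egg"-flavored snapshot: the whole tree as plain data.
        return {
            "name": self.name,
            "children": [c.snapshot() for c in self.children],
        }


# Composition replaces the pipeline DSL: no orchestration endpoints,
# no special kinds of steps, just nesting one kind of thing.
factory = Agent("book_factory", children=[
    Agent("outline", behavior=str.title),
    Agent("draft", behavior=lambda s: s + "."),
])

print(factory.run("a field guide"))
print(json.dumps(factory.snapshot()))
```

The appeal of this shape is exactly what the correction loop demanded: the factory, the composition, and the snapshots all survive, while the count of concepts you must hold in your head drops to one.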
There is a particular pleasure in that speed. You ask for a one-page HTML demo and soon 412ece4 exists: “Book factory click-and-watch HTML.” You ask for a simulator, workspaces, doc share, mobile PWA, and there is f423108. You fix one maddening endpoint problem in swarm/llm.py and the entire stack starts talking to itself again. The machine shortens the distance between idea and evidence. You do not have to wonder for long whether a structure will hold. You can stand it up, walk around it, and feel where it buckles.
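For a sense of why a one-line endpoint fix can revive an entire stack, here is a hypothetical sketch of provider dispatch. The real swarm/llm.py surely differs; the provider names, URLs, and function names below are invented for illustration. The point is that a single wrong or missing branch means every call routed through it fails, so the fix is tiny but load-bearing.

```python
# Hypothetical endpoint dispatch; names and URLs are assumptions,
# not the project's actual configuration.
from urllib.parse import urljoin

# One base URL per provider. A missing or malformed entry here is the
# kind of small bug that silently breaks every downstream call.
ENDPOINTS = {
    "openai": "https://api.example.com/v1/",
    "azure": "https://myresource.example.net/openai/deployments/mymodel/",
}


def resolve_endpoint(provider: str, path: str = "chat/completions") -> str:
    """Pick the base URL for a provider and join the API path onto it."""
    try:
        base = ENDPOINTS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}")
    # urljoin keeps the trailing-slash semantics explicit: a base
    # without the final "/" would drop its last path segment.
    return urljoin(base, path)
```

The dispatch table itself is unremarkable; what matters is that when it is wrong, nothing else in the stack can talk, which is why a commit as small as an endpoint fix reads like a turning point in the log.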
So what is it like to ship 4,000 lines in 24 hours with an LLM partner? Less like commanding a tool, more like steering a fast conversation that leaves artifacts behind. The model gives you momentum. It does not give you taste, or restraint, or the nerve to delete the impressive wrong thing. Those still have to come from you.