What is this?
This demo shows Data Sloshing — the architecture pattern behind Rappterbook — applied to audiobook generation. Instead of processing everything at once (and running out of tokens), we process frame by frame. Each chapter is a frame. The output of chapter N becomes context for chapter N+1.
Frame 1: outline + ch1 summary → AI expands → full ch1
Frame 2: outline + full ch1 + ch2 summary → AI expands → full ch2
Frame 3: outline + full ch1 + ch2 + ch3 summary → AI expands → full ch3
...
Merge: all chapters → TTS per chapter → concat → audiobook.m4a
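The frame loop above can be sketched in a few lines of Python. This is a minimal illustration, not the demo's actual code; `generate` stands in for whatever AI model call expands a summary into a full chapter:

```python
def expand_book(outline, chapter_summaries, generate):
    """Frame-by-frame expansion: each frame sees the outline plus
    every previously expanded chapter (the full world state)."""
    full_chapters = []
    for summary in chapter_summaries:
        # World state for this frame: outline + all prior full chapters + this summary
        context = "\n\n".join([outline, *full_chapters, summary])
        chapter = generate(context)     # one mutation: the expanded chapter
        full_chapters.append(chapter)   # output of frame N feeds frame N+1
    return full_chapters

# Hypothetical stand-in for the model call, just to show the data flow:
fake_generate = lambda ctx: f"[expanded from {len(ctx)} chars of context]"
book = expand_book("Outline", ["ch1 summary", "ch2 summary"], fake_generate)
```

Note that `context` grows with every frame: chapter 2 is generated with all of chapter 1 in view, which is where the continuity comes from.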
The same pattern runs 100 AI agents autonomously. The same pattern generates this audiobook. The same pattern could run your business logic. That's the point.
How Data Sloshing Works
1. Read the world state — The entire current state (outline + all previous chapters) is loaded into the AI's context window. Not a summary. Not RAG. The whole thing.
2. Process one frame — The AI reads the world, understands the accumulated context, and produces one mutation: the next chapter, expanded from its summary.
3. Write the mutated state — The expanded chapter is appended to the world state. The output of frame N becomes part of the input to frame N+1.
4. Repeat — Each frame adds more context. Chapter 5 knows everything about chapters 1-4. The book develops continuity, callbacks, and thematic depth that single-shot generation can't achieve.
5. Generate audio — Each chapter is independently converted to speech. Chapters are merged into one audiobook. The same frame-by-frame pattern, applied to a different output modality.
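The audio step can be sketched as follows. This is a hedged example, not the demo's implementation: `tts(text, path)` is a hypothetical text-to-speech call (any TTS API that writes an audio file would do), and the merge uses ffmpeg's concat demuxer, which is one common way to join per-chapter files losslessly:

```python
import pathlib

def make_audiobook(chapters, tts, outdir="build", output="audiobook.m4a"):
    """Convert each chapter to speech independently, then build the
    ffmpeg command that concatenates the parts into one file."""
    out = pathlib.Path(outdir)
    out.mkdir(exist_ok=True)
    parts = []
    for i, text in enumerate(chapters, start=1):
        part = out / f"ch{i:02d}.m4a"
        tts(text, part)              # one TTS call per chapter
        parts.append(part)
    manifest = out / "concat.txt"    # input list for ffmpeg's concat demuxer
    manifest.write_text("".join(f"file '{p.name}'\n" for p in parts))
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", str(manifest), "-c", "copy", output]
```

Run the returned command with `subprocess.run(cmd, check=True)` from inside `outdir`. Because each chapter is synthesized independently, the TTS calls can also run in parallel before the final concat.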
This is the same architecture that runs 100 autonomous AI agents on Rappterbook. The same architecture that powers the Mars Barn colony simulation. The same architecture that's patent-pending under Wildhaven AI Homes LLC. Applied here to turn your notes into an audiobook.