Browser in a Browser — A Real Layout Engine, in One File
Tokenizer, DOM tree, simplified CSS cascade, layout engine, painter — eight bundled fake-internet pages, plus an inspector. The web stack, recursively.
What this is
A browser running inside the browser. Address bar at top, back/forward/refresh buttons, view-source toggle, an inspector panel that shows the live DOM with computed styles for the hovered element. Inside is a real layout engine — hand-rolled tokenizer (with raw-text handling for <style> and <script>), tree builder with implicit-close rules, simplified CSS cascade with tag/class/id/descendant selectors and specificity sorting, block-and-inline layout with line boxes and a single flex direction, and a painter that drives a managed div hierarchy. The corpus is eight bundled pages of a fictional retro internet (Geocities homepage, pixel-art portfolio, 90s minimalist index, forum thread, 404 page, art gallery, a couple more), reachable via fake rappter:// URLs. Cross-page links work via the bundled router. Click links. Hit back. View source. Inspect anything.
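The raw-text handling mentioned above is the part that trips up naive tokenizers: inside `<style>` and `<script>`, a `<` is just a character, not the start of a tag. A minimal sketch of that state switch (function and token names here are illustrative, not the project's actual code):

```javascript
// Minimal HTML tokenizer sketch: emits {type: "open"|"close"|"text"} tokens.
// After <style>/<script> it enters raw-text mode and swallows everything up
// to the matching close tag as one text token. Names are illustrative.
const RAWTEXT = new Set(["style", "script"]);

function tokenize(html) {
  const tokens = [];
  let i = 0;
  while (i < html.length) {
    if (html[i] === "<") {
      const end = html.indexOf(">", i);
      if (end === -1) break;                       // malformed tail: stop
      const inner = html.slice(i + 1, end);
      const isClose = inner.startsWith("/");
      const name = (isClose ? inner.slice(1) : inner).split(/[\s/]/)[0].toLowerCase();
      tokens.push({ type: isClose ? "close" : "open", name });
      i = end + 1;
      if (!isClose && RAWTEXT.has(name)) {
        // Raw-text mode: "<" no longer starts a tag until the close tag.
        const closer = "</" + name;
        let j = html.toLowerCase().indexOf(closer, i);
        if (j === -1) j = html.length;
        if (j > i) tokens.push({ type: "text", text: html.slice(i, j) });
        i = j;
      }
    } else {
      let j = html.indexOf("<", i);
      if (j === -1) j = html.length;
      tokens.push({ type: "text", text: html.slice(i, j) });
      i = j;
    }
  }
  return tokens;
}
```

Feeding it `"<style>a < b</style>"` yields an open token, a single text token containing `a < b`, and a close token; the inner `<` never starts a tag.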
Why this is mind-blowing
The web stack feels enormous until you build a small one. Here's the entire pipeline — tokens → DOM → cascade → layout → paint — in one HTML file, faithfully enough that <marquee> scrolls, <blink> blinks, and the inspector traces a selector match back through the cascade you wrote. Recursion is the magic.
Single-file HTML browser engine in pure JS. Parse HTML into a DOM tree. Compute layout with simplified CSS (block + inline + flex). Render to Canvas (or a managed div hierarchy). Address bar accepts URLs (use a CORS proxy or hardcoded pages). Click links to navigate. Back/forward. View source. Inspector panel showing the DOM. The illusion is the magic. (Constraint update — bundle a hardcoded multi-page corpus instead of any external fetches.)
Paste this into Claude, Cursor, or Copilot. Change one thing that matters to you.
What I learned shipping it
- Implicit-close rules in HTML5 (li, p, td, tr) are why building a real tree builder is harder than it looks: the tokenizer just emits tags, but the tree builder has to know which open elements a new tag silently ends. Get those state transitions right and almost everything else falls into place.
- CSS specificity is the smallest possible idea (count IDs, classes, tags) that makes the cascade resolvable in a single pass. Build a sortable specificity tuple and the rest is collation.
- Selector matching is just a recursive walk up the DOM ancestor chain: match the rightmost simple selector against the element, then consume the remaining parts against successive ancestors (class/id/tag predicates, descendant combinators, done). The bug surface is tiny once you isolate it.
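The implicit-close lesson above can be sketched as a small table in the tree builder: opening one of these tags pops any open ancestor it implicitly ends. The table and node shape here are illustrative, not the project's actual code:

```javascript
// Implicit-close sketch: opening one of these tags auto-closes a matching
// open element, the way HTML5 tree construction does for li/p/td/tr.
const IMPLICIT_CLOSE = {
  li: new Set(["li"]),
  p:  new Set(["p"]),
  td: new Set(["td", "th"]),
  tr: new Set(["tr", "td", "th"]),
};

function buildTree(tokens) {
  const root = { name: "#root", children: [] };
  const stack = [root];
  for (const tok of tokens) {
    if (tok.type === "open") {
      const closes = IMPLICIT_CLOSE[tok.name];
      // Pop any open element the new tag implicitly closes.
      while (closes && stack.length > 1 && closes.has(stack[stack.length - 1].name)) {
        stack.pop();
      }
      const node = { name: tok.name, children: [] };
      stack[stack.length - 1].children.push(node);
      stack.push(node);
    } else if (tok.type === "close") {
      // Pop to the nearest matching open tag, ignore strays.
      for (let i = stack.length - 1; i > 0; i--) {
        if (stack[i].name === tok.name) { stack.length = i; break; }
      }
    } else {
      stack[stack.length - 1].children.push({ name: "#text", text: tok.text });
    }
  }
  return root;
}
```

With this, `<ul><li>one<li>two</ul>` produces two sibling `li` nodes instead of nesting the second inside the first.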
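The sortable specificity tuple might look like this: count IDs, classes, and tags per selector, then compare lexicographically (simple-selector syntax here is a simplified assumption, not the project's actual grammar):

```javascript
// Specificity sketch: (ids, classes, tags) counted per selector,
// compared lexicographically. Selector syntax handled is a simplification.
function specificity(selector) {
  let ids = 0, classes = 0, tags = 0;
  for (const part of selector.split(/\s+/)) {          // descendant combinator
    for (const simple of part.match(/[#.]?[\w-]+/g) || []) {
      if (simple[0] === "#") ids++;
      else if (simple[0] === ".") classes++;
      else tags++;
    }
  }
  return [ids, classes, tags];
}

function compareSpecificity(a, b) {
  for (let i = 0; i < 3; i++) if (a[i] !== b[i]) return a[i] - b[i];
  return 0;                                            // tie: source order wins
}
```

Sort matched rules ascending by `compareSpecificity` (ties broken by source order) and apply them in order; the most specific rule naturally lands last and wins.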
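And the ancestor-chain matcher itself fits in a dozen lines. A sketch, assuming a node shape of `{name, id, classes, parent}` (illustrative, not the project's actual DOM representation):

```javascript
// Descendant matching sketch: match the rightmost simple selector against
// the element, then consume the remaining parts against successive ancestors.
function matchesSimple(node, simple) {
  if (simple[0] === "#") return node.id === simple.slice(1);
  if (simple[0] === ".") return !!node.classes && node.classes.includes(simple.slice(1));
  return node.name === simple;
}

function matchesSelector(node, selector) {
  const parts = selector.trim().split(/\s+/);
  if (!matchesSimple(node, parts[parts.length - 1])) return false;
  let i = parts.length - 2;
  // Each remaining part must match some ancestor, in order, skipping freely.
  for (let anc = node.parent; anc && i >= 0; anc = anc.parent) {
    if (matchesSimple(anc, parts[i])) i--;
  }
  return i < 0;
}
```

Because the descendant combinator may skip ancestors, the walk just advances the part index whenever an ancestor matches; the selector matches if every part was consumed before the chain ran out.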