# Three Tiers, Zero Servers
The blog demo has been running entirely in the browser since January — articles, comments, Turbo Streams broadcasting, real-time updates across tabs. But it ran everything on the main thread with IndexedDB. Every database query, every controller action, every view render blocked the UI thread. And IndexedDB, while persistent, is the first thing browsers evict under storage pressure.
Now there is a new worker deployment target that can run that same blog across three tiers, all in the browser, no server:
| Main Thread (Presentation) | SharedWorker (Application) | Dedicated Worker (Data) |
|---|---|---|
| Turbo Drive | Router.dispatch() | SQLite WASM |
| Stimulus Controllers | Controllers | opfs-sahpool VFS |
| DOM updates | Models & Views | OPFS persistence |
| BroadcastChannel | TurboBroadcast | (file storage) |
The main thread intercepts Turbo navigation and forwards requests to a SharedWorker via MessagePort. The SharedWorker runs the full request cycle — routing, controller actions, model queries, view rendering — and returns HTML. Turbo renders it. The dedicated Worker runs SQLite with OPFS persistence, receiving SQL over postMessage.
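The forwarding step amounts to serializing a navigation into an HTTP-shaped message. A minimal sketch of what that envelope could look like — the `toRequestMessage` helper and the field names are illustrative assumptions, not Juntos's actual API:

```javascript
// Illustrative only: shape a navigation into an HTTP-like message that a
// SharedWorker could route the way a server routes a request.
function toRequestMessage(url, { method = "GET", body = null } = {}) {
  const u = new URL(url, "http://localhost/");
  return {
    method: method.toUpperCase(),
    path: u.pathname,
    query: Object.fromEntries(u.searchParams),
    headers: { accept: "text/html" },
    body,
  };
}

// Browser wiring (sketch): intercept Turbo Drive, forward over the port.
// const port = new SharedWorker("app_worker.js").port;
// document.addEventListener("turbo:before-fetch-request", (event) => {
//   event.preventDefault();
//   port.postMessage(toRequestMessage(event.detail.url));
// });
```

The SharedWorker's reply would carry the rendered HTML back the same way, and Turbo renders it as if it came over the network.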
## What Works
The list is longer than I expected when we started:
- SQLite WASM with `opfs-sahpool` persistence — no COOP/COEP headers needed, data survives browser restarts
- PGlite (PostgreSQL in WASM) with OPFS persistence, IndexedDB fallback
- Active Storage with file content in OPFS and metadata in SQL
- Turbo Streams broadcasting via BroadcastChannel across all tabs
- Action Cable protocol support — the SharedWorker implements the server side
- Turbo Drive navigation intercepted by the client bridge, dispatched through the SharedWorker
- Stimulus controllers on the main thread, unaffected
- Tailwind CSS compiled by `@tailwindcss/vite` during the build
- Multi-tab sharing — one SharedWorker instance serves all open tabs
- Cross-browser support — Firefox and Chrome, with different Worker creation strategies
- Fingerprinted assets — all Worker scripts get content-hashed filenames for cache busting
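The Turbo Streams item deserves a closer look, because the mechanism is simple: a broadcast is just a `<turbo-stream>` fragment sent over a BroadcastChannel. A sketch under assumed names — `renderStream` and the channel name are mine, not Juntos's:

```javascript
// Illustrative: build a <turbo-stream> fragment of the kind Turbo applies
// natively, then fan it out to every open tab.
function renderStream(action, target, html = "") {
  return `<turbo-stream action="${action}" target="${target}">` +
         `<template>${html}</template></turbo-stream>`;
}

// SharedWorker side (sketch): publish after a model change.
// const channel = new BroadcastChannel("turbo-streams");
// channel.postMessage(renderStream("append", "comments", commentHtml));

// Main-thread side (sketch): Turbo applies any stream HTML handed to it.
// new BroadcastChannel("turbo-streams").onmessage = (e) =>
//   window.Turbo.renderStreamMessage(e.data);
```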
The architecture mirrors how a real three-tier application works. The presentation tier doesn't know or care that the application tier is a SharedWorker instead of a Node.js server. The application tier doesn't know the data tier is a dedicated Worker instead of PostgreSQL on a remote host. The message protocols are the same shape as HTTP request/response and SQL query/result.
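The SQL side of that protocol can be sketched the same way: a request/response layer over postMessage, correlated by id. The `SqlClient` class and its `query` method are hypothetical names for illustration:

```javascript
// Illustrative: SQL query/result over postMessage, correlated by id —
// the same shape as a database driver talking to a remote server.
class SqlClient {
  constructor(port) {
    this.port = port;
    this.pending = new Map();
    this.nextId = 1;
    port.onmessage = ({ data }) => {
      const resolve = this.pending.get(data.id);
      if (resolve) { this.pending.delete(data.id); resolve(data.rows); }
    };
  }
  query(sql, params = []) {
    const id = this.nextId++;
    return new Promise((resolve) => {
      this.pending.set(id, resolve);
      this.port.postMessage({ id, sql, params });
    });
  }
}
```

On the other end, the dedicated Worker runs the statement against SQLite and posts back `{ id, rows }`.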
## Why Three Tiers
Two reasons forced the three-tier split.
OPFS requires a dedicated Worker. The FileSystemSyncAccessHandle API — which SQLite needs for performant synchronous file I/O — is only available in dedicated Workers. Not on the main thread, not in SharedWorkers. So the database has to run in its own Worker.
SharedWorker gives multi-tab sharing. Without it, each tab would have its own database connection, and OPFS's exclusive lock would mean only one tab could write at a time. A SharedWorker serves all tabs through a single database connection. One application instance, shared state, no coordination needed.
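Concretely, a SharedWorker receives one port per tab and routes every request through the same application instance. A sketch of that bookkeeping — the `ports` set and the inline "render" are stand-ins for the real pipeline:

```javascript
// Illustrative: one SharedWorker, many tabs, a single shared pipeline.
const ports = new Set();

function handleConnect(port) {
  ports.add(port);
  port.onmessage = ({ data }) => {
    // Every tab's request flows through the same application state.
    const html = `<h1>${data.path}</h1>`;    // stand-in for the real render
    port.postMessage({ id: data.id, html }); // reply only to the asking tab
  };
}

// SharedWorker wiring (sketch):
// self.onconnect = (event) => handleConnect(event.ports[0]);
```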
A third benefit falls out for free: the main thread stays unblocked. All application logic and database I/O happen off-thread, so navigation feels instant — the UI thread never waits on a query.
## The Chrome Workaround
Firefox lets a SharedWorker create a dedicated Worker directly — `new Worker()` works inside `SharedWorkerGlobalScope`. Chrome doesn't: the `Worker` constructor simply isn't defined there.
The solution: the SharedWorker asks a connected tab to create the dedicated Worker on its behalf. The tab creates the Worker, sets up a MessageChannel, and transfers one port to the SharedWorker. The SharedWorker communicates through the channel port with the same postMessage/addEventListener interface as a real Worker.
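Once the port arrives, the SharedWorker can wrap it so the rest of the code doesn't know the difference. A hedged sketch — `portAsWorker` is an illustrative name, and the tab-side wiring below it is commented pseudocode of the handoff:

```javascript
// Illustrative: give a transferred MessagePort the same surface the
// SharedWorker would otherwise use on a real dedicated Worker.
function portAsWorker(port) {
  port.start?.(); // ports created via MessageChannel need an explicit start
  return {
    postMessage: (msg, transfer) => port.postMessage(msg, transfer ?? []),
    addEventListener: (type, fn) => port.addEventListener(type, fn),
    terminate: () => port.close(),
  };
}

// Tab side (sketch), on a "spawn the db worker for me" request:
// const worker = new Worker("db_worker.js");
// const { port1, port2 } = new MessageChannel();
// worker.postMessage({ port: port1 }, [port1]);            // worker end
// sharedWorkerPort.postMessage({ port: port2 }, [port2]);  // SharedWorker end
```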
When the hosting tab closes (detected via `beforeunload` and BroadcastChannel), the SharedWorker picks another tab and respawns the dedicated Worker. The database file persists in OPFS, so the new Worker just reopens it. Queries resume seamlessly.
On Firefox, none of this is needed. The SharedWorker owns the dedicated Worker directly, and it survives tab closes.
## Bundle Size
| Component | Gzipped |
|---|---|
| Presentation (main thread) | ~54 KB |
| Application (SharedWorker) | ~22 KB |
| Data tier (db_worker + SQLite) | ~110 KB |
| SQLite WASM binary | ~399 KB |
| Total | ~585 KB |
Sub-second on 4G. After first load, everything is cached — fingerprinted filenames mean the browser won't re-download until the app changes. Opening a second tab downloads nothing.
## wasmify-rails
The same week we built this, wasmify-rails was featured on Ruby Stack News. It solves the same problem — Rails in the browser — from the opposite direction. Where Juntos transpiles Ruby to JavaScript, wasmify-rails compiles the entire Ruby VM and Rails framework to WebAssembly. Same goal, different tradeoffs:
| | Juntos | wasmify-rails |
|---|---|---|
| Approach | Transpile Ruby to JS | Compile Ruby VM to WASM |
| Bundle size | ~600 KB gzipped | ~76 MB |
| Startup | Sub-second | Multiple seconds (VM boot) |
| Compatibility | Subset of Ruby/Rails | Full Rails, any Ruby gem |
| Architecture | SharedWorker (multi-tab) | Service Worker (request/response) |
| Action Cable | Yes (BroadcastChannel) | Not yet (Service Workers can't do WebSockets) |
Juntos starts faster and runs lighter. wasmify-rails supports any Ruby gem and any Ruby code. Neither can send email from the browser.
But here's what's interesting: the SharedWorker architecture isn't specific to transpilation. wasmify-rails currently uses a Service Worker to intercept fetch requests and route them through Rack. Service Workers can't handle WebSocket connections, so Action Cable doesn't work. They also can't create dedicated Workers for OPFS access.
A SharedWorker running the Ruby WASM VM would solve both problems. The client bridge we built is target-agnostic — it sends HTTP-like messages and receives HTML responses. It doesn't care whether the SharedWorker runs transpiled JavaScript or a Ruby VM. The three-tier architecture, the BroadcastChannel broadcasting, the Chrome Worker delegation, the beforeunload respawn — all of it would work unchanged with a WASM Ruby backend.
The Rack bridge would be the main new code: convert MessagePort messages to Rack env hashes, call Rails.application.call(env), serialize the response back. Everything else is plumbing that already exists.
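The JavaScript half of that conversion is mostly mechanical. A hedged sketch of building a Rack-env-shaped object from a port message before handing it to the WASM VM — the `toRackEnv` helper is hypothetical, though the key names follow the Rack spec:

```javascript
// Illustrative: map an HTTP-like port message onto Rack env keys, ready to
// pass into a Ruby VM that calls Rails.application.call(env).
function toRackEnv(msg) {
  const [path, query = ""] = msg.path.split("?");
  return {
    REQUEST_METHOD: msg.method || "GET",
    PATH_INFO: path,
    QUERY_STRING: query,
    SERVER_NAME: "localhost",
    SERVER_PORT: "443",
    "rack.url_scheme": "https",
    "rack.input": msg.body || "", // wrapped in a Ruby IO-like object in practice
    HTTP_ACCEPT: "text/html",
  };
}
```

The Ruby side would then serialize the `[status, headers, body]` triple back over the same port.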
If that collaboration happens, both projects benefit. wasmify-rails gets multi-tab sharing, Action Cable, and OPFS persistence. Juntos gets a growth path to full Rails compatibility for applications that outgrow the transpiler's Ruby subset. Same architecture, different engines.
Juntos is open source: github.com/ruby2js/ruby2js