Offline DJ Playlists with IndexedDB and Tigris CORS


The showcase application provides a DJ list view where DJs can play uploaded songs for solo performances during dance competitions. The feature worked perfectly on reliable networks. But at live events, WiFi reliability becomes mission-critical—and event WiFi is notoriously unreliable.

The symptom: Songs would start playing but stop after a few seconds. DJs would restart playback. The song would play briefly, then stop again. Multiply this by 50+ solo performances in a session, and you have significant event delays and frustrated DJs.

The cause: Poor WiFi couldn't maintain consistent streaming to the browser's <audio> element. Brief interruptions caused buffering failures, even though the network would recover moments later.

The solution: Progressive audio caching with IndexedDB, plus direct CORS configuration on the Tigris storage bucket.

TL;DR: Built IndexedDB-based progressive audio caching with Object URLs (50ms page load), configured Tigris CORS via AWS CLI, and used stable base URLs as cache keys for pre-signed URLs. Collaboration with Claude accelerated development. Jump to conclusion


The Problem: Unreliable Event WiFi

Dance competitions have predictable audio requirements:

Before the event: DJs or organizers upload audio files (typically MP3/M4A, 2-5MB each) to the showcase application via Rails Active Storage, which stores them in Tigris (S3-compatible object storage on Fly.io).

During the event: DJs access a playlist view showing all solo performances with embedded <audio> controls. Clicking play fetches the audio file from Tigris and streams it to the browser.

This architecture works beautifully on reliable networks. But event WiFi introduces several failure modes:

  1. Intermittent connectivity - WiFi drops for 5-10 seconds, interrupting streaming
  2. Bandwidth contention - 200+ attendees on the same network
  3. Router restarts - Venue staff "fixing" WiFi by rebooting equipment
  4. Signal dead zones - DJ station positioned poorly relative to access points

Traditional <audio> streaming handles these poorly. The element buffers a few seconds of audio, but longer interruptions cause playback to fail entirely. The browser won't automatically retry—it just stops.

DJs can't afford this. A solo performance is 90-120 seconds. If playback fails halfway through, the dancer stops, the DJ restarts the song, and the event falls behind schedule.


Development Process: Human-AI Collaboration

This feature was built through collaboration with Claude, demonstrating how AI assistance can accelerate problem-solving while human expertise guides the direction.

The conversation flow:

  1. I started with the problem: Event WiFi causing songs to start then stop mid-performance
  2. I proposed the solution: Progressive audio downloads to cache files before they're needed
  3. Claude suggested IndexedDB: Persistent browser storage that survives page reloads, better than simple in-memory caching
  4. I had basic CORS knowledge: Understood cross-origin requests were blocking, but not the details
  5. Claude navigated Tigris specifics: AWS CLI configuration, CORS rules, pre-signed URL mechanics
  6. I suggested Data URLs: Simple approach to store cached audio
  7. Claude identified the performance issue: Data URLs were causing 6-second page loads
  8. Claude recommended Object URLs: Dropped page load time to 50 milliseconds
  9. Production revealed the cache key bug: Songs weren't restoring after page reload
  10. Claude diagnosed the issue: Pre-signed URLs changing signatures on each page load
  11. Claude implemented the fix: Extract base URL for stable cache keys

The result: A production feature that's been tested with 123 songs (254 MB) caching in seconds and surviving page reloads.

For more on this development methodology, see Disciplined use of Claude.

With this collaborative approach established, let's explore the technical solution we built.


The Solution: Progressive Audio Caching

The fix is conceptually simple: download and cache all audio files before they're needed. When the DJ clicks play, serve from the cache instead of streaming from the network.

This requires:

  1. Persistent browser storage - IndexedDB stores audio files as Blobs
  2. Progressive download - Fetch files in background without blocking the UI
  3. Cache-aware playback - Check cache first, fall back to network if needed
  4. Event-scoped caching - Each event's audio is separate (no cross-contamination)
  5. Automatic expiration - Remove cached audio after 30 days

Implementation: Stimulus Controller

Rails applications with Hotwire/Turbo use Stimulus controllers for JavaScript behaviors. The progressive-audio controller handles all caching logic.

Core responsibilities: check the cache on page load, download missing files in the background, swap cached audio into the page, and expire stale entries.

Progressive download strategies: The controller tries multiple approaches if downloads fail:

  1. Simple fetch - Fast and reliable on good connections
  2. Chunked streaming - Better for large files on slow connections
  3. Range requests - Handles servers that support byte-range requests

This progressive fallback ensures downloads succeed even on challenging networks. See the complete implementation in the "Final Implementation" section below.
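A minimal sketch of that fallback chain (the method names here are illustrative, not the controller's actual API):

// Try each strategy in order; the first one to succeed wins.
async downloadWithFallback(url) {
  const strategies = [
    () => this.simpleFetch(url),
    () => this.chunkedFetch(url)
    // a Range-request strategy slots in here as the third option
  ];
  for (const strategy of strategies) {
    try {
      return await strategy();
    } catch (error) {
      console.warn('Download strategy failed, trying next:', error);
    }
  }
  throw new Error(`All download strategies failed for ${url}`);
}

// Strategy 1: one request, whole file as a Blob.
async simpleFetch(url) {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.blob();
}

// Strategy 2: read the body stream chunk by chunk, so a stalled
// connection surfaces as an error instead of one giant hung read.
async chunkedFetch(url) {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const reader = response.body.getReader();
  const chunks = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  return new Blob(chunks);
}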

Performance Optimization: Object URLs

Initial implementation stored audio as Data URLs (base64-encoded strings). This was simple but devastatingly slow:

// Data URL approach (slow): base64-encode the entire Blob as a string
const reader = new FileReader();
reader.onload = () => { audioSource.src = reader.result; };  // data:<type>;base64,...
reader.readAsDataURL(blob);

Result: Page load took 6 seconds to convert 50 cached audio files from Blobs to Data URLs.

The fix: Object URLs create in-memory references to Blobs without encoding:

// Object URL approach (fast)
const objectUrl = URL.createObjectURL(blob);
audioSource.src = objectUrl;

Result: Page load dropped to 50 milliseconds.

Why the difference? Data URLs require base64 encoding (adds 33% size overhead) and create string copies of the entire audio file. Object URLs are just pointers—no encoding, no copying.
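One caveat: an Object URL keeps its Blob alive until it is explicitly released, so long-lived pages should revoke URLs they no longer need:

// Release the Blob reference once this source is no longer in use,
// e.g. before replacing it with a new Object URL
URL.revokeObjectURL(objectUrl);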


The CORS Obstacle: When Development Met Production

The solution worked perfectly in development, where we tested it extensively. But the first production deployment revealed a critical blocker:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the
remote resource at https://showcase.fly.storage.tigris.dev/...
(Reason: CORS header 'Access-Control-Allow-Origin' missing).

Why? The Rails application runs at https://smooth.fly.dev/showcase/ but audio files are served from https://showcase.fly.storage.tigris.dev/. Different domain = cross-origin request = CORS headers required.

In development, both the app and storage URLs are localhost, so no CORS issue. Production exposed the problem.


Configuring CORS on Tigris

The solution: configure CORS directly on the Tigris storage bucket.

S3-compatible storage services support CORS configuration via the S3 API. Tigris provides a dashboard for this, but it had a permission bug (clicking "Update" showed "access denied" even on my own bucket).

The fix: Use AWS CLI to configure CORS directly.

Step 1: Install AWS CLI

brew install awscli

Step 2: Configure Tigris Credentials

aws configure --profile tigris

Enter the access key ID and secret access key for your Tigris bucket when prompted; the region and output format prompts can be left at their defaults.

Step 3: Create CORS Configuration

{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "HEAD", "OPTIONS"],
      "AllowedHeaders": ["Range", "Content-Type", "Authorization"],
      "ExposeHeaders": ["Content-Length", "Content-Range", "ETag"],
      "MaxAgeSeconds": 80000
    }
  ]
}

Why AllowedOrigins: ["*"]? CORS is a browser mechanism, not access control: if someone obtains a pre-signed URL, they can fetch it directly regardless of CORS settings, so restricting origins adds no real security here. The actual protections are the authentication layer on the DJ list page, pre-signed URLs that expire after 1 hour, and logging of all file accesses for monitoring and audit.
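If you do want to scope the rule down, AllowedOrigins accepts explicit origins instead of the wildcard. A variant pinned to the application's origin would look like:

{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://smooth.fly.dev"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["Range", "Content-Type", "Authorization"],
      "ExposeHeaders": ["Content-Length", "Content-Range", "ETag"],
      "MaxAgeSeconds": 80000
    }
  ]
}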

Step 4: Apply CORS Configuration

aws s3api put-bucket-cors \
  --profile tigris \
  --endpoint-url https://fly.storage.tigris.dev \
  --bucket showcase \
  --cors-configuration file://cors.json

Step 5: Verify

aws s3api get-bucket-cors \
  --profile tigris \
  --endpoint-url https://fly.storage.tigris.dev \
  --bucket showcase

Returns the configured CORS rules, confirming success.
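As a further check, request an actual audio URL with an Origin header and confirm the CORS headers come back (substitute a pre-signed URL copied from the DJ list page):

curl -sI -H "Origin: https://smooth.fly.dev" \
  "<pre-signed audio URL>" | grep -i access-control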

Step 6: Update Rails URLs

With CORS configured, audio files can be accessed directly from Tigris:

<!-- app/views/solos/djlist.html.erb -->
<audio controls preload="none">
  <source src="<%= heat.solo.song_file.url(expires_in: 1.hour) %>"
          type="<%= heat.solo.song_file.content_type %>">
</audio>

Why 1 hour expiration? Pre-signed URLs expire for security. Five minutes (the default) was too short—DJs would load the page, cache songs, then find URLs expired before caching finished. One hour provides comfortable margin.


The Final Implementation

Here's what the complete solution looks like:

1. DJ List View with Caching UI

<div data-controller="progressive-audio"
     data-progressive-audio-event-id-value="<%= ENV['RAILS_APP_DB'] %>">

  <button data-progressive-audio-target="button"
          data-action="click->progressive-audio#cache">
    Cache Songs Locally
  </button>

  <button data-progressive-audio-target="clearButton"
          data-action="click->progressive-audio#clearCache"
          class="hidden">
    Clear Cache
  </button>

  <div data-progressive-audio-target="stats">
    Checking cache...
  </div>

  <div data-progressive-audio-target="progress" class="hidden">
    <div class="progress-bar">
      <div data-progressive-audio-target="progressBar"></div>
    </div>
    <p data-progressive-audio-target="message">Caching songs...</p>
  </div>

  <table>
    <% @heats.each do |heat| %>
      <tr>
        <td><%= heat.number %></td>
        <td>
          <audio controls preload="none">
            <source src="<%= heat.solo.song_file.url(expires_in: 1.hour) %>"
                    type="<%= heat.solo.song_file.content_type %>">
          </audio>
        </td>
      </tr>
    <% end %>
  </table>
</div>
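The data attributes above wire into the controller's targets and values. A sketch of the declarations (the method bodies follow in the next sections):

import { Controller } from "@hotwired/stimulus"

export default class ProgressiveAudioController extends Controller {
  static targets = ["button", "clearButton", "stats",
                    "progress", "progressBar", "message"]
  static values = { eventId: String }

  connect() {
    this.checkCacheStatus()  // restore any cached songs on page load
  }
}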

2. IndexedDB Cache Management

class ProgressiveAudioController extends Controller {
  async initIndexedDB() {
    const request = indexedDB.open('showcase-audio', 1);

    request.onupgradeneeded = (event) => {
      const db = event.target.result;
      if (!db.objectStoreNames.contains('audio')) {
        db.createObjectStore('audio', { keyPath: 'id' });
      }
    };

    return new Promise((resolve, reject) => {
      request.onsuccess = () => resolve(request.result);
      request.onerror = () => reject(request.error);
    });
  }

  async storeInIndexedDB(url, blob, eventId) {
    const db = await this.initIndexedDB();
    const transaction = db.transaction(['audio'], 'readwrite');
    const store = transaction.objectStore('audio');

    const cacheEntry = {
      id: url,
      blob: blob,
      eventId: eventId,
      timestamp: Date.now()
    };

    store.put(cacheEntry);

    // Resolve only when the transaction commits, so callers can await storage
    return new Promise((resolve, reject) => {
      transaction.oncomplete = () => resolve();
      transaction.onerror = () => reject(transaction.error);
    });
  }

  async getCachedAudio(url) {
    const db = await this.initIndexedDB();
    const transaction = db.transaction(['audio'], 'readonly');
    const store = transaction.objectStore('audio');

    return new Promise((resolve, reject) => {
      const request = store.get(url);
      request.onsuccess = () => resolve(request.result);
      request.onerror = () => reject(request.error);
    });
  }
}

3. Cache Key Design: Handling Pre-Signed URLs

Critical issue: Pre-signed S3 URLs change on every page load because they include timestamps and cryptographic signatures:

https://showcase.fly.storage.tigris.dev/mj5ntp16qbfjeknq8p3to3tb19fo?
  X-Amz-Date=20251116T011507Z&
  X-Amz-Signature=32e9233d06e766d659c34c89f17afd1cb2da88ca5ab14176554c503c340626a3

If you cache using the full URL as the key, cache lookups fail on page reload because the signature has changed.

Solution: Use the base URL (without query parameters) as the cache key:

getBaseUrl(url) {
  try {
    const urlObj = new URL(url)
    return urlObj.origin + urlObj.pathname
  } catch (error) {
    return url
  }
}

async storeSong(url, blob, contentType) {
  const baseUrl = this.getBaseUrl(url)  // Extract stable key
  // this.db is the connection opened at initialization; the 'songs'
  // store is created with { keyPath: 'url' }
  const transaction = this.db.transaction(['songs'], 'readwrite')
  const objectStore = transaction.objectStore('songs')

  const song = {
    url: baseUrl,  // Use base URL as stable key
    blob: blob,
    contentType: contentType,
    cachedAt: Date.now(),
    eventId: this.eventIdValue,
    size: blob.size
  }

  objectStore.put(song)
}

async getCachedSong(url) {
  const baseUrl = this.getBaseUrl(url)  // Extract stable key
  const transaction = this.db.transaction(['songs'], 'readonly')
  const objectStore = transaction.objectStore('songs')

  // IndexedDB requests are asynchronous; wrap in a Promise so the
  // result (not the raw IDBRequest) is what callers await
  return new Promise((resolve, reject) => {
    const request = objectStore.get(baseUrl)  // Lookup by base URL
    request.onsuccess = () => resolve(request.result)
    request.onerror = () => reject(request.error)
  })
}

Result: https://showcase.fly.storage.tigris.dev/mj5ntp16qbfjeknq8p3to3tb19fo becomes the stable cache key, matching across page reloads regardless of signature changes.

4. Auto-Restore Cached Audio on Page Load

async checkCacheStatus() {
  const audioElements = document.querySelectorAll('audio source');
  const eventId = this.eventIdValue;
  let cachedCount = 0;

  for (const source of audioElements) {
    // getCachedSong extracts the base URL, so the fresh signature on
    // this page load's pre-signed URL doesn't break the lookup
    const cached = await this.getCachedSong(source.src);

    if (cached && cached.eventId === eventId) {
      // Check expiration (30 days)
      const age = Date.now() - cached.cachedAt;
      if (age < 30 * 24 * 60 * 60 * 1000) {
        const objectUrl = URL.createObjectURL(cached.blob);
        source.src = objectUrl;
        source.closest('tr').classList.add('cached');
        cachedCount++;
      }
    }
  }

  this.updateStats(cachedCount, audioElements.length);
}
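The cache action bound to the "Cache Songs Locally" button isn't shown above; a minimal sketch, assuming the downloadWithFallback idea from earlier and storeSong from section 3:

async cache() {
  const sources = document.querySelectorAll('audio source');
  this.progressTarget.classList.remove('hidden');

  let done = 0;
  for (const source of sources) {
    if (source.src.startsWith('blob:')) continue;  // already cached and restored

    const blob = await this.downloadWithFallback(source.src);
    await this.storeSong(source.src, blob, blob.type);

    done++;
    this.progressBarTarget.style.width = `${Math.round(done / sources.length * 100)}%`;
    this.messageTarget.textContent = `Cached ${done} of ${sources.length} songs`;
  }

  this.checkCacheStatus();  // swap the freshly cached blobs into the page
}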

5. User Experience

On page load: the controller checks IndexedDB and immediately swaps any valid cached songs in as Object URLs, marking their rows as cached.

When caching: the progress bar and status message update as each file is downloaded and stored.

When playing: the <audio> element reads from a local Object URL, so playback no longer depends on the network.


What We Learned

1. Object URLs Are Dramatically Faster Than Data URLs

For in-memory Blob references, Object URLs avoid encoding overhead entirely: restoring 50 cached files took 6 seconds as Data URLs but 50 milliseconds as Object URLs.

Lesson: Use URL.createObjectURL(blob) for audio/video playback. Use Data URLs only when you need embeddable strings (email, serialization).

2. Progressive Fallback Handles Network Variability

Different networks require different fetch strategies: a simple fetch is fastest on good connections, chunked streaming copes better with slow or flaky links, and Range requests help when servers support byte ranges.

Lesson: Implement multiple strategies and fall back gracefully. Don't assume one approach works everywhere.

3. Pre-Signed URLs Need Stable Cache Keys

S3 pre-signed URLs change on every request because they include timestamps and cryptographic signatures. Caching by full URL means cache lookups fail after page reload.

Extract the base URL (without query parameters) as the cache key:

const { origin, pathname } = new URL(url)
const baseUrl = origin + pathname
// https://storage.example.com/file.mp3 (stable)
// not https://storage.example.com/file.mp3?signature=xyz (changes)

Lesson: For any resource with dynamic query parameters (signatures, tokens, timestamps), use a stable portion of the URL as the cache key.


Try It Yourself

The complete implementation is open source:


Conclusion

Unreliable event WiFi shouldn't stop live performances. Progressive audio caching with IndexedDB provides resilience without operational complexity.

The pattern is straightforward:

  1. Cache audio files before playback - IndexedDB stores Blobs persistently
  2. Use Object URLs for performance - Avoid Data URL encoding overhead
  3. Implement progressive fallback - Multiple fetch strategies handle network variability
  4. Use stable cache keys - Extract base URL without query parameters for pre-signed URLs
  5. Configure CORS on the bucket - Use AWS CLI to set CORS rules on S3-compatible storage

For showcase, this means DJs can cache all solo audio files once, then play them reliably regardless of WiFi quality. Events stay on schedule. Dancers perform without interruption.

Building resilient event applications isn't about perfect networks—it's about gracefully handling imperfect ones.

For related architectural patterns in showcase, see:


For AWS CLI CORS configuration and Tigris-specific details, see the Tigris documentation and AWS S3 API reference.