Post-PostgreSQL: Is SQLite on the Edge Finally Production Ready?



Prerequisites

This walkthrough was tested with: Bun 1.x, @libsql/client 0.x, hono 4.x, wrangler 3.x, and Turso CLI 0.x. Pin your dependency versions to avoid breakage from future releases. You will also need a Turso account (free tier works), a Cloudflare account (for D1 sections), and a Fly.io account with flyctl installed (for LiteFS sections).

Why the "Just Use Postgres" Era Is Ending

For years, the reflexive answer to "what database should I use?" has been some flavor of managed Postgres. Neon, Supabase, RDS, Aurora — pick your vendor, spin up an instance, and move on. That advice made sense when SQLite edge deployments were experimental curiosities. It makes less sense now. Three simultaneous developments have pushed edge SQLite past the production-readiness threshold: Cloudflare D1 reached general availability in April 2024 with global read replication, Turso has shipped embedded replicas with automatic sync, and LiteFS has stabilized, adding a static-lease option that makes the Consul dependency optional for simpler single-primary deployments.

For read-heavy workloads at side-project-to-medium scale, these solutions deliver sub-10ms reads for warm, co-located requests (per vendor documentation from Turso and Cloudflare; run your own benchmarks to validate) at significantly lower cost than managed Postgres. For a concrete comparison: Turso's free tier includes 9 GB of storage and 500 million row reads per month, while Neon's free tier limits you to 0.5 GB of storage and a single compute instance, with paid plans starting at $19/month as of mid-2025.

"Edge" here means compute running at CDN points of presence: Cloudflare Workers, Fly.io machines, Vercel Edge Functions. The database lives close to the user, not in a single us-east-1 instance hundreds of milliseconds away.

By the end of this article, you'll have a clear framework for deciding when edge SQLite is the right call, backed by concrete feature comparisons, working code, and an honest accounting of the trade-offs.

How SQLite Became a Networked Database

The Original Limitation: Single-Node, Single-Writer

SQLite's architecture is elegant in its constraints. It runs without a server and stores data in a single file. In WAL (Write-Ahead Logging) mode, it supports concurrent readers alongside a single writer and requires no separate daemon process. That simplicity is exactly why it powers billions of deployed instances on mobile devices and embedded systems.

It is also why SQLite historically could not support multi-region or multi-process web applications. One file, one machine, one writer. No built-in replication. No network protocol. Scaling meant switching to Postgres or MySQL.

Three Replication Architectures

Three distinct architectural approaches have dismantled that limitation.

LiteFS intercepts SQLite's WAL at the filesystem level using FUSE, captures transaction pages, and ships them to replica nodes. The application code has no idea replication is happening. This is virtual WAL replication, operating entirely below the application layer.

Turso maintains libSQL, an open-source fork of SQLite that adds a server mode, HTTP-based replication, and embedded replicas: local SQLite files that sync automatically from a remote primary. The fork diverges from upstream SQLite intentionally, adding features like native vector search and ALTER TABLE extensions that upstream will likely never adopt.

Cloudflare built SQLite directly into the Workers runtime with D1, handling replication transparently across their global network. The developer interacts with a binding, not a file or a connection string. Replication is fully managed and invisible.

The Contenders: D1, Turso, and LiteFS in 2026

Cloudflare D1

D1 runs SQLite inside Cloudflare Workers via a native binding. When a Worker executes a read query, D1 serves it from the nearest edge PoP with automatic read replicas. Writes route to a single primary. The service prices on a per-request plus storage model: reads, writes, and stored data each have their own meter. See Cloudflare's D1 pricing page for current read/write unit definitions and free tier limits.

D1 fits best in a Cloudflare-native stack. If an application already uses Workers, Pages, R2, and KV, D1 slots in with zero additional infrastructure. The binding model means no connection strings, no connection pooling headaches, and no cold-start database connections.

That said, D1 databases currently cap at 10 GB per database (see current plan documentation for the latest limits). Cloudflare exposes no external TCP or HTTP endpoint for the raw database file, so tooling outside the Cloudflare ecosystem cannot connect directly. D1's migration tools lag behind what the Postgres ecosystem offers. Vendor lock-in is the obvious trade-off.

# wrangler.toml
compatibility_date = "2024-04-01" # Set to the current date when you create the project

[[d1_databases]]
binding = "DB"
database_name = "bookmarks-prod"
database_id = "" # TODO: Replace with the actual ID from: wrangler d1 list
// Cloudflare Worker — D1 CRUD
interface Env {
  DB: D1Database; // injected by the [[d1_databases]] binding in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    try {
      if (url.pathname === "/bookmarks" && request.method === "GET") {
        const { results } = await env.DB.prepare(
          "SELECT id, url, title, created_at FROM bookmarks ORDER BY created_at DESC LIMIT 50"
        ).all();
        return Response.json(results);
      }

      if (url.pathname === "/bookmarks" && request.method === "POST") {
        const contentType = request.headers.get("content-type") ?? "";
        if (!contentType.includes("application/json")) {
          return new Response("Unsupported Media Type", { status: 415 });
        }

        let bookmarkUrl: string | undefined;
        let title: string | undefined;
        try {
          const body = await request.json<{ url?: string; title?: string }>();
          bookmarkUrl = body?.url;
          title = body?.title;
        } catch {
          return new Response("Invalid JSON", { status: 400 });
        }

        if (!bookmarkUrl || typeof bookmarkUrl !== "string" || bookmarkUrl.length > 2048) {
          return new Response("Invalid or missing 'url'", { status: 400 });
        }
        if (!title || typeof title !== "string" || title.length > 512) {
          return new Response("Invalid or missing 'title'", { status: 400 });
        }

        // Validate URL format and restrict to http/https
        try {
          const parsed = new URL(bookmarkUrl);
          if (!["http:", "https:"].includes(parsed.protocol)) throw new Error();
        } catch {
          return new Response("'url' must be a valid http/https URL", { status: 400 });
        }

        await env.DB.prepare(
          "INSERT INTO bookmarks (url, title, created_at) VALUES (?, ?, datetime('now'))"
        ).bind(bookmarkUrl, title).run();
        return new Response("Created", { status: 201 });
      }

      return new Response("Not Found", { status: 404 });
    } catch (err) {
      console.error("D1 handler error:", err instanceof Error ? err.message : String(err));
      return new Response("Internal Server Error", { status: 500 });
    }
  },
};

Turso (libSQL)

Turso runs a libSQL server as the primary and syncs data to embedded replicas: local SQLite files living alongside the application process. Reads hit the local file. Writes go to the remote primary and then sync back. The pricing model is row-based (row reads, row writes) plus storage, organized around database groups that can span multiple regions. See the current Turso pricing page for up-to-date plan details.

Where Turso differentiates is multi-cloud and multi-runtime flexibility. The @libsql/client SDK works on Node.js, Bun, and Deno runtimes deployed to platforms including Cloudflare Workers, Fly.io, and Railway. No single infrastructure provider owns the stack.

Embedded replica sync latency is the nuance to watch. After a write, the local replica won't show the new row until the next sync interval fires or you call sync() explicitly. For most read-heavy applications this is invisible, but workflows that write-then-immediately-read can hit stale data if the sync is not triggered. The libSQL fork's divergence from upstream SQLite also means some tooling that expects vanilla SQLite may encounter behavioral differences.
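One mitigation for the write-then-read staleness problem is a thin wrapper that forces a sync() after every non-SELECT statement, trading a little write latency for fresher local reads. A minimal sketch, assuming the client shape of @libsql/client; the LibsqlLike interface and withReadYourWrites helper below are illustrative stand-ins for this article, not part of the SDK:

```typescript
// Illustrative sketch: force a replica sync after writes so subsequent
// local reads see the new row. LibsqlLike is a hypothetical stand-in
// for the @libsql/client type, reduced to the two methods used here.
interface LibsqlLike {
  execute(stmt: { sql: string; args?: unknown[] }): Promise<unknown>;
  sync(): Promise<void>;
}

function withReadYourWrites(db: LibsqlLike): LibsqlLike {
  return {
    async execute(stmt) {
      const result = await db.execute(stmt);
      // Heuristic: anything that isn't a SELECT is treated as a write.
      if (!/^\s*select\b/i.test(stmt.sql)) {
        await db.sync(); // pull the committed write into the local replica
      }
      return result;
    },
    sync: () => db.sync(),
  };
}
```

Wrapping the client once at startup keeps the rest of the codebase unaware of the sync dance; the cost is one extra round-trip per write.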

// Turso embedded replica — Node.js / Bun
import { createClient } from "@libsql/client";

const SYNC_INTERVAL_SECONDS = Number(process.env.SYNC_INTERVAL ?? "60");

const db = createClient({
  url: "file:local-replica.db",
  syncUrl: process.env.TURSO_DATABASE_URL,
  authToken: process.env.TURSO_AUTH_TOKEN,
  syncInterval: SYNC_INTERVAL_SECONDS, // seconds; reads may lag writes by up to this interval without explicit sync()
});

// Write goes to remote primary
await db.execute({
  sql: "INSERT INTO bookmarks (url, title) VALUES (?, ?)",
  args: ["https://example.com", "Example Site"],
});

// Explicitly sync to pull latest data into local replica
await db.sync();

// Read from local embedded replica — sub-5ms
const result = await db.execute(
  "SELECT id, url, title, created_at FROM bookmarks ORDER BY rowid DESC LIMIT 10"
);
console.log(result.rows);

LiteFS (Fly.io)

LiteFS takes a fundamentally different approach. It is a FUSE-based filesystem that sits between the application and the SQLite database file. LiteFS captures every transaction written to the primary and replicates it to read replicas via HTTP transaction streaming, with a lease-based mechanism controlling which node holds write authority. The static lease type removes the Consul dependency but requires a fixed, pre-assigned primary hostname.

For Fly.io deployments, LiteFS gives teams full ownership of the SQLite file and the freedom to run any language runtime: Go, Python, Ruby, Elixir, whatever speaks SQLite. LiteFS is open-source, and the operational model is transparent.

On the trade-off side: while LiteFS can technically run outside Fly.io, the LiteFS project documentation does not list that as a supported configuration. The single-writer constraint remains. FUSE introduces a thin layer of filesystem overhead, though this overhead is negligible for most workloads in practice.

# litefs.yml
fuse:
  dir: "/litefs"

data:
  dir: "/var/lib/litefs"

lease:
  type: "static"
  hostname: "${FLY_APP_NAME}.internal" # Must resolve to the primary machine. For Fly.io,
  advertise-url: "http://${FLY_APP_NAME}.internal:20202" # use your app's internal DNS name.
  # If using auto-assigned machine names that don't match this pattern,
  # consider the "consul" lease type or explicitly naming your primary machine.

exec:
  - cmd: "node server.js" # Replace with your runtime command, e.g., "bun run server.ts"
# fly.toml (relevant section)
[mounts]
  source = "litefs_data"
  destination = "/var/lib/litefs"

Feature Matrix: Edge SQLite Options Compared

Feature | Cloudflare D1 | Turso (libSQL) | LiteFS (Fly.io) | Managed Postgres (Neon/Supabase)
--- | --- | --- | --- | ---
Runtime lock-in | Cloudflare Workers | Any (client SDK) | Fly.io (primarily) | Any
Read latency (edge)* | ~5-15 ms | ~1-5 ms (embedded replica) | ~2-10 ms (same region) | ~30-80 ms (e.g., us-east-1 to eu-west-1; varies by provider and region pair)
Write model | Single-writer, replicated | Single primary group, replicated | Single primary, FUSE replicated | Single primary with read replicas; active-active multi-writer requires additional products (e.g., Citus, Aurora Global)
Max DB size | 10 GB (see current plan docs) | 100 GB+ (see current plan docs) | Disk-bound | Effectively unlimited
Branching / Preview DBs | Yes (per-environment databases via wrangler preview) | Yes (database groups) | Manual (snapshot) | Yes (Neon)
Pricing model | Per-request + storage | Row-based + storage | Infra cost (Fly Machines) | Compute + storage + egress
Open-source core | No | Yes (libSQL) | Yes | Varies
ORM support | Drizzle, Prisma (adapter) | Drizzle, Prisma, Kysely | Any (standard SQLite) | Full ecosystem
Ideal scale ceiling | Small-medium | Medium | Small-medium | Medium-large

*Latency figures are illustrative estimates based on co-located or same-region scenarios. Actual latency depends on geographic distance, payload size, and infrastructure. Run benchmarks for your topology.

The read latency row tells the core story. Embedded replicas (Turso) and same-region FUSE replicas (LiteFS) deliver single-digit millisecond reads because the data is local. D1 is slightly higher due to edge PoP replication hops. Managed Postgres, serving cross-region from a central instance, sits at 30-80 ms. For read-heavy applications in favorable topologies, that gap represents a 3-10x latency reduction. Actual results depend on geography and workload.

Postgres still wins convincingly on write concurrency, maximum database size, query complexity at scale, and ecosystem maturity. Edge SQLite does not replace Postgres universally.

When to Choose Edge SQLite (and When Not To)

Green Flags: Edge SQLite Is Likely the Right Call

Read-heavy, low-write workloads are the clearest fit: blogs, dashboards, content APIs, and public-facing read endpoints. Single-tenant or per-tenant database patterns map naturally to SQLite's one-database-per-file model, and Turso's database groups make this operationally clean. Then there are latency-sensitive applications. Shaving 30-60 ms off every read (the gap between the edge SQLite and managed Postgres rows in the feature matrix above) directly impacts user experience when every request pays that tax. Side projects and small SaaS products where operational simplicity and cost matter should strongly consider edge SQLite over a managed Postgres instance running $25-50/month (as of mid-2025; check current pricing for Neon, Supabase, or your vendor of choice).

Red Flags: Stick with Postgres

High write concurrency or multi-writer requirements rule out all three options. They are all single-writer architectures. Complex relational queries with heavy JOINs across large datasets will hit SQLite's query planner limitations: it evaluates JOINs with nested loops only, with no hash join or parallel scan. Applications needing JSONB operators, full-text search beyond SQLite's FTS5, or PostGIS spatial queries should stay on Postgres. Regulatory requirements demanding specific managed-database compliance certifications (SOC 2, HIPAA BAAs from the database vendor) narrow the field. And if a team's expertise is deeply Postgres-native with existing migration tooling, monitoring, and backup workflows? Switching databases for marginal latency gains introduces unnecessary risk.

Deploying a Production Edge SQLite App with Turso

Turso is the most runtime-agnostic of the three, making it the natural choice for a walkthrough.

Step 1: Create the Database and Schema

turso db create bookmarks-prod --group default

Run the following SQL in the Turso interactive shell. This is an interactive REPL — do not paste the turso db shell command and the SQL together into a script. For non-interactive use (e.g., CI), save the SQL to a file and run turso db shell bookmarks-prod < schema.sql.

turso db shell bookmarks-prod

Then, inside the shell, create the schema:

CREATE TABLE bookmarks (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  url TEXT NOT NULL,
  title TEXT NOT NULL,
  created_at TEXT DEFAULT (datetime('now'))
);

Back in your regular terminal, grab credentials for the application:

turso db show bookmarks-prod --url
turso db tokens create bookmarks-prod

Store these in a .env file. Ensure .env is in .gitignore. Never commit auth tokens to version control.

Set the following environment variables before running the application:

export TURSO_DATABASE_URL="<output of turso db show --url>"
export TURSO_AUTH_TOKEN="<output of turso db tokens create>"
export REPLICA_DB_PATH="/data/local-bookmarks.db"  # Must be an absolute path within /data on a persistent volume
export SYNC_INTERVAL="60"                           # Optional: sync interval in seconds (default: 60)
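A small startup check turns a missing variable into an immediate, descriptive failure instead of a confusing runtime error later. A sketch that validates the variables above; requireEnv and loadConfig are hypothetical helpers written for this article, not part of any SDK:

```typescript
// Illustrative sketch: validate required configuration at startup.
// requireEnv and loadConfig are hypothetical helpers, not SDK functions.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

function loadConfig() {
  return {
    syncUrl: requireEnv("TURSO_DATABASE_URL"),
    authToken: requireEnv("TURSO_AUTH_TOKEN"),
    // Optional settings fall back to sensible defaults.
    replicaPath: process.env.REPLICA_DB_PATH ?? "/data/local-bookmarks.db",
    syncIntervalSeconds: Number(process.env.SYNC_INTERVAL ?? "60"),
  };
}
```

Call loadConfig() once at process start so a misconfigured deploy fails loudly before serving traffic.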

Step 2: Connect from an Edge Function with Embedded Replicas

The embedded replica requires a writable local filesystem. The file: URL resolves relative to the process working directory — in containerized or ephemeral deployments (Fly.io, Railway), a bare filename like "file:local-bookmarks.db" will silently create a new empty database on every deploy or restart. Always use an absolute path on a persistent volume mount.

// src/index.ts — Hono on Bun with Turso embedded replica
import { Hono } from "hono";
import { createClient } from "@libsql/client";
import path from "node:path";

const app = new Hono();

// Validate and restrict REPLICA_DB_PATH to the /data directory to prevent path traversal
const rawPath = process.env.REPLICA_DB_PATH ?? "/data/local-bookmarks.db";
const ALLOWED_DIR = "/data";
const resolvedPath = path.resolve(rawPath);
if (!resolvedPath.startsWith(ALLOWED_DIR + "/") && resolvedPath !== ALLOWED_DIR) {
  throw new Error(`REPLICA_DB_PATH must be within ${ALLOWED_DIR}. Got: ${resolvedPath}`);
}

const SYNC_INTERVAL_SECONDS = Number(process.env.SYNC_INTERVAL ?? "60");

const db = createClient({
  url: `file:${resolvedPath}`,
  syncUrl: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN!,
  syncInterval: SYNC_INTERVAL_SECONDS, // seconds; reads may lag writes by up to this interval without explicit sync()
});

// Graceful shutdown — clears the sync interval timer and closes the client
const shutdown = () => {
  db.close();
  process.exit(0);
};
process.on("SIGTERM", shutdown);
process.on("SIGINT", shutdown);

app.get("/bookmarks", async (c) => {
  const result = await db.execute(
    "SELECT id, url, title, created_at FROM bookmarks ORDER BY created_at DESC LIMIT 50"
  );
  return c.json(result.rows);
});

app.post("/bookmarks", async (c) => {
  let body: { url: string; title: string };
  try {
    body = await c.req.json();
    if (!body.url || typeof body.url !== "string" || body.url.length > 2048) throw new Error();
    if (!body.title || typeof body.title !== "string" || body.title.length > 512) throw new Error();
    const parsed = new URL(body.url);
    if (!["http:", "https:"].includes(parsed.protocol)) throw new Error();
  } catch {
    return c.json({ error: "Invalid request body. Provide valid 'url' (http/https, ≤2048 chars) and 'title' (≤512 chars)." }, 400);
  }

  await db.execute({
    sql: "INSERT INTO bookmarks (url, title) VALUES (?, ?)",
    args: [body.url, body.title],
  });

  // Sync separately — failure here does not undo the write; log and continue
  try {
    await db.sync();
  } catch (syncErr) {
    console.error("Replica sync failed after write:", syncErr instanceof Error ? syncErr.message : String(syncErr));
    // Write succeeded on the remote primary; return 201 regardless of sync outcome
  }

  return c.json({ status: "created" }, 201);
});

export default app;

await db.sync() pulls the latest committed state from the remote primary into the local replica. This reduces staleness but does not guarantee strong read-your-own-writes consistency under network partition. Without the explicit sync() call, subsequent reads would use the local file and potentially return stale data until the next syncInterval tick. On process restart or redeployment, the local replica file will be re-synced from the remote primary; if the file path is ephemeral, the replica starts empty and must complete a full initial sync. Note that a sync() failure after a successful write does not mean the write was lost — the write was committed on the remote primary. The local replica will catch up on the next successful sync cycle.
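Because a failed sync() only delays local freshness (the write is already durable on the primary), a bounded retry with exponential backoff is usually all the error handling needed. A sketch; syncWithBackoff is a hypothetical helper and the delay values are illustrative:

```typescript
// Illustrative sketch: retry a failed replica sync with exponential backoff.
// A sync failure never loses the write (it is committed on the primary);
// retrying just restores local read freshness sooner.
async function syncWithBackoff(
  sync: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 200,
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await sync();
      return true; // replica caught up
    } catch {
      const delayMs = baseDelayMs * 2 ** attempt; // 200, 400, 800, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return false; // give up; the next syncInterval tick will retry anyway
}
```

Passing `() => db.sync()` from the earlier handler keeps the retry policy in one place rather than scattered across routes.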

Step 3: Verify Replication and Measure Latency

// latency-probe.ts — run as a standalone script against the existing db client
import type { Client } from "@libsql/client";

async function measureLatency(db: Client) {
  // Use a dedicated throwaway table — never write to production tables in benchmarks
  await db.execute(`
    CREATE TABLE IF NOT EXISTS _latency_probe (
      id INTEGER PRIMARY KEY AUTOINCREMENT,
      ts TEXT
    )
  `);

  const writeStart = performance.now();
  await db.execute({
    sql: "INSERT INTO _latency_probe (ts) VALUES (datetime('now'))",
    args: [],
  });
  const writeEnd = performance.now();

  await db.sync();

  const readStart = performance.now();
  await db.execute("SELECT id, ts FROM _latency_probe ORDER BY rowid DESC LIMIT 1");
  const readEnd = performance.now();

  // Clean up probe row immediately to avoid polluting data
  await db.execute("DELETE FROM _latency_probe WHERE id = (SELECT MAX(id) FROM _latency_probe)");

  console.log(JSON.stringify({
    writeMs: Number((writeEnd - writeStart).toFixed(2)),
    readMs: Number((readEnd - readStart).toFixed(2)),
  }));
}

export { measureLatency };

// Usage: import { measureLatency } from "./latency-probe";
// await measureLatency(db);

Write latency will reflect the network round-trip to the Turso primary. Read latency should consistently land in the 1-5 ms range for warm reads after initial sync, since it hits the local SQLite file.
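A single probe run is noisy; aggregating many samples and reporting percentiles gives a more honest picture. A small helper using the nearest-rank percentile method; summarize is a hypothetical utility written for this article, not part of @libsql/client:

```typescript
// Illustrative sketch: summarize repeated latency samples with
// nearest-rank percentiles (simple, no interpolation).
function percentile(sortedMs: number[], p: number): number {
  if (sortedMs.length === 0) throw new Error("no samples");
  const rank = Math.ceil((p / 100) * sortedMs.length) - 1;
  return sortedMs[Math.min(Math.max(rank, 0), sortedMs.length - 1)];
}

function summarize(samplesMs: number[]) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  return {
    p50: percentile(sorted, 50),
    p95: percentile(sorted, 95),
    max: sorted[sorted.length - 1],
  };
}
```

Run measureLatency in a loop, collect the readMs values into an array, and pass it to summarize; compare providers on p95 rather than the mean, since tail latency is what users feel.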

The New Default for Small-Scale Apps

Edge SQLite is production-ready for the right workloads. For teams fully committed to Cloudflare, D1 is the natural fit: zero-config bindings, no connection management, tight integration with Workers and R2. Turso offers the most flexibility across runtimes and cloud providers, making it the strongest choice for teams that want to avoid infrastructure lock-in. LiteFS gives Fly.io-native teams full control over their SQLite files with transparent, filesystem-level replication.

Managed Postgres remains the right choice for complex, write-heavy, large-scale systems. This is not "post-Postgres" universally. But for a large class of applications — the kind most developers actually build most often — the trade-off increasingly favors edge SQLite for read-heavy, small-to-medium workloads. The latency numbers, the pricing, and the operational simplicity all point the same direction.
