
Documentation Index

Fetch the complete documentation index at: https://docs.archil.com/llms.txt

Use this file to discover all available pages before exploring further.

Cloudflare Workers run at the edge with no persistent filesystem. Archil gives your Workers read/write access to S3-backed storage with the disk SDK — a pure-JavaScript control-plane client with no native dependencies, so it works inside the Workers runtime. The pattern: a Worker dispatches work to disk.exec, which runs the command in a container with the filesystem mounted. The Worker stays small and stateless; the heavy lifting happens server-side, against the disk.

Install the SDK

npm install disk

Worker example

This Worker accepts a query parameter, runs grep against an Archil disk via disk.exec, and returns the result:
import * as archil from "disk";
// Buffer is a Node API; import it explicitly so it resolves under nodejs_compat.
import { Buffer } from "node:buffer";

interface Env {
  ARCHIL_API_KEY: string;
  DISK_ID: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    archil.configure({
      apiKey: env.ARCHIL_API_KEY,
      region: "aws-us-east-1",
    });

    const url = new URL(request.url);
    const query = url.searchParams.get("q") ?? "";

    try {
      const d = await archil.getDisk(env.DISK_ID);

      if (request.method === "PUT") {
        const path = url.pathname.replace(/[^a-zA-Z0-9/_.-]/g, "");
        const body = await request.text();
        // disk.exec doesn't provide stdin, so base64-encode the body into the
        // command itself. For larger payloads, write via a mounted client instead.
        const b64 = Buffer.from(body).toString("base64");
        const { exitCode: writeExit } = await d.exec(
          `mkdir -p "$(dirname "${path}")" && echo "${b64}" | base64 -d > "${path}"`,
        );
        return new Response(writeExit === 0 ? "OK" : "write failed", {
          status: writeExit === 0 ? 200 : 500,
        });
      }

      // GET: grep the disk. Single-quote the query for the shell so user
      // input can't trigger expansion or command substitution.
      const quoted = `'${query.replace(/'/g, `'\\''`)}'`;
      const { stdout, exitCode } = await d.exec(
        `grep -r ${quoted} logs`,
      );
      return new Response(stdout, { status: exitCode === 0 ? 200 : 404 });
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return new Response(message, { status: 500 });
    }
  },
};
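
The PUT handler above strips every character outside a small allowlist before interpolating the path into a shell command. That sanitizer can be exercised on its own; `sanitizePath` is a name introduced here for illustration, not part of the SDK:

```typescript
// The same character allowlist the Worker applies to request paths before
// interpolating them into a shell command.
function sanitizePath(pathname: string): string {
  return pathname.replace(/[^a-zA-Z0-9/_.-]/g, "");
}

// Ordinary paths pass through unchanged; shell metacharacters are dropped.
sanitizePath("/logs/app-2024.log"); // "/logs/app-2024.log"
sanitizePath("/logs/$(reboot)");    // "/logs/reboot"
```

Because the allowlist excludes quotes, spaces, `$`, and backticks, the sanitized value is safe to place inside double quotes in the exec command.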

Configuration

Set your Archil API key as a Worker secret:
npx wrangler secret put ARCHIL_API_KEY
In your wrangler.toml, enable the Node.js compatibility flag (the SDK depends on standard Node APIs but no native modules):
compatibility_flags = ["nodejs_compat"]
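
Putting the configuration together, a minimal wrangler.toml might look like the sketch below; the name, entry point, and compatibility date are placeholders for your own project:

```toml
# Minimal sketch — adjust name, main, and compatibility_date to your project.
name = "archil-grep-worker"
main = "src/index.ts"
compatibility_date = "2024-01-01"
compatibility_flags = ["nodejs_compat"]
```

The ARCHIL_API_KEY secret set via wrangler does not appear in this file; wrangler injects it into the Worker's env at runtime.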

Archil with R2

If you’re already using Cloudflare R2 for object storage, you can use R2 as Archil’s data source. Archil keeps a managed cache in front of R2, so reads from your Workers go to fast cached storage instead of round-tripping to R2 on every request — and you get filesystem semantics on top of the bucket. Create a disk backed by your R2 bucket, and your Workers can exec against it.

Performance tips

  • Pick the right Archil region. disk.exec runs in the Archil region, not at the edge — choose a region close to where the bulk of your traffic originates, or where your data already lives.
  • Push work into exec, not into the Worker. Workers have CPU-time limits; serverless exec doesn’t. If a request requires scanning many files, doing it through exec keeps the Worker fast and avoids streaming gigabytes through the edge.
  • Reuse getDisk() across requests in long-lived isolates by caching the returned Disk at module scope.
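
A minimal sketch of that module-scope caching pattern, with a structural Disk type standing in for the SDK's handle and the lookup function passed in as a parameter (in the Worker it would be archil.getDisk):

```typescript
// Structural stand-in for the SDK's disk handle; only exec is modeled here.
type Disk = { exec(cmd: string): Promise<{ stdout: string; exitCode: number }> };

// Cache the in-flight promise, not the resolved value, so concurrent
// requests in the same isolate share a single control-plane lookup.
let cachedDisk: Promise<Disk> | null = null;

function getDiskOnce(
  lookup: (id: string) => Promise<Disk>,
  id: string,
): Promise<Disk> {
  cachedDisk ??= lookup(id);
  return cachedDisk;
}
```

The cache lives only as long as the isolate: cold starts rebuild it, which is exactly the behavior you want since the handle is cheap to recreate.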