Archil file systems can run your code. disk.exec(command) spins up a container with the file system already mounted, runs an arbitrary shell command to completion, and returns stdout, stderr, exit code, and timing information. You pay only for the time the command is actively running. Serverless execution is available today for disks in our AWS regions (aws-us-east-1, aws-us-west-2, aws-eu-west-1).
import * as archil from "disk";

const { disk } = await archil.createDisk({
  name: "agent-workspace",
  mounts: [{ type: "s3", bucketName: "my-bucket" }],
});

const { stdout } = await disk.exec("grep -r ERROR logs");
Or from a shell, with the disk CLI:
npx disk dsk-abc123 exec "grep -r ERROR logs"
Serverless execution always uses disks mounted in shared mode. This means that, by default, you will not be able to perform some kinds of mutating actions (such as editing or removing existing files). For more details on how to use shared disks, see Shared Disks.

When to use disk.exec

Reach for serverless execution when:
  • You only want a result, not the raw data. A full-text search across a bucket returns a few kilobytes; downloading the bucket to grep locally could be gigabytes.
  • You need to fan out work across the same data. Every exec gets its own container. Promise.all of N execs is a map-reduce.
  • You’re giving an agent a bash tool. Persistent file system + ephemeral compute is the model agents want, and disk.exec gives you both in one SDK call.
  • You don’t want to manage compute lifecycle. No sandbox to warm up, no instance to size, no cold start to pay for when you’re idle.
If you instead need local latency for an interactive session, or you’re running a long-lived process that streams work, mount the disk on a Linux server and talk to it as a POSIX file system.
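The fan-out pattern above can be sketched in a few lines. This is an illustrative sketch, not the SDK's exact behavior: `exec` below is a local stub standing in for `disk.exec`, and the shard directories (`logs/part-0`, …) are hypothetical.

```typescript
// Hypothetical fan-out: run the same search over N subdirectories in parallel.
// `exec` stands in for disk.exec; it is stubbed here so the sketch runs locally.
type ExecResult = { stdout: string; exitCode: number };

const exec = async (command: string): Promise<ExecResult> => {
  // Stub: a real call would run `command` in a container with the disk mounted.
  return { stdout: `ran: ${command}\n`, exitCode: 0 };
};

async function fanOut(shards: string[]): Promise<string> {
  // Map: one exec per shard, each in its own container.
  const results = await Promise.all(
    shards.map((dir) => exec(`grep -r ERROR ${dir} || true`))
  );
  // Reduce: concatenate the per-shard matches.
  return results.map((r) => r.stdout).join("");
}

fanOut(["logs/part-0", "logs/part-1", "logs/part-2"]).then(console.log);
```

Because each exec is an independent container, the only shared state between shards is the disk itself, which is what makes the Promise.all map-reduce safe.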

How it works

When you call disk.exec:
  1. The request is sent to the Archil control plane and queued for a runtime container.
  2. A container is started (or an existing warm one is claimed) with the file system mounted at /mnt/data.
  3. Your command runs inside the container with bash -c "<command>".
  4. When the command exits, stdout, stderr, and the exit code are returned. The container is released.
The returned ExecResult includes a timing object with three fields:
Field       Meaning
totalMs     End-to-end wall clock, measured on the server.
queueMs     Time spent queueing, scheduling, booting or claiming a container, and mounting the file system.
executeMs   Time your command itself ran.
Billing is based on executeMs — the wall-clock time your command runs — in 1ms increments, with a 100ms minimum per call. Queue time is not billed. See the pricing page for current rates.
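The billing rule can be written out concretely. A sketch, using the timing field names above (the helper `billedMs` is illustrative, not part of the SDK):

```typescript
// Timing fields returned alongside stdout, stderr, and the exit code.
interface Timing {
  totalMs: number;   // end-to-end wall clock, measured server-side
  queueMs: number;   // scheduling, container boot/claim, and mount (not billed)
  executeMs: number; // time the command itself ran (billed)
}

// Billing: executeMs in 1ms increments, with a 100ms minimum per call.
function billedMs(timing: Timing): number {
  return Math.max(100, timing.executeMs);
}

billedMs({ totalMs: 340, queueMs: 303, executeMs: 37 });    // 100 (minimum applies)
billedMs({ totalMs: 1500, queueMs: 266, executeMs: 1234 }); // 1234
```

Note that a slow queue inflates totalMs but never the bill; only executeMs matters.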

Working directory and environment

Every exec runs with the disk as the working directory, so commands can reference files on the disk using relative paths. The runtime image includes common Unix tools (coreutils, grep, sed, awk, find, curl, jq, python3, node). If you need a specific interpreter, write the script to the disk first (through a mounted client or a prior exec) and invoke it by path.
// Write a script via a mounted client, then run it through exec
await disk.exec("python etl.py --input raw --output clean");

Consistency with other clients

Serverless execs are regular Archil clients — they use the same read-after-write consistency model as any mounted client. A file written by a FUSE mount is visible to the next exec that reads it, and vice versa. For concurrent writes, the usual delegation rules apply; most short-lived execs run fine as non-exclusive writers in their own subdirectory.
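The cross-client guarantee can be pictured with a small local simulation. This is purely illustrative: on Archil the writer would be a FUSE mount and the reader the next exec, whereas here both sides are the local file system, and the paths are hypothetical.

```typescript
import { mkdtempSync, writeFileSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Local stand-in for the shared disk.
const disk = mkdtempSync(join(tmpdir(), "archil-sim-"));

// The "mounted client" writes a work list...
writeFileSync(join(disk, "todo.txt"), "shard-1\nshard-2\n");

// ...and with read-after-write consistency, the "next exec" sees it immediately.
const seen = readFileSync(join(disk, "todo.txt"), "utf8");
console.log(seen.trim().split("\n").length); // 2
```

The same file-based handoff is how multiple execs coordinate without any shared process state.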

Limits

  • Command timeout: each exec call is capped at 5 minutes; the HTTP response returns by then at the latest. Break longer-running work into multiple exec calls that communicate through files on the disk.
  • Output size: stdout and stderr are each capped at 128 KiB per invocation. If you hit the cap, the output is truncated and a trailer is appended noting how many bytes were dropped. Pipe large outputs to a file on the disk instead.
  • Concurrency: per-disk concurrency limits are generous but not unbounded — contact us before running large fan-outs.
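If you are fanning out widely, it is easy to bound concurrency client-side rather than firing every exec at once. A sketch, with `exec` passed in as a parameter (a stub below stands in for `disk.exec`, and the limit of 8 is an arbitrary choice):

```typescript
// Run at most `limit` commands at a time, preserving result order.
async function execAll(
  commands: string[],
  exec: (cmd: string) => Promise<string>,
  limit = 8
): Promise<string[]> {
  const results: string[] = new Array(commands.length);
  let next = 0;
  // `limit` workers each pull the next unclaimed command until the list drains.
  const worker = async () => {
    while (next < commands.length) {
      const i = next++;
      results[i] = await exec(commands[i]);
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, commands.length) }, worker)
  );
  return results;
}

// Usage with a stubbed exec:
const stub = async (cmd: string) => `ok: ${cmd}`;
execAll(["a", "b", "c"], stub, 2).then(console.log); // ["ok: a", "ok: b", "ok: c"]
```

Because JavaScript is single-threaded, the `next++` claim is race-free between awaits, so no extra locking is needed.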

Next steps