Containers lose all local data when they stop unless you configure external storage. Mounting an Archil disk gives the processes inside a container a persistent, elastic POSIX filesystem backed by S3 — no EBS snapshots, no manual volume provisioning, no object-storage SDKs. The best way to access an Archil disk from a container depends on your runtime:
| Runtime | Recommended approach |
| --- | --- |
| Docker / Podman / OCI containers sharing the host kernel | Bind mount from the host — simpler, no privileged containers, and avoids the seccomp pitfalls of FUSE-inside-the-container. |
| MicroVMs (Firecracker, Cloud Hypervisor, Kata, gVisor) with isolated kernels | FUSE inside the microVM — there's no host FUSE mount to bind from, so the VM mounts Archil itself. |
| Kubernetes | The Archil CSI driver. It manages Archil mounts as native persistent volumes without privileged pods. |

Bind mount from the host

First, install the archil CLI and mount a disk on the host:
curl -s https://archil.com/install | sh

sudo mkdir -p /mnt/archil
export ARCHIL_MOUNT_TOKEN="<token>"
sudo --preserve-env=ARCHIL_MOUNT_TOKEN archil mount <disk-name> /mnt/archil --region aws-us-east-1
Then start the container with a volume mount:
docker run -v /mnt/archil:/data myimage
The application inside the container reads and writes to /data as a normal filesystem. Archil handles caching on fast local SSD and synchronizing to S3 in the background.
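As a quick sanity check, a file written through the bind mount in one container is visible on the host and in any later container. A sketch, assuming the host mount above and any image with a shell (alpine here is an arbitrary choice, not from the Archil docs):

```shell
# Write through the bind mount from a throwaway container...
docker run --rm -v /mnt/archil:/data alpine sh -c 'echo hello > /data/note.txt'

# ...then read it back from the host and from a second container.
cat /mnt/archil/note.txt
docker run --rm -v /mnt/archil:/data alpine cat /data/note.txt
```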

Docker Compose

services:
  app:
    image: myimage
    volumes:
      - /mnt/archil:/data
You can manage delegations from inside the container by installing the Archil client there too:
archil checkout /data/my-file.txt
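A minimal Dockerfile sketch for that setup, reusing the install command from the host instructions above (the base image and package choices are assumptions, not an official Archil image recipe):

```dockerfile
# Sketch: bake the Archil CLI into the image so `archil checkout`
# works inside the container. The disk itself is still bind-mounted
# from the host at /data; the container does not mount anything.
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -s https://archil.com/install | sh \
 && rm -rf /var/lib/apt/lists/*
```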

FUSE inside the container

Some runtimes don’t share the host’s kernel — most commonly microVMs like Firecracker, Cloud Hypervisor, Kata Containers, or gVisor. In principle you could share the host’s FUSE mount into the guest over virtio-fs, but virtio-fs support across microVM runtimes is spotty — some implementations don’t support it at all, others lag on features like extended attributes, locking, or large-file I/O, and performance varies widely. It’s simpler and more portable to have the guest mount Archil itself. Install the Archil client in the guest image and run archil mount during boot or as part of your workload’s entrypoint:
curl -s https://archil.com/install | sh
mkdir -p /mnt/archil
archil mount <disk-name> /mnt/archil --region aws-us-east-1
For a standard Docker container that shares the host kernel, mounting FUSE inside the container instead requires --privileged (or --cap-add SYS_ADMIN) plus access to /dev/fuse:
docker run --privileged --device /dev/fuse \
  -e ARCHIL_MOUNT_TOKEN="<token>" \
  myimage
Running an OCI container with --privileged grants the container full access to the host, and the mount is still subject to the host's seccomp policy — some operations (e.g. executing a binary from the disk) can fail with EACCES. If you're on Docker or Podman, prefer the bind-mount approach above; mounting FUSE inside the guest is only the right answer for microVMs, which don't share the host kernel.
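In practice the guest-side mount is usually folded into the workload's entrypoint so the filesystem is ready before the application starts. A sketch, assuming the Archil CLI is already baked into the guest image and ARCHIL_MOUNT_TOKEN is injected into the guest's environment (the script name and paths are illustrative):

```shell
#!/bin/sh
# entrypoint.sh (sketch): mount the Archil disk, then hand off to the workload.
set -eu

# Fail fast with a clear message if the mount token was not injected.
: "${ARCHIL_MOUNT_TOKEN:?ARCHIL_MOUNT_TOKEN must be set}"

mkdir -p /mnt/archil
archil mount <disk-name> /mnt/archil --region aws-us-east-1

# Replace this shell with the real workload so it receives signals directly.
exec "$@"
```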

Kubernetes

For production Kubernetes deployments, use the Archil CSI driver. It manages Archil mounts as native Kubernetes persistent volumes, so pods get Archil storage without privileged containers or host mounts.
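The driver's own documentation defines the real StorageClass name and parameters; the sketch below only illustrates the standard Kubernetes shape of a pod consuming a CSI-provisioned volume. The storageClassName, claim name, and size here are placeholders, not values from the Archil docs:

```yaml
# Sketch only: archil and archil-data are placeholder names;
# consult the Archil CSI driver documentation for real values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: archil-data
spec:
  accessModes: [ReadWriteMany]
  storageClassName: archil   # placeholder
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myimage
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: archil-data
```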

Command reference

For every flag accepted by archil mount, archil unmount, and the rest of the archil CLI, see the archil CLI reference.