

Before Archil can read from or write to your bucket, the bucket must grant Archil access. This is a one-time setup per bucket, regardless of how you use the disk afterwards (mount, disk exec, or anything else). The Archil console walks you through this when you create a disk in the UI; this page documents what the console does, for when you want to set things up by hand or script it.

Amazon S3

Amazon S3 supports two ways to authorize Archil:
  • Bucket resource policy — recommended. You attach a resource policy to the bucket that grants Archil’s IAM role access. No long-lived credentials change hands.
  • Static IAM credentials — paste an access key and secret key from an IAM user that has read/write access to the bucket. Simpler to set up, but you’re responsible for rotating the credentials.

Bucket resource policy

In the AWS S3 console, select your bucket, open the Permissions tab, and edit the Bucket policy. Add the following statement, substituting YOUR-BUCKET-NAME and YOUR-FILESYSTEM-ID (the disk’s filesystem ID, shown in the Archil console):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowArchilAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789:role/archil-s3.prod.us-east-1"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME",
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ],
            "Condition": {
                "StringLike": {
                    "aws:userid": "*:YOUR-FILESYSTEM-ID"
                }
            }
        }
    ]
}
The aws:userid condition scopes the grant to a single Archil filesystem, so if you create multiple disks against the same bucket each one carries its own grant. The exact Principal ARN varies by Archil region — the Archil console displays the precise policy to paste, including the right principal for your region.
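If you are scripting disk creation, the policy statement above is mechanical enough to generate rather than paste. A minimal sketch — the default principal ARN below is the example from this page, and the real one varies by Archil region, so take the value the console shows you:

```typescript
// Build the S3 bucket policy that grants a single Archil filesystem
// access to one bucket. The default principal ARN is the example shown
// above; the real value varies by Archil region.
function buildArchilBucketPolicy(
  bucketName: string,
  filesystemId: string,
  principalArn: string = "arn:aws:iam::123456789:role/archil-s3.prod.us-east-1",
): string {
  return JSON.stringify(
    {
      Version: "2012-10-17",
      Statement: [
        {
          Sid: "AllowArchilAccess",
          Effect: "Allow",
          Principal: { AWS: principalArn },
          Action: "s3:*",
          Resource: [
            `arn:aws:s3:::${bucketName}`,
            `arn:aws:s3:::${bucketName}/*`,
          ],
          // Scope the grant to this one filesystem via aws:userid.
          Condition: { StringLike: { "aws:userid": `*:${filesystemId}` } },
        },
      ],
    },
    null,
    2,
  );
}
```

Write the output to a file and attach it with the AWS CLI: `aws s3api put-bucket-policy --bucket YOUR-BUCKET-NAME --policy file://policy.json`.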

Static credentials

If you’d rather not edit the bucket policy, create an IAM user with s3:* permissions on the bucket and pass its access key ID and secret access key to Archil when you create the disk:
import * as archil from "disk";

await archil.createDisk({
  name: "my-disk",
  mounts: [{
    type: "s3",
    bucketName: "my-bucket",
    accessKeyId: "AKIA...",
    secretAccessKey: "...",
  }],
});

Google Cloud Storage

Google Cloud Storage authorizes Archil through static AWS-compatible HMAC credentials.
  1. Open the Google Cloud Storage console.
  2. Click Settings, then Interoperability.
  3. Under Service account HMAC, click Create a key for another service account.
  4. Grant the Cloud Storage → Storage Object Admin role to the new service account.
  5. Record the Access key and Secret for the new service account HMAC key.
Pass them when you create the disk:
await archil.createDisk({
  name: "my-disk",
  mounts: [{
    type: "gcs",
    bucketName: "my-bucket",
    accessKeyId: "GOOG1EABCD1234567890",
    secretAccessKey: "1234567890abcdef1234567890ABCD/EF",
  }],
});

Cloudflare R2

Cloudflare R2 authorizes Archil through static AWS-compatible credentials.
  1. Open the Cloudflare console.
  2. Browse to R2 and click Manage R2 API Tokens.
  3. Create a new token with Object Read & Write permissions.
  4. Record the Access Key ID, Secret Access Key, and the default endpoint (looks like https://<account>.r2.cloudflarestorage.com).
Pass them when you create the disk:
await archil.createDisk({
  name: "my-disk",
  mounts: [{
    type: "r2",
    bucketName: "my-bucket",
    accessKeyId: "R2_ACCESS_KEY_ID",
    secretAccessKey: "R2_SECRET_ACCESS_KEY",
    endpoint: "https://<account>.r2.cloudflarestorage.com",
  }],
});

Generic S3-compatible storage

Many providers expose S3-compatible APIs and work with Archil out of the box. To configure one:
  1. Get the S3-compatible API endpoint from your provider.
  2. Create or obtain access credentials (Access Key ID and Secret Access Key) with read/write/list/delete permissions on the bucket.
  3. When creating your disk, select Generic S3 Compatible as the data source type and provide the endpoint plus credentials.
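For scripting, a generic S3-compatible mount plausibly mirrors the R2 example above. The sketch below is an extrapolation, not confirmed API — the exact `type` value and field names for generic providers are assumptions, so check the console or SDK reference for the real shape:

```typescript
// Hypothetical mount configuration for a generic S3-compatible provider,
// extrapolated from the R2 example above. Field names and the "type"
// value are assumptions; the endpoint is a placeholder for your provider's.
const genericMount = {
  type: "s3",                                  // assumption: may differ per provider
  bucketName: "my-bucket",
  accessKeyId: "ACCESS_KEY_ID",
  secretAccessKey: "SECRET_ACCESS_KEY",
  endpoint: "https://s3.example-provider.com", // provider's S3-compatible endpoint
};
```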
Tested with: