Problem: Mount fails with permission denied
The Archil client cannot authenticate against the disk. This usually means the token is wrong or expired, or the IAM role is not listed as an authorized disk user.
# Verify your IAM identity
aws sts get-caller-identity --query 'Arn' --output text | \
  sed 's/:sts:/:iam:/' | \
  sed 's|:assumed-role/|:role/|' | \
  cut -d'/' -f1-2
Compare the output ARN to the Authorized Users list on the disk’s Details page in the Archil console. If you are using token authentication, regenerate the token from the console — Archil does not store tokens in plaintext, so a lost token cannot be recovered.
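To see what the pipeline produces, run it against a sample assumed-role ARN (the account ID and role name here are made up for illustration):

```shell
# Feed a sample STS ARN through the same transformation as above
echo 'arn:aws:sts::123456789012:assumed-role/MyRole/my-session' | \
  sed 's/:sts:/:iam:/' | \
  sed 's|:assumed-role/|:role/|' | \
  cut -d'/' -f1-2
# → arn:aws:iam::123456789012:role/MyRole
```

The session name is dropped by the final cut, leaving the bare role ARN that should appear in the Authorized Users list.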
When passing a token through sudo, use sudo --preserve-env=ARCHIL_MOUNT_TOKEN so the variable is available to the Archil process. Using sudo alone drops the environment.
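A full invocation might look like this (the token and disk name are placeholders, following the mount examples used elsewhere in this guide):

```shell
# Export the token, then pass it through sudo explicitly
export ARCHIL_MOUNT_TOKEN='<token-from-console>'
sudo --preserve-env=ARCHIL_MOUNT_TOKEN archil mount <disk-name> /mnt/data --region aws-us-east-1
```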
Problem: Mount hangs or times out
The client cannot reach the Archil control plane. Common causes are a wrong region, a network connectivity issue, or DNS resolution failure.
# Confirm the region matches the disk's region shown in the console
sudo archil mount <disk-name> /mnt/data --region aws-us-east-1
# Test connectivity to the Archil control server
curl -v https://mount.green.us-east-1.aws.prod.archil.com:8100 2>&1 | head -20
Ensure your security groups or firewall rules allow outbound traffic on ports 8100 and 5000-5002. If you are running inside a VPC without public internet access, verify that DNS resolution is working and that a NAT gateway or VPC endpoint is configured.
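As a quick reachability sketch (the hostname is the control-plane endpoint from the curl example above, and /dev/tcp is a bash feature; an open port shows only network reachability, not Archil health):

```shell
# Probe outbound TCP reachability on the ports Archil uses
probe() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open   ${host}:${port}"
  else
    echo "closed ${host}:${port}"
  fi
}

for port in 8100 5000 5001 5002; do
  probe mount.green.us-east-1.aws.prod.archil.com "$port"
done
```

If every port reports closed, check security groups and NAT/VPC-endpoint routing; if only some do, compare the rules per port.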
Problem: Files not appearing in S3
This is expected behavior. Archil provides eventual synchronization consistency between the disk and the backing data source. Writes sync to S3 within a few minutes, not instantly.
# Verify the disk is healthy and accepting writes
archil status /mnt/data
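If you need to confirm that a specific write has landed in the backing bucket, a small polling helper can wait for the object to appear (bucket and key names are hypothetical; this uses the standard AWS CLI, not an Archil command):

```shell
# Poll S3 until a key exists, or give up after N attempts
wait_for_key() {
  local bucket=$1 key=$2 tries=${3:-30} interval=${4:-10}
  local i
  for i in $(seq "$tries"); do
    if aws s3api head-object --bucket "$bucket" --key "$key" >/dev/null 2>&1; then
      echo "synced: s3://${bucket}/${key}"
      return 0
    fi
    sleep "$interval"
  done
  echo "not yet synced after $((tries * interval))s: s3://${bucket}/${key}"
  return 1
}

# Example: wait_for_key my-bucket results/output.csv
```

With the defaults (30 tries, 10-second interval) this covers the few-minute sync window described above.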
Problem: Another client cannot write to files
Only one Archil client can have write ownership (a “delegation”) over any given file or directory at a time. When a client mounts without --shared, it checks out the entire root, blocking all other clients from writing.
Solutions:
# Option 1: Mount both clients in shared mode and partition by directory
sudo archil mount <disk-name> /mnt/data --region aws-us-east-1 --shared
# On each client, check out only the directory that client will write to
archil checkout /mnt/data/my-dir/
# Option 2: Use subdirectory mounts so each client owns a different subtree
sudo archil mount <disk-name>:/models /mnt/models --region aws-us-east-1 # Client A
sudo archil mount <disk-name>:/data /mnt/data --region aws-us-east-1 # Client B
# Option 3: Force-claim ownership (use with caution)
archil checkout /mnt/data/contested-dir/ --force
--force immediately revokes the other client’s delegation. Any unflushed writes from that client are lost, and its subsequent fsync calls will return EIO.
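Before force-claiming, it can help to flush pending writes on the client that currently holds the delegation. A minimal sketch using ordinary POSIX tooling (this is not an Archil-specific command, and it cannot recover writes once --force has already run):

```shell
# On the client that currently holds the delegation: flush dirty data first,
# falling back to a global sync if the path-scoped form is unavailable
sync -f /mnt/data/contested-dir/ 2>/dev/null || sync
```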
Problem: Permission denied when accessing files
This is expected. The Archil FUSE client mounts as root, so all files start with root ownership.
# Change ownership to your user
sudo chown -R $USER:$USER /mnt/data
# Or run commands with sudo
sudo ls /mnt/data
Problem: Data not appearing after writing to S3 directly
Objects created directly in S3 (or another backing store) are subject to eventual synchronization consistency. It can take a few minutes for new objects to appear on the Archil disk.
To get faster updates, reduce the directory cache expiry:
# Set cache expiry to 0 seconds for near-real-time visibility
archil set-cache-expiry /mnt/data/incoming --readdir-expiry 0
# Or use a short TTL for a performance/freshness balance
archil set-cache-expiry /mnt/data/incoming --readdir-expiry 5
Lower cache expiry values improve freshness but increase the number of requests to the Archil server, which may affect read performance.
Problem: SDK connection fails
The TypeScript SDK cannot connect to the disk. Check the following:
const client = await ArchilClient.connect({
  region: "aws-us-east-1",                  // Must match the disk's region
  diskName: "myorg/mydisk",                 // Format: org/disk-name or disk ID
  authToken: process.env.ARCHIL_DISK_TOKEN, // Required outside AWS
});
- Credentials — Ensure ARCHIL_DISK_TOKEN is set in the environment, or that IAM credentials are available (e.g., running on EC2 with an attached role).
- Disk name — Use the owner-qualified format org/disk-name, not just the disk name alone.
- Region — The region string must match exactly (e.g., aws-us-east-1, not us-east-1).
- Node.js version — The native module requires Node.js 18 or later.
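A quick sanity check for these prerequisites, sketched as a small shell helper (the function only parses a `node --version` string and checks the environment; it assumes nothing about Archil itself):

```shell
# Check the Node.js major version against the SDK's minimum (18)
check_node_major() {
  local raw=$1 ver major
  ver=${raw#v}
  major=${ver%%.*}
  case $major in
    (*[!0-9]*|'') echo "could not parse Node.js version: ${raw:-none}"; return 2 ;;
  esac
  if [ "$major" -ge 18 ]; then
    echo "Node.js ${ver} is supported"
  else
    echo "Node.js 18+ required (found ${raw})"
  fi
}

check_node_major "$(node --version 2>/dev/null)"
test -n "$ARCHIL_DISK_TOKEN" || echo "warning: ARCHIL_DISK_TOKEN is not set"
```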