Point aws s3, boto3, s5cmd, or any other S3 client at the Archil endpoint, present per-disk SigV4 credentials, and read the disk’s contents over HTTPS without installing the Archil client or running an exec. This is useful when:
- A consumer just needs to fetch a few files and you don’t want it to mount the disk.
- You’re integrating with a tool that already speaks S3 (data warehouses, ETL pipelines, training-job inputs, image CDNs).
- You want a quick way to download a file from a serverless function without pulling in the Archil client.
## Endpoint

The S3 API is served from a per-region endpoint. Use the endpoint that matches the region your disk lives in:

| Region | Code | Endpoint |
|---|---|---|
| AWS US East (N. Virginia) | aws-us-east-1 | https://s3.green.us-east-1.aws.prod.archil.com |
| AWS US West (Oregon) | aws-us-west-2 | https://s3.green.us-west-2.aws.prod.archil.com |
| AWS EU West (Ireland) | aws-eu-west-1 | https://s3.green.eu-west-1.aws.prod.archil.com |
| GCP US Central (Iowa) | gcp-us-central1 | https://s3.blue.us-central1.gcp.prod.archil.com |
Requests must use path-style addressing (https://<endpoint>/<bucket>/<key>); virtual-host style is not supported. Most S3 clients require an explicit flag to force path style.
## Authentication

Each disk has its own set of S3 credentials — there are no account-wide S3 keys. To get credentials, add an S3 user to the disk:

- Open the disk’s Details page in the Archil console.
- Under Authorized Users, click Add User and select S3 credentials.
- The console shows the Access Key ID and Secret Access Key exactly once. Save them somewhere secure — the secret cannot be retrieved again.
If an S3 user is removed, its credentials stop working and requests fail with AccessDenied. To rotate, add a new S3 user and remove the old one. The S3 API uses standard AWS Signature Version 4 — any S3 SDK signs correctly out of the box.
S3 credentials are distinct from disk tokens (used by the Archil client when mounting) and from Control Plane API keys (account-level, used to manage disks). They cannot be used in place of either one.
## Bucket name

The bucket name is the disk ID as shown in the console (dsk-…).
## Supported operations

| Operation | Method | Notes |
|---|---|---|
| HeadBucket | HEAD /<bucket> | Verifies the bucket exists and the credentials can read it. |
| GetBucketLocation | GET /<bucket>?location | Returns the disk’s region. |
| ListObjectsV2 | GET /<bucket>?list-type=2 | Supports prefix, delimiter, max-keys, and continuation-token. |
| GetObject | GET /<bucket>/<key> | Streams the object body. Supports HTTP Range. |
| HeadObject | HEAD /<bucket>/<key> | Returns size, content type, and last-modified. |
All write operations (PutObject, DeleteObject, CopyObject, multipart uploads) and ListBuckets return MethodNotAllowed or NotImplemented. To modify a disk, mount it as a file system or use disk.exec.
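The read path can be exercised from any S3 SDK. A sketch combining HeadObject and GetObject, written against the boto3 client interface; the helper name and arguments are illustrative:

```python
def describe_and_fetch(s3, bucket, key):
    """HEAD an object for its metadata, then GET the body."""
    head = s3.head_object(Bucket=bucket, Key=key)
    meta = {
        "size": head["ContentLength"],
        "content_type": head["ContentType"],
        "last_modified": head["LastModified"],
    }
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return meta, body
```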
## Listing

ListObjectsV2 accepts the standard query parameters: prefix, delimiter, max-keys, and continuation-token.

The only supported delimiter is /. Any other delimiter value returns a 400 InvalidArgument. This matches how Archil clients see the underlying directory tree — delimiter=/ collapses each directory into a single CommonPrefixes entry, exactly as a real S3 bucket would.
Pagination uses the standard NextContinuationToken — pass it back as continuation-token to fetch the next page.
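The token loop can be sketched as a small generator; the function name is illustrative, and it accepts any object with a boto3-style list_objects_v2 method:

```python
def iter_keys(s3, bucket, prefix=""):
    """Yield every key under prefix, following NextContinuationToken."""
    kwargs = {"Bucket": bucket, "Prefix": prefix, "MaxKeys": 1000}
    while True:
        page = s3.list_objects_v2(**kwargs)
        for obj in page.get("Contents", []):
            yield obj["Key"]
        if not page.get("IsTruncated"):
            return
        # Pass the token back as continuation-token on the next request.
        kwargs["ContinuationToken"] = page["NextContinuationToken"]
```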
## Range requests

GetObject honours the HTTP Range header end-to-end and returns 206 Partial Content with a Content-Range header.
Accept-Ranges: bytes is set on every response so clients can discover range support without an explicit probe.
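A sketch of a ranged read through a boto3-style client; the helper is illustrative, and start/end follow HTTP Range semantics (both bounds inclusive):

```python
def fetch_range(s3, bucket, key, start, end):
    """Fetch bytes [start, end] (inclusive) of an object with an HTTP Range GET."""
    resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={start}-{end}")
    # On success the server replies 206 with e.g. Content-Range: bytes 2-5/4096.
    return resp["Body"].read()
```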
## Content type
Content-Type is derived from the object key’s extension when one is recognised; otherwise Archil sniffs the first 512 bytes. HEAD and GET always agree on the same content type for a given key.
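The lookup order can be sketched with the standard library’s mimetypes table; the fallback here is a stand-in, since Archil’s actual sniffer inspects the first 512 bytes rather than returning a fixed default:

```python
import mimetypes

def content_type_for(key: str) -> str:
    """Extension-based lookup first; unknown extensions fall through."""
    guessed, _ = mimetypes.guess_type(key)
    if guessed:
        return guessed
    # Stand-in for the 512-byte content sniff described above.
    return "application/octet-stream"
```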
## What gets exposed

The S3 API exposes the disk’s filesystem as it currently exists — every file written through any client (FUSE mount, disk.exec, the Console file browser) appears as an object with the key set to its POSIX path relative to the disk root. Newly written files are visible to the next S3 request after the write commits, with the same read-after-write consistency as any other Archil client.
Symlinks resolve to the file they point at. Directories appear as CommonPrefixes when you list with delimiter=/. There is no separate set of “S3 objects” — the bucket is just a different protocol view of the same data.
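Because keys are just POSIX paths relative to the disk root, the mapping is mechanical. A small illustrative helper (not an Archil API):

```python
from pathlib import PurePosixPath

def key_for(abs_path: str) -> str:
    """Map an absolute path on the mounted disk to its S3 object key."""
    return str(PurePosixPath(abs_path).relative_to("/"))
```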
## Share URLs

For one-off, ad-hoc downloads, Archil can mint pre-signed share URLs that don’t require any credentials. Generate one from the console’s file browser (“Share” in the row menu), or via the Control Plane API.

The expiresIn parameter is in seconds and accepts 3600 (1h), 28800 (8h), 86400 (24h), or 604800 (7d); the default is 24h. The returned URL can be opened in any browser; it streams the file with a sensible Content-Type and Content-Disposition, so files render inline where the browser supports it. Share URLs are signed by Archil’s internal KMS key, so they survive control-plane restarts and work across replicas.
## Limits

- Read-only. All write methods return MethodNotAllowed.
- One bucket per credential. S3 credentials are disk-scoped; cross-disk access requires a separate set of credentials.
- / is the only supported delimiter for ListObjectsV2.
- No ListBuckets. With disk-scoped credentials there is at most one bucket to list, so the operation isn’t meaningful.
- Performance. The S3 API is optimised for convenience over throughput. For sustained, high-throughput streaming use the Archil client — it talks the native protocol and benefits from local caching.
## Related
- Disk Users — how authorization works for the disk’s other access paths
- Mounting on Linux — the POSIX-protocol path, with full read/write support
- Serverless Execution — run code directly on the disk instead of pulling data out