

The Archil CSI Driver provides seamless integration between Kubernetes and Archil’s distributed disk technology. It implements the CSI specification, allowing Kubernetes workloads to mount and use Archil disks with both ReadWriteOnce (RWO) and ReadWriteMany (RWX) access modes.

Requirements:
  • Helm chart distribution: The driver is provided as an OCI Helm chart and requires Helm v3.8.0 or newer.
  • DNS access: Pods must have access to external DNS to mount Archil disks. (Note: some providers, such as Fly.io, don’t enable this by default.)
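Before installing the driver, you can confirm that pods in your cluster have external DNS access by resolving a public hostname from a short-lived test pod. This is a quick sketch; the pod name, image, and hostname are arbitrary choices, not requirements:

```shell
# Launch a throwaway busybox pod and resolve an external hostname.
# If this times out or fails, cluster pods lack external DNS access
# and Archil disk mounts will fail.
kubectl run dns-check \
  --image=busybox \
  --restart=Never \
  --rm -it \
  -- nslookup archil.com
```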

Dynamic Provisioning

Dynamic provisioning automatically creates and deletes Archil disks when PersistentVolumeClaims are created and deleted. This is the recommended approach for most workloads.

1. Install CSI driver with controller

Create a Kubernetes secret with your API key:
kubectl create secret generic archil-controlplane-api-key \
  --from-literal=api-key=YOUR_API_KEY \
  --namespace archil-system
Install the CSI driver with the controller and StorageClass enabled:
helm install archil-csi-driver \
  oci://registry-1.docker.io/archildata/csi-driver-chart \
  --namespace archil-system \
  --create-namespace \
  --set controller.enabled=true \
  --set storageClass.enabled=true \
  --set storageClass.region=aws-us-east-1 \
  --set storageClass.nodeAuthType=awssts \
  --set storageClass.iamRoleArn=arn:aws:iam::123456789012:role/ArchilNodeRole \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/ArchilNodeRole
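After the chart is installed, it is worth confirming that the driver pods are running and that the StorageClass was created. A quick check, assuming the release and namespace names used above:

```shell
# Controller and node-plugin pods should reach Running state
kubectl get pods -n archil-system

# The StorageClass created by the chart (default name: archil)
kubectl get storageclass archil
```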

2. Create a PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-archil-pvc
spec:
  accessModes:
    - ReadWriteOnce  # Or ReadWriteMany for multi-node access
  storageClassName: archil
  resources:
    requests:
      storage: 1Ti  # Archil disks are elastic; this value is informational
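If you save the manifest above to a file (the filename `pvc.yaml` here is just an example), you can apply it and wait for the claim to bind:

```shell
kubectl apply -f pvc.yaml

# With volumeBindingMode: Immediate (the chart default), the claim
# should bind as soon as the controller provisions the disk.
kubectl wait --for=jsonpath='{.status.phase}'=Bound \
  pvc/my-archil-pvc --timeout=120s
```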

3. Use the disk in a pod

apiVersion: v1
kind: Pod
metadata:
  name: archil-test
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'echo "Hello from Archil" > /data/hello.txt && cat /data/hello.txt']
    volumeMounts:
    - name: archil-storage
      mountPath: /data
  volumes:
  - name: archil-storage
    persistentVolumeClaim:
      claimName: my-archil-pvc
  restartPolicy: Never
When the PVC is created, the controller automatically provisions an Archil disk via the controlplane API. When the PVC is deleted (with reclaimPolicy: Delete), the disk is cleaned up automatically.
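You can confirm that a disk was provisioned by inspecting the PersistentVolume bound to the claim; the PV's `spec.csi.volumeHandle` is the Archil disk ID. A hedged check based on the PVC name above:

```shell
# The VOLUME column names the dynamically created PV
kubectl get pvc my-archil-pvc

# Inspect the bound PV; volumeHandle is the Archil disk ID (dsk-...)
kubectl describe pv "$(kubectl get pvc my-archil-pvc -o jsonpath='{.spec.volumeName}')"
```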

Static Provisioning

If you need to use a pre-existing Archil disk, you can create PersistentVolumes manually.

1. Install CSI driver

Install the CSI driver using our official Helm chart from Docker Hub. The chart deploys the archildata/csi-driver container image.
helm install archil-csi-driver \
  oci://registry-1.docker.io/archildata/csi-driver-chart \
  --namespace archil-system \
  --create-namespace

2. Configure disk users

Archil natively supports using AWS IAM users or roles to authorize access to a disk. Add the ARN of the IAM user or role that will be used to connect to the disk.
  • Get your EC2 server’s IAM role ARN using the following command:
archil utils get-iam-role
  • Copy the resulting IAM role ARN.
  • Update your helm release:
helm upgrade archil-csi-driver \
  oci://registry-1.docker.io/archildata/csi-driver-chart \
  --namespace archil-system \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/ArchilCSIRole
  • Restart the CSI driver daemonset:
kubectl rollout restart daemonset/archil-csi-node -n archil-system
This IAM role is used for authentication from the CSI driver to all disks. The Archil CSI Driver does not currently support specifying IAM roles on a per-disk basis.

3. Create PersistentVolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: archil-pv
spec:
  capacity:
    # Can be any size; Archil disks are elastic, so this value is informational
    storage: 1000Gi
  accessModes:
    # Or use ReadWriteOnce for single-node access
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.archil.com
    # Your disk ID from Archil console
    volumeHandle: dsk-XXXXXXXXXXXX
    volumeAttributes:
      # Region where your disk exists
      region: aws-us-east-1
    # Only include this section if using disk token authentication:
    # nodePublishSecretRef:
    #   name: archil-token-dsk-XXXXXXXXXXXX
    #   namespace: archil-system

4. Create PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: archil-pvc
  namespace: default
spec:
  # Must match PV access mode
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1000Gi
  # Must match PV name
  volumeName: archil-pv
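With both manifests applied, the claim should bind directly to archil-pv. A quick check (the filenames here are examples, not requirements):

```shell
kubectl apply -f archil-pv.yaml -f archil-pvc.yaml

# The claim should report STATUS Bound with VOLUME archil-pv
kubectl get pvc archil-pvc -n default
```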

5. Create a pod

This example creates a simple demo pod to verify connectivity to your Archil disk. You can deploy your own applications using any pod configuration; just mount the PVC as shown below.
apiVersion: v1
kind: Pod
metadata:
  name: archil-test
spec:
  containers:
  - name: test
    image: busybox
    command: ['sh', '-c', 'ls -la /data && echo "---" && echo "Archil disk contents listed above"']
    volumeMounts:
    - name: archil-storage
      mountPath: /data
  volumes:
  - name: archil-storage
    persistentVolumeClaim:
      # Must match PVC name
      claimName: archil-pvc
  restartPolicy: Never
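Once the one-shot pod has finished, its logs should list the contents of the disk. A sketch, assuming the pod name used above:

```shell
# Wait for the pod to complete, then read its output
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded \
  pod/archil-test --timeout=120s
kubectl logs archil-test
```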

Configuration reference

Storage access modes

The accessModes value you set on your PersistentVolume and PersistentVolumeClaim determines whether the disk is mounted with support for other clients. With ReadWriteOnce, the disk is mounted in single-ownership mode, preventing other clients from accessing it. With ReadWriteMany, the disk is mounted with --shared, allowing multiple clients to access it simultaneously. For more details, see the Shared Disks page.

Helm Chart values

The following table describes all available configuration options in values.yaml:
| Parameter | Description | Default | Required |
| --- | --- | --- | --- |
| image.tag | Container image tag | 0.1.0 | No |
| image.pullPolicy | Image pull policy | IfNotPresent | No |
| serviceAccount.annotations | Service account annotations for AWS IRSA | {} | No |
| extraEnv | Additional environment variables for the CSI driver container | [] | No |
| resources | Resource limits and requests for the CSI driver container | {} | No |
| nodeSelector | Node selector for CSI driver pod placement | {} | No |
| tolerations | Pod tolerations | [] | No |
| affinity | Pod affinity rules | {} | No |
| controller.enabled | Enable the controller plugin for dynamic provisioning | false | No |
| controller.apiKeySecretName | Name of the K8s Secret containing the API key | archil-controlplane-api-key | No |
| controller.controlplaneURL | Override controlplane URL (empty = auto-detect from region) | "" | No |
| controller.serviceAccount.annotations | Annotations for the controller service account | {} | No |
| storageClass.enabled | Create a StorageClass for dynamic provisioning | false | No |
| storageClass.name | StorageClass name | archil | No |
| storageClass.isDefault | Set as default StorageClass | false | No |
| storageClass.reclaimPolicy | PV reclaim policy (Delete or Retain) | Delete | No |
| storageClass.volumeBindingMode | When to bind volumes (Immediate or WaitForFirstConsumer) | Immediate | No |
| storageClass.region | Archil region for provisioned disks | "" | Yes (if enabled) |
| storageClass.nodeAuthType | Node authentication type (awssts or token) | awssts | No |
| storageClass.iamRoleArn | IAM role ARN for node authentication | "" | Yes (if awssts) |
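Rather than passing many --set flags, the same options can be collected in a values file. This sketch writes one with the dynamic-provisioning settings used earlier in this guide; the region and IAM role ARN are placeholders to substitute with your own:

```shell
# Write a values file mirroring the --set flags used earlier.
# Region and IAM role ARN are placeholders; substitute your own.
cat > archil-values.yaml <<'EOF'
controller:
  enabled: true
storageClass:
  enabled: true
  region: aws-us-east-1
  nodeAuthType: awssts
  iamRoleArn: arn:aws:iam::123456789012:role/ArchilNodeRole
EOF

# Then install (or upgrade) with the file instead of repeated flags:
#   helm upgrade --install archil-csi-driver \
#     oci://registry-1.docker.io/archildata/csi-driver-chart \
#     --namespace archil-system --create-namespace \
#     -f archil-values.yaml
grep -q 'nodeAuthType: awssts' archil-values.yaml && echo "values file ready"
```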

Changelog

0.2.0
2026-03-02
  • Dynamic provisioning via controller plugin
  • Automatic disk creation/deletion when PVCs are created/deleted
  • Support for both IAM (AWS STS) and disk-token-based node authentication
  • New StorageClass with configurable region and auth type
0.1.0
2025-07-28
  • Initial preview release