The Archil CSI Driver is currently in preview.
It provides seamless integration between Kubernetes and Archil's distributed disk technology, implementing the Container Storage Interface (CSI) specification so that Kubernetes workloads can mount and use Archil volumes with both ReadWriteOnce (RWO) and ReadWriteMany (RWX) access modes.
Requirements:
- Helm chart distribution: The driver is provided as an OCI Helm chart and requires Helm v3.8.0 or newer.
- DNS access: Pods must have access to external DNS to mount Archil disks. (Note: some providers, such as Fly.io, don’t enable this by default.)
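One quick way to confirm that pods can reach external DNS is to run a throwaway pod and resolve an external hostname (the image and hostname here are just examples):

```shell
# Launch a temporary busybox pod and attempt an external DNS lookup.
# If this times out, pods in your cluster cannot resolve external names
# and Archil disk mounts will fail.
kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup archil.com
```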
Quickstart
The Archil CSI Driver does not support dynamic provisioning of disks. You must create the disk in the Archil console first.
1. Install CSI driver
Install the CSI driver using our official Helm chart from Docker Hub. The chart deploys the archildata/csi-driver container image.
```shell
helm install archil-csi-driver \
  oci://registry-1.docker.io/archildata/csi-driver-chart \
  --namespace archil-system \
  --create-namespace
```
2. Configure authentication
On AWS, Archil natively supports using IAM users or roles to authorize access to the disk. Add the ARN of the IAM user or role that will be used to connect to the disk:
- Get your EC2 server's IAM role ARN using the following command:
```shell
aws sts get-caller-identity --query 'Arn' --output text |\
  sed 's/\:sts\:/\:iam\:/' |\
  sed 's/\:assumed-role\//\:role\//' |\
  cut -d'/' -f1-2
```
- Copy the resulting IAM role ARN.
- Update your Helm release (in the same namespace used at install time):
```shell
helm upgrade archil-csi-driver \
  oci://registry-1.docker.io/archildata/csi-driver-chart \
  --namespace archil-system \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/ArchilCSIRole
```
- Restart the CSI driver DaemonSet:
```shell
kubectl rollout restart daemonset/archil-csi-node -n archil-system
```
This IAM role is used for authentication from the CSI driver to all disks. The Archil CSI Driver does not currently support specifying IAM roles on a per-disk basis.
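If you are using EKS with IAM Roles for Service Accounts (IRSA), the role's trust policy must allow the driver's service account to assume it. The sketch below shows the standard IRSA trust-policy shape; the account ID, OIDC provider ID, region, and service account name are placeholders — in particular, the service account name depends on the chart's defaults, so verify it with `kubectl get serviceaccounts -n archil-system`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:archil-system:archil-csi-driver"
        }
      }
    }
  ]
}
```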
Outside of AWS, Archil can issue static token credentials that authorize access to the disk:
- Navigate to the disk’s Details page in the Archil console
- Click Generate Token to create a new authorization token
- Copy the generated token
- Create a Kubernetes secret with your Archil API token:
```shell
# Replace with your actual disk ID and token
kubectl create secret generic archil-token-dsk-XXXXXXXXXXXX \
  --from-literal=token=YOUR_API_TOKEN \
  --namespace=archil-system
```
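You can confirm the secret was created with the expected key:

```shell
# List the secret's data keys; the output should include "token"
kubectl get secret archil-token-dsk-XXXXXXXXXXXX \
  -n archil-system -o jsonpath='{.data}'
```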
3. Create PersistentVolume
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: archil-pv
spec:
  capacity:
    # Can be any size; Archil disks are infinite
    storage: 1000Gi
  accessModes:
    # Or use ReadWriteOnce for single-node access
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.archil.com
    # Your disk ID from the Archil console
    volumeHandle: dsk-XXXXXXXXXXXX
    volumeAttributes:
      # Region where your disk exists
      region: aws-us-east-1
    # Only include this section if using token authentication:
    # nodePublishSecretRef:
    #   name: archil-token-dsk-XXXXXXXXXXXX
    #   namespace: archil-system
```
4. Create PersistentVolumeClaim
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: archil-pvc
  namespace: default
spec:
  # Must match the PV access mode
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1000Gi
  # Must match the PV name
  volumeName: archil-pv
```
5. Create a pod
This example creates a simple demo pod to verify connectivity to your Archil disk. You can deploy your own applications using any pod configuration; just mount the PVC as shown below.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: archil-test
spec:
  containers:
    - name: test
      image: busybox
      command: ['sh', '-c', 'ls -la /data && echo "---" && echo "Archil disk contents listed above"']
      volumeMounts:
        - name: archil-storage
          mountPath: /data
  volumes:
    - name: archil-storage
      persistentVolumeClaim:
        # Must match the PVC name
        claimName: archil-pvc
  restartPolicy: Never
```
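After applying the manifests, you can verify that the claim bound and the test pod ran to completion:

```shell
# The PVC should report STATUS "Bound" against archil-pv
kubectl get pvc archil-pvc
# The pod should reach "Completed"; then inspect its output
kubectl get pod archil-test
kubectl logs archil-test
```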
Configuration reference
Storage access modes
The accessModes value you set on your PersistentVolume and PersistentVolumeClaim determines whether other clients
can access the disk while it is mounted. With ReadWriteOnce, the disk is mounted in single-ownership mode,
preventing other clients from accessing it. With ReadWriteMany, the disk is mounted with --shared, allowing
multiple clients to access the disk simultaneously. For more details, see the Shared Disks page.
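For example, to mount a disk exclusively from a single node, set ReadWriteOnce in both objects (the field must match between the PV and the PVC):

```yaml
# In the PersistentVolume spec:
accessModes:
  - ReadWriteOnce   # single-ownership mode; other clients are blocked
# In the PersistentVolumeClaim spec (must match the PV):
accessModes:
  - ReadWriteOnce
```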
Helm Chart values
The following table describes all available configuration options in values.yaml:
| Parameter | Description | Default | Required |
|---|---|---|---|
| image.tag | Container image tag | 0.1.0 | No |
| image.pullPolicy | Image pull policy | IfNotPresent | No |
| serviceAccount.annotations | Service account annotations for AWS IRSA | {} | No |
| extraEnv | Additional environment variables for the CSI driver container | [] | No |
| resources | Resource limits and requests for the CSI driver container | {} | No |
| nodeSelector | Node selector for CSI driver pod placement | {} | No |
| tolerations | Pod tolerations | [] | No |
| affinity | Pod affinity rules | {} | No |
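As an example, the parameters above can be overridden with a values file (the specific values shown here are illustrative, not recommendations):

```yaml
# custom-values.yaml -- illustrative overrides for the chart parameters above
image:
  tag: "0.1.0"
  pullPolicy: IfNotPresent
resources:
  requests:
    cpu: 100m
    memory: 128Mi
nodeSelector:
  kubernetes.io/os: linux
tolerations:
  - operator: Exists
```

Apply it with `helm upgrade archil-csi-driver oci://registry-1.docker.io/archildata/csi-driver-chart -n archil-system -f custom-values.yaml`.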
Changelog