How do I use persistent storage in Amazon EKS?
Last updated: 2021-02-04
I want to use persistent storage in Amazon Elastic Kubernetes Service (Amazon EKS).
You can set up persistent storage in Amazon EKS using either of the following options:
- Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver
- Amazon Elastic File System (Amazon EFS) Container Storage Interface (CSI) driver
To use one of these options, complete the steps in either of the following sections:
- Option A: Deploy and test the Amazon EBS CSI driver
- Option B: Deploy and test the Amazon EFS CSI driver
Note: The commands in this article require kubectl version 1.14 or greater. To check which version of kubectl you have, run the kubectl version --client --short command.
Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, be sure that you’re using the most recent version of the AWS CLI.
Before you complete the steps in either section, you must:
- Install the AWS CLI.
- Set AWS Identity and Access Management (IAM) permissions for creating and attaching a policy to the Amazon EKS worker node role NodeInstanceRole.
- Create your Amazon EKS cluster and join your worker nodes to the cluster.
Note: To verify that your worker nodes are attached to your cluster, run the kubectl get nodes command.
Option A: Deploy and test the Amazon EBS CSI driver
Deploy the Amazon EBS CSI driver:
1. To download an example IAM policy with permissions that enable your worker nodes to create and modify Amazon EBS volumes, run the following command:
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.4.0/docs/example-iam-policy.json
2. To create an IAM policy called Amazon_EBS_CSI_Driver, run the following command:
aws iam create-policy \
  --policy-name Amazon_EBS_CSI_Driver \
  --policy-document file://example-iam-policy.json
3. To attach your new IAM policy to NodeInstanceRole, run the following command:
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::111122223333:policy/Amazon_EBS_CSI_Driver \
  --role-name eksctl-alb-nodegroup-ng-xxxxxx-NodeInstanceRole-xxxxxxxxxx
Note: Replace the policy Amazon Resource Name (ARN) with the ARN of the policy that you created in the preceding step 2, and replace the role name with the name of your worker node role (NodeInstanceRole).
4. To deploy the Amazon EBS CSI driver, run the following command:
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
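With the driver deployed, dynamic provisioning works through a StorageClass that names the ebs.csi.aws.com provisioner. The following is a minimal sketch; the class name ebs-sc and the WaitForFirstConsumer binding mode mirror the upstream example, but verify them against your driver version:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc                             # illustrative name
provisioner: ebs.csi.aws.com               # the Amazon EBS CSI driver
volumeBindingMode: WaitForFirstConsumer    # provision the volume only when a pod is scheduled
```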
Test the Amazon EBS CSI driver:
You can test your Amazon EBS CSI driver with an application that uses dynamic provisioning. The Amazon EBS volume is provisioned on demand by a pod that needs it.
1. To clone the aws-ebs-csi-driver repository from AWS GitHub, run the following command:
git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git
2. To change your working directory to the folder that contains the Amazon EBS CSI driver test files, run the following command:
cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/
3. To create the Kubernetes resources required for testing, run the following command:
kubectl apply -f specs/
Note: The kubectl command creates a StorageClass, a PersistentVolumeClaim (PVC), and a pod. The pod references the PVC. An Amazon EBS volume is provisioned only when the pod is created.
4. To view the Persistent Volume created as a result of the pod that references the PVC, run the following command:
kubectl get persistentvolumes
5. To view information about the Persistent Volume, run the following command:
kubectl describe persistentvolumes pv_name
Note: Replace pv_name with the name of the Persistent Volume returned from the preceding step 4. The value of the Source.VolumeHandle property in the output is the ID of the physical Amazon EBS volume created in your account.
6. To verify that the pod is successfully writing data to the volume, run the following command:
kubectl exec -it app -- cat /data/out.txt
Note: The command output displays the current date and time that the pod continuously writes to the /data/out.txt file.
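The manifests in the specs/ folder are similar to the following sketch: a PVC bound to the CSI StorageClass, and a pod that mounts the claim and appends timestamps to /data/out.txt. Names such as ebs-sc and ebs-claim follow the upstream example and might differ in your driver version:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce          # an EBS volume attaches to a single node
  storageClassName: ebs-sc   # the CSI StorageClass from the deploy steps
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim   # triggers provisioning of the EBS volume
```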
Option B: Deploy and test the Amazon EFS CSI driver
Deploy the Amazon EFS CSI driver:
The Amazon EFS CSI driver allows multiple pods to write to a volume at the same time with the ReadWriteMany mode.
1. To deploy the Amazon EFS CSI driver, run the following command:
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
2. To get the VPC ID for your Amazon EKS cluster, run the following command:
aws eks describe-cluster --name cluster_name --query "cluster.resourcesVpcConfig.vpcId" --output text
3. To get the CIDR block of your cluster's VPC, run the following command:
aws ec2 describe-vpcs --vpc-ids vpc-id --query "Vpcs[].CidrBlock" --output text
Note: Replace vpc-id with the VPC ID from the preceding step 2.
4. To create a security group that allows inbound network file system (NFS) traffic for your Amazon EFS mount points, run the following command:
aws ec2 create-security-group --description efs-test-sg --group-name efs-sg --vpc-id VPC_ID
Note: Replace VPC_ID with the output from the preceding step 2. Save the GroupId for later use.
5. To add an NFS inbound rule to enable resources in your VPC to communicate with your Amazon EFS file system, run the following command:
aws ec2 authorize-security-group-ingress --group-id sg-xxx --protocol tcp --port 2049 --cidr VPC_CIDR
Note: Replace sg-xxx with the GroupId that you saved in the preceding step 4, and replace VPC_CIDR with the output from the preceding step 3.
6. To create an Amazon EFS file system for your Amazon EKS cluster, run the following command:
aws efs create-file-system --creation-token eks-efs
Note: Save the FileSystemId value from the output for later use.
7. To create a mount target for the EFS, run the following command in all the Availability Zones where your worker nodes are running:
aws efs create-mount-target --file-system-id FileSystemId --subnet-id SubnetID --security-groups GroupID
Important: Replace FileSystemId with the output of step 6 (where you created the Amazon EFS file system), GroupID with the output of step 4 (where you created the security group), and SubnetID with the ID of a subnet used by your worker nodes. To create mount targets in multiple subnets, run the command separately for each subnet ID. It's a best practice to create a mount target in each Availability Zone where your worker nodes are running.
Note: When a mount target exists in an Availability Zone, all the Amazon Elastic Compute Cloud (Amazon EC2) instances in that Availability Zone can use the file system.
The Amazon EFS file system and its mount targets are now running and ready to be used by pods in the cluster.
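At the driver version covered here, the EFS CSI driver uses static provisioning: the file system is exposed to pods through a PersistentVolume whose volumeHandle is the FileSystemId. The following is a sketch, with fs-xxxxxxxx as a placeholder for your own file system ID and efs-sc/efs-pv as illustrative names modeled on the upstream example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi              # required by the API; EFS itself is elastic
  accessModes:
    - ReadWriteMany           # multiple pods can write concurrently
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx   # placeholder: your FileSystemId
```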
Test the Amazon EFS CSI driver:
You can test the Amazon EFS CSI driver by deploying two pods that write to the same file.
1. To clone the aws-efs-csi-driver repository from AWS GitHub, run the following command:
git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git
2. To change your working directory to the folder that contains the Amazon EFS CSI driver test files, run the following command:
cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/
3. To retrieve the ID of the Amazon EFS file system that you created earlier, run the following command:
aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text
Note: If the preceding command returns more than one result, then use the FileSystemId that you saved earlier.
4. Edit the specs/pv.yaml file, and replace the spec.csi.volumeHandle value with the Amazon EFS FileSystemId from the preceding steps.
5. To create the Kubernetes resources required for testing, run the following command:
kubectl apply -f specs/
Note: The kubectl command in the preceding step 5 creates an Amazon EFS storage class, PVC, Persistent Volume, and two pods (app1 and app2).
6. To test if the two pods are writing data to the file, wait for about one minute, and then run the following commands:
kubectl exec -it app1 -- tail /data/out1.txt
kubectl exec -it app2 -- tail /data/out1.txt
The output shows the current date and time written to /data/out1.txt, visible from both pods.
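As a sketch of why both pods see the same data: each test pod mounts the same PVC, which is backed by the EFS PersistentVolume, so a write from one pod is immediately visible to the other. A minimal illustration of one such pod follows; the image, claim name, and paths are assumptions modeled on the upstream example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
  - name: app1
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do date -u >> /data/out1.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim   # both pods reference the same PVC, so they share the file system
```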