How do I use persistent storage in Amazon EKS?

Last updated: 2020-02-03

I want to use persistent storage in Amazon Elastic Kubernetes Service (Amazon EKS).

Short Description

You can set up persistent storage in Amazon EKS with either the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver or the Amazon Elastic File System (Amazon EFS) CSI driver. To use one of these options, complete the steps in either of the following sections:

  • Option A: Deploy and test the Amazon EBS CSI Driver
  • Option B: Deploy and test the Amazon EFS CSI Driver

Note: The commands in this article require kubectl version 1.14 or greater. To see what version of kubectl you have, run the kubectl version --client --short command.

Before you complete the steps in either section, you must:

  1. Install the AWS Command Line Interface (AWS CLI).
  2. Set AWS Identity and Access Management (IAM) permissions for creating and attaching a policy to the Amazon EKS worker node role NodeInstanceRole.
  3. Create your Amazon EKS cluster and join your worker nodes to the cluster.
    Note: To verify that your worker nodes are attached to your cluster, run the kubectl get nodes command.
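
If you don't already have a cluster, one way to create one with worker nodes is eksctl. The following is a minimal sketch; the cluster name, Region, and node count are illustrative, and any other supported method of creating the cluster also works.

eksctl create cluster \
--name my-eks-cluster \
--region us-east-1 \
--nodes 2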

Resolution

Option A: Deploy the Amazon EBS CSI Driver

1.    To download an example IAM policy with permissions that enable your worker nodes to create and modify Amazon EBS volumes, run the following command:

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.4.0/docs/example-iam-policy.json

2.    To create an IAM policy called Amazon_EBS_CSI_Driver, run the following command:

aws iam create-policy --policy-name Amazon_EBS_CSI_Driver \
--policy-document file://example-iam-policy.json

3.    To attach your new IAM policy to NodeInstanceRole, run the following command:

aws iam attach-role-policy \
--policy-arn arn:aws:iam::111122223333:policy/Amazon_EBS_CSI_Driver \
--role-name eksctl-alb-nodegroup-ng-xxxxxx-NodeInstanceRole-xxxxxxxxxx

Note: Replace the policy Amazon Resource Name (ARN) with the ARN of the policy that you created in the preceding step 2. Replace the role name with the name of your Amazon EKS worker node IAM role (NodeInstanceRole).
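
One way to look up the exact role name, if your node group was created with eksctl, is to filter your IAM roles by name. This is a sketch; the filter string assumes that the role name contains NodeInstanceRole.

aws iam list-roles --query "Roles[?contains(RoleName, 'NodeInstanceRole')].RoleName" --output text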

4.    To deploy the Amazon EBS CSI Driver, run the following command:

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"

Now, complete the steps in the Option A: Test the Amazon EBS CSI Driver section.

Option A: Test the Amazon EBS CSI Driver

Test your Amazon EBS CSI Driver with an application that uses dynamic provisioning. The Amazon EBS volume is provisioned on demand when a pod that needs it is created.

1.    To clone the aws-ebs-csi-driver repository from AWS GitHub, run the following command:

git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git

2.    To change your working directory to the folder that contains the Amazon EBS Driver test files, run the following command:

cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/

3.    To create the Kubernetes resources required for testing, run the following command:

kubectl apply -f specs/

Note: The kubectl command creates a StorageClass, PersistentVolumeClaim (PVC), and pod. The pod references the PVC. An Amazon EBS volume is provisioned only when the pod is created.
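
For reference, the StorageClass and PVC in those specs typically look similar to the following simplified sketch. The names (ebs-sc, ebs-claim) and the requested size are illustrative and can differ from the files in the repository.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi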

4.    To view the Persistent Volume created as a result of the pod that references the PVC, run the following command:

kubectl get persistentvolumes

5.    To view information about the Persistent Volume, run the following command:

kubectl describe persistentvolumes pv_name

Note: Replace pv_name with the name of the Persistent Volume returned from the preceding step 4. The value of the Source.VolumeHandle property in the output is the ID of the physical Amazon EBS volume created in your account.
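
To cross-check that the volume exists in your account, you can describe it with the AWS CLI. Replace the volume ID with the VolumeHandle value from the preceding output.

aws ec2 describe-volumes --volume-ids vol-xxxxxxxxxxxxxxxxx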

6.    To verify that the pod is successfully writing data to the volume, run the following command:

kubectl exec -it app -- cat /data/out.txt

Note: The command output displays the current date and time stored in the /data/out.txt file.

Option B: Deploy the Amazon EFS CSI Driver

The Amazon EFS CSI Driver allows multiple pods to write to a volume at the same time with the ReadWriteMany mode.

1.    To deploy the Amazon EFS CSI Driver, run the following command:

kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"

2.    To get the VPC ID for your Amazon EKS cluster, run the following command:

aws eks describe-cluster --name cluster_name --query "cluster.resourcesVpcConfig.vpcId" --output text

3.    To get the CIDR range for your cluster's VPC, run the following command:

aws ec2 describe-vpcs --vpc-ids vpc-id --query "Vpcs[].CidrBlock" --output text

Note: Replace vpc-id with the VPC ID from the output of the preceding step 2.

4.    To create a security group that allows inbound network file system (NFS) traffic for your Amazon EFS mount points, run the following command:

aws ec2 create-security-group --description efs-test-sg --group-name efs-sg --vpc-id VPC_ID

Note: Replace VPC_ID with the output from the preceding step 2. Save the GroupId for later use.
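
If you prefer to keep these values in shell variables rather than copying them by hand, you can run steps 2 through 4 in the following form instead. The variable names are illustrative, and cluster_name is the name of your cluster.

VPC_ID=$(aws eks describe-cluster --name cluster_name --query "cluster.resourcesVpcConfig.vpcId" --output text)
VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids $VPC_ID --query "Vpcs[].CidrBlock" --output text)
SG_ID=$(aws ec2 create-security-group --description efs-test-sg --group-name efs-sg --vpc-id $VPC_ID --query "GroupId" --output text)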

5.    To add an NFS inbound rule to enable resources in your VPC to communicate with your EFS, run the following command:

aws ec2 authorize-security-group-ingress --group-id sg-xxx  --protocol tcp --port 2049 --cidr VPC_CIDR

Note: Replace sg-xxx with the GroupId that you saved in the preceding step 4, and replace VPC_CIDR with the output from the preceding step 3.

6.    To create an Amazon EFS file system for your Amazon EKS cluster, run the following command:

aws efs create-file-system --creation-token eks-efs

Note: Save the FileSystemId from the output for later use.
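
Mount targets can be created only after the file system reaches the available state. One way to check is the following command; replace FileSystemId with the value from the preceding step 6.

aws efs describe-file-systems --file-system-id FileSystemId --query "FileSystems[0].LifeCycleState" --output text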

7.    To create a mount target for the EFS, run the following command in all the Availability Zones where your worker nodes are running:

aws efs create-mount-target --file-system-id FileSystemId --subnet-id SubnetID --security-groups GroupID

Note: Replace FileSystemId with the output from the preceding step 6, SubnetID with the ID of a subnet where your worker nodes run, and GroupID with the GroupId that you saved in the preceding step 4.

Note: You can create mount targets in all the Availability Zones where worker nodes are launched. Then, all the Amazon Elastic Compute Cloud (Amazon EC2) instances in an Availability Zone where a mount target has been created can use the file system.
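
To find the subnet IDs and Availability Zones in your cluster's VPC, one option is the following query; the VPC ID is a placeholder.

aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-xxx" --query "Subnets[].[SubnetId,AvailabilityZone]" --output text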

The EFS and its mount targets are now running and ready to be used by pods in the cluster.

Now, complete the steps in the Option B: Test the Amazon EFS CSI Driver section.

Option B: Test the Amazon EFS CSI Driver

You can test the Amazon EFS CSI Driver by deploying two pods that write to the same file.

1.    To clone the aws-efs-csi-driver repository from AWS GitHub, run the following command:

git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git

2.    To change your working directory to the folder that contains the Amazon EFS CSI Driver test files, run the following command:

cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/

3.    To retrieve your Amazon EFS file system ID that was created earlier, run the following command:

aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text

Note: If the preceding command returns more than one result, use the FileSystemId that you saved in the preceding step 6.

4.    Edit the specs/pv.yaml file, and replace the spec.csi.volumeHandle value with the Amazon EFS FileSystemId from the preceding steps.
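
After the edit, the relevant portion of specs/pv.yaml typically looks similar to the following sketch. Field values such as the Persistent Volume name, capacity, and storage class name are illustrative and can differ from the file in the repository; only the volumeHandle must match your file system ID (shown here as a placeholder).

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx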

5.    To create the Kubernetes resources required for testing, run the following command:

kubectl apply -f specs/

Note: The kubectl command in the preceding step 5 creates an Amazon EFS storage class, PVC, Persistent Volume, and two pods (app1 and app2).

6.    To test if the two pods are writing data to the file, wait for about one minute, and then run the following commands:

kubectl exec -it app1 -- tail /data/out1.txt 
kubectl exec -it app2 -- tail /data/out1.txt

The output shows the current date written to /data/out1.txt by the two pods.

