
Introducing Amazon EFS CSI dynamic provisioning

As companies move more of their workloads to Kubernetes, they are increasingly deploying applications that need a way to share or persist data or state outside the container. Kubernetes addresses this need by exposing block and file storage systems to containerized workloads via the Container Storage Interface (CSI). Amazon Elastic Kubernetes Service (Amazon EKS) currently supports three storage options via CSI: Amazon Elastic File System (Amazon EFS), Amazon Elastic Block Store (Amazon EBS), and Amazon FSx for Lustre. This article focuses on Amazon EFS, a popular storage option among our customers that provides a shared file system accessible from different Availability Zones within the same AWS Region and designed to be highly durable. This is particularly helpful if your Amazon EKS cluster spans multiple Availability Zones, or if your containerized applications need persistent storage for configuration files, static assets, or anything else that is shared across more than one pod.

Kubernetes introduces a logical separation between storage-related duties within the same cluster. A PersistentVolume (PV) is a unit of storage in the cluster that has either been provisioned by an administrator or dynamically provisioned using a CSI driver. A PersistentVolumeClaim (PVC) is a request for storage, normally created by a user or by an application; much like a pod consumes node resources, a PVC consumes PV resources and generally follows the lifecycle of the application.

Until now, Kubernetes administrators using the Amazon EFS CSI driver had to statically provision PVs (and the underlying Amazon EFS resources) before users could create PVCs. We are happy to announce that dynamic provisioning has been included in the latest version of the Amazon EFS CSI driver. Dynamic provisioning uses EFS Access Points, application-specific entry points into an EFS file system, to allow up to 120 PVs to be automatically provisioned within a single file system.

The rest of this blog shows you how to get started with dynamic provisioning using the newly released version 1.2 of the Amazon EFS CSI driver.

Setup

The driver requires IAM permissions to create and manage EFS access points. By using the IAM roles for service accounts (IRSA) feature, you no longer need to grant extended permissions to the node IAM role just so that pods on that node can call AWS APIs.

In this article, we assume you already have an existing EFS file system and an EKS cluster with an associated IAM OIDC provider. Detailed steps on how to configure these can be found in the official documentation. We also assume that you have eksctl and kubectl installed and configured to access your cluster. Note that you need to be running at least Kubernetes version 1.17 to use EFS CSI dynamic provisioning.
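If your cluster does not have an IAM OIDC provider associated yet, one way to create it is with eksctl; the <example values> below are placeholders to replace with your own:

## Associate an IAM OIDC provider with the cluster (skip if one already exists)
eksctl utils associate-iam-oidc-provider \
  --cluster <cluster> \
  --region <AWS Region> \
  --approve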

1. First, we need to create the IAM role that allows the driver to manage EFS access points. Make sure to replace the <example values> (including the <>) with your own values:

## Download the IAM policy document
curl -sS https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/v1.2.0/docs/iam-policy-example.json -o iam-policy.json

## Create an IAM policy
aws iam create-policy \
  --policy-name EFSCSIControllerIAMPolicy \
  --policy-document file://iam-policy.json

## Create a Kubernetes service account with the policy attached via an IAM role
eksctl create iamserviceaccount \
  --cluster=<cluster> \
  --region <AWS Region> \
  --namespace=kube-system \
  --name=efs-csi-controller-sa \
  --override-existing-serviceaccounts \
  --attach-policy-arn=arn:aws:iam::<AWS account ID>:policy/EFSCSIControllerIAMPolicy \
  --approve

This step can also be achieved using the AWS Management Console or the AWS CLI. Please refer to the official documentation for a step-by-step guide.

2a. Install the Amazon EFS CSI driver using the Helm chart. You can find the corresponding Amazon ECR repository URL prefix for your AWS region in the EKS documentation.

helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver
helm repo update
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --set image.repository=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver \
  --set controller.serviceAccount.create=false \
  --set controller.serviceAccount.name=efs-csi-controller-sa

2b. Alternatively, you can install the EFS CSI driver using the following manifest:

kubectl kustomize "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.2" > driver.yaml
vim driver.yaml # remove the efs-csi-controller-sa service account from the manifest, since it was already created in step 1
kubectl apply -f driver.yaml

TIP: Make sure to allow inbound traffic on port 2049 (NFS) in the security group associated with your EFS file system from the CIDR range assigned to your EKS cluster.
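For example, one way to add such a rule is with the AWS CLI; the security group ID and CIDR below are placeholders you would replace with your own values:

## Allow inbound NFS traffic (port 2049) from the cluster's CIDR range
aws ec2 authorize-security-group-ingress \
  --group-id <EFS security group ID> \
  --protocol tcp \
  --port 2049 \
  --cidr <EKS cluster CIDR>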

Test

1. To test dynamic provisioning, let’s create a storage class for EFS. Make sure to add your EFS file system ID to the storage class definition.

Please check the official GitHub page for the full list of parameters and configuration options.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: <EFS file system ID>
  directoryPerms: "700"
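Assuming you save the manifest above to a file such as storageclass.yaml (the file name is just an example), you can create and verify the storage class with:

kubectl apply -f storageclass.yaml
kubectl get storageclass efs-sc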

2. To test the automatic provisioning, we can deploy a pod that will make use of the PersistentVolumeClaim. Note that the actual storage capacity value in the persistent volume claim is not used, given the elastic capabilities of EFS. However, since the storage capacity is a required field in Kubernetes, you must specify a value. You can use any valid value for the capacity.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-example
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /example/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /example
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim
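Assuming the PersistentVolumeClaim and pod above are saved in a file such as pod.yaml (again, the name is just an example), deploy them and watch the pod come up:

kubectl apply -f pod.yaml
kubectl get pods --watch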

3. After a few seconds, we can observe the controller picking up the change (output edited for readability):

➜ kubectl logs efs-csi-controller-55ff98d4-wt54t -n kube-system -c csi-provisioner --tail 10
[cut]
1 controller.go:737] successfully created PV pvc-4df6c960-a9e6-4626-bcff-62d6c4d7fe13 for PVC efs-claim and csi volume name fs-1eff1845::fsap-04c355c91d3af1544
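
Behind the scenes, the driver has created an EFS access point backing this volume (the fsap- identifier in the log above). If you want to double-check, you can list the access points on your file system with the AWS CLI, replacing the placeholder with your file system ID:

aws efs describe-access-points --file-system-id <EFS file system ID>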

4. At this point, the PersistentVolume should be created automatically and “bound” to the PersistentVolumeClaim:

➜ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-0922e0e2-2ea5-4b70-82e9-f45c4866dc24   20Gi       RWX            Delete           Bound    default/efs-claim   efs-sc                  3s

➜ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
efs-claim   Bound    pvc-0922e0e2-2ea5-4b70-82e9-f45c4866dc24   20Gi       RWX            efs-sc         8s

5. To further verify this, let’s terminate the EKS worker node and wait for the pod to be rescheduled:

➜ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE
efs-example   1/1     Running   0          30s   172.31.81.36   ip-172-31-89-143.eu-central-1.compute.internal

➜ kubectl exec efs-example -- bash -c "cat example/out.txt"
Tue Mar 2 09:31:15 UTC 2021
Tue Mar 2 09:31:20 UTC 2021
Tue Mar 2 09:31:25 UTC 2021
Tue Mar 2 09:31:30 UTC 2021
Tue Mar 2 09:31:35 UTC 2021

➜ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE
efs-example   1/1     Running   0          15s   172.31.56.18   ip-172-31-49-60.eu-central-1.compute.internal

➜ kubectl exec efs-example -- bash -c "cat example/out.txt"
Tue Mar 2 09:31:15 UTC 2021
Tue Mar 2 09:31:20 UTC 2021
Tue Mar 2 09:31:25 UTC 2021
Tue Mar 2 09:31:30 UTC 2021
Tue Mar 2 09:31:35 UTC 2021
Tue Mar 2 09:31:40 UTC 2021
[cut]
Tue Mar 2 09:37:23 UTC 2021

StatefulSets

One thing we have noticed when talking to customers is that some of them are confused about the relationship between Kubernetes StatefulSets and automatic storage provisioning. To be clear, StatefulSets do not have a hard requirement for automatic storage provisioning. In fact, before this launch, you could already use StatefulSets with EKS and the EFS CSI driver by simply pre-creating the persistent volumes to be consumed by the StatefulSets. With this feature, we make the experience of working with StatefulSets even better by removing the requirement to pre-provision the persistent volumes. Now you can declare the StatefulSet in your YAML definition and the EFS CSI driver will provision the volumes for you automatically, as in the sketch below.
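
As an illustration, here is a minimal sketch of a StatefulSet that reuses the efs-sc storage class from the test above; the names (efs-app) and the requested capacity are placeholder values, and each replica gets its own dynamically provisioned volume through a volumeClaimTemplate:

apiVersion: v1
kind: Service
metadata:
  name: efs-app
spec:
  clusterIP: None  # headless service referenced by the StatefulSet via serviceName
  selector:
    app: efs-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: efs-app
spec:
  serviceName: efs-app
  replicas: 2
  selector:
    matchLabels:
      app: efs-app
  template:
    metadata:
      labels:
        app: efs-app
    spec:
      containers:
        - name: app
          image: centos
          command: ["/bin/sh"]
          args: ["-c", "while true; do echo $(date -u) >> /example/out.txt; sleep 5; done"]
          volumeMounts:
            - name: persistent-storage
              mountPath: /example
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage
      spec:
        accessModes: ["ReadWriteMany"]
        storageClassName: efs-sc
        resources:
          requests:
            storage: 1Gi

As with the standalone PVC earlier, the requested capacity is only a formality for EFS; any valid value works.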

AWS Fargate

Dynamic provisioning is not yet supported for Fargate pods. The EFS CSI driver now contains two components: a controller that makes calls to EFS for dynamic provisioning, and a node agent that mounts volumes into a pod. With Amazon EKS on AWS Fargate, this node agent is built in. If you look closely at the EFS CSI Helm chart, you'll see that Fargate is excluded as a target from the node agent installation. The controller, on the other hand, is not built in and needs to run as a separate deployment. Version 1.2 of the controller is not compatible with the v1.1 node agent running on Fargate. Additionally, the controller currently requires access to IMDS, which is not compatible with pods running on Fargate. We are in the process of rolling out an EFS CSI node agent update across the Fargate fleet, which will allow dynamic provisioning as long as the controller runs on an EC2-based worker node. You can subscribe to this GitHub issue for progress updates. We are also investigating workarounds for the controller's IMDS requirement so that you can run the controller as a Fargate pod.

Conclusion

Amazon Elastic File System can automatically scale from gigabytes to petabytes of data, supports automatic encryption of your data at rest and in transit, and offers seamless integration with AWS Backup. With the introduction of dynamic provisioning for EFS PersistentVolumes in Kubernetes, we can now provision storage on demand and provide better integration with modern containerized applications. EFS access points can be used in conjunction with the CSI driver to enforce user identity and offer a clean out-of-the-box logical separation between storage spaces within the same EFS file system. To learn more, you can visit the open source EFS CSI driver project on GitHub, or see the Amazon EKS documentation.

Mike Stefaniak

Mike Stefaniak is a Principal Product Manager at Amazon Web Services focusing on all things Kubernetes and delivering features that help customers accelerate their modernization journey on AWS.

Marco Ballerini

Marco is a Senior DevOps Consultant at Amazon Web Services focusing on Kubernetes and delivering features that help customers accelerate their containers adoption.