AWS Storage Blog

Mount Amazon EFS file systems cross-account from Amazon EKS

Many customers use multiple AWS accounts managed by AWS Organizations to create security and cost boundaries around business units, projects, or applications. AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. In some cases, an application in one AWS account must access data stored in another. To use AWS Organizations effectively with Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Elastic File System (Amazon EFS), you need the ability to mount an Amazon EFS file system across account boundaries. This blog shows you how to mount Amazon EFS file systems cross-account from Amazon EKS.

Until recently, mounting Amazon EFS resources in another account meant manually configuring IP address to hostname bindings. For those of you using EKS, it also meant using hostname aliases. Amazon EFS CSI driver version 1.3.2 added support for API-based resolution of Amazon EFS mount targets, which makes it simpler to mount file systems in other AWS accounts.
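Under the hood, the driver calls the EFS DescribeMountTargets API and mounts via the IP address of one of the returned mount targets. As a sketch of what the driver sees, you can list the mount targets yourself with the AWS CLI (the file system ID and the `account-b` profile below are placeholders):

```shell
## List the mount targets the driver chooses from; the driver picks
## one of the returned IpAddress values for the NFS mount.
## <file-system-id> and the "account-b" profile are placeholders.
aws efs describe-mount-targets \
    --file-system-id <file-system-id> \
    --query "MountTargets[].{AZ:AvailabilityZoneName,IP:IpAddress,State:LifeCycleState}" \
    --output table \
    --profile account-b
```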

[Architecture diagram: mounting Amazon EFS file systems cross-account from Amazon EKS]

Setup

In this post, we make the following assumptions:

  • You have an EKS cluster already created in account A and an EFS file system in another account, B.
  • You have established connectivity between the EKS cluster's VPC in account A and the EFS file system's VPC in account B, or connected accounts A and B through VPC sharing. To set up VPC-to-VPC connectivity or VPC sharing, refer to the official documentation.
  • You have an OpenID Connect (OIDC) provider set up for your EKS cluster and a service account associated with the CSI driver's controller service. Refer to the official documentation for detailed steps on how to configure the controller service account.
  • You have installed and configured eksctl and kubectl. To set them up, refer to the official documentation for eksctl and kubectl.

You will need a cross-account IAM role in the AWS account where your EFS file system exists, with permission to describe mount targets. The EFS CSI driver assumes this role to describe the mount targets of the EFS file system, then selects the IP address of one of those mount targets to perform the cross-account mount.

  1. Create an IAM role in AWS account B, which hosts your EFS file system. Add a trust relationship to the role that allows AWS account A, which hosts your EKS cluster, to assume it. Then attach an IAM policy to the role with permission to describe mount targets.
## Download the IAM trust relationship policy
$ curl -s https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/cross_account_mount/iam-policy-examples/trust-relationship-example.json -o iam_trust.json

## Edit the principal to include your EKS cluster's AWS account ID `A`.
"Principal": {
   "AWS": "arn:aws:iam::<aws-account-id-A>:root"
 }
 
## Create an IAM role with cross-account trust relationship
$ aws iam create-role \
     --role-name EFSCrossAccountAccessRole \
     --assume-role-policy-document file://iam_trust.json 

## Download the IAM policy to describe mount targets
$ curl -s https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/cross_account_mount/iam-policy-examples/describe-mount-target-example.json -o describe_mt.json

## Replace the resource for describe mount target with your file system ID's ARN in describe_mt.json
"Resource" : "arn:aws:elasticfilesystem:<region>:<aws-account-id-B>:file-system/<file-system-id>"

## Create an IAM policy to describe mount targets
$ aws iam create-policy \
    --policy-name EFSDescribeMountTargetIAMPolicy \
    --policy-document file://describe_mt.json
    
## Attach it to the cross-account role above
$ aws iam attach-role-policy \
    --role-name EFSCrossAccountAccessRole \
    --policy-arn "arn:aws:iam::<aws-account-id-B>:policy/EFSDescribeMountTargetIAMPolicy"
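Optionally, you can sanity-check the trust relationship from account A before proceeding; a successful call returns the assumed-role ARN (the session name below is arbitrary):

```shell
## From account A, assume the cross-account role created in account B.
## Success confirms the trust relationship is configured correctly.
aws sts assume-role \
    --role-arn "arn:aws:iam::<aws-account-id-B>:role/EFSCrossAccountAccessRole" \
    --role-session-name efs-cross-account-test \
    --query "AssumedRoleUser.Arn" \
    --output text
```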
  2. In AWS account A, which hosts your EKS cluster, create an IAM policy with sts:AssumeRole permission on the cross-account IAM role created in Step 1, and attach it to the IAM role associated with the service account of the driver's controller service.
## Download sts assume iam policy
$ curl -s https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/cross_account_mount/iam-policy-examples/cross-account-assume-policy-example.json -o iam_assume.json

## Edit the resource with cross-account role created above
"Resource": "arn:aws:iam::<aws-account-id-B>:role/EFSCrossAccountAccessRole"

## Create an IAM policy for assume role
$ aws iam create-policy \
    --policy-name AssumeEFSRoleInAccountB \
    --policy-document file://iam_assume.json

## Describe controller's service account
$ kubectl describe sa efs-csi-controller-sa -n kube-system
Name:                efs-csi-controller-sa
Namespace:           kube-system
Labels:              app.kubernetes.io/name=aws-efs-csi-driver
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::<aws-account-id>:role/eksctl-iamserviceaccount-Role-XXYYZZ112233
Image pull secrets:  <none>
Events:              <none>

## Grab the IAM role in the annotations section from above result
ROLE="eksctl-iamserviceaccount-Role-XXYYZZ112233"

## Attach assume permissions to service account's IAM role
$ aws iam attach-role-policy \
    --role-name $ROLE \
    --policy-arn "arn:aws:iam::<aws-account-id-A>:policy/AssumeEFSRoleInAccountB"
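To confirm the attachment, you can list the managed policies on the controller's role; AssumeEFSRoleInAccountB should appear in the output:

```shell
## List managed policies attached to the controller's IAM role.
aws iam list-attached-role-policies \
    --role-name $ROLE \
    --query "AttachedPolicies[].PolicyName" \
    --output text
```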
  3. Create a Kubernetes secret with awsRoleArn as the key and the ARN of the cross-account role from Step 1 as the value.
kubectl create secret generic x-account \
        --namespace=kube-system \
        --from-literal=awsRoleArn="arn:aws:iam::<aws-account-id-B>:role/EFSCrossAccountAccessRole"
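You can decode the secret to confirm the role ARN was stored as expected:

```shell
## Read back the secret; the output should be the cross-account role ARN.
kubectl get secret x-account -n kube-system \
    -o jsonpath='{.data.awsRoleArn}' | base64 --decode
```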
  4. Add a file system policy to your file system in AWS account B to allow mounts from AWS account A, which hosts the EKS cluster.
## File system policy (replace the principal with the AWS account ID of your EKS cluster)
$ cat file-system-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Principal": {
                "AWS": "arn:aws:iam::<aws-account-id-A>:root"
            }
        }
    ]
}

## Put file system policy to your file system
$ aws efs put-file-system-policy --file-system-id fs-abcd1234 \
    --policy file://file-system-policy.json
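To verify, describe the file system policy from account B:

```shell
## Confirm the policy was applied to the file system.
aws efs describe-file-system-policy --file-system-id fs-abcd1234
```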
  5. Create a Kubernetes service account for the driver's node daemonset.
## Create a Kubernetes service account
$ eksctl create iamserviceaccount \
  --cluster=<cluster> \
  --region <AWS Region> \
  --namespace=kube-system \
  --name=efs-csi-node-sa \
  --override-existing-serviceaccounts \
  --attach-policy-arn=arn:aws:iam::aws:policy/AmazonElasticFileSystemClientFullAccess \
  --approve
  6. Add the service account to the driver's node daemonset and deploy the driver.
$ kubectl kustomize "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.3" > driver.yaml 
$ vim driver.yaml # Set the node daemonset's serviceAccountName to the efs-csi-node-sa service account created above
$ kubectl apply -f driver.yaml
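Assuming the node daemonset keeps its default name of efs-csi-node, you can confirm it now runs under the new service account:

```shell
## Print the service account used by the node daemonset;
## the output should be efs-csi-node-sa.
kubectl get daemonset efs-csi-node -n kube-system \
    -o jsonpath='{.spec.template.spec.serviceAccountName}'
```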

At this point, your Amazon EFS CSI driver is set up for cross-account mount.

Test

  1. To test cross-account mount using dynamic provisioning, let's create a storage class for the file system. Check the official GitHub page for the full list of storage class parameters for dynamic provisioning.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
  - iam
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-1234abcd
  directoryPerms: "700"
  az: us-east-1a
  csi.storage.k8s.io/provisioner-secret-name: x-account
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
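Save the manifest (the filename below is arbitrary) and apply it:

```shell
## Create the storage class and confirm it registered.
kubectl apply -f storageclass.yaml
kubectl get storageclass efs-sc
```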
  2. To test volume provisioning and mounting, we deploy a persistent volume claim (PVC) and a pod. The storage capacity value is not actually used; it is only provided to satisfy Kubernetes constraints, as EFS scales elastically in size.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim
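Save both manifests to a file (the name below is arbitrary) and apply them:

```shell
## Deploy the PVC and the pod; provisioning starts immediately.
kubectl apply -f efs-app.yaml
kubectl get pvc efs-claim
```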
  3. After a few seconds, the volume should be provisioned and bound.
$ kubectl logs efs-csi-controller-6c955bd4dc-zv6ww -n kube-system -c csi-provisioner --tail 10
[cut]
I0708 23:57:07.952387       1 controller.go:737] successfully created PV pvc-9eb6a5a4-53fb-4b35-8059-43d59560c0cd for PVC efs-claim and csi volume name fs-b670dd02::fsap-05d7c7795b4fb378b

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-9eb6a5a4-53fb-4b35-8059-43d59560c0cd   5Gi        RWX            Delete           Bound    default/efs-claim   efs-sc                  44s

$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
efs-claim   Bound    pvc-9eb6a5a4-53fb-4b35-8059-43d59560c0cd   5Gi        RWX            efs-sc         100s
  4. At this point, the pod should have started writing data to the file system. Let's verify by checking the pod's data directory.
$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE
efs-app   1/1     Running   0          3m57s

$ kubectl exec -it efs-app -- bash -c "cat /data/out"
Thu Jul 8 23:57:10 UTC 2021
Thu Jul 8 23:57:15 UTC 2021
[cut]
Thu Jul 8 23:58:01 UTC 2021

With that, you have verified that your Amazon EFS CSI driver can perform cross-account mounts: your EKS cluster in account A is reading and writing data on a file system in account B.

Cleaning up

Remember to delete example resources if you no longer need them.

Delete your sample application pod: $ kubectl delete pod efs-app.

Delete your PVC: $ kubectl delete pvc efs-claim.

Conclusion

In this blog, we showed you how to configure cross-account mounts with the Amazon EFS CSI driver, enabling you to decouple your storage and compute resources and place each in a separate account or VPC. The ability to access data in one AWS account from another makes using AWS Organizations with Amazon EKS and Amazon EFS more effective, and helps you maintain security and cost boundaries around business units, projects, or applications as you grow and scale your environment. We hope you enjoyed learning about this way of attaching persistent shared file storage to your applications while continuing to use multiple AWS accounts. Get started by visiting the Amazon EFS CSI driver documentation.

If you have any comments or questions, share them in the comments section.

Karthik Basavaraj


Karthik Basavaraj is a Software Engineer working on Amazon Elastic File System at AWS. He is passionate about storage, containers, and large scale distributed systems. He is focused on improving EFS integration with AWS containers. He is an outdoor enthusiast and enjoys discovering new hiking trails.

Suman Debnath


Suman Debnath is a Principal Developer Advocate working on Amazon Elastic File System at Amazon Web Services, primarily focusing on Storage, Serverless and Machine Learning. He is passionate about large scale distributed systems and is a vivid fan of Python. His background is in storage performance and tool development, where he has developed various performance benchmarking and monitoring tools. You can find him on LinkedIn (https://www.linkedin.com/in/suman-d/) and follow him on Twitter (https://twitter.com/_sumand).