How do I mount an Amazon EFS file system on an Amazon ECS container or task running on Fargate?

Last updated: 2020-12-17

I want to mount an Amazon Elastic File System (Amazon EFS) file system on an Amazon Elastic Container Service (Amazon ECS) container or task running on AWS Fargate.

Short description

To mount an Amazon EFS file system on a Fargate task or container, first create a task definition that references the file system. Make sure that the file system is reachable from the containers in your task across all Availability Zones in your AWS Region. When you run the task, Fargate automatically mounts the file system to the containers that you specify in the task definition.

Important: The following resolution applies to the Fargate platform version 1.4.0 or later, which has persistent storage that you can define at the task and container level in Amazon ECS. Fargate platform versions 1.3.0 or earlier don't support persistent storage using Amazon EFS.

Before you complete the steps in the Resolution section, you must have the following:


Create and configure an Amazon EFS file system

1.    Create an Amazon EFS file system, and then note the EFS ID and security group ID.

Note: Your Amazon EFS file system, Amazon ECS cluster, and Fargate tasks must all be in the same VPC.
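If you prefer the AWS CLI, you can create the file system and its mount targets with commands like the following sketch. The creation token, subnet ID, and security group ID are placeholders for your own values:

```shell
# Create the file system; note the FileSystemId in the output (fs-xxxxxxxx).
aws efs create-file-system \
    --creation-token my-fargate-efs \
    --tags Key=Name,Value=my-fargate-efs

# Create one mount target per Availability Zone that your tasks run in.
# subnet-aaaa1111 and sg-efs1234 are placeholder IDs for your subnet and
# for the security group that you attach to the EFS mount target.
aws efs create-mount-target \
    --file-system-id fs-123xx4x5 \
    --subnet-id subnet-aaaa1111 \
    --security-groups sg-efs1234
```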

2.    To allow inbound connections on port 2049 (Network File System, or NFS) from the security group associated with your Fargate task or service, edit the security group rules of your EFS file system.

3.    Update the security group of your Amazon ECS service to allow outbound connections on port 2049 to your Amazon EFS file system's security group.
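Steps 2 and 3 can also be done from the CLI. Assuming sg-efs1234 is the security group of your EFS file system and sg-task5678 is the security group of your Fargate task or service (both placeholder IDs):

```shell
# Allow inbound NFS (TCP 2049) on the EFS security group
# from the Fargate task's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-efs1234 \
    --protocol tcp \
    --port 2049 \
    --source-group sg-task5678

# Allow outbound NFS from the task's security group
# to the EFS file system's security group.
aws ec2 authorize-security-group-egress \
    --group-id sg-task5678 \
    --ip-permissions 'IpProtocol=tcp,FromPort=2049,ToPort=2049,UserIdGroupPairs=[{GroupId=sg-efs1234}]'
```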

Create a task definition

1.    Open the Amazon ECS console.

2.    From the navigation pane, choose Task Definitions, and then choose Create new Task Definition.

3.    In the Select launch type compatibility section, choose FARGATE, and choose Next Step.

4.    In the Configure task and container definitions section, for Task Definition Name, enter a name for your task definition.

5.    In the Volumes section, choose Add volume.

6.    For Name, enter a name for your volume.

7.    For Volume type, choose EFS.

8.    For File system ID, enter the ID for your Amazon EFS file system.

Note: You can specify custom options for Root directory, Encryption in transit, and EFS IAM authorization. Or, you can accept the default, where "/" is the root directory.
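In the JSON task definition, those custom options map to fields of efsVolumeConfiguration. The following sketch enables encryption in transit and IAM authorization; the access point ID is a placeholder for your own value:

```json
"efsVolumeConfiguration": {
    "fileSystemId": "fs-123xx4x5",
    "rootDirectory": "/",
    "transitEncryption": "ENABLED",
    "authorizationConfig": {
        "accessPointId": "fsap-0123456789abcdef0",
        "iam": "ENABLED"
    }
}
```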

9.    Choose Add.

10.    In the Container Definitions section, choose Add container.

11.    In the STORAGE AND LOGGING section, in the Mount points sub-section, for Source volume, select the volume that you created earlier.

12.    For Container path, enter your container path.

13.    (Optional) In the ENVIRONMENT section, for Entry point, enter your entry point.

14.    For Command, enter df -h to display the mounted file systems.

Note: You can use the entry point and command to test if your Amazon EFS file system is mounted successfully. By default, the container exits after the df -h command executes successfully. The JSON task definition example in step 16 uses an infinite while loop to keep the task running.
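The entry point and command pair amounts to running a command string through a shell. A local approximation of what the container executes, without the infinite loop so that the script exits:

```shell
#!/bin/sh
# Runs df -h once and prints RUNNING, mirroring the container's
# "df -h && while true; do echo \"RUNNING\"; done" command without
# looping forever.
df -h
echo "RUNNING"
```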

15.    Choose Add.

16.    Fill out the remaining fields in the task definition wizard, and then choose Create.

In the following example, the task definition creates a data volume named efs-test. The nginx container mounts the host data volume at the Any_Container_Path path.

    {
        "family": "sample-fargate-test",
        "networkMode": "awsvpc",
        "executionRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole",
        "containerDefinitions": [
            {
                "name": "fargate-app",
                "image": "nginx",
                "portMappings": [
                    {
                        "containerPort": 80,
                        "hostPort": 80,
                        "protocol": "tcp"
                    }
                ],
                "essential": true,
                "entryPoint": [
                    "sh",
                    "-c"
                ],
                "command": [
                    "df -h && while true; do echo \"RUNNING\"; done"
                ],
                "mountPoints": [
                    {
                        "sourceVolume": "efs-test",
                        "containerPath": "Any_Container_Path"
                    }
                ],
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-group": "AWS_LOG_GROUP_PATH",
                        "awslogs-region": "AWS_REGION",
                        "awslogs-stream-prefix": "AWS_STREAM_PREFIX"
                    }
                }
            }
        ],
        "volumes": [
            {
                "name": "efs-test",
                "efsVolumeConfiguration": {
                    "fileSystemId": "fs-123xx4x5"
                }
            }
        ],
        "requiresCompatibilities": [
            "FARGATE"
        ],
        "cpu": "256",
        "memory": "512"
    }
Note: Replace fileSystemId, logConfiguration, containerPath, and other placeholder values with values for your custom configuration. Also, confirm that your task definition has an execution role Amazon Resource Name (ARN) to support the awslogs log driver.
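If you save the JSON to a file (for example, taskdef.json, a name chosen here for illustration), you can also register the task definition from the CLI:

```shell
# Register the task definition from the JSON file.
aws ecs register-task-definition \
    --cli-input-json file://taskdef.json
```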

Run a Fargate task and check your task logs

1.    Run a Fargate task using the task definition that you created earlier.

Important: Be sure to run your task on Fargate platform version 1.4.0 or later.
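From the CLI, the platform version is set with --platform-version. The cluster name, subnet ID, and security group ID below are placeholders for your own values:

```shell
aws ecs run-task \
    --cluster my-cluster \
    --task-definition sample-fargate-test \
    --launch-type FARGATE \
    --platform-version 1.4.0 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-task5678],assignPublicIp=ENABLED}'
```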

2.    To verify that your Amazon EFS file system is successfully mounted to your Fargate container, check your task logs.

The output of df -h looks similar to the following:

2020-10-27 15:15:35 Filesystem        1K-blocks    Used Available Use% Mounted on
2020-10-27 15:15:35 overlay            30832548 9859324  19383976  34% /
2020-10-27 15:15:35 tmpfs                 65536       0     65536   0% /dev
2020-10-27 15:15:35 shm                 2018788       0   2018788   0% /dev/shm
2020-10-27 15:15:35 tmpfs               2018788       0   2018788   0% /sys/fs/cgroup
2020-10-27 15:15:35 127.0.0.1:/ 9007199254739968       0 9007199254739968 0% /Any_Container_Path
2020-10-27 15:15:35 /dev/xvdcz         30832548 9859324  19383976  34% /etc/hosts
2020-10-27 15:15:35 tmpfs               2018788       0   2018788   0% /proc/acpi
2020-10-27 15:15:35 tmpfs               2018788       0   2018788   0% /sys/firmware
2020-10-27 15:15:35 tmpfs               2018788       0   2018788   0% /proc/scsi

The entry mounted on /Any_Container_Path, which reports about 8 EiB available, is your Amazon EFS file system.
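With the awslogs driver configured as in the task definition, you can read the task's output back with the CloudWatch Logs CLI. The log group and stream prefix are the placeholders from the task definition; TASK_ID is the ID portion of the task ARN returned by run-task:

```shell
# The awslogs driver names log streams prefix/container-name/task-id.
aws logs get-log-events \
    --log-group-name AWS_LOG_GROUP_PATH \
    --log-stream-name AWS_STREAM_PREFIX/fargate-app/TASK_ID \
    --query 'events[].message' \
    --output text
```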

