Q. What is Amazon Elastic File System?
Amazon EFS is designed to provide serverless, fully elastic file storage that lets you share file data without provisioning or managing storage capacity and performance. With a few selections in the AWS Management Console, you can create file systems that are accessible to Amazon Elastic Compute Cloud (EC2) instances, Amazon container services (Amazon Elastic Container Service [ECS], Amazon Elastic Kubernetes Service [EKS], and AWS Fargate), and AWS Lambda functions through a file system interface (using standard operating system file I/O APIs). These file systems also support full file system access semantics, such as strong consistency and file locking.
Amazon EFS file systems can automatically scale from gigabytes to petabytes of data without needing to provision storage. Tens, hundreds, or even thousands of compute instances can access an Amazon EFS file system at the same time, and Amazon EFS provides consistent performance to each compute instance. Amazon EFS is designed to be highly durable and highly available. With Amazon EFS, there is no minimum fee or setup costs, and you pay only for what you use.
Q. What use cases does Amazon EFS support?
Amazon EFS provides performance for a broad spectrum of workloads and applications: big data and analytics, media processing workflows, content management, web serving, and home directories.
Amazon EFS Standard storage classes are ideal for workloads that require the highest levels of durability and availability.
EFS One Zone storage classes are ideal for workloads such as development, build, and staging environments. They are also ideal for analytics, simulation, and media transcoding, and for backups or replicas of on-premises data that don’t require Multi-AZ resilience.
Q: When should I use Amazon EFS vs. Amazon Elastic Block Store (Amazon EBS) vs. Amazon S3?
AWS offers cloud storage services to support a wide range of storage workloads.
EFS is a file storage service for use with Amazon compute (EC2, containers, serverless) and on-premises servers. EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently accessible storage for up to thousands of EC2 instances.
Amazon EBS is a block-level storage service for use with EC2. EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance.
Amazon S3 is an object storage service. S3 makes data available through an internet API that can be accessed anywhere.
Learn more about what to evaluate when considering Amazon EFS.
Q. What Regions is Amazon EFS currently available in?
Refer to Regional Products and Services for details of Amazon EFS service availability by Region.
Q. How do I start using Amazon EFS?
To use Amazon EFS, you must have an AWS account. If you don’t already have one, you can sign up for an AWS account, and instantly get access to the AWS Free Tier.
Once you have created an AWS account, refer to the EFS Getting started guide to begin using EFS. You can create a file system through the console, the AWS Command Line Interface (CLI) and the EFS API (and various language-specific SDKs).
Q. How do I access a file system from an EC2 instance?
To access your file system, mount the file system on an EC2 Linux-based instance using the standard Linux mount command and the file system’s DNS name. To simplify accessing your Amazon EFS file systems, we recommend using the Amazon EFS mount helper utility. Once mounted, you can work with the files and directories in your file system like you would with a local file system.
EFS uses the Network File System version 4 (NFS v4) protocol. For a step-by-step example of how to access a file system from an EC2 instance, see the guide here.
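As a sketch of what the mount helper does under the hood, the following assembles the raw NFS v4.1 mount command with the AWS-recommended mount options. The file system ID, Region, and mount point are placeholders, not values from this FAQ:

```python
def build_mount_command(fs_id: str, region: str, mount_point: str) -> str:
    """Assemble the standard Linux NFS v4.1 mount command for an EFS file
    system using its DNS name. The fs_id, region, and mount_point arguments
    are placeholders you substitute for your own resources."""
    dns_name = f"{fs_id}.efs.{region}.amazonaws.com"
    # Recommended NFS options: 1 MiB read/write size, hard retries
    options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
    return f"sudo mount -t nfs4 -o {options} {dns_name}:/ {mount_point}"

print(build_mount_command("fs-12345678", "us-east-1", "/mnt/efs"))
```

In practice, `sudo mount -t efs fs-12345678 /mnt/efs` with the mount helper achieves the same result and also handles in-transit encryption.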
Q. How do I manage a file system?
Amazon EFS is a fully managed service, so all of the file storage infrastructure is managed for you. When you use Amazon EFS, you avoid the complexity of deploying and maintaining complex file system infrastructure. An Amazon EFS file system grows and shrinks automatically as you add and remove files, so you don’t need to manage storage procurement or provisioning.
You can administer a file system through the console, CLI, or the EFS API (and various language-specific SDKs). The console, API, and SDK provide the ability to create and delete file systems, configure how file systems are accessed, create and edit file system tags, enable features such as Provisioned Throughput and Lifecycle Management, and display detailed information about file systems.
Q. How do I load data into a file system?
AWS DataSync provides a fast way to securely sync existing file systems with Amazon EFS. DataSync works over any network connection, including AWS Direct Connect or AWS VPN. You can also use standard Linux copy tools to move data files to Amazon EFS.
For more information about accessing a file system from an on-premises server, see the On-premises Access section of this FAQ.
For more information about moving data to the Amazon cloud, see the Cloud Data Migration page.
Scale and performance
Q. How much data can I store?
You can store petabytes of data with Amazon EFS. Amazon EFS file systems are elastic, automatically growing and shrinking as you add and remove files. There’s no need to provision file system size up front, and you pay only for what you use.
Q. How many EC2 instances can connect to a file system?
Amazon EFS supports one to thousands of Amazon Elastic Compute Cloud (EC2) instances connecting to a file system concurrently.
Q. How many file systems, mount targets, or access points can I create?
Please visit the Amazon EFS Limits page for more information on Amazon EFS limits.
Q. What latency, throughput, and IOPS performance can I expect for my Amazon EFS file system?
The expected performance for your Amazon EFS file system depends on its specific configuration (e.g., storage class and throughput mode) and the specific file system operation type (read or write). Please see the File System Performance documentation for more information on expected latency, maximum throughput, and maximum IOPS performance for Amazon EFS file systems.
Q. What throughput modes are available for my file system?
By default, Amazon EFS file systems provide throughput that scales with the amount of storage in your file system and that supports bursting to higher levels for up to 12 hours a day.
For throughput-intensive workloads, EFS offers two options for delivering higher levels of performance independent of your file system storage: Elastic Throughput and Provisioned Throughput. Use Elastic Throughput if you’re unsure of your application’s peak throughput needs or if your application is very spiky, with low baseline activity (such that it uses less than 5% of capacity on average when you provision for peak needs). With Elastic Throughput, throughput performance automatically scales with your workload activity, and you only pay for the throughput you use. Use Provisioned Throughput if you know your workload’s peak throughput requirements and you expect your workload to consume a higher share (more than 5% on average) of your application’s peak throughput capacity. Provisioned throughput is designed to offer the highest levels of throughput consistency while providing a predictable billing experience.
The amount of throughput you can deliver depends on the throughput mode you choose. Please see the documentation on File System Performance for more information.
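The 5% rule of thumb above can be expressed as a small helper. This is an illustrative sketch of the guidance in this FAQ, not an official API or a pricing guarantee:

```python
def recommend_throughput_mode(avg_throughput_mibps: float,
                              peak_throughput_mibps: float) -> str:
    """Rule of thumb from the text: if average utilization is below ~5% of
    peak, Elastic Throughput is usually the better fit (pay for what you
    use); above it, Provisioned Throughput offers consistent performance
    and predictable billing. Thresholds here mirror the FAQ's guidance."""
    if peak_throughput_mibps <= 0:
        raise ValueError("peak throughput must be positive")
    utilization = avg_throughput_mibps / peak_throughput_mibps
    return "elastic" if utilization < 0.05 else "provisioned"

print(recommend_throughput_mode(2, 100))   # spiky workload: 2% average utilization
print(recommend_throughput_mode(20, 100))  # steady workload: 20% average utilization
```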
Q: How do I monitor my read and write throughput usage?
You can monitor your throughput using Amazon CloudWatch. The TotalIOBytes, ReadIOBytes, WriteIOBytes, and MetadataIOBytes metrics reflect the actual throughput your applications are driving. PermittedThroughput and MeteredIOBytes reflect your metered throughput limit and usage, respectively, after metering read requests at a 1:3 ratio to other requests. With the Amazon EFS console, you can use the Percent Throughput Limit graph to monitor your throughput use. If you use custom CloudWatch dashboards or another monitoring tool, you can also create a CloudWatch metric math expression that compares MeteredIOBytes to PermittedThroughput. If these values are equal, you’re consuming your entire amount of throughput, and should consider configuring provisioned throughput or increasing the amount of throughput configured. For bursting throughput mode file systems, monitor the BurstCreditBalance metric and alert on a balance approaching zero to operate your file system at its burst rate.
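The metric math comparison described above can be mirrored locally. This sketch assumes you have already fetched a period's `MeteredIOBytes` sum and the `PermittedThroughput` average (in bytes/second) from CloudWatch:

```python
def percent_throughput_used(metered_io_bytes_sum: float,
                            period_seconds: float,
                            permitted_throughput_bps: float) -> float:
    """Compare metered usage (bytes summed over the period) against the
    permitted throughput limit (bytes/second). A result near 100.0 means
    you are consuming your entire throughput allowance and should consider
    Provisioned or Elastic Throughput."""
    used_bps = metered_io_bytes_sum / period_seconds
    return 100.0 * used_bps / permitted_throughput_bps

# e.g., 3 GiB metered over a 60-second period against a 100 MiB/s limit
print(round(percent_throughput_used(3 * 1024**3, 60, 100 * 1024**2), 1))
```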
Q: How will I be billed in Elastic Throughput mode?
With Elastic Throughput, you are billed for the amount of data transferred (reads and writes). If you access data from Infrequent Access storage classes, you also pay the IA data access charge.
Q. How will I be billed in Provisioned Throughput mode?
In Provisioned Throughput mode, you’re billed independently for the storage you use and the throughput you provision. You’re billed hourly on the following dimensions:
- Storage (per GB-month): You’re billed for the amount of storage you use in GB-month.
- Throughput (per MB/second-month): You’re billed for throughput you provision in MB/second-month.
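The two billing dimensions above combine additively. The rates in this sketch are hypothetical placeholders, not actual EFS prices; see the EFS pricing page for per-Region rates:

```python
def provisioned_monthly_bill(storage_gb: float, provisioned_mbps: float,
                             storage_rate: float, throughput_rate: float) -> float:
    """Illustrative Provisioned Throughput mode bill: storage (GB-month)
    and provisioned throughput (MB/s-month) are billed independently.
    storage_rate and throughput_rate are hypothetical $/unit inputs."""
    return storage_gb * storage_rate + provisioned_mbps * throughput_rate

# e.g., 500 GB stored and 10 MB/s provisioned, at hypothetical rates
print(provisioned_monthly_bill(500, 10, storage_rate=0.30, throughput_rate=6.00))
```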
Q. What happens to my burst credits when I switch to Elastic Throughput?
You do not accrue or consume any burst credits while in Elastic Throughput mode. You can continue to view your existing burst credit balance on Amazon CloudWatch in Elastic mode.
Q. What is the throughput of my file system if the Provisioned Throughput mode is set to less than the Baseline Throughput I am entitled to in Bursting Throughput mode?
In the default Bursting Throughput mode, the throughput of your file system scales with the amount of data stored. If your file system in the Provisioned Throughput mode grows in size after the initial configuration, your file system could potentially have a higher baseline rate in Bursting Throughput mode than in the Provisioned Throughput mode.
In that case, your file system throughput will be the throughput it’s entitled to in the default Bursting Throughput mode, and you won’t incur any additional charge for the throughput beyond the bursting storage cost. You can also burst according to the Amazon EFS throughput bursting model.
Storage classes and lifecycle management
Q. What storage classes does Amazon EFS offer?
Amazon EFS offers you the choice of creating file systems using Standard or One Zone storage classes. Standard storage classes store data redundantly within and across multiple AZs. One Zone storage classes store data redundantly within a single AZ, at a 47% lower price compared to file systems using Standard storage classes, for workloads that don’t require Multi-AZ resilience.
EFS offers four storage classes: two Standard storage classes, EFS Standard and EFS Standard-Infrequent Access (EFS Standard-IA); and two One Zone storage classes, EFS One Zone and EFS One Zone-Infrequent Access (EFS One Zone-IA).
Q. How do I move files to EFS Standard-IA and EFS One Zone-IA?
Moving files to EFS Standard-IA and EFS One Zone-IA starts by enabling Amazon EFS Lifecycle Management and choosing an age-off policy for your files. Lifecycle Management automatically moves your data from the EFS Standard to the EFS Standard-IA storage class, or from the EFS One Zone to the EFS One Zone-IA storage class. For example, you can automatically move files from EFS Standard to EFS Standard-IA if they aren’t accessed after one day.
Q. What is EFS Intelligent-Tiering?
EFS Intelligent-Tiering is designed to deliver automatic cost savings for workloads with changing access patterns. EFS Intelligent-Tiering uses EFS Lifecycle Management to monitor the access patterns of your workload. It automatically moves files that aren’t accessed for the duration of the Lifecycle policy (for example, 30 days) from performance-optimized storage classes (EFS Standard or EFS One Zone) to their corresponding cost-optimized Infrequent Access storage class (EFS Standard-Infrequent Access or EFS One Zone-Infrequent Access). This helps you take advantage of IA storage pricing that is up to 92% lower than EFS Standard or EFS One Zone storage pricing. If access patterns change and that data is accessed again, Lifecycle Management automatically moves the files back to EFS Standard or EFS One Zone, reducing the risk of unbounded access charges. If the files become infrequently accessed again, Lifecycle Management transitions the files back to the appropriate IA storage class based on your Lifecycle policy.
Q. When should I use EFS Intelligent-Tiering?
Use EFS Intelligent-Tiering to automatically move files between performance-optimized and cost-optimized storage classes when data access patterns are unknown. Activate EFS Lifecycle Management by choosing a policy to automatically move files to EFS Standard-IA or EFS One Zone-IA. Additionally, choose a policy to automatically move files back to EFS Standard or EFS One Zone when they’re accessed. With EFS Intelligent-Tiering, you can save on storage costs even if your application access patterns are unknown or access patterns change over time. With these two Lifecycle Management policies set, you pay only for data transition charges between storage classes, and not for repeated data access. Examples of workloads that might have unknown access patterns include web assets and blogs stored by content management systems, logs, machine learning (ML) inference files, and genomic data.
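The two Lifecycle Management policies that make up Intelligent-Tiering map to the EFS `PutLifecycleConfiguration` API. This sketch only builds the request parameters (the file system ID is a placeholder); applying them would be done with `boto3.client("efs").put_lifecycle_configuration(**params)`:

```python
def intelligent_tiering_request(file_system_id: str,
                                ia_after: str = "AFTER_30_DAYS") -> dict:
    """Build PutLifecycleConfiguration parameters for the two policies
    described above: transition files to the IA storage class after the
    age-off period, and transition them back to the primary storage class
    on first access. No AWS call is made here."""
    return {
        "FileSystemId": file_system_id,
        "LifecyclePolicies": [
            {"TransitionToIA": ia_after},
            {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
        ],
    }

params = intelligent_tiering_request("fs-12345678")
print(params["LifecyclePolicies"])
```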
Q. What Amazon EFS features are supported when using EFS Standard-IA and EFS One Zone-IA storage classes?
All Amazon EFS features are supported when using the EFS Standard-IA and EFS One Zone-IA storage classes. Files smaller than 128 KiB are not eligible for Lifecycle Management and will always be stored on either the EFS Standard storage class or the EFS One Zone storage class.
Q. What is the latency difference between the performance-optimized storage classes (EFS Standard, EFS One Zone) and the cost-optimized IA storage classes (EFS Standard-IA, EFS One Zone-IA)?
When reading from or writing to the EFS Standard-IA or EFS One Zone-IA storage class, your first-byte latency is higher than EFS Standard or EFS One Zone storage classes. The EFS Standard and EFS One Zone storage classes are designed to provide submillisecond read latencies and single-digit millisecond write latencies on average. The EFS Standard-IA and EFS One Zone-IA storage classes are designed to provide double-digit millisecond latencies on average.
Q. What throughput can I drive against files stored in the EFS Standard-IA or EFS One Zone-IA storage class?
Under the default Bursting throughput mode, the throughput you can drive against an Amazon EFS file system scales linearly with the amount of data stored on the EFS Standard or EFS One Zone storage classes. All Amazon EFS file systems, regardless of size, can burst to 100 MiB/second of throughput. File systems with more than 1 TiB of data stored on EFS Standard or EFS One Zone storage classes can burst to 100 MiB/second per TiB of data stored on EFS Standard or EFS One Zone storage classes. If you require higher amounts of throughput to EFS Standard-IA or EFS One Zone-IA storage classes, use Amazon EFS Elastic Throughput or Provisioned Throughput. For more information, see the Amazon EFS performance documentation.
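The bursting model described above reduces to a simple formula, sketched here for illustration:

```python
def burst_throughput_mibps(standard_storage_tib: float) -> float:
    """Bursting model from the text: every file system, regardless of
    size, can burst to 100 MiB/s; file systems storing more than 1 TiB
    on EFS Standard or One Zone can burst to 100 MiB/s per TiB stored."""
    return max(100.0, 100.0 * standard_storage_tib)

print(burst_throughput_mibps(0.2))  # small file system: 100 MiB/s floor
print(burst_throughput_mibps(5.0))  # 5 TiB stored: 100 MiB/s per TiB
```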
Data protection and availability
Q: How is Amazon EFS designed to provide high durability and availability?
By default, every EFS file system object (such as a directory, file, or link) is redundantly stored across multiple AZs for file systems using Standard storage classes. If you select Amazon EFS One Zone storage classes, your data is redundantly stored within a single AZ. Amazon EFS is designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy. When you use Standard storage classes, a file system can be accessed concurrently from all AZs in the Region where it’s located, which means you can architect your application to fail over from one AZ to another to achieve the highest level of application availability. Mount targets are designed to be highly available within an AZ for all EFS storage classes. For more information on availability, see the Amazon EFS Service Level Agreement.
Q: How durable is Amazon EFS?
Amazon EFS is designed to provide 99.999999999% (11 nines) of durability over a given year. EFS Standard and EFS Standard-IA storage classes are designed to retain data even if an AZ is lost. Because EFS One Zone storage classes store data in a single AZ, data stored in these storage classes might be lost during a disaster or other fault within the AZ. As with any environment, the best practice is to have a backup and to put safeguards in place against accidental deletion. For Amazon EFS data, that best practice includes replicating your file system across Regions using Amazon EFS Replication and maintaining a functioning, regularly tested backup using AWS Backup. File systems using EFS One Zone storage classes automatically back up files by default, unless you disable this functionality at file system creation.
Q: What failure modes should I consider when using Amazon EFS One Zone compared to Standard storage classes?
File systems using Amazon EFS One Zone storage classes are not resilient to a complete AZ outage. During an AZ outage, you will experience a loss of availability, because your file system data is not replicated to a different AZ. During a disaster or fault within an AZ affecting all copies of your data, you might experience loss of data that has not been replicated using Amazon EFS Replication. EFS Replication is designed to meet a recovery point objective (RPO) and recovery time objective (RTO) of minutes. You can use AWS Backup to store additional copies of your file system data and restore them to a new file system in an AZ or Region of your choice. Amazon EFS file system backup data created and managed by AWS Backup is replicated to three AZs and is designed for 99.999999999% (11 nines) durability.
Q. How can I guard my EFS One Zone file system against the loss of an AZ?
You can use Amazon EFS Replication or AWS Backup to guard your EFS One Zone file system against the loss of an AZ. Amazon EFS Replication replicates your file system data to another Region or within the same Region, without additional infrastructure or a custom process to monitor and synchronize data changes. EFS replication is nearly continuous and designed to provide a recovery point objective (RPO) and a recovery time objective (RTO) of minutes for many file systems.
Backups are enabled by default for all file systems using Amazon EFS One Zone storage classes; you can deactivate this setting when creating a file system. During an AZ loss, you can restore your file data from a recent backup to a newly created file system in any operating AZ. If your data is stored in One Zone storage classes, you might lose data during an AZ loss for files that have changed since the last automatic backup.
Q: What is Amazon EFS Replication?
EFS Replication can replicate your file system data to another Region or within the same Region without requiring additional infrastructure or a custom process. Amazon EFS Replication automatically and transparently replicates your data to a second file system in a Region or AZ of your choice. You can use the Amazon EFS console, AWS CLI, and APIs to activate replication on an existing file system. EFS Replication is continual and provides a recovery point objective (RPO) and a recovery time objective (RTO) of minutes, helping you meet your compliance and business continuity goals.
Q: Why should I use EFS Replication?
If you must maintain a geographically separated copy of your file system for disaster recovery, compliance, or business continuity planning, EFS Replication can help you meet those requirements. For applications that require low-latency cross-Region access, Amazon EFS Replication provides a read-only copy in the Region of your choice. With Amazon EFS Replication, you can save up to 75% on disaster recovery storage costs by using EFS One Zone storage classes and a 7-day age-off lifecycle management policy for your destination file system. There is no need to build and maintain a custom process for data replication. EFS Replication also streamlines monitoring and alarming on your RPO status using Amazon CloudWatch.
Q: Is my replica file system point-in-time consistent?
No. EFS Replication doesn’t provide point-in-time consistent replication. EFS Replication publishes a timestamp metric on Amazon CloudWatch called TimeSinceLastSync. All changes made to your source file system at least as of the published time will be copied over to the destination. Changes to your source file system after the recorded time might not have been replicated over. You can monitor the health of your EFS Replication using Amazon CloudWatch. If you interrupt the replication process due to a disaster recovery event, files from the source file system might have transferred but are not yet copied to their final locations. These files and their contents can be found on your destination file system in a lost+found directory created by EFS Replication under the root directory.
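An alarm on replication lag is a natural companion to the metric above. This sketch checks a fetched `TimeSinceLastSync` value against an RPO target; the 15-minute threshold is an illustrative assumption, not an EFS default:

```python
def replication_rpo_breached(time_since_last_sync_s: float,
                             rpo_target_s: float = 900) -> bool:
    """Return True when replication lag (the TimeSinceLastSync metric,
    in seconds) exceeds the RPO target. The 900-second default is an
    illustrative threshold; choose one that matches your recovery goals."""
    return time_since_last_sync_s > rpo_target_s

print(replication_rpo_breached(600))   # 10 minutes of lag
print(replication_rpo_breached(2000))  # ~33 minutes of lag
```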
Q: How can I use my destination file system while replication is enabled and when replication is deleted?
When your replication is in the Enabled state, only EFS Replication can modify your destination file system; you can access your replica in read-only mode during this time. During a disaster, you can fail over to your destination file system by deleting your replication configuration from the console or by using the DeleteReplicationConfiguration API. When you delete the replication configuration, Amazon EFS stops replicating changes and makes the destination file system writable. You can then point your application to your destination file system to continue your operations. Use the Amazon EFS console or the DescribeReplicationConfigurations API call to check your destination file system’s status after you’ve failed over.
Q: Can I use EFS Replication to replicate my file system to more than one AWS Region or to multiple file systems within a second Region?
No. EFS Replication supports replication between exactly two file systems: one source and one destination.
Q: Can I replicate Amazon EFS file systems across AWS accounts?
No. Amazon EFS does not support replicating file systems to a different AWS account.
Q: Does EFS Replication consume my file system burst credits, IOPS limit, and throughput limits?
No. EFS Replication activity does not consume burst credits or count against the file system IOPS and throughput limits for either file system in a replication pair.
Q: Can I expect my destination file system to be available as soon as I activate EFS Replication?
Yes. When you first activate EFS Replication, your replica file system will be created in read-only mode, and your entire source file system will be copied to the destination you selected. The time to complete this operation depends on the size of your source file system. Although you can fail over to your destination file system at any time, it is recommended that you wait until the copy is complete to minimize data loss. You can monitor the progress of your replication from the Amazon EFS console, which indicates the last time your source file system and destination file system were synchronized.
Q. How do I control which Amazon EC2 instances can access my file system?
You control which EC2 instances can access your file system using VPC security group rules and IAM policies. Use VPC security groups to control the network traffic to and from your file system. Attach an IAM policy to your file system to control which clients can mount your file system and with what permissions, and use EFS Access Points to manage application access. Control access to files and directories with POSIX-compliant user and group-level permissions.
Q. How can I use IAM policies to manage file system access?
Using the Amazon EFS console, you can apply common policies to your file system, such as disabling root access, enforcing read-only access, or enforcing that all connections to your file system are encrypted. You can also apply more advanced policies, such as granting access to specific IAM roles, including those in other AWS accounts.
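As a sketch of the "all connections must be encrypted" policy mentioned above, the following builds a file system policy document that denies any action over an unencrypted transport. You would apply it with `boto3.client("efs").put_file_system_policy(FileSystemId=..., Policy=...)`; the statement shape here is illustrative:

```python
import json

def enforce_tls_policy() -> str:
    """Build a file system policy (as a JSON string) that denies access
    over unencrypted connections, using the aws:SecureTransport condition
    key. No AWS call is made here."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnencryptedTransport",
            "Effect": "Deny",
            "Principal": {"AWS": "*"},
            "Action": "*",
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }
    return json.dumps(policy)

print(json.loads(enforce_tls_policy())["Statement"][0]["Sid"])
```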
Q. What is an Amazon EFS Access Point?
An EFS Access Point is a network endpoint that users and applications can use to access an EFS file system and enforce file- and folder-level permissions (POSIX) based on fine-grained access control and policy-based permissions defined in IAM.
Q. Why should I use Amazon EFS Access Points?
EFS Access Points give you the flexibility to create and manage multi-tenant environments for your file-based applications in a cloud-native way, helping you simplify data sharing. Traditional mechanisms such as POSIX ACLs for file system access control or Kerberos for authentication require complex setup, management, and maintenance, and often introduce risk. EFS Access Points instead integrate with IAM, enabling cloud-native applications to use POSIX-based shared file storage. Use cases that can benefit from Amazon EFS Access Points include container-based environments where developers build and deploy their own containers, data science applications that require access to production data, and sharing a specific directory in your file system with other AWS accounts.
Q. How do Amazon EFS Access Points work?
When you create an Amazon EFS Access Point, you can configure an operating system user and group, and a root directory for all connections that use it. If you specify the root directory’s owner, EFS will automatically create it with the permissions you provide the first time a client connects to the access point. You can also update your file system’s IAM policy to apply to your access points. For example, you can apply a policy that requires a specific IAM identity in order to connect to a given access point. For more information, see the Amazon EFS user guide.
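The per-access-point user, group, and root directory described above map onto the `CreateAccessPoint` API. This sketch only builds the request parameters (IDs and path are placeholders); applying them would be `boto3.client("efs").create_access_point(**params)`:

```python
def access_point_request(file_system_id: str, app_uid: int, app_gid: int,
                         path: str) -> dict:
    """Build CreateAccessPoint parameters: a fixed POSIX user/group for
    every connection through the access point, and a root directory that
    EFS creates with the given owner and mode (750 here, an illustrative
    choice) on first use. No AWS call is made here."""
    return {
        "FileSystemId": file_system_id,
        "PosixUser": {"Uid": app_uid, "Gid": app_gid},
        "RootDirectory": {
            "Path": path,
            "CreationInfo": {
                "OwnerUid": app_uid,
                "OwnerGid": app_gid,
                "Permissions": "750",
            },
        },
    }

params = access_point_request("fs-12345678", 1001, 1001, "/app1")
print(params["RootDirectory"]["Path"])
```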
Q. What is Amazon EFS Encryption?
Amazon EFS offers the ability to encrypt data at rest and in transit.
Data encrypted at rest is transparently encrypted while being written and transparently decrypted while being read, so you don’t have to modify your applications. Encryption keys are managed by AWS Key Management Service (AWS KMS), eliminating the need to build and maintain a secure key management infrastructure.
Data encryption in transit uses industry-standard Transport Layer Security (TLS) 1.2 to encrypt data sent between your clients and EFS file systems.
Encryption of data at rest and data in transit can be configured together or separately to help meet your unique security requirements.
For more details, see the user documentation on Encryption.
Q. What is the AWS Key Management Service (KMS)?
AWS KMS is a managed service that makes it easier for you to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with AWS services, including EFS, EBS, and S3, making it simpler to encrypt your data with encryption keys that you manage. AWS KMS is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.
Q. How do I enable encryption for my Amazon EFS file system?
You can enable encryption at rest in the EFS console, or by using the AWS CLI or SDKs. When creating a new file system in the EFS console, select “Create File System” and then select the checkbox to enable encryption.
Data can be encrypted in transit between your Amazon EFS file system and its clients by using the Amazon EFS mount helper.
Encryption of data at rest and data in transit can be configured together or separately to help meet your unique security requirements.
For more details, see the user documentation on Encryption.
Q. Does encryption impact Amazon EFS performance?
Encrypting your data has a minimal effect on I/O latency and throughput.
Q. How do I access an Amazon EFS file system from servers in my on-premises datacenter?
You mount an Amazon EFS file system on your on-premises Linux server using the standard Linux mount command and the NFS v4.1 protocol.
For more information about accessing Amazon EFS file systems from on-premises servers, see the documentation.
Q. What can I do by enabling access to my Amazon EFS file systems from my on-premises servers?
You can mount your Amazon EFS file systems on your on-premises servers, and move file data to and from Amazon EFS using standard Linux tools and scripts or AWS DataSync. The ability to move file data to and from Amazon EFS file systems allows for three use cases.
First, you can migrate data from on-premises datacenters to permanently reside in EFS file systems.
Second, you can support cloud bursting workloads to off-load your application processing to the cloud. You can move data from your on-premises servers into your Amazon EFS file systems, analyze it on a cluster of EC2 instances in your Amazon VPC, and store the results permanently in your Amazon EFS file systems or move the results back to your on-premises servers.
Third, you can periodically copy your on-premises file data to Amazon EFS to support backup and disaster recovery scenarios.
Q. Can I access my Amazon EFS file system concurrently from my on-premises datacenter servers as well as EC2 instances?
Yes. You can access your Amazon EFS file system concurrently from servers in your on-premises datacenter as well as EC2 instances in your Amazon VPC. Amazon EFS provides the same file system access semantics, such as strong data consistency and file locking, across all EC2 instances and on-premises servers accessing a file system.
Q. What is the recommended best practice when moving file data to and from on-premises servers?
Because of the propagation delay tied to data traveling over long distances, the network latency of the network connection between your on-premises datacenter and your Amazon VPC can be tens of milliseconds. If your file operations are serialized, the latency of the network connection directly impacts your read and write throughput; in essence, the volume of data you can read or write during a period of time is bounded by the amount of time it takes for each read and write operation to complete. To maximize your throughput, parallelize your file operations so that multiple reads and writes are processed by Amazon EFS concurrently. Standard tools like GNU parallel help you to parallelize the copying of file data. For more information, see the online documentation.
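The parallelization advice above can be sketched in a few lines. This is an illustrative helper, not a replacement for GNU parallel or AWS DataSync, which remain the recommended tools for production transfers:

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def parallel_copy(src_dir: str, dst_dir: str, workers: int = 16) -> int:
    """Copy every file under src_dir to dst_dir with multiple copies in
    flight at once, so per-operation network latency is overlapped rather
    than serialized. Returns the number of files copied."""
    src, dst = Path(src_dir), Path(dst_dir)
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p: Path) -> None:
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # preserves timestamps and mode bits

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))  # list() surfaces any exceptions
    return len(files)
```

On a high-latency link, raising `workers` increases the number of concurrent reads and writes Amazon EFS processes, which is what drives aggregate throughput up.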
Q. How do I copy existing data from on-premises file storage to Amazon EFS?
There are a number of methods to copy existing on-premises data into Amazon EFS. AWS DataSync provides a fast and simple way to securely sync existing file systems into EFS and works over any network, including AWS Direct Connect.
AWS Direct Connect provides a high-bandwidth and lower-latency dedicated network connection over which you can mount your EFS file systems. Once mounted, you can use DataSync to copy data into EFS up to 10 times faster than standard Linux copy tools.
For more information on AWS DataSync, see the Data transfer section of this FAQ.
Q. What AWS-native options do I have to transfer data into my file system?
DataSync is an online data transfer service that makes it faster and simpler to move data between on-premises storage and Amazon EFS. DataSync uses a purpose-built protocol to accelerate and secure transfer over the internet or Direct Connect, at speeds up to 10 times faster than open-source tools. Using DataSync, you can perform one-time data migrations, transfer on-premises data for timely in-cloud analysis, and automate replication to AWS for data protection and recovery.
AWS Transfer Family is a fully managed, highly available file transfer service with auto scaling capabilities that supports the Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP), eliminating the need for you to manage file transfer–related infrastructure. Your end users’ workflows remain unchanged, while data uploaded and downloaded over the chosen protocols is stored in your Amazon EFS file system.
Q. How do I transfer data into or out of my Amazon EFS file system?
To get started with DataSync, first deploy the software agent, which is available for download from the console (no agent is needed when copying files between two Amazon EFS file systems). Then use the console or CLI to connect the agent to your on-premises or in-cloud file system using the Network File System (NFS) protocol, select your Amazon EFS file system, and start copying data.
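The same flow can be scripted. As a non-authoritative sketch, the request payloads below (with hypothetical hostnames and ARNs) are what you would pass to the boto3 DataSync client's create_location_nfs, create_location_efs, and create_task calls:

```python
# Sketch of the scripted DataSync flow, assuming the agent is already
# deployed and activated. All hostnames, paths, and ARNs are hypothetical
# placeholders; each dict maps to one boto3 datasync client call.

def nfs_source_params(server: str, subdir: str, agent_arn: str) -> dict:
    """Payload for datasync.create_location_nfs(**params)."""
    return {
        "ServerHostname": server,
        "Subdirectory": subdir,
        "OnPremConfig": {"AgentArns": [agent_arn]},
    }

def efs_dest_params(fs_arn: str, subnet_arn: str, sg_arn: str) -> dict:
    """Payload for datasync.create_location_efs(**params)."""
    return {
        "EfsFilesystemArn": fs_arn,
        "Ec2Config": {"SubnetArn": subnet_arn, "SecurityGroupArns": [sg_arn]},
    }

def task_params(src_loc_arn: str, dst_loc_arn: str) -> dict:
    """Payload for datasync.create_task(**params); afterwards call
    datasync.start_task_execution(TaskArn=...) to begin copying."""
    return {"SourceLocationArn": src_loc_arn, "DestinationLocationArn": dst_loc_arn}
```

Each create call returns a LocationArn or TaskArn that feeds the next step, ending with start_task_execution to kick off the transfer.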
To get started with AWS Transfer Family, first ensure that your file system’s directories are accessible by the POSIX users that you plan to assign to AWS Transfer. Then you can use the console, CLI, or API to create a Transfer Family endpoint and user(s). Once complete, your end users can use their SFTP, FTP, or FTPS clients to access data stored in your Amazon EFS file system.
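As a sketch of those steps in code, the dicts below show the shape of the requests you would pass to the boto3 Transfer Family client's create_server and create_user calls; every ID, role ARN, and user name is a hypothetical placeholder:

```python
# Sketch: standing up a Transfer Family SFTP endpoint backed by EFS.
# IDs, names, and ARNs are hypothetical placeholders.

# Payload for transfer.create_server(**server_params)
server_params = {
    "Protocols": ["SFTP"],
    "Domain": "EFS",  # store transferred files in Amazon EFS rather than S3
    "IdentityProviderType": "SERVICE_MANAGED",
}

# Payload for transfer.create_user(**user_params)
user_params = {
    "ServerId": "s-0123456789abcdef0",  # hypothetical server ID
    "UserName": "analyst",
    "Role": "arn:aws:iam::111122223333:role/TransferEfsAccess",  # hypothetical
    "HomeDirectory": "/fs-0123456789abcdef0/home/analyst",  # file system ID + path
    "PosixProfile": {"Uid": 1000, "Gid": 1000},  # POSIX identity used on EFS
}
```

The PosixProfile is the key EFS-specific piece: its Uid and Gid must have access to the file system directories you assign, which is why the FAQ step above asks you to check directory permissions first.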
Q. Can Amazon EFS data be transferred between Regions?
You can use DataSync to transfer files between two Amazon EFS file systems, including ones in different AWS Regions. AWS Transfer Family endpoints must be in the same Region as your Amazon EFS file system.
Q. Can I access my file system with another AWS account?
Yes. You can use DataSync to copy files to an Amazon EFS file system in another AWS account.
You can also configure your Amazon EFS file system to be accessed by AWS Transfer Family using another account as long as the account has been granted permissions to do so. To learn more about granting Transfer Family permissions to external AWS accounts via file system policies, see the documentation.
Q. What interoperability and compatibility is there between existing AWS services and Amazon EFS?
EFS is integrated with a number of other AWS services, including Amazon CloudWatch, AWS CloudFormation, AWS CloudTrail, AWS Identity and Access Management (IAM), and AWS tagging services.
CloudWatch helps you monitor file system activity using metrics. CloudFormation helps you create and manage file systems using templates.
CloudTrail helps you record all EFS API calls in log files.
IAM helps you control who can administer your file system. AWS tagging services help you label your file systems with metadata that you define.
You can plan and manage your Amazon EFS file system costs by using AWS Budgets. You can work with AWS Budgets from the AWS Billing and Cost Management console. To use AWS Budgets, you create a monthly cost budget for your Amazon EFS file systems.
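To illustrate the CloudFormation integration mentioned above, here is a minimal template sketch, expressed as a Python dict for readability (the resource name and tag value are arbitrary examples):

```python
# Sketch: a minimal CloudFormation template creating an encrypted EFS file
# system with a Name tag and a lifecycle policy that transitions cold files
# to the Infrequent Access storage class after 30 days.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "SharedFileSystem": {  # arbitrary logical resource name
            "Type": "AWS::EFS::FileSystem",
            "Properties": {
                "Encrypted": True,
                "FileSystemTags": [{"Key": "Name", "Value": "shared-data"}],
                "LifecyclePolicies": [{"TransitionToIA": "AFTER_30_DAYS"}],
            },
        }
    },
}

print(json.dumps(template, indent=2))  # ready to save as a template body
```

Deploying this template creates the file system; the tag shows up in AWS tagging services, and the resulting resource emits CloudWatch metrics automatically.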
Q. What type of locking does Amazon EFS support?
Locking in Amazon EFS follows the NFS v4.1 protocol for advisory locking and allows your applications to use both whole-file and byte-range locks.
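A byte-range lock of this kind can be sketched with POSIX fcntl, which is the interface applications typically use for NFS advisory locks. Run against a local file here for illustration; on an EFS mount the NFS client coordinates the same lock across all connected instances:

```python
# Sketch: exclusive advisory lock on just one byte range of a file, leaving
# the rest of the file lockable by other processes.
import fcntl

def locked_write(path: str, offset: int, data: bytes) -> None:
    """Write data at offset while holding an exclusive advisory lock
    covering only that byte range."""
    with open(path, "r+b") as f:
        fcntl.lockf(f, fcntl.LOCK_EX, len(data), offset)  # lock the range
        try:
            f.seek(offset)
            f.write(data)
            f.flush()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN, len(data), offset)  # release it
```

Because the lock is advisory, all cooperating processes must take locks through the same interface; the file system does not block a writer that skips locking.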
Q. Are file system names global (like S3 bucket names)?
Every file system has an automatically generated ID number that is globally unique. You can tag your file system with a name, and these names don’t need to be unique.
Pricing and billing
Q. How much does Amazon EFS cost?
With Amazon EFS, you pay only for what you use per month.
When using the Provisioned Throughput mode, you pay for the throughput you provision per month. There is no minimum fee and no setup charges.
Amazon EFS Infrequent Access (IA) storage is priced based on the amount of storage used and the amount of data accessed. Until Lifecycle Management fully moves your file to an EFS IA storage class (EFS Standard-IA or EFS One Zone-IA), it’s stored on EFS Standard or EFS One Zone and billed at the Standard or One Zone rate, as applicable.
For more EFS pricing information, visit the Amazon EFS Pricing page.
Q. Do your prices include taxes?
Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax.
Access from AWS services
Q. Can I access Amazon EFS from Amazon ECS containers?
Yes. You can access EFS from containerized applications launched by Amazon ECS using both EC2 and Fargate launch types by referencing an EFS file system in your task definition. Find instructions for getting started in the ECS documentation.
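As a sketch of what that task definition reference looks like, the fragment below shows the relevant volumes and mountPoints sections (file system ID, volume name, and image are hypothetical placeholders):

```python
# Sketch: the EFS-related fragment of an ECS task definition, as you would
# pass it to register_task_definition via boto3 or the CLI. IDs and the
# container image are hypothetical.
task_definition_fragment = {
    "volumes": [{
        "name": "shared-storage",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",  # hypothetical file system ID
            "rootDirectory": "/",
            "transitEncryption": "ENABLED",  # encrypt NFS traffic in transit
        },
    }],
    "containerDefinitions": [{
        "name": "app",
        "image": "my-app:latest",  # hypothetical image
        "mountPoints": [{
            "sourceVolume": "shared-storage",  # must match the volume name above
            "containerPath": "/mnt/efs",
        }],
    }],
}
```

The same volume configuration works for both EC2 and Fargate launch types, which is what makes the shared file system portable across them.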
Q. Can I access Amazon EFS from Amazon Elastic Kubernetes Service (EKS) pods?
Yes. You can access EFS from containerized applications launched by Amazon EKS, with either EC2 or Fargate launch types, using the EFS CSI driver. Find instructions for getting started in the EKS documentation.
Q. Can I access Amazon EFS from AWS Lambda functions?
Yes. You can access EFS from AWS Lambda functions by configuring your function to connect to an EFS file system through an EFS access point. Find instructions for getting started in the Lambda documentation.
Q. Can I access Amazon EFS from Amazon SageMaker?
Yes. You can access training data in EFS from Amazon SageMaker training jobs by referencing an EFS file system in your CreateTrainingJob request. EFS is also automatically used for home directories created by SageMaker Studio.
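As a sketch, the EFS-backed input channel of a CreateTrainingJob request looks like the fragment below (the file system ID and directory path are hypothetical placeholders):

```python
# Sketch: the InputDataConfig entry of a SageMaker CreateTrainingJob request
# that reads training data directly from an EFS file system instead of S3.
# The file system ID and path are hypothetical.
input_data_config = [{
    "ChannelName": "training",
    "DataSource": {
        "FileSystemDataSource": {
            "FileSystemId": "fs-0123456789abcdef0",  # hypothetical ID
            "FileSystemType": "EFS",
            "FileSystemAccessMode": "ro",  # read-only access for training
            "DirectoryPath": "/training-data",  # hypothetical path
        }
    },
}]
```

Reading from EFS avoids staging the dataset into S3 first, which helps when the data already lives on a shared file system used by other jobs.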