
AWS Fargate adds support for larger ephemeral volumes

Introduction

AWS Fargate is a serverless, pay-as-you-go compute engine that allows you to focus on building applications without having to manage servers. Starting today, the amount of ephemeral storage you can allocate to the containers in an Amazon EKS Fargate pod is configurable up to a maximum of 175 GiB per pod. Prior to this launch, all AWS Fargate pods came with 20 GiB of ephemeral storage by default, and the cost of this storage was included in the cost of running a Fargate pod.

With configurable ephemeral storage, you’re only billed for the additional storage you allocate to the containers beyond the default 20 GiB. The actual amount of storage AWS Fargate provisions always exceeds the amount you’ve requested because it reserves 10% of each container’s storage for the file system, plus another 5 GiB for various system components. For example, when you create a pod with two containers that each request 50 GiB of ephemeral storage, AWS Fargate allocates storage according to the following formula:

sum of all requests + (10% * sum of all requests) + 5 GiB

In this particular instance, AWS Fargate provisions at least 115 GiB (100 + (10% * 100) + 5) of ephemeral storage. You can see the allocatable and provisioned amounts of storage by running the following commands:

kubectl get node <fargate_node_name> -o=jsonpath="{$.status.allocatable.ephemeral-storage}"
kubectl get node <fargate_node_name> -o=jsonpath="{$.status.capacity.ephemeral-storage}"

The amount of allocatable ephemeral storage should be roughly equal to or greater than the total ephemeral storage requested by all containers. In this case, you can expect the amount of allocatable ephemeral storage to be equal to or greater than 100 GiB. However, you will only be billed for 95 GiB, which is the value derived from the formula above minus the 20 GiB that’s included for free. To calculate the cost of ephemeral storage, subtract 20 GiB from the value derived from the formula, then multiply the result by the hourly rate per GiB for your region, which can be found on the AWS Fargate pricing page.
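To make the math concrete, here’s a minimal shell sketch of the same calculation (the REQUESTED value mirrors the two-container example above; adjust it for your own pods):

# Sketch of the provisioning and billing math, all values in GiB
REQUESTED=100                                       # sum of all container requests
PROVISIONED=$(( REQUESTED + REQUESTED / 10 + 5 ))   # requests + 10% file system overhead + 5 GiB system components
BILLED=$(( PROVISIONED - 20 ))                      # the first 20 GiB are included at no charge
echo "Provisioned: ${PROVISIONED} GiB, Billed: ${BILLED} GiB"
# Prints: Provisioned: 115 GiB, Billed: 95 GiB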

With as much as 175 GiB of ephemeral storage available, you can run a wider range of workloads such as machine learning inference, media transcoding, extract-transform-load (ETL), and other data processing workloads that involve very large datasets and files.

The following section explains how to create a pod with ephemeral storage and how AWS Fargate calculates the total amount of storage to provision.

Prerequisites

You’ll need the following to deploy the sample workload provided in this post:

  • An AWS account
  • eksctl (to create the Amazon EKS cluster and AWS Fargate profile)
  • kubectl

Walkthrough

Allocating ephemeral storage to AWS Fargate pods

You’ll need an Amazon EKS cluster with an AWS Fargate profile for this demonstration. If you don’t have one, you can create a cluster using eksctl:

eksctl create cluster --fargate

With the --fargate option, eksctl creates a pod execution role and an AWS Fargate profile, and patches the CoreDNS deployment to run on AWS Fargate.
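You can confirm that the profile was created by listing the Fargate profiles in your cluster:

eksctl get fargateprofile --cluster <cluster name>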

Deploy a sample workload

Let’s start by creating a Deployment that launches a pod with two containers and assigns ephemeral storage to both containers.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fargate-ephvol-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fargate-ephvol-test
  template:
    metadata:
      labels:
        app: fargate-ephvol-test
    spec:
      containers:
      - name: fargate-container1
        image: busybox
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          limits:
            ephemeral-storage: 60Gi
          requests:
            ephemeral-storage: 30Gi
      - name: fargate-container2
        image: busybox
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          requests:
            ephemeral-storage: 70Gi
EOF

In this example, AWS Fargate allocates a 122.9 GB root volume for this pod, and both containers get access to the entire root volume. Even though we defined a 60 Gi storage limit for the first container, it can use the entire root volume. This is because AWS Fargate ignores ephemeral storage limits; in other words, the kubelet eviction manager won’t evict the pod if the amount of storage consumed by fargate-container1 exceeds 60 Gi. Instead, both containers can write to the root volume until the volume is full.
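As a quick check, here’s a sketch that looks up the Fargate node backing the pod (using the app label from the Deployment above) and prints its provisioned capacity:

POD_NODE=$(kubectl get pod -l app=fargate-ephvol-test -o jsonpath='{.items[0].spec.nodeName}')
kubectl get node "$POD_NODE" -o jsonpath='{.status.capacity.ephemeral-storage}'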

On Amazon Elastic Compute Cloud (Amazon EC2) worker nodes, if a container’s writable layer and log usage exceeds its storage limit, the kubelet marks the pod for eviction. On EKS Fargate, by contrast, the pod won’t be evicted because limits are ignored. Furthermore, all EKS Fargate pods are marked with the PriorityClass system-node-critical.

Resource limits are a Kubernetes mechanism for dealing with the noisy neighbor effect. Limits ensure that a pod doesn’t exceed its allocated capacity, which could otherwise impact other pods on the node. AWS Fargate, however, runs every pod on a dedicated virtual machine with its own storage, network, CPU, and memory. Therefore, customers don’t have to limit resource consumption in Fargate pods.

Ephemeral volume allocation

When pods are scheduled on EKS Fargate, the ephemeral storage requests (or limits when requests are undefined) determine how much ephemeral storage to provision. Additionally:

  • The greater of the sum of ephemeral storage requests across all init containers or across all long-running containers determines the amount of ephemeral storage AWS Fargate provisions (up to a maximum of 175 GiB).
  • All AWS Fargate pods get a 20 GiB ephemeral volume by default. AWS Fargate allocates additional ephemeral storage when pods request more than 20 GiB of ephemeral storage.
  • AWS Fargate allocates ephemeral storage based on storage limits only when requests are undefined (see the example below).
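To illustrate the last rule, here’s a minimal, hypothetical pod spec that defines only an ephemeral storage limit; because no request is set, AWS Fargate sizes the volume based on the 50 Gi limit:

apiVersion: v1
kind: Pod
metadata:
  name: limits-only-example
spec:
  containers:
  - name: app
    image: busybox
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    resources:
      limits:
        ephemeral-storage: 50Gi # no request defined, so this limit drives provisioning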

As with other resource requests and limits, you can restrict the amount of ephemeral storage that can be provisioned by configuring resource quotas or limit ranges. For example, the following limit range sets the default ephemeral storage request value to 20 Gi and a maximum value of 175 Gi.

apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limit-range
spec:
  limits:
  - defaultRequest:
      ephemeral-storage: 20Gi
    max: 
      ephemeral-storage: 175Gi 
    type: Container
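
After applying the limit range to a namespace, you can confirm the default and maximum values it enforces:

kubectl describe limitrange storage-limit-range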

Monitoring ephemeral storage usage

Ephemeral storage capacity is visible in the output of kubectl describe node.
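For example, you can filter the Capacity and Allocatable sections of the node description:

kubectl describe node <fargate_node_name> | grep -i ephemeral-storage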

The kubelet reports ephemeral storage usage in the node summary API:

$ kubectl get --raw \
 "/api/v1/nodes/<fargate_node_name>/proxy/stats/summary"
 
...
"ephemeral-storage": {
    "time": "2023-04-26T00:21:41Z",
    "availableBytes": 43025469440,
    "capacityBytes": 80135319552,
    "usedBytes": 21474951168,
    "inodesFree": 4802640,
    "inodes": 4980736,
    "inodesUsed": 24
   }
...

Kubernetes currently doesn’t expose ephemeral storage metrics to Prometheus (GitHub issue).

If you need to monitor ephemeral volume usage, there are open source projects such as kubelet-stats-exporter that export data from the kubelet’s stats API endpoint.
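If you have jq installed, the following sketch pulls per-pod ephemeral storage usage out of the summary endpoint (the field names follow the kubelet summary API response shown above):

kubectl get --raw "/api/v1/nodes/<fargate_node_name>/proxy/stats/summary" \
  | jq '.pods[] | {pod: .podRef.name, usedBytes: ."ephemeral-storage".usedBytes}'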

Cleaning up

Delete the sample deployment:

kubectl delete deployment fargate-ephvol-test

If you created an EKS cluster, delete the cluster by executing:

eksctl delete cluster <cluster name>

Conclusion

In this post, we showed you how EKS Fargate, a serverless compute platform, reduces administrative overhead by abstracting away the maintenance and scaling of nodes. Each pod runs in its own dedicated compute environment with separate storage, memory, and CPU. With this latest update, you can allocate as much as 175 GiB of ephemeral storage to an EKS Fargate pod, which expands the range of use cases AWS Fargate can address. We can’t wait to see what you do with this increase in storage. To get started with EKS Fargate, refer to the Amazon EKS Fargate documentation.

Jeremy Cowan

Jeremy Cowan is a Specialist Solutions Architect for containers at AWS, although his family thinks he sells "cloud space". Prior to joining AWS, Jeremy worked for several large software vendors, including VMware, Microsoft, and IBM. When he's not working, you can usually find him on a trail in the wilderness, far away from technology.

Re Alvarez-Parmar

In his role as a Containers Specialist Solutions Architect at Amazon Web Services, Re advises engineering teams on modernizing and building distributed services in the cloud. Prior to joining AWS, he spent more than 15 years as an Enterprise and Software Architect. He is based out of Seattle. Connect on LinkedIn at: linkedin.com/in/realvarez/