Amazon EKS on AWS Graviton2 generally available: considerations on multi-architecture apps
Today, Amazon EKS on AWS Graviton2 is generally available, and with this post we want to give you some background on what this means for you and how it works in practice. First-generation AWS Graviton instances had been available in preview since early 2019, and many of you participated in the AWS Graviton2 preview program we launched earlier this year. Thank you for being part of this program; your feedback and suggestions put us in the position to announce that Amazon EKS on AWS Graviton2 is now in General Availability (GA). Before we get into the details of what this GA means, let’s have a look at what multi-architecture means in the context of containerized workloads, specifically with Amazon EKS.
Multi-Arch across development & deployment
By multi-architecture (multi-arch for short) we mean support for two or more CPU architecture families. For your app to run on different CPUs, code needs to be available for the respective Instruction Set Architectures (ISAs), such as ARMv8 or x86-64.
How does “to be available” look across the development and deployment life cycle? The following figure depicts this, in principle:
Starting on the left-hand side:
- As a developer, you’re adding features to your code or fixing bugs. The programming language and its ecosystem need to be multi-arch aware and support you in creating the artifacts for the target architectures.
- The runtime environment, in our case a container orchestrator like Amazon EKS, uses the artifacts provided by developers in the previous steps.
- The cycle is closed by feeding operational insights from the runtime back into development. These can be metrics, logs, and traces for troubleshooting, or usage-pattern analysis (which paths of an application are hot, which are mostly unused) as input for feature roadmap exercises.
Alright, let’s have a closer look at the first two phases of the development and deployment life cycle.
As a developer, you’re adding a new feature to your code or you might be fixing a bug. We assume you either have an environment handy that natively allows you to build the artifacts, for example a Linux-based Arm workstation or, going forward, also Apple’s Arm-based MacBooks, or that you employ cross-platform builds. The programming language you’re using and its ecosystem need to be multi-arch aware and support you in creating the artifacts, such as container images, for the target architectures.
No matter if you’re using cloud native programming languages such as Go or Rust that come with built-in multi-arch support (GoArm and Rust platform support) or interpreted environments like PHP, Python, Ruby, or Node.js, once your code is ready you build an Open Container Initiative (OCI) compliant container image. You can do this, for example, using Docker’s buildx or, equally possible, a remote build. In this context, you also want to check the multi-arch readiness of your automated build and test pipeline, for example, Arm support in Travis CI.
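For cross-platform builds, a single buildx invocation can produce and push images for several architectures in one go. The following is a minimal sketch, not taken from any specific project: the image name and tag are placeholders, and cross-building for Arm on an x86 host additionally requires QEMU binfmt emulation to be set up.

# Minimal sketch: one build, two target architectures, pushed as a single
# multi-arch image ("myregistry/myapp:1.0.0" is a placeholder).
$ docker buildx create --use
$ docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --tag myregistry/myapp:1.0.0 \
    --push .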
Next up, you push your artifacts, including container images, to a registry, which acts as the DevOps hand-over point to the runtime environment. Earlier this year we introduced multi-arch container images for Amazon ECR, so we’ve got you covered in this part of the life cycle as well.
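If you build per-architecture images separately rather than via buildx, you can stitch them together into one multi-arch image yourself. Here is a hedged sketch using docker manifest (an experimental Docker CLI feature at the time of writing); the account ID, region, repository, and tags are placeholder assumptions:

# Assumes two architecture-specific tags have already been pushed to the repo.
$ docker manifest create \
    111122223333.dkr.ecr.eu-west-1.amazonaws.com/myapp:1.0.0 \
    111122223333.dkr.ecr.eu-west-1.amazonaws.com/myapp:1.0.0-amd64 \
    111122223333.dkr.ecr.eu-west-1.amazonaws.com/myapp:1.0.0-arm64
$ docker manifest push 111122223333.dkr.ecr.eu-west-1.amazonaws.com/myapp:1.0.0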
The final step is the deployment: the runtime environment, in our case a container orchestrator like Kubernetes, uses the artifacts provided by developers in the previous steps. Kubernetes, written in Go, is inherently multi-arch, providing its control plane components for a number of architectures. In Kubernetes, and by extension in Amazon EKS, the worker node-local supervisor called the kubelet instructs the container runtime via a standardized interface to pull container images from a registry such as Amazon ECR and launch them accordingly. All of this is multi-arch enabled and automated.
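You can observe this selection mechanism yourself by inspecting the manifest list of any multi-arch image; python:3.8 below is merely an arbitrary example of a multi-arch official image, not something the EKS setup depends on:

# Each entry in a manifest list maps a platform to an image digest; the
# node's container runtime pulls the digest matching its own architecture.
$ docker manifest inspect python:3.8 | grep '"architecture"'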
From here on we will focus on the runtime environment, specifically Amazon EKS, and what general availability of AWS Graviton2 means for it.
What does General Availability mean?
Kubernetes has the notion of a control plane (where cluster state is stored and manipulated through the Kubernetes API server) and a data plane consisting of worker nodes:
For the data plane you can use managed node groups based on EC2 instances running in your account or AWS Fargate, a serverless data plane if you like.
AWS Graviton2 processors power Arm-based EC2 instances that deliver a major leap in performance and capabilities as well as significant cost savings. A primary goal of running containers is to improve the cost efficiency of your applications. Combine the two and you get great price performance: for example, based on internal testing of workloads, we saw 20% lower cost and up to 40% higher performance for M6g, C6g, and R6g instances over M5, C5, and R5 instances.
As of today, Amazon EKS on AWS Graviton2 is generally available in all regions where both services are offered, and that means:
- We’re supporting ARMv8.2 architecture (64 bit), amongst others.
- End-to-end multi-architecture support (see below for details).
- Mixed managed node groups, combining x86- and Arm-based instances in one cluster, are now supported (see the nodeSelector sketch after this list).
- The EKS API and tooling such as eksctl take care of the architecture-specific configurations, for example, launching Arm-based control plane components such as CoreDNS or kube-proxy pods.
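As referenced above, here is a minimal sketch of how you could pin a workload to the Arm-based nodes of a mixed cluster, using the standard architecture label the kubelet sets on every node; the pod name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: arm-only               # illustrative name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64  # schedule onto Arm 64-bit nodes only
  containers:
  - name: main
    image: arm64v8/plone:5.2.1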
Now that we have a general idea of what AWS Graviton2 EC2 instances in the Amazon EKS data plane mean, let’s see them in action.
Arm in action: deploying an open source CMS
In the hands-on part of this post we will focus on the deployment arc of the life cycle. As a prerequisite, you will need an Amazon EKS cluster with at least one Graviton2 node group provisioned as per the docs. To verify this, have a look at the nodes in your EKS cluster with the following command, where you should see at least one arch=arm64 in the (rightmost) LABELS column:
$ kubectl get nodes --show-labels
NAME                                           STATUS   ROLES    AGE   VERSION               LABELS
ip-192-168-15-231.eu-west-1.compute.internal   Ready    <none>   11d   v1.15.11-eks-065dce   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=m6g.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1a,kubernetes.io/arch=arm64,kubernetes.io/hostname=ip-192-168-15-231.eu-west-1.compute.internal,kubernetes.io/os=linux
ip-192-168-33-98.eu-west-1.compute.internal    Ready    <none>   11d   v1.15.11-eks-065dce   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=m6g.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1c,kubernetes.io/arch=arm64,kubernetes.io/hostname=ip-192-168-33-98.eu-west-1.compute.internal,kubernetes.io/os=linux
ip-192-168-48-242.eu-west-1.compute.internal   Ready    <none>   11d   v1.15.11-eks-065dce   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=m6g.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1c,kubernetes.io/arch=arm64,kubernetes.io/hostname=ip-192-168-48-242.eu-west-1.compute.internal,kubernetes.io/os=linux
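If the full label listing is hard to scan, you can also filter on the architecture label directly; this should list exactly your Graviton2 nodes:

$ kubectl get nodes -l kubernetes.io/arch=arm64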
For the workload example we picked Plone, an open source Content Management System (CMS) written in Python. Use the following Kubernetes manifest and store it in a file called plone.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: plone
  template:
    metadata:
      labels:
        app: plone
    spec:
      containers:
      - name: main
        image: arm64v8/plone:5.2.1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: plone
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: plone
Next, deploy the workload using kubectl apply -f plone.yaml
and check if everything is up and running:
$ kubectl get deploy,po,svc
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/plone   1/1     1            1           1d

NAME                         READY   STATUS    RESTARTS   AGE
pod/plone-576f69df8b-7xzgn   1/1     Running   0          1d

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/plone   ClusterIP   10.100.89.103   <none>        80/TCP    1d
OK, now let’s see if the Plone pod in fact runs as an Arm 64-bit application:
$ kubectl exec -it \
    $(kubectl get pods -l=app=plone -o=jsonpath={.items..metadata.name}) \
    -- cat /proc/version
Linux version 4.14.186-146.268.amzn2.aarch64 (mockbuild@ip-10-0-1-125) (gcc version 7.3.1 20180712 (Red Hat 7.3.1-9) (GCC)) #1 SMP Tue Jul 14 18:17:02 UTC 2020
And there we have it: the aarch64 in the kernel version string is proof that the pod is running in an Arm-based Linux environment.
To wrap up, and to prove that the CMS really works as intended, execute kubectl port-forward svc/plone 8888:80, then point a browser at http://127.0.0.1:8888/ and you should see the following:
Congratulations, you have successfully launched an Arm-based containerized application in Amazon EKS running on AWS Graviton2. You are set up to get the most bang for your buck!
Get involved and what’s up next
While AWS Graviton2 for Amazon EKS is now generally available, we don’t stop here. In the following, a few tips on how to get started.
We’re maintaining a GitHub repository as a starting place for all your Graviton-related explorations, so check it out and consider contributing: aws/aws-graviton-getting-started.
If you want to try out the above example on Amazon EKS yourself, we recommend giving it a go with the official CLI tool eksctl. To create a test cluster with eksctl, use a config file like the following:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: al-arm
  region: eu-west-1
managedNodeGroups:
  - name: mng-arm0
    instanceType: m6g.medium
    desiredCapacity: 3
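Assuming you stored the config above in a file called cluster.yaml (the name is arbitrary), creating the cluster is then a one-liner:

$ eksctl create cluster -f cluster.yaml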
Last but not least, keep an eye on our containers roadmap, especially issue 793 on Arm-based tasks in AWS Fargate, which will enable serverless AWS Graviton workloads. Please let us know if something doesn’t work the way you expect, and also leave feedback here in the comments or open an issue on the AWS containers roadmap on GitHub.