Announcing General Availability of Amazon EKS Anywhere on Snow

This is the beginning of a beautiful friendship; a tale of two cloud services, traveling down two seemingly independent paths, destined to converge.

But first, a brief history:

  • Since their launch in November of 2018, AWS Snowball Edge devices have been used to run applications for data processing, analytics, and machine learning in remote or disconnected environments, from rugged construction sites to seafaring vessels, factory floors, and beyond.
  • Earlier, in June of 2018, in response to the increased adoption of microservice architectures and Kubernetes taking the lead as the de facto standard for container orchestration, Amazon Elastic Kubernetes Service (Amazon EKS) also became generally available, offering customers a fully managed Kubernetes control plane backed by the performance, scale, and reliability of AWS infrastructure, complete with AWS networking and security integrations.

That brings us to today.

We’re happy to announce the general availability of Amazon EKS Anywhere on Snow. This release automates the creation and management of Kubernetes clusters on AWS Snowball Edge Compute Optimized devices with AWS optimization and support for containerized workloads that need to run at edge locations without a reliable internet connection or self-managed hardware.

A picture of a cute English bulldog on top of three AWS Snowball Edge devices.

With Amazon EKS Anywhere clusters on AWS Snowball Edge devices, you can run containerized workloads at the edge using ruggedized hardware provided by AWS with pay-as-you-go pricing. AWS Snowball also offers discount pricing for 1-year and 3-year usage commitments. See the AWS Snowball pricing page for more details. Amazon EKS Anywhere simplifies the creation and management of your Kubernetes clusters on-premises, bringing your computing applications closer to data sources for enhanced analytics and real-time processing.

With this release, you can use the eksctl anywhere CLI to create an Amazon EKS Anywhere cluster on a single AWS Snowball Edge device or spread the control plane and data plane nodes across up to three devices for high availability. Fewer devices can also be used to reduce cost depending on your specific compute and storage needs. Review the quotas for AWS Snowball Edge for more information.

Solution Overview

The diagram below illustrates an Amazon EKS Anywhere on Snow deployment:

An Illustration of an EKS Anywhere Cluster running on a single AWS Snowball Edge device.

  • The AWS Snowball Edge device is connected to a local area network (LAN) router via an RJ45, SFP+, or QSFP+ physical network interface and assigned a routable IP address within the LAN subnet via dynamic host configuration protocol (DHCP).
  • Two virtual network interfaces are created to associate two Amazon Elastic Compute Cloud (Amazon EC2) instances with the physical network interface of the AWS Snowball Edge device for further administration.
  • For disconnected environments without internet access, a local Harbor registry can be hosted on an Amazon EC2 instance deployed within the AWS Snowball Edge device. This is an optional addition. You can also use your own local container registry, or if you plan to have internet connectivity you can pull the necessary container images from Amazon ECR.
  • The cluster creation workflow begins from an Amazon EKS Anywhere Administrative (EKS-A Admin) instance running on the AWS Snowball Edge device, where a kind bootstrap cluster running inside a Docker container ingests a cluster configuration file to stand up the corresponding Amazon EKS Anywhere cluster.
  • Cilium is used as a container network interface (CNI) plugin, and direct network interfaces (DNI) are created and associated with each cluster node, allowing pods to communicate with each other without network address translation (NAT).
  • kube-vip is used as a control plane load balancer. In this context, kube-vip is running as a static pod on the control plane nodes and will use address resolution protocol (ARP) to update the route mapping between the cluster endpoint virtual IP address (VIP) and the corresponding hardware media access control (MAC) address upon failover. kube-vip also uses the Kubernetes Go client library to perform leader election for new control plane nodes in the event of failover.
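If you do use a local registry as described above, the cluster manifest can direct image pulls to it. As a hedged, illustrative sketch (the field names follow the EKS Anywhere `registryMirrorConfiguration` schema, but the endpoint, port, and certificate values here are placeholders, not defaults):

```yaml
# Illustrative excerpt of a Cluster spec pointing image pulls at a local
# registry -- all values are placeholders for your own environment.
spec:
  registryMirrorConfiguration:
    endpoint: "192.0.2.20"        # IP of the instance hosting the registry
    port: "443"
    caCertContent: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
```

With a mirror configured this way, nodes pull the cluster's container images from the local endpoint instead of reaching out to Amazon ECR over the internet.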

Getting Started

Before ordering an AWS Snowball Edge device to host your Amazon EKS Anywhere cluster, be sure to complete the prerequisite actions for building an Amazon EKS Distro Amazon Machine Image (AMI) based on the Ubuntu 20.04 LTS – Focal subscription from AWS Marketplace using the Kubernetes Image Builder. This AMI is used to deploy both the control plane and data plane nodes on your device. If you plan to work in disconnected environments and don’t have a local container registry, you can also build a Harbor registry AMI.

Next, complete the steps for ordering a Snowball Edge device. You will notice a few updates to the console, including an Amazon EKS Anywhere tab and a Build your own AMI section, where the Amazon EKS Distro AMI you built (and optionally the Harbor registry AMI) will appear in a list of AMIs available to add to your Snowball job and ship pre-installed on the device. An AMI for the EKS-A Admin instance also comes pre-installed. If you want to order multiple AWS Snowball Edge devices, select the number of devices you want in the High Availability section.

A picture of the updated AWS Snowball console.

When your devices arrive, ensure they are connected to your LAN before powering them on. Each device will then be assigned an IP address from your LAN subnet via DHCP. Note that ARP must be allowed in your LAN. After retrieving the IP address and manifest for each device, follow the instructions for unlocking the Snowball Edge device using either the Snowball Edge client (SBE client) or AWS OpsHub. Both tools can be downloaded from the AWS Snowball resources page.

To simplify the management of multiple devices, you can configure multiple profiles using the SBE client or AWS OpsHub as well. These profiles are similar to the named profiles you can create when using the AWS Command Line Interface (AWS CLI). In fact, you can also create named profiles for the AWS CLI that target different locally deployed devices by either retrieving the root credentials of the device or creating an AWS Identity and Access Management (AWS IAM) local user on the device. See the instructions for setting up local users for more information.
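As a hedged sketch of what such named profiles can look like, the snippet below writes one profile per device to a credentials file in the working directory (the profile names and IP addresses are illustrative, and the keys are the standard AWS documentation example values; `AWS_SHARED_CREDENTIALS_FILE` is used here so the default `~/.aws/credentials` is left untouched):

```shell
# Create one named profile per locally deployed Snowball Edge device.
# Profile names, keys, and IPs below are placeholders for illustration.
export AWS_SHARED_CREDENTIALS_FILE="$PWD/snow-credentials"

cat > "$AWS_SHARED_CREDENTIALS_FILE" <<'EOF'
[snow-device-1]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[snow-device-2]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
EOF

# Each device exposes its own local service endpoint, so pair the profile
# with that device's endpoint on each call, for example:
#   aws ec2 describe-instances --profile snow-device-1 \
#     --endpoint http://192.0.2.10:8008
```

Because each device is addressed by its own local endpoint, the profile only supplies the credentials; the target device is still selected per command via the endpoint flag.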

After starting the EKS-A Admin instance on your AWS Snowball Edge device, you can also optionally configure a Harbor instance to serve as a local container registry in environments without an internet connection.

Generate the Cluster Configuration File

Before you can create an Amazon EKS Anywhere cluster, you must first compose a cluster configuration file. Luckily, rather than starting from scratch, you can generate a template manifest by executing the following commands within the EKS-A Admin instance, denoting snow as the provider:

export CLUSTER_NAME=snow-cluster 

eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider snow > eksa-cluster.yaml

The generated template manifest contains default configurations for several different objects, including a Cluster object, a SnowDatacenterConfig object, and two SnowMachineConfig objects for the respective control plane and data plane nodes. Some of the attributes for these objects will need to be modified based on the specific configuration of your LAN, while others can be optionally modified to meet the requirements of your particular use case. For example, each SnowMachineConfig object needs to reference the Amazon EKS Distro Ubuntu AMI as well as the physical network interface IP address for each target device you want to spread your nodes across. You should also update the Cluster object with CIDR blocks that will be used for pod and service IP address provisioning to correspond with a range that does not overlap your LAN, and you should denote an IP address on your LAN that is not reserved to be used by the control plane endpoint. See the Amazon EKS Anywhere documentation for more details.
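To make those edits concrete, here is a hedged, abbreviated sketch of the relevant parts of the manifest (field names follow the EKS Anywhere Snow provider schema, but every value shown is a placeholder that must be adapted to your LAN and your own AMI):

```yaml
# Illustrative excerpt only -- values are placeholders, not defaults.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: snow-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # must not overlap your LAN
    services:
      cidrBlocks: ["10.96.0.0/12"]     # must not overlap your LAN
  controlPlaneConfiguration:
    count: 3
    endpoint:
      host: "192.0.2.100"              # unreserved IP on your LAN for the VIP
    machineGroupRef:
      kind: SnowMachineConfig
      name: snow-cluster-cp
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowMachineConfig
metadata:
  name: snow-cluster-cp
spec:
  amiID: ami-0example                  # your Amazon EKS Distro Ubuntu AMI
  instanceType: sbe-c.xlarge
  devices:                             # physical NIC IPs of the target devices
  - "192.0.2.10"
  - "192.0.2.11"
  - "192.0.2.12"
```

The `devices` list is what spreads nodes across multiple AWS Snowball Edge devices; listing a single IP keeps the whole cluster on one device.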

Note: By default, the eksctl anywhere generate clusterconfig command produces a Cluster manifest that is configured for a stacked etcd topology, where the etcd members and control plane components are co-located on the same instance, but you can configure an unstacked topology where etcd members run on dedicated instances for high availability. To use an unstacked etcd topology, define a static IP range using the SnowIPPool object to assign addresses to the direct network interfaces that will be attached to each instance instead of using DHCP.
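As a hedged sketch, a SnowIPPool object looks roughly like the following (the structure follows the EKS Anywhere SnowIPPool schema, but the addresses are placeholders; the range must consist of unreserved addresses on your LAN):

```yaml
# Illustrative excerpt only -- addresses are placeholders.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowIPPool
metadata:
  name: ip-pool-1
spec:
  pools:
  - ipStart: "192.0.2.64"   # first address in the static range
    ipEnd: "192.0.2.79"     # last address in the static range
    gateway: "192.0.2.1"
    subnet: "192.0.2.0/24"
```

The relevant SnowMachineConfig objects then reference this pool so that direct network interfaces receive addresses from the static range instead of DHCP.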

Create Your First Cluster

The bootstrap cluster needs access to IAM credentials and device certificates in order to launch the Amazon EKS Anywhere cluster nodes. You may use the root credentials for testing, but it’s recommended that you create an access key for an IAM local user with a scoped-down permission policy document for Amazon EKS Anywhere to use on your devices.

After creating a local user and corresponding access keys on each AWS Snowball Edge device, use the SBE client from your local machine to retrieve the credentials and package them into a consolidated text file using the snowballedge list-access-keys and get-secret-access-key commands. Repeat the same process to retrieve and package the Snowball Edge device certificates using the snowballedge list-certificates and get-certificate commands. Entries in the resulting files should look like the following example, but with different values:

cat creds # example credentials file

aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = snow


cat certs # example certificates file

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

Note that if you are using multiple AWS Snowball Edge devices, the respective user credentials and certificates may appear in any order in the consolidated text files.

Next, create a directory on the EKS-A Admin instance to store the consolidated credentials and certificates files, then copy them over using the IP address of the attached virtual network interface (VNI_IP):

export VNI_IP=

ssh -i eks-a-admin-key.pem ubuntu@$VNI_IP mkdir -p path/to/
scp -i eks-a-admin-key.pem creds certs ubuntu@$VNI_IP:~/path/to/

SSH into the EKS-A Admin instance and set the following environment variables to reference the credentials and device certificates you copied over:

ssh -i eks-a-admin-key.pem ubuntu@$VNI_IP

export EKSA_AWS_CREDENTIALS_FILE='path/to/creds'
export EKSA_AWS_CA_BUNDLES_FILE='path/to/certs'

Finally, use the eksctl anywhere CLI to initiate the creation of the Amazon EKS Anywhere cluster, referencing the previously created cluster configuration file, and optionally including the --bundles-override flag for air-gapped scenarios using a local registry:

eksctl anywhere create cluster \
 -f eksa-cluster.yaml \
 --bundles-override /usr/lib/eks-a/manifests/bundle-release.yaml

After the Amazon EKS Anywhere cluster is created, you can set an environment variable pointing to the kubeconfig file that was generated as part of the same process. The kubeconfig file is then used by the kubectl CLI to access the Amazon EKS Anywhere cluster. For example, you can verify that the workload cluster creation completed by listing the machines to view the status of both the control plane and data plane nodes:

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig

kubectl get machines -A

NAMESPACE     NAME                                 CLUSTER        NODENAME                PROVIDERID     PHASE     AGE     VERSION
eksa-system   snow-cluster-ch8vp                   snow-cluster   s-i-8b3a2f2348e6e385b   aws-snow:///   Running   5m29s   v1.21.13-eks-1-21-16
eksa-system   snow-cluster-fbnh4                   snow-cluster   s-i-8af36ce3a3f4d0045   aws-snow:///   Running   5m29s   v1.21.13-eks-1-21-16
eksa-system   snow-cluster-g5t42                   snow-cluster   s-i-833e6498b74122f4b   aws-snow:///   Running   5m29s   v1.21.13-eks-1-21-16
eksa-system   snow-cluster-md-0-75659f959c-4kt89   snow-cluster   s-i-8b6cf64a4a2553207   aws-snow:///   Running   5m28s   v1.21.13-eks-1-21-16
eksa-system   snow-cluster-md-0-75659f959c-p9d6t   snow-cluster   s-i-881f3fc6be178ef5a   aws-snow:///   Running   5m28s   v1.21.13-eks-1-21-16
eksa-system   snow-cluster-md-0-75659f959c-v5dpb   snow-cluster   s-i-8e0e995541c0d046d   aws-snow:///   Running   5m28s   v1.21.13-eks-1-21-16

Things to Know

Device Support – Amazon EKS Anywhere on Snow is currently only supported on AWS Snowball Edge Compute Optimized devices and is not yet available on other Snow Family devices.

Version Support – Please refer to the official Amazon EKS Anywhere and Kubernetes version support policy page.

Pricing – In addition to AWS Snowball pricing, keep in mind the Amazon EKS Anywhere Enterprise Subscription, which is required for support of Amazon EKS Anywhere clusters and provides access to additional paid features such as Amazon EKS Anywhere Curated Packages.

Add-On Validation – The Amazon EKS Anywhere Conformance and Validation Framework is available to help partners validate their solutions for Amazon EKS Anywhere on Snow deployments.

Launch Partners

AWS Partners are key to the success of Amazon EKS Anywhere. The following partners have validated their software solutions through our Amazon EKS Anywhere Conformance and Validation Framework, extending their GitOps driven integrations to Amazon EKS Anywhere on Snow devices. Customers can deploy the validated solutions that these partners provide to operate their Amazon EKS Anywhere clusters on Snowball Edge devices, addressing common production readiness concerns such as secrets management, storage, and maintenance of third-party components across a distributed fleet of devices.

  • Dynatrace provides you with operational and business performance metrics
  • HashiCorp helps you manage secrets and protect sensitive data
  • Kubecost is a popular open source cloud cost monitoring tool
  • Sysdig enables you to confidently secure containers, Kubernetes, and cloud services
  • SUSE NeuVector is an open source, Zero Trust container security platform

Nathan Arnold

Nathan is a Solutions Architect based out of North Carolina. He works primarily with AWS Federal Partners on migration, modernization, and compliance efforts, but also specializes in Kubernetes and AWS container services. When he's not working with customers, he enjoys tackling home renovation projects and playing with his dogs.