Containers

Secure containerized workloads on Amazon EKS and AWS Fargate with Aqua

Introduction

Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate provides serverless compute for containerized workloads that run on Kubernetes. By eliminating the need for infrastructure management with AWS Fargate, customers avoid the operational overhead of scaling, patching, and securing instances. AWS Fargate provides a secure and controlled environment for container execution; consequently, customers cannot grant extra privileges to running containers. As a result, traditional approaches to container visibility and runtime security, which typically depend on privileged agents, will not work. This post demonstrates how Aqua’s Cloud Native Security Platform delivers runtime security on AWS Fargate without requiring added privileges. Aqua’s platform is compatible with containers deployed on various infrastructures, such as Amazon Elastic Container Service (Amazon ECS) and Amazon EKS; this post focuses on Amazon EKS.

The container runtime security element of Aqua’s platform, the MicroEnforcer, is an agent that can be added to Kubernetes pods and can run unprivileged on AWS Fargate. Aqua’s platform injects the MicroEnforcer into a Kubernetes pod and enforces runtime security, without the user having to make changes to the application or their deployment specifications. These runtime protection capabilities are delivered as part of a comprehensive cloud-native security platform, spanning vulnerability management, cloud security posture management, supply chain security, Kubernetes security and assurance, and Center for Internet Security (CIS) benchmarking.

Aqua Security is an AWS Advanced Technology Partner with the AWS Containers Competency. They provide highly integrated security controls that customers use to build full code-to-production security across their continuous integration/continuous deployment (CI/CD) pipeline, orchestration layer, and runtime environments.

Solution overview

In this post, we’ll provide a walkthrough of how to configure container runtime security for Kubernetes workloads running on Amazon EKS and AWS Fargate using Aqua’s MicroEnforcer. The MicroEnforcer’s features include:

  • Identifying malicious activity, such as access to unauthorized networks or attempts to inject code into the container, and blocking these attempts at runtime.
  • Preventing fileless malware.
  • Preventing containers from running non-compliant images.
  • Preventing executables that are not in the original image from running, through drift prevention for container immutability.
  • Restricting the container to a specified set of executables, while all other executables are prevented from running.
  • Preventing execution of reverse shells.
  • Monitoring files and directories for read, write, and modify operations.

All alerts generated by MicroEnforcer are sent to the Aqua Platform, which in turn can send them to integrated security information and event management (SIEM) and analytics tools, including AWS Security Hub.

How Aqua MicroEnforcer works

Aqua’s MicroEnforcer can be deployed in two ways:

  1. Sidecar architecture – The MicroEnforcer runs in its own container, scheduled in the same Kubernetes pod as the application container. Because the MicroEnforcer and the application are in the same pod, they share the same process ID (PID) and network namespaces, which gives the MicroEnforcer the privileges it needs to implement runtime policies.

A Kubernetes mutating admission controller is part of Aqua’s Security Platform. This admission controller, called the KubeEnforcer, is deployed into the Kubernetes cluster. The KubeEnforcer first validates the parameters in a Kubernetes pod manifest against policies defined in Aqua’s Platform. If the pod specification meets the policies, the KubeEnforcer mutates the manifest, adding in the MicroEnforcer sidecar container.

  2. Embedded in the application container image – The MicroEnforcer is embedded into the application container image when the image is built. When the container runs, there are then two running processes: the application and the MicroEnforcer. Because the MicroEnforcer runs in the same container as the workload, it has the ability to enforce runtime policies.

Walkthrough

In this post, we’ll deploy a sample application called Yelb onto an Amazon EKS cluster with AWS Fargate and use the sidecar architecture to deploy Aqua’s MicroEnforcer. The Yelb application includes the following components:

  • A frontend called yelb-ui is responsible for vending the JavaScript code to the browser.
  • An application server named yelb-appserver, a Sinatra application that reads and writes to a cache server (i.e., redis-server), and a PostgreSQL backend database (yelb-db).
  • Redis stores the number of page views and PostgreSQL stores the votes.

Yelb application architecture

Figure 1: Yelb application architecture

Prerequisites

To follow along with this walkthrough, you need a workstation with the relevant command line interface (CLI) tools and AWS permissions to create AWS infrastructure. This could be a local workstation, such as a laptop, or an AWS Cloud9 instance. The following CLI tools are required:

  • git
  • eksctl
  • kubectl
  • helm

You’ll also need an Aqua Cloud Native Security Platform license to complete this walkthrough. You can request a trial license through the Aqua Trial License landing page.
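Before starting, it can help to confirm the required tools are on your PATH. The helper below is a convenience sketch of ours (not part of the sample repository); it reports any of the named tools that are missing:

```shell
# check_tools: report any of the given CLI tools that are not on PATH.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

# For this walkthrough, run:
#   check_tools git eksctl kubectl helm
```

An empty result means everything is installed; otherwise each missing tool is printed on its own line.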

1. Create the Amazon EKS cluster and configure kubectl

Let’s start by creating an Amazon EKS cluster with an Amazon EKS Managed Node Group to deploy the Aqua Security Platform onto. The Aqua Security Platform has a persistent storage element that needs to be stored on an Amazon Elastic Block Store (Amazon EBS) volume and can’t currently run on AWS Fargate. In this step, we’ll also create an AWS Fargate profile to deploy the Yelb application.

To follow along with this walkthrough, an eksctl cluster configuration file is provided in a sample repository. Clone the repository to your local machine:

$ git clone https://github.com/aws-samples/eks-fargate-aquasec/

Next, you’ll need to edit the cluster.yaml file, replacing the placeholder AWS account ID and AWS Region in the file with your own values.

# For Linux Users:
$ sed -i 's/111222333444/MY_AWS_ACCOUNT_ID/g' setup/cluster.yaml
$ sed -i 's/eu-west-1/MY_AWS_REGION/g' setup/cluster.yaml

# For Mac OS users:
$ sed -i "" 's/111222333444/MY_AWS_ACCOUNT_ID/g' setup/cluster.yaml
$ sed -i "" 's/eu-west-1/MY_AWS_REGION/g' setup/cluster.yaml
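If you’d like to rehearse the substitutions before editing the real file, you can run the same sed logic against a throwaway file containing the repository’s placeholders (the account ID 123456789012 and Region us-east-1 below are example stand-ins for your own values). As a side note, the -i.bak form, which writes a backup with a suffix, is accepted by both GNU and BSD sed, so it avoids the separate Linux and macOS invocations:

```shell
# Rehearse the substitutions on a throwaway file with the same placeholders.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# placeholders as found in setup/cluster.yaml
region: eu-west-1
account: "111222333444"
EOF

# -i.bak (in-place edit with a backup suffix) works on GNU and BSD sed alike.
sed -i.bak -e 's/111222333444/123456789012/g' -e 's/eu-west-1/us-east-1/g' "$tmp"

grep -q '123456789012' "$tmp" && grep -q 'us-east-1' "$tmp" && echo "substitutions applied"
```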

You are now ready to create the cluster.

$ eksctl create cluster -f setup/cluster.yaml

Note the cluster will take a few minutes to be created.

2. Install the Aqua Platform in the Amazon EKS cluster

Using helm we can easily deploy the Aqua Platform to the cluster we’ve just created. First, let’s add the Aqua helm repository to our workspace.

$ helm repo add aqua-helm https://helm.aquasec.com
$ helm repo update

We can validate that we are able to search the repository.

$ helm search repo aqua-helm/server --versions

Next, we’ll export some local variables to use in the deployment. The AQUA_ACCOUNT_USERNAME and AQUA_ACCOUNT_PASSWORD variables are the credentials associated with your Aqua Customer Success Portal, which were set up when you requested the trial license. The AQUA_PLATFORM_PASSWORD is a local password used to access the Aqua Platform running on Amazon EKS. Finally, the AQUA_PLATFORM_LICENSE was generated for you when you requested the trial license and can be found in the Aqua Customer Success Portal.

$ export AQUA_ACCOUNT_USERNAME=<Aqua registry username>
$ export AQUA_ACCOUNT_PASSWORD=<Aqua registry password>
$ export AQUA_PLATFORM_PASSWORD=<Admin password for the Aqua console>
$ export AQUA_PLATFORM_LICENSE=<License key obtained from Aqua>
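An empty value in any of these variables would produce a deployment with blank credentials or a missing license rather than an immediate error, so a quick guard before running helm can save a redeploy. This helper is our own sketch, not part of Aqua’s tooling:

```shell
# check_aqua_env: fail if any of the exported Aqua variables is empty.
check_aqua_env() {
  for var in AQUA_ACCOUNT_USERNAME AQUA_ACCOUNT_PASSWORD \
             AQUA_PLATFORM_PASSWORD AQUA_PLATFORM_LICENSE; do
    eval "val=\${$var}"   # indirect variable lookup, POSIX-compatible
    if [ -z "$val" ]; then
      echo "ERROR: $var is not set" >&2
      return 1
    fi
  done
  echo "all required Aqua variables are set"
}
```

Run check_aqua_env before the helm commands; it returns a non-zero exit code on the first empty variable.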

We can now deploy the Aqua Security Platform (i.e., Aqua Server) and the admission controller (i.e., KubeEnforcer) onto the Amazon EKS cluster using their respective helm charts.

$ helm upgrade \
    --install \
    --namespace aqua \
    aqua-server \
    aqua-helm/server \
    --create-namespace \
    --set imageCredentials.username=$AQUA_ACCOUNT_USERNAME,imageCredentials.password=$AQUA_ACCOUNT_PASSWORD,admin.token=$AQUA_PLATFORM_LICENSE,admin.password=$AQUA_PLATFORM_PASSWORD,global.platform=eks

$ helm upgrade \
    --install \
    --namespace aqua \
    aqua-kube-enforcer \
    aqua-helm/kube-enforcer \
    --set global.gateway.address=aqua-server-gateway-svc.aqua,global.gateway.port=8443,certsSecret.autoGenerate=true,global.imageCredentials.create=true,global.imageCredentials.username=$AQUA_ACCOUNT_USERNAME,global.imageCredentials.password=$AQUA_ACCOUNT_PASSWORD,serviceAccount.create=true

It will take several minutes for the pods to become ready. You can verify that the pods are up and running with kubectl.
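Rather than re-running kubectl get pods by hand, you can poll until everything reports Running. The loop below is a convenience sketch of ours (kubectl wait --for=condition=Ready is a built-in alternative):

```shell
# wait_for_ready NAMESPACE [TIMEOUT_SECONDS]: poll until every pod in the
# namespace reports Running, or give up after the timeout (default 600s).
# Note: an empty namespace also counts as ready here.
wait_for_ready() {
  ns=$1
  timeout=${2:-600}
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    not_ready=$(kubectl get pods -n "$ns" --no-headers 2>/dev/null \
      | awk '$3 != "Running"' | wc -l)
    if [ "$not_ready" -eq 0 ]; then
      echo "all pods in $ns are Running"
      return 0
    fi
    sleep 10
    elapsed=$((elapsed + 10))
  done
  echo "timed out waiting for pods in $ns" >&2
  return 1
}
```

For this step, wait_for_ready aqua blocks until the Aqua pods shown below are all Running.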

$ kubectl get pods -n aqua 
NAME                                         READY   STATUS    RESTARTS   AGE
aqua-kube-enforcer-6c7878d7cb-jsvj2          1/1     Running   0          10m
aqua-server-audit-database-c6966cf6f-jbhd7   1/1     Running   0          11m
aqua-server-console-67f98f9bdf-rj29q         1/1     Running   0          11m
aqua-server-database-84bd945fb7-k54hz        1/1     Running   0          11m
aqua-server-gateway-7bc484b6bc-ggz2q         1/1     Running   0          11m
starboard-operator-655ff4566d-j6x5s          1/1     Running   0          10m 

To access the Aqua Security Platform, use kubectl to retrieve the domain name of the newly provisioned Elastic Load Balancer that was deployed as part of the aqua-server helm chart.

$ AQUA_ELB=$(kubectl get svc aqua-server-console-svc --namespace aqua -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
$ AQUA_CONSOLE=http://$AQUA_ELB:8080
$ echo $AQUA_CONSOLE
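The load balancer can take a minute or two before it starts answering. A small polling loop (again, our own convenience sketch) tells you when the console is reachable:

```shell
# wait_for_console URL [TRIES]: poll until the console returns HTTP 200,
# checking every 10 seconds for up to TRIES attempts (default 30).
wait_for_console() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" 2>/dev/null || true)
    if [ "$code" = "200" ]; then
      echo "console is up at $url"
      return 0
    fi
    sleep 10
    i=$((i + 1))
  done
  echo "console did not respond after $tries attempts" >&2
  return 1
}

# Example: wait_for_console "$AQUA_CONSOLE"
```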

With a browser on your local machine, you can now log in to the Aqua Platform console running on your Amazon EKS cluster using the Aqua Console URL. Use the username administrator and the AQUA_PLATFORM_PASSWORD you set when deploying the helm chart.

Aqua Platform login screen

Figure 2: Aqua Platform login screen

3. Configure the Aqua KubeEnforcer

The next step is to configure the KubeEnforcer Kubernetes admission controller. This controller is responsible for injecting the MicroEnforcer into the Kubernetes pods.

After logging in to the Aqua Console, navigate to Settings -> Runtime Protection and select Switch To Custom Mode. Custom mode gives us complete control over the policies, which allows us to create a sample policy later on in this walkthrough.

Configuring custom runtime protection

Figure 3: Configuring custom runtime protection

Next, navigate to Administration and then Enforcers. Make sure that aqua-kube-enforcer is connected and assigned to the helm-default-ke-group enforcer group. The aqua-kube-enforcer pod is responsible for injecting the MicroEnforcer to the application pods to enforce the runtime security.

Aqua Security Platform Enforcer screen

Figure 4: Aqua Security Platform Enforcer screen

Next, we want to configure the automatic injection of the MicroEnforcer into the Kubernetes pod. Select the helm-default-ke-group on the Enforcers screen, as shown in Figure 4. Select the three dots on the right-hand side and choose Edit Group, then navigate to the Advanced Options. First, set the Enforcement Mode to Enforce; second, toggle Enable Pod Enforcer Injection. Choose Set to close the window, and then select Save in the top right corner to exit the KubeEnforcer editing page.

Aqua Security Platform Enforcer advanced settings

Figure 5: Aqua Security Platform Enforcer advanced settings

4. Configure the runtime policies

The Aqua MicroEnforcer communicates with the Aqua Security Platform to retrieve runtime policies. Before deploying the Yelb application, we’ll configure an example runtime policy in the platform’s console.

Navigate to Policies in the Aqua Security Platform console and select Runtime Policies. Choose Add Policy in the top right corner and then Container Runtime.

Aqua Security Platform runtime policies screen

Figure 6: Aqua Security Platform runtime policies screen

Name the runtime policy DemoPolicy. For the scope of the runtime policy, select Additional Scope Criteria, and scope it to the single Kubernetes namespace yelb-secure. (This is the namespace associated with an AWS Fargate profile and where we’ll deploy our sample application.) Finally, set the policy Status to Enabled and the Enforcement mode to Enforce, which means this policy will be enforced by the MicroEnforcer.

Aqua Security Platform runtime policies scope selection

Figure 7: Aqua Security Platform runtime policies scope selection

Next, scroll down to the Controls section. In the list of controls, select Executables Allowed and add /bin/bash and /bin/touch to the allowed executables. By default, this control blocks all executables, so we have now added bash and touch to an allow list. Select Save.

Aqua Security Platform runtime policies control selection

Figure 8: Aqua Security Platform runtime policies control selection

5. Deploy the sample Yelb application

Finally, we deploy the sample Yelb application onto AWS Fargate using a helm chart. First, create a Kubernetes namespace to place the Yelb objects into. This namespace is referenced in the AWS Fargate profile created in step 1.

$ kubectl create namespace yelb-secure

Next, we need to create a Kubernetes Secret with credentials for the Aqua container image registry where the MicroEnforcer container images are stored. Here we can reuse the environment variables set in step 2.

$ kubectl --namespace yelb-secure \
    create secret docker-registry aqua-registry \
    --docker-email=$AQUA_ACCOUNT_USERNAME \
    --docker-username=$AQUA_ACCOUNT_USERNAME \
    --docker-password=$AQUA_ACCOUNT_PASSWORD \
    --docker-server=registry.aquasec.com

Using helm, we deploy the yelb-secure application using a local chart stored in the sample repository that we cloned in step 1.

$ helm upgrade \
    --install \
    --namespace yelb-secure \
    yelb-secure \
    ./yelb-secure/

Once the application is deployed, you can access it through an Elastic Load Balancer pointing to the yelb-ui pod. You can view the pods and their status with the kubectl get pods command. In the following output, notice that there are two containers in each pod: one is the application container, and the other is the MicroEnforcer container that has been injected by the KubeEnforcer.

$ kubectl get pods --namespace yelb-secure
NAME                              READY   STATUS    RESTARTS   AGE
redis-server-58c8db79b-664j7      2/2     Running   0          93s
yelb-appserver-54fc9d4946-7ww72   2/2     Running   0          93s
yelb-db-6968c57d5-fs2jx           2/2     Running   0          93s
yelb-ui-7cf9b94746-sn95p          2/2     Running   0          93s
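If you want to confirm the injection by name, you can list the containers in each pod. The helper below is a sketch of ours; the injected sidecar container’s exact name depends on your Aqua version, so we don’t assume it here:

```shell
# pod_containers NAMESPACE: print each pod followed by its container names.
pod_containers() {
  ns=$1
  kubectl get pods -n "$ns" -o jsonpath='{range .items[*]}{.metadata.name}{": "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'
}

# Example: pod_containers yelb-secure
# should show two container names per pod.
```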

$ yelburl=$(kubectl get svc yelb-ui --namespace yelb-secure -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
$ echo $yelburl

In a browser, you can then browse to the Elastic Load Balancer URL.

Yelb application screen

Figure 9: Yelb application screen

6. Test the runtime policies

We can test the runtime security policies by using kubectl exec to open a shell in our application pods and run some commands.

The following kubectl command opens a shell in the yelb-ui pod. We then attempt to create a directory inside our container using the mkdir command (which should be blocked by our runtime policies), and we also attempt to create a file using the touch command (which should be allowed).

$ yelbui=$(kubectl get pods --namespace yelb-secure -l app=yelb-ui -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace yelb-secure exec -it $yelbui -c yelb-ui -- /bin/bash 
/# mkdir yelbsecure
bash: /bin/mkdir: Permission denied
/# touch hello	
/#
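The same check can be scripted for repeatable verification. This function is our own sketch (not Aqua tooling) and assumes the DemoPolicy from step 4 is enabled in Enforce mode:

```shell
# verify_policy POD: succeed only if mkdir is blocked (not on the allow
# list) while touch (explicitly allowed) succeeds inside the container.
verify_policy() {
  pod=$1
  if kubectl --namespace yelb-secure exec "$pod" -c yelb-ui -- mkdir /tmp/blocked 2>/dev/null; then
    echo "policy NOT enforced: mkdir succeeded" >&2
    return 1
  fi
  if ! kubectl --namespace yelb-secure exec "$pod" -c yelb-ui -- touch /tmp/allowed 2>/dev/null; then
    echo "unexpected: touch was blocked" >&2
    return 1
  fi
  echo "runtime policy enforced as expected"
}
```

Pass it the yelb-ui pod name retrieved above, e.g. verify_policy "$yelbui".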

To review what happened, we can navigate to an audit report in the Aqua Security Platform console. In the left-hand menu, select Security Reports and then Audit to get a detailed report on the blocked executables. Aqua has blocked the mkdir command, as shown in the following screenshot.

Aqua Security Platform audit screen for executables blocked

Figure 10: Aqua Security Platform audit screen for executables blocked

Cleaning up

To remove all of the resources created in this walkthrough, run the cleanup script provided in the sample repository:

$ bash setup/cleanup.sh

Conclusion

In this post, we demonstrated how Aqua’s Cloud Native Security Platform delivers runtime security for your workloads on Amazon EKS and AWS Fargate. To enhance your containers’ security posture on Amazon EKS and AWS Fargate, you can use this technology to control the processes and binaries that run within your container images. Furthermore, Aqua’s Platform and the MicroEnforcer technology can also enhance the security of your containers running on Amazon ECS and AWS Fargate. To learn more, please visit the Aqua Security landing page on the AWS Partner Network portal.

Hariharsudan Nandakumar

Hariharsudan Nandakumar is a Senior Technical Account Manager at Amazon Web Services. He is a containers specialist and works with enterprise customers, helping them with application modernization using containers. He is passionate about containers and Kubernetes. He spends his free time with his 7-year-old and playing computer games.

Mridula Grandhi

Mridula Grandhi is a Sr. Leader, Specialist Solutions Architect for AWS Compute. Mridula provides visionary leadership and strategic direction, helping customers transform and modernize their workloads on AWS. You can reach her on Twitter via @gmridula1 (DMs are open).

Josh Dean

Josh Dean has been at AWS for 7 years, first working in Premium Support, followed by his current role as a Senior Partner Solutions Architect, helping Networking and Security ISV Partners build their software and solutions on AWS.