AWS Open Source Blog
Running secure workloads on EKS using Fairwinds Polaris
Getting configurations right, especially at scale, can be a challenging task in cloud-native land. Automation helps you to make that task more manageable. In this guest post from EJ Etherington, CTO for Fairwinds, we look at an open source tool that allows you to check your EKS cluster setup, providing you with a graphical overview of the overall cluster state and detailed status, security, and health information.
Kubernetes with EKS
Amazon Elastic Kubernetes Service (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. As we all know, the easier it is to deploy workloads to Kubernetes, the easier it is to rush to production without understanding – let alone implementing – best practices for container security, pod resource configuration, etc. At Fairwinds, we wanted to provide an easy way to check all the most important aspects of our Kubernetes workload configurations, continually audit our workloads, block anyone from uploading a configuration that doesn’t adhere to approved guidelines, and use CI/CD to integrate into our deployment workflow. So we have built and open sourced a new project: Fairwinds Polaris.
Polaris from Fairwinds
Creating clusters is easy, but running stable, secure workloads is hard – this is the kind of thing Polaris can help with.
We’ve seen time after time that seemingly small missteps in deployment configuration can lead to much bigger issues – the kind that wake people up at night. Something as simple as forgetting to configure resource requests can break auto scaling or even cause workloads to run out of resources. Small configuration issues can balloon into production outages. Polaris aims to make these problems more easily identifiable and preventable.
As a company, Fairwinds looks to help companies succeed in running Kubernetes at scale, in production. Polaris is one way we make that easier for both our customers and now, as an open source project, for the community. Whether you’re a developer looking to improve the standards of your deployments, or you’re the head of operations looking to give insight to your technical leaders, Polaris provides the information you need.
It’s not just resources
How do you ensure that all your third-party Kubernetes packages are configured as securely and resiliently as possible? Polaris checks more than just resources: it also audits container health checks, image tags, networking, and security settings (to name a few).
Polaris can help you avoid configuration issues that affect the stability, reliability, scalability, and security of your applications. It provides a simple way to identify shortcomings in your deployment configuration and prevent future issues. With Polaris, you can sleep soundly knowing that your applications are deployed with a set of well-tested standards.
Polaris has four key modes:
- A dashboard that provides an overview of how well current deployments are configured within a cluster.
- A CLI utility that provides YAML output similar to the dashboard.
- An experimental validating webhook that can prevent any future deployments that do not live up to a configured standard.
- A YAML file check, handy when you aren’t ready to commit to the webhook but don’t want any misconfigurations to slip through your CI/CD system.
The Polaris dashboard
Running Polaris inside the cluster lets the dashboard see and report on all of your pods.
First, point your kubeconfig to your EKS cluster:
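With the AWS CLI installed, something like the following does the trick (the cluster name and region here are placeholders for your own):

```bash
# Update the local kubeconfig to point at your EKS cluster
aws eks update-kubeconfig --name my-cluster --region us-east-1
```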
Then install the dashboard:
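One option is to apply the released dashboard manifest with kubectl; the exact URL depends on the Polaris release, so check the project’s installation docs for the current one:

```bash
# Apply the Polaris dashboard manifest
# (this URL follows the pattern used by recent releases)
kubectl apply -f https://github.com/FairwindsOps/polaris/releases/latest/download/dashboard.yaml
```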
Alternatively, with Helm:
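Here is a minimal sketch using the Fairwinds chart repository (the repo URL, release name, and namespace follow the chart’s documented defaults; adjust them to suit your setup):

```bash
# Add the Fairwinds chart repo and install Polaris into its own namespace
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm upgrade --install polaris fairwinds-stable/polaris \
  --namespace polaris --create-namespace
```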
Then use kubectl port-forward to access the dashboard:
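For example (the service name and namespace below assume the install above; yours may differ):

```bash
# Forward local port 8080 to the Polaris dashboard service
kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80
```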
Once the port-forward is running, visit http://localhost:8080/ to view the dashboard.
Now you have a dynamically updating ‘grade’ for how well your cluster workloads are configured against the checks listed above. You can work from this auto-updating list to improve your workload configurations and help ensure that they will be stable, scalable, and resilient. Fix the errors shown in the dashboard, and your score will refresh automatically when you reload the page.
The dashboard includes a high-level summary of checks by category, annotated with helpful information:
You will also see deployments broken out by namespace with specific misconfigurations listed:
A note about Polaris’ out-of-the-box defaults
Polaris’ default settings for configuration analysis are very conservative, so don’t be surprised if your score is lower than you expected – a key goal for Polaris was to aim for great configuration by default. If the defaults we’ve included are too strict for your use case, you can easily adjust them as part of the deployment configuration to better suit your workloads.
In releasing Polaris, we’ve included thorough documentation for the checks we’ve chosen to include. Each check includes a link to corresponding documentation that explains why we think it is important, with links to further resources around the topic.
The Polaris CLI
What if you want to check your cluster workloads, but you don’t want to deploy another app into your cluster? The Polaris CLI is just what you need. With the CLI you can view the dashboard locally or get YAML output. Learn more about this in the Polaris Installation and Usage docs.
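As a rough sketch with a recent Polaris release (flag names have changed across versions, so check polaris --help for yours):

```bash
# Serve the dashboard locally against your current kubeconfig context
polaris dashboard --port 8080

# Or print the audit results as YAML instead
polaris audit --format yaml
```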
The Polaris webhook
While the dashboard and CLI provide an overview of the state of your current deployment configuration, the webhook mode provides a way to enforce a higher standard for all future deployments to your cluster.
Once you’ve had a chance to address any issues identified by the dashboard, you can deploy the Polaris webhook to ensure that configuration never slips below that standard again. When deployed in your cluster, the webhook will prevent any deployments that have any “error” level configuration violations.
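If you installed via the Helm chart, one way to turn the webhook on is a chart value (a sketch; the value name comes from the chart’s documented options and may change between chart versions):

```bash
# Enable the validating webhook on an existing Polaris Helm release
helm upgrade polaris fairwinds-stable/polaris \
  --namespace polaris \
  --set webhook.enable=true
```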
Although we’re very excited about the potential for this webhook, we’re still working on more thorough testing before we’re ready to consider it production-ready. This is still an experimental feature, and part of a brand new open source project. Because it does have the potential to prevent updates to your deployments, use it with caution.
Use Polaris in your CI/CD pipelines for your EKS clusters
Install Polaris in your CI/CD image and run:
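A minimal sketch, assuming your manifests live in a deploy/ directory and a recent Polaris release (older versions spell the exit-code flag differently, so verify against your version’s docs):

```bash
# Audit local manifests and exit non-zero on any danger-level violations,
# which lets the CI job fail before anything reaches the cluster
polaris audit --audit-path ./deploy/ --set-exit-code-on-danger
```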
This gives you an overview of the health of what you’re about to deploy before it ever reaches the cluster.
Conclusion
Amazon EKS is a great way to start using Kubernetes, but to help ensure that your workloads are configured properly, you can run Polaris in the mode that works best for your use case. For more information, check out the Polaris project on GitHub. Fairwinds is always looking for help contributing to our open source projects as we look to improve Kubernetes adoption and build great tools for the ecosystem — reach out to Fairwinds if you have questions or want to get involved!
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.