How to capture application logs when using Amazon EKS on AWS Fargate
Update 12/05/20: EKS on Fargate now supports capturing application logs natively. Please see this blog post for details.
Amazon Elastic Kubernetes Service (Amazon EKS) now allows you to run your applications on AWS Fargate. You can run Kubernetes pods without having to provision and manage EC2 instances. Because Fargate runs every pod in a VM-isolated environment, the concept of DaemonSets currently doesn't exist in Fargate. Therefore, to capture application logs when using Fargate, you need to reconsider how and where your application emits logs. This tutorial shows how to capture and ship application logs for pods running on Fargate.
Kubernetes logging architecture
According to the Twelve-Factor App manifesto, which provides the gold standard for architecting modern applications, containerized applications should output their logs to stdout and stderr. This is also considered best practice in Kubernetes, and cluster level log collection systems are built on this premise.
The Kubernetes logging architecture defines three distinct levels:
- Basic level logging: the ability to grab pod logs using kubectl (e.g. kubectl logs myapp, where myapp is a pod running in my cluster).
- Node level logging: the container engine captures logs from the application's stdout and stderr, and writes them to a log file.
- Cluster level logging: building upon node level logging, a log capturing agent runs on each node. The agent collects logs on the local filesystem and sends them to a centralized logging destination like Elasticsearch or CloudWatch. The agent collects two types of logs:
  - Container logs captured by the container engine on the node.
  - System logs.
Kubernetes, by itself, doesn't provide a native solution to collect and store logs. It configures the container runtime to save logs in JSON format on the local filesystem. A container runtime, like Docker, redirects the container's stdout and stderr streams to a logging driver. In Kubernetes, container logs are written to /var/log/pods/*.log on the node. Kubelet and the container runtime write their own logs to /var/logs or to journald, in operating systems with systemd. Then cluster-wide log collector systems like Fluentd can tail these log files on the node and ship logs for retention. These log collector systems usually run as DaemonSets on worker nodes. But running DaemonSets is not the only way to aggregate logs in Kubernetes.
Shipping container logs to a centralized log aggregation system
There are three common approaches for capturing logs in Kubernetes:
- Node level agent, like a Fluentd daemonset. This is the recommended pattern.
- Sidecar container, like a Fluentd sidecar container.
- Directly writing to the log collection system. In this approach, the application is responsible for shipping the logs. This is the least recommended option because you will have to include the log aggregation system's SDK in your application code instead of reusing community-built solutions like Fluentd. This pattern also disobeys the principle of separation of concerns, according to which the logging implementation should be independent of the application. Doing so allows you to change the logging infrastructure without impacting or changing your application.
For pods running on Fargate, you need to use the sidecar pattern. You can run a Fluentd (or Fluent Bit) sidecar container to capture logs produced by your applications. This option requires that the application writes logs to the filesystem instead of stdout or stderr. A consequence of this approach is that you will not be able to use kubectl logs to view container logs. To make logs appear in kubectl logs, you can write application logs to both stdout and the filesystem simultaneously. In the tutorial below, I am using tee to write to a file and to stdout.
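For example, the container's entrypoint can pipe the application's output through tee. This is a minimal sketch; the application command and log path here are placeholders, not the exact command used by the demo:

```bash
# Write each log line to the container's stdout (visible with kubectl logs)
# and append it to a file that the Fluentd sidecar can tail.
./run-my-app 2>&1 | tee -a /var/log/containers/application.log
```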
We understand that, if your application logs to stdout/stderr, you may need to make changes to your application to capture cluster level logs in EKS on Fargate. We have heard from customers that this is undesirable, and we are working to create a solution that doesn't need application refactoring. Until then, if you want to run your workloads without managing EC2 instances, you can use the sidecar pattern to capture cluster level application logs. Note that, if you only need basic logging at the pod level, kubectl logs will do without any application refactoring.
Pods on Fargate get 20GB of ephemeral storage, which is available to all the containers that belong to a pod. You can configure your application to write logs to the local filesystem and instruct Fluentd to watch the log directory (or file). Fluentd will read events from the tail of the log files and send the events to a destination like CloudWatch for storage. Ensure that you rotate logs regularly to prevent them from consuming the entire volume.
Tutorial
The demo container produces logs to /var/log/containers/application.log. Fluentd is configured to watch /var/log/containers and send log events to CloudWatch. The pod also runs a logrotate sidecar container that ensures the container logs don't deplete the disk space. In the example, cron triggers logrotate every 15 minutes; you can customize the logrotate behavior using environment variables.
You will need the latest version of eksctl to create the cluster and Fargate profile.
The command below will create an EKS cluster. All pods in the kube-system and default namespaces will run on Fargate. There will be no EC2 nodes in this cluster.
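A command along these lines creates the cluster; the --fargate flag tells eksctl to create a Fargate profile for the default and kube-system namespaces instead of a managed node group. The region here is an assumption, and the cluster name matches the log group shown later in this post:

```bash
eksctl create cluster \
  --name eksfargate-logging-demo \
  --region us-east-1 \
  --fargate
```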
Create a new namespace that will run the demo application.
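For example:

```bash
kubectl create namespace logdemo
```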
Create a new Fargate profile for the logdemo namespace. This tells EKS to run the pods in the logdemo namespace on Fargate.
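With eksctl, this looks roughly like the following (the profile name is an arbitrary choice):

```bash
eksctl create fargateprofile \
  --cluster eksfargate-logging-demo \
  --namespace logdemo \
  --name logdemo-profile
```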
Create an IAM OIDC identity provider for the cluster.
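With eksctl:

```bash
eksctl utils associate-iam-oidc-provider \
  --cluster eksfargate-logging-demo \
  --approve
```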
Create an IAM role and a Kubernetes service account for Fluentd. This role permits the Fluentd container to write log events to CloudWatch.
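One way to do this with eksctl is shown below. Attaching the AWS managed CloudWatchAgentServerPolicy is sufficient for writing to CloudWatch Logs, although the original setup may use a narrower custom policy:

```bash
eksctl create iamserviceaccount \
  --cluster eksfargate-logging-demo \
  --namespace logdemo \
  --name fluentd \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --approve
```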
You can review the service account created in the previous step.
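For example, the following shows the service account along with the eks.amazonaws.com/role-arn annotation that eksctl adds:

```bash
kubectl describe serviceaccount fluentd -n logdemo
```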
Create a manifest for the Fluentd ClusterRole, RoleBinding, and ConfigMap.
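The following is a minimal sketch of such a manifest; the RBAC rules, resource names (fluentd-role, fluentd-config), and the fluent.conf contents are assumptions rather than the exact manifest from the original post. The Fluentd configuration tails files under /var/log/containers and forwards events to CloudWatch Logs with the cloudwatch_logs output plugin, reading the region and cluster name from environment variables set on the sidecar:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-role
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: fluentd-role-binding
  namespace: logdemo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-role
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: logdemo
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: logdemo
data:
  fluent.conf: |
    # Tail the application log files written by the demo container.
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers/fluentd.log.pos
      tag app.*
      read_from_head true
      <parse>
        @type none
      </parse>
    </source>
    # Ship events to CloudWatch Logs (requires the cloudwatch_logs plugin).
    <match app.**>
      @type cloudwatch_logs
      region "#{ENV['REGION']}"
      log_group_name "/aws/containerinsights/#{ENV['CLUSTER_NAME']}/springapp"
      log_stream_name "fluentd-sidecar"
      auto_create_stream true
    </match>
```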
Apply the manifest.
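Assuming the manifest was saved as fluentd.yaml:

```bash
kubectl apply -f fluentd.yaml
```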
With the resource names from the sketch above, the result should look like this:
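```
clusterrole.rbac.authorization.k8s.io/fluentd-role created
rolebinding.rbac.authorization.k8s.io/fluentd-role-binding created
configmap/fluentd-config created
```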
Create a manifest for the sample application. The pod contains an initContainer that copies the Fluentd ConfigMap to /fluentd/etc/. This directory is mounted in the Fluentd container.
Note: Edit the values of REGION, AWS_REGION, and CLUSTER_NAME to match your environment.
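The following is a minimal sketch of such a manifest. The container images (including the logrotate sidecar image), the demo application's command, and several field values are placeholders rather than the exact manifest from the original post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springapp
  namespace: logdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: springapp
  template:
    metadata:
      labels:
        app: springapp
    spec:
      # The IAM-enabled service account created with eksctl earlier.
      serviceAccountName: fluentd
      initContainers:
        # Copy the Fluentd ConfigMap into the emptyDir that the Fluentd
        # container mounts at /fluentd/etc/.
        - name: copy-fluentd-config
          image: busybox
          command: ["sh", "-c", "cp /config-volume/* /fluentd/etc/"]
          volumeMounts:
            - name: config-volume
              mountPath: /config-volume
            - name: fluentd-etc
              mountPath: /fluentd/etc
      containers:
        # Demo application: writes to stdout and to a file via tee.
        - name: app
          image: busybox  # placeholder for the demo application image
          command: ["sh", "-c",
            "while true; do date | tee -a /var/log/containers/application.log; sleep 5; done"]
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/containers
        # Fluentd sidecar: tails the log file and ships events to CloudWatch.
        - name: fluentd
          image: <fluentd-image-with-cloudwatch-plugin>  # placeholder
          env:
            - name: REGION
              value: us-east-1                  # edit to match your environment
            - name: AWS_REGION
              value: us-east-1                  # edit to match your environment
            - name: CLUSTER_NAME
              value: eksfargate-logging-demo    # edit to match your environment
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/containers
            - name: fluentd-etc
              mountPath: /fluentd/etc
        # logrotate sidecar: rotates the application log so it doesn't fill
        # the pod's ephemeral storage. Image and env vars are placeholders.
        - name: logrotate
          image: <logrotate-sidecar-image>
          env:
            - name: CRON_SCHEDULE
              value: "*/15 * * * *"
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/containers
      volumes:
        - name: app-logs
          emptyDir: {}
        - name: fluentd-etc
          emptyDir: {}
        - name: config-volume
          configMap:
            name: fluentd-config
```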
Deploy the sample application with the following command.
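Assuming the manifest was saved as springapp.yaml:

```bash
kubectl apply -f springapp.yaml
```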
You can see the written logs using the AWS CLI or the CloudWatch console. Using the AWS CLI, list the log streams in the demo log group and then fetch events from one of them. The group and stream names below assume the configuration sketched above:
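```bash
aws logs describe-log-streams \
  --log-group-name /aws/containerinsights/eksfargate-logging-demo/springapp

aws logs get-log-events \
  --log-group-name /aws/containerinsights/eksfargate-logging-demo/springapp \
  --log-stream-name fluentd-sidecar
```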
You should see log events generated by the demo container.
To view in the CloudWatch console, search for log group “/aws/containerinsights/eksfargate-logging-demo/springapp.”
Conclusion
If you want to use Fargate to run your pods, you will need to use the sidecar pattern to capture application logs. Consider writing to stdout and a file simultaneously so you can view logs using kubectl. You can still use the daemonset pattern for applications running on EC2 nodes.
We are working to provide a native solution for application logging for EKS on Fargate. The FireLens on EKS Fargate issue on the AWS Containers Roadmap includes the proposal we’re considering. Leave us a comment, we would love to hear your feedback.