Containers

Analyze Kubernetes container logs using Amazon S3 and Amazon Athena

Logs are crucial to understanding any system’s behavior and performance. For postmortem analysis of software, logs, along with traces and metrics, can be the closest thing to having a time machine. A dilemma many developers have traditionally faced is what to log and what not to. This predicament has led to too many logs or, worse, not enough. Historically, high storage costs forced developers to reduce the level of detail captured in application logs. But cloud computing has reduced the cost of storage significantly. Services like Amazon S3 offer customers cost-efficient and durable storage for virtually unlimited amounts of data, which can then be analyzed as-is, at scale, using Amazon Athena and Amazon Redshift Spectrum.

We will demonstrate how you can capture Kubernetes application logs using Fluent Bit, store them in Amazon S3, and analyze them using Amazon Athena. At the crux of the solution is Fluent Bit, an open source log processor and forwarder that allows you to collect logs from different sources, and unify and send them to multiple destinations. Fluent Bit plugins support various AWS and partner monitoring solutions, including Amazon CloudWatch, Amazon Kinesis, Datadog, Splunk, and Amazon S3.

For log analysis, we use Amazon Athena, an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up, manage, or pay for. You are charged for the amount of data scanned by each query you run. You have the ability to analyze hundreds of terabytes of data without any upfront or recurring infrastructure costs.

Architecture

The reference architecture we propose in this post uses Fluent Bit to collect container logs produced by a sample Python application running in an Amazon EKS cluster. Fluent Bit runs as a DaemonSet and ships logs to an S3 bucket for permanent retention. Once the logs are available in Amazon S3, we use Amazon Athena to analyze them.

You will need the following to complete the tutorial:

  • An Amazon EKS cluster
  • eksctl, kubectl, and the AWS CLI installed and configured
  • Docker

Let’s start by setting a few environment variables:

export EKS_CLUSTER=<<The name of your EKS cluster>>
export AWS_REGION=<<us-east-1 or your AWS Region>>
export S3_BUCKET=<<eks-fluentbit-logs-yourusername>>

You can use the AWS CLI to find out the name of your EKS cluster by listing EKS clusters in your AWS Region:

aws eks list-clusters

Deploy the sample application

The post provides a mock e-commerce ordering application that generates dummy logs that contain sales records in JSON-encoded format. To use the sample app, you can create a Docker image and push it to an ECR repository in your account.

Create a Python script by running the command:

cat > ordering_app.py <<EOF
#!/usr/bin/python
import random, datetime, time

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")
shipping_types = ("Free", "3-Day", "2-Day")
product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

# Emit one fake JSON-encoded sales record every two seconds
while True:
    item_id = random.randint(1, 100)
    state = random.choice(states)
    shipping_type = random.choice(shipping_types)
    product_category = random.choice(product_categories)
    quantity = random.randint(1, 4)
    referral = random.choice(referrals)
    price = random.randint(1, 100)
    order_date = datetime.date(2020, random.randint(1, 12), random.randint(1, 28)).isoformat()
    # flush=True writes each record immediately, even when stdout is a
    # pipe (as it is inside a container), so logs aren't delayed by buffering
    print("{\"item_id\":\"%d\",\"product_category\":\"%s\",\"price\":\"%d\",\"quantity\":\"%d\",\"order_date\":\"%s\",\"state\":\"%s\",\"shipping_type\":\"%s\",\"referral\":\"%s\"}" % (item_id,
        product_category, price, quantity, order_date,
        state, shipping_type, referral), flush=True)
    time.sleep(2)
EOF

Create a Dockerfile:

cat > Dockerfile <<EOF
FROM python:3
ADD ordering_app.py /
CMD [ "python", "./ordering_app.py" ]
EOF

Build the image:

docker build -t logging-demo-app .

Create an ECR repository and push the image:

# Create an ECR repository
ECR_URI=$(aws ecr create-repository \
    --repository-name logging-demo-app \
    --query 'repository.repositoryUri' \
    --output text)

# Log in to ECR
aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_URI

# Tag the image
docker tag logging-demo-app:latest $ECR_URI

# Push the image
docker push $ECR_URI

Fluent Bit IAM role configuration

In this demo, we want to analyze logs produced by the sample application. Suppose we are interested in analyzing the log entries for sales in California. We can use Fluent Bit to filter log records with CA in the state field and send them to an S3 bucket, while the rest of the logs go to CloudWatch Logs.

We will get into how we filter logs using Fluent Bit shortly. First, the Fluent Bit pods need an IAM role to be able to write logs to the S3 bucket and CloudWatch Logs. We have to create and associate an OIDC provider with the EKS cluster so pods can assume IAM roles. eksctl can automate this with a single command:

eksctl utils associate-iam-oidc-provider \
    --cluster $EKS_CLUSTER \
    --approve

Now, create a Kubernetes service account in the cluster. This service account has an associated IAM role with permissions to write to S3 buckets and CloudWatch Logs. In production, you should create a fine-grained IAM policy that only permits writes to a specific S3 bucket.

eksctl create iamserviceaccount \
    --name fluent-bit \
    --namespace kube-system \
    --cluster $EKS_CLUSTER \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchFullAccess \
    --approve --override-existing-serviceaccounts
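As a sketch of what a scoped-down policy could look like in production: the statement below permits writes only to one bucket and one log group. The bucket name is a placeholder you would replace, and the log-group ARN assumes the fluent-bit-cloudwatch-demo group created later in this post.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your-log-bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:fluent-bit-cloudwatch-demo*"
    }
  ]
}
```

You would attach this policy to the service account with eksctl's --attach-policy-arn flag instead of the two AWS managed FullAccess policies.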

Deploy Fluent Bit

Create the required ClusterRole and ClusterRoleBinding for Fluent Bit:

kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-ecs-fluent-bit-daemon-service/mainline/eks/eks-fluent-bit-daemonset-rbac.yaml

Fluent Bit stores its configuration in a Kubernetes ConfigMap. We need to create a Fluent Bit ConfigMap that includes log input and output details. The [INPUT] section points Fluent Bit at the local filesystem directory that stores container logs, which is /var/log/containers/*.log in Kubernetes. The [OUTPUT] section defines the destinations to which Fluent Bit transmits container logs for retention. In this scenario, the outputs are Amazon S3 and CloudWatch Logs.

Fluent Bit supports multiple input and output streams. Using tags, you can route input streams to various output destinations instead of sending all logs to a single destination. As an example, the Fluent Bit ConfigMap below has one input and two outputs. The input matches any log file in /var/log/containers/. We use Fluent Bit stream processing to inspect each log entry and, depending on whether state equals CA, send it to one of the two destinations.

The sample application generates fake sales records and logs them in this format:

{"item_id":"39","product_category":"Office","price":"78","quantity":"1",
"order_date":"2020-10-11","state":"AL","shipping_type":"3-Day","referral":"Repeat Customer"} 

We are interested in analyzing log entries where the state key has ‘CA’ as its value. We create two Fluent Bit Stream Processors (called STREAM_TASK in the Fluent Bit ConfigMap): the first processor looks for state = ‘CA’ and sends matching records to the S3 bucket. The second processor looks for state != ‘CA’, and sends matching records to CloudWatch Logs. If you want to send all records to CloudWatch irrespective of the content, you can configure the output to match the input’s tag like this:

apiVersion: v1
data:
  fluent-bit.conf: |
      [INPUT]
        Name              tail
        Tag               containerlogs <-- Input Tag
        Path              /var/log/containers/*.log
        ...
      [OUTPUT]
        Name cloudwatch_logs
        Match containerlogs <-- Input Tag
        ...

You can customize these rules to fit your scenario. For example, you can send DEBUG-level logs to S3 and others to CloudWatch, as explained in Splitting an application’s logs into multiple streams: a Fluent tutorial.
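The routing performed by the two stream processors boils down to a predicate on each record's state field. As a purely illustrative sketch (this is not part of the pipeline, just plain Python mimicking the stream tasks), the logic is:

```python
import json

def route(record_json):
    """Return the tag a record would be routed to, mimicking the two
    Fluent Bit stream tasks: states.ca goes to S3, states.notca to
    CloudWatch Logs."""
    record = json.loads(record_json)
    return "states.ca" if record.get("state") == "CA" else "states.notca"

# Example records in the sample application's log format
ca_record = '{"item_id":"39","state":"CA","product_category":"Office"}'
nc_record = '{"item_id":"64","state":"NC","product_category":"Household"}'

print(route(ca_record))  # -> states.ca
print(route(nc_record))  # -> states.notca
```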

Create a config map for Fluent Bit:

echo "
apiVersion: v1
data:
  fluent-bit.conf: |
    [SERVICE]
        Parsers_File  parsers.conf
        Streams_File  streams.conf
    [INPUT]
        Name              tail
        Tag               order
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     256MB
        DB.locking        true
        Rotate_Wait       30
        Docker_Mode       On
        Docker_Mode_Flush 10
        Skip_Long_Lines   On
        Refresh_Interval  10
    [FILTER]
        Name parser
        Match order
        Parser dummy
        Key_Name log
        Reserve_Data True
    [OUTPUT]
        Name s3
        Match states.ca
        bucket $S3_BUCKET
        region $AWS_REGION
        store_dir /var/log/fluentbit
        total_file_size 30M
        upload_timeout 3m
    [OUTPUT]
        Name cloudwatch_logs
        Match states.notca
        region $AWS_REGION
        log_group_name fluent-bit-cloudwatch-demo
        log_stream_prefix from-fluent-bit-
        auto_create_group On
  parsers.conf: |
    [PARSER]
        Name   dummy
        Format json
    [PARSER]
        Name   docker
        Format json
  streams.conf: |
    [STREAM_TASK]
        Name    state_filter
        Exec    CREATE STREAM CA_SALES WITH (tag='states.ca') AS SELECT * FROM TAG:'order*' WHERE state = 'CA';
    [STREAM_TASK]
        Name    state_filter_notca
        Exec    CREATE STREAM NOTCA_SALES WITH (tag='states.notca') AS SELECT * FROM TAG:'order*' WHERE state <> 'CA';
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: fluentbit
  name: fluent-bit-config
  namespace: kube-system
" | kubectl apply -f -

In a real-world use case, you can have many inputs and outputs. For example, you can send low priority raw logs to an S3 bucket and send other logs to Amazon CloudWatch, or any other Fluent Bit supported destination.

The Fluent Bit S3 output plugin buffers data locally in its store_dir, which we have set to a directory on the node’s filesystem. We do this so that data will still be sent even if the Fluent Bit pod suddenly stops and restarts. We’ve set maximum file size and a timeout so that each uploaded file is never more than 30 MB, and data is uploaded at least once every 3 minutes (even if less than 30 MB have been received). Fluent Bit uses multipart uploads to send larger files in chunks; hence, only a minimal amount of data is buffered at any point in time.
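The interaction between total_file_size and upload_timeout can be pictured as an either/or trigger. The sketch below is our illustration of that behavior (not Fluent Bit's actual code): a buffered chunk is shipped as soon as either threshold is crossed.

```python
def should_upload(buffered_bytes, seconds_since_last_upload,
                  max_size=30 * 1024 * 1024, timeout=180):
    """Upload when the buffer reaches total_file_size (30 MB) or when
    upload_timeout (3 minutes) has elapsed, whichever comes first."""
    return buffered_bytes >= max_size or seconds_since_last_upload >= timeout

print(should_upload(31 * 1024 * 1024, 10))  # size trigger -> True
print(should_upload(1024, 200))             # timeout trigger -> True
print(should_upload(1024, 10))              # keep buffering -> False
```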

The next step is to create the Fluent Bit DaemonSet, which runs a pod on each node in the Kubernetes cluster; this pod monitors the node’s filesystem for logs and ships them to the destination.

We need to find out the image repository and version to create the Fluent Bit DaemonSet. We can use AWS Systems Manager to get this information:

export Fluent_Bit_image=$(aws ssm get-parameter --name \
      $(aws ssm get-parameters-by-path \
      --path /aws/service/aws-for-fluent-bit/ \
      --query 'Parameters[*].Name' \
      --output yaml | sort | sed 'x;$!d' | cut -d ' ' -f2) \
    --query 'Parameter.Value' --output text)

This command requires AWS CLI version 2. If you’re using AWS CLI version 1 and the command above doesn’t work, you can find the image version and repository by following the instructions in the AWS for Fluent Bit GitHub repository.

Create the Fluent Bit DaemonSet:

echo "
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentbit
  namespace: kube-system
  labels:
    app.kubernetes.io/name: fluentbit
spec:
  selector:
    matchLabels:
      name: fluentbit
  template:
    metadata:
      labels:
        name: fluentbit
    spec:
      serviceAccountName: fluent-bit
      containers:
      - name: aws-for-fluent-bit
        image: $Fluent_Bit_image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
        - name: mnt
          mountPath: /mnt
          readOnly: true
        resources:
          limits:
            memory: 256Mi
          requests:
            cpu: 500m
            memory: 100Mi
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      - name: mnt
        hostPath:
          path: /mnt
" | kubectl apply -f -

Verify that Fluent Bit Pods are running:

kubectl -n kube-system get ds fluentbit              
---
NAME        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentbit   3         3         3       3            3           <none>          1m

Generate logs

Now that the logging infrastructure is operational, it’s time to test it by generating logs. Apply the manifest below to create a deployment with three pods from the image you pushed to your ECR repository earlier.

echo "
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ordering-app
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ordering-app
  template:
    metadata:
      labels:
        app: ordering-app
    spec:
      containers:
        - name: ordering-app
          image: $ECR_URI
          ports:
            - containerPort: 3000
" | kubectl apply -f - 

Verify that the sample application’s pods are running:

kubectl get deployments.apps ordering-app  
---        
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
ordering-app   3/3     3            3           1m

Once the sample application pods are running, you can check Fluent Bit logs to verify that logs are being pushed to S3 successfully.

for p in $(kubectl get pods \
    --namespace=kube-system \
    -l name=fluentbit -o name \
    ); \
do kubectl logs --namespace=kube-system $p; \
done | grep output:s3

The output should look like this:

[2020/10/27 21:21:06] [ info] [output:s3:s3.0] Successfully uploaded object /fluent-bit-logs/order/2020/10/27/21/21/00-objectCYrCq319

Query logs using Athena

Fluent Bit sends the logs that the sample application creates to the S3 bucket. Below is the folder structure of the S3 bucket; Fluent Bit stores logs under prefixes partitioned by date and time.

eks-fluent-bit-logs
└── fluent-bit-logs
    └── states.ca
        └── 2020 <-- Year
            └── 10 <-- Month
                └── 27 <-- Date
                    ├── 21 <-- Hour
                    │   ├── 02 <-- Minute
                    │   │   ├── 25-<<log file>>

S3 bucket contents
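Given the layout above, the object key prefix for a record follows directly from its timestamp. This small helper (hypothetical, just to make the layout concrete) builds the same year/month/day/hour/minute path shown in the tree:

```python
from datetime import datetime

def s3_prefix(tag, ts):
    """Build the date/time-partitioned prefix used in the bucket:
    fluent-bit-logs/<tag>/YYYY/MM/DD/HH/MM/"""
    return "fluent-bit-logs/%s/%s/" % (tag, ts.strftime("%Y/%m/%d/%H/%M"))

print(s3_prefix("states.ca", datetime(2020, 10, 27, 21, 2)))
# -> fluent-bit-logs/states.ca/2020/10/27/21/02/
```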

Amazon Athena allows you to query data in S3 without setting up or maintaining any infrastructure. With Athena, you can:

  • Query data using ANSI SQL. You don’t need to learn a new query language.
  • Perform complex analysis including large joins, window functions, and arrays.
  • Cost-optimize storage. You can store data in S3 rather than a costly database.

To analyze logs stored in S3, we now need to navigate to the Amazon Athena console and create a table. But before that, let’s take a look at what happens to log entries as they go through different systems.

The sample application logs transaction details in JSON format to the standard output (stdout):

{
  "item_id":"15",
  "product_category":"Garden",
  "price":"87","quantity":"4",
  "order_date":"2020-03-19",
  "state":"RI",
  "shipping_type":"2-Day",
  "referral":"Repeat Customer"
}

When the container runtime saves those logs to the local filesystem, it adds metadata to the application’s log entries, and the transformed log entry looks like this:

{
   "log":"{\"item_id\":\"69\",
     \"product_category\":\"Garden\",
     \"price\":\"87\",
     \"quantity\":\"4\",
     \"order_date\":\"2020-03-19\",
     \"state\":\"RI\",
     \"shipping_type\":\"2-Day\",
     \"referral\":\"Repeat Customer\"}\n",
   "stream":"stdout",
   "time":"2020-11-11T00:17:35.495898374Z"
}
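Notice that the application's record is now a JSON string nested inside the log field. Unwrapping it requires a second pass of JSON parsing, which is what the Fluent Bit parser filter in our ConfigMap does. A plain-Python equivalent (illustrative only) looks like this:

```python
import json

# A container-runtime log entry: the app's JSON record is escaped
# inside the "log" field
wrapped = json.loads(
    '{"log":"{\\"item_id\\":\\"69\\",\\"state\\":\\"RI\\"}\\n",'
    '"stream":"stdout","time":"2020-11-11T00:17:35.495898374Z"}'
)

# Second parse recovers the application's own fields
record = json.loads(wrapped["log"])
print(record["state"])    # -> RI
print(wrapped["stream"])  # -> stdout
```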

Then, Fluent Bit adds its metadata to each log entry, so the same log entry from above looks like this in the file saved on the S3 bucket.

{
   "date":"2020-11-10T21:45:25.901236Z",
   "item_id":"69",
   "product_category":"Garden",
   "price":"87",
   "quantity":"4",
   "order_date":"2020-03-19",
   "state":"RI",
   "shipping_type":"2-Day",
   "referral":"Repeat Customer",
   "stream":"stdout",
   "time":"2020-11-11T00:17:35.495898374Z"
}

To analyze the logs stored in the S3 bucket, we need to parse the log entries and convert their fields into rows and columns. Athena uses a SerDe (Serializer/Deserializer) to interact with data in different formats (the Athena documentation includes a list of supported SerDes). Since the log entries are JSON-encoded, we can use the OpenX JSON SerDe.

Open Amazon Athena in the AWS Management Console and create an Athena table using DDL:

CREATE EXTERNAL TABLE `eks_fb_s3`(
  `date` string, 
  `item_id` string, 
  `product_category` string, 
  `price` string, 
  `quantity` int, 
  `order_date` date, 
  `state` string, 
  `shipping_type` string, 
  `referral` string
  )
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://<<YOUR S3 BUCKET>>/fluent-bit-logs/states.ca/'
TBLPROPERTIES ('has_encrypted_data'='false')

Ensure that you enter the name of your S3 bucket in the LOCATION clause.

The command above creates a table called eks_fb_s3. You can see a sample of the data in the eks_fb_s3 table by running the following query:

SELECT * from eks_fb_s3
LIMIT 10
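Beyond SELECT *, the same SQL style supports aggregations over the log data, such as revenue per product category. The snippet below runs an analogous query against a few in-memory rows using Python's sqlite3, purely to illustrate the shape of the query; Athena's engine is Presto-based, but this aggregate syntax is common to both. The sample rows are made up.

```python
import sqlite3

# In-memory stand-in for the eks_fb_s3 table; price is a string,
# matching the table definition above
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eks_fb_s3 (product_category TEXT, price TEXT, quantity INT)")
conn.executemany("INSERT INTO eks_fb_s3 VALUES (?, ?, ?)",
                 [("Garden", "87", 4), ("Office", "78", 1), ("Garden", "10", 2)])

# Revenue per category; cast price before multiplying
query = """
SELECT product_category, SUM(CAST(price AS INTEGER) * quantity) AS revenue
FROM eks_fb_s3
GROUP BY product_category
ORDER BY revenue DESC
"""
for category, revenue in conn.execute(query):
    print(category, revenue)
# Garden 368
# Office 78
```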

Notice that the table only contains records where state is ‘CA’. Meanwhile, the log entries for other states are sent to CloudWatch Logs. Head back to the AWS CLI and run the command below to see the application logs in CloudWatch:

aws logs get-log-events \
    --log-group-name "fluent-bit-cloudwatch-demo" \
    --log-stream-name from-fluent-bit-states.notca

The result shouldn’t contain any records for sales in California.

events:
- ingestionTime: 1605140197756
  message: '{"item_id":"64","product_category":"Household","price":"7","quantity":"4","order_date":"2020-04-22","state":"NC","shipping_type":"3-Day","referral":"Repeat Customer","stream":"stdout","time":"2020-11-12T00:16:00.574121475Z"}'
  timestamp: 1605140197665
- ingestionTime: 1605140197756
  message: '{"item_id":"2","product_category":"Garden","price":"100","quantity":"4","order_date":"2020-05-15","state":"AR","shipping_type":"2-Day","referral":"Friend/Colleague","stream":"stdout","time":"2020-11-12T00:16:00.574126243Z"}'
  timestamp: 1605140197665

Cleanup

Use the following commands to delete resources created during this post:

# Delete the sample application's deployment
kubectl delete deployment ordering-app
# Delete the Fluent Bit DaemonSet, ConfigMap, ClusterRole, and ClusterRoleBinding
kubectl delete daemonset fluentbit -n kube-system
kubectl delete cm fluent-bit-config -n kube-system
kubectl delete clusterrole pod-log-reader
kubectl delete clusterrolebinding pod-log-crb
# Delete the data in the S3 bucket
aws s3 rm s3://$S3_BUCKET/fluent-bit-logs/ --recursive
# Delete the CloudWatch log group
aws logs delete-log-group --log-group-name fluent-bit-cloudwatch-demo
# Delete the ECR repository
aws ecr delete-repository --repository-name logging-demo-app --force

Fluent Bit support for Amazon Kinesis Data Firehose

Many customers use Fluent Bit’s support for Amazon Kinesis Data Firehose to stream logs to Amazon S3. Using Firehose to deliver data to S3 can be more reliable because data is handed off to Firehose more quickly than with Fluent Bit’s direct S3 integration; Firehose acts as a distributed buffer and manages retries. Without Firehose in the middle, Fluent Bit has to handle buffering and retrying itself, which isn’t inherently a bad thing, but if Fluent Bit (or any underlying component such as the node or cluster) fails, any un-transmitted logs could be lost. You can improve Fluent Bit’s reliability by using a persistent volume, as explained here, which makes Fluent Bit look for previously un-transmitted data upon restart. It’s still possible to lose logs if container logs are rotated before the Fluent Bit pod restarts and is ready to transmit them.

Be aware of the quotas when using Amazon Kinesis Data Firehose. You may have to request a limit increase if your applications generate large volumes of logs.

Conclusion

Amazon S3 provides cost-effective and durable storage, which allows you to collect and analyze data using Amazon Athena without incurring high storage and infrastructure costs. You can use Fluent Bit’s S3 plugin to aggregate and transmit logs to Amazon S3 and many other destinations. Fluent Bit’s S3 plugin is designed to handle data at volume, and it optimizes data transfer to S3 using the multipart upload API.

You can learn more about the upcoming features for Fluent Bit’s S3 output plugin on Fluent Bit’s GitHub repository.

Further reading

It’s helpful to understand how container logs are stored on a Kubernetes worker node’s filesystem. Kubernetes configures the container runtime to store logs in JSON format on the node’s local filesystem. In EKS, the container runtime stores container logs at /var/lib/docker/containers/{Container ID}/{UID}/{UID-json.log}. Kubernetes also creates a symlink for log files in /var/log/pods and /var/log/containers.

The naming format for log files differs in each directory. In /var/log/pods, log file names follow this scheme: {Kubernetes namespace}_{Pod name}_{Pod UID}. In /var/log/containers, log file names follow a different scheme: {Pod name}_{Kubernetes namespace}_{Container name}_{Container ID}.

Notice the contents of /var/log/pods and /var/log/containers on an EKS worker node.

As you can see, the permissions vary in each directory. While any process can read files in /var/log/containers, the permissions in /var/log/pods are more restrictive. /var/log/containers is the preferred source for container logs in Kubernetes.

In Fluent Bit, you can also create inputs that match the log files of a particular pod or deployment. For example, if pods are named “ordering-app”, you can create a Fluent Bit input that monitors all files at /var/log/containers/ordering-app*.log. This is helpful when running applications that produce logs in different formats; you can create multiple Fluent Bit inputs, process them, and store them accordingly.

And finally, here are some links that we found useful:

Re Alvarez-Parmar

Re Alvarez-Parmar is a Container Specialist Solutions Architect at Amazon Web Services. He helps customers use AWS container services to design scalable and secure applications. He is based out of Seattle and uses Twitter, sparingly, @realz

Vikram Venkataraman

Vikram Venkataraman is a Principal Technical Account Manager at Amazon Web Services and also a container enthusiast. He helps organizations adopt best practices for running workloads on AWS. In his spare time, he loves to play with his two kids and follows cricket.