
Shipping logs to third parties with Kinesis Data Firehose and Fluent Bit for Amazon EKS on AWS Fargate

AWS Fargate is a technology that provides on-demand compute capacity for running pods on EKS clusters. Fargate offers a more hands-off experience, helping you run containerized applications without needing to manage the EC2 instances underneath. AWS Fargate runs each Kubernetes pod in its own isolated security boundary, which means it has a slightly different operating model than Kubernetes pods that run on EC2 instances.

In this post, we’ll demonstrate how you can enjoy the convenience of AWS Fargate while meeting compliance requirements for centralized logging by routing application logs from containers running within Amazon EKS on AWS Fargate to Splunk.

For application metrics, Amazon CloudWatch Container Insights for Amazon EKS Fargate, using AWS Distro for OpenTelemetry, lets you view the CPU and memory usage of EKS Fargate pods in Amazon CloudWatch. For application logs, AWS Fargate provides a fully managed, built-in log router based on Fluent Bit, so no additional logging components need to be defined in the workload manifest.

To use the Fluent Bit log router, create a Kubernetes ConfigMap that defines Fluent Bit filters and parsers, then define CloudWatch, Amazon OpenSearch Service, Amazon Kinesis Data Firehose, or Amazon Kinesis Data Streams as the output destination.

Many organizations also use third-party logging and observability solutions, such as Splunk, Datadog, or New Relic. The AWS Fargate log router doesn’t currently support these destinations directly, so you can instead use Amazon Kinesis Data Firehose to build a logging pipeline: the log router automatically sends log data to Kinesis Data Firehose, which then streams it to the third-party destination.

The following diagram illustrates this architecture:

Overview
This walkthrough can be broken down into three high-level steps.

  1. Configure a Splunk deployment
  2. Create the Kinesis Firehose delivery stream
  3. Configure the EKS Cluster, Fargate Profile, and Fluent Bit ConfigMap

Prerequisites

  • An AWS account with the relevant permissions to create an EKS cluster and Kinesis Data Firehose
  • Installation of AWS CLI, kubectl, eksctl, and Git

Configure the Splunk deployment

  1. Deploy a Splunk server. If you plan to install Splunk on an EC2 Linux instance, refer to the Splunk Manual – Install on Linux. You can skip this step if an existing Splunk deployment is available.
  2. The Splunk platform must be publicly accessible. So that Kinesis Data Firehose can reach it, allow the Kinesis Data Firehose IP address ranges for your Region to reach your Splunk deployment, and open port 8088, the port on which the HTTP Event Collector receives data from Kinesis Data Firehose.
  3. The Splunk endpoint must be secured with a TLS certificate. Consider using Let’s Encrypt to generate a fully trusted certificate, since self-signed certificates are not supported. In the example server.conf below, /opt/splunk/etc/auth/mycerts/myServerCert.pem is the file containing both the public certificate and the private key.
    ubuntu@ip-192-168-5-11:~$ sudo cat /opt/splunk/etc/system/local/server.conf
    [general]
    serverName = ip-192-168-5-11
    pass4SymmKey = $7$Ns...
    
    [sslConfig]
    sslKeysfile = /opt/splunk/etc/auth/mycerts/myServerCert.pem
    ...
  4. Following the Splunk documentation, create an HTTP Event Collector, and ensure that you’ve selected Indexer acknowledgement. After creating it, copy the token value, for example, BD274822-96AA-4DA6-90EC-18940FB2414C, to your workstation’s clipboard.

  5. To test whether the Splunk deployment is ready to receive data, replace BD274822-96AA-4DA6-90EC-18940FB2414C in the command below with the token you just copied. Set the X-Splunk-Request-Channel header to any random UUID; a channel is required because indexer acknowledgement is enabled on the token.
    curl "https://mysplunkhost.com:8088/services/collector" \
        -H "X-Splunk-Request-Channel: FE0ECFAD-13D5-401B-847D-77833BD77131" \
        -H "Authorization: Splunk BD274822-96AA-4DA6-90EC-18940FB2414C" \  
        -d '{"event": "Hello, world!", "sourcetype": "manual"}' -v
    

Create a Kinesis Firehose delivery stream

  1. On the Amazon Kinesis console page, under Delivery streams, choose Create delivery stream.
  2. For the Source field, select Direct PUT. For the Destination field, choose Splunk.
  3. Enter the Splunk cluster endpoint (for example, https://mysplunkhost.com:8088) and the value of Authentication token (for example, BD274822-96AA-4DA6-90EC-18940FB2414C).
  4. Leave the other fields at their default values, and select Create delivery stream.

You can now navigate into the delivery stream you just created. Select Start sending demo data, and the test data should appear in your Splunk environment. If no logs appear in Splunk, you can troubleshoot further by viewing the Delivery to Splunk success CloudWatch metric and the Destination error logs.
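
You can also confirm the stream from the command line. The following call (assuming your delivery stream is named PUT-SPK-k0itr, the example name used in the rest of this walkthrough) verifies that the stream is ACTIVE and returns its ARN, which you’ll need when attaching IAM permissions later:

# Confirm the delivery stream status and retrieve its ARN
$ aws firehose describe-delivery-stream \
        --delivery-stream-name PUT-SPK-k0itr \
        --query 'DeliveryStreamDescription.[DeliveryStreamStatus,DeliveryStreamARN]' \
        --output text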

Configure the EKS cluster, Fargate profile, and Fluent Bit

The final part of the setup is to create an EKS cluster with a Fargate profile and configure the built-in log router to send logs to Kinesis Data Firehose.

1.     Create an eksctl YAML manifest that defines an EKS cluster with a Fargate profile:

cat > eks-cluster-config.yaml << EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fluentbit
  region: us-west-2
iam:
  withOIDC: true
fargateProfiles:
  - name: defaultfp
    selectors:
      - namespace: fargate
EOF

2.     Run the eksctl command to create the EKS cluster. Note that it takes several minutes for the EKS cluster and Fargate profile to be created.

$ eksctl create cluster -f eks-cluster-config.yaml
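
Once eksctl finishes, you can optionally confirm that the Fargate profile exists and that kubectl can reach the new cluster:

# Confirm the Fargate profile was created
$ eksctl get fargateprofile --cluster fluentbit --region us-west-2

# Confirm kubectl can reach the new cluster
$ kubectl get svc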

3.     Create the dedicated aws-observability namespace and the ConfigMap for Fluent Bit. Replace PUT-SPK-k0itr with the name of the Kinesis Data Firehose delivery stream you created in the previous section. Create the Fluent Bit configuration using the following commands:


# Create the Kubernetes Namespace
$ kubectl create ns aws-observability

# Create the Config Map File
$ cat > fluentbit-config.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  filters.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Buffer_Size         0
        Kube_Meta_Cache_TTL 300s
        
  flb_log_cw: 'true'
  
  output.conf: |
    [OUTPUT]
        Name kinesis_firehose
        Match kube.*
        region us-west-2
        delivery_stream PUT-SPK-k0itr
EOF

# Create the Config Map in the Cluster
$ kubectl apply -f fluentbit-config.yaml
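
To confirm that the logging configuration is in place, you can check that the ConfigMap exists in the aws-observability namespace:

# Confirm the ConfigMap was created
$ kubectl get configmap aws-logging -n aws-observability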

4.     Provide the relevant IAM permissions for the log router running on AWS Fargate to write to the Kinesis Data Firehose delivery stream by attaching an IAM policy to the pod execution role. In doing so, the underlying Fargate infrastructure, rather than the workload running in the Kubernetes pod, gains permission to write to the Kinesis Data Firehose delivery stream.

First, define a policy document called allow_kinesis_put_permission.json. Replace the ARN with your Firehose delivery stream’s ARN when you create the file.

$ cat > allow_kinesis_put_permission.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "firehose:PutRecord",
                "firehose:PutRecordBatch"
            ],
            "Resource": "arn:aws:firehose:<region>:<accountid>:deliverystream/<firehose>"
        }
    ]
}
EOF

# Create the IAM Policy
$ aws iam create-policy \
        --policy-name FluentBitEKSFargate \
        --policy-document file://allow_kinesis_put_permission.json 

# Retrieve the Fargate Pod Execution Role name (attach-role-policy expects the role name, not its ARN)
$ POD_EXEC_ROLE=$(aws eks describe-fargate-profile \
  --cluster-name fluentbit \
  --fargate-profile-name defaultfp | jq -r '.fargateProfile.podExecutionRoleArn' | cut -d '/' -f 2)

# Attach the IAM Policy to the Pod Execution Role (replace 123456789012 with your AWS account ID)
$ aws iam attach-role-policy \
        --policy-arn arn:aws:iam::123456789012:policy/FluentBitEKSFargate \
        --role-name $POD_EXEC_ROLE
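
You can verify the attachment with list-attached-role-policies; the FluentBitEKSFargate policy should appear in the output:

# Confirm the policy is attached to the Pod Execution Role
$ aws iam list-attached-role-policies --role-name $POD_EXEC_ROLE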

Deploy sample applications

To generate logs and test that the log pipeline is working, deploy an NGINX pod that runs on AWS Fargate.

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: fargate
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
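
Because Fargate provisions capacity on demand, it can take a minute or two for the pod to reach the Running state. You can watch the rollout and confirm the pod is scheduled on a Fargate node:

$ kubectl rollout status deployment/sample-app -n fargate
$ kubectl get pods -n fargate -o wide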

Once the pod is running, retrieve the logs using kubectl so you can compare them with the logs that arrive in Splunk.

$ kubectl logs -n fargate --selector app=nginx

/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/03/09 08:21:06 [notice] 1#1: using the "epoll" event method
2022/03/09 08:21:06 [notice] 1#1: nginx/1.21.6
2022/03/09 08:21:06 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/03/09 08:21:06 [notice] 1#1: OS: Linux 4.14.262-200.489.amzn2.x86_64
2022/03/09 08:21:06 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:65535
2022/03/09 08:21:06 [notice] 1#1: start worker processes
2022/03/09 08:21:06 [notice] 1#1: start worker process 31
2022/03/09 08:21:06 [notice] 1#1: start worker process 32

The same log lines should appear in Splunk, enriched with additional metadata added by the Fluent Bit kubernetes filter.
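
How the events are indexed depends on your HTTP Event Collector token and index settings, but a simple search such as the following (assuming events land in the default main index; adjust the index name to match your HEC token configuration) should surface the records, since the kubernetes filter adds the pod name to each one:

index=main "sample-app"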

Cleaning up

To avoid incurring future charges, delete all resources, including the Kinesis Data Firehose delivery stream and the EKS cluster, using the following commands:

# Delete the Kinesis Data Firehose delivery stream
$ aws firehose delete-delivery-stream --delivery-stream-name PUT-SPK-k0itr

# Detach the IAM policy from the Fargate Pod Execution Role
$ aws iam detach-role-policy \
    --role-name eksctl-fluentbit-cluster-FargatePodExecutionRole-XXXXXXXXXX \
    --policy-arn arn:aws:iam::123456789012:policy/FluentBitEKSFargate

# Delete the EKS Cluster
$ eksctl delete cluster -f eks-cluster-config.yaml

Conclusion

In this post, we demonstrated how to send logs from Fargate to a third-party logging solution with Kinesis Data Firehose, using Splunk as an end target.

With this architecture, you can enjoy the ease of Fargate while meeting a requirement for centralized logging.

To learn more, refer to the documentation for EKS Fargate logging, Kinesis Data Firehose, and the Splunk HTTP Event Collector.