AWS Official Blog

Collecting logs from an Amazon EKS cluster with the Log Hub solution

I. Overview

Logging underpins day-to-day operations as well as big-data analytics and auditing, and has become an indispensable part of enterprise IT systems. On March 15, Amazon Web Services released a preview of Log Hub, a centralized logging solution. With Log Hub, you can collect both AWS service logs, such as logs inside Amazon Elastic Kubernetes Service (Amazon EKS) clusters, Amazon CloudFront logs, AWS CloudTrail logs, Amazon RDS/Aurora logs, AWS WAF logs, and logs on Amazon EC2, and application logs, such as Nginx logs, into Amazon OpenSearch Service (for the full list of log types Log Hub supports, see the link below). You can then search all of your logs in one place with Amazon OpenSearch Service and analyze them with the dashboards built into the Log Hub solution. This post walks through how to use Log Hub to collect and analyze logs from an Amazon EKS cluster. You will learn how to:

  • Create an application log analytics pipeline with Log Hub
  • Associate the Kubernetes service account with an IAM policy through an IAM OIDC identity provider
  • Collect logs in Amazon EKS with Fluent Bit running as a DaemonSet
  • Deploy an Nginx application with kubectl to test log delivery
  • Turn on the Amazon OpenSearch Service access proxy through Log Hub and view the logs in the Amazon OpenSearch Service Dashboards
  • Collect the Fluent Bit container logs
  • Collect the Ingress Controller logs and create a predefined sample dashboard with Log Hub

II. Walkthrough

Based on the overall architecture in the figure below, and taking Nginx in an EKS cluster as the example, this section explains in detail how to deploy Fluent Bit, the Log Hub log agent, as a DaemonSet; how to capture the Nginx application logs, container logs, and Ingress Controller logs with Fluent Bit and send them to the Amazon Kinesis Data Stream that the Log Hub solution creates as a log buffer, which in turn delivers the logs to Amazon OpenSearch Service; and how to query and analyze the logs with Amazon OpenSearch Service. This post uses the ap-northeast-2 Region; you can choose whichever Region fits your own scenario.

1. Create an application log pipeline with Log Hub

  • Deploy the Log Hub solution with AWS CloudFormation and sign in to the Log Hub console. Under Log Analytics Pipelines, choose Application Log and then Create a pipeline, which will collect the Nginx logs from the Amazon EKS cluster. For the creation steps, click here and follow the "Create application log pipeline" part of the linked page. During creation, name the index "eks-nginx-log".

Note: You can skip the "Create EC2 policy" part of the linked page and everything after it.

  • After the pipeline is created, select it, choose View details, and copy the Amazon Kinesis Data Stream statement from the policy document under Permissions. The content to copy is shown below:

2. Associate the Kubernetes service account with an IAM role and its policy through an IAM OIDC identity provider

  • Create a policy: choose the "JSON" tab and paste the statement copied from the application log pipeline's Permissions section inside the "[ ]" of "Statement". Name the policy document "demo-kinesis-policy". The creation steps are shown below:



Note: Replace <YOUR ACCOUNT ID> with your actual account ID. The value after "stream/" is the name of the Amazon Kinesis Data Stream; record it now, because you will need it later when creating the Fluent Bit ConfigMap.
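
For orientation, the pasted policy usually ends up looking roughly like the sketch below; the stream name is the document's placeholder, and you should always paste the exact statement Log Hub generated rather than typing it by hand:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesis:PutRecord",
                "kinesis:PutRecords"
            ],
            "Resource": "arn:aws:kinesis:ap-northeast-2:<YOUR ACCOUNT ID>:stream/<nginx-log-kinesis-data-stream-name>"
        }
    ]
}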

  • Create an IAM role for the Service Account so that the Pod can access the Amazon Kinesis Data Stream.

1) Open the configuration of the EKS cluster you created and copy the OpenID Connect provider URL.

2) Under Identity providers in IAM, choose Add provider, select OpenID Connect, paste the provider URL copied in the previous step into Provider URL, and choose Get thumbprint. Enter "sts.amazonaws.com" for Audience, then add the provider.


3) Create an IAM role: choose Web identity, select the identity provider created in step 2) for Identity provider, and choose "sts.amazonaws.com" for Audience. Choose Next, select "demo-kinesis-policy" under Permissions policies, choose Next again, enter a role name, and create the role. Here we name the role "eks-fluent-bit-role".


4) Select the "eks-fluent-bit-role" role you just created, open Trust relationships, and edit the trust policy: as shown below, add "<Provider>:sub": "system:serviceaccount:logging:fluent-bit" under "StringEquals", then choose Update policy.

Note: Replace <Provider> with the provider added in step 2).
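
After the edit, the trust policy should resemble the following sketch, where <Provider> stands for an ID such as oidc.eks.ap-northeast-2.amazonaws.com/id/EXAMPLE:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::<YOUR ACCOUNT ID>:oidc-provider/<Provider>"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "<Provider>:aud": "sts.amazonaws.com",
                    "<Provider>:sub": "system:serviceaccount:logging:fluent-bit"
                }
            }
        }
    ]
}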

5) Copy the ARN of "eks-fluent-bit-role".
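
As an optional shortcut, recent versions of eksctl can cover steps 1) through 5): the first command below registers the OIDC provider, and the second creates the role with demo-kinesis-policy attached plus an annotated fluent-bit service account in the logging namespace (the cluster name is a placeholder; create the logging namespace first as shown below, and if you go this route you can skip the manual ServiceAccount yaml that follows):

eksctl utils associate-iam-oidc-provider --cluster <your-cluster-name> --approve
eksctl create iamserviceaccount \
    --cluster <your-cluster-name> \
    --namespace logging \
    --name fluent-bit \
    --role-name eks-fluent-bit-role \
    --attach-policy-arn arn:aws:iam::<YOUR ACCOUNT ID>:policy/demo-kinesis-policy \
    --approve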

  • Create a namespace for Fluent Bit with kubectl; here we name it "logging". The command is as follows:

kubectl create namespace logging

Alternatively, we can create it from a yaml file with the kubectl create -f fluent-bit-ns.yaml command; the fluent-bit-ns.yaml file content is as follows:

apiVersion: v1
kind: Namespace
metadata:
  name: logging
  • Create the Service Account; the Service Account yaml file content is as follows:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
  annotations:
    eks.amazonaws.com/role-arn: <eks-fluent-bit-role-arn>

Note: Replace <eks-fluent-bit-role-arn> with the ARN of "eks-fluent-bit-role" copied earlier.

The creation command is as follows:

kubectl create -f fluent-bit-service-account.yaml
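
To confirm the association took effect, check that the service account carries the role-arn annotation:

kubectl get serviceaccount fluent-bit -n logging -o yaml
# the output should include:
#   annotations:
#     eks.amazonaws.com/role-arn: arn:aws:iam::<YOUR ACCOUNT ID>:role/eks-fluent-bit-role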

  • Create the Fluent Bit Role in Kubernetes with kubectl; the yaml file is as follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
      - nodes
      - nodes/proxy
    verbs: 
      - get
      - list
      - watch
  • Bind the Service Account to the Kubernetes Role with kubectl; the yaml file is as follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: logging
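
Once the binding is applied, kubectl's built-in authorization check is a quick way to verify the RBAC setup:

kubectl auth can-i list pods --as=system:serviceaccount:logging:fluent-bit
# expected output: yes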

3. Collect logs in Amazon EKS with Fluent Bit running as a DaemonSet

  • Create the ConfigMap and the other resources used by the Fluent Bit DaemonSet to collect the container logs under the /var/log/containers directory. Here we name the file fluent-bit.yaml and create the resources with kubectl apply -f fluent-bit.yaml. The yaml file content is as follows:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
  labels:
    k8s-app: fluent-bit
    version: v1
data:
  # Configuration files: apache, apache2, nginx and multiline text for java slf4j
  # =============================================================================
  applog-parsers-conf: |
    [PARSER]
        Name   apache
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z    
    [PARSER]
        Name   apache2
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>.*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name   apache_error
        Format regex
        Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
    [PARSER]
        Name   nginx
        Format regex
        Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ
    [PARSER]
        Name   json
        Format json
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ
        Time_Keep    On
    [PARSER]
        Name   nginx_loghub
        Format regex
        Regex (?<remote_addr>\S+)\s*-\s*(?<remote_user>\S+)\s*\[(?<time_local>\d+/\S+/\d+:\d+:\d+:\d+)\s+\S+\]\s*"(?<request_method>\S+)\s+(?<request_uri>\S+)\s+\S+"\s*(?<status>\S+)\s*(?<body_bytes_sent>\S+)\s*"(?<http_referer>[^"]*)"\s*"(?<http_user_agent>[^"]*)"\s*"(?<http_x_forwarded_for>[^"]*)".* 
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name   multilinetext_efeff2cb-10eb-4c94-b5ed-90efd689b3c4
        Format regex
        Regex (?<time>\d{4}-\d{2}-\d{2}\s*\d{2}:\d{2}:\d{2}.\d{3})\s*(?<level>\S+)\s*\[(?<thread>\S+)\]\s*(?<logger>\S+)\s*:\s*(?<message>[\s\S]+)
    [PARSER]
        # https://rubular.com/r/IhIbCAIs7ImOkc
        Name        k8s-nginx-ingress
        Format      regex
        Regex       ^(?<host>[^ ]*) - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_length>[^ ]*) (?<request_time>[^ ]*) \[(?<proxy_upstream_name>[^ ]*)\] (\[(?<proxy_alternative_upstream_name>[^ ]*)\] )?(?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*) (?<reg_id>[^ ]*).*$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%LZ
        Time_Keep    On
    [PARSER]
        Name        container_firstline
        Format      regex
        Regex       (?<log>(?<="log":")\S(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ      
    [PARSER]
        Name        docker-daemon
        Format      regex
        Regex       time="(?<time>[^ ]*)" level=(?<level>[^ ]*) msg="(?<msg>[^ ].*)"
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S      
    [PARSER]
        Name        syslog-rfc5424
        Format      regex
        Regex       ^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*)\]|-)) (?<message>.+)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
    [PARSER]
        Name        syslog-rfc3164-local
        Format      regex
        Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S
        Time_Keep   On
    [PARSER]
        Name        syslog-rfc3164
        Format      regex
        Regex       /^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$/
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
    [PARSER]
        Name        mongodb
        Format      regex
        Regex       ^(?<time>[^ ]*)\s+(?<severity>\w)\s+(?<component>[^ ]+)\s+\[(?<context>[^\]]+)]\s+(?<message>.*?) *(?<ms>(\d+))?(:?ms)?$
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Key    time
        Time_Keep   On
    [PARSER]
        Name        envoy
        Format      regex
        Regex       ^\[(?<start_time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)? (?<protocol>\S+)" (?<code>[^ ]*) (?<response_flags>[^ ]*) (?<bytes_received>[^ ]*) (?<bytes_sent>[^ ]*) (?<duration>[^ ]*) (?<x_envoy_upstream_service_time>[^ ]*) "(?<x_forwarded_for>[^ ]*)" "(?<user_agent>[^\"]*)" "(?<request_id>[^\"]*)" "(?<authority>[^ ]*)" "(?<upstream_host>[^ ]*)"  
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
        Time_Key    start_time
        Time_Keep   On
    [PARSER]
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
        Time_Keep    On
    [PARSER]
        Name    kube-custom
        Format  regex
        Regex   (?<tag>[^.]+)?\.?(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$     
  # Configuration files: server, input, filters and output
  # ======================================================
  application-containers-conf: |
    # ====================collect nginx log==================================
    [INPUT]
        Name              tail
        Tag               kube.var.log.containers.nginx.*
        Exclude_Path      /var/log/containers/cloudwatch-agent*, /var/log/containers/fluent-bit*, /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
        Path              /var/log/containers/app-nginx-demo*.log
        Path_Key          file_name
        Parser            docker
        DB                /fluent-bit/checkpoint/flb_container.db
        Mem_Buf_Limit     50MB
        Skip_Long_Lines   On
        Refresh_Interval  10
        Rotate_Wait       30
        storage.type      filesystem
        Read_from_Head    True
    [OUTPUT]
        Name kinesis_streams
        Match  kube.var.log.containers.nginx.*
        Region ap-northeast-2
        Stream <nginx-log-kinesis-data-stream-name>
        Retry_Limit False
        #Auto_retry_requests True  
  filter-kubernetes-conf: |
    # =============== add kubernetes metadata into nginx log ========================
    [FILTER]
        Name                kubernetes
        Match               kube.var.log.containers.nginx.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.nginx.
        Annotations         On
        
        Merge_Log           On
        Merge_Log_Trim      On
        Merge_Log_Key       nginx-log
        Merge_Parser        nginx_loghub

        Buffer_Size         64k
        Use_Kubelet         Off
        Regex_Parser        kube-custom
        Keep_Log            On

        K8S-Logging.Parser  Off
        K8S-Logging.Exclude Off
 

    [FILTER]
        Name    modify
        Match   *
        Set     cluster ${CLUSTER_NAME}

  fluent-bit-conf: |
    [SERVICE]
        Flush        5
        Daemon       off
        Log_level    Info
        #Log_File     /fluent-bit/log/fluent-bit.log
        Http_server  On
        Http_listen  0.0.0.0
        Http_port    2022
        Storage.sync normal
        storage.checksum        Off
        Storage.backlog.mem_limit 5M      
        Storage.path /fluent-bit/flb-storage/  
        Parsers_File /fluent-bit/etc/applog-parsers.conf
    @INCLUDE application-containers.conf
    @INCLUDE filter-kubernetes.conf
 
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    app.kubernetes.io/name: fluent-bit
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "2022"
    prometheus.io/path: /api/v1/metrics/prometheus
    # fluentbit.io/exclude: "false"
    # fluentbit.io/parser_stderr: json    
    # fluentbit.io/parser_stdout: json
spec:
  selector:
    matchLabels:
      app: fluent-bit
  updateStrategy:
        type: RollingUpdate    
  template:
    metadata:
      labels:
        app: fluent-bit
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: fluent-bit
        image: amazon/aws-for-fluent-bit:2.21.0
        imagePullPolicy: Always
        env:
          - name: CLUSTER_NAME
            value: "LoghubEKS"
        ports:
          - containerPort: 2022
        command: ["/fluent-bit/bin/fluent-bit", "-c"]
        args:
        - /fluent-bit/etc/fluent-bit.conf
        resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 100Mi
        volumeMounts:
        #reference volume name
        - name: fluentbit-storage
          mountPath: /fluent-bit/flb-storage
        - name: fluent-bit-log
          mountPath: /fluent-bit/log      
        - name: fluent-bit-checkpoint-db
          mountPath: /fluent-bit/checkpoint        
        - name: var-log
          mountPath: /var/log
          readOnly: true
        - name: app-spring-boot-log
          mountPath: /spring-log  
        - name: var-lib-docker-containers
          mountPath: /var/lib/docker/containers
          readOnly: true  
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
        - name: etc-machine-id
          mountPath: /etc/machine-id
          readOnly: true
        - name: runlogjournal
          mountPath: /run/log/journal
          readOnly: true
        - name: dmesg
          mountPath: /var/log/dmesg
          readOnly: true 
        - name: secure
          mountPath: /var/log/secure
        - name: messages
          mountPath: /var/log/messages
               
      terminationGracePeriodSeconds: 10
      volumes:
      #define volume name  
      - name: fluentbit-storage
        hostPath:
          path: /var/fluent-bit/storage
      - name: fluent-bit-log
        hostPath:
          path: /var/fluent-bit/log
      - name: fluent-bit-checkpoint-db
        hostPath:
          path: /var/fluent-bit/checkpoint            
      - name: var-log
        hostPath:
          path: /var/log
      # - name: var-log-containers
      #   hostPath:
      #     path: /var/log/containers
      - name: app-spring-boot-log
        hostPath:
          path: /spring-boot/log      
      - name: var-lib-docker-containers
        hostPath:
          path: /var/lib/docker/containers
      - name: etc-machine-id
        hostPath:
          path: /etc/machine-id
          type: "File"          
      - name: fluent-bit-config  
        configMap:
          name: fluent-bit-config
          items:
          - key: applog-parsers-conf
            #using the content of applog-parsers-conf as applog-parsers.conf file to mount
            path: applog-parsers.conf
          - key: fluent-bit-conf
            #using the content of fluent-bit-conf as fluent-bit.conf file to mount
            path: fluent-bit.conf
          - key: application-containers-conf
            #using the content of application-containers-conf as application-containers.conf file to mount
            path: application-containers.conf 
          - key: filter-kubernetes-conf
            #using the content of filter-kubernetes-conf as filter-kubernetes.conf file to mount
            path: filter-kubernetes.conf
          # dataplane-log-conf and host-log-conf are only defined in the full
          # ConfigMap shown in the FAQ below; referencing keys that do not exist
          # in this ConfigMap would prevent the volume from mounting.
          # - key: dataplane-log-conf
          #   path: dataplane-log.conf
          # - key: host-log-conf
          #   path: host-log.conf
      - name: runlogjournal
        hostPath:
          path: /run/log/journal
      - name: dmesg
        hostPath:
          path: /var/log/dmesg  
      - name: secure
        hostPath:
          path: /var/log/secure
      - name: messages
        hostPath:
          path: /var/log/messages         
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"

Note: In the content above, replace "<nginx-log-kinesis-data-stream-name>" with the actual name of the Amazon Kinesis Data Stream generated when the application log pipeline was created earlier.

  • Verify the running state with the kubectl get pods -n logging -o wide command.
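
If a pod is not in the Running state, the agent's own output usually points at the cause, such as a misnamed stream or a missing IAM permission:

kubectl logs -n logging daemonset/fluent-bit | tail -n 20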

4. Deploy an Nginx application with kubectl to test log delivery

  • At this point we have finished deploying Fluent Bit in the EKS cluster, and from the yaml file used to create the Fluent Bit ConfigMap we can see that Fluent Bit tails the files matching /var/log/containers/app-nginx-demo*.log. Deploy Nginx with kubectl apply -f nginx-app.yaml; the nginx-app.yaml file content is as follows:
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-user
  namespace: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nginx-ns
  name: app-nginx-demo
  labels:
    app.kubernetes.io/name: app-nginx-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-nginx-demo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: app-nginx-demo
    spec:
      serviceAccountName: nginx-user
      containers:
      - image: nginx:1.20
        imagePullPolicy: Always
        name: app-nginx-demo
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: app-nginx-demo
  namespace: nginx-ns
spec:
  selector:
    app: app-nginx-demo  
  ports:
  # Default port used by the image
  - protocol: TCP
    port: 80
    targetPort: 80
  • Verify the running state with the kubectl get pods -n nginx-ns -o wide command.
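
The access log only fills up once requests arrive. Before the ALB is created in step 7, you can generate a few entries locally with a port-forward (the local port 8080 is arbitrary):

kubectl port-forward -n nginx-ns service/app-nginx-demo 8080:80 &
for i in $(seq 1 10); do curl -s -o /dev/null http://127.0.0.1:8080/; done
# stop the port-forward afterwards with: kill %1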

5. Turn on the Amazon OpenSearch Service access proxy through Log Hub and view the logs in the Amazon OpenSearch Service Dashboards

With the preceding steps, the application logs from the Amazon EKS cluster are now being sent to Amazon OpenSearch Service. To reach the Amazon OpenSearch Service domain inside the VPC and view the logs, you can turn on the access proxy through Log Hub; click here for the detailed steps. After signing in to the Amazon OpenSearch Service Dashboards, first create an index pattern.

  • After signing in, choose Stack Management under Management, then Index Patterns, and choose Create index pattern. Enter "eks-nginx-log*", choose Next step, and complete the creation. The process is shown in the following screenshots:


  • Choose Discover under OpenSearch Dashboards and pick the index pattern created in the previous step under CHANGE INDEX PATTERN; you can now search and filter the Nginx logs.
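
As an example, with the nginx_loghub parser fields nested under the nginx-log key (as configured by Merge_Log_Key in the ConfigMap above), a Discover query bar filter such as the following narrows the view to 404 responses; adjust the field names if you changed the parser or the merge key:

nginx-log.status : 404 and nginx-log.request_method : GET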

6. Collect the Fluent Bit container logs

Create another application log pipeline to collect the container logs of the Amazon EKS cluster; the steps are the same as in "1. Create an application log pipeline with Log Hub" above. After creation, copy the data stream name and update the "demo-kinesis-policy" policy to allow access to this data stream as well. Likewise, edit fluent-bit.yaml and add the following content under "application-containers-conf: |" and "filter-kubernetes-conf: |" respectively:

    # ========collect fluent bit log, add it into application-containers-conf===================
    [INPUT]
        Name                tail
        Tag                 kube.var.log.containers.fluent-bit.*
        Path                /var/log/containers/fluent-bit*
        Path_Key            file_name
        Parser              docker
        DB                  /fluent-bit/checkpoint/flb_log.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Read_from_Head      True
    [OUTPUT]
        Name kinesis_streams
        Match kube.var.log.containers.fluent-bit.*
        Region ap-northeast-2
        Stream <fluent-bit-log-kinesis-data-stream-name>
        Retry_Limit False
    # =============== add kubernetes metadata into fluent bit log ============================
    [FILTER]
        Name                kubernetes
        Match               kube.var.log.containers.fluent-bit.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.fluent-bit.
        Annotations         On
        
        Merge_Log           On
        Merge_Log_Trim      On
        Merge_Log_Key       fluent-bit-log
        Merge_Parser        json

        Buffer_Size         64k
        Use_Kubelet         Off
        Regex_Parser        kube-custom
        Keep_Log            On

        K8S-Logging.Parser  Off
        K8S-Logging.Exclude Off    

Note: Replace <fluent-bit-log-kinesis-data-stream-name> with the data stream name you copied. After saving, run kubectl replace --force -f ./fluent-bit.yaml.
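
Before returning to the dashboards, a quick way to confirm the new stream exists and is active is the following command; you can then watch the stream's IncomingRecords metric in CloudWatch to see records arriving:

aws kinesis describe-stream-summary \
    --stream-name <fluent-bit-log-kinesis-data-stream-name> \
    --region ap-northeast-2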

Then sign in to the Amazon OpenSearch Service Dashboards again and create an index pattern; selecting that index pattern under Discover displays the corresponding log content.

7. Collect the Ingress Controller logs and create a predefined sample dashboard with Log Hub

We will use aws-load-balancer-controller as the Ingress Controller to expose the Nginx service; the aws-load-balancer-controller version used here is 2.4.0. For installing aws-load-balancer-controller, refer to this post.

  • After the installation, create an ALB and expose the pod service with the kubectl apply -f alb-ingress.yaml command; the alb-ingress.yaml file content is as follows:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
  namespace: nginx-ns
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
spec:
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: app-nginx-demo
              port:
                number: 80

After the Ingress is created, verify that it has been provisioned with the kubectl get ingress/demo-app-ingress -n nginx-ns command:

Sign in to the AWS Management Console, open Load Balancers under the EC2 service, find the ALB that was just created, copy its DNS name, and access it in a browser to verify that it works.
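
You can also fire off a batch of requests from the command line so the sample dashboard created below has data to show (the DNS name is a placeholder for the one you copied):

for i in $(seq 1 50); do
    curl -s -o /dev/null -w "%{http_code}\n" http://<your-alb-dns-name>/
done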


  • Create an ALB service log pipeline from the Log Hub console

1) Open the Log Hub console, choose AWS Service Log under Log Analytics Pipelines, and create a log ingestion.

2) Choose "Elastic Load Balancer" under AWS Services, choose Next, and complete the creation as shown in the following screenshot.


3) Select the imported Amazon OpenSearch Service cluster and choose "Yes" for Sample dashboard, so that the ALB logs can be analyzed in depth with the Amazon OpenSearch Service dashboard built into Log Hub.

4) After the creation completes, access the Amazon OpenSearch Service Dashboard through the access proxy to view the Ingress Controller logs.


III. FAQ and optimizations

1. How do I customize the Nginx log format?

A: Create a log config in the Log Hub solution and choose "Nginx" as the log type. Enter your Nginx log format configuration, provide a sample log line, and once the parsing validation passes, copy the generated regular expression and use it to replace the "nginx_loghub" content in the fluent-bit.yaml file. See the following screenshots for details:


2. How do I switch the parser?

A: To use a custom parser, first add it under "applog-parsers-conf: |", then, as shown below, create a new Filter and change its Merge_Parser accordingly. Note: when creating the new Filter, replace the Kube_Tag_Prefix and Match values.
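
As a minimal sketch (the app-demo tag, the app-demo-log key, and the regex are all hypothetical; substitute your own), the pair of additions would look like this:

    # added under applog-parsers-conf
    [PARSER]
        Name   my_app_parser
        Format regex
        Regex  ^(?<level>\S+) (?<message>.*)$

    # added under filter-kubernetes-conf
    [FILTER]
        Name                kubernetes
        Match               kube.var.log.containers.app-demo.*
        Kube_Tag_Prefix     kube.var.log.containers.app-demo.
        Merge_Log           On
        Merge_Log_Key       app-demo-log
        Merge_Parser        my_app_parser
        Regex_Parser        kube-custom
        Keep_Log            On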

3. How do I collect the kube-proxy logs?

A: The complete content of the Fluent Bit yaml file used in this post follows for reference; for the kube-proxy logs, see the dataplane-log-conf part of the file.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
  labels:
    k8s-app: fluent-bit
    version: v1
data:
  # Configuration files: apache, apache2, nginx and multiline text for java slf4j
  # =============================================================================
  applog-parsers-conf: |
    [PARSER]
        Name   apache
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z    
    [PARSER]
        Name   apache2
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>.*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name   apache_error
        Format regex
        Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
    [PARSER]
        Name   nginx
        Format regex
        Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ
    [PARSER]
        Name   json
        Format json
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ
        Time_Keep    On
    [PARSER]
        Name   nginx_loghub
        Format regex
        Regex (?<remote_addr>\S+)\s*-\s*(?<remote_user>\S+)\s*\[(?<time_local>\d+/\S+/\d+:\d+:\d+:\d+)\s+\S+\]\s*"(?<request_method>\S+)\s+(?<request_uri>\S+)\s+\S+"\s*(?<status>\S+)\s*(?<body_bytes_sent>\S+)\s*"(?<http_referer>[^"]*)"\s*"(?<http_user_agent>[^"]*)"\s*"(?<http_x_forwarded_for>[^"]*)".* 
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name   multilinetext_efeff2cb-10eb-4c94-b5ed-90efd689b3c4
        Format regex
        Regex (?<time>\d{4}-\d{2}-\d{2}\s*\d{2}:\d{2}:\d{2}.\d{3})\s*(?<level>\S+)\s*\[(?<thread>\S+)\]\s*(?<logger>\S+)\s*:\s*(?<message>[\s\S]+)
    [PARSER]
        # https://rubular.com/r/IhIbCAIs7ImOkc
        Name        k8s-nginx-ingress
        Format      regex
        Regex       ^(?<host>[^ ]*) - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_length>[^ ]*) (?<request_time>[^ ]*) \[(?<proxy_upstream_name>[^ ]*)\] (\[(?<proxy_alternative_upstream_name>[^ ]*)\] )?(?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*) (?<reg_id>[^ ]*).*$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%LZ
        Time_Keep    On
    [PARSER]
        Name        container_firstline
        Format      regex
        Regex       (?<log>(?<="log":")\S(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ      
    [PARSER]
        Name        docker-daemon
        Format      regex
        Regex       time="(?<time>[^ ]*)" level=(?<level>[^ ]*) msg="(?<msg>[^ ].*)"
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S      
    [PARSER]
        Name        syslog-rfc5424
        Format      regex
        Regex       ^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*)\]|-)) (?<message>.+)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
    [PARSER]
        Name        syslog-rfc3164-local
        Format      regex
        Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S
        Time_Keep   On
    [PARSER]
        Name        syslog-rfc3164
        Format      regex
        Regex       /^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$/
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
    [PARSER]
        Name        mongodb
        Format      regex
        Regex       ^(?<time>[^ ]*)\s+(?<severity>\w)\s+(?<component>[^ ]+)\s+\[(?<context>[^\]]+)]\s+(?<message>.*?) *(?<ms>(\d+))?(:?ms)?$
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Key    time
        Time_Keep   On
    [PARSER]
        Name        envoy
        Format      regex
        Regex       ^\[(?<start_time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)? (?<protocol>\S+)" (?<code>[^ ]*) (?<response_flags>[^ ]*) (?<bytes_received>[^ ]*) (?<bytes_sent>[^ ]*) (?<duration>[^ ]*) (?<x_envoy_upstream_service_time>[^ ]*) "(?<x_forwarded_for>[^ ]*)" "(?<user_agent>[^\"]*)" "(?<request_id>[^\"]*)" "(?<authority>[^ ]*)" "(?<upstream_host>[^ ]*)"  
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
        Time_Key    start_time
        Time_Keep   On
    [PARSER]
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
        Time_Keep    On
    [PARSER]
        Name    kube-custom
        Format  regex
        Regex   (?<tag>[^.]+)?\.?(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$     
  # Configuration files: server, input, filters and output
  # ======================================================
  application-containers-conf: |
    # ====================collect nginx log==================================
    [INPUT]
        Name              tail
        Tag               kube.var.log.containers.nginx.*
        Exclude_Path      /var/log/containers/cloudwatch-agent*, /var/log/containers/fluent-bit*, /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
        Path              /var/log/containers/app-nginx-demo*.log
        Path_Key          file_name
        Parser            docker
        DB                /fluent-bit/checkpoint/flb_container.db
        Mem_Buf_Limit     50MB
        Skip_Long_Lines   On
        Refresh_Interval  10
        Rotate_Wait       30
        storage.type      filesystem
        Read_from_Head    True
    [OUTPUT]
        Name kinesis_streams
        Match  kube.var.log.containers.nginx.*
        Region ap-northeast-2
        Stream <nginx-log-kinesis-data-stream-name>
        Retry_Limit False
        #Auto_retry_requests True  

    # ====================collect fluent bit log==================================
    [INPUT]
        Name                tail
        Tag                 kube.var.log.containers.fluent-bit.*
        Path                /var/log/containers/fluent-bit*
        Path_Key            file_name
        Parser              docker
        DB                  /fluent-bit/checkpoint/flb_log.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Read_from_Head      True
    [OUTPUT]
        Name kinesis_streams
        Match kube.var.log.containers.fluent-bit.*
        Region ap-northeast-2
        Stream <fluent-bit-log-kinesis-data-stream-name>
        Retry_Limit False
        #Auto_retry_requests True    
  filter-kubernetes-conf: |
    # =============== add kubernetes metadata into nginx log ========================
    [FILTER]
        Name                kubernetes
        Match               kube.var.log.containers.nginx.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.nginx.
        Annotations         On
        
        Merge_Log           On
        Merge_Log_Trim      On
        Merge_Log_Key       nginx-log
        Merge_Parser        nginx_loghub

        Buffer_Size         64k
        Use_Kubelet         Off
        Regex_Parser        kube-custom
        Keep_Log            On

        K8S-Logging.Parser  Off
        K8S-Logging.Exclude Off
    # ============ add kubernetes metadata into fluent bit log ==================
    [FILTER]
        Name                kubernetes
        Match               kube.var.log.containers.fluent-bit.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.fluent-bit.
        Annotations         On
        
        Merge_Log           On
        Merge_Log_Trim      On
        Merge_Log_Key       fluent-bit-log
        Merge_Parser        json

        Buffer_Size         64k
        Use_Kubelet         Off
        Regex_Parser        kube-custom
        Keep_Log            On

        K8S-Logging.Parser  Off
        K8S-Logging.Exclude Off    

    [FILTER]
        Name    modify
        Match   *
        Set     cluster ${CLUSTER_NAME}

  fluent-bit-conf: |
    [SERVICE]
        Flush        5
        Daemon       off
        Log_level    Info
        #Log_File     /fluent-bit/log/fluent-bit.log
        Http_server  On
        Http_listen  0.0.0.0
        Http_port    2022
        Storage.sync normal
        storage.checksum        Off
        Storage.backlog.mem_limit 5M      
        Storage.path /fluent-bit/flb-storage/  
        Parsers_File /fluent-bit/etc/applog-parsers.conf
    @INCLUDE application-containers.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE dataplane-log.conf
    @INCLUDE host-log.conf
     
  dataplane-log-conf: |
      [INPUT]
          Name                systemd
          Tag                 dataplane.systemd.*
          Systemd_Filter      _SYSTEMD_UNIT=docker.service
          Systemd_Filter      _SYSTEMD_UNIT=kubelet.service
          DB                  /fluent-bit/checkpoint/flb_systemd.db
          Path                /var/log/journal
          Path_Key            file_name
          Read_From_Tail      True

      [INPUT]
          Name                tail
          Tag                 dataplane.tail.*
          Path                /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
          Path_Key            file_name
          Docker_Mode         On
          Docker_Mode_Flush   5
          Docker_Mode_Parser  container_firstline
          Parser              docker
          DB                  /fluent-bit/checkpoint/flb_dataplane_tail.db
          Mem_Buf_Limit       50MB
          Skip_Long_Lines     On
          Refresh_Interval    10
          Rotate_Wait         30
          storage.type        filesystem
          Read_from_Head      True

      [FILTER]
          Name                modify
          Match               dataplane.systemd.*
          Rename              _HOSTNAME                   hostname
          Rename              _SYSTEMD_UNIT               systemd_unit
          Rename              MESSAGE                     message
          Remove_regex        ^((?!hostname|systemd_unit|message).)*$

      [FILTER]
          Name                aws
          Match               dataplane.*
          imds_version        v2
          ec2_instance_type   True 
          ami_id              True
          private_ip          True
          hostname            True
          vpc_id              True
      [OUTPUT]
          Name kinesis_streams
          Match dataplane.*
          Region ap-northeast-2
          Stream <dataplane-log-kinesis-data-stream-name>
          Retry_Limit False
          Auto_retry_requests True      
  host-log-conf: |
      [INPUT]
          Name                tail
          Tag                 host.dmesg
          Path                /var/log/dmesg
          Path_Key            file_name
          Parser              syslog
          DB                  /fluent-bit/checkpoint/flb_dmesg.db
          Mem_Buf_Limit       5MB
          Skip_Long_Lines     On
          Refresh_Interval    10
          Read_from_Head      True

      [INPUT]
          Name                tail
          Tag                 host.messages
          Path                /var/log/messages
          Path_Key            file_name
          Parser              syslog
          DB                  /fluent-bit/checkpoint/flb_messages.db
          Mem_Buf_Limit       5MB
          Skip_Long_Lines     On
          Refresh_Interval    10
          Read_from_Head      True

      [INPUT]
          Name                tail
          Tag                 host.secure
          Path                /var/log/secure
          Path_Key            file_name
          Parser              syslog
          DB                  /fluent-bit/checkpoint/flb_secure.db
          Mem_Buf_Limit       5MB
          Skip_Long_Lines     On
          Refresh_Interval    10
          Read_from_Head      True

      [FILTER]
          Name                aws
          Match               host.*
          imds_version        v1

      [OUTPUT]
          Name kinesis_streams
          Match host.*
          Region ap-northeast-2
          Stream <host-log-kinesis-data-stream-name>
          Retry_Limit False
          Auto_retry_requests True 
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    app.kubernetes.io/name: fluent-bit
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "2022"
    prometheus.io/path: /api/v1/metrics/prometheus
    # fluentbit.io/exclude: "false"
    # fluentbit.io/parser_stderr: json    
    # fluentbit.io/parser_stdout: json
spec:
  selector:
    matchLabels:
      app: fluent-bit
  updateStrategy:
        type: RollingUpdate    
  template:
    metadata:
      labels:
        app: fluent-bit
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: fluent-bit
        image: amazon/aws-for-fluent-bit:2.21.0
        imagePullPolicy: Always
        env:
          - name: CLUSTER_NAME
            value: "LoghubEKS"
        ports:
          - containerPort: 2022
        command: ["/fluent-bit/bin/fluent-bit", "-c"]
        args:
        - /fluent-bit/etc/fluent-bit.conf
        resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 100Mi
        volumeMounts:
        #reference volume name
        - name: fluentbit-storage
          mountPath: /fluent-bit/flb-storage
        - name: fluent-bit-log
          mountPath: /fluent-bit/log      
        - name: fluent-bit-checkpoint-db
          mountPath: /fluent-bit/checkpoint        
        - name: var-log
          mountPath: /var/log
          readOnly: true
        - name: app-spring-boot-log
          mountPath: /spring-log  
        - name: var-lib-docker-containers
          mountPath: /var/lib/docker/containers
          readOnly: true  
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
        - name: etc-machine-id
          mountPath: /etc/machine-id
          readOnly: true
        - name: runlogjournal
          mountPath: /run/log/journal
          readOnly: true
        - name: dmesg
          mountPath: /var/log/dmesg
          readOnly: true 
        - name: secure
          mountPath: /var/log/secure
        - name: messages
          mountPath: /var/log/messages
               
      terminationGracePeriodSeconds: 10
      volumes:
      #define volume name  
      - name: fluentbit-storage
        hostPath:
          path: /var/fluent-bit/storage
      - name: fluent-bit-log
        hostPath:
          path: /var/fluent-bit/log
      - name: fluent-bit-checkpoint-db
        hostPath:
          path: /var/fluent-bit/checkpoint            
      - name: var-log
        hostPath:
          path: /var/log
      # - name: var-log-containers
      #   hostPath:
      #     path: /var/log/containers
      - name: app-spring-boot-log
        hostPath:
          path: /spring-boot/log      
      - name: var-lib-docker-containers
        hostPath:
          path: /var/lib/docker/containers
      - name: etc-machine-id
        hostPath:
          path: /etc/machine-id
          type: "File"          
      - name: fluent-bit-config  
        configMap:
          name: fluent-bit-config
          items:
          - key: applog-parsers-conf
            #using the content of applog-parsers-conf as applog-parsers.conf file to mount
            path: applog-parsers.conf
          - key: fluent-bit-conf
            #using the content of fluent-bit-conf as fluent-bit.conf file to mount
            path: fluent-bit.conf
          - key: application-containers-conf
            #using the content of application-containers-conf as application-containers.conf file to mount
            path: application-containers.conf 
          - key: filter-kubernetes-conf
            #using the content of filter-kubernetes-conf as filter-kubernetes.conf file to mount
            path: filter-kubernetes.conf
          - key: dataplane-log-conf
            path: dataplane-log.conf
          - key: host-log-conf
            path: host-log.conf 
      - name: runlogjournal
        hostPath:
          path: /run/log/journal
      - name: dmesg
        hostPath:
          path: /var/log/dmesg  
      - name: secure
        hostPath:
          path: /var/log/secure
      - name: messages
        hostPath:
          path: /var/log/messages         
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: NoSchedule

IV. Conclusion

In this post, we showed how to use the Log Hub solution to collect application container logs in an Amazon EKS cluster environment, as well as the Ingress Controller logs produced when services are exposed with aws-load-balancer-controller. You can further adjust the related configuration to match your specific needs. If you have any comments or suggestions about this post, please share them with us!

About the Authors

何文安

A Solutions Architect at AWS, responsible for cloud architecture design and consulting for customers, with extensive consulting and architecture design experience in the banking and e-commerce industries. Before joining AWS, he worked at a large global bank and an international consumer goods company as a systems analyst and domain expert, focusing on high-concurrency and high-availability architecture design, application microservice transformation, and cloud migration.

马涛

A Solutions Development Architect at Amazon Web Services, mainly responsible for the design and development of cloud solutions.