AWS Cloud Operations & Migrations Blog

Visualizing metrics across Amazon Managed Service for Prometheus workspaces using Amazon Managed Grafana

This post provides step-by-step instructions for aggregating and visualizing your Amazon Elastic Kubernetes Service (Amazon EKS) monitoring metrics using Amazon Managed Service for Prometheus and Amazon Managed Grafana. As part of this solution, promxy, a Prometheus proxy, is deployed to enable a single Grafana data source to query multiple Prometheus workspaces. Please note that this solution uses an open-source project that isn't covered by AWS Support. It is assumed that you will perform all necessary security assessments before using this solution in production.

Amazon EKS is a managed Kubernetes service that makes it easy to run Kubernetes on AWS and on-premises. Amazon Managed Service for Prometheus is a Prometheus-compatible monitoring and alerting service that makes it easy to monitor containerized applications and infrastructure at scale. Amazon Managed Grafana is a fully managed service that enables you to analyze your metrics, logs, and traces without having to provision servers, configure and update software, or do the heavy lifting involved in securing and scaling Grafana in production. For help setting up your EKS cluster, Amazon Managed Service for Prometheus workspaces, and Amazon Managed Grafana workspace used in this post, please reference the AWS Observability Workshop.

Overview of solution

With Amazon EKS for container deployment and management, Amazon Managed Service for Prometheus for container monitoring, and Amazon Managed Grafana for data visualization, you can deploy, monitor, and visualize your containerized applications. However, Grafana dashboards that span multiple Prometheus workspaces require additional setup and configuration because separate queries for each workspace must be created.

Promxy is an open-source utility that acts as a Prometheus proxy, enabling a single query to retrieve data from multiple Prometheus workspaces. This utility simplifies dashboards and data source management in Grafana.

To implement this solution, complete the following steps. Each step is covered in detail in the following sections.

  1. Amazon EKS cluster preparation
  2. Application load balancer controller deployment
  3. NGINX controller deployment
  4. Promxy authentication
  5. Promxy deployment
  6. Amazon Managed Grafana configuration

Amazon EKS is not a mandatory component in the architecture. Other platforms can be used for this deployment, including Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS).

After this process, the following monitoring architecture will be in place. A data source created in Amazon Managed Grafana points to the application load balancer. The load balancer sends requests to NGINX, which performs basic authentication and forwards the request to promxy. Promxy connects to multiple Amazon Managed Service for Prometheus workspaces to obtain Prometheus metrics for display in the Grafana dashboard.

The deployed architecture consists of an Amazon Managed Grafana dashboard displaying metrics from multiple Amazon Managed Service for Prometheus workspaces via a single data source.

Prerequisites

For this walkthrough, you should have the following prerequisites in place:

  - An Amazon EKS cluster
  - Two or more Amazon Managed Service for Prometheus workspaces
  - An Amazon Managed Grafana workspace

In this walkthrough, we use an AWS Cloud9 IDE to run commands. You can use any IDE, but ensure that you have the AWS CLI and Helm installed.

Amazon EKS cluster preparation

To use AWS Identity and Access Management (IAM) roles for service accounts, you need an IAM OIDC identity provider for your cluster. First, retrieve your OIDC Connect Issuer URL for your cluster (for instructions, refer to Step 1 in Create an IAM OIDC provider for your cluster). Next, create an OIDC identity provider by issuing the following command:

eksctl utils associate-iam-oidc-provider --cluster <cluster_name> --approve

Substitute your Amazon EKS cluster name for <cluster_name>.
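The OIDC ID referenced later in this post is the final path segment of the issuer URL. As a quick illustration (the issuer URL below is a hypothetical example, not output from your cluster), you can extract it with shell parameter expansion:

```shell
# Hypothetical issuer URL, as returned by:
#   aws eks describe-cluster --name <cluster_name> \
#       --query "cluster.identity.oidc.issuer" --output text
ISSUER="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# The OIDC ID is everything after the last "/"
OIDC_ID="${ISSUER##*/}"
echo "$OIDC_ID"   # prints EXAMPLED539D4633E53DE1B71EXAMPLE
```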

Application load balancer controller deployment

Application load balancing on Amazon EKS is accomplished using the AWS Load Balancer Controller, which manages the deployment of elastic load balancers for a Kubernetes cluster. The controller automatically provisions an application load balancer when a Kubernetes ingress resource is created. This ingress resource is created as part of the promxy deployment described later. This section includes information and examples of the commands run to install the controller; for detailed instructions, refer to Installing the AWS Load Balancer Controller add-on. Please note the use of the promxy namespace in place of kube-system.

Create the IAM policy

Create the iam_policy.json file first, as described in Step 1a of Installing the AWS Load Balancer Controller add-on. Next, create the IAM policy associated with the Kubernetes service account role using the following command.

aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

Create the IAM role and service account

An IAM role can be associated with a Kubernetes service account to provide AWS permissions to the containers in any pod that uses the service account. The following command creates the service account and IAM role:

eksctl create iamserviceaccount \
    --cluster=<cluster_name> \
    --namespace=promxy \
    --name=aws-load-balancer-controller \
    --role-name "AmazonEKSLoadBalancerControllerRole" \
    --attach-policy-arn=arn:aws:iam::<account_id>:policy/AWSLoadBalancerControllerIAMPolicy \
    --approve

Replace <cluster_name> with your EKS cluster name and <account_id> with your AWS account ID.

Deploy the application load balancer controller

Install the AWS application load balancer controller using Helm:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    --namespace promxy \
    --create-namespace \
    --set clusterName=<cluster_name> \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller \
    --set image.repository=602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-load-balancer-controller

Replace <cluster_name> with your EKS cluster name.

If you have not previously added the eks-charts repository, you can add it by following steps 5a and 5b in Installing the AWS Load Balancer Controller add-on.

Ingress-NGINX controller deployment

The promxy utility doesn’t provide an authentication mechanism. Therefore, you use the Ingress-NGINX Controller, which provides various authentication methods. This deployment utilizes basic authentication for accessing promxy. This ingress resource is created as part of the promxy deployment described later.

Deploy the Ingress-NGINX controller

Install the Ingress-NGINX controller helm chart. Please note that the service type must be set to NodePort to prevent the controller from automatically creating an AWS classic load balancer when an ingress resource is created. Instead, this solution uses an application load balancer created by the application load balancer controller.

helm install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace promxy \
    --create-namespace \
    --set controller.service.type=NodePort

Promxy authentication

NGINX provides authentication, but first, you must create a secret containing the user information.

Create the htpasswd file

Use the htpasswd command to generate a file named auth that contains the user name promxy-admin and its associated password.

$ htpasswd -c auth promxy-admin

New password: xxxxxxx
Re-type new password: xxxxxxx
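If htpasswd isn't available in your environment, an equivalent entry can be generated non-interactively with openssl. This is a sketch using a placeholder password (S3cretExample); substitute your own:

```shell
# Generate an htpasswd-style apr1 entry without the htpasswd utility.
# "S3cretExample" is a placeholder password; replace it with your own.
printf 'promxy-admin:%s\n' "$(openssl passwd -apr1 'S3cretExample')" > auth

# The file contains a single user:hash line, e.g. promxy-admin:$apr1$...
cat auth
```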

Convert htpasswd into a secret

Convert the file into a Kubernetes secret:

$ kubectl create secret generic basic-auth --from-file=auth --namespace promxy

Examine the secret to confirm it’s created correctly:

$ kubectl get secret --namespace promxy basic-auth -o yaml

You get the following output:

apiVersion: v1
data:
  auth: cHJvbXh5LWFkbWluOiRhcHIxJFdTRWdOS1RDJGM0cVJyclJ5cm1mNE5RVFR0ZW5TVDAK
kind: Secret
metadata:
  name: basic-auth
  namespace: promxy
type: Opaque
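The auth value in the secret is simply the base64-encoded contents of the htpasswd file. You can decode it locally to confirm the expected user name is present; for example, decoding the value shown in the output above:

```shell
# Decode the auth value from the secret output above; the part before the
# first ":" is the user name.
echo 'cHJvbXh5LWFkbWluOiRhcHIxJFdTRWdOS1RDJGM0cVJyclJ5cm1mNE5RVFR0ZW5TVDAK' \
    | base64 -d | cut -d: -f1   # prints promxy-admin
```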

Promxy deployment

Complete the following steps to deploy promxy.

Create an IAM role for promxy

In this step, you create an IAM role to give promxy permission to query metrics from Amazon Managed Service for Prometheus workspaces.

  1. On the IAM console, choose Roles in the navigation pane.
  2. Choose Create role and choose Custom trust policy.

The custom role contains a custom trust policy.

  3. Replace the custom trust policy with the following policy. Update the trust policy’s account number, Region, and OIDC ID.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::<Account Number>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<OpenID Connect ID>"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.<region>.amazonaws.com/id/<OpenID Connect ID>": "system:serviceaccount:promxy:iampromxy-service-account"
                }
            }
        }
    ]
}
  4. Choose Next and add the AWS managed policy AmazonPrometheusQueryAccess.

Add the managed policy to the role.

  5. Choose Next.
  6. Enter the IAM role name, promxy-prometheus-access-role, and choose Create role.
  7. Copy the IAM role Amazon Resource Name (ARN) for later use, which looks like arn:aws:iam::<account number>:role/promxy-prometheus-access-role.
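If you prefer the AWS CLI to the console, the same role can be created from a trust policy file. The following sketch uses placeholder values (account 111122223333, us-east-1, and an example OIDC ID) that you must replace with your own; the aws commands are left commented so nothing runs against your account unintentionally:

```shell
# Write the trust policy with placeholder values; substitute your own
# account number, Region, and OIDC ID before using it.
cat > trust-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:promxy:iampromxy-service-account"
                }
            }
        }
    ]
}
EOF

# aws iam create-role --role-name promxy-prometheus-access-role \
#     --assume-role-policy-document file://trust-policy.json
# aws iam attach-role-policy --role-name promxy-prometheus-access-role \
#     --policy-arn arn:aws:iam::aws:policy/AmazonPrometheusQueryAccess
```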

Clone the promxy GitHub repository

Changes and additions to the promxy GitHub repository files are required. Therefore, run the following commands to clone the promxy repository to the local file system:

mkdir ~/ekspromxy
cd ~/ekspromxy
git clone https://github.com/jacksontj/promxy.git

Create supplementary promxy files

In this step, you create supplementary files to modify the default promxy deployment and create the application load balancer and NGINX ingress resources.

Promxy override values

Some of the default promxy configuration needs to be changed. Create a new file named promxy_override_values.yaml in the ~/ekspromxy directory. Replace the account number and Region with the appropriate values for your installation. Update the path_prefix with the correct Amazon Managed Service for Prometheus workspace ID.

Please note that the entire static_configs section is duplicated and each contains a separate workspace ID. This section should be replicated for every Amazon Managed Service for Prometheus workspace to which promxy will proxy requests (in the example, two workspaces are defined). Finally, update the certificate name if you’re using https to connect to promxy.

serviceAccount:
  name: "iampromxy-service-account"
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account_id>:role/promxy-prometheus-access-role
ingress:
  enabled: false
ingress_alb:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # the next line is only required if using https
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:iam::<account_id>:server-certificate/<certificate_name>
  path: /
  service:
    name: ingress-nginx-service
    port: 80
ingress_nginx:
  enabled: true
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - for promxy"
    # the next two lines are only required if using https
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "Authorization, origin, accept"
  path: /
  service:
    name: ingress-promxy
    port: 8082
server:
  sidecarContainers:
    - name: aws-sigv4-proxy-sidecar
      image: public.ecr.aws/aws-observability/aws-sigv4-proxy:1.0
      args:
      - --name
      - aps
      - --region
      - <region>
      - --host
      - aps-workspaces.<region>.amazonaws.com
      - --port
      - :8005
      ports:
      - name: aws-sigv4-proxy
        containerPort: 8005
config:
  promxy:
    server_groups:
      - static_configs:
          - targets:
            - localhost:8005
        path_prefix: workspaces/<workspace_id_1>
        labels:
          prom_workspace: workspace_1
      - static_configs:
          - targets:
            - localhost:8005
        path_prefix: workspaces/<workspace_id_2>
        labels:
          prom_workspace: workspace_2
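The prom_workspace label attached to each server group is what makes cross-workspace queries possible in Grafana. For example, a single PromQL query can aggregate or filter pod CPU metrics across both workspaces (the metric name below is the standard cAdvisor CPU metric; adjust it to whatever metrics your clusters actually emit):

```
# Total pod CPU usage per workspace, across every cluster behind promxy
sum by (prom_workspace) (rate(container_cpu_usage_seconds_total[5m]))

# Restrict a panel to a single workspace using the label promxy adds
rate(container_cpu_usage_seconds_total{prom_workspace="workspace_1"}[5m])
```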

ALB ingress resource

Create a new file named ingress_alb.yaml in the ~/ekspromxy/promxy/deploy/k8s/helm-charts/promxy/templates directory, which defines the application load balancer ingress resource:

{{- if .Values.ingress_alb.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
{{- if .Values.ingress_alb.annotations }}
  annotations:
{{ toYaml .Values.ingress_alb.annotations | indent 4 }}
{{- end }}
  labels:
  {{- include "chart.labels" . | nindent 4 }}
  {{ with .Values.ingress_alb.extraLabels }}
{{ toYaml . | indent 4 }}
  {{ end }}
  name: {{ .Values.ingress_alb.service.name }}
spec:
  rules:
  - http:
      paths:
        - path: {{ .Values.ingress_alb.path }}
          pathType: Prefix
          backend:
            service:
              name: ingress-nginx-controller
              port:
                number: {{ .Values.ingress_alb.service.port }}
{{- end -}}

NGINX ingress resource

Create a new file named ingress_nginx.yaml in the ~/ekspromxy/promxy/deploy/k8s/helm-charts/promxy/templates directory, which defines the NGINX ingress resource:

{{- if .Values.ingress_nginx.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
{{- if .Values.ingress_nginx.annotations }}
  annotations:
{{ toYaml .Values.ingress_nginx.annotations | indent 4 }}
{{- end }}
  labels:
  {{- include "chart.labels" . | nindent 4 }}
  {{ with .Values.ingress_nginx.extraLabels }}
{{ toYaml . | indent 4 }}
  {{ end }}
  name: {{ .Values.ingress_nginx.service.name }}
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
        - path: {{ .Values.ingress_nginx.path }}
          pathType: Prefix
          backend:
            service:
              name: promxy
              port:
                number: {{ .Values.ingress_nginx.service.port }}
{{- end -}}

Update promxy files

Promxy doesn’t provide the ability to authenticate, which Amazon Managed Service for Prometheus requires. Signature Version 4 (SigV4) is the process of adding authentication information to AWS API requests sent over HTTP. To utilize SigV4, you deploy an AWS SigV4 Proxy Kubernetes sidecar container that signs the requests from promxy and forwards them to Amazon Managed Service for Prometheus.

Update the existing deployment.yaml file in the ~/ekspromxy/promxy/deploy/k8s/helm-charts/promxy/templates directory to deploy the sidecar:

{{- if .Values.server.sidecarContainers }}
      {{- range $name, $spec :=  .Values.server.sidecarContainers }}
      - name: {{ $name }}
        {{- if kindIs "string" $spec }}
          {{- tpl $spec $ | nindent 8 }}
        {{- else }}
          {{- toYaml $spec | nindent 8 }}
        {{- end }}
      {{- end }}
    {{- end }}

Add the preceding lines to the bottom of the deployment.yaml file, just before the volumes stanza.

Install the promxy helm chart

Install the promxy helm chart in the promxy namespace using the previously created override file:

helm install promxy --namespace promxy ./promxy/deploy/k8s/helm-charts/promxy -f promxy_override_values.yaml

You will see the pods and services created for the two controllers and promxy:

kubectl get all --namespace promxy

Running the kubectl command will display the deployed pods and services.

Run the following command to obtain the application load balancer URL:

kubectl get ingress ingress-nginx-service --namespace promxy

Once deployed, the kubectl command can be run to determine the associated load balancer’s DNS alias.
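Before moving on to Grafana, you can optionally verify the endpoint with curl. The Basic auth header value is just the base64 encoding of user:password; the password below (S3cretExample) is a placeholder for the one you stored in the secret, and <alb_dns_name> is a placeholder for the load balancer URL obtained above:

```shell
# Build the Basic auth header value locally (placeholder password shown).
AUTH="$(printf '%s' 'promxy-admin:S3cretExample' | base64)"
echo "$AUTH"

# Then query promxy's Prometheus-compatible API through the load balancer:
# curl -H "Authorization: Basic $AUTH" \
#     "http://<alb_dns_name>/api/v1/query?query=up"
```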

Amazon Managed Grafana configuration

In this step, you create the Amazon Managed Grafana data source.

  1. On the Amazon Managed Grafana console, choose your Grafana workspace URL, and log in.
  2. Choose Configuration and then Data sources.
  3. Choose Add data source and choose Prometheus.
  4. For Name, enter a name.
  5. For URL, enter the URL obtained in the prior step. Be sure to prefix the URL with http or https, depending on what was specified in the ALB ingress resource definition.
  6. Choose Basic auth and then enter the user name and password created and stored in the Kubernetes secret.

Create the Amazon Managed Grafana data source.

  7. Choose Save & test, and the message “Data source is working” will be displayed.

The data source will display a green check mark with the message “Data source is working” when it successfully connects to promxy.

The final step is to create a dashboard to display Prometheus metrics. In the following dashboard, two Amazon EKS clusters send metrics to two Amazon Managed Service for Prometheus workspaces. Configuring the Amazon Managed Grafana data source to point to promxy enables you to query metrics from all Amazon Managed Service for Prometheus workspaces. Amazon EKS pod memory and CPU metrics aggregated within an Amazon EKS cluster are displayed on the left, and metrics aggregated across Amazon EKS clusters from multiple Amazon Managed Service for Prometheus workspaces are on the right.

Dashboard displaying EKS pod metrics aggregated within an Amazon EKS cluster on the left and across Amazon EKS clusters from multiple Amazon Managed Service for Prometheus workspaces on the right.

Cleaning up

After testing this solution, remember to complete the following steps to avoid incurring charges to your AWS account.

Uninstall promxy

Uninstall the promxy helm chart and delete the local promxy GitHub repository:

cd ~/ekspromxy
helm uninstall promxy --namespace promxy
rm -rf promxy

Delete the IAM role that was created for promxy:

aws iam delete-role --role-name promxy-prometheus-access-role

Delete the Kubernetes secret

Delete the file generated using htpasswd and the Kubernetes secret that was created for basic authentication within NGINX:

rm auth
kubectl delete secret --namespace promxy basic-auth

Uninstall the controllers

Uninstall both helm charts for the NGINX and application load balancer controllers:

helm uninstall ingress-nginx --namespace promxy
helm uninstall aws-load-balancer-controller --namespace promxy

Delete the EKS service account, role, and policy

Finally, delete the Amazon EKS service account, role, and policy used by the application load balancer controller:

eksctl delete iamserviceaccount --cluster=<cluster_name> --namespace=promxy --name=aws-load-balancer-controller
aws iam delete-policy --policy-arn <policy_arn>

Remember to replace <cluster_name> with the name of your cluster and <policy_arn> with the ARN of the IAM policy you created. The eksctl command automatically removes the associated IAM role.
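If you didn’t record the policy ARN earlier, note that customer managed policy ARNs follow a predictable format, so you can construct it from your account ID rather than look it up (111122223333 below is a placeholder):

```shell
# Construct the policy ARN from your account ID (placeholder shown);
# your real ID is available from: aws sts get-caller-identity
ACCOUNT_ID="111122223333"
POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy"
echo "$POLICY_ARN"
```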

Conclusion

This post demonstrates how to visualize and aggregate metrics from multiple Amazon Managed Service for Prometheus workspaces in an Amazon Managed Grafana dashboard using a single data source. To accomplish this, you utilized an open-source tool, promxy, to connect to each Amazon Managed Service for Prometheus workspace. Amazon Managed Grafana was then able to pull metrics from each workspace using a single data source connecting to promxy. Promxy was configured to run within an Amazon EKS cluster, along with an application load balancer and NGINX controller. The application load balancer controller automatically created an AWS Application Load Balancer in front of promxy and the NGINX controller provided basic authentication.

To learn more and get hands-on experience with Amazon EKS, Amazon Managed Service for Prometheus, and Amazon Managed Grafana, explore the EKS Workshop.

About the authors:

Mark Hudson

Mark Hudson is a Cloud Infrastructure Architect at Amazon Web Services (AWS). He enjoys helping customers of all sizes learn how to operate efficiently and effectively in the cloud. He spends his free time traveling, coaching his daughter, and doing anything active outdoors.

Godwin Sahayaraj Vincent

Godwin Sahayaraj Vincent is an Enterprise Solutions Architect at AWS who is passionate about Machine Learning and providing guidance to customers to design, deploy and manage their AWS workloads and architectures. In his spare time, he loves to play cricket with his friends and tennis with his three kids.