AWS Open Source Blog
Using Amazon Managed Service for Prometheus to monitor EC2 environments
April 16, 2021: This article has been updated to reflect changes introduced by AWS Signature Version 4 support in the Prometheus server.
We recently announced Amazon Managed Service for Prometheus (AMP), which allows you to create a fully managed, secure, Prometheus-compatible environment to ingest, query, and store Prometheus metrics. In a previous post on the AWS Management & Governance Blog, we explained how to set up the service to monitor containerized environments. For some critical use cases, however, containerization is still a long way off or simply not an option.
In this article, we’ll demonstrate how you can use AMP to monitor systems running on Amazon Elastic Compute Cloud (Amazon EC2) or in on-premises environments.
Setup
In this example, we walk through the following steps:
- Set up an Amazon EC2 instance running Amazon Linux.
- Run a demo application written in Go that exposes a Prometheus endpoint under `/metrics` using the Prometheus client library.
- Create an Amazon Managed Service for Prometheus (AMP) workspace.
- Run a Prometheus server to export the application metrics to AMP using AWS Signature Version 4 (SigV4) remote write (a sample configuration is sketched after this list).
- Configure a Grafana server on a remote desktop to query our AMP workspace. You can also use our recently announced Amazon Managed Service for Grafana to query AMP.
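As a preview of the remote write step, here is a minimal `prometheus.yml` sketch. It assumes a Prometheus version with native SigV4 support (2.26 or later); the job name and the workspace ID in the URL are placeholders that you would replace with the values from your own AMP workspace:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  # Scrape the demo application exposed on localhost:8000/metrics.
  - job_name: demo-app
    static_configs:
      - targets: ['localhost:8000']

remote_write:
  # Remote write URL provided by the AMP workspace (placeholder workspace ID).
  - url: https://aps-workspaces.eu-west-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write
    sigv4:
      region: eu-west-1
```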
The corresponding architecture can be visualized as follows:
In this example, we’ve selected the Ireland (eu-west-1) Region. Please visit the AWS Regional Services List to see the AWS Regions supported by the service.
Amazon EC2 setup
The first step in this walkthrough is to set up an EC2 instance, which will host our application and forward its metrics to the AMP workspace that we’ll create later on. We recommend attaching an IAM role to the instance with the `AmazonPrometheusRemoteWriteAccess` managed policy, which grants the instance the bare minimum permissions needed to remote write to AMP.
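If you prefer the command line, the following sketch shows one way to set this up with the AWS CLI. The role name, instance profile name, trust policy file, and instance ID are placeholders:

```bash
# Create a role that EC2 can assume (standard EC2 trust policy saved locally as ec2-trust.json).
aws iam create-role \
  --role-name amp-demo-ec2-role \
  --assume-role-policy-document file://ec2-trust.json

# Attach the managed policy that allows remote writing to AMP.
aws iam attach-role-policy \
  --role-name amp-demo-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonPrometheusRemoteWriteAccess

# Expose the role to EC2 through an instance profile and associate it with the instance.
aws iam create-instance-profile --instance-profile-name amp-demo-ec2-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name amp-demo-ec2-profile \
  --role-name amp-demo-ec2-role
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=amp-demo-ec2-profile
```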
Demo application
Once the instance is configured, we can log in and run a sample application. Create a file named `main.go` with content along the lines of the sketch below, which uses the Prometheus HTTP handler to automatically expose a few Go runtime and process metrics over HTTP. You can also register your own metrics using the Prometheus client library.
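A minimal sketch of such an application, using the `promhttp` handler from the Prometheus Go client library (the port and handler path match the rest of this walkthrough):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// promhttp.Handler() serves the default registry, which already includes the
	// Go runtime and process collectors, so a few metrics are exposed out of the box.
	http.Handle("/metrics", promhttp.Handler())

	log.Println("Serving Prometheus metrics on :8000/metrics")
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```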
Before running our sample application, let’s make sure we have all the dependencies installed.
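Assuming a reasonably recent Go toolchain is available on the instance, one way to pull the dependency and start the application (the module name is a placeholder):

```bash
# Initialize a module and fetch the Prometheus client library.
go mod init amp-demo-app
go mod tidy   # resolves github.com/prometheus/client_golang and its dependencies

# Build and run the demo application.
go run main.go
```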
The application should be up and running on port `8000`. At this stage, we should be able to see all Prometheus metrics exposed by the application:
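For example, querying the endpoint from the instance itself should return data in the Prometheus exposition format (the exact metrics and values will differ):

```bash
curl -s http://localhost:8000/metrics | head
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
...
```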
Create an AMP workspace
To create a workspace, open AMP in the AWS Management Console and enter a name for the workspace.
Once created, the service should provide us with a remote write URL and a query URL.
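Alternatively, the workspace can be created and inspected from the AWS CLI. The alias and workspace ID below are placeholders, and the returned remote write URL follows the pattern shown in the comment:

```bash
# Create the workspace in eu-west-1.
aws amp create-workspace --alias amp-ec2-demo --region eu-west-1

# Retrieve the workspace details, including its endpoint.
aws amp describe-workspace --workspace-id ws-EXAMPLE --region eu-west-1
# The remote write URL has the form:
#   https://aps-workspaces.eu-west-1.amazonaws.com/workspaces/<workspace-id>/api/v1/remote_write
```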