AWS Compute Blog

AWS Outposts monitoring and reporting: A comprehensive Amazon EventBridge solution

Organizations using AWS Outposts racks commonly manage capacity from a single AWS account and share resources through AWS Resource Access Manager (AWS RAM) with other AWS accounts (consumer accounts) within AWS Organizations. In this post, we demonstrate one approach to create a multi-account serverless solution to surface costs in shared AWS Outposts environments using Amazon EventBridge, AWS Lambda, and Amazon DynamoDB. This solution reports on instance runtime and allocated storage for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), and Amazon Elastic Block Store (Amazon EBS) services running on Outposts racks. In turn, teams can track the cost of infrastructure associated with their workloads across AWS accounts. This solution is a framework that can be customized to meet your organization’s specific business objectives.

Solution overview

The following is the Terraform-based reference architecture used to represent the solution, including EventBridge, DynamoDB, and Lambda across a multi-account environment. Relevant launch events are tracked in EventBridge and invoke Lambda functions, which log the events to DynamoDB tables (see sample code). This allows reporting on captured event data through the AWS SDK for Python (Boto3).

Figure 1: Reference architecture for the reporting solution on AWS Outposts, showing data collection and workload account integration with EventBridge, CloudTrail, and Outposts
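To make the event-handling path concrete, the following Python function sketches how a Lambda handler might flatten an EventBridge EC2 state-change event into a DynamoDB-style item. This is an illustrative sketch, not the repository's actual code; the `pk`/`sk` attribute names and table layout are assumptions.

```python
from datetime import datetime, timezone

def ec2_event_to_item(event):
    """Flatten an EventBridge "EC2 Instance State-change Notification"
    into a dict shaped like a DynamoDB item.

    The pk/sk/state/capturedAt attribute names are illustrative
    assumptions, not the sample repository's actual schema.
    """
    detail = event["detail"]
    return {
        "pk": f"ACCOUNT#{event['account']}",
        "sk": f"INSTANCE#{detail['instance-id']}",
        "state": detail["state"],
        "region": event["region"],
        "capturedAt": datetime.now(timezone.utc).isoformat(),
    }
```

A Lambda function in the data collection account would run this transformation for each matched event and write the result with a DynamoDB `put_item` call.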

Prerequisites

The following prerequisites are necessary to implement this solution:

Walkthrough

The following sections walk you through how to deploy this solution.

Deploying in data collection account

Step 1: Create an in-Region S3 bucket in the data collection account to hold the Terraform state file.

aws s3 mb s3://state-bucket-name

Step 2: Clone the repository. On your local machine, clone the repository that contains the sample by running the following command:

git clone https://github.com/aws-samples/sample-outposts-monitoring-and-reports.git

Navigate to the cloned repository by running the following command:

cd sample-outposts-monitoring-and-reports/data_collection

Step 3: Edit the providers.tf to configure the AWS provider.



provider "aws" {
  region = ""
}

Step 4: Edit the backend.tf to provide the Terraform state bucket and Outposts anchored AWS Region.

terraform {
  backend "s3" {
    bucket = ""
    key    = "terraform.tfstate"
    region = ""
  }
}

Step 5: Modify the variables.tf. From the root directory of the cloned repository, modify the variables.tf file with the target Region and workload accounts as shown in the following example. The target Region is the collection destination.

variable "aws_region" {
  description = "AWS region for resources"
  type        = string
  default     = ""
}

variable "allowed_account_id" {
  description = "AWS account ID allowed to put events to the event bus"
  type        = string
  default     = ""
}

Initialize the configuration directory of the data collection account to download and install the providers defined in the configuration by running the following command:

terraform init

All resources are deployed with minimal permissions to serve as an example. We recommend reviewing all configurations to make sure that they meet your organizational security policies.

Step 6: Deploy infrastructure in the data collection account. Run terraform plan on the configuration and review which resources are created:

terraform plan

When you have reviewed the plan, run the following command and enter “yes” to accept the changes and deploy:

terraform apply

Deployment should take less than 5 minutes. If you receive any errors, review the previously mentioned steps to ensure that you followed them in their entirety. If the errors persist, reach out to AWS Support for additional guidance.

Deploying in workload account

The data collection account receives events from EventBridge and performs analysis and storage of the AWS Outposts resource data.

Step 1: Navigate to the workload account directory by running the following command:

cd ../workload_account

Step 2: Edit variables.tf to set up the Region and event bus Amazon Resource Name (ARN). 

variable "aws_region" {
  description = "AWS region for resources"
  type        = string
  default     = ""
}

variable "event_bus_arn" {
  description = "target event bus arn"
  type        = string
  default     = ""
}

Edit the code to update the event bus ARN with the value of the event bus created in the data collection account.
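Event bus ARNs follow the standard format arn:aws:events:&lt;region&gt;:&lt;account-id&gt;:event-bus/&lt;name&gt;. As a convenience, a small helper like the following (an illustrative addition, not part of the sample repository) can sanity-check the value before you run terraform apply:

```python
def parse_event_bus_arn(arn):
    """Split a standard EventBridge event-bus ARN into its parts.

    Expected shape: arn:aws:events:<region>:<account-id>:event-bus/<name>
    Raises ValueError for anything else.
    """
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn" or parts[2] != "events":
        raise ValueError(f"not an EventBridge ARN: {arn}")
    resource = parts[5]
    if not resource.startswith("event-bus/"):
        raise ValueError(f"not an event-bus ARN: {arn}")
    return {
        "region": parts[3],
        "account_id": parts[4],
        "name": resource.split("/", 1)[1],
    }
```

If the Region in the parsed ARN doesn't match the aws_region variable, the workload account rules will target the wrong destination.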

Step 3: Run the following command to create the backend.tf and create the Terraform state bucket for each workload account.

./init-backend.sh

This idempotent operation creates a backend.tf file from the template and, if it doesn’t already exist, a state bucket whose fixed name includes the account ID.
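A deterministic bucket name is what makes the operation idempotent: re-running the script for the same account resolves to the same bucket. The following sketch illustrates the idea; the exact "&lt;prefix&gt;-&lt;account&gt;-&lt;region&gt;" scheme is an assumption, so check init-backend.sh for the name it actually uses.

```python
def state_bucket_name(account_id, region, prefix="tf-state"):
    """Build a deterministic S3 bucket name embedding the account ID.

    The "<prefix>-<account>-<region>" scheme is an illustrative
    assumption, not necessarily what init-backend.sh generates.
    S3 bucket names must be 3-63 characters, lowercase.
    """
    name = f"{prefix}-{account_id}-{region}".lower()
    if not (3 <= len(name) <= 63):
        raise ValueError(f"bucket name length out of range: {name}")
    return name
```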

Step 4: Initialize the configuration directory of the workload account to download and install the providers defined in the configuration by running the following command:

terraform init

Step 5: Deploy the infrastructure in the workload account. Run terraform plan on the configuration and review which resources are created:

terraform plan

After you have reviewed the plan, run the following command and enter “yes” to accept the changes and deploy:

terraform apply

Deployment should take less than 5 minutes. If you receive any errors, follow the troubleshooting steps in the previous section.

At this point, any Amazon EC2 or Amazon RDS instances and Amazon EBS volumes are logged to the DynamoDB tables in the data collection account. Repeat Steps 3–5 for each workload account running resources on AWS Outposts with appropriate account credentials. If you’re deploying at scale and using AWS Control Tower, consider using AWS Control Tower Account Factory for Terraform (AFT).

Running monthly reports

With this solution in place, reports can be generated on demand. These reports can be customized by modifying the example Python scripts to support your needs. Reports can be created from a local machine with credentials that have access to the DynamoDB tables in the data collection account. The following examples are run from the source directory of the data collection account git repository. Run the following command to view the report for Amazon RDS usage in September 2025:

./rds_runtime_calculator.py --year 2025 --month 9 --output rds_report.csv
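At the core of any runtime report is clipping a resource's lifetime to the report month and counting the overlapping hours. The following is a minimal sketch of that idea, assuming naive UTC datetimes; the repository's script is the authoritative logic and additionally handles multiple start/stop cycles and still-running instances.

```python
from datetime import datetime

def runtime_hours(start, end, year, month):
    """Hours the interval [start, end) overlaps the given month.

    Simplified sketch of report clipping logic; assumes naive UTC
    datetimes and a single continuous running interval.
    """
    month_start = datetime(year, month, 1)
    month_end = datetime(year + 1, 1, 1) if month == 12 else datetime(year, month + 1, 1)
    lo = max(start, month_start)
    hi = min(end, month_end)
    return max((hi - lo).total_seconds(), 0) / 3600
```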


Figure 2: Example of RDS runtime report 


Run the following command to view the report for Amazon EBS usage in September 2025:

./ebs_volume_reporter.py --year 2025 --month 9 --output ebs_report.csv
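For EBS, the reportable quantity is allocated storage over time rather than runtime, which is commonly metered as GiB-hours. The following is a simplified sketch under the same naive-UTC assumption; ebs_volume_reporter.py in the repository is the authoritative logic.

```python
from datetime import datetime

def gb_hours(size_gib, created, deleted, year, month):
    """GiB-hours a volume of size_gib was allocated during the month.

    deleted=None means the volume still exists. Simplified sketch of
    allocation metering; assumes naive UTC datetimes.
    """
    month_start = datetime(year, month, 1)
    month_end = datetime(year + 1, 1, 1) if month == 12 else datetime(year, month + 1, 1)
    lo = max(created, month_start)
    hi = min(deleted or month_end, month_end)
    overlap_hours = max((hi - lo).total_seconds(), 0) / 3600
    return size_gib * overlap_hours
```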


Figure 3: Example of EBS usage report 


Run the following command to view the report for Amazon EC2 usage in September 2025:

./ec2_runtime_calculator.py --month 9 --year 2025 --output ec2_report.csv


Figure 4: Example of EC2 runtime report 


Cleaning up

Complete the following steps to clean up the resources that were deployed by this solution. For each workload account, complete the following:

cd sample-outposts-monitoring-and-reports/workload_account
terraform destroy 

Enter “yes” to proceed. You can then manually empty and remove the Terraform state S3 bucket for that account.

For the data collection, complete the following:

cd ../data_collection
terraform destroy

Enter “yes” to proceed. You can then manually empty and remove the Terraform state S3 bucket for that account.

Conclusion

Customers who have shared multi-account Outposts deployments can use this solution to create account-level reporting for Outposts resources using real-time event capture and processing, state analysis and categorization, historical usage metrics, and serverless architecture. Teams can use this to visualize and report on the costs of running their workloads on Outposts. The event-driven design supports accurate tracking while maintaining low operational overhead. The solution scales effectively across multiple Outposts and accounts, providing a unified view of hybrid infrastructure. Keep in mind that you can extend the functionality described here to meet your business objectives.

Deploy this solution today using the GitHub repository to gain financial insights to share with the tenants of your Outposts workload accounts. Reach out to your AWS account team, or fill out this form to learn more about Outposts.