AWS Compute Blog

Measuring Amazon MQ throughput using Maven 2 benchmark and AWS CDK

This post is written by Olajide Enigbokan, Senior Solutions Architect and Mohammed Atiq, Solutions Architect

In this post, you will learn how to evaluate throughput for Amazon MQ, a managed message broker service for ActiveMQ, by using the ActiveMQ Classic Maven Performance test plugin. The post also provides recommendations for configuring Amazon MQ to optimize throughput when using ActiveMQ as the broker engine.

Overview of benchmarking throughput for Amazon MQ for ActiveMQ

To get a good balance of cost and performance when running ActiveMQ on Amazon MQ, AWS recommends that customers benchmark during migration and before upgrading or downgrading instance type or size. Benchmarking helps you choose the correct instance type and size for your workload requirements. For common benchmark scenarios and benchmark figures for different instance types and sizes, see Amazon MQ for ActiveMQ Throughput benchmarks.

The performance of your ActiveMQ workload depends on the specifics of your use case. For example, if you have a workload where durability is extremely important (meaning that messages cannot be lost), enabling persistence mode ensures that messages are persisted to disk before the broker informs the client that the message send has completed. The faster the disk I/O and the smaller the message size during these writes, the better the message throughput. For this reason, AWS recommends the mq.m5.* instance types for regular development, testing, and production workloads, as described in Amazon MQ for ActiveMQ instance types. The mq.t2.micro and mq.t3.micro instance types are intended for product evaluation; they are subject to burst CPU credits and baseline performance, so they are not suitable for applications that require consistent performance. When a larger broker instance type is selected, AWS also recommends batching transactions for the persistent store, which allows you to send multiple messages per transaction and achieve higher overall message throughput.
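
To illustrate the batching recommendation, the following is a minimal sketch that sends persistent 1 KB messages and commits a transaction once per batch of 100. Because Python has no OpenWire client, the sketch uses the stomp.py library against the broker's STOMP+SSL endpoint (port 61614) rather than the OpenWire endpoint used by the Maven plugin; the endpoint, credentials, message count, and batch size are placeholders for illustration only.

import stomp  # pip install stomp.py

# Placeholders: replace with your broker's STOMP+SSL endpoint and credentials
BROKER_HOST = "b-xxxxxxxx-1.mq.<aws region>.amazonaws.com"
BROKER_PORT = 61614
USERNAME = "testuser"
PASSWORD = "<your-password>"

BATCH_SIZE = 100       # messages committed per transaction (assumption for illustration)
MESSAGE_COUNT = 1000   # total messages to send (assumption for illustration)
payload = "x" * 1024   # 1 KB message body

conn = stomp.Connection([(BROKER_HOST, BROKER_PORT)])
conn.set_ssl(for_hosts=[(BROKER_HOST, BROKER_PORT)])  # Amazon MQ endpoints require TLS
conn.connect(USERNAME, PASSWORD, wait=True)

# Send persistent messages in transactions, committing once per batch instead of per message
tx = None
for i in range(1, MESSAGE_COUNT + 1):
    if tx is None:
        tx = conn.begin()
    conn.send(
        destination="/queue/PERF.TEST",
        body=payload,
        headers={"persistent": "true", "transaction": tx},
    )
    if i % BATCH_SIZE == 0:
        conn.commit(tx)
        tx = None
if tx is not None:
    conn.commit(tx)
conn.disconnect()

Committing once per batch reduces the number of synchronous disk writes the broker must acknowledge compared to per-message persistent sends, which is where the throughput gain comes from.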

The next section describes how to set up your own benchmark for Amazon MQ using the open-source ActiveMQ Classic Maven Performance test plugin. This benchmark suite is recommended because it is straightforward to set up and deploy.

Getting started

This walkthrough guides you through the steps for benchmarking your Amazon MQ brokers:

Step 1 – Build and push container image to Amazon ECR

Clone the mq-benchmarking-container-image-sample repository and follow the steps in the README file to build and push your image to an Amazon Elastic Container Registry (Amazon ECR) public repository. You will need this container image for Step 2.

Step 2 – Automate Your Benchmarking Setup with AWS CDK

Architecture of CDK deployment

To streamline the deployment of an active/standby ActiveMQ broker alongside Amazon Elastic Container Service (Amazon ECS) tasks for this walkthrough, follow the steps below to set up the environment using the AWS Cloud Development Kit (AWS CDK). This deploys the resources shown in the architecture diagram above.

2.1 Prerequisites:

Ensure the following packages are installed:

  • Python 3 and pip (used to create the virtual environment and install dependencies)
  • The AWS CDK Toolkit (cdk), which requires Node.js
  • The AWS CLI, along with the Session Manager plugin required for ECS Exec in step 2.7

2.2 Repository Setup:

Clone the mq-benchmarking-sample repository. This repository contains all the necessary code and instructions to automate the benchmarking process using the AWS CDK.

2.3 Create a Virtual Environment:

Change directory (cd) to the cloned repository directory and create a Python virtual environment by running the following commands:

cd mq-benchmarking-sample

python -m venv .venv

2.4 Activate Virtual Environment:

Run the following commands to activate your virtual environment:

# Linux
source .venv/bin/activate

# Windows
.\.venv\Scripts\activate

2.5 Install Dependencies:

Install the required Python packages using:

pip install -r requirements.txt

2.6 Customize and Deploy:

In this step, you deploy the stacks and resources needed for benchmarking into your AWS account. The cdk deploy command below deploys three stacks with resources for Amazon ECS, Amazon MQ, and the VPC. Deploy your application with the AWS CDK using the following command:

cdk deploy "*" -c container_repo_url=<YOUR CONTAINER REPO URL> -c container_repo_tag=<YOUR CONTAINER REPO TAG>

This command deploys your application with the specified Docker image. Replace <YOUR CONTAINER REPO URL> and <YOUR CONTAINER REPO TAG> with your specific Docker repo image details from Step 1. An example container repo URL would look like this: public.ecr.aws/xxxxxxxxx/xxxxxxxxxx.

The deployment of the stacks and their resources happens in three stages. Select “yes” at each stage to deploy the stated changes, as shown below:

First stage of the deploy: select “yes” to deploy these changes

Deployed stacks and their resources

Optionally, you can include additional context variables in your command as seen below:

cdk deploy "*" -c vpc_cidr=10.0.0.0/16 -c mq_cidr=10.0.0.0/16 -c broker_instance_type=mq.m5.large -c mq_username=testuser -c tasks=2 -c container_repo_url=<YOUR CONTAINER REPO URL> -c container_repo_tag=<YOUR CONTAINER REPO TAG>

Note: In the example command above, the vpc_cidr specified is the same as the mq_cidr. If you use this command, ensure that your vpc_cidr range is the same as your mq_cidr range. AWS recommends this as a security best practice so that your broker endpoint is only accessible from recognized IP ranges; see Security best practices for Amazon MQ.

More details on the above context variables:

  • broker_instance_type: Represents the instance type for the Amazon MQ Broker. You can start with the instance type mq.m5.large.
  • vpc_cidr: Allows you to customize the VPC’s CIDR block. The default CIDR is set to 10.42.0.0/16.
  • mq_cidr: Allows you to set a specific security group CIDR range for the broker. This must be set to the vpc_cidr; in the sample command above, it is set to 10.0.0.0/16. For more flexibility with source IP ranges, you can edit the broker security group of your CDK deployment.
  • mq_username: Allows you to specify a username for accessing the ActiveMQ web console and broker.
  • tasks: Determines the number of ECS tasks (1 or more) that run your Docker image. Since the OpenWire configuration files for both consumers and producers allow you to specify the number of clients you want, all the clients in one ECS task share that task's CPU and memory allocation. You can also run multiple ECS tasks (each with multiple clients) to run the benchmark in parallel.

These adjustments allow for a more customized deployment to fit specific benchmarking needs and scenarios.
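
As an illustration of how these context variables are typically consumed, the following is a minimal sketch of a CDK Python app reading them with try_get_context and falling back to illustrative defaults; the actual variable handling and defaults in the sample repository may differ.

import aws_cdk as cdk

app = cdk.App()

# Read values passed with `-c key=value`, falling back to illustrative defaults
vpc_cidr = app.node.try_get_context("vpc_cidr") or "10.42.0.0/16"
mq_cidr = app.node.try_get_context("mq_cidr") or vpc_cidr
broker_instance_type = app.node.try_get_context("broker_instance_type") or "mq.m5.large"
mq_username = app.node.try_get_context("mq_username") or "testuser"
task_count = int(app.node.try_get_context("tasks") or 1)

# These values would then be passed to the VPC, MQ, and ECS stacks defined by the app
print(vpc_cidr, mq_cidr, broker_instance_type, mq_username, task_count)

app.synth()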

2.7 Benchmarking Execution

After deployment, you should see an output similar to the following:

Successful deployment of CDK application with output

1. Retrieve the TASK-ARN and access the Container

The exec command shown under “outputs:” above requires that you supply a <TASK-ARN> before it can be run. To retrieve the <TASK-ARN> via the AWS CLI, do the following:

  • Run the following command and note down the Task ARN (needed later):
aws ecs list-tasks --cluster <cluster-name> --region <region>

You can also retrieve this value via the Amazon ECS console by going to your ECS Cluster and choosing Tasks.

  • Access the running ECS task using the ECS Exec feature with the command that is output from the CDK deployment. The command should look like the following:
aws ecs execute-command --region eu-central-1 --cluster arn:aws:ecs:eu-central-1:XXXXXXXX:cluster/ECS-Stack-ClusterEB0386A7-gRmSxC06y4ay --task <TASK-ARN> --container Benchmarking-Container --command "/bin/bash" --interactive

Before running the above command, replace the <TASK-ARN> placeholder with the actual Task ARN noted earlier.
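
If you prefer to script this step, the Task ARN can also be retrieved with a few lines of boto3; the cluster name and Region below are placeholders based on the example command above.

import boto3

# Placeholders: use the cluster name from the CDK output and the Region you deployed to
CLUSTER = "<cluster-name>"
REGION = "eu-central-1"

ecs = boto3.client("ecs", region_name=REGION)
response = ecs.list_tasks(cluster=CLUSTER)

# Print the first running task ARN, to be used as <TASK-ARN> in the execute-command call
task_arns = response["taskArns"]
print(task_arns[0] if task_arns else "No running tasks found")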

After retrieving the <TASK-ARN> and running the exec command, you should see a directory structure as follows:

Directory Structure within ECS Task using ECS Exec

2. Configure the openwire-producer.properties and openwire-consumer.properties files.

Open both files. Shown below is the content of the openwire-producer.properties and openwire-consumer.properties files.

openwire-producer.properties:

sysTest.reportDir=./reports/
sysTest.samplers=tp
sysTest.spiClass=org.apache.activemq.tool.spi.ActiveMQReflectionSPI
sysTest.clientPrefix=JmsProducer
sysTest.numClients=25


producer.destName=queue://PERF.TEST
producer.deliveryMode=persistent
producer.messageSize=1024
producer.sendDuration=300000

factory.brokerURL=
factory.userName=
factory.password=

openwire-consumer.properties:

sysTest.reportDir=./reports/
sysTest.samplers=tp
sysTest.spiClass=org.apache.activemq.tool.spi.ActiveMQReflectionSPI
sysTest.destDistro=equal
sysTest.clientPrefix=JmsConsumer
sysTest.numClients=25

consumer.destName=queue://PERF.TEST

factory.brokerURL=
factory.userName=
factory.password=

In both files, provide the brokerURL, username, and password, as they are required before starting the benchmarking process. The brokerURL and username can be obtained from the Amazon MQ console.

Amazon MQ broker

Once you choose the deployed broker, you will find the brokerURL under the Endpoints section for OpenWire.

Endpoints in Amazon MQ console

The endpoint URL for OpenWire should be in this format:

failover:(ssl://b-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-1.mq.<aws region>.amazonaws.com:61617,ssl://b-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-2.mq.<aws region>.amazonaws.com:61617)
Retrieve username from Amazon MQ console

Since you are using an active/standby broker, the test only uses the active broker endpoint, not both. The failover transport manages this automatically. The password can be retrieved from the AWS Secrets Manager console or via the AWS CLI.
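
If you prefer to retrieve the password programmatically, the following is a minimal boto3 sketch; the secret name and Region are placeholders for the secret created by the CDK deployment.

import boto3

# Placeholders: the name or ARN of the broker credentials secret, and the Region you deployed to
SECRET_ID = "<your-broker-secret-name>"
REGION = "eu-central-1"

client = boto3.client("secretsmanager", region_name=REGION)
response = client.get_secret_value(SecretId=SECRET_ID)

# The secret string contains the broker password (its exact structure depends on how the secret is stored)
print(response["SecretString"])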

The following parameters and values can be adjusted in both the producer and consumer properties files to suit your use case:

  • sendDuration: The time, in milliseconds, that the producer/consumer test runs. The default value is 300000 ms (5 minutes).
  • messageSize: The size of each message sent. It is set to 1024 bytes (1 KB) by default.
  • deliveryMode: The message delivery mode. It is set to persistent by default.
  • numClients: The number of concurrent clients (producers or consumers), which influences message throughput. It is set to 25 by default.
  • destName: The name of your destination queue or topic. You can change the name to your preference.

For a more comprehensive guide, refer to the mq-benchmarking-sample documentation.

2.8 Benchmark Results

After populating both the producer and consumer files with the required parameters, run the following Maven commands (one after the other) in separate terminals to start the test:

Maven producer command:

mvn activemq-perf:producer -DsysTest.propsConfigFile=openwire-producer.properties

Maven consumer command:

mvn activemq-perf:consumer -DsysTest.propsConfigFile=openwire-consumer.properties

Once each of the above tests completes, it prints a summary of the test to stdout, as shown below:

#########################################
####    SYSTEM THROUGHPUT SUMMARY    ####
#########################################
System Total Throughput: 562020
System Total Clients: 25
System Average Throughput: 1873.4000000000003
System Average Throughput Excluding Min/Max: 1860.8333333333333
System Average Client Throughput: 74.936
System Average Client Throughput Excluding Min/Max: 74.43333333333334
Min Client Throughput Per Sample: clientName=JmsProducer19, value=2
Max Client Throughput Per Sample: clientName=JmsProducer13, value=169
Min Client Total Throughput: clientName=JmsProducer0, value=20224
Max Client Total Throughput: clientName=JmsProducer5, value=23917
Min Average Client Throughput: clientName=JmsProducer0, value=67.41333333333333
Max Average Client Throughput: clientName=JmsProducer5, value=79.72333333333333
Min Average Client Throughput Excluding Min/Max: clientName=JmsProducer0, value=67.04333333333334
Max Average Client Throughput Excluding Min/Max: clientName=JmsProducer8, value=78.91
[main] INFO org.apache.activemq.tool.reports.XmlFilePerfReportWriter - Created performance report: /app/activemq-perftest/./reports/JmsProducer_numClients25_numDests1_all.xml
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5:05.052s
[INFO] Finished at: Mon Apr 29 10:22:01 UTC 2024
[INFO] Final Memory: 15M/60M
[INFO] ------------------------------------------------------------------------
#########################################
####    SYSTEM THROUGHPUT SUMMARY    ####
#########################################
System Total Throughput: 562023
System Total Clients: 25
System Average Throughput: 1873.4100000000005
System Average Throughput Excluding Min/Max: 1864.6599999999996
System Average Client Throughput: 74.93640000000002
System Average Client Throughput Excluding Min/Max: 74.58639999999998
Min Client Throughput Per Sample: clientName=JmsConsumer13, value=0
Max Client Throughput Per Sample: clientName=JmsConsumer13, value=105
Min Client Total Throughput: clientName=JmsConsumer13, value=22475
Max Client Total Throughput: clientName=JmsConsumer14, value=22495
Min Average Client Throughput: clientName=JmsConsumer13, value=74.91666666666667
Max Average Client Throughput: clientName=JmsConsumer14, value=74.98333333333333
Min Average Client Throughput Excluding Min/Max: clientName=JmsConsumer13, value=74.56666666666666
Max Average Client Throughput Excluding Min/Max: clientName=JmsConsumer14, value=74.63333333333334
[main] INFO org.apache.activemq.tool.reports.XmlFilePerfReportWriter - Created performance report: /app/activemq-perftest/./reports/JmsConsumer_numClients25_numDests1_equal.xml
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5:02.434s
[INFO] Finished at: Mon Apr 29 10:22:02 UTC 2024
[INFO] Final Memory: 14M/68M
[INFO] ------------------------------------------------------------------------

The above output is sample output from one test run.

System Average Throughput and System Total Clients are the most useful metrics.

In the reports directory, look for two XML files with more detailed throughput metrics. In the JmsProducer_numClients25_numDests1_all.xml file, for example, the jmsClientSettings and jmsFactorySettings sections capture the different broker and client configuration switches used for the run.

Each report file captures the exact test and broker environment. Keeping these files allows you to compare performance between different test cases and analyze how a given set of configurations has impacted performance.
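
To compare runs, you can pull the recorded settings out of a report file with a short script. The following is a minimal sketch that prints the jmsClientSettings and jmsFactorySettings sections from the producer report; the report path is the one shown in the log output above, and the sketch assumes the settings appear as attributes or child elements of those sections.

import xml.etree.ElementTree as ET

# Path assumption: the producer report generated inside the ECS task
REPORT_PATH = "reports/JmsProducer_numClients25_numDests1_all.xml"

root = ET.parse(REPORT_PATH).getroot()

for section in ("jmsClientSettings", "jmsFactorySettings"):
    for element in root.iter(section):
        # Settings may be recorded as attributes or as child elements, depending on the report format
        for name, value in element.attrib.items():
            print(f"{section}.{name} = {value}")
        for child in element:
            print(f"{section}.{child.tag} = {(child.text or '').strip()}")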

For this test, the average throughput on the producer side is around 1,873 messages per second across 25 clients (562,020 messages over the 300-second send duration), or roughly 75 messages per second per client. Keep in mind that the broker instance is an mq.m5.large; you can get higher throughput with more clients and a larger broker instance. This test demonstrates the concept of running fast consumers while producing messages.

More comprehensive information on the test output can be found in performance testing.

By following these guidelines and using ECS Exec for direct access, you can deploy the ActiveMQ Classic Maven Performance test plugin using the AWS CDK. This setup allows you to customize and execute benchmark tests on Amazon MQ within an ECS task, providing an automated and efficient deployment and testing workflow.

Amazon MQ benchmarking architecture

Amazon MQ for ActiveMQ brokers can be deployed as a single-instance broker or as an active/standby broker. Amazon MQ is architected for high availability (HA) and durability. For HA and broker benchmarking, AWS recommends using the active/standby deployment. After a message is sent to Amazon MQ in persistent mode, the message is written to the highly durable message store, which replicates the data across multiple nodes and Availability Zones.

Cleanup

To avoid incurring future charges for the resources deployed in this walkthrough, run the following command and follow the prompts to delete the CloudFormation stacks launched in 2.6 Customize and Deploy:

cdk destroy "*"

Conclusion

This post provides a detailed guide on performing benchmarking for Amazon MQ for ActiveMQ brokers leveraging the ActiveMQ Classic Maven Performance test plugin. Benchmarking plays a crucial role for customers migrating to Amazon MQ, as it offers insights into the broker’s performance under conditions that mirror their existing setup. This process enables customers to fine-tune their configurations and choose the appropriate instance type that aligns with their specific use case, ensuring optimal handling of their workloads’ throughput.

Get started with Amazon MQ by using the AWS Management Console, AWS CLI, AWS Software Development Kit (SDK), or AWS CloudFormation. For information on cost, see Amazon MQ pricing.