AWS Compute Blog

Migrating message driven applications to Amazon MQ for RabbitMQ

This post is courtesy of Mithun Mallick, AWS Sr. Messaging Specialist Solutions Architect, and Sam Dengler, AWS Principal Serverless Specialist Solutions Architect.

Message brokers can be used to address a number of needs in application integration, including managing workload queues and broadcasting messages to many subscribers. Amazon MQ is a managed message broker service for RabbitMQ and Apache ActiveMQ that makes it easy to set up and operate message brokers on AWS. RabbitMQ is a popular open-source message broker that supports AMQP 0-9-1 (Advanced Message Queuing Protocol); more details on AMQP can be found in the RabbitMQ documentation. Customers can migrate workloads that use AMQP 0-9-1 to Amazon MQ for RabbitMQ. In this blog, we look at some of the common integration patterns using RabbitMQ, migrating from self-managed RabbitMQ to Amazon MQ, and using plugins like Federation to build hybrid architectures. We also explore the architectural details of Amazon MQ for RabbitMQ across its different deployment models.

Architecture

Amazon MQ for RabbitMQ offers two deployment options: a single-instance broker and a three-node cluster. Single-instance deployments are recommended only for development environments or workloads that need to avoid the latency introduced by replication. A variety of instance types is supported; the full list can be found in our developer guide. We recommend using t3.micro instance types only for development or testing environments. A three-node cluster is the recommended deployment model for production workloads. Its nodes are deployed across different Availability Zones (AZs) to provide high availability. Amazon MQ uses classic mirrored queues, with automatic synchronization and replication across all nodes, to provide maximum durability. Both single-node and cluster deployments provide a single endpoint for accessing the RabbitMQ web console as well as the APIs for managing and monitoring nodes.

We support both public and private brokers. Public brokers provide a public endpoint that can be accessed using broker credentials, which can be useful for connecting on-premises client applications or integrating with partners. The private broker option restricts access to the broker to a specific VPC and subnet. The overall architecture for a single node and a multi-node cluster is shown in the following diagrams:

Single instance standalone

Publicly accessible broker


In a public broker architecture, a client application accesses the broker using a Network Load Balancer (NLB) deployed in a public subnet within an AWS managed account. The NLB endpoint provides a single interface both for the broker management APIs and for message processing.

Private broker


In the case of a private broker, clients running in a customer VPC access an elastic network interface provisioned in a private subnet. The elastic network interface connects to an NLB running in an AWS service account using a VPC endpoint. As in the case of a public broker, the NLB provides a single endpoint for connecting to the broker instance.

Multi-broker cluster

Amazon MQ for RabbitMQ supports a three-node cluster spanning multiple Availability Zones, providing high availability for the broker endpoint. It also supports both public and private accessibility. The following diagrams show the architecture for public and private clusters:

Publicly accessible cluster


A publicly accessible cluster also runs in an AWS service-owned account. The NLB is deployed in a public subnet, and clients connect to the public NLB to access the broker.

Private cluster


In both deployment models, an NLB is the entry point through which the broker instances are accessed. In the case of a private broker, an elastic network interface is deployed in your VPC, which accesses an NLB running in an AWS service account. The NLB in turn routes to the specific brokers running in the service account. Only the elastic network interfaces are deployed in your account.

Broker security

Amazon MQ for RabbitMQ encrypts messages at rest as well as in transit. Currently, Amazon MQ for RabbitMQ only supports service-owned keys for encryption at rest. Messages in transit are encrypted using TLS. Access to private brokers can be further restricted using security group rules, and broker management is restricted using IAM policies. Amazon MQ meets compliance standards like HIPAA, PCI, SOC, and several others. For more details on compliance, refer to the services in scope documentation.

Common integration patterns

RabbitMQ uses the concepts of exchanges and bindings to facilitate message routing and filtering, based on the AMQP 0-9-1 protocol. Although RabbitMQ supports the JMS API via a plugin, we have not enabled it for Amazon MQ for RabbitMQ, as we believe ActiveMQ is the best option for JMS support. More details on RabbitMQ messaging concepts can be found in the official documentation. Let's look at some of the common messaging patterns and code examples:

  • Simple send: Simple send is the most basic way to send messages in RabbitMQ, based on the AMQP 0-9-1 protocol. For more details on AMQP protocol concepts, refer to the AMQP documentation for RabbitMQ. In this pattern, a message sender uses the default exchange and directly specifies the queue name as the routing key, and the receiver gets the message directly from the queue. The following is a snippet of sample code in Python using the Pika library that sends messages directly on a queue using the default exchange (a matching receiver sketch follows the snippet):
# assumes an open Pika channel (see the TLS connection snippet below)
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print(" [x] Sent 'Hello World!'")

In Amazon MQ for RabbitMQ, we only support the secure version of AMQP using TLS. The code snippet below demonstrates an AMQPS connection using the Pika library. Note that we do not support peer verification on the server side.

import ssl
import pika

credentials = pika.PlainCredentials('admin', 'xxxxxxxx')
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)

cp = pika.ConnectionParameters(host='xxxxxx', port=5671, credentials=credentials,
                               ssl_options=pika.SSLOptions(context))
connection = pika.BlockingConnection(cp)
channel = connection.channel()
  • Direct send: Direct send is the pattern that explicitly uses exchanges when sending messages, decoupling the message destination from the sender. In this pattern, messages are sent to a specific exchange with a routing key. Consumers always read messages from queues, but they bind their queue to the exchange with a binding key. A message goes to all queues whose binding key exactly matches the routing key of the message. The following code snippet shows sending and receiving messages from an exchange using a routing key and binding key. The sender may declare the exchange as durable, which indicates whether the exchange definition survives a broker restart.
ch = conn.channel()

# producer: declare a durable direct exchange and publish with a routing key
ch.exchange_declare(exchange='direct_publisher', exchange_type='direct', durable=True)
ch.basic_publish(exchange='direct_publisher', routing_key='us-east', body=body_content)

# consumer: declare a durable queue and bind it to the exchange with a binding key
# (body_content, argument_list, binding_key, and on_message are assumed to be
# defined elsewhere)
ch.queue_declare(queue='us_east_orders', durable=True, arguments=argument_list)
ch.queue_bind('us_east_orders', 'direct_publisher', routing_key=binding_key)
ch.basic_consume('us_east_orders', on_message, auto_ack=False)
  • Fanout: The fanout pattern is RabbitMQ's implementation of publish/subscribe. It allows messages to be sent to all destinations bound to an exchange, irrespective of their binding key; routing keys have no effect when the exchange type is fanout. In this case, an exchange acts like a topic through which messages are sent to all subscribers (a subscriber sketch follows this list).
channel.exchange_declare(exchange='all_orders', exchange_type='fanout', durable=True)
  • Topic: The topic pattern is RabbitMQ's implementation of message filtering and routing. In this pattern, the message sender publishes a message to an exchange with a routing key, and queue bindings can use a wildcard pattern to select specific messages for that queue. Messages whose routing key doesn't match any binding pattern are discarded (a binding example follows this list).
channel.exchange_declare(exchange='orders_by_state', exchange_type='topic')
channel.basic_publish(exchange='orders_by_state', routing_key='us.wa.electronics', body=message)
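To complete the fanout example, the following is a hedged subscriber sketch, assuming an open channel. Each subscriber declares its own exclusive, server-named queue and binds it to the all_orders exchange without a binding key (the on_order callback name is illustrative):

# each subscriber gets its own server-named, exclusive queue
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue

# no binding key is needed: a fanout exchange ignores routing keys
channel.queue_bind(exchange='all_orders', queue=queue_name)

def on_order(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(queue=queue_name, on_message_callback=on_order, auto_ack=True)
channel.start_consuming()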
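For the topic example, a consumer filters messages by binding with a wildcard pattern: '*' matches exactly one word and '#' matches zero or more words. The queue name and pattern below are illustrative:

# receive orders of any category from Washington state only
channel.queue_declare(queue='wa_orders', durable=True)
channel.queue_bind(exchange='orders_by_state', queue='wa_orders', routing_key='us.wa.*')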

In this section, we covered some of the basic concepts of messaging in RabbitMQ. RabbitMQ offers many more advanced features; refer to the RabbitMQ tutorials for an extensive list of code examples in various languages.

Migrating to Amazon MQ from self-managed RabbitMQ

You can export the configuration from your self-managed RabbitMQ cluster and import it into Amazon MQ. Currently, we only support the Federation, Shovel, and Management plugins. All queue and exchange definitions can be imported as is, along with any existing user and policy definitions. Amazon MQ does have an enforced policy of ‘ha-mode=all’ and ‘ha-sync-mode=automatic’, which overrides any custom policy related to these keys. Also, we do not support quorum queues at this time. You can edit the JSON exported from the existing RabbitMQ cluster to remove the definitions that are not supported. Perform the following steps to export and import the definitions from an existing RabbitMQ cluster; a scripted alternative follows the steps.

  1. Go to the RabbitMQ web console of your existing cluster by signing in to any of the brokers. On the Overview tab, choose ‘Export definitions’ and follow the link to download the definitions. The export is a JSON file that can be saved to your local disk.
  2. Next, log in to the Amazon MQ for RabbitMQ web console. On the Overview tab, choose ‘Import definitions’ and upload the JSON file exported in the previous step. Once the import completes, you can see all the queue and exchange definitions that were defined in the self-managed broker.
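If you prefer to script the migration, the same export and import can be performed through the RabbitMQ management HTTP API. The following is a hedged Python sketch using the requests library; the endpoints and credentials are placeholders. It downloads the definitions from the self-managed broker, strips out quorum queues (which Amazon MQ for RabbitMQ does not support), and uploads the result to the Amazon MQ broker:

import requests

# placeholder endpoints and credentials
src_url, src_auth = 'http://on-prem-broker:15672', ('srcuser', 'srcpass')
dest_url, dest_auth = 'https://b-xxxx.mq.us-east-1.amazonaws.com', ('admin', 'xxxxxxxx')

# export definitions from the self-managed broker
defs = requests.get(src_url + '/api/definitions', auth=src_auth).json()

# drop quorum queues, which are not supported on Amazon MQ for RabbitMQ
defs['queues'] = [q for q in defs.get('queues', [])
                  if q.get('arguments', {}).get('x-queue-type') != 'quorum']

# import the sanitized definitions into the Amazon MQ broker
requests.post(dest_url + '/api/definitions', json=defs, auth=dest_auth).raise_for_status()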

Building hybrid architectures

One of the biggest advantages of using RabbitMQ is its ability to federate messages across multiple clusters. As described in the RabbitMQ documentation, federation provides an opinionated distribution of messages across brokers. Amazon MQ supports the Federation plugin, and you can import your existing federation configurations into Amazon MQ. Federation can be used to extend your message processing capabilities beyond data center resources. The other plugin widely used for moving messages across exchanges or queues is the Shovel plugin. We will explore the various deployment topologies that can be set up with Federation and Shovel, and the use cases that these deployment architectures address:

Federation

The Federation plugin can be used to build a hybrid architecture between an Amazon MQ broker and an on-premises broker. It facilitates moving messages from an upstream (source) broker to a downstream (destination) broker. The plugin needs to be configured on the downstream broker, which in our case is the Amazon MQ broker. The pattern is shown below:


This architecture is the simplest way to configure Amazon MQ as the federated broker, and the pattern can be applied to extend message processing to the cloud. Federating the Amazon MQ broker on queues allows some consumers to run in the cloud while others remain on-premises. The key consideration with the Amazon MQ broker is that it only has direct access to resources over the public internet. This means that for federation to reach the upstream broker, the upstream broker must be either publicly accessible or fronted by a public proxy. If the on-premises broker has access to the Amazon MQ broker, it can also configure the Amazon MQ broker as its upstream, creating a pair topology.
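Because Amazon MQ does not provide shell access to the broker, federation is configured on the downstream Amazon MQ broker through the RabbitMQ management HTTP API (or the web console) rather than with rabbitmqctl. The following is a hedged Python sketch using the requests library; the endpoint, credentials, upstream URI, and pattern are all placeholders. It defines a federation upstream pointing at the on-premises broker and a policy that federates matching exchanges:

import requests

broker = 'https://b-xxxx.mq.us-east-1.amazonaws.com'  # Amazon MQ endpoint (placeholder)
auth = ('admin', 'xxxxxxxx')                          # broker credentials (placeholder)

# define the upstream (source) broker; %2F is the URL-encoded default vhost
upstream = {'value': {'uri': 'amqps://user:pass@onprem.example.com:5671'}}
requests.put(broker + '/api/parameters/federation-upstream/%2F/onprem-upstream',
             json=upstream, auth=auth).raise_for_status()

# apply a policy so exchanges matching the pattern are federated
policy = {'pattern': '^fed\\.', 'apply-to': 'exchanges',
          'definition': {'federation-upstream-set': 'all'}}
requests.put(broker + '/api/policies/%2F/federate-exchanges',
             json=policy, auth=auth).raise_for_status()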

Shovel

Shovel is a flexible plugin that unidirectionally moves messages. It can move messages between queues and exchanges within the same broker, or it can act as a bridge between two different brokers. The flexibility of the Shovel plugin can address the following hybrid patterns:

On-premises private RabbitMQ broker without internet access


In this pattern, the Shovel plugin is used to move messages from an on-premises private RabbitMQ broker to a private Amazon MQ broker. The on-premises broker in this case does not have internet access, and it has the Shovel plugin configured to push messages to the Amazon MQ broker. The pattern requires a VPN connection between the customer VPC and the on-premises network.
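Dynamic shovels are defined as runtime parameters on the broker that hosts them. The following is a hedged Python sketch of configuring the push shovel on the on-premises broker via the management HTTP API, using the requests library; all URIs, queue names, and credentials are placeholders:

import requests

onprem = 'http://localhost:15672'   # on-premises management endpoint (placeholder)
auth = ('admin', 'xxxxxxxx')        # broker credentials (placeholder)

shovel = {'value': {
    'src-protocol': 'amqp091',
    'src-uri': 'amqp://',           # empty URI connects to the local broker
    'src-queue': 'orders',
    'dest-protocol': 'amqp091',
    'dest-uri': 'amqps://user:pass@b-xxxx.mq.us-east-1.amazonaws.com:5671',
    'dest-queue': 'orders'
}}
requests.put(onprem + '/api/parameters/shovel/%2F/orders-to-cloud',
             json=shovel, auth=auth).raise_for_status()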

On-premises private RabbitMQ broker with internet access


In this pattern, the Shovel plugin builds a bridge between an on-premises RabbitMQ broker that has internet access and a private Amazon MQ broker, using a public Amazon MQ broker as an intermediary. The Shovel plugin on the on-premises broker pushes messages to the public Amazon MQ for RabbitMQ broker, whose queues or exchanges act as a staging area. A second shovel, configured on the private Amazon MQ for RabbitMQ broker, pulls messages from the public broker. Shovel is therefore configured both on the on-premises broker and on the private Amazon MQ for RabbitMQ broker.
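The pull side mirrors the push side. The following is a hedged sketch of the second shovel's parameter, created through the private Amazon MQ broker's management API in the same way as shown earlier; all URIs and queue names are placeholders:

# defined on the private Amazon MQ broker: pull from the public staging
# broker into a local queue
pull_shovel = {'value': {
    'src-protocol': 'amqp091',
    'src-uri': 'amqps://user:pass@b-public-xxxx.mq.us-east-1.amazonaws.com:5671',
    'src-queue': 'orders_staging',
    'dest-protocol': 'amqp091',
    'dest-uri': 'amqp://',          # empty URI connects to the local broker
    'dest-queue': 'orders'
}}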

Conclusion

In this blog, we described the overall architecture of Amazon MQ for RabbitMQ and covered some of the basics of messaging with RabbitMQ. You can get more details on specific RabbitMQ features from the official RabbitMQ documentation. We also looked at various deployment architectures that support hybrid patterns with Amazon MQ using the Federation and Shovel plugins. You can get more details on Amazon MQ for RabbitMQ in our developer guide.