AWS Cloud Operations Blog
Achieve domain consistency in event-driven architectures
Application modernization is an important and growing migration strategy for many businesses. Most applications begin as a monolith, focusing on a specific business use case. As businesses grow, so do the complexity and number of business use cases that their monoliths must support. This causes monolith application components to become tightly coupled and less cohesive, making them difficult to maintain and scale further. Because of these challenges, businesses decompose monolithic applications into microservices and event-driven architectures. The benefits include the ability to optimize cost, increase developer productivity, and improve application agility, performance, and resiliency.
However, modernizing a monolith into an event-driven architecture presents a challenge regarding data consistency. Typically, services update the database and then publish an event to downstream systems. Both actions must succeed, or both must be rolled back. In this scenario, the traditional solution is to use a distributed transaction for a two-phase commit (2PC) that spans the database and the message broker. 2PC is a distributed transaction protocol used in database management systems to ensure the atomicity and consistency of transactions that span multiple databases or resources. However, many popular NoSQL databases and message brokers do not support 2PC. Without 2PC, we need an alternative mechanism to ensure that both the database update and the message publication succeed (one should not happen without the other).
To facilitate reliable message delivery without 2PC, we recommend you explore the transactional-outbox pattern for the event messages. With the transactional-outbox pattern, events are saved in a data store (typically in an outbox table) before they are pushed to a message broker. The pattern solves the inconsistency problem by writing to two database tables (an entity table and an outbox table) within the same database transaction scope, and then using the content written to the outbox table to drive the event publishing process. The record written to the outbox table describes a change event that happened in the service. Next, to capture this change event, we use a service that asynchronously monitors the outbox table. As soon as a new entry appears in the table, it is transformed into an event and published to the message broker.
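As a concrete illustration, the following minimal sketch uses Node.js with the node-postgres (pg) client to write a business entity row and its corresponding outbox row inside a single transaction. The table and column names (orders, outbox, aggregate_id, and so on) are hypothetical placeholders, not part of the architecture described later in this post.

// Minimal sketch: write the entity row and the outbox row in one transaction.
// Table and column names below are hypothetical placeholders.
const { Client } = require('pg');

async function createOrder(order) {
  const client = new Client(); // connection settings come from environment variables
  await client.connect();
  try {
    await client.query('BEGIN');
    // 1. Write the business entity
    await client.query(
      'INSERT INTO orders (id, customer_id, total) VALUES ($1, $2, $3)',
      [order.id, order.customerId, order.total]
    );
    // 2. Write the change event to the outbox table in the same transaction
    await client.query(
      'INSERT INTO outbox (aggregate_id, event_type, payload) VALUES ($1, $2, $3)',
      [order.id, 'OrderCreated', JSON.stringify(order)]
    );
    await client.query('COMMIT'); // both rows are committed atomically
  } catch (err) {
    await client.query('ROLLBACK'); // neither row is persisted on failure
    throw err;
  } finally {
    await client.end();
  }
}

Because both inserts share one transaction, the outbox record exists if and only if the entity change was committed, which is exactly the guarantee the message publisher relies on.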
Challenges
The following are common challenges we see when modernizing monolith applications to purpose-built databases and serverless technologies:
- Network latency for applications designed to run synchronous, short-lived transactions in high-throughput, enclosed environments. A hybrid architecture spanning cloud-based microservices and on-premises legacy applications exacerbates this issue.
- Consistency requirements for distributed transactions, which require that eventual consistency is achieved when a transaction succeeds and that a rollback occurs when it fails.
- Entangled complexity of the monolith’s database, where different parts of the application may use and rely on the same data for different purposes. When changes are made to the database to accommodate one part of the application, they can inadvertently impact other parts of the application that rely on the same data. This is particularly relevant when microservices depend on reference tables owned by the monolith. Microservice developers must understand how a business entity is written across multiple tables in the monolith in order to understand the impact of changes, which monolith services and features cause that entity to change, and how cascading updates propagate when it does.
Solution
To address these challenges, we propose an architecture that enables domain consistency in a decoupled event-driven architecture. This architecture increases resiliency by applying the transactional-outbox pattern to domain-database transactions. In this blog post, we explain how to reliably update a database and raise an event message by applying the transactional-outbox pattern in event-driven architectures. We provide an architecture and example source code to show you how to apply this pattern with your monolith workloads.
Implementation steps
Let’s see how we can implement the preceding solution in AWS. The following architecture diagram describes the proposed solution to address the challenges of achieving domain consistency in event-driven architectures.
Consider an organization that has a legacy monolith, and has developed two microservices as part of their modernization strategy. It is important that transactions in the monolith are replicated to the microservices, while also maintaining the consistency of transactions in all components.
- Create an event bus using Amazon EventBridge
To support event choreography between the domains, first set up a custom event bus using Amazon EventBridge. EventBridge is a serverless event bus service that helps you connect your applications with data from a variety of sources. An event bus is a pipeline that receives events and evaluates them against rules as they arrive. Follow these steps to create an event bus; a minimal SDK sketch is shown below.
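For example, the following sketch uses the AWS SDK for JavaScript (v2) to create the custom event bus. The bus name MyEventBus is an assumption chosen to match the Lambda example later in this post.

// Minimal sketch: create a custom EventBridge event bus with the AWS SDK for JavaScript (v2)
const AWS = require('aws-sdk');
AWS.config.region = process.env.AWS_REGION || 'us-east-1';
const eventbridge = new AWS.EventBridge();

async function createBus() {
  // Creates the custom bus that the monolith and microservices will publish to
  const result = await eventbridge.createEventBus({ Name: 'MyEventBus' }).promise();
  console.log('Created event bus:', result.EventBusArn);
}

createBus().catch(console.error);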
- Detect changes on the monolith’s relational database
Set up change data capture (CDC) to detect changes on the monolith database with AWS Database Migration Service (AWS DMS) and Amazon Kinesis Data Streams. AWS DMS performs near-continuous data replication using CDC, which lets you determine and track the required changes. You can then stream the data changes by setting a Kinesis data stream as the target of AWS DMS; a sketch of the target endpoint configuration follows.
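As a hedged illustration of the CDC wiring, the following sketch registers a Kinesis data stream as a DMS target endpoint. The stream ARN, role ARN, and endpoint identifier are hypothetical placeholders, and you would still need a DMS replication instance, a source endpoint for the monolith database, and a CDC replication task.

// Minimal sketch: register a Kinesis data stream as an AWS DMS target endpoint.
// ARNs and identifiers below are hypothetical placeholders.
const AWS = require('aws-sdk');
AWS.config.region = process.env.AWS_REGION || 'us-east-1';
const dms = new AWS.DMS();

async function createKinesisTarget() {
  const result = await dms.createEndpoint({
    EndpointIdentifier: 'monolith-cdc-target',
    EndpointType: 'target',
    EngineName: 'kinesis',
    KinesisSettings: {
      StreamArn: 'arn:aws:kinesis:us-east-1:111122223333:stream/monolith-cdc-stream',
      MessageFormat: 'json',
      // Role that allows DMS to write to the stream
      ServiceAccessRoleArn: 'arn:aws:iam::111122223333:role/dms-kinesis-access-role'
    }
  }).promise();
  console.log('Created DMS endpoint:', result.Endpoint.EndpointArn);
}

createKinesisTarget().catch(console.error);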
- Create an outbox table in the monolith’s database
Create an outbox table that records the events to be published following any database change (a possible table shape is sketched below). Then wrap the write operations on both the entity tables and the outbox table inside a single transaction, as illustrated in the earlier sketch. This helps you maintain the atomicity, consistency, isolation, and durability (ACID) properties.
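A possible shape for the outbox table is sketched below, again using node-postgres and assuming a PostgreSQL-compatible database. The column names are hypothetical, but the table typically records the aggregate identifier, the event type, a serialized payload, and a timestamp.

// Minimal sketch: create a hypothetical outbox table in the monolith's database.
const { Client } = require('pg');

async function createOutboxTable() {
  const client = new Client();
  await client.connect();
  await client.query(`
    CREATE TABLE IF NOT EXISTS outbox (
      id           BIGSERIAL PRIMARY KEY,
      aggregate_id VARCHAR(64) NOT NULL,   -- business entity the event belongs to
      event_type   VARCHAR(128) NOT NULL,  -- for example 'OrderCreated'
      payload      JSONB NOT NULL,         -- serialized domain event
      created_at   TIMESTAMPTZ NOT NULL DEFAULT now()
    )
  `);
  await client.end();
}

createOutboxTable().catch(console.error);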
- Capture the inserts in the monolith’s outbox table and send them to EventBridge
Capture the inserts in the outbox table created in the previous step using an AWS Lambda function. The function listens to the CDC Kinesis data stream and publishes the inserts as domain events on the event bus. An example Lambda function that captures Kinesis data stream records and writes the incoming event data to EventBridge is as follows (read more here).

console.log('Loading function');

// receive a Kinesis event input
exports.handler = function(event, context) {
  //console.log(JSON.stringify(event, null, 2));
  event.Records.forEach(function(record) {
    // Kinesis data is base64 encoded so decode here
    var payload = Buffer.from(record.kinesis.data, 'base64').toString('ascii');
    console.log('Decoded payload:', payload);
    // Push the event to EventBridge
  });
}
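The comment at the end of the loop is where the decoded payload would be forwarded to EventBridge. One way to complete it, sketched under the assumption that the bus is named MyEventBus, that the source name MonolithOutbox is acceptable, and that the decoded payload is a JSON string, follows the same putEvents approach shown for the microservice later in this post.

// Sketch of the "push the event to EventBridge" step; bus and source names are assumptions.
const AWS = require('aws-sdk');
AWS.config.region = process.env.AWS_REGION || 'us-east-1';
const eventbridge = new AWS.EventBridge();

exports.handler = async (event) => {
  const entries = event.Records.map((record) => {
    // Kinesis data is base64 encoded, so decode it first
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('ascii');
    return {
      Source: 'MonolithOutbox',     // hypothetical source name
      EventBusName: 'MyEventBus',
      DetailType: 'transaction',
      Time: new Date(),
      Detail: payload               // the decoded outbox record as JSON
    };
  });
  // putEvents accepts up to 10 entries per call; batch accordingly for larger Kinesis batches
  return eventbridge.putEvents({ Entries: entries }).promise();
};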
- Capture the EventBridge events and trigger a Lambda function on your microservices
Capture the monolith’s events by setting up a rule in EventBridge, and push them into an Amazon Simple Queue Service (Amazon SQS) queue. EventBridge rules evaluate events as they arrive to check whether an event matches the rule’s criteria. Then handle the events by triggering an associated Lambda function. Set up a dead-letter queue (DLQ) as a failure destination to capture events that fail to be processed. A minimal sketch of the rule and target configuration follows.
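The following sketch creates a rule on the custom bus that matches the monolith’s events and targets a hypothetical SQS queue, with a second queue acting as the DLQ for events that EventBridge cannot deliver. The queue ARNs, rule name, and event pattern are placeholders.

// Minimal sketch: route the monolith's events from the custom bus into an SQS queue.
// Queue ARNs, rule name, and event pattern below are hypothetical placeholders.
const AWS = require('aws-sdk');
AWS.config.region = process.env.AWS_REGION || 'us-east-1';
const eventbridge = new AWS.EventBridge();

async function createRuleAndTarget() {
  // Rule that matches events published by the monolith's outbox Lambda function
  await eventbridge.putRule({
    Name: 'monolith-transactions',
    EventBusName: 'MyEventBus',
    EventPattern: JSON.stringify({ source: ['MonolithOutbox'] })
  }).promise();

  // Send matching events to the microservice's SQS queue, with a DLQ for failed deliveries.
  // The queue's resource policy must also allow EventBridge to send messages (not shown).
  await eventbridge.putTargets({
    Rule: 'monolith-transactions',
    EventBusName: 'MyEventBus',
    Targets: [{
      Id: 'microservice-queue',
      Arn: 'arn:aws:sqs:us-east-1:111122223333:microservice-queue',
      DeadLetterConfig: {
        Arn: 'arn:aws:sqs:us-east-1:111122223333:microservice-dlq'
      }
    }]
  }).promise();
}

createRuleAndTarget().catch(console.error);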
- Use the Lambda function to update a projection table on DynamoDB
Using Amazon DynamoDB, create a projection table with a data model that represents the monolith’s domain. This only needs to contain the monolith’s properties that are relevant to the microservice’s features. Use the incoming monolith events to populate the projection table. Given the asynchronous nature of event-driven architectures, the projection table will be eventually consistent with the monolith’s database. An example Lambda function to populate the projection table is as follows.

// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');

// Create the DynamoDB service object
var dynamodb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });

exports.handler = function(event, context) {
  //console.log(JSON.stringify(event, null, 2));
  var params = {
    //Event from EventBridge
  };

  // Call DynamoDB to add the item to the table
  dynamodb.putItem(params, function(err, data) {
    if (err) {
      console.log("Error", err);
    } else {
      console.log("Success", data);
    }
  });
}
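The params placeholder is where the incoming event is mapped onto the projection table’s schema. A hypothetical mapping, assuming the Lambda function is invoked by the SQS queue from the previous step, the projection table is named MonolithProjection with a partition key of aggregateId, and the outbox payload uses the field names from the earlier sketches, could look like the following; each SQS record would be passed through this helper and then to dynamodb.putItem.

// Hypothetical mapping from an incoming event to the projection table.
// Table name, key names, and the event shape are assumptions for illustration.
function buildPutItemParams(sqsRecord) {
  var eventBridgeEvent = JSON.parse(sqsRecord.body); // EventBridge event delivered through SQS
  var detail = eventBridgeEvent.detail;              // the monolith's outbox payload
  return {
    TableName: 'MonolithProjection',
    Item: {
      aggregateId: { S: String(detail.aggregate_id) }, // partition key of the projection table
      eventType:   { S: String(detail.event_type) },
      payload:     { S: JSON.stringify(detail.payload) }
    }
  };
}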
- On the DynamoDB table, repeat the outbox mechanism
Using DynamoDB Transactions, repeat the same transactional mechanism around the microservice’s entity tables and its event outbox table. Insert events into the outbox table that represent changes in the microservice’s domain aggregates, as in the sketch below.
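A minimal sketch of that dual write, using the DynamoDB DocumentClient with hypothetical table names (Orders and Outbox) and attributes, could look like this.

// Minimal sketch: write the microservice's entity item and its outbox item in one
// DynamoDB transaction. Table names and attributes are hypothetical placeholders.
const AWS = require('aws-sdk');
AWS.config.region = process.env.AWS_REGION || 'us-east-1';
const documentClient = new AWS.DynamoDB.DocumentClient();

async function writeAggregateWithOutbox(order) {
  await documentClient.transactWrite({
    TransactItems: [
      {
        // 1. The business entity item
        Put: {
          TableName: 'Orders',
          Item: { orderId: order.id, customerId: order.customerId, total: order.total }
        }
      },
      {
        // 2. The outbox item describing the change event, written atomically with the entity
        Put: {
          TableName: 'Outbox',
          Item: {
            eventId: order.id + '#OrderCreated',
            eventType: 'OrderCreated',
            payload: JSON.stringify(order),
            createdAt: new Date().toISOString()
          }
        }
      }
    ]
  }).promise();
}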
- Capture outbox events with DynamoDB Streams
Using DynamoDB Streams, capture the inserts on the microservice’s event outbox table. A sketch of enabling the stream and wiring it to a Lambda function follows.
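The following sketch enables a stream of new item images on the hypothetical Outbox table and subscribes a hypothetical Lambda function to it; the table name, function name, and batch size are placeholders.

// Minimal sketch: enable DynamoDB Streams on the outbox table and subscribe a Lambda function.
// Table name, function name, and batch size are hypothetical placeholders.
const AWS = require('aws-sdk');
AWS.config.region = process.env.AWS_REGION || 'us-east-1';
const dynamodb = new AWS.DynamoDB();
const lambda = new AWS.Lambda();

async function wireOutboxStream() {
  // Emit the full new item image for every insert or update on the outbox table
  const table = await dynamodb.updateTable({
    TableName: 'Outbox',
    StreamSpecification: { StreamEnabled: true, StreamViewType: 'NEW_IMAGE' }
  }).promise();

  // Trigger the publisher Lambda function for each batch of stream records
  await lambda.createEventSourceMapping({
    EventSourceArn: table.TableDescription.LatestStreamArn,
    FunctionName: 'outbox-publisher',
    StartingPosition: 'LATEST',
    BatchSize: 10
  }).promise();
}

wireOutboxStream().catch(console.error);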
- Publish outbox events from DynamoDB into EventBridge
Handle the data stream with a Lambda function, publishing the inserts as domain events on the event bus. An example Lambda function that publishes DynamoDB events to EventBridge is as follows (read more here).
// Load the AWS SDK for Node.js
const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION || 'us-east-1'
const eventbridge = new AWS.EventBridge()

exports.handler = async (event) => {
  //console.log(JSON.stringify(event, null, 2));
  let payload = { Entries: [] };
  event.Records.forEach((item) => {
    payload.Entries.push({
      // Event envelope fields
      Source: 'MyDynamoStream',
      EventBusName: 'MyEventBus',
      DetailType: 'transaction',
      Time: new Date(),
      // Main event body
      Detail: JSON.stringify(item)
    });
  });
  //console.log("Payload to EventBridge");
  //console.log(payload);
  const result = await eventbridge.putEvents(payload).promise();
  //console.log('EventBridge result');
  //console.log(result);
}
Conclusion
In this blog post, we described how you can increase the resilience of your event-driven architectures by applying the transactional-outbox pattern to your database transactions.
The transactional-outbox pattern solves the problem of how to atomically update a domain database and send messages to a message broker in an event-driven architecture. It can be used both in native microservice architectures for greenfield projects and in monolith modernization scenarios. In the latter, you’ll typically use the strangler fig pattern to decompose the monolith. During the transition, you use the transactional-outbox pattern to ensure data consistency between the monolith and the microservices, a temporary measure until the monolith is fully decomposed.
For this blog post, we assumed a scenario where an organization wants to modernize a legacy on-premises monolith by extending it with cloud-native microservices. To facilitate data consistency, we applied the transactional-outbox pattern using an event-driven architecture on AWS. We provided a high-level walkthrough of the steps that you can apply to your existing legacy monoliths.
The PDF version of the architectural diagram we described is available here. For other reference architectures, visit the AWS Architecture Center.