AWS for Industries

Event-Driven Architecture for ISO 20022 Messaging Workflows on AWS

For decades, Financial Services Industry (FSI) organizations have relied on a variety of messaging standards to send and receive payments. For example, credit card payments use ISO 8583, cross-border payments rely on ISO 15022, and various depository institutions and market payments operators have developed their own proprietary messaging standards. With the introduction of ISO 20022, FSI organizations are adopting an industry-standard messaging format built on a common data dictionary that spans major FSI domains, such as payments, securities, trade, and foreign exchange. This post walks you through an open-sourced implementation of an Event-Driven Architecture for ISO 20022 Messaging Workflows on AWS.

Let’s open-source

Today, we are excited to open-source the ISO 20022 Messaging Workflows solution, designed to receive, process, and release ISO 20022 payment messages. This solution provides multi-Region, tunable consistency, with a decision-making process managed by API consumers that allows them to accept, reject, cancel, and re-drive data processing workflows, with failover across AWS Regions.

Next, we explore the reference architecture, show how to deploy this solution into your AWS account, and walk through some architecture choices and cost considerations.

Reference architecture

Event-Driven Architecture (EDA) is a widely used architectural paradigm. FSI organizations are adopting EDA to modernize their payment infrastructure with support for the ISO 20022 messaging standard. EDA is commonly paired with MicroServices Architecture (MSA) to share information efficiently between decoupled systems and components at scale. Working backward from customers' requirements, we created the following reference architecture.


Figure 1: EDA for ISO 20022 Messaging Workflows on AWS

The following steps describe the end-to-end lifecycle of payment messaging workflows in this solution, as illustrated in the preceding reference architecture diagram:

  1. API consumer calls the regional AUTH endpoint with the Region-specific Amazon Cognito client ID and client secret, and receives an OAuth 2.0 access token (to be used with all subsequent API requests).
  2. API consumer calls the regional API endpoint associated with the Transaction MSA and receives HTTP 200 with a response payload that includes a transaction ID (to be used with all subsequent API requests).
  3. Transaction MSA generates a UUID v4, verifies that it is unique within the current partition in Amazon DynamoDB (transaction table), and records the step in DynamoDB (status = ACCP); if the UUID is not unique, it retries up to three times.
  4. API consumer calls the regional API endpoint associated with the Incoming Queue, passing the transaction ID as an HTTP header and the ISO 20022 incoming message as the HTTP body (this step starts the internal event-driven workflow; a minimal consumer sketch follows this list).
  5. Incoming MSA consumes the ISO 20022 message from the Incoming Queue, stores it in an Amazon Simple Storage Service (Amazon S3) bucket (incoming path), records the step in DynamoDB (status = ACTC), and pushes the incoming message to the Processing Queue.
  6. Processing MSA consumes the ISO 20022 message from the Processing Queue, runs technical and business validations, including sync calls to other MSAs (FIPS 140-2, KYC, AML, Fraud, Liquidity, and so on), records the step in DynamoDB (status = ACSP or RJCT), and pushes the ISO 20022 confirmation or rejection message to the Releasing Queue.
  7. Releasing MSA consumes the ISO 20022 message from the Releasing Queue, stores it in the S3 bucket (outgoing path), records the step in DynamoDB (status = ACSC or RJCT), and pushes a notification to Amazon Simple Notification Service (Amazon SNS).
  8. API consumer calls the regional API endpoint associated with the Outgoing MSA and receives HTTP 200 with the ISO 20022 outgoing message as a response payload.
  9. Timeout MSA executes every 15 seconds, retrieves any transaction that exceeds the SLA, generates an ISO 20022 rejection message, stores it in Amazon S3 (outgoing path), and records the new step in DynamoDB (status = RJCT).
  10. Optionally, for on-premises downstream systems leveraging existing messaging capabilities (for example, IBM MQ or Apache Kafka), deploy the same tool in the cloud and use native replication between on-premises and cloud.
  11. MQ Reader MSA consumes messages from cloud-based MQ and submits them to the Incoming API (see the preceding Steps 1 through 5).
  12. MQ Writer MSA consumes messages from Outgoing API and pushes them to cloud-based MQ (see the preceding Steps 1, 2, and 9).
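
To make the API consumer side of this lifecycle concrete, here is a minimal sketch in Python of Steps 1, 2, 4, and 8. The /oauth2/token path is the standard Amazon Cognito token endpoint, but the /transaction, /inbox, and /outbox paths and the X-Transaction-Id header are illustrative assumptions rather than the solution's actual contract; check the repository's API documentation for the real routes and headers.

import requests  # third-party HTTP client (pip install requests)

AUTH_URL = "https://auth-us-east-1.example.com"  # regional AUTH endpoint
API_URL = "https://api-us-east-1.example.com"    # regional API endpoint

# Step 1: exchange the Region-specific Amazon Cognito client ID and
# client secret for an OAuth 2.0 access token (client_credentials grant).
token = requests.post(
    f"{AUTH_URL}/oauth2/token",
    data={"grant_type": "client_credentials"},
    auth=("MY_CLIENT_ID", "MY_CLIENT_SECRET"),  # replace with your values
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# Step 2: request a transaction ID from the Transaction MSA.
# The /transaction path is an illustrative assumption.
transaction_id = requests.post(
    f"{API_URL}/transaction", headers=headers
).json()["transaction_id"]

# Step 4: submit the ISO 20022 message (for example, pacs.008); this
# starts the internal event-driven workflow. The /inbox path and the
# X-Transaction-Id header are illustrative assumptions.
with open("pacs.008.xml", "rb") as f:
    requests.post(
        f"{API_URL}/inbox",
        headers={**headers, "X-Transaction-Id": transaction_id},
        data=f.read(),
    )

# Step 8: retrieve the ISO 20022 outgoing (confirmation or rejection)
# message once the workflow has released it.
outgoing = requests.get(
    f"{API_URL}/outbox",
    headers={**headers, "X-Transaction-Id": transaction_id},
)
print(outgoing.text)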

To make it easier to understand and digest, the following sequence diagram shows a simplified view of the interaction between API consumers and this solution.


Figure 2: End-to-end Flow of Events between API Consumers and ISO 20022 Messaging Workflows Solution
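
Internally, Steps 5 through 7 repeat the same consume, store, record, and forward pattern. The following is a minimal sketch of what such an MSA might look like as an AWS Lambda function behind an SQS event source mapping; the resource names, the DynamoDB item layout, and the use of an SQS message attribute for the transaction ID are placeholder assumptions, not the repository's actual implementation.

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.client("dynamodb")
sqs = boto3.client("sqs")

BUCKET = "my-iso20022-messages"  # placeholder
TABLE = "my-transaction-table"   # placeholder
NEXT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/processing"  # placeholder

def handler(event, context):
    # With an SQS event source mapping, Lambda delivers a batch of
    # SQS messages under event["Records"].
    for record in event["Records"]:
        # Assumes the transaction ID travels as an SQS message attribute.
        transaction_id = record["messageAttributes"]["transaction_id"]["stringValue"]
        body = record["body"]  # the ISO 20022 payload

        # Store the raw message in S3 (incoming path).
        s3.put_object(
            Bucket=BUCKET,
            Key=f"incoming/{transaction_id}.xml",
            Body=body.encode(),
        )

        # Record the workflow step in DynamoDB (status = ACTC).
        dynamodb.put_item(
            TableName=TABLE,
            Item={
                "transaction_id": {"S": transaction_id},
                "status": {"S": "ACTC"},
            },
        )

        # Forward the message to the next queue in the workflow.
        sqs.send_message(
            QueueUrl=NEXT_QUEUE_URL,
            MessageBody=body,
            MessageAttributes={
                "transaction_id": {"DataType": "String", "StringValue": transaction_id}
            },
        )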

Architecture choices

Some customers might decide not to leverage Amazon Simple Queue Service (Amazon SQS) and use ActiveMQ or RabbitMQ instead. In this case, you could opt for the managed service Amazon MQ for ActiveMQ brokers or Amazon MQ for RabbitMQ brokers. Some customers may choose a streaming mechanism instead of queuing. In that case, you could opt for Amazon Kinesis Data Streams, or Amazon Managed Streaming for Apache Kafka (Amazon MSK).
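
Swapping one transport for another is largely an integration-layer change. As a hedged illustration, the following boto3 calls publish the same ISO 20022 message through queuing (Amazon SQS) and through streaming (Kinesis Data Streams); the queue URL and stream name are placeholders.

import boto3

# The ISO 20022 payload to publish (placeholder file name).
message = open("pacs.008.xml").read()

# Queuing option: each message is consumed and deleted independently.
boto3.client("sqs").send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/incoming",  # placeholder
    MessageBody=message,
)

# Streaming option: partitioning by transaction ID keeps related
# events ordered within a shard.
boto3.client("kinesis").put_record(
    StreamName="iso20022-incoming",  # placeholder
    Data=message.encode(),
    PartitionKey="951c8ed3-ebc7-448b-acc3-11005544d991",  # e.g., the transaction ID
)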

Additionally, not all customers may choose to build and deploy their code using AWS serverless services, with one AWS Lambda function per MSA. Some customers may choose containers on AWS by leveraging Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Either way, you can build your code as Docker images and push them to Amazon Elastic Container Registry (Amazon ECR) repositories, which are supported by Lambda, Amazon ECS, and Amazon EKS.

Cost considerations

For cost transparency, we have estimated how much it would cost to deploy and operate this reference architecture on AWS, assuming the following average Transactions Per Second (TPS):

  1. 100 (one hundred) TPS
  2. 500 (five hundred) TPS
  3. 1,000 (one thousand) TPS

For this calculation, we assumed 4 KB per message and four messages per transaction. This leads to incrementally lower costs per unit (one thousand requests) as the volume grows, as shown in the following figure.


Figure 3: Cost estimations using AWS Pricing Calculator

In other words:

  • For 100 TPS: this solution would process 1.04 billion requests per month and store 1.04 TB of data per month, which would cost approx. $16.6K per month, or $0.064 per one thousand requests.
  • For 500 TPS: this solution would process 5.2 billion requests per month and store 5.2 TB of data per month, which would cost approx. $79.7K per month, or $0.0615 per one thousand requests.
  • For 1,000 TPS: this solution would process 10.4 billion requests per month and store 10.4 TB of data per month, which would cost approx. $158.5K per month, or $0.061 per one thousand requests.
  • Last, but not least: at 0 (zero) TPS you would pay only $1.50 per month. We'll state the obvious: achieving this level of scale-down in traditional environments is nearly impossible.
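
These volumes follow directly from the stated assumptions. As a quick back-of-the-envelope check for the 100 TPS case (assuming a 30-day month), note that the published storage figure works out to roughly 4 KB persisted per transaction:

# Back-of-the-envelope volume check for the 100 TPS case, assuming a
# 30-day month and four requests per transaction.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60      # 2,592,000 seconds
tps = 100
transactions = tps * SECONDS_PER_MONTH     # ~259.2 million transactions
api_requests = transactions * 4            # ~1.04 billion requests per month
storage_tb = transactions * 4e3 / 1e12     # ~1.04 TB at 4 KB stored per transaction
print(f"{api_requests / 1e9:.2f}B requests, {storage_tb:.2f} TB")  # 1.04B requests, 1.04 TB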

In summary, the Capital Expenditure (CapEx) of this solution is effectively zero, while the Operational Expense (OpEx) is incrementally lower per unit as the volume grows. Additionally, you could explore Savings Plans (for Lambda) and provisioned capacity (for DynamoDB) to further reduce the overall cost of this solution.

Solution deployment

We assume that you already have:

  • an AWS account with permissions to deploy this solution, and git and the AWS Command Line Interface (AWS CLI) installed locally
  • a custom domain that you control (for example, example.com) and access to its DNS provider
  • public certificates for that domain pre-configured in AWS Certificate Manager (ACM) (see the following note)
  • an S3 bucket for the deployment scripts to use

Note that if you select a target AWS Region other than us-east-1, make sure to create public certificates in both your target Region and us-east-1. The Amazon Cognito custom domain deploys its hosted UI using an Amazon CloudFront distribution under the hood, which requires the public certificate to be pre-configured in the us-east-1 Region.

As a first step, clone the repository and validate that foundational configurations are in place:

git clone https://github.com/aws-solutions-library-samples/guidance-for-iso20022-messaging-workflows-on-aws
cd ./guidance-for-iso20022-messaging-workflows-on-aws/
/bin/bash ./bin/validate.sh -q example.com -r us-east-1 -t my-s3-bucket-us-east-1

Make sure to replace example.com with your custom domain, us-east-1 with your target AWS Region, and my-s3-bucket-us-east-1 with your S3 bucket.

Next, deploy your AWS resources using the Continuous Integration/Continuous Deployment (CI/CD) mechanism:

/bin/bash ./bin/deploy.sh -q example.com -r us-east-1 -t my-s3-bucket-us-east-1

Make sure to replace example.com with your custom domain, us-east-1 with your target AWS Region, and my-s3-bucket-us-east-1 with your S3 bucket.

Once the build execution is successful, you should be able to navigate to the AWS CodeBuild console and see a newly created project named something like rp2-cicd-pipeline-abcd1234 (as shown in the following figure).


Figure 4: CI/CD Pipeline using CodeBuild

At this point you are ready to deploy this solution in your AWS account. To start the deployment process, select Start build in the AWS Management Console (as shown in the preceding figure) and view logs as they are produced in near real-time (as shown in the following figure).


Figure 5: Tail Logs in CodeBuild

Once the build execution is successful, you can navigate to the Amazon API Gateway console, select Custom domain names, select any of your custom domains from the list, and observe the API Gateway domain name under the Configuration tab (as shown in the following figure). Use these values to update your DNS provider.


Figure 6: Custom Domains in API Gateway

Similarly, once the build execution is successful, you can navigate to the Amazon Cognito console, select rp2-cognito-users from the list of user pools, and observe the Custom domain and Alias target under the App integration tab (as shown in the following figure). Use these values to update your DNS provider.


Figure 7: Custom Domain in Amazon Cognito

The suffix abcd1234 in your CodeBuild project name is the solution deployment ID. This value can be used to test this solution as soon as the build execution is successful:

/bin/bash ./bin/test.sh -q example.com -r us-east-1 -d abcd1234

Make sure to replace example.com with your custom domain, us-east-1 with your target AWS Region, and abcd1234 with your solution deployment ID.

Once the execution is successful, your test output should look something like the following figure:

$ /bin/bash ./bin/test.sh -q example.com -r us-east-1 -d abcd1234
[INFO] RP2_API_URL: https://api-us-east-1.example.com
[INFO] RP2_AUTH_URL: https://auth-us-east-1.example.com
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1015    0   920  100    95   2030    209 --:--:-- --:--:-- --:--:--  2250
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   331  100   331    0     0    136      0  0:00:02  0:00:02 --:--:--   136
[INFO] RP2_UUID: 951c8ed3-ebc7-448b-acc3-11005544d991
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5356  100   303  100  5053    637  10631 --:--:-- --:--:-- --:--:-- 11371
[INFO] RP2_INBOX: {
  "message": "transaction received successfully",
  "transaction_id": "951c8ed3-ebc7-448b-acc3-11005544d991",
  "request_id": "401dbc81-3fd7-482f-a645-987c772ae72e",
  "request_timestamp": 1688146506714,
  "region_id": "us-east-1",
  "api_endpoint": "https://api-us-east-1.example.com"
}

Figure 8: Test output example

And the transaction-related data in your DynamoDB table should look something like the following figure:


Figure 9: DynamoDB items example

Cleaning up

If you decide to clean up your AWS environment and remove all AWS resources deployed by this solution, then run the following two commands:

/bin/bash ./bin/deploy.sh -c true -d iac.src -q example.com -r us-east-1 -t my-s3-bucket-us-east-1
/bin/bash ./bin/deploy.sh -c true -d iac.cicd -q example.com -r us-east-1 -t my-s3-bucket-us-east-1

Make sure to replace example.com with your custom domain, us-east-1 with your target AWS Region, and my-s3-bucket-us-east-1 with your S3 bucket.

Conclusion

This post provided a walkthrough of the reference architecture and prescriptive guidance for an open-sourced implementation leveraging EDA for ISO 20022 Messaging Workflows. AWS can help you migrate to the cloud or modernize in the cloud by deploying this solution in a single AWS Region, or across multiple Regions using an active/passive disaster recovery (DR) strategy. To learn more, visit the AWS Solutions Library for Financial Services, Payments Modernization.

Eugene Istrati

Eugene is a Global Solutions Architect at AWS for Financial Services. Based in New York City, he spends most of his time helping Global Financial Services customers achieve their business goals through cloud-enabled technology solutions. Outside work, Eugene plays soccer (read: football) and travels the world with his family.

Jack Iu

Jack is a Global Solutions Architect at AWS Financial Services. Jack is based in New York City, where he works with Financial Services customers to help them design, deploy, and scale applications to achieve their business goals. In his spare time, he enjoys badminton and loves to spend time with his wife and Shiba Inu.