This Guidance demonstrates near real-time processing of account posting events for payment systems. Payment systems must persistently store and idempotently process all customer transactions and activities to maintain data integrity, requiring relational databases with transactional capabilities. These applications often use synchronous requests and must commit transactions to databases one by one rather than concurrently. This Guidance aims to create asynchronous, event-driven architectural patterns in a system that you can automatically deploy in your environment through infrastructure as code (IaC).

Please note: [Disclaimer]

Architecture Diagram

[Architecture diagram description]

Download the architecture diagram PDF 

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

  • AWS CloudFormation helps you automate your infrastructure building and deployment through IaC templates, helping you limit human error and make small, repeatable, incremental, and reversible changes. AWS X-Ray traces functions and events, generating an application topology map that you can use to improve performance and identify bottlenecks during troubleshooting. Amazon CloudWatch acts as the central telemetry storage and provides log collection, dashboards, alarms, and analysis capabilities from Lambda and Step Functions. AWS CloudTrail monitors and records all API activities across your AWS accounts, giving you the ability to audit activity or API calls made to EventBridge, Lambda, and Step Functions.

    Read the Operational Excellence whitepaper 
  • AWS Identity and Access Management (IAM) establishes a strong identity and authorization foundation, and you can set up identity-based policies following the least-privilege principle to limit the access Lambda, Step Functions, and EventBridge have to downstream AWS services. You can also use resource-based policies to limit access further. Additionally, this Guidance lets you encrypt sensitive data from start to finish. AWS Key Management Service (AWS KMS) provides the ability to securely decrypt and encrypt data at rest. Step Functions and EventBridge encrypt data at rest and in transit, and Lambda encrypts data in transit. You can securely store Lambda code secrets using AWS Secrets Manager.

    Read the Security whitepaper 
  • This Guidance enables you to use EventBridge, Lambda, and Step Functions in combination to create an event-driven, fault-tolerant architecture. These three services are Regional and are deployed across multiple Availability Zones (AZs). EventBridge uses buses and rules to implement a publish-subscribe model with downstream targets; this loose coupling lets components scale independently. Powertools for AWS Lambda (Python) lets you write idempotent functions so that each request is processed exactly once. Lambda functions are stateless by design and scale independently, and Lambda sends failed requests to an Amazon SQS dead-letter queue for fault isolation and further troubleshooting. Step Functions provides built-in error handling, time-outs, and parallel processing to run your distributed application reliably.

    Read the Reliability whitepaper 
  • Lambda manages its own scaling mechanism when invoked asynchronously by EventBridge, and the serverless architecture removes the need for you to run and maintain physical servers for compute activities. Step Functions orchestrates business processes, and in the event of a time-out, you can gracefully terminate long-running or stuck calls or implement an alternative task. Additionally, DynamoDB is inherently designed to process large volumes of data with high performance. Its on-demand mode enables it to serve a large number of requests without any capacity planning. By carefully designing the DynamoDB primary key, you can build tables with a large number of distinct values and avoid throttling while reading or writing.

    Read the Performance Efficiency whitepaper 
  • This Guidance uses serverless services with a pay-for-value billing model, so you can lower your total application cost: you don't pay for overprovisioning, and resource utilization is optimized on your behalf. This also lowers your operational costs, because you don't have to manage infrastructure or apply patches. Additionally, EventBridge pipes provide a consistent and cost-effective way to create point-to-point integrations between event producers and consumers. The DynamoDB on-demand capacity mode scales with traffic and helps you avoid overprovisioning or underprovisioning database resources. Finally, you can run Lambda functions on the Arm-based architecture powered by AWS Graviton2 processors for up to 20 percent lower cost.

    Read the Cost Optimization whitepaper 
  • The architecture uses AWS serverless services, which are elastic by design and only provision the resources necessary to complete the required tasks. The use of AWS Graviton2 processors in Lambda can deliver up to 19 percent better performance at 20 percent lower cost, reducing your energy consumption. By using direct service integration with Step Functions, you can further reduce the carbon footprint of your workload and avoid running unnecessary components.

    Read the Sustainability whitepaper 
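The Operational Excellence pillar above notes that CloudWatch acts as the central telemetry store for Lambda logs. As a minimal stdlib-only sketch (Powertools for AWS Lambda provides a richer structured logger), a handler might emit one JSON log line per event so that CloudWatch Logs Insights can query individual fields; the event name and account id below are hypothetical:

```python
import json
import logging

logger = logging.getLogger("postings")
logger.setLevel(logging.INFO)

def emit_structured(event_type: str, detail: dict) -> str:
    """Render one JSON log line; CloudWatch Logs Insights can filter on these keys."""
    record = {"level": "INFO", "event_type": event_type, **detail}
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line

# Example: log an account-posting event before handing it off for processing.
line = emit_structured(
    "posting.received",
    {"account_id": "hypothetical-123", "amount": "42.00"},
)
```

Emitting JSON rather than free-form text is what makes the dashboards and alarms mentioned above straightforward to build.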
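The Security pillar above recommends identity-based policies that follow the least-privilege principle. A sketch of such a policy for a Lambda execution role might look like the following; the account id, Region, bus name, and state machine name are hypothetical placeholders, and the policy grants only the two specific actions on specific resources rather than wildcards:

```python
import json

# Least-privilege, identity-based policy: the function may only put events onto
# one EventBridge bus and start one Step Functions state machine. All ARNs are
# illustrative placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["events:PutEvents"],
            "Resource": "arn:aws:events:us-east-1:111122223333:event-bus/postings-bus",
        },
        {
            "Effect": "Allow",
            "Action": ["states:StartExecution"],
            "Resource": "arn:aws:states:us-east-1:111122223333:stateMachine:postings",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Note the absence of `"Action": "*"` or `"Resource": "*"`; scoping to named resources is what limits the blast radius if the function's credentials are misused.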
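The Reliability pillar above relies on idempotent processing so that a replayed event never posts a transaction twice. Powertools for AWS Lambda implements this pattern backed by a DynamoDB table; the sketch below substitutes an in-memory dict for the store to show the shape of the pattern (the event fields and function names are illustrative):

```python
# Idempotency sketch: a persisted result keyed on the event's unique id means
# duplicate deliveries return the stored result instead of re-running the
# side effect. Powertools backs this store with DynamoDB; a dict stands in here.
_results: dict = {}
_side_effects = {"posts": 0}

def post_to_ledger(event: dict) -> dict:
    """The non-idempotent side effect we want to run at most once per event id."""
    _side_effects["posts"] += 1
    return {"status": "POSTED", "id": event["id"], "amount": event["amount"]}

def process_once(event: dict) -> dict:
    """Idempotent wrapper: compute and store on first delivery, replay afterward."""
    key = event["id"]
    if key not in _results:
        _results[key] = post_to_ledger(event)
    return _results[key]

first = process_once({"id": "evt-1", "amount": "10.00"})
replay = process_once({"id": "evt-1", "amount": "10.00"})  # duplicate delivery
```

In a real deployment the idempotency record also needs an expiry and an in-progress marker to handle concurrent deliveries, both of which the Powertools utility provides.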
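The Performance Efficiency pillar above points out that a carefully designed DynamoDB primary key avoids throttling. One common sketch, assuming a single-table design: use a high-cardinality partition key (one value per account) so writes spread across partitions, and an ISO-8601 timestamp in the sort key so one account's postings stay time-ordered. The `PK`/`SK` attribute names and prefixes are an illustrative convention, not a DynamoDB requirement:

```python
def posting_keys(account_id: str, posted_at: str) -> dict:
    """Compose item keys: high-cardinality PK spreads load; ISO-8601 timestamps
    in the SK sort lexicographically, so range queries return postings in
    chronological order."""
    return {"PK": f"ACCOUNT#{account_id}", "SK": f"POSTING#{posted_at}"}

a1 = posting_keys("acct-1", "2024-01-15T09:30:00Z")
a2 = posting_keys("acct-2", "2024-01-15T09:30:00Z")
later = posting_keys("acct-1", "2024-01-15T10:00:00Z")
```

Because every account yields a distinct partition key, traffic spread across many accounts never concentrates on a single partition.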
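The Cost Optimization pillar above mentions running Lambda on Arm-based Graviton2 processors. Selecting that architecture is a one-line configuration choice; as a sketch, these are parameters one might pass to the boto3 Lambda client's `create_function` call (the function name, role ARN, and empty code package are hypothetical placeholders):

```python
# Selecting the Arm (Graviton2) architecture when creating a function.
# Only "Architectures" is the point here; the other values are placeholders.
create_function_params = {
    "FunctionName": "postings-processor",
    "Runtime": "python3.12",
    "Architectures": ["arm64"],  # default is ["x86_64"]
    "Handler": "app.handler",
    "Role": "arn:aws:iam::111122223333:role/postings-lambda-role",
    "Code": {"ZipFile": b""},  # placeholder; real deployments supply a package
}
```

In IaC templates the equivalent is the `Architectures` property on the `AWS::Lambda::Function` resource, so the switch can be made without touching application code, provided any native dependencies are rebuilt for arm64.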

Implementation Resources

A detailed implementation guide is provided for you to experiment with this Guidance in your own AWS account. It walks through each stage of working with the Guidance, including deployment, usage, and cleanup.

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
