This Guidance helps you build an order management system (OMS) on AWS using cloud-native services. By building your OMS in the cloud, you can incorporate an event-driven workflow that streamlines orders from order entry to fulfillment. This Guidance also helps you manage and analyze the data within your OMS so you can generate insights that improve your customer experience.
Architecture Diagram
Step 1
Enterprise applications feed data into the OMS, including facility attribute data (such as store and warehouse data), product data, and inventory data. eCommerce order data is also sent to the OMS for allocation and release.
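To make these feeds concrete, the following sketch shows what inbound records might look like. Every field name here is an illustrative assumption, not a schema defined by this Guidance.

```python
# Illustrative feed records; every field name here is an assumption.
facility_feed = {
    "facilityId": "STORE-042",
    "type": "STORE",              # or "WAREHOUSE"
    "address": {"city": "Seattle", "region": "WA"},
}

product_feed = {
    "sku": "SKU-1001",
    "description": "Wireless mouse",
    "uom": "EACH",
}

inventory_feed = {
    "sku": "SKU-1001",
    "facilityId": "STORE-042",
    "quantityOnHand": 25,
}

ecommerce_order = {
    "orderId": "ORD-9001",
    "lines": [{"sku": "SKU-1001", "quantity": 2}],
    "status": "NEW",              # awaiting allocation and release
}
```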
Step 2
The integration layer consists of multiple AWS services that support file transfer for external file feeds, APIs, event-driven patterns, and streaming for inventory and master data.
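As an example of the streaming path, a producer might publish inventory updates to an Amazon Kinesis data stream with the AWS SDK for Python (Boto3). The stream name and record fields below are assumptions for illustration.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_inventory_update(sku: str, facility_id: str, quantity: int) -> None:
    """Send an inventory update into the integration layer's stream."""
    record = {"sku": sku, "facilityId": facility_id, "quantity": quantity}
    kinesis.put_record(
        StreamName="oms-inventory-stream",  # assumed stream name
        Data=json.dumps(record),
        PartitionKey=sku,  # keeps updates for one SKU in order
    )
```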
Step 3
The extract, transform, load (ETL) layer consists of AWS Lambda functions that consume and publish data to Amazon Kinesis Data Streams and Amazon EventBridge. AWS Glue loads and transforms data for batch transactions.
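A minimal sketch of one ETL Lambda function in this pattern, assuming a Kinesis trigger plus an assumed event bus name, source, and detail type: it decodes each Kinesis record and republishes it to EventBridge.

```python
import base64
import json
import boto3

events = boto3.client("events")

def handler(event, context):
    """Consume Kinesis records and republish them as OMS domain events."""
    entries = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        entries.append({
            "Source": "oms.etl",               # assumed event source name
            "DetailType": "InventoryUpdated",  # assumed detail type
            "Detail": json.dumps(payload),
            "EventBusName": "oms-event-bus",   # assumed bus name
        })
    # PutEvents accepts at most 10 entries per call, so send in batches.
    for i in range(0, len(entries), 10):
        events.put_events(Entries=entries[i:i + 10])
```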
Step 4
The data layer consists of Amazon Aurora for transactional data and Amazon DynamoDB, which serves requests at low latency.
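For the DynamoDB side of the data layer, a point read by primary key is the typical low-latency access pattern. The table and key names below are assumptions.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("oms-orders")  # assumed table name

def get_order(order_id: str):
    """Fetch a single order with a low-latency point read by primary key."""
    response = orders.get_item(Key={"orderId": order_id})  # assumed key name
    return response.get("Item")  # None if the order does not exist
```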
Step 5
The OMS exposes a graphical user interface (GUI) that associates use to create and modify orders. The GUI, in turn, invokes the necessary APIs in the API layer.
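From the GUI's perspective, creating an order is an HTTP call to the API layer. This sketch uses Python's third-party requests library against a hypothetical endpoint; the URL, path, and auth scheme are assumptions.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical API endpoint; replace with your deployed API stage URL.
API_BASE = "https://example.execute-api.us-east-1.amazonaws.com/prod"

def create_order(payload: dict, token: str) -> dict:
    """Submit a new order on behalf of the GUI."""
    response = requests.post(
        f"{API_BASE}/orders",
        json=payload,
        headers={"Authorization": token},  # assumed auth header
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```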
Step 6
The API layer consists of Lambda functions. The OMS presentation layer and other applications, such as eCommerce front ends and customer care tools, invoke these functions.
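A minimal sketch of one such API-layer function, assuming an Amazon API Gateway proxy integration and the same hypothetical oms-orders table: it parses the request body and persists the new order.

```python
import json
import uuid
import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("oms-orders")  # assumed table name

def handler(event, context):
    """Handle POST /orders from the presentation layer via API Gateway."""
    body = json.loads(event["body"])
    order = {
        "orderId": str(uuid.uuid4()),
        "status": "CREATED",
        "lines": body["lines"],  # assumed request shape: [{sku, quantity}]
    }
    orders.put_item(Item=order)
    return {
        "statusCode": 201,
        "body": json.dumps({"orderId": order["orderId"]}),
    }
```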
Step 7
The allocation engine consists of Lambda functions and AWS Step Functions. These services run the allocation logic, identify the appropriate facility to fulfill each order, and publish the allocated eCommerce orders to EventBridge.
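The allocation logic itself is business-specific. The following is a deliberately simplified sketch, not the engine this Guidance ships: it picks the nearest facility that can cover every order line, and all field names are assumptions.

```python
def allocate(order: dict, facilities: list) -> dict | None:
    """Toy allocation rule: choose the nearest facility that can fulfill
    every line on the order. Real engines also weigh cost, capacity,
    and delivery promises."""
    def can_fulfill(facility: dict) -> bool:
        stock = facility["inventory"]  # assumed {sku: quantity} map
        return all(
            stock.get(line["sku"], 0) >= line["quantity"]
            for line in order["lines"]
        )

    candidates = [f for f in facilities if can_fulfill(f)]
    if not candidates:
        return None  # fall back to split shipment or backorder
    return min(candidates, key=lambda f: f["distanceKm"])  # assumed field
```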
Step 8
EventBridge sends the allocated orders to fulfillment applications. Associates pick and pack the items and send shipment confirmations back to the OMS.
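The confirmation path can reuse the same event bus. A fulfillment application might publish a shipment confirmation like this; the source, detail type, and bus name are assumptions.

```python
import json
import boto3

events = boto3.client("events")

def confirm_shipment(order_id: str, tracking_number: str) -> None:
    """Publish a shipment confirmation back to the OMS after pick and pack."""
    events.put_events(Entries=[{
        "Source": "fulfillment.app",        # assumed source name
        "DetailType": "ShipmentConfirmed",  # assumed detail type
        "Detail": json.dumps({
            "orderId": order_id,
            "trackingNumber": tracking_number,
        }),
        "EventBusName": "oms-event-bus",    # assumed bus name
    }])
```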
Step 9
The integration layer also sends data to the analytics layer, where Amazon Redshift generates insights on order-processing efficiency.
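As one example of such an insight, you could measure average hours from order creation to shipment per facility. This sketch runs the query through the Redshift Data API; the workgroup, database, table, and column names are assumptions.

```python
import boto3

redshift_data = boto3.client("redshift-data")

def run_efficiency_report() -> str:
    """Start an order-processing-efficiency query; returns the statement ID
    so the caller can poll for results."""
    response = redshift_data.execute_statement(
        WorkgroupName="oms-analytics",  # assumed Redshift Serverless workgroup
        Database="oms",                 # assumed database name
        Sql="""
            SELECT facility_id,
                   AVG(DATEDIFF(hour, order_created_at, shipped_at)) AS avg_hours
            FROM order_facts            -- assumed fact table
            GROUP BY facility_id
            ORDER BY avg_hours;
        """,
    )
    return response["Id"]
```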
Step 10
Third-party applications provide functionality based on specific tasks and interact with OMS through the integration layer.
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
This architecture follows a microservices approach, meaning that services are decoupled from one another. This allows you to make small, frequent, and reversible changes to the architecture. Additionally, if one component of the architecture fails, it will not affect other components.
Security
Data is encrypted at rest in DynamoDB and Aurora.
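Both services encrypt data at rest by default. If you want control over the key, you can opt a DynamoDB table into a customer managed AWS KMS key, as in this sketch; the table name and key alias are assumptions.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# DynamoDB always encrypts data at rest; this opts the table into a
# customer managed KMS key instead of the default AWS owned key.
dynamodb.create_table(
    TableName="oms-orders",  # assumed table name
    AttributeDefinitions=[{"AttributeName": "orderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "orderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "alias/oms-data-key",  # assumed key alias
    },
)
```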
Reliability
This architecture uses stateless compute: no data is stored on the compute resources themselves, so any instance can serve any request and failed components can be replaced without losing state. It also uses a decoupled architecture, so the function of one service is not altered by a failure in another.
Performance Efficiency
This architecture uses DynamoDB, which delivers single-digit millisecond response times in most cases. If you need microsecond response times for cached reads, you can use DynamoDB Accelerator (DAX).
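DAX exposes a DynamoDB-compatible client, so cached reads are close to a drop-in change. This sketch uses the amazon-dax-client package for Python; the cluster endpoint and table name are assumptions.

```python
import botocore.session
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

session = botocore.session.get_session()
dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    # Assumed DAX cluster endpoint; copy yours from the DAX console.
    endpoints=["my-dax-cluster.xxxxxx.dax-clusters.us-east-1.amazonaws.com:8111"],
)

# The DAX client mirrors the low-level DynamoDB API, so reads are unchanged.
item = dax.get_item(
    TableName="oms-orders",             # assumed table name
    Key={"orderId": {"S": "ORD-9001"}},
)
```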
Cost Optimization
Because this architecture serves an internal application, most traffic stays within an AWS Region. Data transfer between Availability Zones in a Region is charged at a lower rate than the internet egress an external application would incur.
Sustainability
This architecture uses serverless services, which help ensure that your application consumes only the resources it needs.
Implementation Resources
A detailed implementation guide is provided for you to experiment with this Guidance in your own AWS account. It walks through each stage, including deployment, usage, and cleanup, to prepare the Guidance for deployment.
The sample code is a starting point: industry validated and prescriptive but not definitive, it offers a peek under the hood to help you begin.
Related Content
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.