Guidance for Third-Party Marketplace on AWS
Overview
How it works
This architecture diagram demonstrates how retailers can onboard new suppliers and process orders for those suppliers' products, enabling retailers to offer their customers more product choices without increasing inventory costs.
Well-Architected Pillars
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
The services deployed in this Guidance can help you better understand your workloads and their expected behaviors: each service emits its own set of metrics into Amazon CloudWatch, where you can monitor for errors. CloudWatch provides a centralized dashboard with logs and metrics, and can be configured with alarms for operational anomalies. Consider tagging your CloudWatch resources for better organization, identification, and cost accounting; a tag is a custom label that you or AWS assigns to an AWS resource, and tags can help you identify how to respond to alarms and events. You can also leverage AWS Cost Anomaly Detection to detect unusual activity on your account, so you can understand and monitor the state of the resources consumed by this Guidance.
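As a sketch of the monitoring approach above, the following shows the kind of parameters you might pass to CloudWatch to alarm on Lambda errors, including tags for cost accounting. The function name, SNS topic ARN, and tag values are illustrative placeholders, not part of this Guidance.

```python
# Hypothetical alarm definition for a Lambda function in this Guidance.
# All names and ARNs below are placeholders for illustration.
alarm_params = {
    "AlarmName": "order-processing-lambda-errors",
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": "order-processing"}],
    "Statistic": "Sum",
    "Period": 300,               # evaluate in 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    # Tags support organization, identification, and cost accounting
    "Tags": [{"Key": "project", "Value": "third-party-marketplace"}],
}

# To create the alarm, you would pass these parameters to the CloudWatch API:
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

Equivalent alarms on DynamoDB throttling or Step Functions execution failures follow the same pattern with a different namespace and metric name.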
Security
By default, the data in this Guidance is encrypted at rest using an AWS owned key from AWS Key Management Service (AWS KMS) that DynamoDB manages. You can keep this default AWS owned key, use an AWS managed key (a key created on your behalf), or use a customer managed key (a key that you create). Lambda, by default, encrypts environment variables at rest using an AWS managed KMS key; you can optionally configure Lambda to use a customer managed key instead. Additionally, CloudWatch encrypts logs at rest by default using server-side encryption, and you can use customer managed AWS KMS keys for more control over log encryption.
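To make the key options concrete, here is a minimal sketch of the parameters involved in switching to customer managed keys for two of the services above. The function name, table name, key ARN, and key alias are hypothetical placeholders.

```python
# Sketch: pointing Lambda environment-variable encryption at a customer
# managed KMS key. The ARN below is a placeholder.
lambda_update_params = {
    "FunctionName": "order-processing",
    "KMSKeyArn": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
}
# boto3.client("lambda").update_function_configuration(**lambda_update_params)

# Sketch: switching a DynamoDB table to a customer managed key via its
# SSE specification. The key alias is a placeholder.
sse_spec = {
    "Enabled": True,
    "SSEType": "KMS",
    "KMSMasterKeyId": "alias/marketplace-table-key",
}
# boto3.client("dynamodb").update_table(
#     TableName="marketplace-table", SSESpecification=sse_spec
# )
```

Omitting `KMSMasterKeyId` while keeping `SSEType: "KMS"` falls back to the AWS managed key for DynamoDB.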
Reliability
Several architectural components in this Guidance support loose coupling, so you can implement a reliable application-level architecture. For example, DynamoDB streams invoke a data validation process whenever new entries are written to DynamoDB tables, and the Step Functions workflow that hosts the data validation logic has built-in retry capabilities. Error handling is multifaceted, ranging from automated data recovery to manual verification of entries in Amazon SQS. Amazon SQS decouples identifying an error that needs manual intervention from the workflow that allows administrators to perform data corrections. Amazon SQS can also use dead-letter queues to capture messages that still fail after multiple retries.
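The dead-letter queue mechanism mentioned above is configured through a redrive policy on the source queue. Below is a minimal sketch; the queue names and ARN are hypothetical, and `maxReceiveCount` controls how many delivery attempts a message gets before it is moved to the dead-letter queue.

```python
import json

# Placeholder ARN for the dead-letter queue that captures messages
# still failing after multiple retries.
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:data-correction-dlq"

# After 5 failed receives, a message is moved to the DLQ for
# administrator inspection and manual data correction.
redrive_policy = {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": 5}

# SQS expects the policy as a JSON string inside the queue attributes.
attributes = {"RedrivePolicy": json.dumps(redrive_policy)}

# boto3.client("sqs").set_queue_attributes(
#     QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/data-correction",
#     Attributes=attributes,
# )
```

Choosing `maxReceiveCount` trades off retry tolerance for transient failures against how quickly a genuinely bad message surfaces for manual intervention.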
Performance Efficiency
The services in this Guidance, including Lambda, DynamoDB, and API Gateway, were selected because they are serverless and scale automatically. If there is an influx of supplier or order activity, the Guidance scales accordingly and applies changes in near real time. To optimize Lambda functions, you can use the Lambda Power Tuning tool, which automates the manual process of running tests against functions with different memory allocations and measuring the time taken to complete. DynamoDB operations can be optimized with Amazon DynamoDB Accelerator (DAX), which improves application performance and reduces the read capacity units DynamoDB consumes. Finally, API Gateway supports API caching to enhance responsiveness, reducing the number of calls made to the endpoint and improving request latency. You can also enable payload compression for your API to improve responsiveness.
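The idea behind memory tuning can be sketched with Lambda's pricing model: compute cost scales with allocated memory (in GB) times duration (in seconds), and more memory often shortens duration, so the cheapest configuration is not always the smallest one. The price constant and the per-memory timings below are illustrative assumptions, not measured values.

```python
# Illustrative per-GB-second price; check current Lambda pricing for your region.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, duration_ms: float) -> float:
    """Compute cost of a single Lambda invocation (compute charge only)."""
    return (memory_mb / 1024) * (duration_ms / 1000) * PRICE_PER_GB_SECOND

# Hypothetical measurements: memory (MB) -> observed duration (ms).
# This is the kind of sweep Lambda Power Tuning automates for you.
configs = {128: 2400.0, 512: 580.0, 1024: 300.0}
costs = {mb: invocation_cost(mb, ms) for mb, ms in configs.items()}
```

In this made-up sweep, 512 MB is cheaper than both 128 MB and 1024 MB, because the shorter duration more than offsets the larger allocation.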
Cost Optimization
This Guidance relies on serverless AWS services (DynamoDB, Lambda, Step Functions, Amazon SQS, and API Gateway) that are fully managed and scale automatically with workload demand. As a result, you pay only for what you use and save costs during periods of low load. You can reduce DynamoDB resources and costs by choosing the most appropriate read capacity units (RCUs) and write capacity units (WCUs): analyze your data access patterns and refrain from over-provisioning.
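RCU and WCU sizing can be estimated from access patterns using DynamoDB's provisioned-capacity rules: one RCU covers one strongly consistent read per second of an item up to 4 KB (eventually consistent reads cost half), and one WCU covers one write per second of an item up to 1 KB. The workload numbers in the usage comments are hypothetical.

```python
import math

def required_rcu(reads_per_sec: float, item_kb: float,
                 eventually_consistent: bool = False) -> int:
    """Estimate RCUs: each read consumes one unit per 4 KB of item size."""
    rcu = reads_per_sec * math.ceil(item_kb / 4)
    if eventually_consistent:
        rcu /= 2  # eventually consistent reads cost half as much
    return math.ceil(rcu)

def required_wcu(writes_per_sec: float, item_kb: float) -> int:
    """Estimate WCUs: each write consumes one unit per 1 KB of item size."""
    return math.ceil(writes_per_sec * math.ceil(item_kb))

# Hypothetical workload: 100 reads/s of 6 KB items, 50 writes/s of 2.5 KB items.
rcu = required_rcu(100, 6)    # 6 KB spans two 4 KB units -> 200 RCUs
wcu = required_wcu(50, 2.5)   # 2.5 KB rounds up to 3 units -> 150 WCUs
```

Comparing such estimates against actual consumed-capacity metrics in CloudWatch is one way to spot over-provisioning.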
Sustainability
AWS managed services scale up and down with business requirements and traffic, and are inherently more sustainable than on-premises solutions. Additionally, the serverless components used in this Guidance automate infrastructure management, further improving sustainability.
Based on the query patterns for the Guidance, we have created a data model that works with a single DynamoDB table. When you use this Guidance, you should identify and remove unused DynamoDB resources based on your needs and avoid over-provisioning RCUs and WCUs. You can also reduce resource usage by expiring old data with Time to Live (TTL) and compressing data.
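The TTL mechanism mentioned above works by stamping each item with an epoch-seconds attribute that DynamoDB uses to expire it. The attribute name, table name, and retention period below are hypothetical choices for illustration.

```python
import time

RETENTION_DAYS = 90  # hypothetical retention period

def with_ttl(item: dict, now=None) -> dict:
    """Return a copy of the item carrying an epoch-seconds 'expires_at' attribute."""
    now = time.time() if now is None else now
    return {**item, "expires_at": int(now + RETENTION_DAYS * 86400)}

# TTL must be enabled once per table, with a matching attribute name:
# boto3.client("dynamodb").update_time_to_live(
#     TableName="marketplace-table",
#     TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
# )
```

DynamoDB deletes expired items in the background at no write cost, which reduces both stored data and the resources needed to scan or back it up.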
Implementation Resources
Disclaimer