Guidance for Social Media Data Pipeline on AWS
Overview
How it works
These technical details include an architecture diagram that illustrates how to use this solution effectively. The diagram shows the key components and their interactions, providing a step-by-step overview of the architecture's structure and functionality.
Well-Architected Pillars
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Amazon CloudWatch captures service-level metrics for all the workload components. Application- and transaction-specific logs are pushed to CloudWatch Logs. CloudWatch dashboards provide a centralized view to monitor resources and understand workload state. This unified view can be customized for metrics and alarms. CloudWatch alarms and events alert you to risks or anomalies so you can take proactive actions to maintain the health of your workload.
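As a minimal boto3 sketch of such an alarm (the function name, alarm name, and SNS topic ARN below are hypothetical placeholders, not part of this Guidance):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on errors from a hypothetical ingestion Lambda function;
# notify an SNS topic so operators can act before the pipeline degrades.
cloudwatch.put_metric_alarm(
    AlarmName="social-pipeline-ingest-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "ingest-social-feed"}],
    Statistic="Sum",
    Period=300,                # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:pipeline-alerts"],
)
```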
Security
This Guidance features a data pipeline focused on service-to-service communications rather than points of user interaction. AWS service-to-service communication is configured with AWS Identity and Access Management (IAM) service-linked roles with fine-grained access policies. Data transfer is protected by Transport Layer Security (TLS)-based communication for AWS service and social media platform integrations. AWS services in this Guidance use storage-level encryption to protect data at rest.
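To illustrate what a fine-grained, least-privilege policy might look like, the following sketch scopes access to a single data-lake prefix; the bucket name, prefix, and policy name are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

# Grant read/write only to the pipeline's raw-data prefix, nothing broader.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::social-media-data-lake/raw/*",
        }
    ],
}

iam.create_policy(
    PolicyName="social-pipeline-datalake-access",
    PolicyDocument=json.dumps(policy_document),
)
```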
Reliability
This Guidance is based on a loosely coupled, event-driven architecture that uses near real-time or scheduled events to communicate between decoupled services. Most service interactions are asynchronous, mediated by a durable intermediate layer such as an Amazon SQS queue or a streaming data platform such as Amazon Kinesis Data Streams, with AWS Step Functions orchestrating multi-step workflows. Additionally, social media platform feeds are the major data source for this architecture, and the pipeline is designed to accommodate the throttling constraints those platforms impose. You can perform reliability load testing of the end-to-end workflow in a production-like environment to validate that the workload meets scaling and performance requirements.
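A throttling-aware producer for the streaming layer might look like the following sketch; the stream name and record shape are hypothetical, and exponential backoff is one common way to absorb throughput limits.

```python
import json
import time
import boto3
from botocore.exceptions import ClientError

kinesis = boto3.client("kinesis")

def publish_post(post: dict, max_retries: int = 5) -> None:
    """Send one social media post to Kinesis, backing off exponentially
    when the stream throttles the producer."""
    for attempt in range(max_retries):
        try:
            kinesis.put_record(
                StreamName="social-media-posts",
                Data=json.dumps(post).encode("utf-8"),
                PartitionKey=post["author_id"],
            )
            return
        except ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    raise RuntimeError("publish failed after retries")
```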
Performance Efficiency
We chose purpose-built services for this Guidance that help achieve optimal performance, such as Amazon S3 for the data lake, DynamoDB for configuration data, and Amazon Comprehend for natural language processing. We also selected these services because they optimize costs, scale as business needs grow, and maintain high availability to prevent downtime.
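For example, the NLP step could call Amazon Comprehend's sentiment API as in this sketch; the sample text stands in for a post pulled from the pipeline.

```python
import boto3

comprehend = boto3.client("comprehend")

response = comprehend.detect_sentiment(
    Text="Loving the new release - setup took five minutes!",
    LanguageCode="en",
)

# Sentiment is one of POSITIVE, NEGATIVE, NEUTRAL, or MIXED,
# with a confidence score per label.
print(response["Sentiment"], response["SentimentScore"])
```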
Cost Optimization
This Guidance uses a serverless architecture designed to scale based on demand. This helps you grow with increasing business needs while keeping costs down during the entry phase and non-peak times. Once you identify workload patterns in production, you can explore additional cost optimization features, such as DynamoDB's provisioned capacity pricing model.
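Once traffic is predictable, switching a table from on-demand to provisioned capacity could look like the following sketch; the table name and throughput values are hypothetical and should be sized from observed production traffic.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Move from on-demand billing to provisioned capacity for steady workloads.
dynamodb.update_table(
    TableName="pipeline-config",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 10,
        "WriteCapacityUnits": 5,
    },
)
```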
Sustainability
This Guidance uses AWS service features that optimize data access patterns and storage requirements so that you don't store data you no longer need. For example, with DynamoDB Time to Live (TTL), you can define a timestamp that indicates when an item should expire. DynamoDB then deletes the expired item from your table without consuming any write throughput.
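A sketch of expiring stale items with TTL follows; the table and attribute names are hypothetical.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiry timestamp (epoch seconds).
dynamodb.update_time_to_live(
    TableName="social-posts",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write an item that DynamoDB will delete roughly 30 days from now,
# without consuming any write throughput at deletion time.
dynamodb.put_item(
    TableName="social-posts",
    Item={
        "post_id": {"S": "12345"},
        "expires_at": {"N": str(int(time.time()) + 30 * 24 * 3600)},
    },
)
```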