This Guidance helps customers use social media feeds to derive better business outcomes and an improved customer experience. Social media feeds provide data to run successful customer sentiment analysis, targeted campaigns, targeted ads, and content moderation. This Guidance demonstrates an end-to-end pipeline to build a near real-time social media corpus on AWS using cloud-native, serverless computing. With this pipeline, customers can capture time-sensitive information about current trends and product feedback from social media feeds.  

Architecture Diagram


Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

  • Amazon CloudWatch captures service-level metrics for all the workload components. Application and transaction-specific logs are pushed to CloudWatch Logs. CloudWatch dashboards provide a centralized view to monitor resources and understand workload state. This unified view can be customized for metrics and alarms. CloudWatch alarms and events alert you of risks or anomalies so you can take proactive actions to maintain the health of your workload.

    Read the Operational Excellence whitepaper 
  • This Guidance features a data pipeline focused on service-to-service communications rather than points of user interaction. AWS service-to-service communication is configured with AWS Identity and Access Management (IAM) service-linked roles with fine-grained access policies. Data in transit is protected by Transport Layer Security (TLS) for both AWS service and social media platform integrations. AWS services in this Guidance use storage-level encryption to protect data at rest.

    Read the Security whitepaper 
  • This Guidance is based on a loosely coupled, event-driven architecture, and it uses near real-time or scheduled events to communicate between decoupled services. Most service interactions are asynchronous, mediated by an intermediate durable storage layer such as an Amazon SQS queue or an Amazon Kinesis data stream, or orchestrated with AWS Step Functions. Additionally, social media platform feeds are the major data source for this architecture, so the pipeline is designed to accommodate the throttling constraints imposed by those platforms. You can perform reliability load testing of the end-to-end workflow in a production-like environment to validate that the workload meets scaling and performance requirements.

    Read the Reliability whitepaper 
  • We chose purpose-built services for this Guidance that help achieve optimal performance, such as Amazon S3 for the data lake, Amazon DynamoDB for configuration data, and Amazon Comprehend for language models. We also selected these services because they can optimize costs, scale as business needs grow, and maintain high availability to prevent downtime.

    Read the Performance Efficiency whitepaper 
  • This Guidance uses a serverless architecture designed to scale based on demand. This helps you grow with increasing business needs while keeping costs down during the entry phase and non-peak times. Once you identify workload patterns in production, you can explore additional cost optimization features, such as DynamoDB's provisioned capacity pricing model.

    Read the Cost Optimization whitepaper 
  • This Guidance uses AWS service features that optimize data access patterns and storage requirements so that you don’t store data that you no longer need. For example, with DynamoDB Time to Live (TTL), you can define a timestamp that indicates when items should expire. DynamoDB will then delete the item from your table without consuming any write throughput.  

    Read the Sustainability whitepaper 
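The Reliability pillar above notes that the pipeline must accommodate throttling constraints imposed by social media platforms. A common client-side mitigation is to retry throttled requests with exponential backoff and jitter. The sketch below is a minimal, hypothetical helper and is not taken from the Guidance's sample code; the `fetch` callable and the use of `RuntimeError` to stand in for an HTTP 429 response are illustrative assumptions:

```python
import random
import time


def backoff_delays(max_retries=5, base=0.5, cap=30.0):
    """Yield up to max_retries delays using exponential backoff with
    full jitter: each delay is uniform in [0, min(cap, base * 2**attempt)]."""
    for attempt in range(max_retries):
        yield random.uniform(0.0, min(cap, base * (2 ** attempt)))


def fetch_with_backoff(fetch, max_retries=5, base=0.5):
    """Call `fetch` (e.g., a social media API request), retrying on
    RuntimeError, which stands in here for a throttling response."""
    last_err = None
    for delay in backoff_delays(max_retries, base=base):
        try:
            return fetch()
        except RuntimeError as err:  # illustrative throttling signal
            last_err = err
            time.sleep(delay)
    raise last_err
```

Full jitter spreads retries out so that many decoupled consumers do not retry in lockstep, which matters when serverless functions scale out against a rate-limited external API.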

Implementation Resources

A detailed guide is provided for you to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup.

The sample code is a starting point. It is industry validated and prescriptive but not definitive, and it offers a look under the hood to help you begin.
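The Sustainability pillar above describes using DynamoDB Time to Live (TTL) to expire items you no longer need. DynamoDB TTL expects a Number attribute holding a Unix epoch timestamp in seconds, and TTL must be enabled on the table for that attribute. The sketch below is a minimal illustration under those assumptions; the table name, key schema, and `expire_at` attribute name are hypothetical, not taken from the Guidance's sample code:

```python
from datetime import datetime, timedelta, timezone


def ttl_epoch(retention_days, now=None):
    """Return the Unix epoch (seconds) at which an item should expire.
    DynamoDB TTL reads this value from a Number attribute on the item."""
    now = now or datetime.now(timezone.utc)
    return int((now + timedelta(days=retention_days)).timestamp())


def put_post_with_ttl(table_name, post_id, text, retention_days=30):
    """Write a social media post item with an `expire_at` TTL attribute.
    boto3 is imported lazily; calling this requires AWS credentials and a
    table with TTL enabled on the `expire_at` attribute."""
    import boto3

    table = boto3.resource("dynamodb").Table(table_name)
    table.put_item(
        Item={
            "post_id": post_id,                       # illustrative partition key
            "text": text,
            "expire_at": ttl_epoch(retention_days),   # epoch seconds; TTL attribute
        }
    )
```

Because DynamoDB deletes expired items in the background without consuming write throughput, this keeps the corpus bounded at no extra write cost, at the price of deletions that are eventual rather than immediate.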


Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
