Guidance for Responsible Content Moderation with AI Services on AWS

Overview

This Guidance demonstrates how game developers can moderate user-generated content (UGC) to ensure appropriate and safe player interactions. With AWS managed services and custom machine learning models, developers can quickly set up a content moderation backend system in one place. This backend system detects and filters a comprehensive range of toxic content and supports customizable content flagging. Well-designed APIs allow for fast integration with the game and community tools. Ultimately, this lets developers address the operational risks of user-provided content on online gaming platforms head-on: manual content moderation is error-prone and costly, whereas content moderation powered by artificial intelligence (AI) dramatically accelerates the process and keeps gaming communities safe.
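
For example, a game server might call the moderation backend over HTTPS before broadcasting a chat message. The sketch below is illustrative only: the endpoint path, payload fields, and response shape are assumptions about what such an API could look like, not the actual contract of this Guidance.

```typescript
// Hypothetical client call to the moderation backend (TypeScript, Node 18+).
// The "/moderate" route and the request/response shapes are assumptions.
interface ModerationResult {
  flagged: boolean;   // whether the content should be blocked or reviewed
  labels: string[];   // e.g. ["PROFANITY", "HATE_SPEECH"]
  confidence: number; // model confidence for the top label
}

async function moderateChatMessage(
  apiBaseUrl: string,
  playerId: string,
  message: string
): Promise<ModerationResult> {
  const response = await fetch(`${apiBaseUrl}/moderate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ type: "text", playerId, content: message }),
  });
  if (!response.ok) {
    throw new Error(`Moderation request failed with HTTP ${response.status}`);
  }
  return (await response.json()) as ModerationResult;
}
```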

How it works

This section features an architecture diagram that illustrates how to use this solution effectively. The diagram shows the key components and their interactions, providing a step-by-step overview of the architecture's structure and functionality.

Well-Architected Pillars

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

Lambda and API Gateway support operational excellence in this Guidance by managing infrastructure and providing monitoring capabilities. Lambda handles the infrastructure needed to run functions, requiring minimal maintenance, while both Lambda and API Gateway integrate with Amazon CloudWatch metrics that can be used to monitor individual application components. Additionally, Lambda enables seamless deployment, debugging, and troubleshooting through infrastructure as code (IaC) tools such as the AWS Serverless Application Model (AWS SAM) and the AWS Cloud Development Kit (AWS CDK). Lambda also takes care of function maintenance and security patching, and it provides invocation and processing logs and metrics that help identify errors and performance bottlenecks.
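
As a minimal sketch of that IaC workflow, the following AWS CDK (TypeScript) stack provisions a Lambda function fronted by an API Gateway REST API; both emit CloudWatch logs and metrics automatically. Construct names, the runtime, and the handler path are illustrative assumptions, not the Guidance's actual code.

```typescript
import { Stack, StackProps, Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

export class ModerationBackendStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Function that processes moderation requests; invocation logs and
    // metrics flow to CloudWatch without extra configuration.
    const moderationFn = new lambda.Function(this, "ModerationFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("lambda/moderation"), // assumed asset path
      timeout: Duration.seconds(30),
    });

    // REST API in front of the function; stage-level metrics and access
    // logging can be enabled for per-route observability.
    const api = new apigateway.RestApi(this, "ModerationApi");
    api.root
      .addResource("moderate")
      .addMethod("POST", new apigateway.LambdaIntegration(moderationFn));
  }
}
```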

Read the Operational Excellence whitepaper 

Lambda and API Gateway are native AWS services that help developers reduce risky external dependencies while securing access to sensitive functionality. Specifically, Lambda uses AWS Identity and Access Management (IAM) roles configured with least-privilege principles to communicate with other AWS services such as Amazon Rekognition and SageMaker, restricting each service to only the permissions it requires. Additionally, API Gateway simplifies authentication and authorization by integrating with IAM and Lambda. Together, these facilitate a secure environment where credentials and access can be precisely managed according to best practices.
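
Continuing the hypothetical CDK sketch above, least-privilege access could be expressed as narrowly scoped policy statements on the function's role. The IAM actions shown exist, but the endpoint name and the exact set of actions your deployment needs are assumptions.

```typescript
import * as iam from "aws-cdk-lib/aws-iam";

// Allow only the Rekognition moderation API; image bytes are passed in the
// request, so this action is not scoped to a specific resource ARN.
moderationFn.addToRolePolicy(
  new iam.PolicyStatement({
    actions: ["rekognition:DetectModerationLabels"],
    resources: ["*"],
  })
);

// Allow invoking one specific SageMaker endpoint ("toxicity-endpoint" is a
// hypothetical name) rather than all endpoints in the account.
moderationFn.addToRolePolicy(
  new iam.PolicyStatement({
    actions: ["sagemaker:InvokeEndpoint"],
    resources: [
      `arn:aws:sagemaker:${this.region}:${this.account}:endpoint/toxicity-endpoint`,
    ],
  })
);
```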

Read the Security whitepaper 

The Regional AWS services used in this Guidance, including Lambda, API Gateway, and Amazon Rekognition, take advantage of Availability Zones (AZs) and multi-AZ redundancy to meet high availability targets. By leveraging these fully managed services, developers can focus on core application logic rather than complex availability management. Lambda auto-scaling and automated retries shield developers from these concerns while keeping the system reliable even under peak loads. Tapping into the innate high availability of AWS Regional services allows developers to achieve a resilient topology without architecting complex solutions themselves. The automation and self-healing capabilities make the backend infrastructure extremely durable in the face of most typical failures or surges.
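
As one concrete reliability knob, the hypothetical CDK sketch above could make the automated retry behavior for asynchronous invocations explicit; two retries is already the Lambda default, shown here only for illustration.

```typescript
// Make async-invocation retry behavior explicit (values are illustrative).
moderationFn.configureAsyncInvoke({
  retryAttempts: 2,                 // Lambda's default for async events
  maxEventAge: Duration.minutes(5), // drop events that stay queued too long
});
```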

Read the Reliability whitepaper 

The fully managed auto-scaling capabilities of Lambda and SageMaker make them ideal choices to support the near real-time and high concurrency demands of content moderation. As more moderation requests flow in, Lambda automatically handles the provisioning of additional environments to fulfill each one with low latency. Similarly, SageMaker endpoints dynamically adjust the number of machine learning (ML) inference instances based on fluctuating request workloads. Developers can rely on the innate scaling of these services to efficiently process bursting request volumes without over-provisioning resources. By leveraging the performance efficiency optimizations of Lambda and SageMaker, the backend infrastructure can cost-effectively manage unpredictable traffic—maintaining responsive moderation at any scale.
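
A sketch of what that endpoint scaling could look like in the same hypothetical CDK stack, using Application Auto Scaling to track invocations per instance; the endpoint name, variant name, capacity bounds, and target value are all assumptions to tune per workload.

```typescript
import * as appscaling from "aws-cdk-lib/aws-applicationautoscaling";

// Register the endpoint's production variant as a scalable target.
const scalableTarget = new appscaling.ScalableTarget(this, "EndpointScaling", {
  serviceNamespace: appscaling.ServiceNamespace.SAGEMAKER,
  resourceId: "endpoint/toxicity-endpoint/variant/AllTraffic", // assumed names
  scalableDimension: "sagemaker:variant:DesiredInstanceCount",
  minCapacity: 1,
  maxCapacity: 4,
});

// Add inference instances when per-instance invocations exceed the target,
// and remove them as traffic subsides.
scalableTarget.scaleToTrackMetric("InvocationsPerInstance", {
  targetValue: 100,
  predefinedMetric:
    appscaling.PredefinedMetric.SAGEMAKER_VARIANT_INVOCATIONS_PER_INSTANCE,
});
```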

Read the Performance Efficiency whitepaper 

The serverless services used in this Guidance, like Lambda, API Gateway, and Amazon S3, are leveraged to minimize costs and avoid overprovisioning. Lambda bills in millisecond increments based on actual computation time used—developers only pay for the precise resources needed to process each moderation request. Similarly, API Gateway charges are incurred per API call, so costs scale directly with usage. And Amazon S3 provides a low total cost of ownership for stored content. Since moderation queries per second can fluctuate drastically and be hard to predict, the serverless pay-as-you-go model is ideal when compared to overprovisioning dedicated servers. By tapping into these auto-scaling cloud services that align cost with usage, developers can optimize expenses even with volatile request patterns.
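
To make the pay-as-you-go math concrete, consider an illustration using approximate us-east-1 Lambda pricing (roughly $0.20 per million requests and $0.0000166667 per GB-second; actual rates vary by Region and change over time): one million moderation requests, each running for 200 ms at 512 MB, consume 1,000,000 × 0.2 s × 0.5 GB = 100,000 GB-seconds, or about $1.67 of compute plus $0.20 of request charges. When no requests arrive, no compute charges accrue at all.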

Read the Cost Optimization whitepaper 

The serverless services used in this Guidance optimize sustainability by consuming only compute resources that scale directly with the volume of the workload. Because these are fully managed AWS offerings, no energy is wasted on idle, overprovisioned capacity. Lambda and API Gateway scale precisely to usage levels, so developers carry no idle resources. And by leveraging SageMaker for ML inference rather than model training, compute needs are kept to a minimum. This serverless, event-driven architecture allows workloads to expand and contract with traffic patterns, optimizing energy demands accordingly.

Read the Sustainability whitepaper 

Deploy with confidence

Ready to deploy? Review the sample code on GitHub for detailed deployment instructions, then deploy it as-is or customize it to fit your needs.

Go to sample code

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.