This Guidance demonstrates how game developers can moderate user-generated content (UGC) to keep player interactions appropriate and safe. Using AWS managed services and custom machine learning (ML) models, developers can quickly set up a content moderation backend system in one place. This backend system detects and filters a comprehensive range of toxic content and supports customizable content flagging, and its well-designed APIs allow fast integration with games and community tools. Ultimately, this lets developers address the operational risks of user-provided content on online gaming platforms head-on: manual content moderation is error-prone and costly, whereas moderation powered by artificial intelligence (AI) dramatically accelerates the process and keeps gaming communities safe.

Please note: This Guidance is subject to the Disclaimer at the end of this page.

Architecture Diagram

[Architecture diagram description]

Download the architecture diagram PDF 

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

  • Lambda and API Gateway support operational excellence in this Guidance by minimizing infrastructure management and providing built-in monitoring. Lambda runs functions with minimal maintenance because AWS manages the underlying compute, security patching, and scaling, and both Lambda and API Gateway integrate with Amazon CloudWatch metrics that can be used to monitor individual application components. Lambda also enables repeatable deployment, debugging, and troubleshooting through infrastructure as code (IaC) tools such as the AWS Serverless Application Model (AWS SAM) and the AWS Cloud Development Kit (AWS CDK); see the IaC sketch following this list. The invocation and processing logs and metrics that Lambda emits help identify errors and performance bottlenecks.

    Read the Operational Excellence whitepaper 
  • Lambda and API Gateway are AWS-native services that help developers reduce risky external dependencies while securing access to sensitive functionality. Specifically, Lambda uses AWS Identity and Access Management (IAM) roles configured with least-privilege principles to communicate with other AWS services such as Amazon Rekognition and SageMaker, restricting each function to only the permissions it requires (see the handler sketch following this list). Additionally, API Gateway simplifies authentication and authorization by integrating with IAM and Lambda. Together, these services facilitate a secure environment where credentials and access can be precisely managed according to best practices.

    Read the Security whitepaper 
  • The Regional AWS services used in this Guidance, including Lambda, API Gateway, and Amazon Rekognition, take advantage of Availability Zones (AZs) and multi-AZ redundancy to meet high availability targets. By relying on these fully managed services, developers can focus on core application logic rather than complex availability management. Lambda's auto scaling and automated retries (the handler sketch following this list also enables adaptive client-side retries) maintain reliability even under peak loads. Tapping into the innate high availability of Regional AWS services gives developers a resilient network topology without architecting complex solutions themselves, and the automation and self-healing capabilities make the backend infrastructure durable against most typical failures and surges.

    Read the Reliability whitepaper 
  • The fully managed auto scaling capabilities of Lambda and SageMaker make them ideal choices for the near real-time, high-concurrency demands of content moderation. As more moderation requests flow in, Lambda automatically provisions additional execution environments to fulfill each one with low latency. Similarly, SageMaker endpoints dynamically adjust the number of ML inference instances based on fluctuating request workloads; see the auto scaling sketch following this list. Developers can rely on the innate scaling of these services to efficiently process bursting request volumes without over-provisioning resources. By leveraging the performance efficiency optimizations of Lambda and SageMaker, the backend infrastructure can cost-effectively manage unpredictable traffic, maintaining responsive moderation at any scale.

    Read the Performance Efficiency whitepaper 
  • The serverless services used in this Guidance, such as Lambda, API Gateway, and Amazon S3, minimize costs and avoid overprovisioning. Lambda bills in millisecond increments based on actual compute time, so developers pay only for the precise resources needed to process each moderation request. Similarly, API Gateway charges are incurred per API call, so costs scale directly with usage, and Amazon S3 provides a low total cost of ownership for stored content. Because moderation queries per second can fluctuate drastically and are hard to predict, the serverless pay-as-you-go model beats overprovisioning dedicated servers; see the cost sketch following this list. By tapping into auto scaling cloud services that align cost with usage, developers can optimize expenses even with volatile request patterns.

    Read the Cost Optimization whitepaper 
  • The serverless services used in this Guidance optimize sustainability by consuming only the compute resources that scale directly with workload volume. Because these are fully managed AWS offerings, no energy is wasted on idle, overprovisioned capacity: Lambda and API Gateway scale precisely to usage levels, so no resources sit idle. And by using SageMaker for ML inference rather than model training, inference compute needs are minimized. This serverless, event-driven architecture lets workloads breathe with traffic patterns, optimizing energy demands accordingly.

    Read the Sustainability whitepaper 
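To ground the operational excellence bullet, the following is a minimal, hypothetical AWS CDK (v2, Python) sketch of IaC that could define the moderation function and its API front end. The construct names and the `src` asset path are illustrative assumptions, not taken from the Guidance's sample code.

```python
# Hypothetical CDK stack: a moderation Lambda function fronted by API Gateway.
# Both services publish metrics to Amazon CloudWatch automatically.
from aws_cdk import Stack, Duration
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class ModerationBackendStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The function code is assumed to live in ./src with an app.lambda_handler entry point.
        handler = _lambda.Function(
            self, "ModerationFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="app.lambda_handler",
            code=_lambda.Code.from_asset("src"),
            timeout=Duration.seconds(30),
        )

        # LambdaRestApi proxies every route to the function; per-method metrics
        # land in CloudWatch alongside the function's invocation metrics.
        apigw.LambdaRestApi(self, "ModerationApi", handler=handler)
```

Instantiating this stack in a CDK `App` and running `cdk deploy` gives repeatable, reviewable deployments rather than console clicks.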
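For the security and reliability bullets, here is a hedged sketch of a least-privilege moderation handler. It assumes the function's execution role grants only `rekognition:DetectModerationLabels` (plus basic logging), and it layers adaptive client-side retries on top of Lambda's own retry behavior. The event field names and the confidence threshold are illustrative, not the Guidance's actual contract.

```python
# Hypothetical Lambda handler: moderate an image already uploaded to Amazon S3.
import json

import boto3
from botocore.config import Config

# Adaptive retries back off automatically on throttling, complementing the
# managed retries Lambda applies to asynchronous invocations.
rekognition = boto3.client(
    "rekognition",
    config=Config(retries={"max_attempts": 5, "mode": "adaptive"}),
)

def lambda_handler(event, context):
    # API Gateway (proxy integration) delivers the request body as a JSON string.
    body = json.loads(event["body"])

    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": body["bucket"], "Name": body["key"]}},
        MinConfidence=80,  # example threshold: only labels with >= 80% confidence
    )
    labels = [label["Name"] for label in response["ModerationLabels"]]

    return {
        "statusCode": 200,
        "body": json.dumps({"flagged": bool(labels), "labels": labels}),
    }
```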
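The performance efficiency bullet notes that SageMaker endpoints adjust instance counts with demand. One way that could be configured, sketched below with Application Auto Scaling, uses target tracking on invocations per instance; the endpoint name, variant name, capacity bounds, and target value are placeholder assumptions.

```python
# Hypothetical auto scaling setup for a SageMaker real-time inference endpoint.
import boto3

autoscaling = boto3.client("application-autoscaling")

# Placeholder endpoint/variant names for a custom toxicity classifier.
resource_id = "endpoint/toxicity-classifier/variant/AllTraffic"

# Make the variant's instance count a scalable target between 1 and 4 instances.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target tracking adds or removes instances to hold invocations-per-instance
# near the set point, so capacity follows moderation traffic.
autoscaling.put_scaling_policy(
    PolicyName="moderation-invocations-per-instance",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # example set point; tune against observed latency
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
)
```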
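As a rough illustration of the cost optimization bullet, the arithmetic below estimates a monthly Lambda bill under assumed traffic. Every rate and volume here is an example only; consult current AWS pricing for your Region before planning budgets.

```python
# Back-of-the-envelope Lambda cost model for the pay-per-use pricing described above.
GB_SECOND_RATE = 0.0000166667   # assumed USD per GB-second of compute
PER_MILLION_REQUESTS = 0.20     # assumed USD per 1M requests

requests_per_month = 10_000_000  # assumed moderation request volume
avg_duration_s = 0.25            # assumed average execution time per request
memory_gb = 0.5                  # assumed function memory allocation

compute_cost = requests_per_month * avg_duration_s * memory_gb * GB_SECOND_RATE
request_cost = (requests_per_month / 1_000_000) * PER_MILLION_REQUESTS

# ~$20.83 compute + $2.00 requests: costs track usage, and idle months cost almost nothing.
print(f"Estimated monthly Lambda cost: ${compute_cost + request_cost:.2f}")
```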

Implementation Resources

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
