This Guidance shows how to create an automated responsible gaming mechanism to protect your players from problematic betting and gaming behavior. By using technology from AWS and AWS Partner Databricks, you can build an impartial, scalable artificial intelligence and machine learning (AI/ML) workflow that creates a risk score, predicts problematic behavior, and notifies you in near real time. You can then automate responses that intervene, helping to reduce harm that players may experience due to problematic play.
Architecture Diagram
The architecture diagram illustrates the following steps.
Step 1
Amazon Kinesis agents encrypt and send data to Amazon Kinesis Data Streams, which forwards it to Amazon Kinesis Data Firehose for risk evaluation, formatting, and storage of raw data using Amazon Simple Storage Service (Amazon S3).
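As a sketch of Step 1, the following shows the shape of a record an agent might send to Kinesis Data Streams. The field names (`player_id`, `wager_amount`, `game`) and the stream name are illustrative assumptions, not part of this Guidance; the actual `put_record` call is left as a comment.

```python
import json

def to_kinesis_record(wager: dict) -> dict:
    # Partition by player ID so each player's wagers stay ordered
    # within a single shard. Field names here are illustrative.
    return {
        "Data": json.dumps(wager).encode("utf-8"),
        "PartitionKey": wager["player_id"],
    }

record = to_kinesis_record(
    {"player_id": "p-123", "wager_amount": 25.0, "game": "slots"}
)
# A boto3 Kinesis client would then send it with:
#   kinesis.put_record(StreamName="wager-stream", **record)
```

Partitioning by player ID keeps each player's event stream ordered, which matters for downstream risk evaluation.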
Step 2
The Databricks Lakehouse Platform (Databricks), running on an Amazon Elastic Compute Cloud (Amazon EC2) extract, transform, load (ETL) and training cluster, reads raw data from Amazon S3, transforms it from raw to clean to curated data, and writes it back to Amazon S3 in the Delta Lake format.
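The raw-to-clean-to-curated progression in Step 2 can be sketched in plain Python (the production version would run as Databricks Spark jobs writing Delta Lake tables; field names and validation rules here are assumptions for illustration):

```python
import json
from collections import defaultdict

def clean(raw_lines):
    # Raw -> clean: drop malformed records and normalize types.
    out = []
    for line in raw_lines:
        try:
            e = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip records that are not valid JSON
        try:
            amount = float(e.get("wager_amount"))
        except (TypeError, ValueError):
            continue  # skip records with a non-numeric amount
        if e.get("player_id") and amount > 0:
            out.append({"player_id": e["player_id"], "wager_amount": amount})
    return out

def curate(clean_events):
    # Clean -> curated: aggregate per-player features for ML training.
    totals = defaultdict(lambda: {"wagers": 0, "total": 0.0})
    for e in clean_events:
        t = totals[e["player_id"]]
        t["wagers"] += 1
        t["total"] += e["wager_amount"]
    return dict(totals)

raw = ['{"player_id": "p-1", "wager_amount": "10"}',
       'not json',
       '{"player_id": "p-1", "wager_amount": 5}']
curated = curate(clean(raw))
```

Each layer gets progressively more structured, so the curated layer can feed both QuickSight dashboards and model training.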
Step 3
Access to Amazon S3 is brokered by an Amazon S3 gateway endpoint, providing secure, reliable connectivity without requiring an internet gateway or network address translation device.
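A gateway endpoint for Amazon S3 is configured at the VPC route-table level. As a sketch, the parameters for the `create_vpc_endpoint` call might look like the following; the Region, VPC ID, and route table IDs are placeholders:

```python
def s3_gateway_endpoint_params(region, vpc_id, route_table_ids):
    # Gateway endpoints route S3 traffic through the route table,
    # so no internet gateway or NAT device is needed.
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,
    }

params = s3_gateway_endpoint_params("us-east-1", "vpc-0abc", ["rtb-0def"])
# With boto3: ec2.create_vpc_endpoint(**params)
```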
Step 4
Amazon QuickSight provides dashboards that access curated data.
Step 5
The ML model creates a risk score to predict problematic play and publishes that model to an Amazon EC2 inference cluster.
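To make the risk score concrete, here is a toy logistic scorer. The real model in Step 5 would be trained on historical play data in Databricks; the feature names, weights, and bias below are made-up assumptions, shown only to illustrate how features map to a 0-to-1 score:

```python
import math

def risk_score(features, weights, bias=0.0):
    # Logistic (sigmoid) scorer: weighted feature sum squashed to (0, 1).
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative weights, not from a trained model.
weights = {"wagers_per_hour": 0.08,
           "loss_chasing_ratio": 1.5,
           "session_hours": 0.3}
score = risk_score({"wagers_per_hour": 40,
                    "loss_chasing_ratio": 0.9,
                    "session_hours": 6}, weights, bias=-4.0)
```

A score near 1 indicates a higher predicted likelihood of problematic play, which downstream steps compare against a threshold.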
Step 6
Kinesis Data Firehose invokes an AWS Lambda function that sends wagers through a representational state transfer (REST) API to the Databricks Lakehouse Platform on the Amazon EC2 inference cluster (Databricks cluster) for risk evaluation, which returns a risk score for each player wager.
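A Kinesis Data Firehose data-transformation Lambda receives records base64-encoded under a `records` key. The sketch below decodes them and builds a scoring request body; the Databricks serving endpoint URL, authentication, and request schema are deployment-specific, so the actual HTTP POST is only indicated in a comment:

```python
import base64
import json

def handler(event, context=None):
    # Decode each Firehose record into a wager dict, then build the
    # body for the inference request. The endpoint and payload schema
    # here are assumptions, not a documented Databricks contract.
    wagers = []
    for rec in event["records"]:
        wagers.append(json.loads(base64.b64decode(rec["data"])))
    payload = {"inputs": wagers}
    # A real function would POST `payload` to the Databricks
    # model-serving REST endpoint and read back the risk scores.
    return payload

event = {"records": [
    {"recordId": "1",
     "data": base64.b64encode(
         b'{"player_id": "p-1", "wager_amount": 25}').decode()}
]}
payload = handler(event)
```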
Step 7
Lambda evaluates the risk score and forwards it to Amazon EventBridge if it exceeds a customer-configured threshold.
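The threshold check in Step 7 might look like the following. The threshold value, event `Source`, and `DetailType` are illustrative; only scores above the customer-configured threshold become EventBridge events:

```python
import json

RISK_THRESHOLD = 0.8  # customer-configured threshold (illustrative)

def to_event_entry(player_id, score):
    # Return an EventBridge entry only when the score exceeds the
    # threshold; otherwise the wager needs no intervention.
    if score <= RISK_THRESHOLD:
        return None
    return {
        "Source": "gaming.risk",      # illustrative source name
        "DetailType": "HighRiskWager",
        "Detail": json.dumps({"player_id": player_id,
                              "risk_score": score}),
    }

entry = to_event_entry("p-1", 0.92)
# events.put_events(Entries=[entry]) would publish it
```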
Step 8
EventBridge sends notification events to Amazon Simple Notification Service (Amazon SNS) or to Amazon Pinpoint or Amazon Connect using Lambda. Monitoring and logging information is sent to Amazon CloudWatch.
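For the Amazon SNS path in Step 8, a Lambda target might build `Publish` parameters like these; the topic ARN, subject line, and message shape are illustrative assumptions:

```python
import json

def sns_notification(topic_arn, detail):
    # Build the parameters for an sns.publish(...) call that notifies
    # an operator or downstream system of a high-risk wager.
    return {
        "TopicArn": topic_arn,
        "Subject": "Responsible gaming alert",
        "Message": json.dumps(detail),
    }

params = sns_notification(
    "arn:aws:sns:us-east-1:123456789012:rg-alerts",  # placeholder ARN
    {"player_id": "p-1", "risk_score": 0.92},
)
# sns.publish(**params)
```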
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
This Guidance uses EventBridge, which lets you make small architectural changes and simplifies data redirection. Kinesis Data Streams supports a configurable retention period of its streaming data to reduce downstream impact in the event of downtime. You can also use the lifecycle management features of the Amazon S3 Intelligent-Tiering storage class so that raw ML data can automatically move to the optimal access tier based on access frequency. Additionally, you can use CloudWatch for detailed monitoring and logging.
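The lifecycle rule that moves raw ML data into S3 Intelligent-Tiering can be expressed as a configuration like the one below; the rule ID and the `raw/` prefix are assumptions about the bucket layout:

```python
def intelligent_tiering_rule(prefix="raw/"):
    # Transition objects under the prefix to Intelligent-Tiering
    # immediately (Days=0), letting S3 pick the access tier.
    return {
        "Rules": [{
            "ID": "raw-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},
            "Transitions": [{"Days": 0,
                             "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    }

cfg = intelligent_tiering_rule()
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-data-bucket", LifecycleConfiguration=cfg)
```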
Security
This Guidance uses Amazon Virtual Private Cloud (Amazon VPC) so that your data resides only within a network under your full control. Kinesis Data Streams encrypts your data in transit and also provides server-side encryption to automatically encrypt your data at rest. Amazon S3 encrypts all object uploads to all buckets. You can also block public access to all of your objects at the bucket- or the account-level with the Amazon S3 Block Public Access feature.
Reliability
This Guidance uses Kinesis Data Streams, which lets you configure a seven-day data retention period so that downstream systems can reprocess data in the event of data loss or a processing failure. Amazon S3, which is designed to provide 99.999999999 percent object durability, stores objects redundantly across multiple facilities to increase the reliability of your data storage. Additionally, Lambda runs functions in multiple Availability Zones (AZs) so that it can complete processing in the event of a service interruption to a single AZ.
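Extending the stream's retention to seven days is a single API call; as a sketch (the stream name is a placeholder):

```python
def seven_day_retention(stream_name):
    # Kinesis retention is set in hours: 7 days x 24 hours = 168.
    return {"StreamName": stream_name, "RetentionPeriodHours": 7 * 24}

params = seven_day_retention("wager-stream")
# kinesis.increase_stream_retention_period(**params)
```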
Performance Efficiency
This Guidance uses Lambda, which automatically provisions separate execution environments for each concurrent request so that it can scale to meet your capacity needs without overprovisioning resources. You can configure the Transmission Control Protocol (TCP) keep-alive feature to avoid creating new TCP connections for subsequent function invocations. Additionally, Kinesis Data Streams provides an on-demand capacity mode that automatically scales to accommodate your workload throughput needs.
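The usual pattern for connection reuse is to create the connection at module scope so it survives across invocations within one Lambda execution environment. A minimal sketch, with a placeholder host:

```python
import http.client

_conn = None  # created once per execution environment, reused after that

def get_connection(host="inference.example.invalid"):
    # Reusing one keep-alive connection avoids a fresh TCP (and TLS)
    # handshake on every invocation. Creating HTTPSConnection does not
    # open a socket yet, so this sketch runs without network access.
    global _conn
    if _conn is None:
        _conn = http.client.HTTPSConnection(host, timeout=2)
    return _conn

first = get_connection()
second = get_connection()
```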
Cost Optimization
This Guidance uses Kinesis Data Streams, which provides a managed, serverless architecture for data streaming, so that you don’t need to deploy, configure, or maintain streaming server hardware and software. You only pay for what you use, and you can shift to more cost-effective provisioned capacity when traffic is steadier, reducing costs. Additionally, enhanced shard-level monitoring lets you gain insights into traffic patterns so that you can merge underused shards for further cost savings. This Guidance also lets you use S3 Intelligent-Tiering to automatically move your data to the most cost-effective access tier.
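Shifting a steady-traffic stream from on-demand to provisioned capacity is one API call; a sketch of the parameters (the stream ARN is a placeholder):

```python
def provisioned_mode_params(stream_arn):
    # Switch from ON_DEMAND to PROVISIONED when throughput is
    # predictable enough that fixed shards cost less.
    return {"StreamARN": stream_arn,
            "StreamModeDetails": {"StreamMode": "PROVISIONED"}}

params = provisioned_mode_params(
    "arn:aws:kinesis:us-east-1:123456789012:stream/wager-stream")
# kinesis.update_stream_mode(**params)
```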
Sustainability
This Guidance uses serverless services like Kinesis Data Streams and Lambda, which distribute their environmental impact across many users through multi-tenant control planes. Additionally, Kinesis Data Streams provides an on-demand capacity mode, which uses automatic scaling so that only the resources required to handle the current workload are running. Likewise, Lambda automatically scales the number of execution environments up and down so that no idle resources are running. Arm-based AWS Graviton processors increase the price performance of Lambda by up to 34 percent over x86-based functions, further minimizing hardware requirements.
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. Each stage of the Guidance, from deployment through usage to cleanup, is covered to prepare it for use in your environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.