Important: This Guidance requires the use of AWS CodeCommit, which is no longer available to new customers. Existing customers of AWS CodeCommit can continue using and deploying this Guidance as normal.
This Guidance helps game developers automate the process of creating a non-player character (NPC) for their games, along with its associated infrastructure. It uses Unreal Engine MetaHuman together with foundation models (FMs), such as the large language models (LLMs) Claude 2 and Llama 2, to improve NPC conversational skills. This produces dynamic responses from the NPC that are unique to each player, supplementing scripted dialogue. By using the Large Language Model Operations (LLMOps) methodology, this Guidance accelerates prototyping and delivery time by continually integrating and deploying the generative AI application and fine-tuning the LLMs, all while helping to ensure that the NPC has full access to a secure knowledge base of game lore.
This Guidance includes four parts: an overview architecture, an LLMOps pipeline architecture, a Foundation Model Operations (FMOps) architecture, and a database hydration architecture.
Architecture Diagram
-
Overview
-
This architecture diagram shows an overview workflow for hosting a generative AI NPC on AWS.
Step 1
Game clients interact with the NPC running on Unreal Engine MetaHuman.
Step 2
Requests for generated text responses from the NPC are sent to a Text API Amazon API Gateway endpoint. Requests that require game-specific context from the NPC are sent to a retrieval-augmented generation (RAG) API Gateway endpoint.
Step 3
AWS Lambda handles the NPC text requests and sends them to LLMs hosted on Amazon Bedrock.
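For example, the text-request handler can call the model through the Amazon Bedrock runtime API. The following is a minimal Python sketch, assuming a Lambda function fronted by API Gateway and an Anthropic Claude model; the event shape, model ID, and prompt format are illustrative rather than taken from the Guidance code.

```python
# Minimal sketch of a text-request handler (assumptions: API Gateway proxy event,
# Anthropic Claude 2 on Amazon Bedrock, illustrative prompt and payload shapes).
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def handler(event, context):
    body = json.loads(event.get("body", "{}"))
    player_prompt = body.get("prompt", "")

    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # could also be a fine-tuned custom model ARN
        body=json.dumps({
            "prompt": f"\n\nHuman: {player_prompt}\n\nAssistant:",
            "max_tokens_to_sample": 300,
        }),
    )
    completion = json.loads(response["body"].read())["completion"]

    return {"statusCode": 200, "body": json.dumps({"text": completion})}
```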
Step 4
Base LLMs and LLMs customized through fine-tuning provide a generated text response.
Step 5
The generated text response is sent to Amazon Polly, which returns an audio stream of the response. The audio is returned to the NPC to be delivered as dialogue.
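The text-to-speech call itself is a single API request. The following is a minimal Python sketch; the voice and output format are illustrative choices, not the Guidance's settings.

```python
# Minimal sketch of converting generated NPC text into an audio stream with Amazon Polly.
import boto3

polly = boto3.client("polly")

def synthesize_npc_line(text: str) -> bytes:
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="pcm",   # raw audio the game engine can buffer and play (illustrative)
        VoiceId="Matthew",    # illustrative voice
        Engine="neural",
    )
    return response["AudioStream"].read()
```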
Step 6
For RAG NPC requests, Lambda submits the request to Amazon Bedrock to generate a vectorized representation using the embeddings model. Lambda then searches an Amazon OpenSearch Service vector index for relevant information.
Step 7
OpenSearch Service performs a similarity search, based on the vectorized representation of the request from Amazon Bedrock, to find relevant context that augments the generated text request.
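The retrieval path can be sketched as follows, assuming an Amazon Titan embeddings model and an OpenSearch Service index with a knn_vector field; the domain endpoint, index name, and field names are illustrative, and authentication setup is omitted for brevity.

```python
# Minimal sketch of the RAG retrieval path: vectorize the request, then run a k-NN
# similarity search against the game lore index (all names are illustrative).
import json
import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime")
opensearch = OpenSearch(
    hosts=[{"host": "my-lore-domain.example.com", "port": 443}],  # placeholder endpoint
    use_ssl=True,  # authentication/signing omitted for brevity
)

def retrieve_lore(question: str, k: int = 3) -> list[str]:
    # Vectorized representation of the request from the embeddings model
    emb = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": question}),
    )
    vector = json.loads(emb["body"].read())["embedding"]

    # Similarity search against the vectorized game lore
    results = opensearch.search(
        index="game-lore",
        body={"size": k, "query": {"knn": {"embedding": {"vector": vector, "k": k}}}},
    )
    return [hit["_source"]["text"] for hit in results["hits"]["hits"]]
```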
Step 8
The relevant context and original text request are sent to LLMs hosted on Amazon Bedrock to provide a generated text response. Amazon Polly then delivers the response to the NPC for dialogue.
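One illustrative way to fold the retrieved context into the generation request before invoking Amazon Bedrock is sketched below; the prompt template is an assumption, not the Guidance's exact wording.

```python
# Minimal sketch of assembling the augmented prompt from retrieved lore (illustrative template).
def build_rag_prompt(question: str, context_chunks: list[str]) -> str:
    context = "\n".join(context_chunks)
    return (
        "Use the following game lore to answer in character.\n"
        f"Lore:\n{context}\n\n"
        f"Player: {question}"
    )
```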
Step 9
Game narrative writers add game-specific training data to create custom models using the FMOps process or add game lore data to hydrate the vector database.
Step 10
Infrastructure and DevOps engineers manage the architecture as code using the AWS Cloud Development Kit (AWS CDK) and monitor the Guidance using Amazon CloudWatch.
-
LLMOps Pipeline
-
This architecture diagram shows the processes of deploying an LLMOps pipeline on AWS.
Step 1
Infrastructure engineers build and test the codified infrastructure using AWS CDK.
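The following is a minimal AWS CDK (Python) sketch of how one piece of this infrastructure can be codified; the construct names and stack contents are illustrative, not the actual Guidance stacks.

```python
# Minimal sketch of a codified stack: a Lambda handler behind an API Gateway REST API
# (names and asset paths are illustrative placeholders).
from aws_cdk import App, Stack, aws_lambda as _lambda, aws_apigateway as apigw
from constructs import Construct

class NpcTextApiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        text_handler = _lambda.Function(
            self, "TextRequestHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda/text_api"),  # placeholder asset path
        )
        # REST endpoint the game client calls for generated dialogue
        apigw.LambdaRestApi(self, "TextApi", handler=text_handler)

app = App()
NpcTextApiStack(app, "NpcTextApiStack")
app.synth()
```

Running cdk synth and cdk deploy against an app like this produces the CloudFormation templates and deployments described in the following steps.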
Step 2
Updates to infrastructure code are committed to the AWS CodeCommit repository, invoking the continuous integration and continuous deployment (CI/CD) pipeline within the Toolchain AWS account.
Step 3
Infrastructure assets, such as Docker containers and AWS CloudFormation templates, are compiled and stored in Amazon Elastic Container Registry (Amazon ECR) and Amazon Simple Storage Service (Amazon S3).
Step 4
The infrastructure is deployed to the quality assurance (QA) AWS account as a CloudFormation stack for integration and system testing.
Step 5
AWS CodeBuild initiates automated testing scripts that verify that the architecture is functional and ready for production deployment.
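A minimal sketch of the kind of smoke test CodeBuild could run against the QA deployment, written as a pytest case; the environment variable name and request payload are assumptions.

```python
# Minimal smoke test against the QA Text API endpoint (endpoint URL and payload are illustrative).
import os
import requests

TEXT_API_URL = os.environ["QA_TEXT_API_URL"]  # assumed to be exported by the pipeline

def test_npc_returns_dialogue():
    response = requests.post(TEXT_API_URL, json={"prompt": "Greet the player."}, timeout=30)
    assert response.status_code == 200
    assert response.json().get("text")
```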
Step 6
Upon successful completion of all system tests, the infrastructure is automatically deployed as a CloudFormation stack into the Production (PROD) AWS account.
Step 7
The FMOps pipeline resources are also deployed as a CloudFormation stack into the PROD AWS account.
-
FMOps Pipeline
-
This architecture diagram shows the process of tuning a generative AI model using FMOps.
Step 1
Game lore text documents are uploaded to an S3 bucket.
Step 2
The document object upload event invokes Amazon SageMaker Pipelines.
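One possible wiring for this event is a small Lambda handler that starts the pipeline, sketched below; the pipeline name and parameter names are illustrative, and the Guidance may connect the S3 event differently (for example, through Amazon EventBridge).

```python
# Minimal sketch of starting the fine-tuning pipeline when a lore document lands in Amazon S3
# (pipeline and parameter names are illustrative).
import boto3

sagemaker = boto3.client("sagemaker")

def handler(event, context):
    record = event["Records"][0]["s3"]
    sagemaker.start_pipeline_execution(
        PipelineName="npc-fine-tuning-pipeline",
        PipelineParameters=[
            {
                "Name": "InputDataS3Uri",
                "Value": f"s3://{record['bucket']['name']}/{record['object']['key']}",
            },
        ],
    )
```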
Step 3
The preprocessing step runs a SageMaker processing job to preprocess the text documents for model fine-tuning and model evaluation.
Step 4
The callback step allows SageMaker Pipelines to integrate with other AWS services by sending a message to an Amazon Simple Queue Service (Amazon SQS) queue. After sending the message, SageMaker Pipelines waits for a response from the queue.
Step 5
Amazon SQS manages the message queue that coordinates tasks between SageMaker Pipelines and the AWS Step Functions workflow.
Step 6
The Step Functions workflow orchestrates the process of fine-tuning the LLM. Once a model has been fine-tuned, Amazon SQS sends a success message back to the SageMaker Pipelines callback step.
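A minimal sketch of how the final task in the fine-tuning workflow could report success back to the waiting callback step; the output parameter name and value are assumptions.

```python
# Minimal sketch of completing the SageMaker Pipelines callback step from the
# Step Functions workflow (output parameter name is illustrative).
import boto3

sagemaker = boto3.client("sagemaker")

def report_fine_tuning_success(callback_token: str, tuned_model_s3_uri: str) -> None:
    sagemaker.send_pipeline_execution_step_success(
        CallbackToken=callback_token,  # delivered with the Amazon SQS message from the callback step
        OutputParameters=[
            {"Name": "TunedModelS3Uri", "Value": tuned_model_s3_uri},
        ],
    )
```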
Step 7
The model evaluation step runs a SageMaker processing job to evaluate the fine-tuned model’s performance. The tuned model is stored in the Amazon SageMaker Model Registry.
Step 8
Machine learning (ML) practitioners review the tuned model and approve it for production use.
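Approval can be expressed by updating the model package status in the Model Registry, for example as sketched below; the model package ARN is a placeholder.

```python
# Minimal sketch of approving a registered model version for production use
# (the ARN below is a placeholder).
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.update_model_package(
    ModelPackageArn="arn:aws:sagemaker:us-east-1:111122223333:model-package/npc-llm/3",
    ModelApprovalStatus="Approved",  # the status change can be used to invoke the deployment pipeline
)
```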
Step 9
An AWS CodePipeline workflow is invoked to deploy the approved model into production.
-
Database Hydration
-
This architecture diagram shows the process for database hydration by vectorizing and storing game lore for RAG.
Step 1
A data scientist uploads game lore text documents to an S3 bucket.
Step 2
The object upload invokes a Lambda function to launch a SageMaker processing job.
Step 3
A SageMaker processing job downloads the text document from Amazon S3 and splits the text into multiple chunks.
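A minimal sketch of the chunking step; the chunk size and overlap are illustrative values, not the Guidance's settings.

```python
# Minimal sketch of splitting a lore document into overlapping chunks (sizes are illustrative).
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```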
Step 4
The SageMaker processing job then submits each chunk of text to an Amazon Titan embeddings model hosted on Amazon Bedrock to create a vectorized representation of the text chunks.
Step 5
The SageMaker processing job then ingests both the text chunk and the vector representation into OpenSearch Service for RAG.
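A minimal sketch of vectorizing each chunk with the Titan embeddings model and indexing it alongside its text; the domain endpoint, index name, and field names are illustrative, and authentication setup is omitted for brevity.

```python
# Minimal sketch of hydrating the vector index: embed each chunk, then bulk-index
# the text and its vector together (all names are illustrative).
import json
import boto3
from opensearchpy import OpenSearch, helpers

bedrock = boto3.client("bedrock-runtime")
opensearch = OpenSearch(
    hosts=[{"host": "my-lore-domain.example.com", "port": 443}],  # placeholder endpoint
    use_ssl=True,  # authentication/signing omitted for brevity
)

def ingest_chunks(chunks: list[str]) -> None:
    actions = []
    for chunk in chunks:
        response = bedrock.invoke_model(
            modelId="amazon.titan-embed-text-v1",
            body=json.dumps({"inputText": chunk}),
        )
        vector = json.loads(response["body"].read())["embedding"]
        actions.append({"_index": "game-lore", "_source": {"text": chunk, "embedding": vector}})
    helpers.bulk(opensearch, actions)  # store text and vector side by side for RAG retrieval
```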
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
This Guidance uses AWS X-Ray, Lambda, API Gateway, and CloudWatch to track all API requests for generated NPC dialogue between the Unreal Engine MetaHuman client and the Amazon Bedrock FM. This provides end-to-end visibility into the status of the Guidance, allowing you to granularly track each request and response from the game client so you can quickly identify issues and react accordingly. Additionally, this Guidance is codified as a CDK application using CodePipeline so operations teams and developers can address faults and bugs through appropriate change control methodologies and quickly deploy these updates or fixes using the CI/CD pipeline.
-
Security
Amazon S3 provides encrypted protection for game lore documentation at rest, in addition to encrypted access for data in transit, while ingesting game lore documentation into the vector database or fine-tuning an Amazon Bedrock FM. API Gateway adds an additional layer of security between the Unreal Engine MetaHuman and the Amazon Bedrock FM by providing TLS-based encryption of all data between the NPC and the model. Lastly, Amazon Bedrock implements automated abuse detection mechanisms to further identify and mitigate violations of the AWS Acceptable Use Policy and the AWS Responsible AI Policy.
-
Reliability
API Gateway manages the automated scaling and throttling of requests by the NPC to the FM. Additionally, since the entire infrastructure is codified using CI/CD pipelines, you can provision resources across multiple AWS accounts and multiple AWS Regions in parallel. This enables multiple simultaneous infrastructure re-deployment scenarios to help you overcome AWS Region-level failures. As serverless infrastructure resources, API Gateway and Lambda allow you to focus on game development instead of manually managing resource allocation and usage patterns for API requests.
-
Performance Efficiency
Serverless resources, such as Lambda and API Gateway, contribute to performance efficiency of the Guidance by providing both elasticity and scalability. This allows the Guidance to dynamically adapt to an increase or decrease in API calls from the NPC client. An elastic and scalable approach helps you right-size resources for optimal performance and to address unforeseen increases or decreases in API requests—without having to manually manage provisioned infrastructure resources.
-
Cost Optimization
Codifying the Guidance as a CDK application provides game developers with the ability to quickly prototype and deploy their NPCs into production. Developers get quick access to Amazon Bedrock FMs through an API Gateway REST API without having to engineer, build, and pre-train them. Turning around quick prototypes helps reduce the time and operations costs associated with building FMs from scratch.
-
Sustainability
Lambda provides a serverless, scalable, and event-driven approach without requiring you to provision dedicated compute resources. Amazon S3 implements data lifecycle policies along with compression for all data across this Guidance, allowing for energy-efficient storage. Amazon Bedrock hosts FMs on AWS silicon, offering better performance per watt than standard compute resources.
Implementation Resources
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.