AWS for Games Blog
AI-assisted game production: From static concept to interactive prototype
Game development often begins long before anything is playable. Teams spend weeks brainstorming concepts, months creating designs, and countless hours implementing and testing mechanics before an interactive demo is created. The challenge isn’t only timeline constraints; critical validation and refinement traditionally happen late in the development cycle, when changes become costly and creative pivots are difficult.
Artificial intelligence offers a path to transform early-stage development by making game conceptualization interactive earlier in the process.
AI can empower teams to:
- Rapidly explore and validate creative directions
- Generate placeholder assets that match artistic vision
- Test gameplay mechanics through playable prototypes
- Refine concepts with immediate feedback instead of waiting for full implementation
This shifts polish and interactivity earlier in the development cycle, enabling teams to make better-informed creative decisions before committing significant engineering resources.
At Amazon Web Services (AWS) re:Invent 2025, we’re introducing Agentic Arcade, an interactive demo experience where attendees can create complete, playable game prototypes in minutes using AI. Attendees assemble a fully functional browser-based game they can play immediately, complete with custom art, mechanics, and a professional storefront presentation.
In this post, we explore the technical architecture behind Agentic Arcade and show how you can apply the same patterns to accelerate your own game development workflows.
The demo experience
Agentic Arcade is organized around four stations that mirror a professional game studio workflow. Attendees move through each station, working with four specialized AI agents along the way:
- Creative direction: Collaborate with the Director Agent to define the game’s genre, theme, objective, and determine the overall creative concept.
- Technical art: Work with the Artist Agent to generate game box art and choose style-consistent and visually cohesive character assets, color palettes, and sound effects.
- Gameplay development: Implement gameplay features with the Developer Agent, determining core game mechanics, special abilities, and difficulty settings.
- Game playtesting: Deploy the final game to a virtual storefront and play it alongside the Playtester Agent, which analyzes the game and provides feedback.
The experience demonstrates how AI can guide developers through the complete game development pipeline, from initial concept to playable prototype in minutes. This is achieved by a comprehensive suite of AWS AI offerings working together:
- Amazon Bedrock provides foundation model access, with Claude by Anthropic powering intelligent reasoning for game concept generation and game review. Stable Diffusion by Stability AI and Amazon Nova enable high-quality visual asset creation, and Amazon Titan drives semantic asset discovery.
- Amazon Bedrock AgentCore orchestrates a multi-agent system where specialized AI agents collaborate across the development pipeline, each representing different game studio roles.
- Strands Agents facilitate type-safe, structured outputs across all agent interactions, enabling consistent data formats and reliable integration between different stages of the pipeline.
- Kiro, an agentic AI integrated development environment (IDE), generates comprehensive game specifications that define objectives, player abilities, enemy behaviors, and scoring systems.
- Amazon S3 Vectors, a feature of Amazon Simple Storage Service (Amazon S3), enables semantic asset discovery through vector embeddings. It matches visual characteristics rather than relying on keyword tags.
- Amazon Bedrock Guardrails enforces content safety policies across all AI-generated outputs, verifying appropriate content throughout the experience.
By combining multi-agent orchestration, programmatic asset generation, semantic search, and modern game architecture patterns, developers can explore creative directions faster—focusing on what makes their games unique.
Architecture overview
The experience is powered by a fully serverless architecture encompassing multiple layers:
- Frontend layer: React application hosted on Amazon S3, delivered through Amazon CloudFront.
- API layer: Amazon API Gateway for HTTP and WebSocket APIs, AWS Lambda for connection management, Amazon Cognito for authentication.
- Compute layer: FastAPI backend on Amazon ECS with AWS Fargate; ComfyUI workflows as containerized tasks with GPU support; Amazon Simple Queue Service (Amazon SQS) managing processing queues.
- Data layer: Amazon DynamoDB for session state and task queues; Amazon S3 for asset storage; S3 Vectors for semantic search; Amazon Elastic File System (Amazon EFS) for model storage.
- AI services: Amazon Bedrock (Amazon Nova Models, Claude, Stable Diffusion, Amazon Titan), Kiro, Amazon Bedrock Guardrails, Amazon Bedrock AgentCore with Strands Agents, Open-Source models (FLUX.1).
Following is a high-level data flow of the architecture:
- User interactions flow through CloudFront to the React frontend.
- API Gateway routes requests to Lambda or ECS containers.
- Multi-agent workflows coordinate through Amazon Bedrock AgentCore.
- Assets are generated asynchronously for the demo using ComfyUI running headless on Amazon ECS.
- During the live demo, vector similarity search retrieves assets from Amazon S3.
- Game configurations save to DynamoDB.
- WebSocket connections stream real-time updates.
- Final games render in-browser using Excalibur.js.
This serverless architecture scales automatically while remaining cost-efficient, and the clear separation between asset generation and runtime selection keeps performance consistent.
Now let’s dive into the implementation of this architecture, showcasing how AI can augment complex workflows in the game development lifecycle.
Multi-agent orchestration with Amazon Bedrock AgentCore
The heart of Agentic Arcade is a sophisticated multi-agent system where specialized AI agents coordinate to handle different aspects of game creation. This approach demonstrates how AI can mirror human creative workflows, with each agent bringing domain expertise to the development process.
Agent specialization
Each station features a specialized agent with distinct responsibilities. The Director Agent focuses on game concept development, using various AI models to assist with creative exploration and game definition. The Artist Agent coordinates visual asset selection, supervising box art generation and retrieving character asset packs using vector embeddings. The Developer Agent handles game mechanics configuration, selecting appropriate mechanics and generating configuration files. The Playtester Agent evaluates the finished game, analyzing gameplay features and providing recommendations on next steps.
Supervisor-worker pattern
Behind the scenes, a supervisor-worker architecture coordinates these agents. Supervisor agents dispatch tasks to worker agents, which execute specialized functions and report back. Evaluation agents validate outputs against quality criteria before assets enter the production pipeline.
This architecture is implemented using Amazon DynamoDB for task queue management and AWS Lambda for task dispatch. Amazon Bedrock provides the model invocations, orchestrated agentically by Amazon Bedrock AgentCore and Strands Agents. The agents maintain short-term memory within each session, passing context between invocations to deliver coherent decision-making across all four stations.
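The coordination pattern can be sketched in plain Python. This is an illustrative stand-in, not the demo's implementation: in Agentic Arcade the queue lives in DynamoDB, dispatch runs on Lambda, and the worker and evaluator are AI agents rather than the stub functions shown here.

```python
# Minimal supervisor-worker sketch: the supervisor drains a task queue,
# dispatches each task to a worker, and gates results through an
# evaluation step before they enter the production pipeline.
from collections import deque

def sprite_worker(task):
    # Worker: would invoke an image-generation agent; returns a stub asset.
    return {"task": task["id"], "asset": f"sprite_{task['theme']}.png"}

def evaluate(result):
    # Evaluation agent: validates output against quality criteria
    # (stubbed here as a trivial filename check).
    return result["asset"].endswith(".png")

def supervise(tasks):
    queue = deque(tasks)
    accepted, rejected = [], []
    while queue:
        task = queue.popleft()
        result = sprite_worker(task)   # dispatch to a worker agent
        if evaluate(result):           # validate before accepting
            accepted.append(result)
        else:
            rejected.append(result)
    return accepted, rejected

accepted, rejected = supervise([
    {"id": 1, "theme": "space"},
    {"id": 2, "theme": "fantasy"},
])
```

The value of the pattern is the gate between worker and pipeline: nothing reaches production without passing evaluation.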
Strands Agents provides structured output capabilities, enabling type-safe responses and facilitating consistent data formats across agent interactions. This combination of Amazon Bedrock AgentCore for orchestration and Strands Agents for structured outputs creates a robust foundation for complex multi-agent workflows.
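The guarantee that structured outputs provide can be approximated with a fixed schema that every agent reply must satisfy before downstream stages consume it. The sketch below hand-rolls this with dataclasses purely for illustration; Strands Agents offers structured outputs as a built-in capability, so the real API differs.

```python
# Sketch of type-safe agent output: a model's JSON reply is parsed into a
# fixed schema, so later pipeline stages can rely on its shape. The field
# names here are hypothetical examples, not the demo's actual schema.
import json
from dataclasses import dataclass

@dataclass
class GameConcept:
    genre: str
    theme: str
    objective: str

def parse_concept(raw: str) -> GameConcept:
    data = json.loads(raw)
    missing = {"genre", "theme", "objective"} - data.keys()
    if missing:
        raise ValueError(f"agent reply missing fields: {missing}")
    return GameConcept(data["genre"], data["theme"], data["objective"])

reply = '{"genre": "shooter", "theme": "space", "objective": "survive waves"}'
concept = parse_concept(reply)
```

Failing fast on a malformed reply is what makes multi-stage agent pipelines reliable: an error surfaces at the boundary rather than corrupting a later station.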
Programmatic asset generation with ComfyUI
Traditional image generation tools rely on simple text-to-image prompts. Agentic Arcade uses ComfyUI to build complex, reproducible workflows as code. This enables consistent asset generation at scale through agentic prompt crafting based on the game's genre, theme, and style requirements. This approach solves a fundamental challenge in game development: creating sprites and visual assets that match specific concepts while maintaining quality and consistency across potentially thousands of assets.
ComfyUI workflows as code
ComfyUI provides a node-based visual programming interface for image generation. Agentic Arcade uses it programmatically, with each workflow defined as a JSON configuration specifying model selection, generation parameters, post-processing pipelines, and output specifications. These workflows run as containerized tasks on Amazon ECS with GPU support. Models are stored on Amazon EFS volumes for fast task startup. Amazon SQS manages the processing queue.
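A workflow-as-code definition is just a JSON graph of nodes. The sketch below builds a minimal text-to-image graph: the node class names and the `/prompt` submission payload follow ComfyUI's API, but the specific graph, model filename, and parameters are illustrative assumptions, not the demo's actual workflow.

```python
# Minimal ComfyUI workflow built as code. Each key is a node id; "inputs"
# may reference other nodes as [node_id, output_index], which is how
# ComfyUI wires the graph together.
import json

def sprite_workflow(prompt: str, seed: int = 0) -> dict:
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "game_assets.safetensors"}},  # hypothetical model
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": prompt, "clip": ["1", 1]}},
        "3": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "4": {"class_type": "KSampler",
              # A real graph would use a separate negative-prompt encode node.
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["2", 0], "latent_image": ["3", 0],
                         "seed": seed, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        "5": {"class_type": "VAEDecode",
              "inputs": {"samples": ["4", 0], "vae": ["1", 2]}},
        "6": {"class_type": "SaveImage",
              "inputs": {"images": ["5", 0], "filename_prefix": "sprite"}},
    }

workflow = sprite_workflow("pixel-art astronaut sprite, white background")
payload = json.dumps({"prompt": workflow})  # body POSTed to ComfyUI's /prompt endpoint
```

Because the workflow is data, it can be version controlled, diffed, and parameterized per task, which is what makes generation reproducible at scale.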
Autonomous asset pipeline
Driven by agents, the pipeline automatically generates, evaluates, and creates polished assets based on concepts provided by humans. Character sprite generation uses workflows that load specialized models optimized for game assets, apply prompts incorporating user theme and genre selections, and create outputs optimized for gameplay.
Assets are generated asynchronously before game runtime to avoid latency. ComfyUI workflows generate a comprehensive asset library, vector embeddings are created for semantic search, and the assets are loaded into the demo experience. During the live demo, only specific components are generated in real time: game concepts and titles through Amazon Bedrock, box art through image generation models, and asset matches through vector embedding searches. Character sprites, environment assets, and sound effects are intelligently selected from the asset library using semantic search.
Agentic quality control
Every generated asset passes through an agentic evaluation pipeline before entering the asset library. Evaluation agents use Amazon Bedrock to access the multimodal capabilities of Amazon Nova Lite to assess images against quality criteria including recognizability, genre appropriateness, visual consistency, and content safety. If an asset doesn’t meet quality thresholds, the system automatically refines the generation prompt and regenerates.
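The evaluate-and-regenerate loop amounts to a bounded retry with prompt refinement. The sketch below mocks both sides of the loop: in the demo, generation runs through ComfyUI and scoring is done by Amazon Nova Lite's multimodal evaluation, whereas here both are stand-in functions.

```python
# Mock of the agentic quality gate: generate, score against a threshold,
# and refine the prompt on failure, up to a retry budget.
def generate(prompt: str) -> dict:
    # Stand-in for image generation; "quality" improves as the prompt
    # accumulates refinement hints (a deliberately artificial model).
    return {"prompt": prompt, "quality": 0.5 + 0.2 * prompt.count(";")}

def evaluate(asset: dict, threshold: float = 0.8) -> bool:
    # Stand-in for the multimodal evaluation agent.
    return asset["quality"] >= threshold

def generate_with_qc(prompt: str, max_attempts: int = 4) -> dict:
    asset = generate(prompt)
    attempts = 1
    while not evaluate(asset) and attempts < max_attempts:
        prompt += "; sharper silhouette, consistent palette"  # refine prompt
        asset = generate(prompt)
        attempts += 1
    asset["attempts"] = attempts
    return asset

asset = generate_with_qc("pixel-art slime enemy")
```

Bounding the retries matters in practice: without a budget, a prompt the evaluator can never satisfy would stall the whole pipeline.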
This programmatic approach transforms asset generation from an ad-hoc creative process into a reliable, scalable pipeline that can be version controlled, tested, and continuously improved.
Intelligent asset discovery with vector embeddings
Finding visually cohesive assets across thousands of options requires semantic understanding, not keyword matching. This process retrieves the appropriate assets for the game being created, extending the asset generation system by intelligently matching user preferences to content.
Semantic search
Based on the user’s selections across genre, theme, objective, and chosen artwork, the system queries assets stored in Amazon S3 with S3 Vectors enabled. Semantic search retrieves character sprites and environmental assets with similar visual characteristics. The system generates embeddings using the Amazon Titan Multimodal Embeddings model, capturing visual characteristics as high-dimensional vectors, and filters results by game genre.
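Under the hood this is cosine similarity between embedding vectors combined with a metadata filter. The toy in-memory sketch below shows the ranking; in the demo the vectors live in S3 Vectors, the embeddings come from Amazon Titan, and the dimensionality is far higher than three.

```python
# Toy semantic search: rank a small asset library by cosine similarity
# to a query embedding, after filtering on genre metadata.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stand-in library; real multimodal embeddings have hundreds of dimensions.
library = [
    {"asset": "astronaut.png", "genre": "shooter",    "vec": [0.9, 0.1, 0.0]},
    {"asset": "knight.png",    "genre": "platformer", "vec": [0.1, 0.9, 0.0]},
    {"asset": "alien.png",     "genre": "shooter",    "vec": [0.8, 0.2, 0.1]},
]

def search(query_vec, genre, top_k=2):
    candidates = [a for a in library if a["genre"] == genre]  # metadata filter
    ranked = sorted(candidates,
                    key=lambda a: cosine(query_vec, a["vec"]),
                    reverse=True)
    return [a["asset"] for a in ranked[:top_k]]

matches = search([1.0, 0.0, 0.1], genre="shooter")
```

Filtering on metadata first and ranking by similarity second is what lets the same library serve many genres while each game only sees visually coherent candidates.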
Intelligent asset matching
This embedding-based approach enables the system to support diverse creative directions, while maintaining consistency within each game concept. By intelligently matching user preferences to appropriate assets from a curated library, the system provides contextually relevant recommendations that feel responsive and cohesive. A vector similarity search returns results in seconds, enabling real-time asset recommendations.
Agent-driven game development with Kiro
The final playable game uses modern game architecture patterns that enable modular, reusable game logic across different genres. This approach allows the same foundational code to support vertical shooters, platformers, and other game types by composing different combinations of components and systems.
Entity component system architecture
Rather than building games from scratch for each session, Agentic Arcade uses the Entity Component System software pattern that separates game objects (entities) from their behaviors (components) and logic (systems). Entities are game objects such as the player character or enemies. Components define properties such as position, velocity, or health. Systems contain the logic that operates on entities with specific component combinations.
This solution architecture makes game generation highly flexible and modular. When a developer selects mechanics and features, the system composes the appropriate components and systems. The same Entity Component System foundation supports different genres by swapping component configurations and system logic.
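The pattern can be sketched in a few lines. This is illustrative only: the shipped games run on Excalibur.js in the browser, so the Python version below mirrors the structure, not the actual engine.

```python
# Minimal Entity Component System: entities are ids, components are data
# attached to ids, and systems iterate over entities that hold the
# component combination they need. Swapping component sets re-targets
# the same engine to a different genre.
entities = {}  # entity id -> {component name: data}

def spawn(eid, **components):
    entities[eid] = components

def movement_system(dt):
    # Operates only on entities that have both position and velocity.
    for parts in entities.values():
        if "position" in parts and "velocity" in parts:
            x, y = parts["position"]
            vx, vy = parts["velocity"]
            parts["position"] = (x + vx * dt, y + vy * dt)

spawn("player", position=(0.0, 0.0), velocity=(0.0, -2.0), health=3)
spawn("scenery", position=(5.0, 5.0))  # no velocity: movement skips it

movement_system(dt=0.5)
```

Note that the movement system needed no knowledge of what the entities are; a vertical shooter and a platformer differ only in which components and systems get composed.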
Spec-driven configuration
Kiro was used to create much of the demo experience itself. Kiro generates structured specifications that define game objectives, player abilities, enemy behaviors, scoring systems, and difficulty progression.
Each generated game is defined by configuration files (JSON and Markdown) containing asset references, game mechanics parameters, and title information. This configuration drives the game engine, enabling dynamic rendering without code compilation. The browser-based architecture uses Excalibur.js. This means games load instantly and run on any device with a modern web browser.
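A generated game boils down to a configuration the engine interprets at load time. The shape below is a hypothetical example of such a file — field names and values are invented for illustration, not the demo's actual schema.

```python
# Hypothetical game configuration of the kind a config-driven engine
# consumes: asset references, mechanics parameters, and title metadata.
# The browser runtime would read a file like this and compose entities
# and systems from it, with no code compilation step.
import json

config_json = """
{
  "title": "Nebula Run",
  "genre": "vertical-shooter",
  "assets": {
    "player": "sprites/astronaut.png",
    "enemy": "sprites/alien.png",
    "box_art": "art/nebula_run.png"
  },
  "mechanics": {
    "player_speed": 220,
    "fire_rate": 4,
    "enemy_waves": 8,
    "difficulty": "normal"
  }
}
"""

config = json.loads(config_json)

def validate(cfg: dict) -> bool:
    # Engine-side sanity check before rendering the game.
    return {"title", "genre", "assets", "mechanics"} <= cfg.keys()
```

Because the game is data rather than code, a new prototype is just a new JSON document, which is what makes minutes-scale generation possible.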
Real-time AI streaming
The multi-agent system processes multiple tasks simultaneously. To keep the experience smooth and interactive, we stream progress updates as they happen, providing transparency into how AI agents reason through complex decisions.
WebSocket implementation
Attendees see visualizations displaying the AI agents’ reasoning in real time, showing how agents analyze selections, evaluate consistency, and make recommendations. This transparency helps developers understand how to effectively prompt and guide AI systems in their own workflows.
Amazon API Gateway WebSocket APIs handle communication between the frontend and backend. AWS Lambda manages the connection lifecycle, Amazon DynamoDB stores the session state, and the backend streams updates from AI models in real time.
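The server side of this can be sketched as a handler that fans progress events out to connected clients. This is illustrative: in the real system the connection ids live in DynamoDB and frames are sent through API Gateway's connection-management API, which the local function below merely stands in for.

```python
# Sketch of the streaming path: agent progress events are pushed to every
# open WebSocket connection. `post_to_connection` is a local test double
# for the API Gateway Management API call of the same name.
import json

connections = set()  # connection ids (stored in DynamoDB in the demo)
sent = []            # captured outbound frames

def post_to_connection(connection_id: str, data: bytes):
    sent.append((connection_id, data))

def on_connect(connection_id):
    connections.add(connection_id)

def on_disconnect(connection_id):
    connections.discard(connection_id)

def stream_progress(stage: str, detail: str):
    # Broadcast one progress frame to all open connections.
    frame = json.dumps({"stage": stage, "detail": detail}).encode()
    for cid in connections:
        post_to_connection(cid, frame)

on_connect("abc123")
stream_progress("artist", "ranking candidate sprites")
```

Streaming each reasoning step as its own small frame is what lets the frontend animate agent decisions as they happen, rather than waiting on a final result.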
Content safety with Amazon Bedrock Guardrails
AI models can sometimes produce sensitive or inappropriate content. Every prompt and model response in Agentic Arcade is routed through Amazon Bedrock Guardrails before being shown to users.
Multi-layer safety strategy
User selections are limited to predefined options rather than free-form text, significantly reducing risk exposure. Evaluation agents assess quality and appropriateness before assets enter the library. Each generated image is analyzed against multiple criteria including content safety. The system uses a content moderation approach rather than complex feedback loops, ensuring reliability during runtime.
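Layered this way, each output clears a sequence of independent checks before reaching the user. The sketch below mocks the gate with trivial predicates; in production, the content checks run through Amazon Bedrock Guardrails, and the allowed options and blocklist here are invented placeholders.

```python
# Mock of the multi-layer safety gate: constrained inputs first, then
# moderation of the generated output before display.
ALLOWED_THEMES = {"space", "fantasy", "ocean"}  # predefined options only
BLOCKLIST = {"gore"}                            # placeholder policy

def safe_request(theme: str) -> bool:
    # Layer 1: inputs are limited to predefined selections.
    return theme in ALLOWED_THEMES

def safe_output(caption: str) -> bool:
    # Layer 2: moderate generated content before it is shown.
    return not any(word in caption.lower() for word in BLOCKLIST)

def moderate(theme: str, caption: str) -> str:
    if not safe_request(theme):
        return "rejected:input"
    if not safe_output(caption):
        return "rejected:output"
    return "approved"

verdict = moderate("space", "A cheerful astronaut dodging asteroids")
```

Constraining inputs up front shrinks the space the output-side checks must cover, which is why the combination stays both safe and fast at runtime.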
The system balances real-time AI generation for specific components with intelligently selected content from the asset library. The combined implementation of synchronous and asynchronous content guardrails creates a cohesive end-user experience which balances control and responsiveness.
Getting started
All the techniques used in Agentic Arcade are available today. You can experiment with multi-agent workflows through Amazon Bedrock AgentCore, programmatic asset generation using ComfyUI workflows deployed on Amazon ECS, and vector similarity search using Amazon S3 Vectors and Amazon Titan Multimodal Embeddings. Implement real-time AI streaming with Amazon API Gateway WebSocket APIs, use Entity Component System patterns for modular game architecture, and deploy the full serverless stack with the AWS Cloud Development Kit (AWS CDK).
Whether you’re prototyping a new game concept, generating placeholder assets, or exploring gameplay mechanics, these patterns can help you iterate faster and focus on what makes your games unique.
Conclusion
Agentic Arcade demonstrates a fundamental shift in how AI can support game development, not by replacing creative decisions, but by making conceptualization interactive and validation immediate. By bringing polish and interactivity to the pre-production phase, teams can make better-informed creative decisions before committing significant engineering resources.
The patterns showcased—multi-agent orchestration, programmatic asset generation, semantic search, and modular game architecture—represent practical approaches. This can transform static ideas into interactive prototypes that can be explored, refined, and validated earlier in the development cycle.
Contact an AWS Representative to learn how we can help accelerate your business.

