
Generative AI Application Builder on AWS

Rapidly develop and deploy production-ready generative AI applications

Overview

Building generative AI applications can be complex for teams without deep AI expertise. Generative AI Application Builder on AWS simplifies this process, helping you develop, test, and deploy AI applications without extensive AI knowledge. This solution accelerates AI development by letting you incorporate your business data, compare the performance of large language models (LLMs), run multi-step tasks through AI agents, quickly build extensible applications, and deploy them with enterprise-grade architecture. Generative AI Application Builder comes with a ready-to-use generative AI chatbot and API that can be quickly integrated into your business processes or applications.

This solution includes integrations with Amazon Bedrock and its LLMs in addition to LLMs deployed on Amazon SageMaker. It uses Amazon Bedrock tools for Retrieval Augmented Generation (RAG) to enhance AI responses, Amazon Bedrock Guardrails to implement safeguards and reduce hallucinations, and Amazon Bedrock Agents to create workflows for complex tasks. You can also connect to other AI models using LangChain or AWS Lambda. Start with the simple, no-code wizard to build AI applications for conversational search, AI-generated chatbots, text generation, and text summarization.
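The Lambda extension point mentioned above can be sketched as a minimal custom-model connector. The event and response shapes and the `call_custom_model` stub below are illustrative assumptions, not the solution's actual contract; consult the implementation guide for the real interface.

```python
import json


def call_custom_model(prompt: str) -> str:
    """Stubbed model call. In a real connector this would invoke your
    model endpoint (e.g., a SageMaker endpoint or a third-party API).
    The name and signature here are illustrative assumptions."""
    return f"[model output for: {prompt}]"


def handler(event, context):
    """Minimal Lambda handler sketch for a custom model connector.

    Parses a prompt out of the incoming event, calls the (stubbed)
    model, and returns the completion in a JSON body."""
    body = json.loads(event.get("body", "{}"))
    prompt = body.get("prompt", "")
    completion = call_custom_model(prompt)
    return {
        "statusCode": 200,
        "body": json.dumps({"completion": completion}),
    }
```

Because the handler is a plain function, you can unit-test the connector locally before wiring it into the solution.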

Benefits

  • This solution allows you to experiment quickly by removing the heavy lifting required to deploy multiple instances with different configurations and compare outputs and performance. Experiment with multiple configurations of various LLMs, prompt engineering, enterprise knowledge bases, guardrails, AI agents, and other parameters.

  • With pre-built connectors to a variety of LLMs, such as models available through Amazon Bedrock, this solution gives you the flexibility to deploy the model of your choice, along with the AWS and leading foundation model (FM) services you prefer. You can also enable Amazon Bedrock Agents to fulfill various tasks and workflows.

  • Built with AWS Well-Architected design principles, this solution offers enterprise-grade security and scalability with high availability and low latency, for seamless, high-performance integration into your applications.

  • Extend this solution's functionality by integrating your existing projects or natively connecting additional AWS services. Because this is an open-source application, you can use the included LangChain orchestration layer or Lambda functions to connect with the services of your choice.

How it works

Agent Use Case

The Agent Use Case enables users to hand off tasks for completion using Amazon Bedrock Agents. You can select a model and write a few instructions in natural language, and Amazon Bedrock Agents will analyze, orchestrate, and complete the tasks by connecting to your data sources or other APIs to fulfill your request.

[Architecture diagram: Agent Use Case — AWS WAF, Amazon API Gateway, Amazon Cognito, AWS Lambda, Amazon DynamoDB, Amazon S3, Amazon Bedrock Agents, and Amazon CloudWatch, with user web/app interactions.]
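A calling application typically consumes an agent's response as a stream of chunk events. The helper below sketches how the streamed bytes might be assembled; the event shape (`{"chunk": {"bytes": ...}}`) follows the Bedrock Agents runtime `InvokeAgent` response, but verify it against your SDK version, and note the boto3 call itself appears only in a comment.

```python
def collect_agent_completion(completion_events) -> str:
    """Join the text chunks from an InvokeAgent completion stream.

    In a live call you would obtain the stream with something like:

        client = boto3.client("bedrock-agent-runtime")
        resp = client.invoke_agent(agentId=..., agentAliasId=...,
                                   sessionId=..., inputText=...)
        text = collect_agent_completion(resp["completion"])

    Events that are not text chunks (e.g., trace events) are skipped.
    """
    parts = []
    for event in completion_events:
        chunk = event.get("chunk")
        if chunk and "bytes" in chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)
```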

Text Use Case

The Text Use Case enables users to experience a natural language interface using generative AI. This use case can be integrated into new or existing applications, and is deployable through the Deployment Dashboard or independently through a provided URL.

[Architecture diagram: Text Use Case — AWS WAF, Amazon API Gateway, AWS Lambda, Amazon CloudFront, Amazon Cognito, Amazon S3, Amazon DynamoDB, Amazon SQS, Amazon Bedrock, Amazon SageMaker, Amazon Kendra, and Amazon CloudWatch, covering session control, LLM configuration, feedback, and integration points with knowledge bases and AI models.]
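When integrating the Text Use Case into an existing application, a client sends a prompt tied to a conversation so the solution can maintain session context. The helper below is a minimal sketch of such a request payload; the field names (`action`, `conversationId`, `question`) are illustrative assumptions, not the solution's documented schema — consult the implementation guide for the actual API contract.

```python
import uuid
from typing import Optional


def build_chat_request(prompt: str,
                       conversation_id: Optional[str] = None) -> dict:
    """Build a chat message for a deployed text use case.

    Reusing the same conversation_id keeps follow-up questions in the
    same session; omitting it starts a new conversation. All field
    names here are illustrative assumptions."""
    return {
        "action": "sendMessage",
        "conversationId": conversation_id or str(uuid.uuid4()),
        "question": prompt,
    }
```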

Deployment Dashboard

The Deployment Dashboard is a web UI that serves as a management console for admin users to view, manage, and create their use cases. This dashboard enables customers to rapidly experiment, iterate, and deploy generative AI applications using multiple configurations of LLMs and data.

[Architecture diagram: Deployment Dashboard — AWS WAF, Amazon CloudFront, Amazon Cognito, Amazon API Gateway, AWS Lambda, AWS CloudFormation, Amazon DynamoDB, Amazon S3, and Amazon CloudWatch, orchestrating user authentication, API authorization, stack deployment, configuration management, and monitoring.]

About this deployment

  • Version: 3.0.0

  • Released: May 2025

  • Author: AWS

  • Est. deployment time: 10 mins

  • Estimated cost: See details

Deploy with confidence

Everything you need to launch this AWS Solution in your account is right here

We'll walk you through it

Get started fast. Read the implementation guide for deployment steps, architecture details, cost information, and customization options.

Open guide

Let's make it happen

Ready to deploy? Open the CloudFormation template in the AWS Console to begin setting up the infrastructure you need. You'll be prompted to sign in to your AWS account if you haven't already logged in.

Launch in the AWS Console