Integration & Automation
Build an automated deployment of generative AI with agent lifecycle changes using Terraform
Generative AI technology holds immense potential for driving innovation, enhancing content creation, improving customer experiences, streamlining operations, and boosting productivity across various industries. To unlock this transformative power, Amazon Bedrock offers a fully managed service that integrates high-performance foundation models from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, Mistral AI, and Amazon. By providing a single API to access these advanced models and supporting the development of generative AI applications with a strong focus on security, privacy, and responsible AI practices, Amazon Bedrock empowers organizations to explore new avenues for innovation and elevate their overall offerings through the power of AI.
In this blog post, we’ll guide you through the process of creating a fast, reusable, and repeatable Retrieval-Augmented Generation (RAG)-based generative AI infrastructure as code (IaC) solution using Terraform, keeping best practices in mind and using a sample chatbot use case as a blueprint. This solution provides a foundational framework that you can customize and expand to develop robust, scalable, and efficient chatbot applications on the AWS Cloud using Amazon Bedrock. You’ll learn how to automate the provisioning of the essential components required for a generative AI chatbot, such as Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, Amazon OpenSearch Serverless, AWS Lambda, Amazon Simple Storage Service (Amazon S3) buckets for Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, and the AWS Identity and Access Management (IAM) roles required to permit and link the services mentioned.
We also dive deep into automated custom lifecycle logic that prepares the agent and creates and updates an agent alias whenever there are changes to the action group Lambda code, the agent instructions, or the knowledge base.
Solution infrastructure diagram
The following figure shows the solution infrastructure.
This architecture diagram illustrates the integration of Amazon Bedrock with various AWS services to create an intelligent, secure, and automated workflow. The Amazon Bedrock Agent is at the core, utilizing Bedrock Knowledge Bases for handling queries and Bedrock Agent Action Groups to invoke necessary actions through an OpenAPI schema, Lambda, and Amazon S3. It incorporates a VPC (Virtual Private Cloud) endpoint, which securely connects to resources within a private subnet. Inside the private subnet, a Lambda function performs automated tasks triggered by the Bedrock Agent. These functions interact with various AWS resources while helping to maintain security and compliance. This setup makes sure that user requests are processed efficiently, securely, and in a scalable manner within the AWS environment.
Solution setup
The code for the solution is available in the aws-samples GitHub repository. Clone the code on your machine using the following command:
git clone "git@github.com:aws-samples/intelligent-rag-bedrockagent-iac.git"
Package structure
After the repository is cloned, you will find the structure of code as shown in the following figure. Each service required to build the solution has its own module conforming to the best practices of writing repeatable IaC code. The code can be integrated into a DevOps tool or lifecycle process.
The root module calls the submodules to build the stack in the sequence required to set up the solution stack.
Each module references the outputs of other modules. The following list describes the purpose of each module:
- The `knowledge_base_bucket` module builds the knowledge base data source S3 bucket with encryption, access logging, versioning, object lifecycle configuration, and (optional) Amazon Simple Notification Service (Amazon SNS) alerting for object uploads. The resources in the module can be configured to fit your use case.
- The `roles` module builds the IAM roles with security best practices for the agent, knowledge bases, and Lambda functions, including KMS encryption and decryption permissions.
- The `bedrock_knowledge_base` module creates a knowledge base and the knowledge base data source, and it creates a policy to allow an OpenSearch Serverless collection to be integrated with the knowledge base through the `aoss` module. This module also creates AWS Systems Manager parameters to store the values of the knowledge base ID and knowledge base data source ID.
- The `aoss` module builds the OpenSearch Serverless collection with a FAISS index and the three OpenSearch Serverless policies: encryption, data access, and network. This module uses the respective roles created in the `roles` module and references them, such as the knowledge base role Amazon Resource Name (ARN).
- The `bedrock_agent` module creates an Amazon Bedrock agent, a knowledge base association with the agent, an action group Lambda function that references the OpenAPI schema provided for the sample application, and finally Systems Manager parameters to store the agent ID, agent alias ID, agent instructions, agent action group instructions, and the Lambda function’s last update status, automating Lambda function code updates. This module references the roles created by the `roles` module (such as the Lambda role and agent role), the knowledge base bucket name, the knowledge base ID, and the ARNs needed to build the agent.
- The `bedrock_guardrail` module creates Amazon Bedrock guardrails with the user’s chosen method to safeguard user inputs and responses, detecting sensitive information such as personally identifiable information (PII). Amazon Bedrock Guardrails assesses user-provided information and foundation model outputs according to scenario-specific guidelines, offering an extra level of protection irrespective of the base foundation model (FM) used. The module blocks access based on the following properties (configurable based on user input in `variables.tf`):
  - Content policy configuration for the Amazon Bedrock guardrails
  - Sensitive information policy configuration for the Amazon Bedrock guardrails
  - Topic policy configuration for the Amazon Bedrock guardrails
  - Word policy configuration for the Amazon Bedrock guardrails
- The `vpc_endpoints` module creates interface and gateway endpoints for private communication between the services and the infrastructure. Endpoints are created for the following services:
  - Amazon Bedrock
  - Amazon OpenSearch Serverless
  - AWS Systems Manager
  - Amazon S3
  - AWS Key Management Service (AWS KMS)
  - Amazon Simple Queue Service (Amazon SQS)
- The `agent_update_lifecycle` module builds custom logic that automatically helps developers by creating a new agent alias pointing to the existing agent configuration. See the following section for the detailed workflow of this module.
Prerequisites
- Access management: The operations team must carefully delegate access for the services created by the solution. Proper access control measures should be in place to help secure and control access, for example: restricting access to Systems Manager parameters and models, using an IAM role for deployment and service-based roles, disabling public access to S3 buckets, and setting up a dead-letter queue.
- AWS CLI version: Make sure that the AWS Command Line Interface (AWS CLI) v2 is installed and configured correctly on the deployment environment.
- Provider versions: Before proceeding with the deployment, update the provider versions to the latest available versions to benefit from the latest features and security updates.
- API security: Review the OpenAPI schema and implement appropriate authentication mechanisms to help protect against unauthorized access and maintain data integrity.
- Model dependencies: Verify that the required Amazon Titan Text v2 and Claude 3 Haiku models are enabled in your AWS account. These models are essential for the solution’s functionality and can be changed in the variables. See Add or remove access to Amazon Bedrock foundation models for how to enable model access for the AWS Region where you are deploying the solution.
- Regional compatibility: Confirm that the target deployment region supports the required models. Cross-check the model access and availability in the chosen region to avoid deployment failures.
- Network architecture: Provision a regulated virtual private cloud (VPC) following AWS Well-Architected Framework best practices. This VPC will host the solution and support a secure and high-performing network architecture.
Deployment instructions
Use the steps in this section to deploy the solution, which contains a sample application.
- Clone the repository.
git clone "git@github.com:aws-samples/intelligent-rag-bedrockagent-iac.git"
- Navigate to the project base directory.
cd intelligent-rag-bedrockagent-iac
- Initialize Terraform.
terraform init
- Run `terraform plan` to view the list of resources that this solution will create.
terraform plan
- Run `terraform apply`. The flag `--auto-approve` is optional. If you don’t use it, you will be prompted to enter a confirmation manually.
terraform apply --auto-approve
Note: Ensure that `User Input` is enabled for the agent after deployment.
Solution workflow
The solution workflow is shown in the following figure.
The knowledge base is a repository of information that the agent can query to retrieve additional context and data to augment its responses. The agent orchestrates the workflow, which includes:
- Preprocessing the user input to contextualize and categorize it.
- Invoking the appropriate action group Lambda function or querying the knowledge base based on the user’s request.
- Generating an observation (output) from the action group Lambda function or knowledge base query.
- Formatting the final response to return to the user.
- The action group Lambda function is a custom function that the agent can invoke to perform specific tasks. During the orchestration step, the agent determines which action group Lambda function to invoke based on the user’s input.
Ultimately, the workflow of the Amazon Bedrock agent and knowledge base integration is as follows:
- The agent interprets the user’s input and generates a rationale for the next steps to take.
- Based on the rationale, the agent predicts which action group Lambda function to invoke or which knowledge base to query.
- If the agent needs more information to invoke the action, it will query the knowledge base to retrieve additional context.
- The agent then generates an observation (output) from the action group Lambda function or knowledge base query and uses it to augment the base prompt.
- This process continues in a loop until the agent can provide a final response to the user.
Agent update lifecycle workflow
The agent update lifecycle workflow is shown in the following figure.
Developers and DevOps administrators frequently test knowledge base and agent responses against their code changes. To do this, they need to prepare the agent and create a new agent alias pointing to the latest version whenever there are changes to the knowledge bases, the action group, or the agent metadata.
The custom code provided has logic (the `agent_update_lifecycle` module) to do this automatically for developers. The logic creates a new alias when any of the following occur:
- Developers update the Amazon Bedrock agent instructions using the AWS Console or the AWS CLI
- Developers update the action group Lambda function
- The knowledge base instructions for the agent are changed
- (Optional) Custom logic is added to initiate the creation of a new agent alias.
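The change detection at the heart of this module can be sketched in Python. This is a minimal illustration, not the repository's actual code: the function names, and the choice to hash these particular inputs, are assumptions (the repository tracks these values through Systems Manager parameters):

```python
import hashlib
import json


def fingerprint(agent_instructions: str, action_group_instructions: str,
                lambda_code_sha256: str) -> str:
    """Hash the inputs that should trigger a new agent alias when they change."""
    payload = json.dumps({
        "agent_instructions": agent_instructions,
        "action_group_instructions": action_group_instructions,
        "lambda_code_sha256": lambda_code_sha256,
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


def needs_new_alias(stored_fingerprint: str, current_fingerprint: str) -> bool:
    """A new alias is needed whenever any tracked input has changed."""
    return stored_fingerprint != current_fingerprint


# Example: updating the agent instructions changes the fingerprint,
# so a new alias pointing at the freshly prepared agent version is needed.
old = fingerprint("You are a helpful fitness assistant.", "Calculate BMI", "abc123")
new = fingerprint("You are a helpful fitness assistant. Be concise.", "Calculate BMI", "abc123")
print(needs_new_alias(old, new))  # True
```

When a change is detected, the module prepares the agent and creates the new alias; the Amazon Bedrock Agents API exposes the PrepareAgent and CreateAgentAlias operations for those steps.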
Chatbot application workflow
In this blog post, we’ll explore the creation of a simple fitness chatbot. Here’s a quick overview of its key features and the technology behind them:
- Natural language understanding: Using Claude 3 Haiku, our chatbot generates human-like responses to user queries, providing a smooth and intuitive conversation experience.
- Knowledge base integration: We’ve implemented a sophisticated knowledge retrieval system. Documents are uploaded to an S3 bucket, converted into embeddings using the Amazon Titan Text v2 model, and stored in OpenSearch Serverless as a vector database. This allows the chatbot to quickly access and deliver relevant fitness information.
- Flexible LLM capabilities: While we’re currently using Claude 3 Haiku, the code base is designed with configurability in mind, allowing simple switching between different large language models (LLMs) as an IaC variable.
- Comprehensive fitness knowledge: The chatbot is equipped with a broad understanding of fitness-related topics, making it a valuable resource for users seeking general health and exercise information.
- BMI calculation feature: As a practical application, we’ve integrated a body mass index (BMI) calculator. This feature is implemented as a Lambda function and is connected to the agent through an action group, demonstrating how to extend the chatbot’s capabilities with custom actions.
Agent configuration
Amazon Bedrock Agents streamline workflows and automate repetitive tasks. Amazon Bedrock Agents securely connect to your company’s data sources and augment the user request with the right information to generate an accurate response. The following instructions illustrate the configuration of the Bedrock agent.
Agent instructions
You are a helpful fitness assistant. You have general knowledge about sports. You can answer questions related to fitness, diet plans. Use only the tools or knowledge base provided to answer user questions. Choose between the tools or the knowledge base. Do not use both. Do not respond without using a tool or knowledge base.
When a user asks to calculate their BMI:
1. Ask for their weight in kilograms.
2. Ask for their height in meters.
3. If the user provides values in any other unit, convert it into kilograms for weight and meters for height. Do not make any comments about health status.
The key to creating an effective chatbot lies in the prompt—the instructions given to the AI to shape its behavior and responses. For our fitness chatbot, we developed a prompt that defines its role, capabilities, and limitations. Here’s a breakdown of the main components:
- Role definition: The chatbot is introduced as a helpful fitness assistant with general knowledge about sports and the ability to answer fitness and diet-related questions.
- Knowledge base: The AI is instructed to use only the provided tools or knowledge base to answer questions, maintaining accuracy and consistency in its responses.
- Specific functionality: We included a detailed process for calculating BMI, which demonstrates how to guide the AI through multi-step interactions.
- Unit conversion: To enhance user-friendliness, the chatbot is programmed to convert units if users provide weight or height in non-standard measurements.
- Ethical considerations: The AI is explicitly instructed not to make comments about health status, maintaining a neutral and non-judgmental stance.
The BMI calculation feature
One of the standout features of this chatbot is its ability to calculate BMI. Here’s how it works:
- When a user asks the chatbot to calculate their BMI, it initiates a structured dialogue.
- It first asks for the user’s weight in kilograms.
- Then, it requests the user’s height in meters.
- If the user provides values in different units (for example, pounds or feet and inches), the chatbot automatically converts these to kilograms and meters.
- After obtaining the necessary information, it calculates the BMI without offering unsolicited health advice.
The AI understands that it must calculate BMI. It understands that it needs to ask the user for height and weight. However, after that information is received it cannot immediately compute BMI. To perform the computation, it needs to first perform an action. This is defined in the agent’s action group.
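To make the action concrete, the following is a minimal sketch of what the BMI action group Lambda function could look like. The event and response shapes follow the Lambda input/output contract for Bedrock Agents action groups, but the parameter names, rounding, and error handling here are illustrative assumptions, not the repository's actual code:

```python
import json


def lambda_handler(event, context):
    """Sketch of a Bedrock Agents action group handler that calculates BMI.

    The agent collects weight and height from the user and passes them as
    action group parameters; BMI is weight (kg) divided by height (m) squared.
    """
    # Parameter names ("weight", "height") are assumptions for this sketch.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    weight_kg = float(params["weight"])
    height_m = float(params["height"])
    bmi = round(weight_kg / (height_m ** 2), 1)

    # Response envelope expected by the Bedrock agent for OpenAPI-based actions.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": event.get("apiPath"),
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {
                    "body": json.dumps({"bmi": bmi})
                }
            },
        },
    }
```

The agent parses the JSON body of this response and phrases the result back to the user without adding health commentary, as instructed in the prompt.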
Action group
First, let’s clarify what an action group is. In the context of Amazon Bedrock agents, an action group is a configuration that enables the agent to perform specific actions. These actions are defined in Lambda functions, allowing for custom functionality beyond the agent’s built-in capabilities.
How does the agent know when to call a particular Lambda function? The answer lies in the OpenAPI Schema definition file.
The decision-making process:
- Knowledge base check: When a user asks a question, the agent first consults its knowledge base. This is where it determines whether it can answer directly or if it needs to perform an action.
- Action determination: If the agent decides an action is necessary, it turns to the OpenAPI schema for guidance.
- Schema consultation: The OpenAPI schema acts as a roadmap, defining available actions and their corresponding Lambda functions.
A real-world example: BMI calculation
Let’s illustrate this with our fitness chatbot’s BMI calculation feature:
- User query: The user asks the chatbot to calculate their BMI.
- Agent analysis: The agent recognizes this request requires an action rather than a simple response based on the knowledge base.
- Schema reference: The agent consults the OpenAPI schema, which includes a definition for BMI calculation.
- Function identification: Within the schema, the agent finds the specific Lambda function designated for BMI calculation.
- Invocation: With this information, the agent invokes the correct Lambda function to perform the BMI calculation.
The OpenAPI schema: A closer look
In our BMI calculation example, the OpenAPI schema looks something like the following figure.
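As a hedged sketch of what such a schema could contain (the path, operation ID, and property names here are illustrative and may differ from the schema shipped in the repository), a minimal BMI definition might be:

```json
{
  "openapi": "3.0.0",
  "info": { "title": "Fitness Assistant API", "version": "1.0.0" },
  "paths": {
    "/bmi": {
      "post": {
        "summary": "Calculate BMI",
        "description": "Calculates body mass index from weight in kilograms and height in meters.",
        "operationId": "calculateBmi",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "weight": { "type": "number", "description": "Weight in kilograms" },
                  "height": { "type": "number", "description": "Height in meters" }
                },
                "required": ["weight", "height"]
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "The calculated BMI value",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": { "bmi": { "type": "number" } }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

The agent reads the summary and description fields to match the user’s intent to an operation, and the required parameters tell it what information to collect from the user before invoking the Lambda function.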
Test the chatbot
In this section, we test the chatbot by asking it a series of questions and evaluating the answers. The figures show the questions and answers. All but the first figure also show the trace window, which shows the steps taken by the chatbot to return an answer.
Please note that because the chatbot uses an LLM, the language and the response can vary. However, the sentiment and the essence of the response should remain the same. Also, this is a sample application that calculates BMI using a generic formula; it is not intended for actual health assessment.
Scenario 1: Ask a generic question to understand the capabilities of the chatbot.
Question: What can you help me with?
Answer: As a fitness agent, I can help you with questions related to fitness, diet plans, and calculating your Body Mass Index(BMI). I have access to set of functions that can assist you with these tasks. Let me know if you have any specific questions and I'll do my best to help.
Scenario 2: In the data source file, there is some data that contains information about how IT professionals can improve their fitness. The purpose of this question is to demonstrate that we can ask the chatbot to search the knowledge base and provide a corresponding answer.
Question: what are the fitness tips for IT professionals?
Answer: 1. Combat sedentary lifestyle by standing up and stretching every hour, using a standing or treadmill desk, and taking short walks during breaks or phone call. 2. Maintain eye health by following the 20-20-20 rule (look at something 20 feet away for 20 seconds every 20 mins, using blue light filtering glasses, and adjusting monitor brightness and contrast. 3. Incorporate desk exercises like desk push-ups, chair squats, seated leg raises, and wrist finger stretches to stay active throughout the workday. 4. Prioritize stress management through mindfulness, meditation and taking regular breaks to reset mentally. 5. Maintain good nutrition and hydration habits, such as eating brain boosting foods and avoiding excessive caffeine and sugar, and staying hydrated.
Scenario 3: Let’s ask a generic question which has some overlap with fitness. We want to know if the LLM can answer questions from its pre-trained LLM data.
Question: What is table tennis?
Answer: Table tennis, also known as ping-pong, is a popular indoor sport where two or four players hit a lightweight ball back and forth across a net on a table. The objective is to hit the ball in such a way that the opponent cannot return it successfully. Table tennis is played on a table with a net in the center, and players use small paddles to hit the ball. It is an Olympic sport and is enjoyed by people of all ages as a recreational activity or competitive sport.
Scenario 4: Now, we ask a question for which the agent must perform an action. The action in this example is to calculate BMI.
Question: Can you calculate BMI
Answer: <Q&A flow, as displayed in the following screenshot>
Clean up
Use the following steps to clean up the infrastructure created for the solution. The solution can be built, updated, or modified using a variety of DevOps tools or procedures.
To clean up the infrastructure:
- Empty the data source S3 bucket.
- Navigate into the project base directory.
cd intelligent-rag-bedrockagent-iac
- Run `terraform destroy`. The flag `--auto-approve` is optional. If you don’t use it, you will be prompted to enter a confirmation manually.
terraform destroy --auto-approve
Conclusion
This blog post guides you through implementing a generative AI intelligent chatbot using Amazon Bedrock and AWS services, including developer lifecycle changes. It covers creating a scalable chatbot infrastructure with Terraform, using services like Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, OpenSearch Serverless, Lambda, Amazon S3, and IAM. The focus on IaC and automation brings key DevOps benefits: version control, consistent deployments, reduced errors, faster provisioning, and improved collaboration.
This guide serves as a blueprint for organizations to develop code based on reusable, modular, and security best practices to deploy generative AI-based infrastructure into multiple environments with minimal changes (using DevOps if required).
- Review the Amazon Responsible AI policy before implementing this solution.
- This solution does not include a chat history feature and the chat is not stored.
- This solution deploys the default guardrail policies.
If you have feedback about this blog post, use the Comments section on this page.