AWS Public Sector Blog

Reimagining customer experience with AI-powered conversational service discovery


The Executive Order on Transforming Federal Customer Experience and Service Delivery to Rebuild Trust in Government requires that services be designed and delivered so that people of all abilities can navigate them. The executive order calls for using technology to modernize government and to implement services that are simple to use, accessible, equitable, protective, transparent, and responsive for all people of the United States. Governments across the globe often provide public services sourced from information residing in disparate systems. The heterogeneity of these systems makes it challenging for users to access, navigate, and discover available sources of information and relevant service content with precision and accuracy.

Governments often create websites or applications called “service catalogs” to list all the products and services they offer. These can include IT support, IT operations, HR, facilities, and more. Employees and customers can search the catalog and select services to submit requests or get information. However, you may not know which service you need to accomplish a task, such as applying for a purchase card. The specific service may be part of a larger financial offering that is not listed in the catalog. Searching through everything becomes time-consuming. AI-powered chatbots can simplify this by understanding requests in natural language and providing direct answers. For example, if you ask, “How do I apply for a purchase card?” the chatbot can give you the right information quickly, saving you time.

In this post, we will explore the use of generative artificial intelligence (AI) chatbots as a natural language alternative to the service catalog approach. We will present an Amazon Web Services (AWS) architecture pattern to deploy an AI chatbot that can understand user requests in natural language and provide interactive responses to user requests, directing them to the specific systems or services they are looking for. Chatbots simplify the content navigation and discovery process while improving the customer experience.

Architecture overview

Figure 1 presents a high-level architecture pattern for a single-page application (SPA). The SPA is built with a front-end framework or library such as Angular or React and enables users to submit requests to the generative AI app, which generates a response using Amazon Bedrock and returns it to the user.

Figure 1. Architectural diagram of the generative AI chatbot described in this post. The major components are Amazon CloudFront, AWS WAF, AWS Shield Advanced, an Amazon S3 bucket, Amazon Cognito, Amazon API Gateway, AWS Lambda, Amazon Bedrock, Amazon OpenSearch, and Amazon RDS.

Subcomponents of the architecture

The next sections go into some detail about the subcomponents of the architecture guidance.

Chatbot application

The chatbot front end enables users to interact with the generative AI app directly in their web browser. We built the front end as an SPA and deployed it using Amazon CloudFront and Amazon Simple Storage Service (Amazon S3). We use AWS WAF and AWS Shield Advanced to enhance the security posture on AWS. AWS WAF lets you monitor web requests and block threats like SQL injection or cross-site scripting. AWS Shield Advanced protects your app against distributed denial of service (DDoS) attacks by analyzing traffic patterns and automatically detecting malicious activity. We also use Amazon Cognito to provide sign-up and sign-in and to manage user identities for your web and mobile applications.
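As a sketch of the Cognito sign-in step, the snippet below uses the cognito-idp InitiateAuth API with the USER_PASSWORD_AUTH flow (which must be enabled on your user pool app client); the client ID and credentials shown are placeholders, not values from this architecture.

```python
def build_auth_params(username: str, password: str, client_id: str) -> dict:
    """Build the InitiateAuth parameters for Cognito's USER_PASSWORD_AUTH flow."""
    return {
        "ClientId": client_id,  # hypothetical user pool app client ID
        "AuthFlow": "USER_PASSWORD_AUTH",
        "AuthParameters": {"USERNAME": username, "PASSWORD": password},
    }

def sign_in(username: str, password: str, client_id: str) -> str:
    """Authenticate against Cognito and return the ID token the chatbot
    front end attaches to subsequent API requests."""
    import boto3  # imported lazily so the parameter builder is usable offline

    cognito = boto3.client("cognito-idp")
    response = cognito.initiate_auth(**build_auth_params(username, password, client_id))
    return response["AuthenticationResult"]["IdToken"]
```

In a production SPA, the browser would typically use a Cognito-hosted UI or a front-end auth library instead of handling passwords directly; the sketch only illustrates the token exchange.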

Next, we integrated the chatbot front end with the generative AI app to securely submit user requests using HTTPS APIs and to get responses.
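The front-end integration can be sketched as follows: a minimal Python example that builds an authenticated HTTPS request to the generative AI app. The API Gateway endpoint URL and request payload shape are hypothetical placeholders.

```python
import json
import urllib.request

# Hypothetical API Gateway endpoint for the generative AI app.
API_ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/prod/chat"

def build_chat_request(question: str, id_token: str) -> urllib.request.Request:
    """Build an HTTPS POST to the generative AI app behind API Gateway.

    The Cognito ID token goes in the Authorization header so the
    API Gateway authorizer can validate the caller.
    """
    payload = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        API_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": id_token,
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen(build_chat_request(...))` would return the chatbot's JSON response.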

Generative AI app

The generative AI app generates responses to user requests. We build the app using Python, the AWS SDK for Python (Boto3), and the Amazon Bedrock runtime APIs, and deploy it on AWS Lambda behind Amazon API Gateway. The app uses Anthropic’s Claude large language model (LLM) to generate natural language text through a simple API call. API Gateway provides a scalable front end that invokes the Lambda functions running your code. This serverless architecture allows the app to handle high request volumes in a cost-optimal manner.
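A minimal sketch of the Lambda handler is shown below, assuming the Anthropic Messages request format that Bedrock's InvokeModel API expects for Claude models; the model ID is one example and should match the model enabled in your account.

```python
import json

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # example Claude model ID

def build_claude_body(question: str, max_tokens: int = 512) -> str:
    """Build the Anthropic Messages request body that Bedrock expects."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": question}],
    })

def lambda_handler(event, context):
    """API Gateway proxy handler: forward the user's question to Claude."""
    import boto3  # imported lazily so the body builder is testable offline

    question = json.loads(event["body"])["question"]
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(modelId=MODEL_ID, body=build_claude_body(question))
    answer = json.loads(response["body"].read())["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```

Configured as a Lambda proxy integration, API Gateway passes the HTTP body through in `event["body"]` and returns the handler's response to the caller.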

We also implement a Retrieval-Augmented Generation (RAG) workflow using Knowledge Bases for Amazon Bedrock to guide the LLM with relevant data. Knowledge Bases for Amazon Bedrock manages vector stores and embeddings to make the architecture more scalable.

When the generative AI app receives a user request, it performs a semantic search using the knowledge base and generates a response using Anthropic’s Claude model. The response contains relevant details and actions for the user’s query. For example, for the question “How do I apply for a purchase card?”, the app generates a response in natural language by listing each system, explaining what that system does, describing possible user actions, and providing clickable URLs.
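This retrieve-then-generate flow can be sketched with the Bedrock RetrieveAndGenerate API, which runs the semantic search over the knowledge base and grounds the model's answer in the retrieved passages. The knowledge base ID and model ARN below are placeholders.

```python
KNOWLEDGE_BASE_ID = "EXAMPLEKBID"  # hypothetical knowledge base ID
MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-sonnet-20240229-v1:0"
)

def build_rag_request(question: str) -> dict:
    """Build the RetrieveAndGenerate parameters: Bedrock retrieves relevant
    passages from the knowledge base and uses them as context for Claude."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    }

def answer_question(question: str) -> str:
    import boto3  # imported lazily so the request builder is usable offline

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(**build_rag_request(question))
    return response["output"]["text"]
```

The same call also returns source citations, which the chatbot could render as the clickable URLs described above.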

This tailored response allows the user to directly access the required system or service without searching through nested links or catalogs. By quickly providing customized guidance in conversational language, the chatbot streamlines access to services and improves the user experience. The natural language responses deliver the right information on demand, reducing user frustration and saving time.

Data processing and the knowledge base

To build the knowledge dataset, you need to ingest data from domain-specific sources containing information about your organization’s services. A data pipeline architecture can help ingest data from diverse sources, validate and clean the data, and enrich it to create high-quality knowledge datasets (steps A to B, as shown in Figure 1). Although building data pipelines is outside the scope of this post, you can review analytic pipeline reference architectures for help in constructing your own pipelines and curated knowledge datasets.
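As an illustration of the validate-and-clean stage, the hypothetical helper below normalizes raw service records before they are added to the curated dataset; the record fields (name, description, url) are assumptions about your source data, not a prescribed schema.

```python
def clean_service_records(records: list[dict]) -> list[dict]:
    """Validate and normalize raw service entries for the curated
    knowledge dataset (steps A to B in Figure 1).

    Keeps only records with a name and a description, trims whitespace,
    and drops case-insensitive duplicates.
    """
    seen = set()
    cleaned = []
    for record in records:
        name = (record.get("name") or "").strip()
        description = (record.get("description") or "").strip()
        if not name or not description:
            continue  # incomplete records cannot ground a useful answer
        key = (name.lower(), description.lower())
        if key in seen:
            continue  # duplicates would skew retrieval results
        seen.add(key)
        cleaned.append({
            "name": name,
            "description": description,
            "url": (record.get("url") or "").strip(),
        })
    return cleaned
```

In practice this logic would sit inside your pipeline (for example, an AWS Glue job or Lambda step) rather than run as a standalone script.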

You then build a knowledge base from your data using the knowledge base capabilities of Amazon Bedrock, which provides a more scalable and adaptable solution (steps C to D, as shown in Figure 1). You don’t need to manage knowledge bases or vector stores, or create embeddings in your code. Amazon Bedrock simplifies the process of ingesting new datasets: upload the data to Amazon S3 and then select the Sync option in the Amazon Bedrock knowledge base console.
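Programmatically, the same upload-and-sync step can be sketched with the S3 upload_file and Bedrock start_ingestion_job APIs (the console Sync option triggers the same ingestion). The bucket name, knowledge base ID, and data source ID below are placeholders.

```python
import os

def dataset_key(local_path: str, prefix: str = "knowledge-base/") -> str:
    """Derive the S3 object key for a curated dataset file."""
    return prefix + os.path.basename(local_path)

def sync_knowledge_base(local_path: str, bucket: str,
                        knowledge_base_id: str, data_source_id: str) -> str:
    """Upload a curated dataset to S3 and start a knowledge base
    ingestion job; returns the ingestion job ID for status polling."""
    import boto3  # imported lazily so dataset_key stays usable offline

    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, dataset_key(local_path))

    agent = boto3.client("bedrock-agent")
    job = agent.start_ingestion_job(
        knowledgeBaseId=knowledge_base_id,
        dataSourceId=data_source_id,
    )
    return job["ingestionJob"]["ingestionJobId"]
```

You can poll the returned job ID with `get_ingestion_job` to confirm the new documents are indexed before the chatbot relies on them.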

Message flow

The following steps represent the flow of messages in the architecture to generate responses for the user’s requests:

  1. The user accesses the chatbot application through a web browser using the Amazon CloudFront URL.
  2. CloudFront integrates with AWS WAF and AWS Shield Advanced to enhance the security of the chatbot application and infrastructure.
  3. CloudFront securely retrieves the web resources by making a request to Amazon S3.
  4. The user is redirected to Amazon Cognito to authenticate before being granted access to the chatbot application.
  5. The user submits a request to the generative AI application through Amazon API Gateway.
  6. API Gateway routes the request to the generative AI application.
  7. The generative AI app utilizes Knowledge Bases for Amazon Bedrock to find content related to the user’s requests.
  8. Finally, the generative AI application uses the context from step 7 along with the user’s request to generate a natural language response. The response is returned to the chatbot to display to users.

A chatbot example

Here is an example of the chatbot responding to the user’s query, “How do I apply for a purchase card?”

Figure 2. AI-powered conversational chatbot sample response.

AWS can help government agencies build service experience portals to deliver on customer experience (CX)

In this post, we explored using an AI-powered conversational chatbot to simplify customer experience and transform service discovery for federal agencies. The chatbot provides an intuitive alternative to browsing complex websites or searching catalogs. Users can ask natural language questions and receive relevant recommendations through a conversation.

Under the hood, generative AI capabilities allow the chatbot to interpret queries and respond appropriately. This shifts the search burden from the user to an intelligent agent. Rather than digging through links, customers can navigate effortlessly as the chatbot guides them to the right services and information. By using this conversational approach, governments can make their online resources far more accessible and user-friendly.

To get started on a proof of concept or implementation project using this reference architecture or to learn more about the AWS generative AI–based chatbot, contact your AWS account team or reach out to the AWS Public Sector team for more information.


Naresh Dhiman

Naresh is a senior solutions architect at Amazon Web Services (AWS) supporting US federal customers. He has more than 25 years of experience as a technology leader and is a recognized inventor with six patents. He specializes in containers, machine learning (ML), and generative artificial intelligence (AI) on AWS.

Mickey Iqbal

Mickey is director of enterprise and principal technologists at Amazon Web Services (AWS). He leads a global team of expert builders who deliver innovative and scalable cloud solutions for public sector customers worldwide. Before joining AWS, Mickey was an IBM Fellow and vice president, a CEO of a digital health startup, and a co-author of three technical books and multiple publications. He has also filed 40-plus patents and received the 2018 Asian American Engineer of the Year award from AAEOY.org.