Front-End Web & Mobile
Add a Conversational Interface to any Data Source
When building generative AI applications using Amazon Bedrock, it’s often necessary to enhance the capabilities of the models by giving them access to external data or functionality. The way to do that is with tools (sometimes referred to as function calling). These tools act as bridges, allowing models to retrieve application-specific, real-time, or dynamic information that isn’t available in the model training data. For example, imagine a conversational assistant that not only answers general queries but can also fetch a live weather forecast, retrieve the latest news or stock prices, or interact with your internal databases.
How LLM tools work
Describing the tool
First, the LLM needs to know which tools are available to it. It needs information such as each tool's name and description so that it can choose the right one for the right situation; a descriptive name and a detailed description help the LLM decide and produce better results. The LLM also needs to know what inputs the tool takes, much like an API specification or a function interface. In the Amazon Bedrock Converse API, tool definitions look like this:
{
  "tools": [
    {
      "toolSpec": {
        "name": "get_news",
        "description": "Provides the latest news for a given topic.",
        "inputSchema": {
          "json": {
            "type": "object",
            "properties": {
              "topic": {
                "type": "string",
                "description": "The topic we want the news about."
              }
            },
            "required": [
              "topic"
            ]
          }
        }
      }
    }
  ]
}
This object describes what the tool does and the inputs it requires. For example, a “News API” tool might need a topic as input to return the news. We’ll see later that the AWS Amplify AI kit greatly simplifies this, so you won’t need to define tools in this manner yourself.
Calling the tool
When users interact with Amazon Bedrock, the model analyzes the prompt and determines whether a tool is needed to generate the response. If it is, the model responds by specifying the tool to use and asking the application code to use the tool on its behalf.
The application receives the model's tool request and executes some code to fulfill it. The output of this execution is then added to the conversation history and sent back to the model. With the tool result in the conversation history, the model can then respond to the user with a human-readable answer or choose to invoke another tool if needed. It is important to know that LLMs are stateless: each request to an LLM is a separate and unique API call, and the conversation history, system prompt, and tool configuration are all sent to the LLM each time. Application code outside of the LLM needs to manage the conversation history, invoke tools at the right time, and prompt or re-prompt the LLM. Let’s take a look at a simplified diagram of what is happening:
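In code, that loop with the Bedrock Converse API looks roughly like the sketch below, using the AWS SDK for JavaScript v3. The fetchNews helper and the model ID are placeholder assumptions for illustration; this is not the code the Amplify AI kit generates for you, just the raw pattern it manages on your behalf.
import {
  BedrockRuntimeClient,
  ConverseCommand,
  type ContentBlock,
  type Message,
} from "@aws-sdk/client-bedrock-runtime";

const bedrock = new BedrockRuntimeClient();

// The get_news tool definition from above, sent with every request.
const toolConfig = {
  tools: [
    {
      toolSpec: {
        name: "get_news",
        description: "Provides the latest news for a given topic.",
        inputSchema: {
          json: {
            type: "object",
            properties: {
              topic: { type: "string", description: "The topic we want the news about." },
            },
            required: ["topic"],
          },
        },
      },
    },
  ],
};

// Hypothetical helper that would call a news API and return JSON-serializable data.
async function fetchNews(topic: string) {
  return [{ title: `Example headline about ${topic}` }];
}

export async function chat(prompt: string) {
  // LLMs are stateless, so the full message history is sent on every request.
  const messages: Message[] = [{ role: "user", content: [{ text: prompt }] }];

  while (true) {
    const response = await bedrock.send(
      new ConverseCommand({
        modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0", // placeholder model ID
        messages,
        toolConfig,
      })
    );

    const assistantMessage = response.output!.message!;
    messages.push(assistantMessage);

    // When the model doesn't ask for a tool, it has produced its final answer.
    if (response.stopReason !== "tool_use") {
      return assistantMessage.content?.find((block) => block.text)?.text;
    }

    // Run each requested tool and send the results back as toolResult blocks.
    const toolResults: ContentBlock[] = [];
    for (const block of assistantMessage.content ?? []) {
      if (block.toolUse && block.toolUse.name === "get_news") {
        const input = block.toolUse.input as { topic: string };
        toolResults.push({
          toolResult: {
            toolUseId: block.toolUse.toolUseId,
            content: [{ json: await fetchNews(input.topic) }],
          },
        });
      }
    }
    messages.push({ role: "user", content: toolResults });
  }
}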
Using LLM tools is very powerful, but also fairly complex to manage. Let’s look at how the AWS Amplify AI kit makes it easy to define tools and build rich, interactive generative AI applications.
Using LLM tools with AWS Amplify
First, make sure you have an Amplify application set up and the Amplify libraries installed. You can find more information in this article about how to build fullstack AI apps with the new Amplify AI Kit.
In Amplify, you define your data in a TypeScript schema definition in the amplify/data/resource.ts file. You can define data models with relationships and authorization rules, as well as custom queries. Because accessing data is so important to generative AI applications, you can also define what data your generative AI functionality has access to in this same schema.
Let’s take a look at how to define a custom query that can fetch some data from an external API and add conversational search on top of that API using LLM tools.
Defining a query
Let’s start by defining the shape of the query response. Open up the amplify/data/resource.ts file and add a customType in the schema called News. This custom type will have a title, description, and the source of the news article.
const schema = a.schema({
  News: a.customType({
    source: a.string(),
    title: a.string(),
    description: a.string()
  }),
  //...
})
Next, define the custom query. To do that you will need to define the arguments it takes (the input to the API), what it returns (the custom type you just defined), the handler function (the implementation), and the authorization (who can access it).
const schema = a.schema({
  //...
  getNews: a
    .query()
    .arguments({ topic: a.string() })
    .returns(a.ref("News").array())
    .handler(a.handler.function(getNews))
    .authorization((allow) => allow.authenticated()),
  //...
})
Now let’s define the handler function. Use defineFunction to define the getNews function in the amplify/data/resource.ts file.
import { defineFunction, secret } from '@aws-amplify/backend';

export const getNews = defineFunction({
  name: "getNews",
  entry: "./getNews.ts",
  environment: {
    NEWSAPI_API_KEY: secret("NEWSAPI_API_KEY"),
  },
});
We recommend storing your API key as an Amplify secret, which is kept in AWS Systems Manager Parameter Store. To add a secret in your sandbox environment, use the command below:
npx ampx sandbox secret set NEWSAPI_API_KEY
All you need to do now is write your getNews function in the amplify/data/getNews.ts file. This example uses the https://newsapi.org/ API to retrieve the latest news about a topic.
import { env } from "$amplify/env/getNews";
import type { Schema } from "./resource";

export const handler: Schema["getNews"]["functionHandler"] = async (event) => {
  const res = await fetch(
    `https://newsapi.org/v2/everything?q=${encodeURIComponent(
      event.arguments.topic ?? ""
    )}&apiKey=${env.NEWSAPI_API_KEY}`
  );
  const news = await res.json();
  const result = news.articles.slice(0, 3).map((article: any) => ({
    source: article.source.name,
    title: article.title,
    description: article.description,
  }));
  return result;
};
Now you have a custom query defined in your data schema that takes a topic and fetches the latest news articles. All that is left is to use this query as an LLM tool. To do that, define an AI conversation route in your data schema using a.conversation(). Provide the AI model you wish to use, a system prompt, and the tools it has access to. To define an LLM tool, use a.ai.dataTool and provide it with a name, a description, and a reference to the custom query you defined earlier.
const schema = a.schema({
  //...
  chat: a.conversation({
    aiModel: a.ai.model("Claude 3.5 Sonnet"),
    systemPrompt: `
    You are a helpful assistant.
    `,
    tools: [
      a.ai.dataTool({
        name: "get_news",
        description: "Provides the latest news for a given topic.",
        query: a.ref("getNews"),
      }),
    ],
  })
  .authorization((allow) => allow.owner()),
})
If you are using the Amplify sandbox, the changes are deployed to your personal cloud sandbox as soon as you save your data resource file. Alternatively, you can push your changes to a git branch that is connected to an Amplify application in the Amplify console to deploy them.
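If the sandbox isn't running yet, you can start it from your project root; it watches your files and redeploys on every save (this uses the same ampx CLI as the secret command above):
npx ampx sandbox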
Building the chat interface
Install the Amplify client library and React components:
npm i aws-amplify @aws-amplify/ui-react @aws-amplify/ui-react-ai react-markdown
Then, make sure your frontend code has the Amplify library configured by calling Amplify.configure with the amplify_outputs.json file that Amplify generates when you use the sandbox, or download the file from the Amplify console.
import { Amplify } from 'aws-amplify';
import '@aws-amplify/ui-react/styles.css';
import outputs from '../amplify_outputs.json';
Amplify.configure(outputs);
Then use the useAIConversation React hook to connect to the chat AI conversation route, and pass the messages and send message handler to the AIConversation React component. Make sure to wrap it in the Authenticator component because conversations are scoped to a user.
import { generateClient } from "aws-amplify/api";
import { Authenticator } from '@aws-amplify/ui-react';
import { createAIHooks, AIConversation } from "@aws-amplify/ui-react-ai";
import ReactMarkdown from "react-markdown";
import type { Schema } from "../amplify/data/resource";

export const client = generateClient<Schema>({ authMode: "userPool" });
export const { useAIConversation } = createAIHooks(client);

function App() {
  const [
    {
      data: { messages },
      isLoading,
    },
    handleSendMessage,
  ] = useAIConversation('chat');

  return (
    <Authenticator>
      <AIConversation
        messageRenderer={{
          text: ({ text }) => <ReactMarkdown>{text}</ReactMarkdown>
        }}
        messages={messages}
        isLoading={isLoading}
        handleSendMessage={handleSendMessage}
      />
    </Authenticator>
  );
}

export default App;
Retrieving data models
You can also give the LLM access to the data models you define in your data schema. For example, if you are building a note-taking application and want your conversational assistant to have access to your notes as well as current news stories, you can do that.
First, add a data model to your data schema:
const schema = a.schema({
  //...
  Note: a.model({
    title: a.string(),
    content: a.string()
  })
  .authorization((allow) => [allow.owner()]),
  //...
});
Then add a new tool in the conversation route definition:
tools: [
//...
a.ai.dataTool({
name: 'SearchNotes',
description: 'Used to search for notes the user has written',
model: a.ref('Note'),
modelOperation: 'list'
})
]
Now your conversational assistant will be able to respond with notes that your users have created as well as current news articles.
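For the SearchNotes tool to have anything to return, your application needs to create notes through the generated data client. Here is a minimal sketch, assuming the same client you created for the chat component; the title and content values are just examples:
// Create a note with the generated data client; the conversation's
// SearchNotes tool can then list it on the signed-in user's behalf.
const { data: note, errors } = await client.models.Note.create({
  title: "Amplify AI kit",
  content: "Try the news assistant with the SearchNotes tool next sprint.",
});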
Cleanup
If you are running the sandbox, you can delete your sandbox and all the backend infrastructure that was created by running the following command in your terminal.
npx ampx sandbox delete
Wrapping up
In this blog post, you learned what LLM tools are and how you can use them in AWS Amplify to retrieve data from an external source, either through an external API call or from your database using AWS AppSync. If you want to learn more, take a look at our guide for getting started with AI or check out the sample code.