Front-End Web & Mobile
Build fullstack AI apps in minutes with the new Amplify AI Kit
Today we are excited to announce the general availability of the AWS Amplify AI kit, the quickest way for fullstack developers to build web apps with AI capabilities such as chat, conversational search, and summarization. With the Amplify AI kit you can build fullstack generative AI functionality without prior experience in cloud architecture or machine learning. You don’t need to worry about provisioning resources or securing frontend access, and because it is all serverless you can iterate quickly and only pay for what you use.
If you haven’t heard, here at AWS Amplify we are all in on TypeScript. With Amplify Gen 2, every part of your app’s cloud backend is defined in TypeScript. Need an Auth backend? TypeScript. Data backend? TypeScript. Storage backend? TypeScript. Generative AI? TypeScript.
In this blog we will go over what is in the Amplify AI kit and how it can simplify building secure fullstack AI applications with Amplify and Amazon Bedrock. If you want to jump right in you can check out our getting started guide or the samples repo.
Set up
You will need an AWS account that is set up for local development and has access to the foundation model(s) in Amazon Bedrock you want to use. You can request model access in the Amazon Bedrock console.
If you don’t have a frontend project set up, you can create one with Next.js or Vite. Then run the create amplify script in your project directory:
npm create amplify@latest
This will create an amplify folder that will contain your backend definition in TypeScript. You can then run:
npx ampx sandbox
This will start the Amplify sandbox, giving you a cloud sandbox environment to test against real AWS resources.
Adding AI functionality
To create AI functionality with the Amplify AI kit, you define AI “routes” in your Amplify data schema along with your data models and custom queries. An AI route is like an API endpoint for interacting with backend AI functionality. There are currently two types of AI routes:
- Conversation: A conversation route is a realtime, multi-turn API. Conversations and messages are automatically stored in DynamoDB and responses are streamed to the client in realtime. Examples include chat-based AI experiences and conversational UIs.
- Generation: A simple request-response API. A generation route is an AWS AppSync query that generates structured data according to your definition. Common uses include generating structured data from unstructured input and summarization.
In your Amplify data resource, you define AI routes as part of your schema:
import { a, defineData, type ClientSchema } from '@aws-amplify/backend';

const schema = a.schema({
  // This will add a new conversation route to your Amplify Data backend.
  chat: a.conversation({
    aiModel: a.ai.model('Claude 3 Haiku'),
    systemPrompt: 'You are a helpful assistant',
  })
  .authorization((allow) => allow.owner()),

  // This adds a new generation route to your Amplify Data backend.
  generateRecipe: a.generation({
    aiModel: a.ai.model('Claude 3 Haiku'),
    systemPrompt: 'You are a helpful assistant that generates recipes.',
  })
  .arguments({
    description: a.string(),
  })
  .returns(
    a.customType({
      name: a.string(),
      ingredients: a.string().array(),
      instructions: a.string(),
    })
  )
  .authorization((allow) => allow.authenticated()),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({
  schema,
  authorizationModes: {
    defaultAuthorizationMode: "userPool",
  },
});
Here we are creating two AI routes: a conversation route named ‘chat’ and a generation route named ‘generateRecipe’. Let’s go through the code one step at a time.
chat: a.conversation({
Here we are creating a new “chat” conversation route. You can name it whatever you want, and the only limit on how many AI routes you can define is the AWS AppSync schema size limit.
aiModel: a.ai.model('Claude 3 Haiku'),
Then you define the LLM you want to use. The Amplify AI kit supports any LLM in Bedrock that supports tool use and streaming. Here we are using Anthropic’s Claude 3 Haiku model.
systemPrompt: 'You are a helpful assistant',
The system prompt provides high-level instructions to the LLM about its role and how it should respond.
.authorization((allow) => allow.owner()),
Amplify Data has built-in authorization rules. For conversation routes we only allow the owner of a specific conversation to access it, which means users can’t access the conversation history of other users.
generateRecipe: a.generation({
Here we are creating a generation route called generateRecipe. Generations are synchronous request/response APIs for when you just want to generate data of a specific shape from some input.
.arguments({
  description: a.string(),
})
This defines the input shape, what the client needs to send in order to get a response.
.returns(
  a.customType({
    name: a.string(),
    ingredients: a.string().array(),
    instructions: a.string(),
  })
)
This is the output shape of the data you want to generate.
Now that we have our conversation and generation routes defined, we can connect to them with the Amplify client libraries.
A type-safe client with realtime updates
With Amplify you define your backend resources in your fullstack application codebase in an amplify folder. This allows your data schema and frontend client to share types so you get a type-safe client that is always in sync with your data schema.
import { generateClient } from "aws-amplify/api";
import type { Schema } from "../amplify/data/resource";

const client = generateClient<Schema>();

// new generations / conversations namespaces
client.generations.generateRecipe({ description: "a gluten-free banana bread" }); // type-safe based on schema
client.conversations.chat.create();
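For example, here is a minimal sketch of creating a conversation, subscribing to the streamed response, and sending a message (the message text is just an illustration):

// Create a new conversation (stored in DynamoDB and scoped to this user)
const { data: conversation } = await client.conversations.chat.create();

// Subscribe to the assistant's streamed response chunks
conversation?.onStreamEvent({
  next: (event) => {
    // each event carries a piece of the response as it streams in
    console.log(event);
  },
  error: (error) => console.error(error),
});

// Send a message; the response streams back through onStreamEvent
conversation?.sendMessage({
  content: [{ text: 'What should I cook tonight?' }],
});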
The Amplify AI kit also has React hooks so you don’t need to worry about managing network requests or UI state. You just wire them into your UI and you have a realtime, secure, persistent generative AI conversation.
import { generateClient } from "aws-amplify/api";
import type { Schema } from "../amplify/data/resource";
import { createAIHooks } from "@aws-amplify/ui-react-ai";

const client = generateClient<Schema>();
const { useAIGeneration, useAIConversation } = createAIHooks(client);

function Chat() {
  const [
    {
      data: { messages },
      isLoading,
      hasError,
    },
    sendMessage,
  ] = useAIConversation('chat');
  //...
}

function RecipeGenerator() {
  const [{ data, isLoading }, handleGenerate] = useAIGeneration('generateRecipe');
  //...
}
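As a sketch of what the RecipeGenerator component might look like filled in (the button wiring and description value here are illustrative, not part of the kit):

function RecipeGenerator() {
  const [{ data, isLoading }, handleGenerate] = useAIGeneration('generateRecipe');

  // Kick off a generation with the arguments defined in the schema
  const onClick = () => {
    handleGenerate({ description: 'A quick vegetarian pasta dish' });
  };

  return (
    <div>
      <button onClick={onClick} disabled={isLoading}>
        Generate recipe
      </button>
      {data && <pre>{JSON.stringify(data, null, 2)}</pre>}
    </div>
  );
}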
Generative AI as data
Having the right access to your data is important for building meaningful generative AI functionality, and we wanted to make connecting data to an LLM as easy as possible. With Amplify, you define your data models in a strictly-typed schema definition, and now adding generative AI functionality is as easy as adding a data model or custom query. Because you are defining the shape of your data models and queries in TypeScript, the Amplify AI kit hooks up the necessary pieces to let the LLM in Bedrock know exactly what data it has access to and how to access it. Plus, with the access controls defined in your data schema, the LLM can only access the data your end user can access. All data requests flow through AWS AppSync, whether they come from the client or from the LLM requesting some data.
Let’s say we have an application that has a “Post” data model. We can give our conversation route access to this data through tools.
import { a, defineData, type ClientSchema } from '@aws-amplify/backend';

const schema = a.schema({
  Post: a.model({
    title: a.string(),
    body: a.string(),
  })
  .authorization((allow) => allow.owner()),

  chat: a.conversation({
    aiModel: a.ai.model('Claude 3 Haiku'),
    systemPrompt: 'You are a helpful assistant',
    // This allows the LLM to query for your data
    tools: [
      a.ai.dataTool({
        name: 'PostQuery',
        description: 'Searches for Post records',
        model: a.ref('Post'),
        modelOperation: 'list',
      }),
    ],
  })
  .authorization((allow) => allow.owner()),
  //...
});
Let’s break down this code.
tools: [
Tools are how LLMs query for data and take actions. The Amplify AI kit takes care of describing the tool’s inputs to the LLM, invoking the tool when the LLM requests it, and sending the results back to the LLM.
a.ai.dataTool({
  name: 'PostQuery',
  description: 'Searches for Post records',
  model: a.ref('Post'),
  modelOperation: 'list',
}),
You can provide the LLM with access to data models you define in your schema. This will let the LLM query for records based on the attributes in the data model. Additionally, with owner-based authorization, the LLM will only have access to the records the user has created. You can also allow the LLM to call custom queries you define in your schema.
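For example, here is a sketch of what exposing a custom query as a tool might look like; the getWeather query, its Lambda handler, and the return shape are hypothetical names for illustration:

// In your schema definition (hypothetical custom query and handler)
getWeather: a.query()
  .arguments({ city: a.string() })
  .returns(a.customType({
    temperature: a.float(),
    conditions: a.string(),
  }))
  .handler(a.handler.function(getWeather)) // a function you would define with defineFunction
  .authorization((allow) => allow.authenticated()),

chat: a.conversation({
  aiModel: a.ai.model('Claude 3 Haiku'),
  systemPrompt: 'You are a helpful assistant',
  tools: [
    a.ai.dataTool({
      // the name and description tell the LLM when to call this tool
      name: 'getWeather',
      description: 'Gets the current weather for a given city',
      query: a.ref('getWeather'),
    }),
  ],
})
.authorization((allow) => allow.owner()),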
Serverless AI architecture
The Amplify AI stack is fully serverless. It is quick to spin up, and with an Amplify sandbox you get a local development experience backed by real cloud resources: you can iterate on your backend code and watch updates happen in a live cloud environment. Having both backend and frontend updated as you develop features means you can build functionality end-to-end, without development silos. When you are ready to deploy, push a branch to Amplify Hosting, which will deploy your backend and frontend at the same time.
- AWS AppSync API: an AI gateway providing a secure realtime connection to the client. The AppSync API acts as the hub that everything goes through: data requests from the client and from the LLM all flow through AppSync.
- Lambda Function: acts as a bridge between AppSync and Amazon Bedrock. It retrieves conversation history, invokes Bedrock’s streaming Converse API, and handles invoking tools as AppSync queries.
- DynamoDB: stores conversation and message data that are scoped to a specific application user.
One more thing: Generative UI
The other thing we wanted to do with the Amplify AI kit was to make it straightforward to build generative UI, letting the AI assistant respond not just with text but with custom UI components you define. This allows for richer experiences and the ability to build things like conversational search.
The Amplify AI kit has an <AIConversation /> React component and an accompanying useAIConversation React hook to make it easy to build customizable interfaces with your AI routes. With this component you can provide responseComponents, which are React components you define in your code that the AI assistant can respond with. Give your component a description and define its props, and the Amplify AI kit will take care of the rest.
function Chat() {
  const [
    {
      data: { messages },
      isLoading,
    },
    sendMessage,
  ] = useAIConversation('chat');

  return (
    <AIConversation
      messages={messages}
      handleSendMessage={sendMessage}
      isLoading={isLoading}
      allowAttachments
      responseComponents={{
        WeatherCard: {
          description: 'Used to display the weather of a given city to the user',
          // any React component can be used
          component: ({ city }) => {
            return <Card>{city}</Card>;
          },
          props: {
            city: {
              type: 'string',
              required: true,
            },
          },
        },
      }}
    />
  );
}
Going to production
When you are ready to go to production, you can connect your Git repository in the Amplify console. When changes are pushed to a branch, you get a full continuous deployment pipeline for your fullstack application. Once your application is live, each user’s conversation data remains available in your application.
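For example, here is a minimal sketch of fetching a user’s existing conversations so they can pick up where they left off, assuming the chat route from earlier:

// List this user's conversations (owner-scoped, stored in DynamoDB)
const { data: conversations } = await client.conversations.chat.list();

// Resume the most recent conversation, if there is one
const latest = conversations[0];
if (latest) {
  const { data: messages } = await latest.listMessages();
  console.log(messages);
}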
Clean up
You can quit the Amplify sandbox command with Ctrl+C. Afterwards, you can clean up the cloud sandbox resources with the command npx ampx sandbox delete.
Conclusion
We are excited to see what you build with the Amplify AI kit and look forward to your feedback. Each day this week we will post a new article showcasing a different feature of the Amplify AI kit. For more information on how to get started, head over to docs.amplify.aws/ai.
If you have any issues, you can reach out on our GitHub repos or ask a question in our community Discord.