Front-End Web & Mobile
Build a GraphQL API for your Amazon Aurora MySQL database using AWS AppSync and the RDS Data API
Today, we’re excited to announce that AWS AppSync now makes it easier for customers to create GraphQL APIs based on the tables of their existing MySQL and PostgreSQL databases running on Amazon Aurora clusters configured with the Data API. When building APIs for existing databases, developers typically have to build an interface that accurately represents their tables. This is a time-consuming and error-prone process. AppSync solves this problem with its new introspection capabilities, which allow you to discover your databases and generate matching GraphQL types. In the AppSync console, you can use this new feature to generate a ready-to-use GraphQL API from your database in just a few steps, without writing any code. In addition, we are introducing improvements to JavaScript resolvers for Amazon Relational Database Service (Amazon RDS), with a new SQL tagged template and SQL helper functions that make it easier to write SQL statements in your resolvers.
In this blog, I’ll show you how to start using this feature in the AWS console to instantly build APIs, and how to use the new RDS utilities in your JavaScript resolvers.
Note: the feature in this blog uses an Amazon RDS database that supports the RDS Data API. To connect to any MySQL or PostgreSQL database without Data API support, review Create a GraphQL API for any existing MySQL and PostgreSQL database.
In the AppSync console
You can use AppSync’s new introspection functionality on existing databases running in an Amazon Aurora cluster configured with the Data API. Let’s say you have a MySQL database with the following tables defined that you want to provide an API for.
CREATE TABLE conversations (
  id INT NOT NULL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE messages (
  id VARCHAR(36) PRIMARY KEY,
  conversation_id INT NOT NULL,
  sub VARCHAR(36) NOT NULL,
  body TEXT NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (conversation_id) REFERENCES conversations(id)
);

CREATE TABLE conversation_participants (
  conversation_id INT NOT NULL,
  sub VARCHAR(36) NOT NULL,
  last_read_at DATETIME,
  PRIMARY KEY (conversation_id, sub),
  FOREIGN KEY (conversation_id) REFERENCES conversations(id)
);
Head to the AWS AppSync console, click on “Create API”, and under GraphQL API, choose “Start with an Amazon Aurora cluster”. Click Next, name your API, then on the next screen, enter your database information to start the introspection.
You must have the Data API configured on your Aurora cluster, and your database user’s credentials must be stored in AWS Secrets Manager. The user needs permissions to read your database, schema, and table configuration. See the Data API documentation to learn more about configuration. You need permission to execute GetSecretValue on your Secrets Manager secret and ExecuteStatement on the cluster, and you also need to grant AppSync permission to access your resources. The console can create a role for you, or you can provide your own. Click “Import” to start the introspection process. Once done, the discovered tables are shown as below.
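If the import fails, a quick way to check that the cluster, the Data API, and the secret are configured correctly is to call the Data API directly. The following is a minimal sketch using the AWS SDK for JavaScript v3; the region, ARNs, and database name are placeholders for your own values.

// Sketch: verify Data API access outside of AppSync (placeholders throughout).
import { RDSDataClient, ExecuteStatementCommand } from '@aws-sdk/client-rds-data';

const client = new RDSDataClient({ region: 'us-east-1' });

const { records } = await client.send(
  new ExecuteStatementCommand({
    resourceArn: 'arn:aws:rds:us-east-1:111122223333:cluster:my-aurora-cluster',
    secretArn: 'arn:aws:secretsmanager:us-east-1:111122223333:secret:my-db-secret',
    database: 'mydb',
    sql: 'SHOW TABLES',
  })
);

console.log(records); // the same tables the AppSync introspection will discover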
By default, the type names are the same as the table names, but you can customize them. You can also choose to exclude tables from the generated schema. Change the type names as follows:
Conversation_participants to Participant
Conversations to Conversation
Messages to Message
Click Next and choose “Create queries, mutations, and subscriptions for all models”. Click Next, review your changes, and click “Create API”. The console starts creating your schema and attaching resolvers to your fields. That’s it. You have a ready-to-use API! You can now interact with your database from the Queries editor or start building a client application that connects to your GraphQL API. Head over to the Queries editor by selecting Queries in the left-side menu. First, let’s create a new Conversation.
mutation CreateConvo {
  createConversation(input: {id: 1, name: "stand up meeting"}) {
    created_at
    id
    name
  }
}
This adds a conversation to the database. Next let’s add a message.
mutation CreateMsg {
  createMessage(input: {body: "Hello world! Things are looking good.", conversation_id: 1, id: "new-message", sub: "johndoe"}) {
    body
    conversation_id
    created_at
    id
    sub
  }
}
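These operations aren’t limited to the console: any GraphQL or HTTP client can call the generated API. Below is a minimal sketch that sends the same mutation with fetch, assuming the default API key authorization mode; the endpoint and key are placeholders for the values shown on your API’s Settings page.

// Sketch: call the generated API from a client app (placeholder endpoint and key).
const endpoint = 'https://example123.appsync-api.us-east-1.amazonaws.com/graphql';
const apiKey = 'da2-examplekey';

const CREATE_MESSAGE = /* GraphQL */ `
  mutation CreateMsg($input: CreateMessageInput!) {
    createMessage(input: $input) {
      id
      conversation_id
      body
      sub
      created_at
    }
  }
`;

async function sendMessage(input) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'x-api-key': apiKey },
    body: JSON.stringify({ query: CREATE_MESSAGE, variables: { input } }),
  });
  return res.json();
}

// Example usage (id is still required by the generated input at this point):
sendMessage({ id: 'client-message', conversation_id: 1, sub: 'johndoe', body: 'Hello from a client!' })
  .then((result) => console.log(JSON.stringify(result, null, 2)));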
Note that you can use AppSync’s real-time functionality right away. For example, to listen for new messages in a conversation, open the Queries editor in a new tab or window, and enter the following subscription.
subscription OnCreate {
  onCreateMessage(conversation_id: 1) {
    body
    conversation_id
    created_at
    id
    sub
  }
}
In your original Queries tab, send another message.
mutation CreateMsg {
  createMessage(input: {body: "Hello again. Nothing to report", conversation_id: 1, id: "2nd-message", sub: "johndoe"}) {
    body
    conversation_id
    created_at
    id
    sub
  }
}
You’ll see the subscription triggered in the second Queries editor window.
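The same real-time behavior is available to client applications over AppSync’s WebSocket endpoint. Here is a minimal sketch that assumes the Amplify v6 JavaScript libraries and API key authorization; the endpoint, region, and key are placeholders for your API’s values.

// Sketch: subscribe to new messages from a client app (assumes Amplify v6; placeholder config).
import { Amplify } from 'aws-amplify';
import { generateClient } from 'aws-amplify/api';

Amplify.configure({
  API: {
    GraphQL: {
      endpoint: 'https://example123.appsync-api.us-east-1.amazonaws.com/graphql',
      region: 'us-east-1',
      defaultAuthMode: 'apiKey',
      apiKey: 'da2-examplekey',
    },
  },
});

const client = generateClient();

const subscription = client
  .graphql({
    query: /* GraphQL */ `
      subscription OnCreate {
        onCreateMessage(conversation_id: 1) {
          id
          conversation_id
          body
          sub
          created_at
        }
      }
    `,
  })
  .subscribe({
    next: ({ data }) => console.log('New message:', data.onCreateMessage),
    error: (error) => console.error(error),
  });

// Call subscription.unsubscribe() when you no longer need updates.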
You can make edits to the auto-generated resolvers and customize them as needed. For example, I want the API to auto-generate the ID of every new message created. Let’s update the createMessage resolver.
import { util } from '@aws-appsync/utils';
import { insert, select, createMySQLStatement, toJsonObject } from '@aws-appsync/utils/rds';

export function request(ctx) {
  const { input: values } = ctx.args;
  values.id = util.autoUlid(); // <<< set the ULID
  const doInsert = insert({ table: 'messages', values });
  const doSelect = select({
    table: 'messages',
    columns: '*',
    where: { id: { eq: values.id } },
    limit: 1,
  });
  return createMySQLStatement(doInsert, doSelect);
}

export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    return util.appendError(error.message, error.type, result);
  }
  return toJsonObject(result)[1][0];
}
In the code above, I send two requests to the database. The first creates a new message with a generated ULID (Universally Unique Lexicographically Sortable Identifier) as its id. The second fetches the inserted row so I can return the data as stored in the database. This is a useful pattern with MySQL for retrieving a freshly created row (including auto-generated columns and values). In the response function, I grab the second object (index 1) from the result, which corresponds to the result of the second statement I sent (the doSelect statement).
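To make the indexing concrete, here is the result shape I’m assuming toJsonObject(result) produces for the two statements (values taken from the earlier example):

// Illustrative only: assumed shape of toJsonObject(result) for doInsert + doSelect.
const res = [
  [], // index 0: rows returned by the INSERT statement (none)
  [
    {
      id: '01HF5FZNM3M9PEYC1234567890',
      conversation_id: 1,
      sub: 'johndoe',
      body: 'Hello world! Things are looking good.',
      created_at: '2023-11-13 23:24:06',
    },
  ], // index 1: rows returned by the SELECT statement
];
const created = res[1][0]; // the freshly inserted row returned to the caller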
Next, in the schema, update the CreateMessageInput input to remove the id field.
input CreateMessageInput {
  # id: String! # << comment out or remove
  conversation_id: Int!
  sub: String!
  body: String!
  created_at: String
}
Now send a new message.
mutation CreateMsg {
  createMessage(input: {body: "up and up", conversation_id: 1, sub: "john"}) {
    body
    conversation_id
    created_at
    id
    sub
  }
}
You get the response with the generated ID.
{
  "data": {
    "createMessage": {
      "body": "up and up",
      "conversation_id": 1,
      "created_at": "2023-11-13 23:24:06",
      "id": "01HF5FZNM3M9PEYC1234567890",
      "sub": "john"
    }
  }
}
Now that you’ve added a few messages, you can try selecting all your conversation messages. For example, here I filter by the created_at timestamp and the sub value.
query ListMsgs {
  listMessages(filter: {conversation_id: {eq: 1}, created_at: {ge: "2023-11-13", lt: "2023-11-13 22:23"}, sub: {ne: "john"}}) {
    items {
      id
      created_at
      body
      sub
    }
  }
}
Using the new utility functions for RDS
You can interact with your database tables using the new utilities for RDS. Let’s modify the Conversation type to add a participants field. This field returns the IDs of the most recently active conversation participants, based on the most recent message read (last_read_at).
type Conversation {
  id: Int!
  name: String!
  created_at: String
  participants: [String]
}
Next, let’s update the getConversation resolver to write a custom select statement using the select utility provided by @aws-appsync/utils/rds:
import { util } from '@aws-appsync/utils';
import {
  select,
  createMySQLStatement,
  toJsonObject,
} from '@aws-appsync/utils/rds';

export function request(ctx) {
  const { id } = ctx.args;
  const doSelect = select({
    table: 'conversations',
    columns: '*',
    where: { id: { eq: id } },
    limit: 1,
  });
  const doGetLatest = select({
    table: 'conversation_participants',
    columns: ['sub'],
    where: { conversation_id: { eq: id } },
    orderBy: [{ column: 'last_read_at', dir: 'DESC' }],
    limit: 10,
  });
  return createMySQLStatement(doSelect, doGetLatest);
}

export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    return util.appendError(error.message, error.type, result);
  }
  const res = toJsonObject(result);
  const convo = res[0][0];
  if (convo) {
    convo.participants = (res[1] ?? []).map((p) => p.sub);
  }
  return convo;
}
Since the resolver allows me to send up to two statements to the database, I can retrieve the conversation and then its participants in a single execution. The createMySQLStatement function creates a request that properly escapes and quotes your MySQL statements. Let’s try out the changes. In the Queries editor, run the query.
query get {
  getConversation(id: 1) {
    id
    participants
  }
}
This returns a result like the following.
{
  "data": {
    "getConversation": {
      "id": 1,
      "participants": [
        "John",
        "Sarah"
      ]
    }
  }
}
Writing custom SQL statements
You can write your own SQL statements using the new SQL tagged template. The tagged template allows you to write a static SQL statement that accepts dynamic values at run time through template expressions. Let’s add a conversation summary query to the schema and a new Summary type.
type Query {
  getConversationSummary(id: Int!, since: AWSDate!): Summary
}

type Summary {
  id: Int!
  total_messages: Int
  total_words: Int
  total_participants: Int
}
Then attach a resolver to getConversationSummary:
import {
  sql,
  createMySQLStatement,
  toJsonObject,
  typeHint,
} from '@aws-appsync/utils/rds';

export function request(ctx) {
  const query = sql`
    SELECT
      c.id AS id,
      COUNT(DISTINCT m.id) AS total_messages,
      COUNT(DISTINCT cp.sub) AS total_participants,
      SUM(LENGTH(m.body) - LENGTH(REPLACE(m.body, ' ', '')) + 1) AS total_words
    FROM
      conversations c
    LEFT JOIN
      messages m ON c.id = m.conversation_id
    LEFT JOIN
      conversation_participants cp ON c.id = cp.conversation_id
    WHERE
      c.id = ${ctx.args.id}
      AND m.created_at >= ${typeHint.DATE(ctx.args.since)}
    GROUP BY
      c.id, c.name;
  `;
  return createMySQLStatement(query);
}

export function response(ctx) {
  return toJsonObject(ctx.result)[0][0];
}
Here I use the sql tagged template to write a SQL statement. The SQL tagged template lets you write a static statement that only accepts dynamic values via expressions. Any value passed via an expression is automatically sent to the database engine via placeholders. I also use a type hint to indicate that the since argument should be treated as a DATE type by the database engine.
Save the changes, and then run the query:
query get {
  getConversationSummary(id: 1, since: "2023-01-01") {
    id
    total_messages
    total_participants
    total_words
  }
}
The result looks like:
{
  "data": {
    "getConversationSummary": {
      "id": 1,
      "total_messages": 9,
      "total_participants": 2,
      "total_words": 66
    }
  }
}
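The DATE hint used in the resolver above is one of several type hints exported by @aws-appsync/utils/rds (others include TIMESTAMP, TIME, DECIMAL, JSON, and UUID). As a short illustration, here is a sketch of a request handler that applies two of them; the scores table and its columns are hypothetical.

// Sketch: additional type hints (hypothetical `scores` table, for illustration only).
import { sql, createMySQLStatement, toJsonObject, typeHint } from '@aws-appsync/utils/rds';

export function request(ctx) {
  const { since, maxScore } = ctx.args;
  return createMySQLStatement(sql`
    SELECT * FROM scores
    WHERE created_at >= ${typeHint.TIMESTAMP(since)}
      AND score <= ${typeHint.DECIMAL(maxScore)}
  `);
}

export function response(ctx) {
  return toJsonObject(ctx.result)[0];
}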
Conclusion
I went through the process of creating an AppSync GraphQL API from an existing MySQL database running in an Aurora cluster. While I focused on MySQL in this post, you can do the same with your PostgreSQL databases. To get started with your own database, visit the AppSync documentation to find out more about RDS Introspection with the Data API. To learn more about the new utilities in JavaScript resolvers for RDS, see the JavaScript resolver reference. For a guided introduction, see the tutorial for Aurora Postgres with Data API. To start writing your own JavaScript resolvers, download or update to the latest version of the @aws-appsync/utils package.