Amplify Framework announces new Amazon Aurora Serverless and GraphQL Transform features for building AWS AppSync APIs
The Amplify Framework is an open-source project for building cloud-enabled applications. Today, we’re happy to announce new features for the GraphQL Transform library, which is part of the AWS Amplify command line interface (CLI). The GraphQL Transform library enables you to quickly deploy scalable AWS AppSync backends for your web and mobile applications. In this release, we’ve added features that give you more flexibility and control over your API.
In addition, you can now use an existing Amazon Aurora Serverless database as a data source for your AWS AppSync GraphQL APIs when you’re building your mobile and web applications. This enables you to use the Amplify CLI to generate a GraphQL API with an auto-generated schema and resolvers that work with an existing Aurora Serverless database.
Support for Amazon Aurora Serverless
You can now generate a GraphQL API in front of an existing Aurora Serverless database.
Let’s say that you already have an existing database and you’d like to generate a GraphQL API in front of it. You can easily do so with the amplify api add-graphql-datasource command and then select your preferred data source type. Currently, only Aurora Serverless is supported, so the following steps show how to import an Aurora Serverless MySQL database as a data source.
1. In the AWS Amplify CLI, run the following:
$ amplify api add-graphql-datasource
Using datasource: Aurora Serverless, provided by: awscloudformation
2. To import an Aurora Serverless MySQL database, you first have to choose the AWS Region where it exists. The CLI provides only the Regions where the Aurora Serverless Data API is available.
? Provide the region in which your cluster is located: (Use arrow keys)
❯ us-east-1
3. Next, choose the appropriate cluster identifier from the list that the CLI provides.
? Select the Aurora Serverless cluster that will be used as the data source for your API (Use arrow keys)
❯ animals
owners
4. Next, the CLI attempts to determine the appropriate secret to use automatically. If it isn’t able to, it presents you with a list to select from.
5. Finally, the CLI provides a list of all of the databases that are active inside the selected cluster (this could take a few seconds). Choose the database that you’d like to import.
? Select the database to use as the data source: (Use arrow keys)
❯ Animals
At this point, the command executes and creates an AWS CloudFormation template with a schema that’s defined by the database it just parsed, as well as pre-generated resolvers for basic operations.
If the schema generation conflicts with resources that already exist in your API schema, or if the generated schema is invalid, the process fails with an appropriate error message. Otherwise, the schema is output to the amplify/backend/api/YOUR-API-NAME/ directory, and the CloudFormation template is available in the amplify/backend/api/YOUR-API-NAME/stacks directory.
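To give a rough sense of what the CLI produces, suppose the imported Animals database contains a single Animals table with id and name columns (the table and column names here are assumptions for illustration, not actual CLI output). The generated schema follows a pattern along these lines:

# Hypothetical sketch of a generated schema for an assumed Animals table (id, name).
type Animals {
  id: Int!
  name: String
}

type Query {
  getAnimals(id: Int!): Animals
  listAnimals: [Animals]
}

type Mutation {
  createAnimals(createAnimalsInput: CreateAnimalsInput!): Animals
  deleteAnimals(id: Int!): Animals
}

input CreateAnimalsInput {
  id: Int!
  name: String
}

The exact type, field, and operation names depend on the tables and columns that the CLI finds in your database.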
When the schema and resolvers look ready, run amplify push to build the template.
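After the push completes, you can start running operations against the new API. Continuing the assumed Animals table from the sketch above, a list query might look like the following (the field names are illustrative):

# Hypothetical query against the generated API.
query ListAnimals {
  listAnimals {
    id
    name
  }
}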
To learn more, see the documentation.
Improved GraphQL API operations control
In the past, rules defined in the @auth directive only protected top-level fields.
With the latest release, @connection resolvers now also protect access to connected fields based on the @auth rules that are defined on the related model type. Let’s take a look at how this works.
Take note of the Post type definition in the schema.
# schema.graphql
# Conceptually we are saying protect read access to owners. This has always protected
# top level fields.
type Post @model @auth(rules: [{ allow: owner, operations: [read, create, update, delete] }]) {
  id: ID!
  title: String
  author: User @connection(name: "UserPosts")
}

type User @model {
  id: ID!
  username: String
  posts: [Post] @connection(name: "UserPosts")
  # @connection resolvers used to bypass @auth rules defined on @models.
  # @connection resolvers now take those @auth directives into account. In this case,
  # you will only see posts where the $ctx.identity.username is the same as
  # the "owner" on the Post object.
}
With this schema, the @connection field also inherits the @auth rules that are defined on the related Post type.
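To illustrate the behavior, here’s a hedged example query against this schema; the id value and selection set are illustrative. If the caller is authenticated as "jane", the posts connection returns only items whose owner field is "jane"; posts owned by other users are filtered out by the connection resolver.

# Hypothetical query; only posts owned by the caller are returned.
query GetUserWithPosts {
  getUser(id: "some-user-id") {
    username
    posts {
      items {
        id
        title
      }
    }
  }
}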
To learn more, see the documentation.
Authorization (@auth) directive improvement – field-level auth
Previously the @auth directive protected only the root-level query and mutation fields.
You can now use the @auth directive on individual fields, in addition to the object type definition. This means that you now have the ability to implement fine-grained access control across your API at both the top level and the field level.
An @auth directive that you use on an @model OBJECT augments the top-level queries and mutations. An @auth directive that you use on a FIELD_DEFINITION protects that field’s resolver by comparing the identity to the source object designated through $ctx.source.
For example, you might have the following:
type User @model {
  id: ID!
  username: String
  # Can be used to protect @connection fields.
  # This resolver will compare the $ctx.identity to the "username" attribute on the User object (via $ctx.source in the User.posts resolver).
  # In other words, we are authorizing access to posts based on information in the user object.
  posts: [Post] @connection(name: "UserPosts") @auth(rules: [{ allow: owner, ownerField: "username" }])
  # Can also be used to protect other fields
  ssn: String @auth(rules: [{ allow: owner, ownerField: "username" }])
}

# Users may create, update, delete, get, & list at the top level if they are the
# owner of this post itself.
type Post @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  title: String
  author: User @connection(name: "UserPosts")
  owner: String
}
With this schema, authorization rules are in place that allow only the owner to query the posts and ssn fields on the User type, while the id and username fields aren’t protected by any authorization rules.
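As a rough sketch of what this means for clients, consider the following query (the id value is illustrative). If the caller’s identity matches the username stored on the User object, all three fields resolve; otherwise the ssn field’s resolver returns an unauthorized error while id and username still resolve normally.

# Hypothetical query; whether ssn resolves depends on the caller’s identity.
query GetUserProfile {
  getUser(id: "some-user-id") {
    id
    username
    ssn
  }
}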
To learn more, see the documentation.
Feedback
We hope you like these new features! As always, let us know how we’re doing, and submit any requests in the Amplify Framework GitHub Repository. You can read more about AWS Amplify on the AWS Amplify website.