What is a GraphQL resolver

A GraphQL resolver is a function or method that resolves a value for a type or field within a schema. A resolver is the key architectural component that connects GraphQL fields, graph edges, queries, mutations, and subscriptions to their respective data sources and microservices.

What AWS data sources can GraphQL APIs connect to

GraphQL APIs can connect to any AWS data source using resolvers. This article shows GraphQL resolver examples for Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (Amazon RDS), and AWS Lambda. Custom resolvers extend the default GraphQL options that many services, such as Apollo Server, Express GraphQL, GraphQL Java, Hot Chocolate .NET, and AWS AppSync, a fully managed GraphQL service, provide.

How to build GraphQL resolvers with self-managed GraphQL servers

If opting for a self-hosted, open source GraphQL server, there are many options available. This article uses Apollo Server as an example for building resolvers with JavaScript (or TypeScript) to connect a self-managed, open source GraphQL server to various AWS data sources.

This is an example of the code for a query resolver:

const resolvers = {
    Query: {
        hello: () => 'Hello world!',
    },
};

This is an example of the code for a GraphQL mutation resolver:

Mutation: {
    login: async (_, { email }, { dataSources }) => {
        const user = await dataSources.userAPI.findOrCreateUser({ email });

        if (user) {
            user.token = Buffer.from(email).toString('base64');
            return user;
        }
    },
},

The login scenario above shows how GraphQL resolvers can become more complex. It also highlights how it might be necessary to reach out to other systems, call other functions, and make other calls to build the return object for the query or mutation.
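The token step in the login resolver above can be isolated into a small helper. This is an illustrative sketch only: the base64 round trip mirrors the example, but it is not secure on its own, and a production API would issue signed tokens (JWTs, for example) instead.

```javascript
// An illustrative sketch of the token step in the login resolver above. The
// base64 encoding mirrors the example but is a placeholder, not real security.
function buildLoginToken(email) {
    return Buffer.from(email).toString('base64');
}

function decodeLoginToken(token) {
    return Buffer.from(token, 'base64').toString('utf8');
}
```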

Connecting to Amazon DynamoDB with Apollo Server

For building connections to AWS data sources, it is recommended to use AWS SDKs. This is especially important for DynamoDB, as the SDK provides many features for interacting with DynamoDB tables and data. See this article on DynamoDB data modeling with GraphQL.

In the following snippet, the AWS SDK for JavaScript is used to perform several key steps: configuring the connection, instantiating the DocumentClient, and executing a put against the database.

var AWS = require("aws-sdk");

AWS.config.update({
  region: "us-west-2",
  endpoint: "http://wherever.your.location/is"
  // Security keys, tokens, etc., need to be set up and stored securely for the connection.
})

var docClient = new AWS.DynamoDB.DocumentClient()

var params = {
    TableName: "MoviesTable",
    Item:{
        "year": 2025,
        "title": "The Big New Movie"
    }
}

Mutation: {
    createMovie: async (_, args, { dataSources }) => {
        try {
            await docClient.put(params).promise()
            // Item added. put returns no attributes by default, so return the written item.
            console.log("Added item:", JSON.stringify(params.Item, null, 2))
            return params.Item
        } catch (err) {
            // Item failed; handle the error and respond with a pertinent response.
            console.error("Unable to add item. Error JSON:", JSON.stringify(err, null, 2))
            throw err
        }
    },
}

For additional information on building out access and usage patterns, see the Amazon DynamoDB Developer Guide and the Apollo documentation.

Connecting to Amazon RDS and Aurora with Apollo Server

Amazon RDS and Aurora are two AWS relational databases. They provide many options, with support for PostgreSQL, MySQL, SQL Server, and other SQL engines.

This is an example of an Apollo GraphQL resolver connecting to a PostgreSQL database using the pg client for Node.js and the AWS SDK.

const { Pool } = require('pg')
const { RDS } = require('aws-sdk')

const signerOptions = {
  credentials: {
    accessKeyId: process.env.youraccesskey,
    secretAccessKey: process.env.yoursecretaccesskey,
  },
  region: 'us-east-2',
  hostname: 'example.hostname.us-east-2.rds.amazonaws.com',
  port: 5432,
  username: 'postgres-api-account',
}

const signer = new RDS.Signer()
const getPassword = () => signer.getAuthToken(signerOptions)

const pool = new Pool({
  host: signerOptions.hostname,
  port: signerOptions.port,
  user: signerOptions.username,
  database: 'my-db',
  password: getPassword,
})

var insertQuery = 'INSERT INTO MoviesTable(title, year) VALUES($1, $2) RETURNING *'
var queryValues = ['The Big New Movie', '2025']

Mutation: {
    createMovie: async (_, args, { dataSources }) => {
        try {
            const res = await pool.query(insertQuery, queryValues)
            // Item added. RETURNING * makes the inserted row available on res.rows.
            console.log("Added item:", JSON.stringify(res.rows[0], null, 2))
            return res.rows[0]
        } catch (err) {
            // Item failed; handle the error and respond with a pertinent response.
            console.error("Unable to add item. Error JSON:", JSON.stringify(err, null, 2))
            throw err
        }
    },
}


This example demonstrates specifics of Amazon RDS and PostgreSQL. Connection pooling, whether for PostgreSQL, MySQL, SQL Server, or another engine, is an important part of the connection that must be managed.

For more information on writing queries against Amazon RDS and Aurora, see documentation on Amazon RDS Proxy; for more on PostgreSQL pooling for pg, see Pooling Documentation.

Connecting to AWS Lambda with Apollo Server

Lambda is not a data source specifically, but it could provide a bridge to almost any data source. Connecting and invoking a Lambda function follows several steps. The following example demonstrates the key parts of making a Lambda call from a resolver using the SDK. 

Parameters are built up and passed to the Lambda function and the call within the GraphQL resolver.

var AWS = require("aws-sdk");

AWS.config.update({
  region: "us-west-2",
  endpoint: "http://wherever.your.location/is"
  // Security keys, tokens, etc., need to be set up and stored securely for the connection.
});

const params = {
    FunctionName: 'my-lambda-function',
    // The payload must be a JSON string when invoking the function.
    Payload: JSON.stringify(queryValues),
};

Mutation: {
    createMovie: async (_, args, { dataSources }) => {
        try {
            const result = await new AWS.Lambda().invoke(params).promise();
            // Act on the response here to build the GraphQL return object.
            return JSON.parse(result.Payload);
        } catch (err) {
            // Act on the err here to pass the error back as an appropriate
            // GraphQL error.
            throw err;
        }
    },
}

Best practices for building resolvers with self-managed GraphQL servers

1. SDK

Use AWS SDKs for all calls to AWS resources. This will ensure a consistent, reliable, and maintained access method for GraphQL APIs and their respective data sources. 

The SDK provides a more consistent way to set up configuration; pass secrets for connections; handle errors, retries, and exponential backoff; and manage other important aspects across the code base. Using the SDK also makes it easier to settle on a particular access style, such as async/await, promises, or callbacks. There are many additional reasons to use the SDK beyond these immediate advantages.

var AWS = require("aws-sdk");

AWS.config.update({
  region: "us-west-2",
  endpoint: "http://wherever.your.location/is"
  // Security keys, tokens, etc., need to be set up and stored securely for the connection.
});

2. Naming, parameterization, and clear code

Name operations, parameters, and other passed code so that they map to their respective database queries across each layer of concern.

For example, if the GraphQL query gets movies and looks like the following example, ensure that the corresponding database query matches the naming: the table should be named movie or movies, depending on the convention. This improves readability when determining which GraphQL query belongs to which database query and table, and it prevents the layers of GraphQL queries, mutations, and entities with fields from drifting out of synchronization with the database. Furthermore, when the opportunity presents itself, consistent naming provides an easier path to automation if code generation or other technologies are used to create resolvers.

query GetMovies {
    movies {
        title
        year
    }
}
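As a sketch of that convention, a hypothetical helper can derive the table name directly from the GraphQL field name, keeping the two layers in lockstep. The one-to-one lowercase mapping and the helper names here are assumptions for illustration, not a prescribed API.

```javascript
// Hypothetical helpers: the GraphQL field name maps one-to-one onto the
// database table name, so the `movies` query resolves against a `movies` table.
function tableForField(fieldName) {
    // The lowercase, same-name convention is an assumption for illustration.
    return fieldName.toLowerCase();
}

function selectAllQuery(fieldName, columns) {
    return `SELECT ${columns.join(', ')} FROM ${tableForField(fieldName)}`;
}
```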

3. Request and response connections, pooling, and fire and forget

Determine a strategy to maintain a clear tactical understanding of what behavior to expect from the API. 

For example, a good practice is to write code against a connection method determined by the desired outcomes. If a database is set up and the intent is to use a service such as Amazon RDS Proxy to implement connection pooling, ensure that the driver code for the GraphQL resolvers matches that architectural decision. If a client connection must fire and forget, that needs to be taken into account. Otherwise, if a client must set up a connection and run multiple queries per resolver, that must be part of the design patterns used. This prevents resolvers from being written in ways that deviate from expected usage, which produces few or no errors or warnings and is nearly impossible to debug.
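The pooling decision above can be sketched as a module-level singleton, the pattern the pg example earlier in this article relies on. The factory is injected here only so the sketch stays self-contained; in a real module it would be `() => new Pool({...})` from the pg package.

```javascript
// A minimal sketch of the module-level pool pattern: the pool is created once
// and every resolver invocation reuses it. Creating a new pool per request
// would exhaust database connections under load.
let pool = null;

function getPool(createPool) {
    // `createPool` is an injected factory for illustration only.
    if (!pool) {
        pool = createPool();
    }
    return pool;
}
```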

4. Authorization

Determine the authorization strategy upfront. Define the expected results, behavior, and tooling requirements before starting to design a GraphQL API.

  async theUser(_, args, context) {
    if (!context.isAuthenticated) {
        // Handle not authenticated.
    }
    if (!context.isInRole(role)) {
        // Handle not being in role.
    }
    // Checks passed; fetch and return the user here.
  }

For example, if row- or field-level authorization of data is required in the API, that requirement must be known in order to make the correct tooling decisions. For some scenarios, Amazon Cognito might be perfect; for others, a simple authentication mechanism is enough; and for others still, something completely different might be needed. Deciding this up front ensures that the right tooling choices are made and that project restarts are minimized.
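A hypothetical sketch of where `context.isAuthenticated` and `context.isInRole` in the snippet above might come from. The user object and its `roles` array are assumptions for illustration; a real context function would validate a token and load the user first.

```javascript
// Hypothetical context builder: a real implementation would verify a token
// and load the user from a data source before building this object.
function buildContext(user) {
    return {
        isAuthenticated: Boolean(user),
        isInRole: (role) => Boolean(user) && user.roles.includes(role),
    };
}
```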

5. Queries

Ensure that only the fields being requested are queried for, and that queries return only what is asked for.

For example, if the query reads like the following, then it is only asking for field1 and field2 of theEntity to be returned.

query anExtremelyMinimalQuery {
    theEntity {
        field1
        field2
    }
}

If the database must pull data from a singular table, then the SQL query would look something like the following:

SELECT field1, field2 FROM theEntityTable

However, if a join must occur, then it might look like this:

SELECT firstEntity.field1, secondEntity.field2 
FROM firstEntity 
INNER JOIN secondEntity ON firstEntity.ID = secondEntity.ID

In these cases, the queries must be generated or written to include only the elements needed for the GraphQL query, and in a way that performs well. Not doing so can lead to performance issues.
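In Apollo Server, the requested fields are available through the resolver's fourth argument (`info`), which carries the parsed selection set. The following sketch uses a simplified selection list standing in for that structure to show how the SQL projection could be narrowed to exactly the requested fields.

```javascript
// A sketch of narrowing the SQL projection to the requested GraphQL fields.
// The selection objects mimic the shape of GraphQL AST field nodes
// ({ name: { value: 'field1' } }) so the example stays self-contained.
function columnsFromSelections(selections) {
    return selections.map((selection) => selection.name.value);
}

function buildSelect(table, selections) {
    const columns = columnsFromSelections(selections);
    return `SELECT ${columns.join(', ')} FROM ${table}`;
}
```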

How to build GraphQL resolvers with managed GraphQL servers

Unlike self-hosted open source GraphQL options such as Apollo Server, managed GraphQL solutions provide a way to shift many operational concerns and reorient efforts toward organizational use cases. With a managed GraphQL API service like AWS AppSync, the infrastructure, patching, scaling, availability, and other operational activities are managed by AWS.

AWS AppSync provides a serverless GraphQL API service with optimized resolvers to connect to AWS data sources, compute, and others. 

Connecting to Amazon DynamoDB with AWS AppSync

A schema can be used to auto-generate GraphQL resolvers, which in AWS AppSync are called unit resolvers. These provide a direct resolver connection to DynamoDB. The resolvers execute actions against the database through a request mapping template and a response mapping template written in Apache Velocity Template Language (VTL). The template takes the request as input and outputs a JSON document with instructions for the GraphQL resolver. To learn more about unit resolvers, work through the How to Configure Resolvers Developer Guide.

For more information on VTL use in AWS AppSync, see the Resolver Mapping Template Programming Guide.

If multiple operations need to be executed against one or more data sources in a single client request, the unit resolver has to change to a pipeline resolver. Pipeline resolver operations are executed in order against the selected data source(s). Furthermore, functions can be created to execute during these operations, opening up a wide range of sequential operational options during resolver execution. For further information, see the Pipeline Resolvers Tutorial.

Connecting to Amazon RDS and Aurora with AWS AppSync

AWS AppSync also provides resolver options using Apache Velocity Template Language (VTL) to connect to Aurora Serverless. For example, an insert of data, with the respective return data for the GraphQL response, would look like the following. This code would be added to the request mapping template.

#set($id=$utils.autoId())
{
    "version": "2018-05-29",
    "statements": [
        "insert into Pets VALUES ('$id', '$ctx.args.input.type', $ctx.args.input.price)",
        "select * from Pets WHERE id = '$id'"
    ]
}

Then, for the response mapping template, the following code would complete the resolver:

$utils.toJson($utils.rds.toJsonObject($ctx.result)[1][0])

For an example of connecting resolvers to Aurora Serverless options, see this Aurora Serverless Tutorial.
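The indexing in that one-line response template can be pictured in plain JavaScript: `$utils.rds.toJsonObject($ctx.result)` yields one array of rows per SQL statement, so `[1][0]` selects the first row returned by the second statement (the select that follows the insert). A minimal sketch of that selection, with a made-up result shape for illustration:

```javascript
// resultSets mirrors what toJsonObject produces: one array of row objects
// per SQL statement in the request mapping template.
function firstRowOfSecondStatement(resultSets) {
    return resultSets[1][0];
}
```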

Connecting to AWS Lambda with AWS AppSync

Lambda resolvers can fulfill nearly any remaining need for your GraphQL API. These resolvers provide a direct way for the AWS AppSync service to call Lambda functions, which can in turn connect to standalone RDS database servers, Amazon Neptune, Amazon Kinesis, or any number of other sources of processing or data storage.

This Lambda Resolvers Tutorial demonstrates the power of building GraphQL resolvers for AWS AppSync with Lambda functions. A switch statement in the Lambda handles one of several types of actions for each of the AWS AppSync resolvers.

exports.handler = (event, context, callback) => {
    // Other code to handle invoke and getting/retrieving/adding posts.
    switch(event.field) {
        case "getPost":
            var id = event.arguments.id;
            callback(null, posts[id]);
            break;
        case "allPosts":
            var values = [];
            for(var d in posts){
                values.push(posts[d]);
            }
            callback(null, values);
            break;
        case "addPost":
            callback(null, event.arguments);
            break;
        case "addPostErrorWithData":
            var id = event.arguments.id;
            var result = posts[id];
            result.errorMessage = 'Error with the mutation, data has changed';
            result.errorType = 'MUTATION_ERROR';
            callback(null, result);
            break;
        case "relatedPosts":
            var id = event.source.id;
            callback(null, relatedPosts[id]);
            break;
        default:
            callback("Unknown field, unable to resolve" + event.field, null);
            break;
    }
    // Other code to handle getting/retrieving/adding posts, errors, etc.    
};

Once that Lambda function is available, the resolver VTL makes minimal calls to pass in the event, context, and related parameters. The request mapping template would look like the following:

{
    "version": "2017-02-28",
    "operation": "Invoke",
    "payload": {
        "field": "getPost",
        "arguments":  $utils.toJson($context.arguments)
    }
}

And the response mapping template would look like this:

$utils.toJson($context.result)

Best practices for building resolvers with managed GraphQL AppSync

1. Resolver level caching

Turning on per-resolver caching provides resolver-specific settings for the arguments, source, and identity maps on which to base cache hits. This adds another level of caching beyond full request caching, letting each resolver cache based on its specific data requests.

2. HTTP resolvers as HTTP actions

A popular capability to pair with GraphQL APIs is issuing an HTTP request and building the response from that call. An HTTP resolver can be used to accomplish this based on API needs, providing a means to connect to other APIs over mechanisms such as GraphQL, REST, or other options.

3. Avoiding VTL with Lambda resolvers

If a particular language or functionality is needed for an API, Lambda resolvers introduce the option to use different supported languages to connect to any type of data source, such as Amazon RDS for PostgreSQL or Aurora, and work with those results as needed. For more information, see Introducing Direct Lambda Resolvers: AWS AppSync GraphQL APIs without VTL.

4. Resolver auto-generation with Amplify

To try AWS AppSync with little coding, AWS Amplify Studio and the AWS Amplify CLI are good options, as they auto-generate GraphQL APIs, including all VTL resolver code, based on a schema. Amplify Studio also provides a way to build out a schema graphically, draw relationships with immediate schema updates, and deploy the API with fully functional resolvers built.

5. Custom authentication with Lambda resolvers

There are many authentication options with AWS AppSync, including Amazon Cognito, OIDC, IAM, and API keys. Moreover, if a custom authentication option is needed, Lambda resolvers provide a way to connect and authenticate, or authorize, data consumers against the API.
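As a sketch of the custom path, an AWS AppSync Lambda authorizer receives the caller's token as `event.authorizationToken` and returns an object whose `isAuthorized` flag allows or denies the request. The string comparison below is a placeholder for real token validation, and the `resolverContext` contents are illustrative only.

```javascript
// A minimal AppSync Lambda authorizer sketch: the token check is a
// placeholder; real validation would verify a signed token or look up a key.
function authorize(event) {
    return {
        isAuthorized: event.authorizationToken === 'custom-authorized-token',
        resolverContext: { caller: 'example' }, // optional data passed on to resolvers
    };
}

// The Lambda entry point AppSync would invoke:
const handler = async (event) => authorize(event);
```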

Which GraphQL resolver development option to choose

Both self-hosted and managed GraphQL options provide extensive tooling to build out APIs. The choice should be guided by desired outcome, efficiency, and long-term impact.

The following comparison contrasts self-hosted, open source GraphQL servers with the AWS AppSync managed GraphQL service, by category.

Configuration

Self-hosted: Many of the secrets for connections, database configuration, and other criteria must be managed. This requires a secrets vault or another system for keeping secrets and configurations managed across environments.

AWS AppSync: Secrets are managed across environments and require minimal interaction from the developer. Data sources in an AWS AppSync API are all seamlessly integrated, and the developer can focus more on the model and organizational use case.

Data

Self-hosted: Full control over the exact response and request of data inbound and outbound from the database. However, each function can become a costly development effort.

AWS AppSync: Control of the request and response cycles with various tooling, and immediacy of generated query calls into the database. This provides a boost toward focusing on organizational use cases.

Developer

Self-hosted: Can provide the most extensive flexibility around implementing code, databases, and models for GraphQL APIs.

AWS AppSync: Provides a faster on-ramp for the deployment of GraphQL APIs and streamlines the process. However, it can be limiting if more elaborate and complex coding is needed.

Logging

Self-hosted: Must determine exactly what is needed, then build out the solution and connect it to the chosen GraphQL server.

AWS AppSync: Uses CloudWatch to provide an easy way to turn on logging. Tracing can also be turned on with AWS X-Ray to bolster insight and information in the system.

Cost

Self-hosted: Introduces a range of additional costs, including servers, functions, individual resources, possibly additional staff, and others.

AWS AppSync: Provides a single line item based on usage only.

Looking for a fully managed GraphQL service?

Explore AWS AppSync

AWS AppSync is an enterprise-level, fully managed serverless GraphQL service with real-time data synchronization and offline programming features. AppSync makes it easy to build data-driven mobile and web applications by securely handling application data management tasks such as real-time and offline data access, data synchronization, and data manipulation across multiple data sources.