AWS Database Blog

Exploring Amazon DynamoDB SDK clients

When working with Amazon DynamoDB, developers have the option to choose between a low-level client and a high-level client in most of the AWS SDKs offered. Understanding the differences between these client types is crucial for effectively interacting with DynamoDB. In this post, we explore the characteristics, use cases, and benefits of both low-level and high-level clients.

DynamoDB-JSON compared to native JSON

To grasp the intricacies of DynamoDB SDK clients, it’s essential to delve into the underlying foundation of how DynamoDB stores data in DynamoDB-JSON format. DynamoDB-JSON is a specialized format used by DynamoDB to store data in a structured manner. It extends the JSON format by introducing data type annotations to each attribute value.

The following code is an example of standard JSON:

{"mykey":"my string value"}

Compare this to the equivalent DynamoDB-JSON:

{"mykey":{"S":"my string value"}}

In this case, the attribute "mykey" is annotated with the data type "S" to indicate that its value is a string.

DynamoDB uses specific data type annotations for different types of attribute values. Here, "S" represents a string type. Other common annotations include "N" for number, "B" for binary data, "BOOL" for Boolean, "L" for List, and "M" for a nested JSON object or Map. For a full list of data types, refer to Data type descriptors.
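To make these annotations more concrete, the following hypothetical item combines several data types, written here as the Python dictionary a low-level client would send; the attribute names and values are invented for illustration:

# A hypothetical item in DynamoDB-JSON form, as a low-level client would send it.
# Every value is wrapped in a data type annotation ("S", "N", "BOOL", "L", "M", and so on).
item = {
    "mykey": {"S": "my string value"},                 # string
    "view_count": {"N": "42"},                         # numbers travel as strings
    "is_active": {"BOOL": True},                       # Boolean
    "tags": {"L": [{"S": "alpha"}, {"S": "beta"}]},    # list of annotated values
    "profile": {"M": {"city": {"S": "Seattle"}}},      # nested map
}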

The low-level client

The low-level client is a direct interface to the DynamoDB API. This enables developers to interact with DynamoDB at a granular level. It requires explicit specification of request parameters and offers fine-grained control over data manipulation and API operations.

The low-level client relies on data type annotations (described previously), also known as tokens, to convey information to DynamoDB about how to interpret each attribute. These annotations serve as instructions to DynamoDB regarding the data type associated with each attribute value. When sending data to DynamoDB or retrieving data from it, the API protocol uses these tokens to ensure accurate and consistent interpretation of the attribute values.

Each available AWS SDK provides a low-level client for DynamoDB, which is often referred to as the default client.

Low-level clients have the following key characteristics:

  • Provides fine-grained control over DynamoDB API operations
  • Suitable for advanced use cases requiring low-level customization
  • Offers flexibility for implementing complex data access patterns
  • Provides a direct representation of the DynamoDB API

In addition to providing fine-grained control over DynamoDB API operations, the low-level client offers access to the complete range of DynamoDB API functionality. This includes advanced features such as ExportTableToPointInTime and ImportTable, which allow for seamless data backups and migration between DynamoDB tables, as well as the PartiQL APIs, which let developers use a SQL-compatible query language to interact with DynamoDB items.
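For example, here is a minimal sketch of calling the PartiQL ExecuteStatement operation through the Boto3 low-level client; the table name, key attribute, and value are hypothetical:

import boto3

# The low-level client exposes the PartiQL API via execute_statement
dynamodb_client = boto3.client('dynamodb', region_name='us-west-2')

# Parameters are supplied in DynamoDB-JSON form
response = dynamodb_client.execute_statement(
    Statement='SELECT * FROM "my-table" WHERE user_id = ?',
    Parameters=[{'S': '12345'}]
)

for item in response['Items']:
    print(item)  # items are returned in DynamoDB-JSON format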

These advanced functionalities provided by the low-level client offer powerful capabilities for data management, backup, and migration within DynamoDB. While the high-level SDKs provide convenience and abstraction for most common use cases, the low-level client is essential when using these specialized features.

The high-level client

On the other hand, the high-level client provides a more abstracted and simplified interface for interacting with DynamoDB. It encapsulates many low-level details and simplifies common tasks, which reduces the amount of code required to work with DynamoDB.

The high-level client offers a more intuitive and developer-friendly approach by automatically handling low-level API operations and data conversions. It abstracts away the complexities of data type annotations and other granular details. This allows developers to focus on their application logic rather than low-level implementation details.

The high-level client often provides a more declarative and object-oriented programming model, making it easier to work with DynamoDB tables, items, and queries. It offers features such as automatic pagination, object mapping, and simplified query and scan operations. For most common use cases, the high-level client is recommended: it reduces boilerplate code, increases developer productivity, and provides automatic object mapping, which lets you map your application objects directly to DynamoDB items and makes data manipulation more natural.
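As one concrete illustration, consider bulk writes with the Boto3 resource: its batch_writer context manager buffers individual put requests into batch calls and resends unprocessed items for you. The following is a minimal sketch; the table name and item attributes are hypothetical:

import boto3

dynamodb_resource = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamodb_resource.Table('my-table')  # hypothetical table

# batch_writer groups puts into BatchWriteItem calls and retries unprocessed
# items behind the scenes; items are written using native Python types.
with table.batch_writer() as batch:
    for i in range(5):
        batch.put_item(Item={
            'user_id': f'user-{i}',
            'first_name': 'Terry',
            'age': 48,
        })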

High-level clients have the following key attributes:

  • Simplifies the development process by abstracting away low-level details.
  • Ideal for most common use cases and general application development.
  • Reduces boilerplate code and enhances developer productivity.
  • Automates tasks such as pagination and error handling.

Note:

In Boto3, resource instances are not thread safe and shouldn’t be shared across threads or processes. These special classes contain additional metadata that can’t be shared. It’s recommended to create a new resource for each thread or process.
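The following is a minimal sketch of that recommendation, with each worker thread creating its own session and resource; the table name and key are hypothetical:

import threading

import boto3

def worker(user_id):
    # Create a separate session and resource per thread instead of sharing one
    session = boto3.session.Session()
    dynamodb_resource = session.resource('dynamodb', region_name='us-west-2')
    table = dynamodb_resource.Table('my-table')  # hypothetical table
    response = table.get_item(Key={'user_id': user_id})
    print(response.get('Item'))

threads = [threading.Thread(target=worker, args=(str(i),)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()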

Examples of low-level compared to high-level clients

To illustrate the distinctions between low-level and high-level clients, we present examples using two popular AWS SDKs: the AWS SDK for JavaScript (Node.js) and Boto3 for Python. By examining code snippets in these languages, we showcase how the choice of client impacts the complexity and implementation details.

Let’s consider a scenario in which we want to retrieve an item from a DynamoDB table with the primary key user_id.

First, we explore the code using the low-level client approach in both Node.js and Python:

The following code is the Node.js low-level client example:

const { DynamoDBClient, GetItemCommand } = require('@aws-sdk/client-dynamodb');

const dynamoDBClient = new DynamoDBClient({ region: "us-west-2" });

const params = {
  TableName: 'my-table',
  Key: {
    user_id: { 'S': '12345' },
  },
};

const command = new GetItemCommand(params);

dynamoDBClient.send(command)
  .then((data) => {
    console.log(data.Item);
  })
  .catch((err) => {
    console.error(err);
  });

Output:

The following is the output. Here we can see the DynamoDB-JSON format, which includes the data type annotations:

{
    "user_id": {
        "S": "12345"
    },
    "last_name": {
        "S": "Whitlock"
    },
    "first_name": {
        "S": "Terry"
    },
    "phone_number": {
        "S": "555-0100"
    },
    "address": {
        "M": {
            "address_line_1": {
                "S": "123 Any Stree"
            },
            "country": {
                "S": "USA"
            },
            "address_line_2": {
                "S": "Any Town"
            }
        }
    },
    "age": {
        "N": "48"
    }
}

The following code is the Python Boto3 low-level client example:

import boto3

dynamodb_client = boto3.client('dynamodb', region_name='us-west-2')

response = dynamodb_client.get_item(
    TableName='my-table',
    Key={
        'user_id': {'S': '12345'}
    }
)

item = response['Item']
print(item)

Output:

The following is the output, in the same DynamoDB-JSON format seen previously with the Node.js client:

{
    'user_id': {
        'S': '12345'
    },
    'last_name': {
        'S': 'Whitlock'
    },
    'first_name': {
        'S': 'Terry'
    },
    'phone_number': {
        'S': '555-0100'
    },
    'address': {
        'M': {
            'address_line_1': {
                'S': '123 Any Street'
            },
            'country': {
                'S': 'USA'
            },
            'address_line_2': {
                'S': 'Any Town'
            }
        }
    },
    'age': {
        'N': '48'
    }
}

In these examples, the low-level clients (the Node.js DynamoDBClient and the Boto3 client) are used to directly interact with the DynamoDB API. Developers must explicitly specify the parameters, including the table name and the key to retrieve the item.

Let’s examine the same scenario using the high-level clients.

The following code is the Node.js high-level client example:

const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, GetCommand } = require("@aws-sdk/lib-dynamodb");

const client = new DynamoDBClient({ region: "us-west-2"});
const docClient = DynamoDBDocumentClient.from(client);

const params = {
    TableName: "my-table",
    Key: {
        'user_id': '12345'
    }
};

const command = new GetCommand(params);

docClient.send(command)
  .then((data) => {
    console.log(data.Item);
  })
  .catch((err) => {
    console.error(err);
  });

Output:

The following is the output, which this time is returned in native JSON format and does not include the data type annotations:

{
  user_id: '12345',
  last_name: 'Whitlock',
  first_name: 'Terry',
  phone_number: '555-0100',
  address: {
    address_line_1: '123 Any Street',
    country: 'USA',
    address_line_2: 'Any Town'
  },
  age: 48
}

The following code is the Python Boto3 high-level client example:

import boto3

dynamodb_resource = boto3.resource('dynamodb', region_name='us-west-2')

table = dynamodb_resource.Table('my-table')

response = table.get_item(
    Key={
        'user_id': '12345'
    }
)

item = response['Item']
print(item)

Output:

The output here once again shows native JSON without the data type annotations:

{
    'user_id': '12345', 
    'last_name': 'Whitlock', 
    'first_name': 'Terry', 
    'phone_number': '555-0100', 
    'address': {
        'address_line_1': '123 Any Street',
        'country': 'USA', 
        'address_line_2': 'Any Town'
    },
    'age': Decimal('48')
}

In these examples, the high-level clients (DocumentClient in Node.js and boto3.resource in Python) offer a more abstracted and simplified interface. The code is cleaner and more readable because the clients handle many low-level details, such as data type annotations, implicitly.

We can clearly observe the difference in complexity and the level of abstraction provided by comparing the low-level and high-level client examples. The high-level clients abstract away the nuances of the underlying DynamoDB API. This allows developers to work with DynamoDB in a more intuitive and streamlined manner.
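One practical detail visible in the outputs above: the Boto3 resource returns DynamoDB numbers as Python Decimal objects (the age attribute, for example). If your application prefers built-in numeric types, you can convert them after retrieval. The following is a minimal sketch, assuming the response from the earlier table.get_item call:

from decimal import Decimal

def replace_decimals(value):
    # Recursively convert Decimal values returned by the Boto3 resource
    # into built-in int or float types.
    if isinstance(value, Decimal):
        return int(value) if value % 1 == 0 else float(value)
    if isinstance(value, dict):
        return {key: replace_decimals(val) for key, val in value.items()}
    if isinstance(value, list):
        return [replace_decimals(val) for val in value]
    return value

item = replace_decimals(response['Item'])  # response from table.get_item above
print(item['age'])  # 48 as a plain int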

Converting between JSON formats

When interacting with DynamoDB, you may need to convert your JSON objects to DynamoDB-JSON format and vice versa. This is where the marshall and unmarshall functions from the utility packages become useful.

The marshall function is used to convert a JSON object into the DynamoDB-JSON attribute value format. It takes in a plain JSON object and returns a marshalled representation of that object so that the attribute values are transformed into the appropriate DynamoDB data types. This is particularly useful when preparing data to be stored in DynamoDB because it makes sure that the data adheres to the expected format.

On the other hand, the unmarshall function is used to convert a DynamoDB-JSON attribute value format back into a plain JSON object. It takes in the DynamoDB-JSON response object, typically retrieved from DynamoDB operations, and transforms the attribute values into their corresponding JSON data types. This allows you to work with the data in a more familiar and convenient way within your code.

While the Node.js functions are called marshall and unmarshall, the Boto3 library for Python provides similar functionality through the serialize and deserialize operations. These operations perform the same tasks as their Node.js counterparts, just under different names.

The serialize operation converts a Python object into the DynamoDB attribute value format. This prepares it for storage in DynamoDB.

The deserialize operation converts the DynamoDB attribute value format back into a Python object. This enables seamless processing and utilization of the retrieved data in Python applications.

Node.js example

The following code is a Node.js example (package: @aws-sdk/util-dynamodb):

const { DynamoDBClient, PutItemCommand, GetItemCommand } = require('@aws-sdk/client-dynamodb');
const { marshall, unmarshall } = require('@aws-sdk/util-dynamodb');

const REGION = 'us-west-2';

const dynamoDBClient = new DynamoDBClient({ region: REGION });

// Marshall (Serialize) Example
const itemToStore = {
  user_id: '12345',
  first_name: 'Terry',
  age: 48,
};

const marshalledItem = marshall(itemToStore);

const putItemParams = {
  TableName: 'my-table',
  Item: marshalledItem,
};

dynamoDBClient.send(new PutItemCommand(putItemParams))
  .then(() => {
    console.log('Item stored successfully.');
  })
  .catch((err) => {
    console.error('Error storing item:', err);
  });

// Unmarshall (Deserialize) Example
const getItemParams = {
  TableName: 'my-table',
  Key: {
    user_id: { S: '12345' },
  },
};

dynamoDBClient.send(new GetItemCommand(getItemParams))
  .then((data) => {
    const retrievedItem = unmarshall(data.Item);
    console.log(retrievedItem);
  })
  .catch((err) => {
    console.error(err);
  });

In the previous example, we demonstrate both marshalling (serialization) and unmarshalling (deserialization) using the AWS SDK V3 for JavaScript.

Marshalling example overview

Marshalling consists of the following steps:

  1. Have a JavaScript object itemToStore represent an item to be stored in DynamoDB.
  2. Use the marshall function to convert this object into the DynamoDB attribute value format (marshalledItem).
  3. Pass the marshalled item to a PutItemCommand and send it to store the item in DynamoDB.

Unmarshalling example overview

Unmarshalling consists of the following steps:

  1. Have the getItemParams object specify the table name and key to retrieve an item from DynamoDB.
  2. Send a GetItemCommand to call the GetItem operation.
  3. Obtain the response data.
  4. Use the unmarshall function to convert the DynamoDB attribute value format into a plain JavaScript object (retrievedItem).
  5. Log the retrieved item to the console.

You can convert data between the DynamoDB attribute value format and JavaScript objects using the marshall and unmarshall functions. This facilitates seamless integration with DynamoDB and streamlined processing of data in your Node.js applications.

Boto3 example

The following code is a Boto3 example (package: boto3.dynamodb.types):

import boto3
from boto3.dynamodb.types import TypeDeserializer, TypeSerializer

REGION = 'us-west-2'

dynamodb_client = boto3.client('dynamodb', region_name=REGION)
# Serialize (Marshal) Example
item_to_store = {
    'user_id': '12345',
    'first_name': 'Terry',
    'age': 48,
}

serializer = TypeSerializer()

# serialize operates on individual attribute values, so apply it per attribute
serialized_item = {key: serializer.serialize(value) for key, value in item_to_store.items()}

put_item_params = {
    'TableName': 'my-table',
    'Item': serialized_item,
}

dynamodb_client.put_item(**put_item_params)

# Deserialize (Unmarshal) Example
deserializer = TypeDeserializer()

get_item_params = {
    'TableName': 'my-table',
    'Key': {
        'user_id': {'S': '12345'},
    },
}

response = dynamodb_client.get_item(**get_item_params)
retrieved_item = {key: deserializer.deserialize(value) for key, value in response['Item'].items()}
print(retrieved_item)

In the previous example, we demonstrated both serialization (serialize) and deserialization (deserialize) using the Boto3 library for Python.

Serialization example

Serialization consists of the following steps:

  1. Have a Python dictionary item_to_store represent an item to be stored in DynamoDB.
  2. Create a TypeSerializer object and use its serialize method to convert each attribute value into the DynamoDB attribute value format (serialized_item).
  3. Pass the serialized item to the put_item operation to store it in DynamoDB.

Deserialization example

Deserialization consists of the following steps:

  1. Create a TypeDeserializer object.
  2. Specify the table name and key in the get_item_params dictionary to retrieve an item from DynamoDB.
  3. Call the get_item operation.
  4. Obtain the response containing the serialized item.
  5. Use the deserialize method of the TypeDeserializer object to convert each serialized attribute value back into its native Python type, producing a plain Python dictionary (retrieved_item).
  6. Print the retrieved item.

You can convert data between the DynamoDB attribute value format and Python objects with the serialize and deserialize functions provided by Boto3. This enables seamless integration with DynamoDB and streamlined processing of data in your Python applications.
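Note that TypeSerializer.serialize and TypeDeserializer.deserialize operate on individual attribute values rather than on whole items, which is why the example above applies them attribute by attribute. A quick illustration:

from boto3.dynamodb.types import TypeDeserializer, TypeSerializer

serializer = TypeSerializer()
deserializer = TypeDeserializer()

print(serializer.serialize('Terry'))          # {'S': 'Terry'}
print(serializer.serialize(48))               # {'N': '48'}
print(deserializer.deserialize({'N': '48'}))  # Decimal('48')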

DynamoDB SDK programming language overview

The following table provides a general overview and may not include every available DynamoDB client for each programming language.

Programming language | High-level clients | Low-level clients
JavaScript/Node V2 | DocumentClient | DynamoDBClient
JavaScript/Node V3 | DocumentClient | DynamoDBClient
Python (Boto3) | Resource / Table | Client
Java SDK V1 | DynamoDBMapper | AmazonDynamoDBClient
Java SDK V2 | DynamoDbEnhancedClient | DynamoDbClient
.NET (C#) | DocumentModel | AmazonDynamoDBClient
Ruby | Aws::Record | Aws::DynamoDB::Client
PHP | DynamoDb\Marshaler | DynamoDb\DynamoDbClient
Go | dynamodbattribute | dynamodb
Rust | N/A | aws_sdk_dynamodb::Client
Swift | AWSDynamoDBObjectMapper | AWSDynamoDB

Conclusion

Choosing between the low-level and high-level client depends on the specific requirements of your application.

The low-level client offers granular control and flexibility. This makes it suitable for advanced use cases and fine-tuned customization. The high-level client abstracts away complexities, providing a simpler and more productive experience for common use cases.

Understanding the characteristics, use cases, and benefits of both client types empowers developers to make informed decisions and use the most appropriate client for their DynamoDB interactions. It’s essential to consider factors such as the level of control required, the complexity of data access patterns, and the desired development experience when selecting the client that best suits your needs.

Ready to take your development projects to the next level? Visit AWS Developer Tools to discover a wealth of resources, tutorials, and insights that will help you harness the full power of AWS. Then head over to the AWS Developer Tools Blog to keep building your skills and supercharge your projects.


About the Author

Lee Hannigan, Sr. DynamoDB Specialist SA. Lee has been a DynamoDB specialist for the past 4 years. With a strong background in Big Data technologies and valuable insights gained from working with innovative startups, he brings a wealth of knowledge to his AWS customers in EMEA. Passionate about helping AWS customers scale their applications, Lee's expertise lies in leveraging DynamoDB and serverless technologies to achieve optimal performance and efficiency. By providing tailored solutions and guidance, Lee has successfully assisted hundreds of organizations in unlocking the full potential of DynamoDB and embracing serverless architectures. With a customer-centric approach and a deep understanding of AWS services, Lee is dedicated to empowering businesses to thrive in the world of cloud computing.