AWS Database Blog

Category: Amazon DynamoDB

Upgrade your Amazon DynamoDB global tables to the current version

Amazon DynamoDB is a fully managed, serverless NoSQL database that delivers single-digit millisecond performance for applications at any scale. DynamoDB global tables is a multi-active database feature that replicates data across AWS Regions, enabling local reads and writes. In this post, we explain why we strongly recommend all customers use the Current version for all global tables.

Implement prescription validation using Amazon Bedrock and Amazon DynamoDB

Healthcare providers manage an ever-growing volume of patient data and medication information to help ensure safe, effective treatment. Although traditional database systems excel at storing patient records, they require complex queries to access information. By adding generative AI capabilities, healthcare providers can now use natural language to search patient records and verify medication safety, rather than writing complex database queries. In this post, I show you a solution that uses Amazon Bedrock and Amazon DynamoDB to create an AI agent that helps healthcare providers quickly identify potential drug interactions by validating new prescriptions against a patient’s current medication records.
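
To make the agent's database step concrete, here is a minimal sketch (not the post's actual implementation) of the kind of lookup a Bedrock agent could invoke as a tool: it reads a patient's current medications from DynamoDB and checks a proposed drug against a supplied interaction map. The PatientMedications table, its key schema, and the known_interactions structure are all hypothetical assumptions, and the Amazon Bedrock agent wiring is omitted.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
# Hypothetical table: partition key patient_id, sort key medication
medications = dynamodb.Table("PatientMedications")

def check_interactions(patient_id, new_drug, known_interactions):
    """Return the patient's current medications known to interact with new_drug."""
    resp = medications.query(
        KeyConditionExpression=Key("patient_id").eq(patient_id)
    )
    current = [item["medication"] for item in resp["Items"]]
    # known_interactions maps a drug name to the set of drugs it conflicts with
    conflicts = known_interactions.get(new_drug, set())
    return [drug for drug in current if drug in conflicts]
```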

How Amazon maintains accurate totals at scale with Amazon DynamoDB

Amazon’s Finance Technologies Tax team (FinTech Tax) manages mission-critical services for tax computation, deduction, remittance, and reporting across global jurisdictions. Its applications process billions of transactions annually across multiple international marketplaces. In this post, we show how the team implemented tiered tax withholding using Amazon DynamoDB transactions and conditional writes.
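
As a rough sketch of the underlying pattern (not the FinTech Tax team's actual code), the example below uses a single DynamoDB transaction to record a withholding event and increment a running total atomically. The TaxTransactions table and its key schema are hypothetical; the condition on the Put makes the write idempotent, so a retried transaction can never be double-counted.

```python
import boto3

client = boto3.client("dynamodb")

def record_withholding(account_id, txn_id, amount_cents):
    client.transact_write_items(
        TransactItems=[
            {
                # Record the individual transaction; fail if it already exists
                "Put": {
                    "TableName": "TaxTransactions",
                    "Item": {
                        "pk": {"S": f"ACCOUNT#{account_id}"},
                        "sk": {"S": f"TXN#{txn_id}"},
                        "amount": {"N": str(amount_cents)},
                    },
                    "ConditionExpression": "attribute_not_exists(sk)",
                },
            },
            {
                # Atomically add the amount to the account's running total
                "Update": {
                    "TableName": "TaxTransactions",
                    "Key": {
                        "pk": {"S": f"ACCOUNT#{account_id}"},
                        "sk": {"S": "TOTAL"},
                    },
                    "UpdateExpression": "ADD running_total :a",
                    "ExpressionAttributeValues": {":a": {"N": str(amount_cents)}},
                },
            },
        ]
    )
```

If the Put condition fails because the transaction was already recorded, DynamoDB cancels the whole transaction, so the total is never incremented twice.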

Amazon DynamoDB data modeling for Multi-Tenancy – Part 3

In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore areas that are important for decision-making and provide insights into the influences to help you plan your data model for a multi-tenant application. In this last part of the series, we explore how to validate the chosen data model from both a performance and a security perspective. Additionally, we cover how to extend the data model as new access patterns and requirements arise.

Amazon DynamoDB data modeling for Multi-Tenancy – Part 2

In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore areas that are important for decision-making and provide insights into the influences to help you plan your data model for a multi-tenant application. In this post, we continue the design process, selecting a partition key design and creating our data schema. We also show how to implement the access patterns using the AWS Command Line Interface (AWS CLI).
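
The post itself implements the access patterns with the AWS CLI; the sketch below expresses the same partition key idea in Python with boto3. The IssueTracker table name and the PK/SK attribute names are illustrative assumptions: prefixing the partition key with the tenant identifier keeps each tenant's items in separate item collections, so a per-tenant read is a single Query call.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("IssueTracker")  # hypothetical table

def put_ticket(tenant_id, ticket_id, title):
    # The tenant ID is baked into the partition key, so one tenant's
    # items can never be returned by another tenant's key condition.
    table.put_item(
        Item={
            "PK": f"TENANT#{tenant_id}#TICKET#{ticket_id}",
            "SK": "METADATA",
            "title": title,
        }
    )

def get_ticket_items(tenant_id, ticket_id):
    resp = table.query(
        KeyConditionExpression=Key("PK").eq(f"TENANT#{tenant_id}#TICKET#{ticket_id}")
    )
    return resp["Items"]
```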

Amazon DynamoDB data modeling for Multi-Tenancy – Part 1

In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore areas that are important for decision-making and provide insights into the influences to help you plan your data model for a multi-tenant application. In this post, we define the access patterns and decide on the table design.

Ingest CSV data to Amazon DynamoDB using AWS Lambda

In this post, we explore a streamlined solution that uses AWS Lambda and Python to read and ingest CSV data into an existing Amazon DynamoDB table. This approach adheres to organizational security restrictions, supports infrastructure as code (IaC) for table management, and provides an event-driven process for ingesting CSV datasets into DynamoDB.
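
A minimal sketch of such a handler might look like the following, assuming an S3 PutObject event trigger, CSV headers that match the table's attribute names, and a hypothetical table called TargetTable; the post's actual code may differ.

```python
import csv
import io
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("TargetTable")  # hypothetical table

def lambda_handler(event, context):
    # Locate the uploaded CSV from the S3 event notification
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = unquote_plus(record["object"]["key"])

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    # batch_writer buffers writes and retries unprocessed items automatically
    with table.batch_writer() as batch:
        for row in csv.DictReader(io.StringIO(body)):
            batch.put_item(Item=row)
```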

Choose the right throughput strategy for Amazon DynamoDB applications

When getting started with DynamoDB, one of the first decisions you will make is choosing between two throughput modes: on-demand and provisioned. On-demand mode is the default and recommended throughput option because it simplifies building modern, serverless applications that can start small and scale to millions of requests per second. However, choosing the right throughput strategy requires evaluating your operational needs, development velocity, and application characteristics, with cost being a key consideration. In this post, we examine both throughput modes in detail, exploring their characteristics, strengths, and ideal use cases.
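
The mode is a table-level setting. As a sketch with hypothetical table names, the snippet below shows how each mode is selected at creation time with boto3; update_table can switch an existing table between modes later.

```python
import boto3

client = boto3.client("dynamodb")

# On-demand (PAY_PER_REQUEST): no capacity planning, pay per request
client.create_table(
    TableName="OrdersOnDemand",  # hypothetical
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Provisioned: you declare read/write capacity units up front
client.create_table(
    TableName="OrdersProvisioned",  # hypothetical
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
)
```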

How Amazon Finance Automation built an operational data store with AWS purpose-built databases to power critical finance applications

In this post, we discuss how the Amazon Finance Automation team used AWS purpose-built databases, including Amazon DynamoDB, Amazon OpenSearch Service, and Amazon Neptune, coupled with serverless compute such as AWS Lambda, to build an operational data store (ODS) that stores financial transaction data and supports FinOps applications with millisecond latency. This data store is a key enabler for the FinOps business.