AWS Database Blog

Category: Amazon DynamoDB

Amazon DynamoDB data modeling for Multi-Tenancy – Part 3

In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore the areas that matter for decision-making and to provide insight into the factors that influence how you plan your data model for a multi-tenant application. In this final part of the series, we explore how to validate the chosen data model from both a performance and a security perspective. Additionally, we cover how to extend the data model as new access patterns and requirements arise.

Amazon DynamoDB data modeling for Multi-Tenancy – Part 2

In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore the areas that matter for decision-making and to provide insight into the factors that influence how you plan your data model for a multi-tenant application. In this post, we continue the design process by selecting a partition key design and creating our data schema. We also show how to implement the access patterns using the AWS Command Line Interface (AWS CLI).
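
To give a flavor of what such an access pattern looks like, here is a minimal sketch of a tenant-scoped query written with boto3 (the post itself implements the patterns with the AWS CLI). The table name, key schema, and tenant-prefixed partition key value below are illustrative assumptions, not the schema the series arrives at.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Fetch all items belonging to a single tenant by querying on a partition key
# that is prefixed with the tenant identifier (hypothetical schema).
response = dynamodb.query(
    TableName="IssueTracker",
    KeyConditionExpression="PK = :pk",
    ExpressionAttributeValues={":pk": {"S": "TENANT#acme"}},
)

for item in response["Items"]:
    print(item)
```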

Amazon DynamoDB data modeling for Multi-Tenancy – Part 1

In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore the areas that matter for decision-making and to provide insight into the factors that influence how you plan your data model for a multi-tenant application. In this post, we define the access patterns and decide on the table design.

Ingest CSV data to Amazon DynamoDB using AWS Lambda

In this post, we explore a streamlined solution that uses AWS Lambda and Python to read and ingest CSV data into an existing Amazon DynamoDB table. This approach adheres to organizational security restrictions, supports infrastructure as code (IaC) for table management, and provides an event-driven process for ingesting CSV datasets into DynamoDB.
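
As a rough illustration of that event-driven flow, the sketch below assumes an S3 upload triggers the Lambda function and that the target table and its key attributes already exist. The bucket wiring, table name, and CSV layout are placeholders rather than the post's actual implementation.

```python
import csv
import io

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("IssueImports")  # existing table, managed via IaC


def lambda_handler(event, context):
    # Each S3 event record points at a newly uploaded CSV object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # batch_writer buffers rows and flushes BatchWriteItem requests automatically.
        with table.batch_writer() as batch:
            for row in csv.DictReader(io.StringIO(body)):
                batch.put_item(Item=row)

    return {"statusCode": 200}
```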

Choose the right throughput strategy for Amazon DynamoDB applications

When you get started with DynamoDB, one of the first decisions you make is choosing between two throughput modes: on-demand and provisioned. On-demand mode is the default and recommended throughput option because it simplifies building modern, serverless applications that can start small and scale to millions of requests per second. However, choosing the right throughput strategy requires evaluating your operational needs, development velocity, and application characteristics, with cost being a key consideration. In this post, we examine both throughput modes in detail, exploring their characteristics, strengths, and ideal use cases.
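
To make the two modes concrete, here is a hedged boto3 sketch: a table created in on-demand mode and later switched to provisioned capacity. The table name and capacity figures are placeholders, not recommendations from the post.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand (PAY_PER_REQUEST) billing: pay per request, no capacity planning.
dynamodb.create_table(
    TableName="Tickets",
    AttributeDefinitions=[{"AttributeName": "PK", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "PK", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# If traffic is steady and predictable, the same table can later be switched
# to provisioned mode with explicit read and write capacity units.
dynamodb.update_table(
    TableName="Tickets",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
)
```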

How Amazon Finance Automation built an operational data store with AWS purpose-built databases to power critical finance applications

In this post, we discuss how the Amazon Finance Automation team used AWS purpose-built databases, such as Amazon DynamoDB, Amazon OpenSearch Service, and Amazon Neptune, coupled with serverless compute such as AWS Lambda, to build an Operational Data Store (ODS) that stores financial transaction data and supports FinOps applications with millisecond latency. This data is a key enabler for the FinOps business.

2024: A year of innovation and growth for Amazon DynamoDB

2024 marked a significant year for Amazon DynamoDB, with advancements in security, performance, cost-effectiveness, and integration capabilities. This year-in-review post highlights key developments that have enhanced the DynamoDB experience for our customers. Whether you’re a long-time DynamoDB user or just getting started, this post will guide you through the most impactful changes of 2024 and how they can help you build more reliable, faster, and more secure applications. We’ve organized the post alphabetically by feature area, listing releases in reverse chronological order.

Announcing configurable point-in-time recovery periods for Amazon DynamoDB

Amazon DynamoDB enables you to back up your table data continuously by using point-in-time recovery (PITR). When you enable PITR, DynamoDB backs up your table data automatically with per-second granularity. PITR helps protect you against accidental writes and deletes. For example, if a test script accidentally writes to a production DynamoDB table, or someone mistakenly […]
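
For reference, enabling PITR on an existing table is a single API call. The sketch below uses boto3 with a placeholder table name, and the RecoveryPeriodInDays setting reflects the configurable recovery period this post announces; 35 days is shown purely as an example value.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Turn on continuous backups with point-in-time recovery for an existing table.
# RecoveryPeriodInDays sets the configurable retention window (example value).
dynamodb.update_continuous_backups(
    TableName="Tickets",
    PointInTimeRecoverySpecification={
        "PointInTimeRecoveryEnabled": True,
        "RecoveryPeriodInDays": 35,
    },
)
```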