AWS Database Blog

Category: Amazon DynamoDB

How Global Payments Inc. improved their tail latency using request hedging with Amazon DynamoDB

Amazon DynamoDB delivers consistent single-digit millisecond performance at any scale, making it ideal for mission-critical workloads. However, as with any distributed system, a small percentage of requests may experience significantly longer response times than the average. This phenomenon, known as tail latency, refers to these slower outliers, which show up in metrics such as the 99th or 99.9th percentile of response times. In this post, we explore how Global Payments Inc. (GPN) reduced their tail latency by 30% using request hedging. We review the technical details and challenges they faced, providing insights into how you can optimize your own latency-sensitive applications. In a follow-up post, we'll share detailed implementation examples.
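To illustrate the idea (not GPN's actual implementation), here is a minimal sketch of request hedging against DynamoDB with boto3 and a thread pool: if the first request hasn't returned within a short hedge delay, a second identical request is sent and whichever finishes first wins. The table name, key attribute, and hedge delay are placeholder assumptions.

```python
import concurrent.futures

import boto3

dynamodb = boto3.client("dynamodb")
pool = concurrent.futures.ThreadPoolExecutor(max_workers=8)

# Placeholder values for illustration; tune the hedge delay to your observed p95/p99.
TABLE_NAME = "example-table"
HEDGE_DELAY_SECONDS = 0.01


def get_item(key):
    return dynamodb.get_item(TableName=TABLE_NAME, Key=key)


def hedged_get_item(key):
    """Send one request; if it hasn't finished within the hedge delay,
    send a second identical request and return whichever completes first."""
    first = pool.submit(get_item, key)
    try:
        return first.result(timeout=HEDGE_DELAY_SECONDS)
    except concurrent.futures.TimeoutError:
        second = pool.submit(get_item, key)
        done, _ = concurrent.futures.wait(
            [first, second], return_when=concurrent.futures.FIRST_COMPLETED
        )
        return next(iter(done)).result()


item = hedged_get_item({"pk": {"S": "customer#123"}})
```

Hedging is safest for idempotent reads such as GetItem; hedged writes need additional idempotency safeguards.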

Gracefully handle failed AWS Lambda events from Amazon DynamoDB Streams

In this post, we show how to capture and retain failed stream events for later analysis or replay using Amazon S3 as a durable destination. We compare this approach with the traditional Amazon SQS dead-letter queue (DLQ) pattern, and explain when and why Amazon S3 is a preferred option.
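As a rough sketch of the pattern (not the post's exact code), a Lambda handler consuming a DynamoDB stream can catch per-record failures and retain the failed events in S3 for later analysis or replay instead of letting one bad record block the batch. The bucket name, environment variable, and key layout are illustrative assumptions.

```python
import json
import os

import boto3

s3 = boto3.client("s3")

# Illustrative assumption; the post's bucket name and key layout may differ.
FAILED_EVENTS_BUCKET = os.environ.get("FAILED_EVENTS_BUCKET", "example-failed-events")


def process(record):
    """Business logic for a single stream record goes here."""
    ...


def handler(event, context):
    for record in event["Records"]:
        try:
            process(record)
        except Exception:
            # Retain the failed event in S3 for later analysis or replay,
            # so one bad record doesn't block or exhaust the whole batch.
            key = f"failed-events/{record['eventID']}.json"
            s3.put_object(
                Bucket=FAILED_EVENTS_BUCKET,
                Key=key,
                Body=json.dumps(record).encode("utf-8"),
            )
```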

Vibe code with AWS databases using Vercel v0

In this post, we explore how you can use Vercel’s v0 generative UI to build applications with a modern UI for AWS purpose-built databases such as Amazon Aurora, Amazon DynamoDB, Amazon Neptune, and Amazon ElastiCache.

Enhanced throttling observability in Amazon DynamoDB

Today, we’re announcing improved observability for throttled requests in Amazon DynamoDB. These enhancements provide developers with enriched exception messages, detailed Amazon CloudWatch metrics, and a new, more cost-effective mode for CloudWatch Contributor Insights. Together, these improvements make it straightforward to understand, monitor, and optimize your DynamoDB applications’ performance. In this post, we explore how these […]
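For context, a minimal sketch of how an application detects throttling today, using the existing DynamoDB error codes; the enriched exception messages and CloudWatch metrics described in the post surface through these same paths. The table name and key are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

try:
    dynamodb.get_item(
        TableName="example-table",  # placeholder table name
        Key={"pk": {"S": "customer#123"}},
    )
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code in ("ProvisionedThroughputExceededException", "ThrottlingException"):
        # The enriched message described in the post appears here, alongside
        # throttling metrics such as ThrottledRequests in CloudWatch.
        print("Request was throttled:", err.response["Error"]["Message"])
    else:
        raise
```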

Introducing the Amazon DynamoDB data modeling MCP tool

To help you move faster with greater confidence, we’re introducing a new DynamoDB data modeling tool, available as part of our DynamoDB Model Context Protocol (MCP) server. The DynamoDB MCP data modeling tool integrates with AI assistants that support MCP, providing a structured, natural-language-driven workflow to translate application requirements into DynamoDB data models. In this post, we show you how to generate a data model in minutes using this new data modeling tool.

How to evaluate throughput utilization for Amazon DynamoDB tables in provisioned mode

In this post, we demonstrate how to evaluate throughput utilization for DynamoDB tables in provisioned mode. Understanding this metric helps you determine whether switching to on-demand mode is the right choice. Moving to on-demand mode, where you pay per request for throughput, can optimize costs, eliminate capacity planning, minimize operational overhead, and enhance the overall user experience for your applications.
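One way to approximate this evaluation, sketched below under assumed placeholder values, is to compare consumed capacity against provisioned capacity from CloudWatch. The table name, time window, and period are illustrative; the post may use a different approach.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

TABLE_NAME = "example-table"  # placeholder table name
PERIOD = 3600  # one-hour windows

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)


def datapoints(metric_name, stat):
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName=metric_name,
        Dimensions=[{"Name": "TableName", "Value": TABLE_NAME}],
        StartTime=start,
        EndTime=end,
        Period=PERIOD,
        Statistics=[stat],
    )
    return {dp["Timestamp"]: dp[stat] for dp in response["Datapoints"]}


consumed = datapoints("ConsumedReadCapacityUnits", "Sum")
provisioned = datapoints("ProvisionedReadCapacityUnits", "Average")

# Utilization per window: average consumed RCU/s divided by provisioned RCU/s.
utilization = [
    (consumed[ts] / PERIOD) / provisioned[ts]
    for ts in consumed
    if ts in provisioned and provisioned[ts] > 0
]
if utilization:
    print(f"Peak hourly read utilization over the last 7 days: {max(utilization):.0%}")
```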

SQL to NoSQL: Modernizing data access layer with Amazon DynamoDB

The transition from SQL-based access patterns to a DynamoDB API-driven approach presents opportunities to optimize how your application interacts with its data layer. This final part of our series focuses on implementing an effective abstraction layer and handling various data access patterns in DynamoDB.
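As a hedged illustration of what such an abstraction layer can look like, the sketch below wraps DynamoDB calls in a hypothetical CustomerRepository so the rest of the application works with domain methods rather than key expressions. The class name, table name, and PK/SK scheme are assumptions for the example, not the series' definitive design.

```python
import boto3
from boto3.dynamodb.conditions import Key


class CustomerRepository:
    """Hypothetical abstraction layer that keeps DynamoDB key design out of
    the rest of the application code."""

    def __init__(self, table_name="example-table"):  # placeholder table name
        self._table = boto3.resource("dynamodb").Table(table_name)

    def get_customer(self, customer_id):
        # The PK/SK scheme below is an illustrative single-table design choice.
        response = self._table.get_item(
            Key={"PK": f"CUSTOMER#{customer_id}", "SK": "PROFILE"}
        )
        return response.get("Item")

    def list_orders(self, customer_id):
        # A customer's orders share the customer's partition key, so a single
        # Query call retrieves all of them together.
        response = self._table.query(
            KeyConditionExpression=Key("PK").eq(f"CUSTOMER#{customer_id}")
            & Key("SK").begins_with("ORDER#")
        )
        return response["Items"]
```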

SQL to NoSQL: Modeling data in Amazon DynamoDB

In this post, we explore strategies for designing DynamoDB data models, including entity identification, table design decisions, and relationship modeling approaches. We examine practical scenarios comparing different modeling strategies, helping you make informed decisions for your specific use case.
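To make the modeling decisions concrete, here is a small sketch of one possible single-table layout, where two related entity types share a partition key so they can be retrieved together. The table name, key attributes, and item shapes are illustrative assumptions, not the post's prescribed model.

```python
import boto3

# Placeholder table; the key scheme below is one illustrative single-table layout.
table = boto3.resource("dynamodb").Table("example-app")

# A customer profile and its orders share a partition key, so related items
# can be read together with a single Query on PK = "CUSTOMER#123".
table.put_item(Item={
    "PK": "CUSTOMER#123",
    "SK": "PROFILE",
    "name": "Ana Example",
    "email": "ana@example.com",
})
table.put_item(Item={
    "PK": "CUSTOMER#123",
    "SK": "ORDER#2024-06-01#A1B2",
    "status": "SHIPPED",
    "total": 4200,
})
```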