AWS Database Blog

Monitor Amazon Timestream for InfluxDB performance using the Timestream for InfluxDB Metrics dashboard

The Timestream for InfluxDB Metrics dashboard lets you perform trend analysis, derive actionable insights, set up alerts, and automate reporting. You can configure the dashboard to suit your business needs and build a robust, optimized time series workflow. In this post, we walk you through how to deploy the Timestream for InfluxDB Metrics dashboard so you can start monitoring the performance of your fleet of Timestream for InfluxDB databases.

Build a dynamic workflow orchestration engine with Amazon DynamoDB and AWS Lambda

In this post, I show you how to build a serverless workflow orchestration engine that uses Amazon DynamoDB and AWS Lambda. The complete implementation is available in a GitHub repository, which includes two fully functional examples that you can deploy and run immediately to see the orchestration engine in action.
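
To make the pattern concrete, here is a minimal sketch of what such an orchestration loop could look like: a Lambda handler loads a workflow's current state from DynamoDB, runs the next step, and persists the result. The table name, key schema, and `run_step` helper are hypothetical placeholders, not the repository's actual code.

```python
# Minimal sketch of a DynamoDB-backed orchestration step (hypothetical
# names throughout; see the GitHub repository for the real implementation).
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("WorkflowStateTable")  # hypothetical table name

def run_step(step_index, state):
    """Placeholder for the logic that executes workflow step `step_index`."""
    state["last_completed"] = step_index
    return state

def handler(event, context):
    workflow_id = event["workflow_id"]
    item = table.get_item(Key={"workflow_id": workflow_id})["Item"]
    new_state = run_step(int(item["current_step"]), dict(item.get("state", {})))
    # Persist the step result and advance the step pointer in one update.
    table.update_item(
        Key={"workflow_id": workflow_id},
        UpdateExpression="SET current_step = current_step + :one, #s = :state",
        ExpressionAttributeNames={"#s": "state"},  # STATE is a reserved word
        ExpressionAttributeValues={":one": 1, ":state": new_state},
    )
```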

How Smartsheet enhances recommendations using Amazon Neptune and Knowledge Graphs

Smartsheet is a leading SaaS-based collaborative work management platform trusted by enterprises worldwide to manage projects, automate workflows, and drive collaboration at scale. In this post, we describe the Smartsheet Knowledge Graph, built in partnership between Smartsheet and AWS. The Smartsheet Knowledge Graph is a unified data model connecting people, content, and work in Smartsheet, representing how users interact with assets, content, and their collaborators.

Identifying and resolving performance issues caused by TOAST OID contention in Amazon Aurora PostgreSQL-Compatible Edition and Amazon RDS for PostgreSQL

In this post, we explore the challenges of OID exhaustion in PostgreSQL, focusing on its impact on TOAST tables and how it leads to performance issues. We cover how to identify the problem by reviewing wait events, session activity, and table usage, and we discuss practical solutions, from cleaning up data to more advanced strategies such as partitioning.
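
As a quick illustration (a sketch, not taken from the post), you can sample wait events from `pg_stat_activity` to check for this symptom; many sessions waiting on the OID-generation lightweight lock suggest contention on OID assignment. The connection parameters below are placeholders.

```python
# Sketch: sample wait events to spot OID-generation contention.
# The relevant wait event is reported as "OidGenLock" on older
# PostgreSQL versions and "OidGen" on newer ones, so match both.
import psycopg2

conn = psycopg2.connect(host="your-db-endpoint", dbname="postgres",
                        user="postgres", password="your-password")
with conn.cursor() as cur:
    cur.execute("""
        SELECT wait_event_type, wait_event, count(*) AS sessions
        FROM pg_stat_activity
        WHERE wait_event IS NOT NULL
        GROUP BY 1, 2
        ORDER BY sessions DESC;
    """)
    for wtype, wevent, sessions in cur.fetchall():
        flag = "  <-- possible OID contention" if "oidgen" in wevent.lower() else ""
        print(f"{wtype}/{wevent}: {sessions}{flag}")
conn.close()
```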

Implement event-driven architectures with Amazon DynamoDB – Part 3

In this three-part series, we explore approaches to implement enhanced event-driven patterns for DynamoDB-backed applications. Throughout the series, we've examined various strategies for managing data within DynamoDB. In this post (Part 3), we shift the focus to an event-driven pattern that reliably schedules future downstream actions using EventBridge Scheduler.
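
For a flavor of the pattern (a hedged sketch under assumed names, not the post's actual code), a one-time EventBridge Scheduler schedule can invoke a downstream Lambda function at a precise future time; the function ARN, role ARN, and payload shape below are placeholders.

```python
# Hypothetical sketch: schedule a future downstream action with a
# one-time EventBridge Scheduler schedule targeting a Lambda function.
import json
import boto3

scheduler = boto3.client("scheduler")

def schedule_followup(order_id: str, when_iso: str) -> None:
    """Create a one-time schedule, e.g. when_iso = "2025-07-01T12:00:00"."""
    scheduler.create_schedule(
        Name=f"followup-{order_id}",
        ScheduleExpression=f"at({when_iso})",
        FlexibleTimeWindow={"Mode": "OFF"},
        ActionAfterCompletion="DELETE",  # remove the schedule once it fires
        Target={
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:Downstream",
            "RoleArn": "arn:aws:iam::123456789012:role/SchedulerInvokeRole",
            "Input": json.dumps({"order_id": order_id}),
        },
    )
```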

Implement event-driven architectures with Amazon DynamoDB – Part 2

In this three-part series, we explore approaches to implement enhanced event-driven patterns for DynamoDB-backed applications. In this post (Part 2), we explore another method that uses global secondary indexes (GSIs) to handle fine-grained Time to Live (TTL) requirements.
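
As a rough sketch of how such a GSI could be used (table, index, and attribute names are assumptions, not the post's actual schema), items carry an expiry bucket attribute that serves as the index partition key, and a scheduled poller queries the current bucket and deletes whatever is due:

```python
# Hypothetical sketch: fine-grained TTL via a GSI keyed on a per-minute
# expiry bucket. A poller queries the current bucket and evicts items.
import boto3
from boto3.dynamodb.conditions import Key
from datetime import datetime, timezone

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Events")  # hypothetical table name

def evict_due_items(now=None):
    now = now or datetime.now(timezone.utc)
    bucket = now.strftime("%Y-%m-%dT%H:%M")  # one GSI partition per minute
    resp = table.query(
        IndexName="expires_minute-index",  # hypothetical GSI name
        KeyConditionExpression=Key("expires_minute").eq(bucket),
    )
    for item in resp["Items"]:
        table.delete_item(Key={"pk": item["pk"], "sk": item["sk"]})
```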

Implement event-driven architectures with Amazon DynamoDB

In this three-part series, we explore approaches to implement enhanced event-driven patterns for DynamoDB-backed applications. In this post (Part 1), we focus on improving DynamoDB's native TTL functionality by implementing near real-time data eviction using EventBridge Scheduler, reducing the typical delay before expired items are deleted from a few days to less than one minute.
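
Similar to the Part 3 sketch above, one way to approximate this pattern (hypothetical names; the post's actual design may differ) is to create a one-time schedule when the item is written, using EventBridge Scheduler's universal target to call the DynamoDB DeleteItem API at the exact expiry time:

```python
# Hypothetical sketch of near real-time eviction: a one-time schedule
# whose universal target calls dynamodb:DeleteItem at the expiry time.
import json
import boto3

scheduler = boto3.client("scheduler")

def schedule_eviction(pk: str, expires_at_iso: str) -> None:
    scheduler.create_schedule(
        Name=f"evict-{pk}",
        ScheduleExpression=f"at({expires_at_iso})",
        FlexibleTimeWindow={"Mode": "OFF"},
        ActionAfterCompletion="DELETE",
        Target={
            # Universal target: invoke the DeleteItem API directly.
            "Arn": "arn:aws:scheduler:::aws-sdk:dynamodb:deleteItem",
            "RoleArn": "arn:aws:iam::123456789012:role/SchedulerDeleteRole",
            "Input": json.dumps({
                "TableName": "Sessions",  # hypothetical table
                "Key": {"pk": {"S": pk}},
            }),
        },
    )
```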

Raising the bar on Amazon DynamoDB data modeling

In April 2025, we introduced the Amazon DynamoDB data modeling tool for the Model Context Protocol (MCP) server. The tool guides you through a conversation, collects your requirements, and produces a data model that includes tables, indexes, and cost considerations. In this post, we show you how we built an automated evaluation framework for this tool and how it helped us deliver reliable DynamoDB data modeling guidance at scale.

Long-term storage and analysis of Amazon RDS events with Amazon S3 and Amazon Athena

In this post, we show you how to implement an automated solution for archiving Amazon RDS events to Amazon Simple Storage Service (Amazon S3). We also discuss how to analyze the events with Amazon Athena, which enables proactive database management, supports security and compliance, and provides valuable insights for capacity planning and troubleshooting.
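
As a minimal illustration of the archiving half (a sketch with assumed names, not the post's full solution), a Lambda function subscribed to RDS events through an EventBridge rule can write each event to S3 under a date-partitioned prefix so Athena can query it efficiently:

```python
# Hypothetical sketch: archive an incoming RDS event to S3 with
# Hive-style date partitions (year=/month=/day=) for Athena queries.
import json
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")
BUCKET = "my-rds-events-archive"  # hypothetical bucket name

def handler(event, context):
    now = datetime.now(timezone.utc)
    key = (f"rds-events/year={now:%Y}/month={now:%m}/day={now:%d}/"
           f"{event.get('id', int(now.timestamp()))}.json")
    s3.put_object(Bucket=BUCKET, Key=key,
                  Body=json.dumps(event).encode("utf-8"))
    return {"archived_to": f"s3://{BUCKET}/{key}"}
```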