AWS Database Blog

Category: Analytics

Filter Amazon Aurora database activity stream data for segregation and monitoring

Most organizations need to monitor activity on databases containing sensitive information for security auditing and compliance. Although some security operations teams might want to monitor all activities, such as reads, writes, and logons, others might want to restrict monitoring to only the activities that change data or data structures. In this post, […]

Read More

Building a data discovery solution with Amundsen and Amazon Neptune

September 8, 2021: Amazon Elasticsearch Service has been renamed to Amazon OpenSearch Service. In this post, we discuss the need for a metadata and data lineage tool and the problems it solves, how to rapidly deploy it in the language you prefer using the AWS Cloud Development Kit (AWS CDK), as well as […]

Read More

Analyze database performance with Amazon CloudWatch metric streams

With the announcement of Amazon CloudWatch Metric Streams, you can now stream near-real-time metrics data to a destination such as Amazon Simple Storage Service (Amazon S3). Metric Streams supports two primary use cases: Third-party providers – You can stream metrics to partners to power dashboards, alarms, and other tools that rely on accurate and timely […]
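As a rough illustration of the kind of setup the post walks through, the following sketch creates a metric stream with boto3 that points at a pre-existing Kinesis Data Firehose delivery stream delivering to Amazon S3. The stream name, namespace filter, Firehose ARN, and IAM role ARN are hypothetical placeholders.

```python
import boto3

# A minimal sketch, assuming a Firehose delivery stream that writes to Amazon S3
# already exists; the names and ARNs below are hypothetical.
cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.put_metric_stream(
    Name="example-db-metric-stream",              # hypothetical stream name
    IncludeFilters=[{"Namespace": "AWS/RDS"}],    # stream only database metrics
    FirehoseArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/example",
    RoleArn="arn:aws:iam::123456789012:role/example-metric-stream-role",
    OutputFormat="json",                          # JSON records delivered to S3
)
print(response["Arn"])
```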

Read More

Near real-time processing with Amazon Kinesis, Amazon Timestream, and Grafana

As organizations adopt and deploy home-connected smart devices, they face challenges in using device telemetry data in both narrow and broad contexts. Examples of such home-connected devices are smart meters and home sensors that emit telemetry and measurements as time series data. In a narrow context, operational teams use data to understand if devices are operating within […]

Read More

How to migrate Amazon DynamoDB tables from one AWS account to another with AWS Data Pipeline

There are many scenarios in which you might need to migrate your Amazon DynamoDB tables from one AWS account to another, such as when you need to consolidate all your AWS services into centralized accounts. Consolidating DynamoDB tables into a single account can be time-consuming and complex if you have a lot of […]
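For a sense of what a table copy involves, here is a simplified sketch that scans a source table in one account and batch-writes the items into a table in another account with boto3. The profile and table names are hypothetical, and this is not the AWS Data Pipeline approach the post describes, which is better suited to large tables.

```python
import boto3

# A simplified cross-account copy sketch; profile and table names are hypothetical.
source = boto3.Session(profile_name="source-account").resource("dynamodb")
target = boto3.Session(profile_name="target-account").resource("dynamodb")

src_table = source.Table("Orders")
dst_table = target.Table("Orders")

scan_kwargs = {}
with dst_table.batch_writer() as batch:
    while True:
        page = src_table.scan(**scan_kwargs)
        for item in page["Items"]:
            batch.put_item(Item=item)              # write each item to the destination
        if "LastEvaluatedKey" not in page:
            break                                  # no more pages to scan
        scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```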

Read More

Amazon QLDB data streaming via AWS CDK

Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log. You can use Amazon QLDB to track each application data change, and it maintains a complete and verifiable history of changes over time. Because of those key features, banking customers have adopted Amazon QLDB as a database […]

Read More

How Zulily drives discovery shopping using Amazon Kinesis Data Analytics and Amazon DocumentDB

This is a guest post by Sergey Podlazov, Director of Engineering (Shopping Experience) at Zulily; Senthil Kumar, Sr. Solutions Architect, AWS; and Praveen Chamarthi, Sr. Technical Account Manager, AWS. Zulily offers shoppers a unique ecommerce experience with amazing deals on products for moms, kids, and babies. We have scaled this model to […]

Read More

Capture changes from Amazon DocumentDB via AWS Lambda and publish them to Amazon MSK

When using a document data store as your service’s source of truth, you may need to share changes from this source with other downstream systems. The data events happening within this data store can be converted to business events, which can then be consumed by multiple microservices that implement different business functionality. […]
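To illustrate the pattern, the following sketch polls an Amazon DocumentDB change stream with PyMongo and republishes each change event to an Amazon MSK topic with kafka-python. The environment variables, database, collection, and topic name are hypothetical; it assumes change streams are enabled on the collection, and a production function would also persist the change stream’s resume token and configure the DocumentDB TLS certificate bundle.

```python
import json
import os

from kafka import KafkaProducer     # kafka-python
from pymongo import MongoClient

# Hypothetical connection settings; TLS certificate configuration omitted for brevity.
client = MongoClient(os.environ["DOCDB_URI"], tls=True)
producer = KafkaProducer(
    bootstrap_servers=os.environ["MSK_BROKERS"].split(","),
    value_serializer=lambda v: json.dumps(v, default=str).encode("utf-8"),
)

def lambda_handler(event, context):
    collection = client["orders"]["order_events"]     # hypothetical database/collection
    published = 0
    with collection.watch(full_document="updateLookup") as stream:
        # Drain the changes currently available, then return; a real implementation
        # would persist the resume token so the next invocation continues from here.
        change = stream.try_next()
        while change is not None:
            producer.send("order-changes", value=change)   # hypothetical topic
            published += 1
            change = stream.try_next()
    producer.flush()
    return {"published": published}
```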

Read More

Export and analyze Amazon DynamoDB data in an Amazon S3 data lake in Apache Parquet format

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million […]
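Purely as an illustration of the end state, the following sketch scans a small, hypothetical table and writes the items to Amazon S3 as Parquet with AWS SDK for pandas (awswrangler); it is not the pipeline the post builds, and a full scan is only practical for small tables.

```python
import boto3
import pandas as pd
import awswrangler as wr

# A minimal sketch; the table name and S3 prefix are hypothetical.
table = boto3.resource("dynamodb").Table("Orders")

items, scan_kwargs = [], {}
while True:
    page = table.scan(**scan_kwargs)
    items.extend(page["Items"])
    if "LastEvaluatedKey" not in page:
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

# Write the items to S3 as Parquet for querying with analytics tools such as Athena
wr.s3.to_parquet(
    df=pd.DataFrame(items),
    path="s3://example-data-lake/orders/",   # hypothetical S3 prefix
    dataset=True,
)
```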

Read More

Creating Amazon Timestream interpolated views using Amazon Kinesis Data Analytics for Apache Flink

Many organizations have accelerated their adoption of stream data processing technologies to derive actionable insights from their data more quickly. Frequently, data from streams must be computed into metrics or aggregations and stored in near-real time for analysis. These computed values should be generated and stored as quickly as […]
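The post computes these values with Amazon Kinesis Data Analytics for Apache Flink; purely as an illustration of the storage side, the following sketch writes one computed metric into Amazon Timestream with boto3. The database, table, measure, and dimension names are hypothetical.

```python
import time

import boto3

# A minimal sketch of the storage step only; names below are hypothetical.
timestream = boto3.client("timestream-write")

def store_metric(device_id: str, avg_temperature: float) -> None:
    timestream.write_records(
        DatabaseName="telemetry",
        TableName="interpolated_metrics",
        Records=[{
            "Dimensions": [{"Name": "device_id", "Value": device_id}],
            "MeasureName": "avg_temperature",
            "MeasureValue": str(avg_temperature),
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),   # current time in milliseconds
            "TimeUnit": "MILLISECONDS",
        }],
    )

store_metric("sensor-42", 21.7)
```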

Read More