AWS Database Blog

Diving deep into the new Amazon Aurora Global Database writer endpoint

On October 22, 2024, we announced the availability of the Aurora Global Database writer endpoint, a highly available and fully managed endpoint for your global database that Aurora automatically updates to point to the current writer instance in your global cluster after a cross-Region switchover or failover. This alleviates the need for application changes and simplifies routing requests to the writer instance. In this post, we dive deep into the new Global Database writer endpoint, covering its benefits and key considerations for using it with your applications.
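As a minimal sketch of what the writer endpoint means for application code, the example below connects to a single, stable DNS name instead of tracking which Region currently hosts the writer. It assumes an Aurora PostgreSQL-compatible global cluster; the endpoint hostname, database name, credentials, and retry behavior are placeholders for illustration, not the post's exact code.

```python
import time
import psycopg2

# Hypothetical global writer endpoint; Aurora keeps this DNS name pointed
# at the current writer instance across cross-Region switchovers and failovers.
WRITER_ENDPOINT = "my-global-cluster.global-abc123.global.rds.amazonaws.com"  # placeholder

def connect_with_retry(retries=5, delay=5):
    """Open a connection to the global writer endpoint, retrying briefly so the
    application can ride through a switchover without any code or config changes."""
    for attempt in range(1, retries + 1):
        try:
            return psycopg2.connect(
                host=WRITER_ENDPOINT,
                port=5432,
                dbname="appdb",      # placeholder database name
                user="app_user",     # placeholder credentials
                password="***",
                connect_timeout=5,
            )
        except psycopg2.OperationalError:
            if attempt == retries:
                raise
            time.sleep(delay)

conn = connect_with_retry()
with conn, conn.cursor() as cur:
    # Aurora PostgreSQL helper function that reports which instance served the query.
    cur.execute("SELECT aurora_db_instance_identifier()")
    print(cur.fetchone())
```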

Use Amazon Neptune Analytics to analyze relationships in your data faster, Part 2: Enhancing fraud detection with Parquet and CSV import and export

In this two-part series, we show how you can import and export data using Parquet and CSV to quickly gather insights from your existing graph data. In Part 1, we introduced the import and export functionalities and walked you through how to quickly get started with them. In this post, we show how you can use the new data mobility improvements in Neptune Analytics to enhance fraud detection.
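To give a flavor of the kind of analysis Part 2 covers (not the post's actual queries), the hedged sketch below runs a simple openCypher query against a Neptune Analytics graph through the boto3 neptune-graph client, looking for accounts that share a device, which is a common fraud-ring signal. The graph ID, node labels, relationship type, and property names are assumptions, and the exact execute_query call pattern should be checked against the neptune-graph API reference.

```python
import boto3

client = boto3.client("neptune-graph", region_name="us-east-1")

# Hypothetical openCypher query: flag pairs of accounts that used the same device,
# a typical indicator of coordinated or fraudulent activity. Labels (Account, Device)
# and the USED relationship are assumed for illustration only.
query = """
MATCH (a1:Account)-[:USED]->(d:Device)<-[:USED]-(a2:Account)
WHERE a1 <> a2
RETURN a1.accountId AS first, a2.accountId AS second, d.deviceId AS sharedDevice
LIMIT 25
"""

response = client.execute_query(
    graphIdentifier="g-0123456789abcdef",  # placeholder Neptune Analytics graph ID
    queryString=query,
    language="OPEN_CYPHER",
)

# The payload is returned as a streaming body containing the JSON result set.
print(response["payload"].read().decode("utf-8"))
```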

Use Amazon Neptune Analytics to analyze relationships in your data faster, Part 1: Introducing Parquet and CSV import and export

In this two-part series, we show how you can import and export data using Parquet and CSV to quickly gather insights from your existing graph data. Part 1 introduces the import and export functionalities and walks you through how to quickly get started with them. In Part 2, we show how you can use the new data mobility improvements in Neptune Analytics to enhance fraud detection.
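As a rough sketch of the import path Part 1 introduces, the example below starts a bulk import of Parquet files from Amazon S3 into an existing Neptune Analytics graph using the boto3 neptune-graph client, then polls until the task reaches a terminal state. The graph ID, S3 path, IAM role, the "PARQUET" format value, and the status handling are assumptions to adapt to the documented StartImportTask and GetImportTask APIs rather than code from the post.

```python
import time
import boto3

client = boto3.client("neptune-graph", region_name="us-east-1")

# Kick off a bulk import from S3 into an existing Neptune Analytics graph.
# Graph ID, bucket path, role ARN, and format value are placeholders.
import_task = client.start_import_task(
    graphIdentifier="g-0123456789abcdef",
    source="s3://my-bucket/graph-exports/2024-12-01/",
    format="PARQUET",
    roleArn="arn:aws:iam::123456789012:role/NeptuneAnalyticsImportRole",
)
task_id = import_task["taskId"]

# Poll until the import finishes (simplified; production code should bound retries).
while True:
    status = client.get_import_task(taskIdentifier=task_id)["status"]
    print("import status:", status)
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(30)
```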

How Skello uses AWS DMS to synchronize data from a monolithic application to microservices

Skello is a human resources (HR) software-as-a-service (SaaS) platform that focuses on employee scheduling and workforce management. It caters to various sectors, including hospitality, retail, healthcare, construction, and industry. In this post, we show how Skello uses AWS Database Migration Service (AWS DMS) to synchronize data from a monolithic architecture to microservices and to ingest data from the monolithic architecture and microservices into its data lake.
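The post walks through Skello's specific setup; as a generic, hedged sketch of the core building block, the example below uses boto3 to create an AWS DMS task that performs a full load followed by ongoing replication of a selected set of tables from a source endpoint (for example, the monolith's database) to a target owned by a microservice. The ARNs, schema, and table names are placeholders, not Skello's configuration.

```python
import json
import boto3

dms = boto3.client("dms", region_name="eu-west-1")

# Select only the tables a given microservice owns; all other tables are excluded by default.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-scheduling-tables",
            "object-locator": {"schema-name": "public", "table-name": "shifts%"},  # placeholder
            "rule-action": "include",
        }
    ]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="monolith-to-scheduling-service",
    SourceEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:123456789012:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",  # initial copy, then ongoing change data capture
    TableMappings=json.dumps(table_mappings),
)
print(response["ReplicationTask"]["Status"])
```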

How Orca Security optimized their Amazon Neptune database performance

Orca Security, an AWS Partner, is an independent cybersecurity software provider whose patented agentless-first cloud security platform is trusted by hundreds of enterprises globally. At Orca Security, we use a variety of metrics to assess the significance of security alerts on cloud assets. Our Amazon Neptune database plays a critical role in calculating the exposure of individual assets within a customer’s cloud environment. By building a graph that maps assets and their connectivity to one another and to the broader internet, the Orca Cloud Security Platform can evaluate both how an asset is exposed and how an attacker could potentially move laterally within an account. In this post, we explore some of the key strategies we’ve adopted to maximize the performance of our Amazon Neptune database.

Monitor server-side latency for Amazon ElastiCache for Valkey

Modern applications are often built as a group of microservices, and high latency in any one component can impact the performance of the entire system. Monitoring latency is critical for maintaining optimal performance, enhancing user experience, and ensuring system reliability. In this post, we explore ways to monitor latency, detect anomalies, and troubleshoot high-latency issues effectively for your self-designed (node-based) ElastiCache clusters.
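As one hedged starting point (the post covers the full set of metrics and troubleshooting steps), the sketch below pulls a p99 server-side latency series for a single node of a self-designed cluster from Amazon CloudWatch with boto3. The metric name, dimension values, and node naming are assumptions to adjust for your engine version and cluster layout.

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# Pull p99 server-side read latency for one node of a self-designed (node-based) cluster.
# "SuccessfulReadRequestLatency" and the dimension values are assumptions for illustration.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ElastiCache",
    MetricName="SuccessfulReadRequestLatency",
    Dimensions=[
        {"Name": "CacheClusterId", "Value": "my-valkey-cluster-0001-001"},  # placeholder node
        {"Name": "CacheNodeId", "Value": "0001"},
    ],
    StartTime=start,
    EndTime=end,
    Period=60,
    ExtendedStatistics=["p99"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["ExtendedStatistics"]["p99"])
```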

Monitor server-side latency for Amazon MemoryDB for Valkey

Amazon MemoryDB is a Valkey- and Redis OSS-compatible, durable, in-memory database service that delivers ultra-fast performance. With MemoryDB, data is stored in memory with Multi-AZ durability, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput. MemoryDB is often used for building durable microservices and latency-sensitive database workloads such as […]

JSON serialization using Serde Rust crates in Amazon RDS for PostgreSQL

In this post, we showcase how to use PGRX and PL/Rust to efficiently access and manipulate all built-in PostgreSQL data types in Rust. We demonstrate how to write performant functions that create and serialize JSON objects that include these built-in types. These functions are directly usable in your database and use the newly supported serde and serde_json crates. We also walk through deploying an Amazon RDS for PostgreSQL instance with PL/Rust enabled and show how PGRX type mapping allows you to use all built-in PostgreSQL types in a JSON object.

Migrate spatial columns from Oracle to Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL using AWS DMS

In this post, we discuss configurations in AWS DMS endpoints and AWS DMS tasks to migrate spatial columns from Oracle to Aurora PostgreSQL-Compatible efficiently.

Vacasa’s migration to Amazon Aurora for a more efficient Property Management System

Vacasa is North America’s leading vacation rental management platform, revolutionizing the rental experience with advanced technology and expert teams. In the competitive short-term vacation property management industry, efficient systems are critical. To maintain its edge and continue providing top-notch service, Vacasa needed to modernize its primary transactional database to improve performance, provide high availability, and reduce costs. In this post, we share Vacasa’s journey from Amazon Relational Database Service (Amazon RDS) for MariaDB to Amazon RDS for MySQL, and finally to Amazon Aurora, highlighting the technical steps taken and the outcomes achieved.