AWS Database Blog
Category: PostgreSQL compatible
Scheduled scaling of Amazon Aurora Serverless with Amazon EventBridge Scheduler
In this post, we demonstrate how you can implement scheduled scaling for Aurora Serverless using Amazon EventBridge Scheduler. By proactively adjusting minimum Aurora Capacity Units (ACUs), you can achieve faster scaling rates during peak periods while maintaining cost efficiency during low-demand times.
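As a rough illustration of the pattern, the following sketch shows an AWS Lambda handler, written with boto3, that an Amazon EventBridge Scheduler schedule could invoke before and after a peak window to change a cluster's ACU range. The cluster identifier, payload field names, and capacity values are placeholders for illustration, not the exact implementation from the post.

```python
import boto3

rds = boto3.client("rds")

def handler(event, context):
    """Adjust the Aurora Serverless v2 capacity range for a cluster.

    Invoked by an EventBridge Scheduler schedule whose input payload supplies
    the cluster identifier and the desired ACU range (names are hypothetical), e.g.
    {"cluster_id": "my-aurora-cluster", "min_acu": 8, "max_acu": 64}.
    """
    rds.modify_db_cluster(
        DBClusterIdentifier=event["cluster_id"],
        ServerlessV2ScalingConfiguration={
            "MinCapacity": float(event["min_acu"]),
            "MaxCapacity": float(event["max_acu"]),
        },
        ApplyImmediately=True,
    )
    return {"status": "ok", "cluster": event["cluster_id"]}
```

Two schedules, for example cron(0 7 ? * MON-FRI *) to raise the minimum ACUs before business hours and cron(0 19 ? * MON-FRI *) to lower them afterward, could each pass their own capacity values in the schedule input.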
Upgrade strategies for Amazon Aurora PostgreSQL and Amazon RDS for PostgreSQL 12
In this post, we explore the end-of-life (EOL) timeline for Aurora PostgreSQL and Amazon RDS for PostgreSQL. We discuss features in PostgreSQL major versions, Amazon RDS Extended Support, and various upgrade strategies, including in-place upgrades, Amazon RDS blue/green deployments, and out-of-place upgrades.
How Mindbody improved query latency and optimized costs using Amazon Aurora PostgreSQL Optimized Reads
In this post, we highlight the scaling and performance challenges Mindbody was facing due to rapid data growth. We also present the root cause analysis and recommendations for adopting Aurora Optimized Reads, outlining the steps taken to address these issues. Finally, we discuss the benefits Mindbody realized from implementing these changes, including enhanced query performance, significant cost savings, and improved price predictability.
Multi-tenant vector search with Amazon Aurora PostgreSQL and Amazon Bedrock Knowledge Bases
In this post, we discuss a fully managed approach that uses Amazon Bedrock Knowledge Bases to simplify the integration of your Aurora data source with your generative AI application. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.
Self-managed multi-tenant vector search with Amazon Aurora PostgreSQL
In this post, we explore the process of building a multi-tenant generative AI application using Aurora PostgreSQL-Compatible for vector storage. In Part 1 (this post), we present a self-managed approach to building the vector search with Aurora. In Part 2, we present a fully managed approach using Amazon Bedrock Knowledge Bases to simplify the integration of the data sources, the Aurora vector store, and your generative AI application.
Simplify database authentication management with the Amazon Aurora PostgreSQL pg_ad_mapping extension
In this post, we look at Kerberos authentication for Amazon Aurora PostgreSQL-Compatible Edition using AWS Directory Service for Microsoft Active Directory, focusing on the new pg_ad_mapping extension and how it can help you manage access control more efficiently.
How Aqua Security exports query data from Amazon Aurora to deliver value to their customers at scale
Aqua Security is the pioneer in securing containerized cloud native applications from development to production. Like many organizations, Aqua faced the challenge of efficiently exporting and analyzing large volumes of data to meet their business requirements. Specifically, Aqua needed to export and query data at scale to share with their customers for continuous monitoring and security analysis. In this post, we explore how Aqua addressed this challenge by using the aws_s3.query_export_to_s3 function with their Amazon Aurora PostgreSQL-Compatible Edition databases and AWS Step Functions to streamline their query output export process, enabling scalable and cost-effective data analysis.
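For readers unfamiliar with aws_s3.query_export_to_s3, the following sketch shows how an export of this kind could be issued from Python with psycopg2. The connection details, table name, S3 bucket, and key are hypothetical, and the Step Functions orchestration described in the post is not shown.

```python
import os
import psycopg2

# All connection details, the table, and the S3 bucket/key are placeholders.
conn = psycopg2.connect(
    host="my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="postgres",
    user="app_user",
    password=os.environ["PGPASSWORD"],
)

EXPORT_SQL = """
SELECT * FROM aws_s3.query_export_to_s3(
    'SELECT * FROM security_findings WHERE scan_date = CURRENT_DATE',
    aws_commons.create_s3_uri('my-export-bucket', 'exports/findings.csv', 'us-east-1'),
    options := 'format csv'
);
"""

with conn, conn.cursor() as cur:
    # The function returns the number of rows, files, and bytes written to Amazon S3.
    cur.execute(EXPORT_SQL)
    rows_uploaded, files_uploaded, bytes_uploaded = cur.fetchone()
    print(f"Exported {rows_uploaded} rows in {files_uploaded} file(s), {bytes_uploaded} bytes")
```

Running this requires the aws_s3 extension to be installed in the database and an IAM role attached to the cluster that allows writing to the target bucket.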
Monitor the health of Amazon Aurora PostgreSQL instances in large-scale deployments
In this post, we show you how to achieve better visibility into the health of your Amazon Aurora PostgreSQL instances, proactively address potential issues, and maintain the smooth operation of your database infrastructure. The solution is designed to scale with your deployment, providing robust and reliable monitoring for even the largest fleets of instances.
Diving deep into the new Amazon Aurora Global Database writer endpoint
On October 22, 2024, we announced the availability of the Aurora Global Database writer endpoint, a highly available and fully managed endpoint for your global database. Aurora automatically updates this endpoint to point to the current writer instance in your global cluster after a cross-Region switchover or failover, alleviating the need for application changes and simplifying routing of requests to the writer instance. In this post, we dive deep into the new Global Database writer endpoint, covering its benefits and key considerations for using it with your applications.
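To illustrate the idea, here is a minimal sketch of an application connecting through the Global Database writer endpoint with psycopg2. The endpoint hostname and credentials are placeholders; you would substitute the value shown for your own global cluster.

```python
import os
import psycopg2

# Placeholder hostname: use the writer endpoint shown for your global cluster.
# Because Aurora repoints this endpoint after a cross-Region switchover or
# failover, the application keeps a single connection target instead of
# swapping hostnames in its configuration.
WRITER_ENDPOINT = "my-global-cluster.global-xxxxxxxx.global.rds.amazonaws.com"

conn = psycopg2.connect(
    host=WRITER_ENDPOINT,
    port=5432,
    dbname="postgres",
    user="app_user",
    password=os.environ["PGPASSWORD"],
)

with conn, conn.cursor() as cur:
    # Reports which DB instance served the query, i.e., the current writer.
    cur.execute("SELECT aurora_db_instance_identifier()")
    print(cur.fetchone()[0])
```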
Migrate spatial columns from Oracle to Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL using AWS DMS
In this post, we discuss AWS DMS endpoint and task configurations for migrating spatial columns from Oracle to Aurora PostgreSQL-Compatible efficiently.