AWS Database Blog
Category: Advanced (300)
Improve speed and reduce cost for generative AI workloads with a persistent semantic cache in Amazon MemoryDB
In this post, we present the concepts needed to use a persistent semantic cache in MemoryDB with Knowledge Bases for Amazon Bedrock, and the steps to create a chatbot application that uses the cache. We use MemoryDB as the caching layer for this use case because it delivers the fastest vector search performance at the highest recall rates among popular vector databases on AWS. We use Knowledge Bases for Amazon Bedrock as a vector database because it implements and maintains the RAG functionality for our application without requiring us to write additional code.
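As a rough illustration of the pattern the post describes, the sketch below checks a MemoryDB vector index for a semantically similar question before falling back to the RAG pipeline. The index name, field names, endpoint, and distance threshold are assumptions for illustration, and the embedding model shown (Titan Text Embeddings) stands in for whichever model your application uses.

```python
# Minimal sketch of a semantic cache lookup against MemoryDB (Redis-compatible).
# Index name, field names, endpoint, and the distance threshold are assumptions.
import json

import boto3
import numpy as np
import redis
from redis.commands.search.query import Query

bedrock = boto3.client("bedrock-runtime")
cache = redis.Redis(host="my-memorydb-cluster.example.amazonaws.com", port=6379, ssl=True)

def embed(text: str) -> bytes:
    """Create a Titan text embedding and pack it as float32 bytes for the vector query."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    vector = json.loads(response["body"].read())["embedding"]
    return np.array(vector, dtype=np.float32).tobytes()

def lookup_cached_answer(question: str, max_distance: float = 0.2):
    """Return a previously cached answer if a semantically similar question exists."""
    query = (
        Query("*=>[KNN 1 @embedding $vec AS distance]")
        .return_fields("answer", "distance")
        .dialect(2)
    )
    results = cache.ft("idx:semantic-cache").search(
        query, query_params={"vec": embed(question)}
    )
    if results.docs and float(results.docs[0].distance) <= max_distance:
        return results.docs[0].answer  # cache hit: skip the RAG pipeline entirely
    return None  # cache miss: fall through to Knowledge Bases for Amazon Bedrock
```

On a cache hit, the stored answer is returned directly, which is where the latency and cost savings come from; on a miss, the application queries Knowledge Bases for Amazon Bedrock as usual and can write the new question-answer pair back to the cache.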
How to deploy Stacks blockchain nodes on AWS with the AWS Blockchain Node Runners Stacks blueprint
In this post, we demonstrate how to swiftly deploy Stacks blockchain nodes on AWS with the AWS Blockchain Node Runners blueprint.
Stream change data in a multicloud environment using AWS DMS, Amazon MSK, and Amazon Managed Service for Apache Flink
When workloads and their corresponding transactional databases are distributed across multiple cloud providers, it can be challenging to use that data in near real time for advanced analytics. In this post, we discuss the architecture, approaches, and considerations for streaming data changes from transactional databases deployed in other cloud providers to a streaming data solution deployed on AWS.
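To make the shape of the streamed change data concrete, here is a minimal, hedged sketch of a consumer reading AWS DMS change data capture (CDC) records in DMS JSON format from an MSK topic. The topic name and broker address are placeholders, and in the architecture discussed in the post the consumption side is an Amazon Managed Service for Apache Flink application rather than a standalone script.

```python
# Minimal sketch: inspect AWS DMS CDC records landing in an MSK topic.
# The topic name and bootstrap broker are placeholders for illustration.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "dms-cdc.orders",  # hypothetical topic fed by the DMS task
    bootstrap_servers=["b-1.example.kafka.us-east-1.amazonaws.com:9094"],
    security_protocol="SSL",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    change = record.value
    # DMS JSON envelopes carry the row image in "data" and operation details in "metadata".
    operation = change.get("metadata", {}).get("operation")  # insert, update, or delete
    row = change.get("data", {})
    print(f"{operation}: {row}")
```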
Power real-time vector search capabilities with Amazon MemoryDB
In today’s rapidly advancing world of generative artificial intelligence (AI), businesses across diverse industries are transforming customer experiences through the power of real-time search. By harnessing the untapped potential of unstructured data, ranging from text to images and videos, organizations can redefine the standards of engagement and personalization. A key component of this […]
Implement a rollback strategy after an Amazon Aurora MySQL blue/green deployment switchover
In this post, we discuss the steps to perform a blue/green deployment switchover and how to set up and carry out a rollback strategy after the switchover for Amazon Aurora MySQL-Compatible Edition.
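For orientation, a minimal sketch of the switchover step with boto3 is shown below. The deployment identifier and timeout are placeholders, and the rollback procedure itself follows the approach described in the post.

```python
# Minimal sketch: switch over an Aurora MySQL blue/green deployment with boto3.
# The deployment identifier is a placeholder.
import boto3

rds = boto3.client("rds")

# Trigger the switchover; the timeout (in seconds) bounds how long RDS waits
# before abandoning the switchover and leaving the blue environment in place.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier="bgd-EXAMPLE1234567890",
    SwitchoverTimeout=300,
)

# Poll the deployment status until it reports SWITCHOVER_COMPLETED.
response = rds.describe_blue_green_deployments(
    BlueGreenDeploymentIdentifier="bgd-EXAMPLE1234567890"
)
print(response["BlueGreenDeployments"][0]["Status"])
```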
Migrate an on-premises MySQL database to Amazon Aurora MySQL over a private network using AWS DMS homogeneous data migration and Network Load Balancer
Homogeneous data migrations in AWS DMS simplify the migration of on-premises databases to their Amazon RDS equivalents. In this post, we guide you through the steps of performing a homogeneous migration from an on-premises MySQL database to Amazon Aurora MySQL with AWS DMS homogeneous data migrations over a private network using a Network Load Balancer.
Stop and start Amazon RDS Multi-AZ DB clusters on a schedule
Stopping and starting RDS Multi-AZ DB clusters can help you reduce costs by temporarily stopping clusters in your development or test environments when you aren’t using them (for example, during vacations, holidays, or weekends). In this post, we show you how to stop and start your RDS Multi-AZ DB clusters on a schedule, giving you more control over your infrastructure resources.
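A minimal sketch of the idea, assuming an AWS Lambda function invoked by an EventBridge schedule with a simple {"action": "stop"} or {"action": "start"} payload; the cluster identifier and event shape are illustrative assumptions, not the post's exact implementation.

```python
# Minimal sketch: a Lambda handler that an EventBridge schedule could invoke
# to stop or start a Multi-AZ DB cluster outside business hours.
import boto3

rds = boto3.client("rds")
DB_CLUSTER_IDENTIFIER = "my-multiaz-db-cluster"  # placeholder

def lambda_handler(event, context):
    action = event.get("action")
    if action == "stop":
        rds.stop_db_cluster(DBClusterIdentifier=DB_CLUSTER_IDENTIFIER)
    elif action == "start":
        rds.start_db_cluster(DBClusterIdentifier=DB_CLUSTER_IDENTIFIER)
    else:
        raise ValueError(f"Unsupported action: {action}")
    return {"cluster": DB_CLUSTER_IDENTIFIER, "action": action}
```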
Using knowledge graphs to build GraphRAG applications with Amazon Bedrock and Amazon Neptune
Retrieval Augmented Generation (RAG) is an innovative approach that combines the power of large language models with external knowledge sources, enabling more accurate and informative content generation. Using knowledge graphs as sources for RAG (GraphRAG) yields numerous advantages. Knowledge graphs encapsulate a vast wealth of curated and interconnected information, enabling the generation of responses that are grounded in factual knowledge. In this post, we show you how to build GraphRAG applications using Amazon Bedrock and Amazon Neptune with the LlamaIndex framework.
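To show the underlying retrieve-then-generate pattern outside of LlamaIndex, here is a hedged sketch that pulls neighboring facts from a Neptune knowledge graph with an openCypher query and grounds a Bedrock model response in them. The endpoint, graph query, entity lookup, and model ID are placeholder assumptions, not the post's implementation.

```python
# Minimal sketch of the GraphRAG pattern: retrieve facts from a Neptune knowledge graph,
# then ground the model's answer in them. Endpoint, query, and model ID are placeholders;
# the post itself builds this with the LlamaIndex framework rather than raw API calls.
import json

import boto3

neptune = boto3.client(
    "neptunedata",
    endpoint_url="https://my-neptune-cluster.cluster-example.us-east-1.neptune.amazonaws.com:8182",
)
bedrock = boto3.client("bedrock-runtime")

def answer_with_graph_context(question: str, entity: str) -> str:
    # 1. Retrieve neighboring facts about the entity from the knowledge graph (openCypher).
    graph = neptune.execute_open_cypher_query(
        openCypherQuery=(
            "MATCH (e {name: $name})-[r]->(n) "
            "RETURN e.name AS subject, type(r) AS predicate, n.name AS object LIMIT 25"
        ),
        parameters=json.dumps({"name": entity}),
    )
    facts = json.dumps(graph["results"], default=str)

    # 2. Ask the model to answer using only the retrieved graph facts.
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=[{
            "role": "user",
            "content": [{"text": f"Graph facts:\n{facts}\n\nQuestion: {question}"}],
        }],
    )
    return response["output"]["message"]["content"][0]["text"]
```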
How Infosys used Amazon Aurora zero-ETL integration with Amazon Redshift for near real-time analytics and insights
In this post, we talk about how Infosys redefined the ETL landscape for their product sales and freight management application using Amazon Aurora zero-ETL integration with Amazon Redshift. We also share our experience with the previous process and how the new zero-ETL integration helped us move data into a Redshift cluster for analytics with minimal effort, along with metrics to monitor the health of the integration.
Implementing a fall forward strategy from Amazon RDS for SQL Server Transparent Data Encryption (TDE) and Non-TDE Enabled databases to self-managed SQL Server
In this post, we discuss how to set up a rollback strategy using a fall forward approach from Amazon RDS for SQL Server Transparent Data Encryption (TDE)-enabled and non-TDE-enabled databases to self-managed SQL Server, using SQL Server's native backup and restore functionality.