AWS Database Blog
Category: Artificial Intelligence
Graph-powered authorization: Relationship-based access control for access management
Authorization systems are a critical component of modern applications, yet traditional approaches like role-based access control (RBAC) and attribute-based access control (ABAC) struggle to meet the complex access control requirements of today’s enterprises. In this post, we introduce relationship-based access control (ReBAC) as an alternative for enterprise-scale authorization. We explore how the proposed […]
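To make the contrast concrete: where RBAC asks what role a user holds and ABAC asks what attributes apply, ReBAC asks whether a chain of relationships connects the user to the resource. The sketch below is only a minimal illustration of that relationship check over a small in-memory graph; the relation names and traversal depth are assumptions for the example, not the architecture described in the post.

```python
from collections import deque

# Toy relationship graph: (subject, relation, object) tuples.
# In the post's setting this data would live in a graph database;
# the edge names here are illustrative assumptions.
RELATIONSHIPS = [
    ("alice", "member_of", "team-data"),
    ("team-data", "editor_of", "folder-reports"),
    ("folder-reports", "parent_of", "doc-q3-forecast"),
]


def is_authorized(user: str, resource: str, max_depth: int = 5) -> bool:
    """Breadth-first search for a relationship path from user to resource."""
    adjacency = {}
    for subject, _relation, obj in RELATIONSHIPS:
        adjacency.setdefault(subject, []).append(obj)

    frontier = deque([(user, 0)])
    visited = {user}
    while frontier:
        node, depth = frontier.popleft()
        if node == resource:
            return True
        if depth >= max_depth:
            continue
        for neighbor in adjacency.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return False


print(is_authorized("alice", "doc-q3-forecast"))  # True via team -> folder -> doc
```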
Using generative AI and Amazon Bedrock to generate SPARQL queries to discover protein functional information with UniProtKB and Amazon Neptune
In this post, we demonstrate how to use generative AI and Amazon Bedrock to transform natural language questions into graph queries to run against a knowledge graph. We explore the generation of queries written in the SPARQL query language, a well-known language for querying a graph whose data is represented using the Resource Description Framework (RDF).
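As a minimal sketch of the query-generation step (not the post's full solution), the snippet below asks a Bedrock model, via the Converse API, to translate a natural language question into SPARQL; the prompt wording, model ID, and UniProtKB prefix are illustrative assumptions.

```python
import boto3

# Assumes AWS credentials and a region where the chosen model is enabled.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

question = "Which human proteins are associated with insulin signaling?"
prompt = (
    "You translate questions into SPARQL for the UniProtKB RDF schema "
    "(prefix up: <http://purl.uniprot.org/core/>). "
    "Return only the SPARQL query.\n\n"
    f"Question: {question}"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
sparql_query = response["output"]["message"]["content"][0]["text"]
print(sparql_query)  # Review the generated query before running it against Neptune
```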
Integrate natural language processing and generative AI with relational databases
In this post, we present an approach to using natural language processing (NLP) to query an Amazon Aurora PostgreSQL-Compatible Edition database. The solution assumes that an organization already has an Aurora PostgreSQL database. We create a web application framework using Flask for users to interact with the database; JavaScript and Python code act as the interface between the web framework, Amazon Bedrock, and the database.
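A rough sketch of how these pieces could fit together is shown below: a Flask route sends the user's question to Amazon Bedrock to produce SQL and runs the result against Aurora PostgreSQL with psycopg2. The table schema, prompt, model ID, and connection details are placeholders rather than the post's exact code.

```python
import boto3
import psycopg2
from flask import Flask, jsonify, request

app = Flask(__name__)
bedrock = boto3.client("bedrock-runtime")


@app.route("/ask", methods=["POST"])
def ask():
    question = request.json["question"]
    prompt = (
        "Translate the question into a single read-only PostgreSQL query for the "
        "table orders(order_id, customer, total, order_date). Return only SQL.\n\n"
        f"Question: {question}"
    )
    result = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    sql = result["output"]["message"]["content"][0]["text"].strip()

    # Placeholder connection details; validate generated SQL before executing it.
    with psycopg2.connect(host="my-aurora-endpoint", dbname="appdb",
                          user="app_user", password="***") as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
            rows = [[str(col) for col in row] for row in cur.fetchall()]
    return jsonify({"sql": sql, "rows": rows})
```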
Multi-tenant vector search with Amazon Aurora PostgreSQL and Amazon Bedrock Knowledge Bases
In this post, we discuss the fully managed approach, using Amazon Bedrock Knowledge Bases to simplify the integration of your Aurora data source with your generative AI application. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.
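As a small illustration of tenant isolation with the managed approach (assuming documents were ingested with a tenant_id metadata attribute, which is not spelled out in the excerpt), a retrieval call can apply a metadata filter so each tenant only searches its own documents; the knowledge base ID and attribute name below are placeholders.

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

response = agent_runtime.retrieve(
    knowledgeBaseId="KB1234567890",  # placeholder knowledge base ID
    retrievalQuery={"text": "What is our refund policy?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            # Restrict results to one tenant's documents via metadata filtering.
            "filter": {"equals": {"key": "tenant_id", "value": "tenant-42"}},
        }
    },
)
for item in response["retrievalResults"]:
    print(item["content"]["text"])
```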
Self-managed multi-tenant vector search with Amazon Aurora PostgreSQL
In this post, we explore the process of building a multi-tenant generative AI application using Aurora PostgreSQL-Compatible for vector storage. In Part 1 (this post), we present a self-managed approach to building the vector search with Aurora. In Part 2, we present a fully managed approach using Amazon Bedrock Knowledge Bases to simplify the integration of the data sources, the Aurora vector store, and your generative AI application.
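A minimal sketch of the self-managed pattern, assuming an Aurora PostgreSQL table with the pgvector extension and a tenant_id column; the model ID, table name, and column names are illustrative, and the post walks through the full version.

```python
import json

import boto3
import psycopg2

# Embed the user's question (Titan Text Embeddings shown as an example model).
bedrock = boto3.client("bedrock-runtime")
resp = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=json.dumps({"inputText": "How do I reset my password?"}),
)
query_embedding = json.loads(resp["body"].read())["embedding"]

conn = psycopg2.connect(host="my-aurora-endpoint", dbname="vectors",
                        user="app_user", password="***")  # placeholder details
with conn, conn.cursor() as cur:
    # Filter by tenant first, then order by cosine distance (pgvector's <=> operator).
    cur.execute(
        """
        SELECT chunk_text
        FROM document_chunks
        WHERE tenant_id = %s
        ORDER BY embedding <=> %s::vector
        LIMIT 5
        """,
        ("tenant-42", str(query_embedding)),
    )
    for (chunk_text,) in cur.fetchall():
        print(chunk_text)
```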
Create a 360-degree master data management patient view solution using Amazon Neptune and generative AI
In this post, we explore how you can achieve a 360-degree patient view using Amazon Neptune and generative AI, and use it to strengthen your organization’s research and accelerate breakthroughs. By consolidating information from multiple sources such as electronic health records (EHRs), lab reports, prescriptions, and medical histories into a single location, healthcare providers can gain a better understanding of a patient’s health.
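As a rough illustration of what a consolidated patient query could look like (the post defines the actual data model), the snippet below sends an openCypher query to Neptune's HTTPS endpoint; the Patient, Encounter, and Prescription labels and the endpoint value are assumptions for the example.

```python
import requests

# Placeholder cluster endpoint; Neptune exposes openCypher over HTTPS on port 8182.
NEPTUNE_ENDPOINT = "https://my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/openCypher"

query = """
MATCH (p:Patient {patient_id: 'P-1001'})
OPTIONAL MATCH (p)-[:HAD_ENCOUNTER]->(e:Encounter)
OPTIONAL MATCH (p)-[:PRESCRIBED]->(rx:Prescription)
RETURN p, collect(DISTINCT e) AS encounters, collect(DISTINCT rx) AS prescriptions
"""

# IAM (SigV4) request signing is omitted for brevity; add it if your cluster requires it.
response = requests.post(NEPTUNE_ENDPOINT, data={"query": query})
print(response.json())
```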
How Iterate.ai uses Amazon MemoryDB to accelerate and cost-optimize their workforce management conversational AI agent
Iterate.ai is an enterprise AI platform company delivering innovative AI solutions to industries such as retail, finance, healthcare, and quick-service restaurants. Among its standout offerings is Frontline, a workforce management platform powered by AI, designed to support and empower frontline workers. Available on both the Apple App Store and Google Play, Frontline uses advanced AI tools to streamline operational efficiency and enhance communication among dispersed workforces. In this post, we give an overview of durable semantic caching in Amazon MemoryDB, and share how Iterate.ai used this functionality to accelerate and cost-optimize Frontline.
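To make the caching idea concrete: a semantic cache embeds each incoming question and reuses a stored answer when a sufficiently similar question has been answered before, calling the model only on a miss. The sketch below shows that flow with an in-memory stand-in for the durable MemoryDB store; the similarity threshold and the embed/call_llm callables are illustrative assumptions, not the post's implementation.

```python
import numpy as np

# In the post, the cache is durable storage in Amazon MemoryDB with vector search;
# here an in-memory list stands in so the flow is runnable on its own.
_cache: list[tuple[np.ndarray, str]] = []
SIMILARITY_THRESHOLD = 0.9  # arbitrary example value


def _cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def answer_with_semantic_cache(question: str, embed, call_llm) -> str:
    """Return a cached answer for a semantically similar question, else call the LLM."""
    embedding = np.asarray(embed(question), dtype=np.float32)
    if _cache:
        best_emb, best_answer = max(_cache, key=lambda item: _cosine(item[0], embedding))
        if _cosine(best_emb, embedding) >= SIMILARITY_THRESHOLD:
            return best_answer          # cache hit: skip the model call
    answer = call_llm(question)         # cache miss: invoke the model
    _cache.append((embedding, answer))  # store for future, similar questions
    return answer
```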
Accelerate your generative AI application development with Amazon Bedrock Knowledge Bases Quick Create and Amazon Aurora Serverless
In this post, we look at two capabilities in Amazon Bedrock Knowledge Bases that make it easier to build RAG workflows with Amazon Aurora Serverless v2 as the vector store. The first capability helps you easily create an Aurora Serverless v2 knowledge base to use with Amazon Bedrock, and the second capability enables you to automate deploying your RAG workflow across environments.
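As a small sketch of querying such a knowledge base once it exists, the RetrieveAndGenerate API can fetch relevant chunks from the Aurora Serverless v2 vector store and have a model compose the answer in a single call; the knowledge base ID and model ARN below are placeholders.

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

response = agent_runtime.retrieve_and_generate(
    input={"text": "Summarize our onboarding checklist."},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])
```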
Amazon DynamoDB data models for generative AI chatbots
Amazon DynamoDB is ideal for storing chat history and metadata due to its scalability and low latency. DynamoDB can efficiently store chat history, allowing quick access to past interactions. User-specific metadata, such as preferences and session information, can be stored to personalize responses and manage active sessions, enhancing the overall chatbot experience. In this post, we explore how to design an optimal schema for chatbots, whether you’re building a small proof of concept application or deploying a large-scale production system.
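One common schema choice (a sketch, not the post's prescribed design) keys the table by session and stores each turn as an item whose sort key encodes the timestamp, so recent history can be read with a single query; the table and attribute names below are illustrative.

```python
import time

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ChatHistory")  # partition key: session_id (S), sort key: sort_key (S)


def save_message(session_id: str, role: str, text: str) -> None:
    """Store one chat turn; the timestamp-based sort key keeps turns in order."""
    table.put_item(Item={
        "session_id": session_id,
        "sort_key": f"MSG#{int(time.time() * 1000)}",
        "role": role,
        "text": text,
    })


def load_recent_messages(session_id: str, limit: int = 20) -> list:
    """Fetch the most recent turns for a session, newest first."""
    response = table.query(
        KeyConditionExpression=Key("session_id").eq(session_id)
        & Key("sort_key").begins_with("MSG#"),
        ScanIndexForward=False,  # newest first
        Limit=limit,
    )
    return response["Items"]
```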
Build a scalable, context-aware chatbot with Amazon DynamoDB, Amazon Bedrock, and LangChain
Amazon DynamoDB, Amazon Bedrock, and LangChain can provide a powerful combination for building robust, context-aware chatbots. In this post, we explore how to use LangChain with DynamoDB to manage conversation history and integrate it with Amazon Bedrock to deliver intelligent, contextually aware responses. We break down the concepts behind the DynamoDB chat connector in LangChain, discuss the advantages of this approach, and guide you through the essential steps to implement it in your own chatbot.
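A minimal sketch of that wiring, assuming the langchain-aws and langchain-community packages and a DynamoDB table with a SessionId partition key; the model ID and table name are illustrative.

```python
from langchain_aws import ChatBedrock
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

# Illustrative model ID; the DynamoDB table (here "SessionTable") needs a
# string partition key named SessionId, which the connector expects by default.
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])

# Wrap the chain so every call loads and persists history in DynamoDB per session.
chain = RunnableWithMessageHistory(
    prompt | llm,
    lambda session_id: DynamoDBChatMessageHistory(
        table_name="SessionTable", session_id=session_id
    ),
    input_messages_key="question",
    history_messages_key="history",
)

reply = chain.invoke(
    {"question": "What did I ask you about earlier?"},
    config={"configurable": {"session_id": "user-123"}},
)
print(reply.content)
```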