Artificial Intelligence

Author: Dheer Toprani

Amazon QuickSight dashboard showing sales analytics: summary metrics ($2,752,804 in total sales across 99 unique customers), total sales by customer, a sales-quantity-versus-profit scatter plot color-coded by company, and a customer order detail table.

Build a conversational data assistant, Part 2 – Embedding generative business intelligence with Amazon Q in QuickSight

In this post, we dive into how we integrated Amazon Q in QuickSight to transform natural language requests like “Show me how many items were returned in the US over the past 6 months” into meaningful data visualizations. We demonstrate how combining Amazon Bedrock Agents with Amazon Q in QuickSight creates a comprehensive data assistant that delivers both SQL code and visual insights through a single, intuitive conversational interface—democratizing data access across the enterprise.
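As a rough sketch of the embedding side of this integration, the snippet below generates an embed URL for the Amazon Q in QuickSight Generative Q&A experience via the QuickSight GenerateEmbedUrlForRegisteredUser API; the account ID, user ARN, and topic ID are placeholders rather than values from the post.

```python
import boto3

# Placeholder identifiers; substitute your own account, user, and Q topic.
ACCOUNT_ID = "123456789012"
USER_ARN = f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:user/default/analyst"
TOPIC_ID = "sales-analytics-topic"

quicksight = boto3.client("quicksight", region_name="us-east-1")

# Request an embed URL for the Generative Q&A experience, which answers
# natural language questions with auto-generated visualizations.
response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId=ACCOUNT_ID,
    UserArn=USER_ARN,
    SessionLifetimeInMinutes=60,
    ExperienceConfiguration={"GenerativeQnA": {"InitialTopicId": TOPIC_ID}},
)

# Load this URL in an iframe to surface the conversational experience
# inside your own application.
print(response["EmbedUrl"])
```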

Architecture diagram of the solution

Build a conversational data assistant, Part 1: Text-to-SQL with Amazon Bedrock Agents

In this post, we focus on building a Text-to-SQL solution with Amazon Bedrock, a managed service for building generative AI applications. Specifically, we demonstrate the capabilities of Amazon Bedrock Agents. Part 2 explains how we extended the solution to provide business insights using Amazon Q in QuickSight, a business intelligence assistant that answers questions with auto-generated visualizations.
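For orientation, here is a minimal sketch of calling a Bedrock agent from Python with the InvokeAgent API; the agent and alias IDs are hypothetical, and the post itself covers the full Text-to-SQL agent design.

```python
import uuid
import boto3

# Hypothetical IDs; use the agent and alias created in your account.
AGENT_ID = "AGENT123456"
AGENT_ALIAS_ID = "ALIAS123456"

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Send a natural language question to the agent; it plans, generates SQL,
# executes it through its action group, and streams back the answer.
response = runtime.invoke_agent(
    agentId=AGENT_ID,
    agentAliasId=AGENT_ALIAS_ID,
    sessionId=str(uuid.uuid4()),  # one session ID per conversation
    inputText="How many items were returned in the US over the past 6 months?",
)

# The completion is an event stream; concatenate the text chunks.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in response["completion"]
    if "chunk" in event
)
print(answer)
```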

Reducing hallucinations in LLM agents with a verified semantic cache using Amazon Bedrock Knowledge Bases

This post introduces a solution to reduce hallucinations in large language models (LLMs) by implementing a verified semantic cache using Amazon Bedrock Knowledge Bases, which checks whether a user question matches a curated, verified response before generating a new answer. The solution combines the flexibility of LLMs with reliable, verified answers to improve response accuracy, reduce latency, and lower costs while preventing potential misinformation in critical domains such as healthcare, finance, and legal services.
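The core pattern can be sketched as a cache-first lookup, assuming a Knowledge Base populated with curated question-answer pairs; the IDs, similarity threshold, and model choice below are illustrative, not the post's exact configuration.

```python
import boto3

# Illustrative values; tune the threshold against your own cache quality.
KB_ID = "KB12345678"
SCORE_THRESHOLD = 0.85

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def answer(question: str) -> str:
    # 1. Check the verified semantic cache: a Knowledge Base holding
    #    curated question/answer pairs.
    hits = agent_runtime.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": question},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": 1}
        },
    )["retrievalResults"]

    if hits and hits[0]["score"] >= SCORE_THRESHOLD:
        # Cache hit: return the verified answer without calling the LLM,
        # avoiding both hallucination risk and generation latency.
        return hits[0]["content"]["text"]

    # 2. Cache miss: fall back to generating a fresh answer with an LLM.
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": question}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```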