This Guidance shows how to use large language models (LLMs) to generate SQL queries and perform data analytics, enhancing the value of your data. It uses Amazon Bedrock and Amazon SageMaker to build a SQL generator driven by natural language processing (NLP). To improve the accuracy of SQL generation, this Guidance also applies retrieval-augmented generation (RAG), retrieving historical question-and-SQL pairs as few-shot samples for the prompt. With this Guidance, you can optimize costs for data analysis and operations teams by automating routine tasks, reducing working hours, and minimizing labor costs.
Architecture Diagram
Step 1
The user interacts with the system through Elastic Load Balancing (ELB), which directs traffic to the appropriate service within the AWS Cloud.
Step 2
The configurator uses ELB to access configuration-related services.
Step 3
Amazon Cognito handles user authentication and authorization, ensuring secure access to the services.
Step 4
The authenticated user accesses the frontend (Service 2) and API (Service 3) hosted on Amazon Elastic Container Service (Amazon ECS), which manages the deployment and scaling of these services.
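To make the deployment concrete, the sketch below shows the shape of an ECS task definition for the API service on Fargate, including a Secrets Manager reference for a database credential. All names, the image URI, the port, and the secret ARN are illustrative placeholders, not values from this Guidance:

```json
{
  "family": "text-to-sql-api",
  "cpu": "512",
  "memory": "1024",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "containerDefinitions": [
    {
      "name": "api",
      "image": "<account>.dkr.ecr.us-east-1.amazonaws.com/text-to-sql-api:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-credentials"
        }
      ]
    }
  ]
}
```

Injecting the credential through the `secrets` field keeps it out of the task definition and environment-variable plaintext, which is how the Secrets Manager integration described in Step 5 is typically wired up.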
Step 5
AWS Secrets Manager securely stores and retrieves sensitive information, such as database credentials, used by the services.
Step 6
Amazon DynamoDB stores user profiles and related data, providing a scalable and high-performance NoSQL database.
Step 7
The embedding module leverages Amazon OpenSearch Service and embedding models from Amazon Bedrock (Titan) or Amazon SageMaker (for example, BGE) to process and index data for efficient querying.
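The retrieval step can be sketched in miniature without calling AWS at all. The toy example below uses hard-coded vectors and cosine similarity in place of a Bedrock Titan (or SageMaker-hosted BGE) embedding model and an OpenSearch k-NN index; the questions, SQL strings, and vectors are invented for illustration:

```python
import math

# Toy store of historical (question, sql, embedding) triples, standing in
# for an OpenSearch k-NN index populated by the embedding module.
HISTORY = [
    ("How many orders were placed last month?",
     "SELECT COUNT(*) FROM orders WHERE order_date >= date('now', '-1 month')",
     [0.9, 0.1, 0.0]),
    ("List the top 5 customers by revenue",
     "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id "
     "ORDER BY 2 DESC LIMIT 5",
     [0.1, 0.9, 0.1]),
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_few_shot(query_vec, k=1):
    """Return the k most similar (question, sql) pairs as few-shot samples."""
    ranked = sorted(HISTORY, key=lambda h: cosine(query_vec, h[2]), reverse=True)
    return [(q, s) for q, s, _ in ranked[:k]]
```

In the deployed Guidance, the query vector would come from the embedding model and the similarity search would be executed by OpenSearch Service rather than in application code.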
Step 8
Amazon Bedrock (Claude) or SageMaker (for example, Llama 3) converts natural language text into SQL queries, enabling users to interact with databases using plain language.
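Before the model is invoked, the retrieved few-shot samples and the schema DDL are assembled into a prompt. The exact template used by this Guidance is not published; the sketch below shows one common layout (schema, then examples, then the new question) as an assumption:

```python
def build_text_to_sql_prompt(ddl, few_shot, question):
    """Assemble a text-to-SQL prompt: schema DDL, few-shot samples, question.

    `few_shot` is a list of (question, sql) pairs, e.g. retrieved from a
    store of historical queries. The template here is illustrative, not
    the one used by the Guidance.
    """
    parts = ["You are a SQL generator. Use only the tables defined below.", "", ddl, ""]
    for q, sql in few_shot:
        parts += [f"Question: {q}", f"SQL: {sql}", ""]
    parts += [f"Question: {question}", "SQL:"]
    return "\n".join(parts)
```

The resulting string would be sent to Claude via the Amazon Bedrock `InvokeModel` API, or to a Llama 3 endpoint hosted on SageMaker, and the completion parsed as the generated SQL.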
Step 9
The system pulls data definition language (DDL) information and queries customer data sources such as Amazon Aurora, Amazon Relational Database Service (Amazon RDS), Amazon Athena, and other third-party integrations to fetch data as required by the user queries.
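Fetching DDL amounts to querying the database's own catalog. The runnable sketch below uses SQLite's `sqlite_master` table as a stand-in for the catalog queries the Guidance would run against Aurora, Amazon RDS, or Athena (for example, `information_schema` views or `SHOW CREATE TABLE`):

```python
import sqlite3

def fetch_ddl(conn):
    """Pull CREATE TABLE statements (DDL) from a SQLite database.

    sqlite_master stores the original DDL text for each table; engines
    like MySQL or PostgreSQL expose the equivalent through their own
    catalog tables or information_schema.
    """
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(r[0] for r in rows)

# Illustrative in-memory database with one table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
```

The DDL string returned here is what gets embedded into the text-to-SQL prompt so the model knows which tables and columns exist.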
Get Started
Deploy this Guidance
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
AWS managed services like Amazon ECS, Amazon Bedrock, and SageMaker offload operational burden, allowing developers to focus on application logic rather than undifferentiated heavy lifting. These services handle provisioning, scaling, patching, and infrastructure maintenance; their automatic scaling capabilities let applications handle varying user traffic without compromising performance or availability.
-
Security
AWS Identity and Access Management (IAM) manages access to AWS resources by creating and managing users and groups, controlling their permissions to perform specific actions on specific resources. Implementing the principle of least privilege minimizes the risk of unauthorized access, enhancing application security.
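As an illustration of least privilege, an IAM policy for the API service might grant only the Bedrock `InvokeModel` action on the specific models it calls. The region, model pattern, and statement ID below are illustrative assumptions, not values from this Guidance:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowModelInvokeOnly",
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-*"
    }
  ]
}
```

Scoping the `Resource` to specific foundation-model ARNs, rather than `*`, keeps a compromised service role from invoking arbitrary models or other Bedrock APIs.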
-
Reliability
Amazon ECS relieves the responsibility of managing and scaling underlying infrastructure, reducing operational overhead and enabling automatic scaling and recovery from failures. OpenSearch Service provides high availability and resilience against node failures or data loss, helping ensure critical functionalities remain uninterrupted.
-
Performance Efficiency
OpenSearch Service is a distributed search and analytics engine leveraging Apache Lucene for high-performance text search and data retrieval. It enables efficient storage and fast retrieval of large volumes of data, such as our historical question and SQL datasets. By using OpenSearch Service in this Guidance, you can rapidly access relevant information and assemble prompts for our text-to-SQL functionality, benefiting from its caching mechanisms and distributed architecture for enhanced performance.
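The retrieval of historical question-and-SQL pairs maps to an OpenSearch k-NN query. The sketch below shows the general shape of such a query body; the field name `question_embedding` and the vector values are assumptions (the real field name and vector dimension depend on the index mapping and embedding model used):

```json
{
  "size": 3,
  "query": {
    "knn": {
      "question_embedding": {
        "vector": [0.12, -0.04, 0.33],
        "k": 3
      }
    }
  }
}
```

The top `k` hits return the stored question-and-SQL pairs, which are then inserted into the prompt as few-shot samples.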
-
Cost Optimization
OpenSearch Service is a managed service that relieves you from managing search infrastructure. AWS handles the underlying resources, patching, and scaling, allowing you to focus on application functionality. Its ability to scale resources based on usage helps optimize costs by avoiding overprovisioning or underutilization of resources.
-
Sustainability
AWS Cloud infrastructure is designed for sustainability, leveraging energy-efficient data centers, renewable energy sources, and optimized resource utilization. You can offload infrastructure management to AWS, reducing environmental impact while leveraging sustainable practices for search and analytics needs.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.