AWS Machine Learning Blog
Category: Generative AI
Unlocking generative AI for enterprises: How SnapLogic powers their low-code Agent Creator using Amazon Bedrock
In this post, we learn how SnapLogic’s Agent Creator leverages Amazon Bedrock to provide a low-code platform that enables enterprises to quickly develop and deploy powerful generative AI applications without deep technical expertise.
Fine-tune a BGE embedding model using synthetic data from Amazon Bedrock
In this post, we demonstrate how to use Amazon Bedrock to create synthetic data, fine-tune a BAAI General Embeddings (BGE) model, and deploy it using Amazon SageMaker.
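The core of that workflow is generating synthetic query-passage pairs with a Bedrock model. The following is a minimal sketch of that step using the Bedrock Converse API; the model ID, prompt wording, and placeholder passage are illustrative assumptions, and the downstream BGE fine-tuning and SageMaker deployment are not shown.

```python
import json
import boto3

# Sketch: generate synthetic (query, passage) pairs with Amazon Bedrock
# to build a fine-tuning dataset for a BGE embedding model.
# The model ID, prompt, and corpus below are illustrative placeholders.
bedrock = boto3.client("bedrock-runtime")

def synthetic_queries(passage: str, n: int = 3) -> list[str]:
    prompt = (
        f"Write {n} short search queries a user might type to find this passage. "
        f"Return one query per line.\n\nPassage:\n{passage}"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    text = response["output"]["message"]["content"][0]["text"]
    return [line.strip() for line in text.splitlines() if line.strip()]

# Each (query, passage) pair becomes a positive training example for the BGE model.
pairs = [{"query": q, "pos": [p]} for p in ["<your passage>"] for q in synthetic_queries(p)]
print(json.dumps(pairs[:2], indent=2))
```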
Boost post-call analytics with Amazon Q in QuickSight
In this post, we show you how to use Amazon Q in QuickSight to unlock powerful post-call analytics and visualizations, empowering your organization to make data-driven decisions and drive continuous improvement.
Create a next generation chat assistant with Amazon Bedrock, Amazon Connect, Amazon Lex, LangChain, and WhatsApp
In this post, we demonstrate how to deploy a contextual AI assistant. We build a solution that provides users with a familiar and convenient interface using Amazon Bedrock Knowledge Bases, Amazon Lex, and Amazon Connect, with WhatsApp as the channel.
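At the heart of the assistant is a retrieval-augmented answer step against a Bedrock knowledge base. The sketch below shows that call in isolation; the knowledge base ID and model ARN are placeholders, and the Lex, Connect, and WhatsApp wiring is handled outside this snippet.

```python
import boto3

# Sketch: query an Amazon Bedrock knowledge base and generate a grounded reply.
# knowledgeBaseId and modelArn are placeholders for your own resources.
bedrock_agent = boto3.client("bedrock-agent-runtime")

response = bedrock_agent.retrieve_and_generate(
    input={"text": "What are your support hours?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "YOUR_KB_ID",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])
```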
Generative AI foundation model training on Amazon SageMaker
In this post, we explore how organizations can cost-effectively customize and adapt FMs using AWS managed services such as Amazon SageMaker training jobs and Amazon SageMaker HyperPod. We discuss how these powerful tools enable organizations to optimize compute resources and reduce the complexity of model training and fine-tuning. We also discuss how to decide which Amazon SageMaker service best fits your business needs and requirements.
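For orientation, the following is a minimal sketch of launching an FM customization run as a SageMaker training job with the SageMaker Python SDK. The entry point script, role, instance type, hyperparameters, and S3 paths are placeholders, and HyperPod cluster provisioning is a separate workflow not shown here.

```python
from sagemaker.pytorch import PyTorch

# Sketch: a SageMaker training job for FM customization.
# Script, role, instance type, and S3 paths are placeholders.
estimator = PyTorch(
    entry_point="train.py",
    source_dir="scripts",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.p4d.24xlarge",
    instance_count=2,
    framework_version="2.1",
    py_version="py310",
    hyperparameters={"epochs": 1, "model_name_or_path": "meta-llama/Meta-Llama-3-8B"},
)

estimator.fit({"training": "s3://your-bucket/fm-training-data/"})
```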
Automate fine-tuning of Llama 3.x models with the new visual designer for Amazon SageMaker Pipelines
In this post, we show you how to set up an automated LLM customization (fine-tuning) workflow so that the Llama 3.x models from Meta can provide high-quality summaries of SEC filings for financial applications. Fine-tuning lets you adapt LLMs to achieve improved performance on your domain-specific tasks.
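The visual designer builds the same pipeline definition you could express in code. The sketch below shows a programmatic equivalent with a single fine-tuning step via SageMaker JumpStart; the JumpStart model ID, instance type, role ARN, and dataset path are assumptions for illustration only.

```python
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Sketch: fine-tune a Llama 3 model via SageMaker JumpStart inside a pipeline.
# Model ID, instance type, role, and S3 path are placeholders.
estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-3-8b",
    environment={"accept_eula": "true"},
    instance_type="ml.g5.12xlarge",
)

step_finetune = TrainingStep(
    name="FineTuneLlama3",
    estimator=estimator,
    inputs={"training": "s3://your-bucket/sec-filings/train/"},
)

pipeline = Pipeline(name="llama3-finetuning-pipeline", steps=[step_finetune])
pipeline.upsert(role_arn="arn:aws:iam::123456789012:role/SageMakerExecutionRole")
pipeline.start()
```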
Amazon Bedrock Custom Model Import now generally available
We’re pleased to announce the general availability (GA) of Amazon Bedrock Custom Model Import. This feature empowers customers to import and use their customized models alongside existing foundation models (FMs) through a single, unified API.
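Once a custom model is imported, you invoke it through the same InvokeModel API as Bedrock foundation models, just with the imported model's ARN as the model ID. The sketch below assumes a Llama-architecture import; the ARN and request body are placeholders.

```python
import json
import boto3

# Sketch: invoke a model brought in through Custom Model Import with the
# standard InvokeModel API. The ARN is a placeholder; the body format
# follows the imported model's original architecture (Llama-style here).
bedrock_runtime = boto3.client("bedrock-runtime")

response = bedrock_runtime.invoke_model(
    modelId="arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123example",
    body=json.dumps({"prompt": "Summarize our Q3 results in one sentence.", "max_gen_len": 256}),
)
print(json.loads(response["body"].read()))
```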
Deploy a serverless web application to edit images using Amazon Bedrock
In this post, we explore a sample solution that you can use to deploy an image editing application by using AWS serverless services and generative AI services. We use Amazon Bedrock with an Amazon Titan FM so that you can edit images by using prompts.
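The prompt-based edit behind the application boils down to an inpainting request to a Titan image model. The following is a minimal sketch of that request; the model ID, mask prompt, and file names are illustrative assumptions, and the serverless front end is not shown.

```python
import base64
import json
import boto3

# Sketch: edit an image via an INPAINTING request to an Amazon Titan image model.
# Model ID, mask prompt, and file names are placeholders.
bedrock_runtime = boto3.client("bedrock-runtime")

with open("input.png", "rb") as f:
    source_image = base64.b64encode(f.read()).decode("utf-8")

body = {
    "taskType": "INPAINTING",
    "inPaintingParams": {
        "image": source_image,
        "maskPrompt": "the sky",          # region to edit, described in text
        "text": "a dramatic sunset sky",  # what to paint in its place
    },
    "imageGenerationConfig": {"numberOfImages": 1, "cfgScale": 8.0},
}

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1", body=json.dumps(body)
)
edited = json.loads(response["body"].read())["images"][0]
with open("output.png", "wb") as f:
    f.write(base64.b64decode(edited))
```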
Best practices for building robust generative AI applications with Amazon Bedrock Agents – Part 2
In this post, we dive into the architectural considerations and development lifecycle practices that can help you build robust, scalable, and secure intelligent agents.
Using Amazon Q Business with AWS HealthScribe to gain insights from patient consultations
In this post, we discuss how you can use AWS HealthScribe with Amazon Q Business to create a chatbot to quickly gain insights into patient-clinician conversations.