AWS Machine Learning Blog
LLM continuous self-instruct fine-tuning framework powered by a compound AI system on Amazon SageMaker
In this post, we present a continuous self-instruct fine-tuning framework implemented as a compound AI system with the DSPy framework. The framework first generates a synthetic dataset from the domain knowledge base and documents for self-instruction, and uses it to drive supervised fine-tuning (SFT). It then introduces a human-in-the-loop workflow to collect human and AI feedback on model responses, which is used to further improve model performance by aligning the model with human preferences through reinforcement learning (RLHF/RLAIF).
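As a flavor of the self-instruction step, the following is a minimal sketch (not the post's full pipeline) of generating synthetic question-answer pairs from a domain document with DSPy. The signature fields, model ID, and sample document are illustrative assumptions.

```python
# Minimal sketch: synthetic Q&A generation for self-instruction with DSPy.
# The signature fields, model ID, and document text below are assumptions.
import dspy

# Assumes a Bedrock-hosted model reachable through LiteLLM-style naming;
# swap in whatever LM endpoint you actually use.
lm = dspy.LM("bedrock/anthropic.claude-3-haiku-20240307-v1:0")
dspy.configure(lm=lm)

class GenerateQA(dspy.Signature):
    """Generate a self-instruction question and answer grounded in the document."""
    document: str = dspy.InputField(desc="Passage from the domain knowledge base")
    question: str = dspy.OutputField(desc="A question answerable from the document")
    answer: str = dspy.OutputField(desc="The grounded answer to that question")

generate_qa = dspy.ChainOfThought(GenerateQA)

doc = "Amazon SageMaker HyperPod provides resilient clusters for large-scale training."
pair = generate_qa(document=doc)
print(pair.question)
print(pair.answer)  # Pairs like this accumulate into the synthetic SFT dataset
```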
Maximize your file server data’s potential by using Amazon Q Business on Amazon FSx for Windows
In this post, we show you how to connect Amazon Q, a generative AI-powered assistant, to Amazon FSx for Windows File Server to securely analyze, query, and extract insights from your file system data.
Generate synthetic counterparty risk (CR) data with generative AI using Amazon Bedrock LLMs and RAG
In this post, we explore how you can use LLMs with advanced Retrieval Augmented Generation (RAG) to generate high-quality synthetic data for a finance domain use case. The same technique can be used to generate synthetic data for other business domains as well. For this post, we demonstrate how to generate counterparty risk (CR) data, which is particularly relevant for over-the-counter (OTC) derivatives that are traded directly between two parties, without going through a formal exchange.
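A minimal sketch of the retrieve-then-generate pattern follows; the knowledge base ID, model ID, prompt, and output schema are placeholder assumptions, and the post's actual pipeline may differ.

```python
# Sketch: ground a synthetic-data prompt in retrieved domain context.
import boto3

agent_rt = boto3.client("bedrock-agent-runtime")
bedrock_rt = boto3.client("bedrock-runtime")

# 1. Retrieve domain context (e.g., OTC derivatives definitions) from a
#    Bedrock Knowledge Base. The ID is a placeholder.
retrieval = agent_rt.retrieve(
    knowledgeBaseId="KB_ID_PLACEHOLDER",
    retrievalQuery={"text": "counterparty credit risk attributes for OTC derivatives"},
)
context = "\n".join(r["content"]["text"] for r in retrieval["retrievalResults"])

# 2. Ask an LLM to generate synthetic CR records grounded in that context.
prompt = (
    "Using only the context below, generate 5 synthetic counterparty risk "
    "records as JSON with fields counterparty_name, exposure, rating.\n\n"
    f"Context:\n{context}"
)
response = bedrock_rt.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```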
Turbocharging premium audit capabilities with the power of generative AI: Verisk’s journey toward a sophisticated conversational chat platform to enhance customer support
Conversational AI assistants are rapidly transforming customer and employee support. Verisk’s Premium Audit Advisory Service (PAAS) is the leading source of technical information and training for premium auditors and underwriters. In this post, we describe how generative AI was incorporated into the PAAS customer support process, covering the data, the architecture, and the evaluation of the results.
Build verifiable explainability into financial services workflows with Automated Reasoning checks for Amazon Bedrock Guardrails
In this post, we explore how Automated Reasoning checks work through common financial services industry (FSI) scenarios such as insurance legal triaging, underwriting rules validation, and claims processing.
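As a rough sketch, assuming a Bedrock guardrail that already has an Automated Reasoning policy attached through its configuration (the guardrail ID and draft answer below are placeholders), a claims-processing response could be validated with the ApplyGuardrail API before it reaches the user:

```python
# Sketch: validate a draft model answer against an existing guardrail.
# Assumes an Automated Reasoning policy is attached to the guardrail.
import boto3

bedrock_rt = boto3.client("bedrock-runtime")

draft_answer = "This water damage claim is covered because the policy includes flood protection."

result = bedrock_rt.apply_guardrail(
    guardrailIdentifier="GUARDRAIL_ID_PLACEHOLDER",
    guardrailVersion="1",
    source="OUTPUT",                        # validate model output, not user input
    content=[{"text": {"text": draft_answer}}],
)

# 'GUARDRAIL_INTERVENED' indicates the checks flagged the response; the
# assessments detail which findings triggered.
print(result["action"])
print(result["assessments"])
```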
Best practices for Amazon SageMaker HyperPod task governance
In this post, we provide best practices to maximize the value of SageMaker HyperPod task governance and make the administration and data science experiences seamless. We also discuss common governance scenarios when administering and running generative AI development tasks.
How Formula 1® uses generative AI to accelerate race-day issue resolution
In this post, we explain how F1 and AWS have developed a root cause analysis (RCA) assistant powered by Amazon Bedrock to reduce manual intervention and accelerate the resolution of recurrent operational issues during races from weeks to minutes. The RCA assistant enables the F1 team to spend more time on innovation and improving its services, ultimately delivering an exceptional experience for fans and partners. The successful collaboration between F1 and AWS showcases the transformative potential of generative AI in empowering teams to accomplish more in less time.
Using Amazon Rekognition to improve bicycle safety
To better protect themselves, many cyclists are starting to ride with cameras mounted on the front or back of their bicycles. In this blog post, I demonstrate a machine learning solution that cyclists can use to better identify close calls. The solution uses Amazon Rekognition to detect vehicles in recorded bike ride videos, then analyzes the video to determine whether any vehicles pass closer to the cyclist than the 3-foot safe distance required by law. It automatically generates video clips of these dangerous passing events, which can then be shared with authorities to help improve cyclist safety.
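The following is a simplified, frame-level sketch of the idea (the post works on full ride videos): detect vehicles in one extracted frame with Amazon Rekognition and flag any whose bounding box fills enough of the frame to suggest a close pass. The file name and the area threshold are assumptions standing in for a real distance estimate.

```python
# Sketch: flag possible close passes in a single frame with Rekognition.
import boto3

rekognition = boto3.client("rekognition")

with open("ride_frame.jpg", "rb") as f:        # frame exported from the ride video
    image_bytes = f.read()

labels = rekognition.detect_labels(Image={"Bytes": image_bytes}, MaxLabels=25)

CLOSE_PASS_AREA = 0.25  # assumed heuristic: box covering >25% of the frame

for label in labels["Labels"]:
    if label["Name"] not in ("Car", "Truck", "Vehicle"):
        continue
    for instance in label.get("Instances", []):
        box = instance["BoundingBox"]
        area = box["Width"] * box["Height"]    # normalized (0-1) dimensions
        if area > CLOSE_PASS_AREA:
            print(f"Possible close pass: {label['Name']} covers {area:.0%} of frame")
```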
Build a dynamic, role-based AI agent using Amazon Bedrock inline agents
In this post, we explore how to build an application using Amazon Bedrock inline agents, demonstrating how a single AI assistant can adapt its capabilities dynamically based on user roles.
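A minimal sketch of the role-based idea follows, assuming the boto3 bedrock-agent-runtime InvokeInlineAgent API: the same call is issued with a different instruction (and, in a real application, different action groups or knowledge bases) depending on the caller's role. The model ID, roles, and instructions are placeholders.

```python
# Sketch: choose the inline agent's instruction at request time by user role.
import uuid
import boto3

client = boto3.client("bedrock-agent-runtime")

ROLE_INSTRUCTIONS = {
    "analyst": "You are a financial analyst assistant. Answer with portfolio-level detail.",
    "support": "You are a customer support assistant. Keep answers simple and never share internal data.",
}

def ask(role: str, question: str) -> str:
    response = client.invoke_inline_agent(
        foundationModel="anthropic.claude-3-haiku-20240307-v1:0",
        instruction=ROLE_INSTRUCTIONS[role],
        sessionId=str(uuid.uuid4()),
        inputText=question,
    )
    # The completion streams back as chunk events containing bytes.
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )

print(ask("support", "What is the status of my claim?"))
```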
Use language embeddings for zero-shot classification and semantic search with Amazon Bedrock
In this post, we explore what language embeddings are and how they can be used to enhance your application. We show how, by using the properties of embeddings, you can implement a real-time zero-shot classifier and add powerful features such as semantic search.
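To illustrate the zero-shot classification idea, here is a minimal sketch that embeds candidate label names and an input text, then picks the label with the highest cosine similarity. It assumes the Amazon Titan Text Embeddings V2 model on Amazon Bedrock; the labels and input text are illustrative.

```python
# Sketch: zero-shot classification by comparing embeddings with cosine similarity.
import json
import math
import boto3

bedrock_rt = boto3.client("bedrock-runtime")

def embed(text: str) -> list[float]:
    response = bedrock_rt.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

labels = ["billing question", "technical issue", "feature request"]
label_vectors = {label: embed(label) for label in labels}   # computed once, reused

text = "The app crashes every time I upload a photo."
scores = {label: cosine(embed(text), vec) for label, vec in label_vectors.items()}
print(max(scores, key=scores.get))  # expected: "technical issue"
```

The same embedding-and-similarity machinery powers semantic search: embed the documents once, embed each query at request time, and rank documents by similarity instead of comparing against label names.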