AWS Machine Learning Blog

Category: Customer Solutions

How Fastweb fine-tuned the Mistral model using Amazon SageMaker HyperPod as a first step to build an Italian large language model

Fastweb, one of Italy’s leading telecommunications operators, recognized the immense potential of AI technologies early on and began investing in this area in 2019. In this post, we explore how Fastweb used cutting-edge AI and ML services to embark on their LLM journey, overcoming challenges and unlocking new opportunities along the way.


How TUI uses Amazon Bedrock to scale content creation and enhance hotel descriptions in under 10 seconds

TUI Group is one of the world’s leading tourism groups, providing 21 million customers with an unmatched holiday experience in 180 regions. TUI’s content teams are tasked with producing high-quality content for its websites, including product details, hotel information, and travel guides, often using descriptions written by hotel and third-party partners. In this post, we discuss how we used Amazon SageMaker and Amazon Bedrock to build a content generator that rewrites marketing content following specific brand and style guidelines.

How Amazon trains sequential ensemble models at scale with Amazon SageMaker Pipelines

Ensemble models are becoming popular in the ML community because they generate more accurate predictions by combining the predictions of multiple models. Amazon SageMaker Pipelines can be used to quickly create an end-to-end ML pipeline for ensemble models, enabling developers to build highly accurate models while maintaining efficiency and reproducibility. In this post, we provide an example of an ensemble model that was trained and deployed using Pipelines.
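The post's actual pipeline is not reproduced here, but the core idea of an ensemble, combining the predictions of multiple base models, can be sketched in a few lines of plain Python. The model functions and weights below are illustrative placeholders, not the models from the post:

```python
# Minimal sketch of a prediction-averaging ensemble (illustrative only).
def model_a(x):
    # First base model's prediction (a toy linear model)
    return 2.0 * x + 1.0

def model_b(x):
    # Second base model's prediction (another toy linear model)
    return 2.2 * x + 0.4

def ensemble_predict(x, models, weights=None):
    """Combine base-model predictions by (weighted) averaging."""
    preds = [m(x) for m in models]
    if weights is None:
        # Default to equal weights
        weights = [1.0 / len(preds)] * len(preds)
    return sum(w * p for w, p in zip(weights, preds))

print(ensemble_predict(3.0, [model_a, model_b]))
```

In a SageMaker Pipelines setting, each base model would be trained in its own training step and the combination logic would run as a downstream step; the averaging shown here is only the conceptual core.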

How Clearwater Analytics is revolutionizing investment management with generative AI and Amazon SageMaker JumpStart

In this post, we explore Clearwater Analytics’ foray into generative AI and how they architected their solution with Amazon SageMaker JumpStart. We also dive deep into how Clearwater Analytics uses LLMs to draw on more than 18 years of experience in the investment management domain while optimizing model cost and performance.

How Twitch used agentic workflow with RAG on Amazon Bedrock to supercharge ad sales

In this post, we demonstrate how we innovated to build a Retrieval Augmented Generation (RAG) application with agentic workflow and a knowledge base on Amazon Bedrock. We implemented the RAG pipeline in a Slack chat-based assistant to empower the Amazon Twitch ads sales team to move quickly on new sales opportunities.
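The retrieval step at the heart of a RAG pipeline can be sketched without any AWS dependencies. This toy version uses bag-of-words cosine similarity in place of the Amazon Bedrock knowledge base the post describes; the documents and query are invented for illustration:

```python
# Minimal sketch of RAG retrieval (illustrative; the actual solution uses
# a knowledge base on Amazon Bedrock, not bag-of-words similarity).
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Ad formats available for Twitch streams",
    "Holiday schedule for the office",
]
print(retrieve("what ad formats can we sell on Twitch", docs))
```

In the full agentic workflow, the retrieved passages are then passed to an LLM on Amazon Bedrock to generate a grounded answer inside the Slack assistant.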

Architecture of AWS Field Advisor using Amazon Q Business

How AWS sales uses Amazon Q Business for customer engagement

In April 2024, we launched our AI sales assistant, which we call Field Advisor, making it available to AWS employees in the Sales, Marketing, and Global Services organization, powered by Amazon Q Business. Since that time, thousands of active users have asked hundreds of thousands of questions through Field Advisor, which we have embedded in our customer relationship management (CRM) system, as well as through a Slack application.

How Tealium built a chatbot evaluation platform with Ragas and Auto-Instruct using AWS generative AI services

In this post, we illustrate the importance of generative AI in the collaboration between Tealium and the AWS Generative AI Innovation Center (GenAIIC) team by automating the following: 1/ evaluating the retriever and the generated answer of a RAG system based on the Ragas repository, powered by Amazon Bedrock; 2/ generating improved instructions for each question-and-answer pair using an automatic prompt engineering technique based on the Auto-Instruct repository (an instruction is a general direction or command given to the model to guide generation of a response; these instructions were generated using Anthropic’s Claude on Amazon Bedrock); and 3/ providing a UI for a human feedback mechanism that complements an evaluation system powered by Amazon Bedrock.

EBSCOlearning scales assessment generation for their online learning content with generative AI

In this post, we illustrate how EBSCOlearning partnered with AWS Generative AI Innovation Center (GenAIIC) to use the power of generative AI in revolutionizing their learning assessment process. We explore the challenges faced in traditional question-answer (QA) generation and the innovative AI-driven solution developed to address them.

Syngenta develops a generative AI assistant to support sales representatives using Amazon Bedrock Agents

In this post, we explore how Syngenta collaborated with AWS to develop Cropwise AI, a generative AI assistant powered by Amazon Bedrock Agents that helps sales representatives make better seed product recommendations to farmers across North America. The solution transforms the seed selection process by simplifying complex data into natural conversations, providing quick access to detailed seed product information, and enabling personalized recommendations at scale through a mobile app interface.

How Amazon Finance Automation built a generative AI Q&A chat assistant using Amazon Bedrock

Amazon Finance Automation developed a large language model (LLM)-based question-answer chat assistant on Amazon Bedrock. This solution empowers analysts to rapidly retrieve answers to customer queries, generating prompt responses within the same communication thread and drastically reducing the time required to address them. In this post, we share how Amazon Finance Automation built this generative AI Q&A chat assistant using Amazon Bedrock.