AWS Machine Learning Blog
AWS machine learning supports Scuderia Ferrari HP pit stop analysis
Pit crews are trained to operate at optimum efficiency, although measuring their performance has been challenging, until now. In this post, we share how Amazon Web Services (AWS) is helping Scuderia Ferrari HP develop more accurate pit stop analysis techniques using machine learning (ML).
Accelerate edge AI development with SiMa.ai Edgematic with a seamless AWS integration
In this post, we demonstrate how to retrain and quantize a model using SageMaker AI and the SiMa.ai Palette software suite. The goal is to accurately detect individuals in environments where visibility and protective equipment detection are essential for compliance and safety.
How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod
Building on this foundation of specialized information extraction solutions and using the capabilities of SageMaker HyperPod, we collaborate with APOIDEA Group to explore the use of large vision language models (LVLMs) to further improve table structure recognition performance on banking and financial documents. In this post, we present our work and step-by-step code on fine-tuning the Qwen2-VL-7B-Instruct model using LLaMA-Factory on SageMaker HyperPod.
How Qualtrics built Socrates: An AI platform powered by Amazon SageMaker and Amazon Bedrock
In this post, we share how Qualtrics built an AI platform powered by Amazon SageMaker and Amazon Bedrock.
Vxceed secures transport operations with Amazon Bedrock
AWS partnered with Vxceed to support their AI strategy, resulting in the development of LimoConnect Q, an innovative ground transportation management solution. Using AWS services including Amazon Bedrock and Lambda, Vxceed successfully built a secure, AI-powered solution that streamlines trip booking and document processing.
Cost-effective AI image generation with PixArt-Sigma inference on AWS Trainium and AWS Inferentia
This post is the first in a series where we will run multiple diffusion transformers on Trainium and Inferentia-powered instances. In this post, we show how you can deploy PixArt-Sigma to Trainium and Inferentia-powered instances.
Customize DeepSeek-R1 671b model using Amazon SageMaker HyperPod recipes – Part 2
In this post, we use the recipes to fine-tune the original DeepSeek-R1 671b parameter model. We demonstrate this through the step-by-step implementation of these recipes using both SageMaker training jobs and SageMaker HyperPod.
Build a financial research assistant using Amazon Q Business and Amazon QuickSight for generative AI–powered insights
In this post, we show you how Amazon Q Business can help augment your generative AI needs in all the abovementioned use cases and more by answering questions, providing summaries, generating content, and securely completing tasks based on data and information in your enterprise systems.
Securing Amazon Bedrock Agents: A guide to safeguarding against indirect prompt injections
Generative AI tools have transformed how we work, create, and process information. At Amazon Web Services (AWS), security is our top priority. Therefore, Amazon Bedrock provides comprehensive security controls and best practices to help protect your applications and data. In this post, we explore the security measures and practical strategies provided by Amazon Bedrock Agents to safeguard your AI interactions against indirect prompt injections, making sure that your applications remain both secure and reliable.
Build scalable containerized RAG based generative AI applications in AWS using Amazon EKS with Amazon Bedrock
In this post, we demonstrate a solution using Amazon Elastic Kubernetes Service (EKS) with Amazon Bedrock to build scalable and containerized RAG solutions for your generative AI applications on AWS while bringing your unstructured user file data to Amazon Bedrock in a straightforward, fast, and secure way.