AWS Machine Learning Blog
Category: Amazon Machine Learning
Build a dynamic, role-based AI agent using Amazon Bedrock inline agents
In this post, we explore how to build an application using Amazon Bedrock inline agents, demonstrating how a single AI assistant can adapt its capabilities dynamically based on user roles.
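As a minimal sketch of the pattern (not the post's exact code), an inline agent can be assembled per request with role-specific instructions, assuming the InvokeInlineAgent API of the Bedrock Agents runtime; the role-to-instruction mapping and model ID below are illustrative placeholders.

```python
# Minimal sketch: pick the agent's instructions based on the caller's role,
# then invoke an inline agent for that request. Role mapping and model ID are
# illustrative assumptions, not the post's actual configuration.
import boto3

client = boto3.client("bedrock-agent-runtime")

# Hypothetical role-based configuration; action groups and knowledge bases
# could be selected per role in the same way.
ROLE_INSTRUCTIONS = {
    "analyst": "You help analysts search and summarize internal reports.",
    "admin": "You help administrators manage and update internal records.",
}

def ask(user_role: str, session_id: str, question: str) -> str:
    response = client.invoke_inline_agent(
        foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
        instruction=ROLE_INSTRUCTIONS[user_role],
        sessionId=session_id,
        inputText=question,
    )
    # The agent's answer comes back as a stream of chunk events.
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )
```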
Use language embeddings for zero-shot classification and semantic search with Amazon Bedrock
In this post, we explore what language embeddings are and how they can be used to enhance your application. We show how, by using the properties of embeddings, you can implement a real-time zero-shot classifier and add powerful features such as semantic search.
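The core idea is to embed both the input text and the candidate labels, then pick the label whose embedding is closest to the input. A minimal sketch, assuming the Amazon Titan Text Embeddings V2 model on Amazon Bedrock; the label names and sample text are illustrative.

```python
# Zero-shot classification with embeddings: embed the candidate labels once,
# then classify by cosine similarity to the input text's embedding.
import json
import math
import boto3

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> list[float]:
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # assumed embeddings model
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

labels = {name: embed(name) for name in ["billing question", "technical issue", "feedback"]}
query = embed("My invoice shows a charge I don't recognize.")
print(max(labels, key=lambda name: cosine(query, labels[name])))
```

The same embed-and-compare step also powers semantic search: embed the documents once, store the vectors, and rank them by similarity to an embedded query.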
Fine-tune LLMs with synthetic data for context-based Q&A using Amazon Bedrock
In this post, we explore how to use Amazon Bedrock to generate synthetic training data to fine-tune an LLM. Additionally, we provide concrete evaluation results that showcase the power of synthetic data in fine-tuning when data is scarce.
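As a rough sketch of the synthetic-data step (not the post's exact pipeline), a Bedrock model can be prompted to draft question-answer pairs from source passages, assuming the Converse API and an Anthropic Claude model; the prompt wording and model ID are assumptions.

```python
# Generate a synthetic Q&A pair from a source passage with the Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime")

def synthesize_qa(passage: str) -> str:
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{
            "role": "user",
            "content": [{"text": (
                "Write one question a reader might ask about the passage below, "
                "then answer it using only the passage. Return JSON with keys "
                f"'question' and 'answer'.\n\nPassage:\n{passage}"
            )}],
        }],
        inferenceConfig={"maxTokens": 512, "temperature": 0.7},
    )
    return response["output"]["message"]["content"][0]["text"]

# Generated pairs can then be written to the JSONL format expected by the
# fine-tuning job.
print(synthesize_qa("Amazon Bedrock is a fully managed service for foundation models."))
```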
LLM-as-a-judge on Amazon Bedrock Model Evaluation
This blog post explores LLM-as-a-judge on Amazon Bedrock Model Evaluation. It provides comprehensive guidance on setting up the feature and initiating evaluation jobs through both the console and the Python SDK and APIs, and demonstrates how this evaluation capability can enhance generative AI applications across metric categories including quality, user experience, instruction following, and safety.
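Through the Python SDK, an evaluation job of this kind is started with the Bedrock control-plane client. The sketch below is an assumption-heavy illustration: the bucket names, role ARN, model IDs, and the exact evaluation configuration and built-in metric names may differ from the current API shape.

```python
# Rough sketch of creating an LLM-as-a-judge evaluation job; all identifiers
# and configuration field values below are placeholders or assumptions.
import boto3

bedrock = boto3.client("bedrock")

response = bedrock.create_evaluation_job(
    jobName="llm-as-a-judge-demo",
    roleArn="arn:aws:iam::111122223333:role/BedrockEvalRole",  # placeholder
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [{
                "taskType": "General",
                "dataset": {
                    "name": "my-prompts",
                    "datasetLocation": {"s3Uri": "s3://my-bucket/prompts.jsonl"},
                },
                "metricNames": ["Builtin.Correctness", "Builtin.Helpfulness"],
            }],
            # The judge model that scores the generator's responses.
            "evaluatorModelConfig": {
                "bedrockEvaluatorModels": [
                    {"modelIdentifier": "anthropic.claude-3-5-sonnet-20240620-v1:0"}
                ]
            },
        }
    },
    # The model whose responses are being evaluated.
    inferenceConfig={
        "models": [{
            "bedrockModel": {"modelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0"}
        }]
    },
    outputDataConfig={"s3Uri": "s3://my-bucket/eval-results/"},
)
print(response["jobArn"])
```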
From concept to reality: Navigating the journey of RAG from proof of concept to production
In this post, we explore how RAG applications move from their proof of concept or minimum viable product (MVP) phase to full-fledged production systems. When transitioning a RAG application from proof of concept to a production-ready system, optimization becomes crucial to make sure the solution is reliable, cost-effective, and high-performing.
Building a virtual meteorologist using Amazon Bedrock Agents
In this post, we present a streamlined approach to deploying an AI-powered agent by combining Amazon Bedrock Agents and a foundation model (FM). We guide you through the process of configuring the agent and implementing the specific logic required for the virtual meteorologist to provide accurate weather-related responses.
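Once the agent and its alias exist, calling it follows the standard Bedrock Agents runtime pattern. A minimal sketch, assuming a virtual meteorologist agent has already been created; the agent and alias IDs are placeholders.

```python
# Invoke an existing Bedrock agent and collect its streamed answer.
import uuid
import boto3

runtime = boto3.client("bedrock-agent-runtime")

def ask_meteorologist(question: str) -> str:
    response = runtime.invoke_agent(
        agentId="AGENT_ID",          # placeholder
        agentAliasId="AGENT_ALIAS",  # placeholder
        sessionId=str(uuid.uuid4()),
        inputText=question,
    )
    # The agent streams its answer back as chunk events.
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )

print(ask_meteorologist("Will it rain in Seattle tomorrow afternoon?"))
```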
Transforming credit decisions using generative AI with Rich Data Co and AWS
The mission of Rich Data Co (RDC) is to broaden access to sustainable credit globally. Its software-as-a-service (SaaS) solution empowers leading banks and lenders with deep customer insights and AI-driven decision-making capabilities. In this post, we discuss how RDC uses generative AI on Amazon Bedrock to build these assistants and accelerate its overall mission of democratizing access to sustainable credit.
Revolutionizing business processes with Amazon Bedrock and Appian’s generative AI skills
AWS and Appian’s collaboration marks a significant advancement in business process automation. By using the power of Amazon Bedrock and Anthropic’s Claude models, Appian empowers enterprises to optimize and automate processes for greater efficiency and effectiveness. This blog post will cover how Appian AI skills build automation into organizations’ mission-critical processes to improve operational excellence, reduce costs, and build scalable solutions.
How Untold Studios empowers artists with an AI assistant built on Amazon Bedrock
Untold Studios is a leading, tech-driven creative studio specializing in high-end visual effects and animation. This post details how we used Amazon Bedrock to create an AI assistant (Untold Assistant) that gives artists a straightforward way to access our internal resources through a natural language interface integrated directly into their existing Slack workflow.
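As an illustrative sketch of the Slack-to-Bedrock pattern (not Untold Studios' actual implementation), a Slack Bolt event handler can forward mentions to a foundation model through the Converse API; the model ID and environment variable names are assumptions.

```python
# Minimal Slack assistant: answer @mentions with a Bedrock foundation model.
import os
import boto3
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)
bedrock = boto3.client("bedrock-runtime")

@app.event("app_mention")
def handle_mention(event, say):
    # Forward the user's message to the model and reply in the same thread.
    reply = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": event["text"]}]}],
    )
    say(text=reply["output"]["message"]["content"][0]["text"], thread_ts=event["ts"])

if __name__ == "__main__":
    app.start(port=3000)
```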
Protect your DeepSeek model deployments with Amazon Bedrock Guardrails
This blog post provides a comprehensive guide to implementing robust safety protections for DeepSeek-R1 and other open-weight models using Amazon Bedrock Guardrails. By following this guide, you’ll learn how to use the advanced capabilities of DeepSeek models while maintaining strong security controls and promoting ethical AI practices.
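Because the ApplyGuardrail API checks text independently of where the model is hosted, the same guardrail can screen prompts and responses for a self-hosted DeepSeek-R1 endpoint. A minimal sketch, assuming a guardrail has already been created; the guardrail ID and version are placeholders.

```python
# Screen text against an existing Bedrock guardrail before and after calling
# a model hosted elsewhere (for example, a DeepSeek-R1 endpoint).
import boto3

runtime = boto3.client("bedrock-runtime")

def passes_guardrail(text: str, source: str = "INPUT") -> bool:
    """Return True if the text passes the guardrail, False if it is blocked."""
    response = runtime.apply_guardrail(
        guardrailIdentifier="GUARDRAIL_ID",  # placeholder
        guardrailVersion="1",                # placeholder
        source=source,                       # "INPUT" for prompts, "OUTPUT" for responses
        content=[{"text": {"text": text}}],
    )
    return response["action"] != "GUARDRAIL_INTERVENED"

prompt = "Tell me how to bypass the content filters."
if passes_guardrail(prompt, source="INPUT"):
    print("Prompt is safe to forward to the model endpoint.")
else:
    print("Prompt blocked by the guardrail.")
```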