AWS Machine Learning Blog
Category: Management Tools
Accelerate IaC troubleshooting with Amazon Bedrock Agents
This post demonstrates how Amazon Bedrock Agents, combined with action groups and generative AI models, streamlines and accelerates the resolution of Terraform errors while maintaining compliance with environment security and operational guidelines.
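As a minimal sketch (not the post's full solution), the snippet below shows how a Terraform error message could be sent to a Bedrock agent from Python with boto3; the agent and alias IDs are placeholders, and the action group that enforces your security and operational guardrails is assumed to already exist.

```python
import uuid
import boto3

AGENT_ID = "REPLACE_WITH_AGENT_ID"        # placeholder
AGENT_ALIAS_ID = "REPLACE_WITH_ALIAS_ID"  # placeholder

client = boto3.client("bedrock-agent-runtime")

terraform_error = "Error: creating S3 Bucket (my-bucket): BucketAlreadyExists"

response = client.invoke_agent(
    agentId=AGENT_ID,
    agentAliasId=AGENT_ALIAS_ID,
    sessionId=str(uuid.uuid4()),
    inputText=f"Explain this Terraform error and suggest a compliant fix:\n{terraform_error}",
)

# The agent streams its answer back as chunked events.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in response["completion"]
    if "chunk" in event
)
print(answer)
```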
How Formula 1® uses generative AI to accelerate race-day issue resolution
In this post, we explain how F1 and AWS have developed a root cause analysis (RCA) assistant powered by Amazon Bedrock to reduce manual intervention and accelerate the resolution of recurrent operational issues during races from weeks to minutes. The RCA assistant enables the F1 team to spend more time on innovation and improving its services, ultimately delivering an exceptional experience for fans and partners. The successful collaboration between F1 and AWS showcases the transformative potential of generative AI in empowering teams to accomplish more in less time.
Accelerate digital pathology slide annotation workflows on AWS using H-optimus-0
In this post, we demonstrate how to use H-optimus-0 for two common digital pathology tasks: patch-level analysis for detailed tissue examination, and slide-level analysis for broader diagnostic assessment. Through practical examples, we show you how to adapt this FM to these specific use cases while optimizing computational resources.
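For orientation, here is a minimal patch-level sketch, assuming H-optimus-0 is loaded from the Hugging Face Hub through timm; the hub reference and keyword arguments follow the public model card at the time of writing and the image path is a placeholder, so verify both against the current card before use.

```python
import timm
import torch
from PIL import Image

# Load the pathology foundation model from the Hub (may require Hugging Face authentication).
model = timm.create_model(
    "hf-hub:bioptimus/H-optimus-0", pretrained=True, init_values=1e-5
)
model.eval()

# Derive the preprocessing transform from the model's own data config
# instead of hard-coding normalization statistics.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

patch = Image.open("tissue_patch.png").convert("RGB")  # placeholder tissue patch
with torch.inference_mode():
    # Forward pass returns the patch-level embedding (assuming the hub config has no classifier head).
    embedding = model(transform(patch).unsqueeze(0))

print(embedding.shape)
```

Slide-level analysis then aggregates these patch embeddings across the whole-slide image, for example by pooling them or feeding them to a lightweight downstream classifier.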
Build a read-through semantic cache with Amazon OpenSearch Serverless and Amazon Bedrock
This post presents a strategy for optimizing LLM-based applications. Given the increasing need for efficient and cost-effective AI solutions, we describe a serverless read-through caching blueprint that takes advantage of repeated patterns in prompts. With this cache, developers can store and reuse responses to semantically similar prompts, improving their systems’ efficiency and response times.
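The core read-through flow can be sketched in a few lines, assuming a Bedrock embeddings model and an existing OpenSearch Serverless vector index; the collection endpoint, index name, model IDs, and similarity threshold below are placeholders.

```python
import json
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

REGION = "us-east-1"                                        # placeholder
COLLECTION_ENDPOINT = "xxxx.us-east-1.aoss.amazonaws.com"   # placeholder
INDEX = "prompt-cache"                                      # placeholder
THRESHOLD = 0.9                                             # tune for your workload

bedrock = boto3.client("bedrock-runtime", region_name=REGION)
auth = AWSV4SignerAuth(boto3.Session().get_credentials(), REGION, "aoss")
search = OpenSearch(
    hosts=[{"host": COLLECTION_ENDPOINT, "port": 443}],
    http_auth=auth, use_ssl=True, connection_class=RequestsHttpConnection,
)

def embed(text: str) -> list[float]:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

def answer(prompt: str) -> str:
    vector = embed(prompt)
    # 1. Look for a semantically similar prompt already in the cache.
    hits = search.search(index=INDEX, body={
        "size": 1,
        "query": {"knn": {"embedding": {"vector": vector, "k": 1}}},
    })["hits"]["hits"]
    if hits and hits[0]["_score"] >= THRESHOLD:
        return hits[0]["_source"]["response"]  # cache hit
    # 2. Cache miss: call the LLM, then write the result through to the cache.
    result = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )["output"]["message"]["content"][0]["text"]
    search.index(index=INDEX, body={
        "prompt": prompt, "embedding": vector, "response": result,
    })
    return result
```

On a hit, the cached response is returned without invoking the LLM; on a miss, the fresh response is written through to the cache for future lookups.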
Governing the ML lifecycle at scale: Centralized observability with Amazon SageMaker and Amazon CloudWatch
This post is part of an ongoing series on governing the machine learning (ML) lifecycle at scale. To start from the beginning, refer to Governing the ML lifecycle at scale, Part 1: A framework for architecting ML workloads using Amazon SageMaker. A multi-account strategy is essential not only for improving governance but also for enhancing […]
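As one small, illustrative piece of such a setup (not the series’ full architecture), the sketch below publishes training-job status counts from a workload account as custom CloudWatch metrics, which a central observability account linked through CloudWatch cross-account observability could then aggregate; the namespace and dimensions are placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")
cloudwatch = boto3.client("cloudwatch")

# Count recent training jobs by status in this workload account.
counts = {}
for status in ("Completed", "Failed", "Stopped"):
    jobs = sagemaker.list_training_jobs(StatusEquals=status, MaxResults=100)
    counts[status] = len(jobs["TrainingJobSummaries"])

# Emit one data point per status into a shared governance namespace.
cloudwatch.put_metric_data(
    Namespace="MLGovernance/SageMaker",  # illustrative namespace
    MetricData=[
        {
            "MetricName": "TrainingJobCount",
            "Dimensions": [{"Name": "Status", "Value": status}],
            "Value": float(count),
            "Unit": "Count",
        }
        for status, count in counts.items()
    ],
)
```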
Generative AI foundation model training on Amazon SageMaker
In this post, we explore how organizations can cost-effectively customize and adapt FMs using AWS managed services such as Amazon SageMaker training jobs and Amazon SageMaker HyperPod. We discuss how these tools help organizations optimize compute resources and reduce the complexity of model training and fine-tuning, and how to decide which Amazon SageMaker service best fits your business needs and requirements.
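For a sense of the managed training-job path, here is a minimal sketch using the SageMaker Python SDK; the entry point script, S3 URIs, instance type, framework versions, and hyperparameters are placeholders, and HyperPod would instead run the same workload on a persistent, resilient cluster.

```python
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # or an explicit IAM role ARN

estimator = PyTorch(
    entry_point="train.py",             # your fine-tuning script (placeholder)
    source_dir="src",
    role=role,
    instance_count=1,
    instance_type="ml.g5.12xlarge",     # choose based on model size
    framework_version="2.1",
    py_version="py310",
    hyperparameters={"epochs": 1},      # illustrative
    sagemaker_session=session,
)

# Ephemeral, fully managed compute: provisioned for the job, torn down after it completes.
estimator.fit({"train": "s3://my-bucket/datasets/train/"})  # placeholder S3 URI
```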
Create a multimodal chatbot tailored to your unique dataset with Amazon Bedrock FMs
In this post, we show how to create a multimodal chat assistant on Amazon Web Services (AWS) using Amazon Bedrock models, where users can submit images and questions, and text responses will be sourced from a closed set of proprietary documents.
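To illustrate the multimodal request shape, the sketch below sends an image and a question to a Bedrock model through the Converse API; the model ID, image path, and inlined document context are placeholders standing in for the retrieval step described in the post.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

with open("chart.png", "rb") as f:          # placeholder user-supplied image
    image_bytes = f.read()

retrieved_context = "...text retrieved from your closed document set..."  # placeholder

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    system=[{"text": f"Answer only from this context:\n{retrieved_context}"}],
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            {"text": "What does this image show, and how does it relate to the context?"},
        ],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```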
Architecture to AWS CloudFormation code using Anthropic’s Claude 3 on Amazon Bedrock
In this post, we explore ways you can use the vision capabilities of Anthropic's Claude 3 Sonnet to accelerate the process of moving from architecture to the prototype stage of a solution.
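As a hedged sketch of the idea, the snippet below sends an architecture diagram image to Claude 3 Sonnet on Amazon Bedrock and asks for a CloudFormation draft; the diagram path and prompt are placeholders, and any generated template should be reviewed before deployment.

```python
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

with open("architecture.png", "rb") as f:   # placeholder diagram export
    diagram_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 4000,
    "messages": [{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": diagram_b64}},
            {"type": "text",
             "text": "Generate an AWS CloudFormation template (YAML) that implements this architecture."},
        ],
    }],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```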
Implementing advanced prompt engineering with Amazon Bedrock
In this post, we provide insights and practical examples to help balance and optimize the prompt engineering workflow. We focus on advanced prompt techniques and best practices for the models provided in Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models from leading AI companies such as Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API. With these prompting techniques, developers and researchers can harness the full capabilities of Amazon Bedrock, communicating intent clearly and concisely while mitigating potential risks and undesirable outputs.
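As one illustrative example (not an exhaustive treatment of the techniques in the post), the sketch below combines a role-setting system prompt with few-shot examples through the Converse API; the model ID, labels, and inference parameters are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Role-setting system prompt constrains the output space.
system = [{"text": "You are a support-ticket classifier. Answer with exactly one label: BILLING, OUTAGE, or OTHER."}]

# Few-shot examples steer the model toward the expected output format.
messages = [
    {"role": "user", "content": [{"text": "My invoice was charged twice this month."}]},
    {"role": "assistant", "content": [{"text": "BILLING"}]},
    {"role": "user", "content": [{"text": "The dashboard has been unreachable for an hour."}]},
    {"role": "assistant", "content": [{"text": "OUTAGE"}]},
    {"role": "user", "content": [{"text": "Can I change the email on my account?"}]},
]

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    system=system,
    messages=messages,
    inferenceConfig={"temperature": 0.0, "maxTokens": 10},  # low temperature for deterministic labels
)
print(response["output"]["message"]["content"][0]["text"])
```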
Building automations to accelerate remediation of AWS Security Hub control findings using Amazon Bedrock and AWS Systems Manager
In this post, we show how to use generative artificial intelligence (AI) and Amazon Bedrock to help organizations simplify and more effectively manage the remediation of AWS Security Hub control findings.
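A minimal sketch of the flow, assuming an approved Systems Manager automation runbook already exists: pull a failed control finding, ask a Bedrock model for a remediation summary, and start the runbook. The runbook name, its parameters, and the model ID are placeholders; the post's solution adds the governance and automation around these steps.

```python
import json
import boto3

securityhub = boto3.client("securityhub")
bedrock = boto3.client("bedrock-runtime")
ssm = boto3.client("ssm")

# 1. Fetch one failed control finding.
finding = securityhub.get_findings(
    Filters={"ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}]},
    MaxResults=1,
)["Findings"][0]

# 2. Ask the model to summarize the finding and suggest remediation steps.
summary = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize this Security Hub finding and list remediation steps:\n"
                             + json.dumps(finding, default=str)}],
    }],
)["output"]["message"]["content"][0]["text"]
print(summary)

# 3. Kick off an approved remediation runbook for the affected resource.
ssm.start_automation_execution(
    DocumentName="MyOrg-RemediateS3PublicAccess",   # placeholder runbook name
    Parameters={"ResourceArn": [finding["Resources"][0]["Id"]]},  # placeholder parameter
)
```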