AWS Machine Learning Blog

Generative AI operating models in enterprise organizations with Amazon Bedrock

As generative AI adoption grows, organizations should establish a generative AI operating model. An operating model defines the organizational design, core processes, technologies, roles and responsibilities, governance structures, and financial models that drive a business’s operations. In this post, we evaluate different generative AI operating model architectures that enterprises can adopt.

Develop a RAG-based application using Amazon Aurora with Amazon Kendra

RAG retrieves data from a preexisting knowledge base (your data), combines it with the LLM’s knowledge, and generates responses with more human-like language. However, for generative AI to understand your data, some data preparation is typically required, and that carries a steep learning curve. In this post, we walk you through how to make your existing Aurora data searchable by Amazon Kendra without separate data preparation, and how to implement RAG that combines your data with LLM knowledge to produce accurate responses.
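
As a rough illustration of the pattern the post implements, the retrieval-plus-generation step with the AWS SDK for Python (Boto3) might look like the following sketch, assuming a Kendra index already exists over your Aurora data; the index ID and model ID are placeholders.

import boto3

kendra = boto3.client("kendra")
bedrock = boto3.client("bedrock-runtime")

question = "What is our refund policy?"

# Kendra's Retrieve API returns longer semantic passages than Query,
# which suits RAG prompts better.
passages = kendra.retrieve(
    IndexId="<your-kendra-index-id>",
    QueryText=question,
)["ResultItems"]

context = "\n\n".join(p["Content"] for p in passages[:3])

# Ground the model's answer in the retrieved passages.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])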

Optimizing AI responsiveness: A practical guide to Amazon Bedrock latency-optimized inference

In this post, we explore how Amazon Bedrock latency-optimized inference can help address the challenges of maintaining responsiveness in LLM applications. We dive deep into strategies for optimizing application performance and improving user experience. Whether you’re building a new AI application or optimizing an existing one, you’ll find practical guidance on both the technical aspects of latency optimization and real-world implementation approaches. We begin by explaining latency in LLM applications.
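
As context for that discussion, here is a minimal sketch of requesting latency-optimized inference through the Bedrock Converse API’s performanceConfig parameter; the model ID and Region are illustrative, and support varies by model and Region.

import time
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-2")

start = time.time()
response = client.converse(
    modelId="us.anthropic.claude-3-5-haiku-20241022-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize RAG in one sentence."}]}],
    performanceConfig={"latency": "optimized"},  # default is "standard"
)
elapsed = time.time() - start
print(f"{elapsed:.2f}s:", response["output"]["message"]["content"][0]["text"])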

Track LLM model evaluation using Amazon SageMaker managed MLflow and FMEval

In this post, we show how to use FMEval and Amazon SageMaker to programmatically evaluate LLMs. FMEval is an open source LLM evaluation library, designed to provide data scientists and machine learning (ML) engineers with a code-first experience to evaluate LLMs for various aspects, including accuracy, toxicity, fairness, robustness, and efficiency.
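
To give a flavor of the workflow, the following sketch runs an FMEval toxicity evaluation against a Bedrock-hosted model and logs the resulting scores to a SageMaker managed MLflow tracking server. The tracking server ARN and model details are placeholders; the post covers the full setup.

import mlflow  # requires the sagemaker-mlflow plugin for ARN tracking URIs
from fmeval.eval_algorithms.toxicity import Toxicity, ToxicityConfig
from fmeval.model_runners.bedrock_model_runner import BedrockModelRunner

mlflow.set_tracking_uri("arn:aws:sagemaker:<region>:<account>:mlflow-tracking-server/<name>")

# Wrap a Bedrock model so FMEval can send prompts and parse responses.
model_runner = BedrockModelRunner(
    model_id="amazon.titan-text-express-v1",
    content_template='{"inputText": $prompt}',
    output="results[0].outputText",
)

eval_algo = Toxicity(ToxicityConfig())
with mlflow.start_run(run_name="toxicity-eval"):
    # Without a dataset_config, FMEval evaluates on its built-in datasets.
    for eval_output in eval_algo.evaluate(model=model_runner, save=True):
        for score in eval_output.dataset_scores:
            mlflow.log_metric(f"{eval_output.dataset_name}_{score.name}", score.value)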

Create a SageMaker inference endpoint with a custom model and extended container

This post walks you through the end-to-end process of deploying a single custom model on SageMaker using NASA’s Prithvi model. The Prithvi model is a first-of-its-kind temporal Vision Transformer, pre-trained by the IBM and NASA team on contiguous US Harmonized Landsat Sentinel-2 (HLS) data. It can be fine-tuned for image segmentation using the mmsegmentation library for use cases like burn scar detection, flood mapping, and multi-temporal crop classification.
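
For orientation, deploying such a model with the SageMaker Python SDK might look like this sketch, assuming you have already pushed an extended inference container to Amazon ECR and packaged the model artifacts; all URIs and names are placeholders.

import sagemaker
from sagemaker.model import Model

role = sagemaker.get_execution_role()  # inside SageMaker; otherwise pass a role ARN

model = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/prithvi-inference:latest",
    model_data="s3://<bucket>/prithvi/model.tar.gz",  # weights plus inference code
    role=role,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # GPU instance; size to the model
    endpoint_name="prithvi-endpoint",
)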

Image and video prompt engineering for Amazon Nova Canvas and Amazon Nova Reel

Amazon has introduced two new creative content generation models on Amazon Bedrock: Amazon Nova Canvas for image generation and Amazon Nova Reel for video creation. These models transform text and image inputs into custom visuals, opening up creative opportunities for both professional and personal projects. Nova Canvas, a state-of-the-art image generation model, creates professional-grade images […]
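
As a taste of the API, the following sketch generates an image with Nova Canvas through the Bedrock InvokeModel API; the prompt and generation parameters are illustrative, not recommendations from the post.

import base64
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "A lighthouse on a cliff at golden hour, cinematic, 35mm film",
        "negativeText": "blurry, low quality",
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "width": 1024,
        "height": 1024,
        "cfgScale": 8.0,
        "seed": 42,
    },
}

response = client.invoke_model(
    modelId="amazon.nova-canvas-v1:0",
    body=json.dumps(body),
)
image_b64 = json.loads(response["body"].read())["images"][0]
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(image_b64))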

Security best practices to consider while fine-tuning models in Amazon Bedrock

In this post, we show how to implement secure fine-tuning jobs in Amazon Bedrock, which is crucial for protecting sensitive data and maintaining the integrity of your AI models. By following the best practices outlined in the post, including proper IAM role configuration, encryption at rest and in transit, and network isolation, you can significantly enhance the security posture of your fine-tuning processes.
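
A condensed sketch of such a job, with those controls expressed as API parameters, might look like the following; all ARNs, IDs, and S3 URIs are placeholders.

import boto3

bedrock = boto3.client("bedrock")

bedrock.create_model_customization_job(
    jobName="secure-finetune-job",
    customModelName="my-secure-custom-model",
    roleArn="arn:aws:iam::<account>:role/BedrockFineTuneRole",  # least-privilege IAM role
    baseModelIdentifier="arn:aws:bedrock:<region>::foundation-model/<base-model-id>",
    trainingDataConfig={"s3Uri": "s3://<bucket>/train/data.jsonl"},
    outputDataConfig={"s3Uri": "s3://<bucket>/output/"},
    customModelKmsKeyId="arn:aws:kms:<region>:<account>:key/<key-id>",  # CMK at rest
    vpcConfig={  # keep training traffic inside your VPC
        "subnetIds": ["subnet-<id-1>", "subnet-<id-2>"],
        "securityGroupIds": ["sg-<id>"],
    },
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)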

Streamline custom environment provisioning for Amazon SageMaker Studio: An automated CI/CD pipeline approach

In this post, we show how to create an automated continuous integration and delivery (CI/CD) pipeline solution to build, scan, and deploy custom Docker images to SageMaker Studio domains. You can use this solution to promote consistency of the analytical environments for data science teams across your enterprise.
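
As a hint of what the pipeline automates, its final stage roughly corresponds to the SageMaker API calls below, which register the scanned image and attach it to a Studio domain; names and IDs are placeholders.

import boto3

sm = boto3.client("sagemaker")

sm.create_image(
    ImageName="custom-ds-env",
    RoleArn="arn:aws:iam::<account>:role/SageMakerImageRole",
)
sm.create_image_version(
    ImageName="custom-ds-env",
    BaseImage="<account>.dkr.ecr.<region>.amazonaws.com/custom-ds-env:latest",
)
sm.create_app_image_config(
    AppImageConfigName="custom-ds-env-config",
    KernelGatewayImageConfig={
        "KernelSpecs": [{"Name": "python3", "DisplayName": "Custom DS Env"}],
    },
)
# Make the image selectable from every user's Studio launcher.
sm.update_domain(
    DomainId="d-<domain-id>",
    DefaultUserSettings={
        "KernelGatewayAppSettings": {
            "CustomImages": [{
                "ImageName": "custom-ds-env",
                "AppImageConfigName": "custom-ds-env-config",
            }],
        },
    },
)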

Enhance your customer’s omnichannel experience with Amazon Bedrock and Amazon Lex

In this post, we show you how to set up Amazon Lex for an omnichannel chatbot experience and Amazon Bedrock as your secondary validation layer. This lets your customers provide out-of-band responses at both the intent and slot collection levels without being re-prompted, for a seamless customer experience.
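
To make the pattern concrete, a heavily simplified Lex V2 code hook Lambda that uses Bedrock for secondary validation might look like the following sketch; the slot name, prompt, and model ID are illustrative.

import boto3

bedrock = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    utterance = event.get("inputTranscript", "")
    intent = event["sessionState"]["intent"]

    # Ask the LLM to pull a slot value out of an out-of-band reply.
    result = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": (
            f"Extract the city name from this reply, or answer NONE: '{utterance}'"
        )}]}],
    )
    value = result["output"]["message"]["content"][0]["text"].strip()

    if value != "NONE":
        # Fill the slot ourselves instead of re-prompting the customer.
        intent["slots"]["City"] = {
            "value": {"interpretedValue": value, "originalValue": utterance}
        }

    # Hand control back to Lex to continue the conversation.
    return {
        "sessionState": {
            "sessionAttributes": event["sessionState"].get("sessionAttributes", {}),
            "dialogAction": {"type": "Delegate"},
            "intent": intent,
        }
    }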