Artificial Intelligence
Transforming network operations with AI: How Swisscom built a network assistant using Amazon Bedrock
In this post, we explore how Swisscom developed their Network Assistant. We discuss the initial challenges and how they implemented a solution that delivers measurable benefits. We examine the technical architecture, discuss key learnings, and look at future enhancements that can further transform network operations.
End-to-end model training and deployment with Amazon SageMaker Unified Studio
In this post, we guide you through the stages of customizing large language models (LLMs) with SageMaker Unified Studio and SageMaker AI, covering the end-to-end process from data discovery to fine-tuning foundation models (FMs) with SageMaker AI distributed training, tracking metrics using MLflow, and deploying models with SageMaker AI inference for real-time use. We also share best practices for choosing the right instance size and for debugging in JupyterLab notebooks in SageMaker Unified Studio.
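As a minimal sketch of the metric-tracking step mentioned above (not the post's exact code), the snippet below logs fine-tuning parameters and loss values to an MLflow tracking server from a notebook; the tracking server ARN, experiment name, and metric values are placeholders.

```python
# Minimal sketch: tracking fine-tuning metrics with MLflow from a SageMaker
# Unified Studio JupyterLab notebook. The ARN, names, and values below are
# illustrative placeholders, not taken from the post.
import mlflow

mlflow.set_tracking_uri("arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/my-server")  # hypothetical
mlflow.set_experiment("llm-fine-tuning")

with mlflow.start_run(run_name="fine-tune-run-1"):
    mlflow.log_params({"base_model": "my-base-llm", "epochs": 3, "learning_rate": 2e-4})
    for epoch, loss in enumerate([1.92, 1.41, 1.18], start=1):  # illustrative values
        mlflow.log_metric("train_loss", loss, step=epoch)
```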
Optimize RAG in production environments using Amazon SageMaker JumpStart and Amazon OpenSearch Service
In this post, we show how to use Amazon OpenSearch Service as a vector store to build an efficient RAG application.
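To give a flavor of the retrieval step in such an application (a minimal sketch, not the post's implementation), the following queries an OpenSearch k-NN index for the chunks most similar to a query embedding; the domain endpoint, index name, and field names are assumptions.

```python
# Minimal sketch of vector retrieval against Amazon OpenSearch Service
# for a RAG application. Endpoint, index, and field names are assumptions.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],  # placeholder
    use_ssl=True,  # in practice, add SigV4 auth (e.g., AWSV4SignerAuth)
)

def retrieve_context(query_embedding, k=3):
    """Return the top-k most similar document chunks for a query embedding."""
    body = {
        "size": k,
        "query": {"knn": {"embedding": {"vector": query_embedding, "k": k}}},
        "_source": ["text"],
    }
    response = client.search(index="rag-documents", body=body)
    return [hit["_source"]["text"] for hit in response["hits"]["hits"]]
```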
Advancing AI agent governance with Boomi and AWS: A unified approach to observability and compliance
In this post, we share how Boomi partnered with AWS to help enterprises accelerate and scale AI adoption with confidence using Agent Control Tower.
Use Amazon SageMaker Unified Studio to build complex AI workflows using Amazon Bedrock Flows
In this post, we demonstrate how you can use SageMaker Unified Studio to create complex AI workflows using Amazon Bedrock Flows.
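Once such a flow is built and published, it can be invoked programmatically. Below is a minimal sketch of calling a Bedrock flow from code; the flow and alias identifiers, node names, and prompt are placeholders rather than the workflow built in the post.

```python
# Minimal sketch of invoking an existing Amazon Bedrock flow.
# Identifiers and node names are placeholders.
import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.invoke_flow(
    flowIdentifier="FLOW_ID",          # placeholder
    flowAliasIdentifier="FLOW_ALIAS",  # placeholder
    inputs=[{
        "nodeName": "FlowInputNode",
        "nodeOutputName": "document",
        "content": {"document": "Summarize last quarter's support tickets."},
    }],
)

# The response is an event stream; print any flow output events.
for event in response["responseStream"]:
    if "flowOutputEvent" in event:
        print(event["flowOutputEvent"]["content"]["document"])
```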
Accelerating AI innovation: Scale MCP servers for enterprise workloads with Amazon Bedrock
In this post, we present a centralized Model Context Protocol (MCP) server implementation using Amazon Bedrock that provides shared access to tools and resources for enterprise AI workloads. The solution enables organizations to accelerate AI innovation by standardizing access to resources and tools through MCP, while maintaining security and governance through a centralized approach.
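For readers unfamiliar with MCP, here is a minimal sketch of a tool a shared server could expose, written with the open source Python MCP SDK; the tool itself is a hypothetical example, not one of the tools from the solution.

```python
# Minimal sketch of a shared tool exposed by an MCP server.
# The tool and its data are hypothetical, for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("enterprise-tools")

@mcp.tool()
def lookup_customer(customer_id: str) -> dict:
    """Return basic account details for a customer ID (stubbed for illustration)."""
    return {"customer_id": customer_id, "tier": "enterprise", "region": "eu-central-1"}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default; MCP clients connect to it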
Choosing the right approach for generative AI-powered structured data retrieval
In this post, we explore five patterns for implementing LLM-powered structured data query capabilities on AWS, including direct conversational interfaces, BI tool enhancements, and custom text-to-SQL solutions.
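As an illustration of the custom text-to-SQL pattern (a minimal sketch under assumed schema, model ID, and prompt, not the post's solution), the snippet below asks a model on Amazon Bedrock to translate a natural language question into SQL.

```python
# Minimal sketch of a text-to-SQL call using the Amazon Bedrock Converse API.
# Schema, model ID, and prompt are illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime")

SCHEMA = "orders(order_id, customer_id, order_date, total_amount)"

def question_to_sql(question: str) -> str:
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # any supported model ID
        messages=[{"role": "user", "content": [{"text":
            f"Schema: {SCHEMA}\nWrite a single SQL query answering: {question}\n"
            "Return only the SQL."}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(question_to_sql("What was total revenue in March 2024?"))
```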
Revolutionizing drug data analysis using Amazon Bedrock multimodal RAG capabilities
In this post, we explore how Amazon Bedrock’s multimodal RAG capabilities revolutionize drug data analysis by efficiently processing complex medical documentation containing text, images, graphs, and tables.
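To show what querying such a pipeline can look like (a minimal sketch assuming a knowledge base that has already ingested the documentation; the knowledge base ID, model ARN, and question are placeholders), the following uses the Bedrock retrieve-and-generate API.

```python
# Minimal sketch of querying a Bedrock knowledge base over drug documentation.
# Knowledge base ID, model ARN, and question are placeholders.
import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.retrieve_and_generate(
    input={"text": "What adverse events are reported for compound X at the 10 mg dose?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB_ID",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)
print(response["output"]["text"])
```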
Build and deploy AI inference workflows with new enhancements to the Amazon SageMaker Python SDK
In this post, we provide an overview of the user experience, detailing how to set up and deploy these workflows with multiple models using the SageMaker Python SDK. We walk through examples of building complex inference workflows, deploying them to SageMaker endpoints, and invoking them for real-time inference.
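The invocation step, at least, looks the same as for any SageMaker endpoint. Here is a minimal sketch of calling a deployed workflow endpoint with the SageMaker Python SDK; the endpoint name and payload shape are assumptions, not the post's example.

```python
# Minimal sketch of invoking a deployed inference workflow endpoint
# with the SageMaker Python SDK. Endpoint name and payload are assumptions.
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

predictor = Predictor(
    endpoint_name="my-inference-workflow-endpoint",  # placeholder
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)

result = predictor.predict({"inputs": "Summarize the attached claim description."})
print(result)
```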
Context extraction from image files in Amazon Q Business using LLMs
In this post, we walk through a step-by-step implementation of the custom document enrichment (CDE) feature within an Amazon Q Business application to process standalone image files. We show an AWS Lambda function configured within CDE to process various image file types, and present an example scenario of how this integration enhances Amazon Q Business’s ability to provide comprehensive insights.
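The sketch below gives a rough idea of such a Lambda function: it reads an image from Amazon S3 and asks a multimodal model on Amazon Bedrock to extract its content. The event field names, return shape, and model ID are simplified assumptions, not the exact CDE contract described in the post.

```python
# Minimal sketch of a CDE-style Lambda function that extracts text from an
# image via a multimodal model on Amazon Bedrock. Event fields and return
# shape are simplified assumptions.
import json
import boto3

s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    bucket = event["s3Bucket"]   # assumed field name
    key = event["s3ObjectKey"]   # assumed field name
    image_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # any multimodal model ID
        messages=[{"role": "user", "content": [
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            {"text": "Describe this image and extract any visible text."},
        ]}],
    )
    extracted = response["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"extractedText": extracted})}
```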