Guidance for Agentic AI Operational Foundations on AWS
Overview
This Guidance demonstrates how to accelerate and de-risk AI agent development through a comprehensive, production-ready approach. It shows organizations how to establish essential service capabilities, including multi-model governance, robust observability, and automated guardrails: critical elements that are often overlooked in early AI projects. The solution helps teams avoid common pitfalls and reduce time to value by providing pre-integrated components and proven architectural patterns that support the complete AI application lifecycle. By implementing centralized monitoring, evaluation, and safety controls, this Guidance enables organizations to scale their AI initiatives reliably while maintaining visibility and control over model behavior and costs. This approach transforms scattered proofs of concept into sustainable, production-grade AI solutions that can evolve with business needs.
Benefits
Deploy intelligent AI agents that understand context and automate support workflows. Reduce response times while maintaining personalized, high-quality customer interactions through Amazon Bedrock's agentic capabilities.
Handle increasing customer inquiries automatically using serverless AI orchestration. Your agents learn from each interaction while AWS manages the infrastructure, enabling cost-effective growth.
Connect existing Zendesk workflows with AI-powered knowledge retrieval and web search capabilities. Enable seamless escalation paths while maintaining comprehensive observability across all customer interactions.
How it works
Agentic AI Operational Foundations
This architecture diagram illustrates how to support applications using agentic AI on AWS, showing the key components and their interactions. Authenticated users interact with AI-powered agents through a frontend application, where Amazon Bedrock AgentCore orchestrates LangGraph-based agents that access knowledge bases, perform web searches, and create support tickets. The architecture incorporates comprehensive security, scalable storage, external integrations, and monitoring capabilities to deliver intelligent, contextual customer support experiences. To deploy the Generative AI Gateway (LiteLLM), refer to Diagram 2.
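The orchestration described above, where the agent decides per turn whether to query a knowledge base, run a web search, or create a support ticket, can be sketched conceptually as a tool-routing loop. This is a minimal illustrative sketch only, not the Guidance's actual LangGraph implementation; every function name, tool key, and routing rule below is an assumption made for illustration.

```python
# Conceptual sketch of the agent's tool-routing step. In the real Guidance,
# a LangGraph graph and a Bedrock-hosted model make this decision; here a
# simple keyword heuristic stands in for the model so the flow is visible.

def search_knowledge_base(query: str) -> str:
    # Placeholder for a knowledge-base retrieval call
    return f"KB results for: {query}"

def web_search(query: str) -> str:
    # Placeholder for a web-search tool call
    return f"Web results for: {query}"

def create_support_ticket(summary: str) -> str:
    # Placeholder for a ticketing integration (e.g. Zendesk)
    return f"Ticket created: {summary}"

TOOLS = {
    "kb": search_knowledge_base,
    "web": web_search,
    "ticket": create_support_ticket,
}

def route(intent: str) -> str:
    """Stand-in for the model's tool-selection decision."""
    if "ticket" in intent or "escalate" in intent:
        return "ticket"
    if "latest" in intent or "news" in intent:
        return "web"
    return "kb"

def handle_turn(user_message: str) -> str:
    """One agent turn: pick a tool for the message, then invoke it."""
    tool_name = route(user_message.lower())
    return TOOLS[tool_name](user_message)

print(handle_turn("How do I reset my password?"))
# -> "KB results for: How do I reset my password?"
```

In the deployed architecture, the routing decision is made by the LLM rather than keywords, and each tool call is traced so the observability layer can attribute latency and cost per step.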
Multi-Provider Generative AI Gateway
This architecture diagram demonstrates how to streamline access to numerous large language models (LLMs) through a unified, industry-standard API gateway based on OpenAI API standards. By deploying this architecture, you can simplify integration while gaining access to tools that track LLM usage, manage costs, and implement crucial governance features. This allows easy switching between models, efficient management of multiple LLM services within applications, and robust control over security and expenses.
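Because the gateway exposes an OpenAI-compatible API, clients integrate by targeting the gateway's base URL and naming a routed model; switching providers then means changing only the model string. The sketch below shows the shape of such a request payload. The gateway URL and model identifier are illustrative assumptions, not values from the Guidance.

```python
# Hedged sketch: building an OpenAI-format chat completion request for an
# OpenAI-compatible gateway such as LiteLLM. The host and model name are
# placeholders; substitute your deployed gateway endpoint and a model
# configured in its routing table.
import json

GATEWAY_URL = "https://your-litellm-gateway.example.com/v1/chat/completions"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build a chat completion payload in the OpenAI API format."""
    return {
        "model": model,  # which routed LLM the gateway should call
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("example-routed-model", "Hello")
print(json.dumps(payload, indent=2))
```

Since the payload format is standard, swapping the underlying provider requires no client-side code changes, which is what enables the model switching and centralized cost tracking described above.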
Deploy with confidence
Ready to deploy? Review the sample code on GitHub for detailed deployment instructions, then deploy as-is or customize it to fit your needs.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.