AWS Security Blog
Category: Amazon Bedrock
Defending LLM applications against Unicode character smuggling
When interacting with AI applications, even seemingly innocent elements, such as Unicode characters, can have significant implications for security and data integrity. At Amazon Web Services (AWS), we continuously evaluate and address emerging threats across all aspects of AI systems. In this blog post, we explore Unicode tag blocks, a specific range of characters spanning from U+E0000 to […]
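The excerpt above is truncated, but the core defensive idea can be sketched briefly. The following minimal Python sketch (the helper name and sample input are hypothetical, not taken from the post) strips code points in the Unicode Tags block before untrusted text reaches an LLM:

# Minimal sketch: drop code points in the Unicode Tags block (U+E0000-U+E007F),
# which can carry invisible text that humans don't see but a model will still read.
TAGS_START, TAGS_END = 0xE0000, 0xE007F

def strip_tag_characters(text):
    """Return text with Unicode tag-block characters removed."""
    return "".join(ch for ch in text if not (TAGS_START <= ord(ch) <= TAGS_END))

# Hypothetical smuggled payload: a visible prompt plus hidden tag characters.
user_input = "Summarize this report." + chr(0xE0041) + chr(0xE0042)
print(len(user_input), len(strip_tag_characters(user_input)))  # cleaned string is two characters shorter

A stricter variant could reject input outright when tag characters are detected, so an attempted injection is surfaced rather than silently removed.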
Build secure network architectures for generative AI applications using AWS services
As generative AI becomes foundational across industries, powering everything from conversational agents to real-time media synthesis, it also creates new opportunities for bad actors to exploit. The complex architectures behind generative AI applications expose a large attack surface, including public-facing APIs, inference services, custom web applications, and integrations with cloud infrastructure. These systems are not immune to […]
Enabling AI adoption at scale through enterprise risk management framework – Part 2
In Part 1 of this series, we explored the fundamental risks and governance considerations. In this part, we examine practical strategies for adapting your enterprise risk management framework (ERMF) to harness generative AI’s power while maintaining robust controls. This part covers: Adapting your ERMF for the cloud; Adapting your ERMF for generative AI; Sustainable Risk […]
Enabling AI adoption at scale through enterprise risk management framework – Part 1
According to BCG research, 84% of executives view responsible AI as a top management responsibility, yet only 25% of them have programs that fully address it. Responsible AI can be achieved through effective governance, and with the rapid adoption of generative AI, this governance has become a business imperative, not just an IT concern. By […]
Authorizing access to data with RAG implementations
Organizations are increasingly using large language models (LLMs) to provide new types of customer interactions through generative AI-powered chatbots, virtual assistants, and intelligent search capabilities. To enhance these interactions, organizations are using Retrieval-Augmented Generation (RAG) to incorporate proprietary data, industry-specific knowledge, and internal documentation, providing more accurate, contextual responses. With RAG, LLMs use an […]
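To make the authorization idea concrete, here is a minimal, hypothetical sketch (the chunk structure, group names, and helper are assumptions for illustration, not the post's implementation) that filters retrieved documents against the caller's entitlements before they are added to the prompt:

# Hypothetical sketch: only chunks whose ACL overlaps the caller's groups reach the prompt.
def authorized_context(retrieved_chunks, user_groups):
    """Keep only retrieved chunks the requesting user is allowed to see."""
    allowed = set(user_groups)
    return [chunk for chunk in retrieved_chunks if allowed & set(chunk["allowed_groups"])]

chunks = [
    {"text": "Q3 revenue forecast ...", "allowed_groups": ["finance"]},
    {"text": "Public product FAQ ...", "allowed_groups": ["everyone"]},
]
context = authorized_context(chunks, user_groups=["engineering", "everyone"])
# Only the public FAQ chunk is included in the prompt for this user.

In practice, the entitlement check would be driven by the identity provider and the data source's own permissions rather than a hard-coded group list.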
Empower AI agents with user context using Amazon Cognito
Amazon Cognito is a managed customer identity and access management (CIAM) service that enables seamless user sign-up and sign-in for web and mobile applications. Through user pools, Amazon Cognito provides a user directory with strong authentication features, including passkeys, federation to external identity providers (IdPs), and OAuth 2.0 flows for secure machine-to-machine (M2M) authorization. Amazon […]
Use an Amazon Bedrock-powered chatbot with Amazon Security Lake to help investigate incidents
In part 2 of this series, we showed you how to use Amazon SageMaker Studio notebooks with natural language input to assist with threat hunting. This is done by using SageMaker Studio with Amazon Bedrock to automatically generate SQL queries and run them on Amazon Athena against data in Amazon Security Lake. The Security Lake service team and […]
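As a rough sketch of that pattern (the model ID, prompt, database name, and S3 output location below are placeholders, and the post's notebooks are more involved), a natural-language question can be turned into SQL with Amazon Bedrock and then executed on Amazon Athena:

import boto3  # AWS SDK for Python

bedrock = boto3.client("bedrock-runtime")
athena = boto3.client("athena")

# Ask a foundation model to translate a natural-language question into SQL.
question = "Which source IPs generated the most GuardDuty findings last week?"
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": f"Write an Athena SQL query: {question}"}]}],
)
sql = response["output"]["message"]["content"][0]["text"]

# Run the generated query against Security Lake tables through Athena.
athena.start_query_execution(
    QueryString=sql,
    QueryExecutionContext={"Database": "amazon_security_lake_glue_db_us_east_1"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
)

A real notebook would also validate and constrain the generated SQL before running it.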
Announcing AWS Security Reference Architecture Code Examples for Generative AI
Amazon Web Services (AWS) is pleased to announce the release of new Security Reference Architecture (SRA) code examples for securing generative AI workloads. The examples include two comprehensive capabilities focusing on secure model inference and RAG implementations, covering a wide range of security best practices using AWS generative AI services. These new code examples are […]
Implementing least privilege access for Amazon Bedrock
April 9, 2025: We updated content about Amazon Bedrock Guardrails to cover the recently added condition key bedrock:GuardrailIdentifier. March 27, 2025: Two policies in this post were updated. Generative AI applications often involve a combination of various services and features—such as Amazon Bedrock and large language models (LLMs)—to generate content and to access potentially confidential […]
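As a hedged illustration of the condition key mentioned above (the guardrail ARN, account ID, policy name, and Deny-unless-matching shape are placeholders; see the post for the exact policies), an identity-based policy can refuse model invocation unless a specific guardrail is applied:

import json
import boto3  # AWS SDK for Python

# Hypothetical policy: deny bedrock:InvokeModel unless the request carries the expected
# guardrail, enforced through the bedrock:GuardrailIdentifier condition key.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "bedrock:InvokeModel",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "bedrock:GuardrailIdentifier": "arn:aws:bedrock:us-east-1:111122223333:guardrail/EXAMPLE1234"  # placeholder ARN
            }
        },
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="RequireBedrockGuardrailExample",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)

Because the statement denies any request whose guardrail identifier does not match, requests that omit a guardrail entirely are also blocked, which is typically the intent of this control.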
Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 2
In part 1 of this blog series, we walked through the risks associated with using sensitive data as part of your generative AI application. This overview provided a baseline of the challenges of using sensitive data with a non-deterministic large language model (LLM) and how to mitigate these challenges with Amazon Bedrock Agents. The next […]