AWS Security Blog
Category: Amazon Bedrock
Implementing least privilege access for Amazon Bedrock
Generative AI applications often involve a combination of services and features, such as Amazon Bedrock and large language models (LLMs), to generate content and to access potentially confidential data. This combination requires strong identity and access management controls, and it is distinctive in that those controls need to be applied at multiple levels. In this […]
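As a rough illustration of the least privilege theme of this post (not its actual implementation), the sketch below uses boto3 to create an IAM policy that allows invoking only a single foundation model instead of every model in the account. The policy name, Region, and model ARN are hypothetical placeholders.

```python
import json
import boto3

# Hypothetical example: scope bedrock:InvokeModel to one foundation model ARN
# rather than allowing invocation of every model available in the account.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSingleModelInvocation",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Placeholder model ARN; replace with the model your application uses.
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        }
    ],
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName="BedrockInvokeSingleModel",  # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```

Attaching a policy scoped this narrowly to the application role is one way the "controls at multiple levels" idea shows up in practice; resource-level scoping complements network and data-layer controls covered elsewhere in this category.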
Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 2
In part 1 of this blog series, we walked through the risks associated with using sensitive data as part of your generative AI application. This overview provided a baseline of the challenges of using sensitive data with a non-deterministic large language model (LLM) and how to mitigate these challenges with Amazon Bedrock Agents. The next […]
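As a loose sketch of one pattern related to Amazon Bedrock Agents (assumed here, not taken from the post itself), forwarding the verified caller identity to an agent as session attributes lets the agent's action groups make per-user authorization decisions. The agent ID, alias ID, and attribute names below are hypothetical.

```python
import uuid
import boto3

# Minimal sketch (not the post's exact implementation): forward the verified
# caller identity to a Bedrock agent as session attributes so downstream
# action groups can enforce data authorization for that specific user.
client = boto3.client("bedrock-agent-runtime")

response = client.invoke_agent(
    agentId="AGENT_ID_PLACEHOLDER",          # hypothetical agent ID
    agentAliasId="AGENT_ALIAS_PLACEHOLDER",  # hypothetical alias ID
    sessionId=str(uuid.uuid4()),
    inputText="Summarize my open support cases.",
    sessionState={
        "sessionAttributes": {
            # Values should come from your identity provider, never from the prompt.
            "user_id": "example-user-123",
            "tenant_id": "example-tenant-456",
        }
    },
)

# invoke_agent streams the completion back in chunks.
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        print(chunk["bytes"].decode("utf-8"), end="")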
Securing the RAG ingestion pipeline: Filtering mechanisms
Retrieval Augmented Generation (RAG) applications enhance the responses retrieved from large language models (LLMs) by integrating external data such as downloaded files, scraped web content, and user-contributed data pools. This integration improves the models’ performance by adding relevant context to the prompt. While RAG applications are a powerful way to dynamically add additional context to an LLM’s prompt […]
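As a simplified sketch of ingestion-time filtering (illustrative only, not the pipeline described in the post), the snippet below drops documents from unapproved sources and redacts obvious email addresses before the text would be chunked and embedded. The allow list and regular expression are assumptions for the example.

```python
import re
from dataclasses import dataclass

# Simplified ingestion-time filter: keep only documents from an approved
# source allow list and redact email addresses before indexing for retrieval.
ALLOWED_SOURCES = {"internal-wiki", "product-docs"}  # hypothetical allow list
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

@dataclass
class Document:
    source: str
    text: str

def filter_for_ingestion(docs: list[Document]) -> list[Document]:
    cleaned = []
    for doc in docs:
        if doc.source not in ALLOWED_SOURCES:
            continue  # skip data pools never approved for the knowledge base
        redacted = EMAIL_PATTERN.sub("[REDACTED_EMAIL]", doc.text)
        cleaned.append(Document(source=doc.source, text=redacted))
    return cleaned

docs = [
    Document("internal-wiki", "Contact jane.doe@example.com for access."),
    Document("random-upload", "Untrusted user-contributed content."),
]
print(filter_for_ingestion(docs))
```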
Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 1
This is part 1 of a two-part blog series. See part 2. Data security and data authorization, as distinct from user authorization, are critical components of business workload architectures. Their importance has grown with the evolution of artificial intelligence (AI) technology, with generative AI introducing new opportunities to use internal data sources with large […]
Enhancing data privacy with layered authorization for Amazon Bedrock Agents
Customers are finding several advantages to using generative AI within their applications. However, using generative AI adds new considerations when reviewing the threat model of an application, whether you’re using it to improve the customer experience, for operational efficiency, to generate more tailored or specific results, or for other reasons. Generative AI models are inherently […]
Network perimeter security protections for generative AI
Generative AI–based applications have grown in popularity in the last couple of years. Applications built with large language models (LLMs) have the potential to increase the value companies bring to their customers. In this blog post, we dive deep into network perimeter protection for generative AI applications. We’ll walk through the different areas of network […]
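One perimeter control in this space is keeping Bedrock API traffic on private networking. As a hedged sketch of that idea (not necessarily the approach the post takes), the call below creates an interface VPC endpoint for the Bedrock runtime service; the VPC, subnet, and security group IDs are placeholders.

```python
import boto3

# Sketch only: create an interface VPC endpoint so calls to the Bedrock
# runtime API stay on AWS private networking instead of the public internet.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet ID
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group ID
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```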
Hardening the RAG chatbot architecture powered by Amazon Bedrock: Blueprint for secure design and anti-pattern mitigation
Mitigate risks such as data exposure, model exploits, and ethical lapses when deploying Amazon Bedrock chatbots. Implement guardrails, encryption, access controls, and governance frameworks.
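As an illustrative sketch of the guardrail control mentioned above (assuming a guardrail has already been created in Amazon Bedrock), the snippet below attaches it to a Converse API call so the request and response are evaluated against its content policies. The guardrail ID, version, and model ID are placeholders.

```python
import boto3

# Sketch: attach a pre-created Amazon Bedrock guardrail to a Converse call so
# both the prompt and the model response pass through its content policies.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize our refund policy."}]}
    ],
    guardrailConfig={
        "guardrailIdentifier": "gr-placeholder-id",  # hypothetical guardrail ID
        "guardrailVersion": "1",
    },
)
print(response["output"]["message"]["content"][0]["text"])
```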
Context window overflow: Breaking the barrier
Have you ever pondered the intricate workings of generative artificial intelligence (AI) models, especially how they process and generate responses? At the heart of this fascinating process lies the context window, a critical element determining the amount of information an AI model can handle at a given time. But what happens when you exceed the […]
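As a rough sketch of the budget idea behind a context window (using a crude word count rather than a real model tokenizer, and an assumed limit), the snippet below drops the oldest turns of a conversation until the prompt fits, which is one common way to avoid overflowing the window.

```python
# Crude sketch: keep the most recent conversation turns within an assumed
# context budget, measured here in words rather than real model tokens.
MAX_BUDGET_WORDS = 3000  # hypothetical limit; real limits are model-specific token counts

def count_words(text: str) -> int:
    return len(text.split())

def trim_history(turns: list[str], budget: int = MAX_BUDGET_WORDS) -> list[str]:
    kept: list[str] = []
    used = 0
    # Walk backwards so the newest turns are kept and the oldest are dropped first.
    for turn in reversed(turns):
        cost = count_words(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = ["old question " * 50, "old answer " * 50, "newest question about billing"]
print(trim_history(history, budget=120))
```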
Securing generative AI: data, compliance, and privacy considerations
Generative artificial intelligence (AI) has captured the imagination of organizations and individuals around the world, and many have already adopted it to help improve workforce productivity, transform customer experiences, and more. When you use a generative AI-based service, you should understand how the information that you enter into the application is stored, processed, shared, and […]
Securing generative AI: Applying relevant security controls
This is part 3 of a series of posts on securing generative AI. We recommend starting with the overview post Securing generative AI: An introduction to the Generative AI Security Scoping Matrix, which introduces the scoping matrix detailed in this post. This post discusses the considerations when implementing security controls to protect a generative AI […]