AWS Security Blog

Category: Artificial Intelligence

Threat modeling your generative AI workload to evaluate security risk

As generative AI models become increasingly integrated into business applications, it’s crucial to evaluate the potential security risks they introduce. At AWS re:Invent 2023, we presented on this topic, helping hundreds of customers maintain high-velocity decision-making for adopting new technologies securely. Customers who attended this session were able to better understand our recommended approach for […]

Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 1

April 3, 2025: We’ve updated this post to reflect the new 2025 OWASP top 10 for LLM entries. This is part 1 of a two-part blog series. See part 2. Data security and data authorization, as distinct from user authorization, are critical components of business workload architectures. Their importance has grown with the evolution […]
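
To make that distinction concrete, here is a minimal sketch (not code from the post) of data-level authorization applied to retrieved content before it reaches a model: the user may already be authenticated, but each retrieved chunk is still checked against that user's data entitlements. The function and field names are illustrative assumptions.

```python
# Illustrative only: data authorization applied to retrieval results,
# separate from user authentication. Field names are assumptions.

def filter_by_entitlement(chunks: list[dict], entitlements: set[str]) -> list[str]:
    """Keep only retrieved chunks whose data domain the caller may read."""
    return [c["text"] for c in chunks if c["domain"] in entitlements]

# Same query, different callers, different context handed to the LLM.
retrieved = [
    {"text": "Q3 payroll summary", "domain": "hr"},
    {"text": "Public product FAQ", "domain": "public"},
]
print(filter_by_entitlement(retrieved, {"public"}))        # ['Public product FAQ']
print(filter_by_entitlement(retrieved, {"public", "hr"}))  # both chunks
```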

Enhancing data privacy with layered authorization for Amazon Bedrock Agents

April 3, 2025: We’ve updated this post to reflect the new 2025 OWASP top 10 for LLM entries. Customers are finding several advantages to using generative AI within their applications. However, using generative AI adds new considerations when reviewing the threat model of an application, whether you’re using it to improve the customer experience for […]
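
As one hedged illustration of layering (not the post's own code), an action group Lambda for a Bedrock agent can re-check an entitlement carried in session attributes before returning any data, so authorization does not rest on the agent's reasoning alone. The event and response shapes below follow the Bedrock Agents Lambda contract as commonly documented; treat the exact fields, the role names, and the data-access helper as assumptions.

```python
# Hedged sketch: a second authorization layer inside an agent action group.
ALLOWED_ROLES = {"support_agent", "account_admin"}  # hypothetical policy

def lookup_account_details(parameters):
    # Stand-in for the real data access layer.
    return "Account details for the requested customer."

def lambda_handler(event, context):
    role = event.get("sessionAttributes", {}).get("userRole", "")
    if role not in ALLOWED_ROLES:
        body = "Access denied: caller is not authorized for this data."
    else:
        body = lookup_account_details(event.get("parameters", []))

    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "function": event.get("function"),
            "functionResponse": {"responseBody": {"TEXT": {"body": body}}},
        },
    }
```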

Methodology for incident response on generative AI workloads

The AWS Customer Incident Response Team (CIRT) has developed a methodology that you can use to investigate security incidents involving generative AI-based applications. To respond to security events related to a generative AI workload, you should still follow the guidance and principles outlined in the AWS Security Incident Response Guide. However, generative AI workloads require […]
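
As a small, hedged example of one early step such an investigation might include (not the CIRT methodology itself), you can pull recent CloudTrail management events for the model service's API surface to see who changed or called what. The event source and the 24-hour window are assumptions for illustration.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

# Management events only; model-invocation data events need their own logging.
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventSource",
                       "AttributeValue": "bedrock.amazonaws.com"}],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```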

Network perimeter security protections for generative AI

Generative AI–based applications have grown in popularity in the last couple of years. Applications built with large language models (LLMs) have the potential to increase the value companies bring to their customers. In this blog post, we dive deep into network perimeter protection for generative AI applications. We’ll walk through the different areas of network […]
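
As a single hedged example of a perimeter control in this space (not the post's full walkthrough), you might associate an AWS WAF web ACL with the API front end of an LLM application; the ARNs below are placeholders, and a real deployment would add rules, logging, and private connectivity.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Placeholder ARNs: attach an existing web ACL to the API Gateway stage
# that fronts the generative AI application.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/genai-api-acl/EXAMPLE",
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)
```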

Hardening the RAG chatbot architecture powered by Amazon Bedrock: Blueprint for secure design and anti-pattern mitigation

Mitigate risks like data exposure, model exploits, and ethical lapses when deploying Amazon Bedrock chatbots. Implement guardrails, encryption, access controls, and governance frameworks.
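
As a minimal sketch of the first of those mitigations (guardrails), a chatbot's model call can reference a Bedrock guardrail directly; the guardrail ID, version, and model ID below are placeholders, and encryption, access controls, and governance are separate layers not shown here.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",        # placeholder model
    messages=[{"role": "user",
               "content": [{"text": "Summarize my account history."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-EXAMPLE",                 # placeholder guardrail ID
        "guardrailVersion": "1",
        "trace": "enabled",                                  # surface which policy intervened
    },
)
print(response["output"]["message"]["content"][0]["text"])
```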

Context window overflow: Breaking the barrier

Have you ever pondered the intricate workings of generative artificial intelligence (AI) models, especially how they process and generate responses? At the heart of this fascinating process lies the context window, a critical element determining the amount of information an AI model can handle at a given time. But what happens when you exceed the […]
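
To make the idea concrete, here is an illustrative check (not from the post) that estimates whether a prompt will fit before calling a model; the 8K-token limit and the 4-characters-per-token heuristic are assumptions, not properties of any particular model.

```python
CONTEXT_WINDOW_TOKENS = 8_000   # assumed limit for illustration
CHARS_PER_TOKEN = 4             # rough heuristic; real tokenizers vary by model

def fits_context(system_prompt: str, history: list[str], user_input: str,
                 reserved_for_output: int = 1_000) -> bool:
    """Approximate whether the combined prompt leaves room for the response."""
    total_chars = len(system_prompt) + sum(len(m) for m in history) + len(user_input)
    estimated_tokens = total_chars // CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW_TOKENS

# Oversized pasted input should be truncated or summarized, not sent as-is.
print(fits_context("You are a helpful assistant.", [], "hello"))       # True
print(fits_context("You are a helpful assistant.", [], "x" * 60_000))  # False
```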

Accelerate security automation using Amazon CodeWhisperer

In an ever-changing security landscape, teams must be able to quickly remediate security risks. Many organizations look for ways to automate the remediation of security findings that are currently handled manually. Amazon CodeWhisperer is an artificial intelligence (AI) coding companion that generates real-time, single-line or full-function code suggestions in your integrated development environment (IDE) to […]
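
As a hedged example of the kind of remediation such automation targets (not code generated by or taken from the post), an EventBridge-invoked Lambda could block public access on an S3 bucket named in a Security Hub finding; the event shape shown is simplified, and generated suggestions like this should always be reviewed before deployment.

```python
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Simplified assumption: the finding's first resource ID is the bucket ARN.
    bucket = event["detail"]["findings"][0]["Resources"][0]["Id"].split(":::")[-1]
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    return {"remediated_bucket": bucket}
```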

Figure 1: Generative AI Scoping Matrix

Securing generative AI: data, compliance, and privacy considerations

Generative artificial intelligence (AI) has captured the imagination of organizations and individuals around the world, and many have already adopted it to help improve workforce productivity, transform customer experiences, and more. When you use a generative AI-based service, you should understand how the information that you enter into the application is stored, processed, shared, and […]

Data flow diagram for a generic Scope 1 consumer application and Scope 2 enterprise application

Securing generative AI: Applying relevant security controls

This is part 3 of a series of posts on securing generative AI. We recommend starting with the overview post Securing generative AI: An introduction to the Generative AI Security Scoping Matrix, which introduces the scoping matrix detailed in this post. This post discusses the considerations when implementing security controls to protect a generative AI […]