AWS Security Blog
Category: Generative AI
Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 2
In part 1 of this blog series, we walked through the risks associated with using sensitive data as part of your generative AI application. This overview provided a baseline of the challenges of using sensitive data with a non-deterministic large language model (LLM) and how to mitigate these challenges with Amazon Bedrock Agents. The next […]
Safeguard your generative AI workloads from prompt injections
January 23, 2025: We updated this post to clarify the definition of indirect prompt injection and provided a new example of indirect prompt injection. Generative AI applications have become powerful tools for creating human-like content, but they also introduce new security challenges, including prompt injections, excessive agency, and others. See the OWASP Top 10 for […]
New AWS Skill Builder course available: Securing Generative AI on AWS
To support our customers in securing their generative AI workloads on Amazon Web Services (AWS), we are excited to announce the launch of a new AWS Skill Builder course: Securing Generative AI on AWS. This comprehensive course is designed to help security professionals, architects, and artificial intelligence and machine learning (AI/ML) engineers understand and implement […]
Preparing for take-off: Regulatory perspectives on generative AI adoption within Australian financial services
The Australian financial services regulator, the Australian Prudential Regulation Authority (APRA), has provided its most substantial guidance on generative AI to date in Member Therese McCarthy Hockey’s remarks to the AFIA Risk Summit 2024. The guidance gives a green light for banks, insurance companies, and superannuation funds to accelerate their adoption of this transformative technology, […]
Exploring the benefits of artificial intelligence while maintaining digital sovereignty
Around the world, organizations are evaluating and embracing artificial intelligence (AI) and machine learning (ML) to drive innovation and efficiency. From accelerating research and enhancing customer experiences to optimizing business processes, improving patient outcomes, and enriching public services, the transformative potential of AI is being realized across sectors. Although using emerging technologies helps drive positive […]
Securing the RAG ingestion pipeline: Filtering mechanisms
Retrieval-Augmented Generative (RAG) applications enhance the responses retrieved from large language models (LLMs) by integrating external data such as downloaded files, web scrapings, and user-contributed data pools. This integration improves the models’ performance by adding relevant context to the prompt. While RAG applications are a powerful way to dynamically add additional context to an LLM’s prompt […]
Threat modeling your generative AI workload to evaluate security risk
As generative AI models become increasingly integrated into business applications, it’s crucial to evaluate the potential security risks they introduce. At AWS re:Invent 2023, we presented on this topic, helping hundreds of customers maintain high-velocity decision-making for adopting new technologies securely. Customers who attended this session were able to better understand our recommended approach for […]
Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 1
This is part 1 of a two-part blog series. See part 2. Data security and data authorization, as distinct from user authorization, is a critical component of business workload architectures. Its importance has grown with the evolution of artificial intelligence (AI) technology, with generative AI introducing new opportunities to use internal data sources with large […]
Enhancing data privacy with layered authorization for Amazon Bedrock Agents
Customers are finding several advantages to using generative AI within their applications. However, using generative AI adds new considerations when reviewing the threat model of an application, whether you’re using it to improve the customer experience for operational efficiency, to generate more tailored or specific results, or for other reasons. Generative AI models are inherently […]
Methodology for incident response on generative AI workloads
The AWS Customer Incident Response Team (CIRT) has developed a methodology that you can use to investigate security incidents involving generative AI-based applications. To respond to security events related to a generative AI workload, you should still follow the guidance and principles outlined in the AWS Security Incident Response Guide. However, generative AI workloads require […]