AWS Security Blog
Category: Artificial Intelligence
Preparing for take-off: Regulatory perspectives on generative AI adoption within Australian financial services
The Australian financial services regulator, the Australian Prudential Regulation Authority (APRA), has provided its most substantial guidance on generative AI to date in Member Therese McCarthy Hockey’s remarks to the AFIA Risk Summit 2024. The guidance gives a green light for banks, insurance companies, and superannuation funds to accelerate their adoption of this transformative technology, […]
Exploring the benefits of artificial intelligence while maintaining digital sovereignty
Around the world, organizations are evaluating and embracing artificial intelligence (AI) and machine learning (ML) to drive innovation and efficiency. From accelerating research and enhancing customer experiences to optimizing business processes, improving patient outcomes, and enriching public services, the transformative potential of AI is being realized across sectors. Although using emerging technologies helps drive positive […]
Securing the RAG ingestion pipeline: Filtering mechanisms
Retrieval-Augmented Generative (RAG) applications enhance the responses retrieved from large language models (LLMs) by integrating external data such as downloaded files, web scrapings, and user-contributed data pools. This integration improves the models’ performance by adding relevant context to the prompt. While RAG applications are a powerful way to dynamically add additional context to an LLM’s prompt […]
Threat modeling your generative AI workload to evaluate security risk
As generative AI models become increasingly integrated into business applications, it’s crucial to evaluate the potential security risks they introduce. At AWS re:Invent 2023, we presented on this topic, helping hundreds of customers maintain high-velocity decision-making for adopting new technologies securely. Customers who attended this session were able to better understand our recommended approach for […]
Implement effective data authorization mechanisms to secure your data used in generative AI applications
Data security and data authorization, as distinct from user authorization, is a critical component of business workload architectures. Its importance has grown with the evolution of artificial intelligence (AI) technology, with generative AI introducing new opportunities to use internal data sources with large language models (LLMs) and multimodal foundation models (FMs) to augment model outputs. […]
Enhancing data privacy with layered authorization for Amazon Bedrock Agents
Customers are finding several advantages to using generative AI within their applications. However, using generative AI adds new considerations when reviewing the threat model of an application, whether you’re using it to improve the customer experience for operational efficiency, to generate more tailored or specific results, or for other reasons. Generative AI models are inherently […]
Methodology for incident response on generative AI workloads
The AWS Customer Incident Response Team (CIRT) has developed a methodology that you can use to investigate security incidents involving generative AI-based applications. To respond to security events related to a generative AI workload, you should still follow the guidance and principles outlined in the AWS Security Incident Response Guide. However, generative AI workloads require […]
Network perimeter security protections for generative AI
Generative AI–based applications have grown in popularity in the last couple of years. Applications built with large language models (LLMs) have the potential to increase the value companies bring to their customers. In this blog post, we dive deep into network perimeter protection for generative AI applications. We’ll walk through the different areas of network […]
Hardening the RAG chatbot architecture powered by Amazon Bedrock: Blueprint for secure design and anti-pattern mitigation
Mitigate risks like data exposure, model exploits, and ethical lapses when deploying Amazon Bedrock chatbots. Implement guardrails, encryption, access controls, and governance frameworks.
Context window overflow: Breaking the barrier
Have you ever pondered the intricate workings of generative artificial intelligence (AI) models, especially how they process and generate responses? At the heart of this fascinating process lies the context window, a critical element determining the amount of information an AI model can handle at a given time. But what happens when you exceed the […]