Overview
As organizations increasingly deploy agentic AI systems that can autonomously run tasks, make decisions, and interact with infrastructure, new security challenges emerge that go beyond traditional AI security approaches. The Agentic AI Security Scoping Matrix provides a structured framework to help you understand, classify, and secure your autonomous AI implementations.
The Agentic AI Security Scoping Matrix is a comprehensive framework developed by AWS that helps organizations systematically identify, scope, and address the unique security challenges of autonomous AI systems. Building upon our proven Generative AI Security Scoping Matrix, this framework specifically targets agentic AI systems that possess varying degrees of autonomy and decision-making capabilities.
Agentic AI Security Scoping Matrix
A mental model to classify use cases
Determine your scope
The matrix categorizes agentic AI systems into four distinct scopes based on their level of agency and human oversight:
Scope 1: No Agency
Human-initiated systems with no autonomous change capabilities, where agents follow predefined execution paths and operate in a read-only mode. These agentic systems provide information and recommendations but cannot make changes to external systems or data without explicit human action. Security focus centers on identity context, workflow integrity, input validation, and preventing agents from exceeding their defined operational boundaries.
Example: A human asks AI to search calendars for meeting availability; the AI returns contextual recommendations but cannot book meetings.
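To make the read-only boundary of Scope 1 concrete, the following minimal sketch registers only non-mutating tools with an illustrative agent wrapper; the `ReadOnlyAgent` class and `search_calendar` tool are hypothetical names used for this example, not part of any specific product API.

```python
# Minimal sketch of a Scope 1 ("No Agency") tool registry: the agent can only
# call read-only tools, so it can inform a human but never change external state.
# Tool names and the registry itself are illustrative assumptions, not an AWS API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class Tool:
    name: str
    func: Callable[..., str]
    read_only: bool  # Scope 1 requires this to be True for every registered tool

class ReadOnlyAgent:
    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        # Enforce the scope boundary at registration time, not just at call time.
        if not tool.read_only:
            raise ValueError(f"Scope 1 agents may not register mutating tool '{tool.name}'")
        self._tools[tool.name] = tool

    def invoke(self, name: str, **kwargs: str) -> str:
        return self._tools[name].func(**kwargs)

def search_calendar(attendee: str) -> str:
    # Hypothetical read-only lookup; a real system would query a calendar API.
    return f"{attendee} is free Tuesday 10:00-11:00"

agent = ReadOnlyAgent()
agent.register(Tool("search_calendar", search_calendar, read_only=True))
print(agent.invoke("search_calendar", attendee="alex"))  # informs, never books
```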
Scope 2: Prescribed Agency
Human-initiated systems that can recommend actions, including the ability to make changes to the environment, but require mandatory human approval before execution through "Human in the Loop" (HITL) workflows. Agents can analyze situations, propose solutions, and prepare actions, but all changes must be explicitly approved by authorized personnel. This scope balances automation benefits with human oversight, requiring robust approval workflows and secure communication channels between agents and human decision-makers.
Example: A human asks AI to recommend meeting times; the AI proposes options but requires human review and approval before sending invites.
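One way to picture the mandatory approval gate in Scope 2 is a queue of proposed actions that cannot execute until an authorized human approves them. The sketch below is a simplified illustration under that assumption; the `ApprovalQueue` and `ProposedAction` names are hypothetical.

```python
# Sketch of a Scope 2 ("Prescribed Agency") approval gate: the agent may prepare
# an action, but nothing executes until an authorized human explicitly approves it.
# Class and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class ProposedAction:
    description: str
    execute: Callable[[], str]
    approved_by: Optional[str] = None

class ApprovalQueue:
    def __init__(self, authorized_approvers: List[str]) -> None:
        self._approvers = set(authorized_approvers)
        self._pending: List[ProposedAction] = []

    def propose(self, action: ProposedAction) -> None:
        self._pending.append(action)

    def approve_and_run(self, index: int, approver: str) -> str:
        if approver not in self._approvers:
            raise PermissionError(f"{approver} is not authorized to approve actions")
        action = self._pending.pop(index)
        action.approved_by = approver          # record who approved, for audit
        return action.execute()                # only now does the change happen

queue = ApprovalQueue(authorized_approvers=["alex@example.com"])
queue.propose(ProposedAction(
    description="Send invite for Tuesday 10:00",
    execute=lambda: "invite sent",
))
print(queue.approve_and_run(0, approver="alex@example.com"))
```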
Scope 3: Supervised Agency
Human-initiated systems with autonomous execution capabilities that can make contextual decisions and take actions that can modify the environment without requiring further HITL approvals once activated. Agents operate within predefined, bounded parameters and can complete complex, multi-step tasks independently while maintaining alignment with original human objectives. Security emphasis shifts to continuous monitoring, behavioral validation, and implementing effective shut off switches for autonomous operations.
Example: A human asks AI to automatically book optimal meeting times and send calendar invites without explicit review or approval, allowing the AI to act within the bounds of its prompts, parameters, permissions, and contexts.
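A rough sketch of Scope 3 is an autonomous loop that executes a multi-step plan without per-action approval, but only within a fixed action allowlist and with a human-controlled shut-off switch. The bounds, action names, and `SupervisedAgent` class below are assumptions for illustration.

```python
# Sketch of Scope 3 ("Supervised Agency"): once started by a human, the agent
# executes multi-step work on its own, but every step is checked against fixed
# bounds and a human-controlled kill switch. Names are illustrative assumptions.
import threading

class SupervisedAgent:
    def __init__(self, max_actions: int, allowed_actions: set) -> None:
        self.max_actions = max_actions            # hard bound on autonomous work
        self.allowed_actions = allowed_actions    # allowlist of permitted actions
        self.kill_switch = threading.Event()      # a human can set this at any time
        self.audit_log = []

    def run(self, plan: list) -> None:
        for step, action in enumerate(plan):
            if self.kill_switch.is_set():
                self.audit_log.append("halted: kill switch engaged")
                return
            if step >= self.max_actions or action not in self.allowed_actions:
                self.audit_log.append(f"refused out-of-bounds action: {action}")
                return
            self.audit_log.append(f"executed: {action}")  # e.g. book a meeting slot

agent = SupervisedAgent(max_actions=5, allowed_actions={"check_calendar", "send_invite"})
agent.run(["check_calendar", "send_invite", "delete_mailbox"])  # last step is refused
print(agent.audit_log)
```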
Scope 4: Full Agency
Self-initiating systems that operate continuously with minimal human oversight, capable of invoking their own activities based on environmental factors, learned patterns, or predetermined conditions. These agents can identify opportunities, initiate workflows, and run complex operations autonomously across extended time periods. This scope brings the most risk and therefore requires the most sophisticated security controls, including advanced behavioral monitoring, anomaly detection, and automated containment mechanisms to manage the risks of fully autonomous operation. Agents must be bounded to perform within the limits of their intended design.
Example: An agentic meeting scheduler monitors summarized meeting notes, notices an action item where attendees agree to meet next week, autonomously checks participants' calendars for availability, and sends the invite.
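To illustrate the self-initiating pattern of Scope 4, the sketch below reacts to a trigger it detects in its environment and starts its own workflow, while an automated containment check can quarantine it if activity exceeds an assumed threshold; the trigger logic, threshold, and class names are hypothetical.

```python
# Sketch of Scope 4 ("Full Agency"): the agent invokes itself when it detects a
# trigger in its environment, and an automated containment check can quarantine
# it if behavior drifts outside its intended design. All names are illustrative.
from dataclasses import dataclass

@dataclass
class MeetingNote:
    text: str

def detects_action_item(note: MeetingNote) -> bool:
    # Hypothetical trigger: a real system might use an LLM or rules to find
    # "agree to meet" style action items in summarized notes.
    return "meet next week" in note.text.lower()

class ContainedAutonomousAgent:
    ACTIONS_PER_TRIGGER_LIMIT = 3  # assumed containment threshold

    def __init__(self) -> None:
        self.quarantined = False
        self.actions_taken = 0

    def on_new_note(self, note: MeetingNote) -> None:
        if self.quarantined or not detects_action_item(note):
            return
        # Self-initiated workflow: find availability, then send the invite.
        for action in ["scan_participant_calendars", "send_invite"]:
            if self.actions_taken >= self.ACTIONS_PER_TRIGGER_LIMIT:
                self.quarantined = True          # automated containment kicks in
                return
            self.actions_taken += 1
            print(f"autonomously performed: {action}")

agent = ContainedAutonomousAgent()
agent.on_new_note(MeetingNote("Attendees agree to meet next week to review the design."))
```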
Six Critical Security Dimensions
User, service, and agent identity management with appropriate authentication mechanisms.
Identity management is key for all scopes but becomes increasingly critical as agentic systems gain breadth, agency, and autonomy. In lower scopes, basic user authentication and read-only service accounts suffice, but higher scopes require sophisticated identity delegation, continuous verification, and agent identity attestation. This is particularly important in avoiding the Confused Deputy Problem, ensuring that agents don't allow a human or system to do more than what the initiating entity is authorized or entitled to do. Organizations must implement just-in-time credential issuance, trusted identity propagation, and dynamic identity context management to ensure secure operations across extended autonomous sessions. The challenge lies in maintaining strong identity verification while enabling seamless agent operations across multiple systems, sessions, and time periods.
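As a minimal sketch of just-in-time, scoped credential issuance, the example below grants an agent only the intersection of what it requests and what the initiating user is entitled to do, which is the confused-deputy guard in miniature; the credential fields and scope strings are assumptions, not a specific AWS mechanism.

```python
# Sketch of just-in-time, scoped credentials for an agent acting on a user's
# behalf. The agent can never exceed the intersection of what it requests and
# what the initiating user is entitled to do (avoiding the confused deputy problem).
# Credential fields and scope names are illustrative assumptions.
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedCredential:
    subject: str        # the initiating user the agent acts for
    scopes: frozenset   # never broader than the user's own entitlements
    expires_at: float   # short-lived: issued just in time for one task

def issue_credential(user: str, user_scopes: set,
                     requested: set, ttl_seconds: int = 300) -> ScopedCredential:
    granted = frozenset(requested & user_scopes)   # confused-deputy guard
    return ScopedCredential(user, granted, time.time() + ttl_seconds)

def authorize(cred: ScopedCredential, action_scope: str) -> bool:
    return time.time() < cred.expires_at and action_scope in cred.scopes

cred = issue_credential(
    user="alex",
    user_scopes={"calendar:read"},                  # alex cannot write calendars
    requested={"calendar:read", "calendar:write"},  # agent asks for more
)
print(authorize(cred, "calendar:read"))   # True
print(authorize(cred, "calendar:write"))  # False: agent cannot outrun its principal
```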
Persistent memory and state security with memory poisoning attack prevention.
Agentic AI systems often maintain persistent memory and state information in ways that traditional AI security approaches might not address. Memory poisoning attacks become a significant concern as agents store and recall information across sessions. Organizations need comprehensive data validation, state encryption between operations, and secure persistent storage mechanisms. As agency increases, or as the sensitivity of the data agents are allowed to work with increases, agentic systems require advanced memory protection, dynamic data classification, and privacy-preserving computation to prevent unauthorized access to sensitive information stored in agent memory.
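One simple way to picture memory-poisoning protection is to store each agent memory entry with a keyed hash and refuse to recall entries whose hash no longer matches. The HMAC-based sketch below illustrates that idea under simplified assumptions about storage and key handling.

```python
# Sketch of tamper-evident agent memory: each persisted entry carries an HMAC,
# and recall verifies it so silently modified ("poisoned") memories are rejected.
# The storage layout and key handling are simplified assumptions.
import hmac, hashlib, json

SECRET_KEY = b"replace-with-a-managed-key"  # in practice, from a key management service

def write_memory(store: dict, key: str, value: dict) -> None:
    payload = json.dumps(value, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    store[key] = {"payload": payload.decode(), "tag": tag}

def read_memory(store: dict, key: str) -> dict:
    entry = store[key]
    expected = hmac.new(SECRET_KEY, entry["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, entry["tag"]):
        raise ValueError(f"memory entry '{key}' failed integrity check; discarding it")
    return json.loads(entry["payload"])

store = {}
write_memory(store, "user_prefs", {"preferred_meeting_length": 30})
store["user_prefs"]["payload"] = '{"preferred_meeting_length": 480}'  # simulated poisoning
try:
    read_memory(store, "user_prefs")
except ValueError as err:
    print(err)
```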
Comprehensive agent action tracking and reasoning chain capture for accountability.
Comprehensive logging transforms from simple input/output tracking to capturing complex reasoning chains and behavioral patterns. Lower scopes focus on workflow execution logs and policy enforcement tracking, while higher scopes require sophisticated behavioral analytics and predictive monitoring. While lower scopes require human review to instantiate workflows or execute actions, higher scopes shift human review toward regular auditing and improvement of agentic actions. Organizations must implement tamper-evident logging systems that capture not just what agents do, but why they make specific decisions. This includes correlation of related events across systems and explainable AI techniques for autonomous decision documentation.
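A minimal sketch of tamper-evident decision logging is a hash-chained log in which each record captures the action, the agent's stated reasoning, and the hash of the previous record, so any rewrite of history breaks the chain; the record fields below are illustrative assumptions.

```python
# Sketch of a hash-chained agent decision log: each record links to the previous
# record's hash, capturing not just what the agent did but why. Field names are
# illustrative assumptions.
import hashlib, json, time

class DecisionLog:
    def __init__(self) -> None:
        self.records = []

    def append(self, action: str, reasoning: str) -> None:
        prev_hash = self.records[-1]["hash"] if self.records else "genesis"
        body = {"ts": time.time(), "action": action,
                "reasoning": reasoning, "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.records.append(body)

    def verify(self) -> bool:
        # Recompute every hash; any edited record (or broken link) is detected.
        prev = "genesis"
        for rec in self.records:
            body = {k: rec[k] for k in ("ts", "action", "reasoning", "prev_hash")}
            if rec["prev_hash"] != prev or rec["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = rec["hash"]
        return True

log = DecisionLog()
log.append("propose_meeting_time", "all attendees free Tuesday 10:00")
log.append("send_invite", "human approval recorded for Tuesday 10:00")
print(log.verify())                       # True
log.records[0]["reasoning"] = "edited"    # tampering breaks the chain
print(log.verify())                       # False
```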
Guardrails, behavioral monitoring, sandboxing, and isolation mechanisms.
Traditional guardrails evolve into dynamic behavioral monitoring and automated containment systems. Basic input/output validation expands to include containerized execution environments, resource quotas, and circuit breakers for runaway processes. Higher agency scopes require continuous behavioral analysis that benefits from machine learning, real-time anomaly detection, and self-healing security mechanisms. Organizations must balance operational flexibility with security constraints, implementing controls that can adapt to changing contexts while maintaining safety boundaries.
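To illustrate the circuit-breaker idea for containing runaway agent processes, the sketch below trips after a configurable number of failed tool calls and refuses further calls until reset; the threshold and the wrapped tool are assumptions for this example.

```python
# Sketch of a circuit breaker around agent tool calls: after too many failures
# the breaker opens and all further calls are refused until a human resets it.
# Thresholds and the wrapped tool are illustrative assumptions.
from typing import Callable

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3) -> None:
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False  # "open" means calls are blocked

    def call(self, tool: Callable[[], str]) -> str:
        if self.open:
            raise RuntimeError("circuit open: agent tool calls are suspended")
        try:
            result = tool()
            self.failures = 0          # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True       # contain the runaway behavior
            raise

def flaky_tool() -> str:
    raise TimeoutError("downstream service not responding")

breaker = CircuitBreaker(failure_threshold=2)
for _ in range(3):
    try:
        breaker.call(flaky_tool)
    except Exception as err:
        print(type(err).__name__, err)
```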
Clear operational boundaries and dynamic constraint evaluation.
Security boundaries shift from static, hard-coded constraints to also include dynamic, context-aware limitations that can adapt to operational needs. Lower scopes rely on fixed execution boundaries and predefined action limits, while higher scopes may implement self-adjusting boundaries and intelligent constraint enforcement, giving agents the flexibility to solve contextual problems in safe but creative ways. However, Scope 4 agents should never be allowed to operate outside the bounds of their designed purpose. Organizations need systems that can evaluate constraints in real-time, adjust resource allocation dynamically, and maintain alignment with original objectives even as operational parameters change manually or autonomously.
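The shift from static allowlists to context-aware boundaries can be pictured as a policy evaluated at request time against the current operational context; the specific rules and context fields in the sketch below are assumptions chosen for illustration.

```python
# Sketch of dynamic, context-aware constraint evaluation: each proposed action is
# checked against rules that look at current context rather than a fixed list.
# Rule contents and context fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    actions_this_hour: int
    business_hours: bool
    data_sensitivity: str  # e.g. "public", "internal", "confidential"

def within_bounds(action: str, ctx: Context) -> bool:
    rules = [
        ctx.actions_this_hour < 20,                      # dynamic rate bound
        ctx.business_hours or action == "read_only",     # stricter off-hours
        not (ctx.data_sensitivity == "confidential"      # sensitive data narrows scope
             and action == "external_share"),
    ]
    return all(rules)

ctx = Context(actions_this_hour=3, business_hours=False, data_sensitivity="internal")
print(within_bounds("send_invite", ctx))   # False: only read_only allowed off-hours
print(within_bounds("read_only", ctx))     # True
```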
Agent-to-system interaction management with tool access and execution flow control.
Service coordination evolves from predetermined workflows to self-optimizing processes that can adapt and improve over time. Simple sequential control gives way to dynamic service orchestration, autonomous flow management, and inter-agent coordination protocols, even in lower scopes but especially in higher ones. Organizations must implement transaction management across multiple systems, rollback and compensation mechanisms, and behavioral monitoring across orchestrated components. The challenge is maintaining control and visibility while enabling agents to optimize their own operational processes.
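A compact way to illustrate rollback and compensation across multiple systems is a saga-style runner: each completed step registers a compensating action, and a failure partway through unwinds everything already done. The step names below are hypothetical.

```python
# Sketch of saga-style compensation for agent-orchestrated, multi-system work:
# every completed step registers an undo action, and a failure triggers rollback
# of everything already done. Step names are illustrative assumptions.
from typing import Callable, List, Tuple

def run_saga(steps: List[Tuple[str, Callable[[], None], Callable[[], None]]]) -> None:
    completed: List[Tuple[str, Callable[[], None]]] = []
    try:
        for name, do, undo in steps:
            do()
            completed.append((name, undo))   # remember how to compensate
    except Exception as err:
        print(f"step failed ({err}); compensating in reverse order")
        for name, undo in reversed(completed):
            undo()
            print(f"compensated: {name}")

def fail() -> None:
    raise RuntimeError("room booking service unavailable")

run_saga([
    ("hold_calendar_slot", lambda: print("slot held"), lambda: print("slot released")),
    ("book_room", fail, lambda: None),
    ("send_invite", lambda: print("invite sent"), lambda: None),
])
```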