Workload Isolation enables you to create and manage isolated environments to contain newly created or migrated workloads. This approach reduces the blast radius of vulnerabilities and threats, and eases the complexity of compliance by providing mechanisms to isolate access to resources.

Architecture Diagram

Download the architecture diagram PDF 

Implementation Resources

As you scale your cloud environment beyond a single workload, it is important to develop repeatable processes for provisioning groups of isolated resources and workloads within a set of guardrails. This maintains consistent controls and boundaries that support application development teams and other consumers.

These controls will span cost, regulatory, and compliance domains, with implementation often starting as runbook-based mechanisms. The mechanisms then become automated as scale and business requirements drive the need for automation. By automating, you can reduce human error through the use of Continuous Integration / Continuous Deployment (CI/CD) and Infrastructure as Code (IaC) principles.

    • Scenario
    • Design isolated resource environments

      • Logically organize isolated resource environments
      • Create foundational environments
      • Create workload environments
      • Create environments for specific use cases
    • Overview
    • Creating an identity and access management isolation boundary for workloads reduces the risk of a workload infrastructure update impacting a different workload, simplifies cost management, and allows application teams to operate within a bounded environment. Being able to implement preventative or detective controls on isolated resource environments allows you to apply different controls to different workload types or Software Development Life Cycle (SDLC) environments (such as dev, test, and prod).

      Workloads often have distinct security profiles that require separate control policies and mechanisms to support them. For example, it’s common to have different security and operational requirements for the non-production and production environments of a given workload. The resources and data that make up a workload are separated from other environments and workloads with defined isolation boundaries.

      You can group workloads with a common business purpose in distinct environments. This enables you to align the ownership and decision making with those environments, avoiding dependencies and conflicts with how workloads in other environments are secured and managed.

    • Implementation
    • AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Whether you are a growing startup or a large enterprise, AWS Organizations helps you to centrally provision accounts and resources; secure and audit your environment for compliance; share resources; control access to accounts, Regions, and services; as well as optimize costs and simplify billing. Additionally, AWS Organizations supports aggregation of health events, consolidated data on use of access permissions, and centralized policies, including backup and tag policies, for multi-account environments.

      An AWS account acts as an identity and access management isolation boundary. The sharing of resources or data between accounts requires explicit policy statements that allow cross-account access, so it is recommended to treat an AWS account as the lowest level of workload isolation. Organizational units (OUs), a logical grouping of accounts or other OUs, allow you to group accounts together and administer them as a single unit. AWS Organizations allows you to apply organizational policies at the root of the organization, to organizational units, or to specific accounts. It is therefore necessary to build a deliberate and strategic account structure.

      Refer to the Organizing Your AWS Environment Using Multiple Accounts whitepaper for best practices around designing your Organization’s account structure.

      Example top level OU structure:

      Create foundational environments
      Reference architecture
      1. Management Account: The management account, also known as the payer account, is the AWS account where you enable AWS Control Tower. Control Tower enables AWS Organizations, a CloudTrail Organization Trail, and AWS Identity and Access Management (IAM) Identity Center. Additionally, the global account baseline should be applied to the management account. Control Tower is managed from this account, including the management of Control Tower detective and preventative guardrails.
      2. Security Organization Unit: The Security OU is a foundational OU that should not contain any business applications. It is created by Control Tower. Your security organization should own and manage this OU, along with any child OUs and associated accounts. Control Tower creates a Log Archive and Security Tooling (also known as Audit) account. The security tooling account is used to manage the security services in the organization. This is achieved by delegating the management of supported security services to this account. The Log Archive account is the central aggregation point for organization audit, security, network and application logs.
      3. Infrastructure Organization Unit: The Infrastructure OU is intended to contain shared infrastructure services. The accounts in this OU are also considered administrative, and your infrastructure and operations teams should own and manage this OU, any child OUs, and associated accounts. The Infrastructure OU holds the following accounts: Network account(s), Operational Tooling account(s), and Shared Services account(s).
      4. Other Organization Units: OUs should be designed strategically. OUs provide a way for you to organize your accounts so that it's easier to apply common policies and deploy services and common configurations to accounts that have similar needs. Refer to the Organizing Your AWS Environment Using Multiple Accounts whitepaper for details on how to design your account structure.
      5. Global account baseline: Controls, services, and configuration that should be deployed and configured in all accounts in the organization are referred to as global account baselines. Control Tower provides an account baseline in all Control Tower-managed accounts and Regions, which includes AWS CloudTrail, AWS Config, deletion of default VPC components, and some preventative and detective guardrails. Additional controls, services, and configuration may also be required, and a mechanism to apply these to all newly created accounts should exist. There are some exceptions: for example, a baseline won't necessarily be deployed to accounts within the Exceptions OU, and SCPs don't apply to the management account.
      6. Custom account baseline: Controls, services, and configuration that may differ between Organization Units are referred to as custom account baselines. For example, controls, network connectivity, and security and operational tool requirements may differ between production, non-production, or sandbox environments. To support scaling out of the organization, accounts should be organized logically within OUs so that these baselines can be configured for all accounts within an OU or multiple OUs, and not configured at the account level.
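      Where you manage the global or custom account baseline without Control Tower, CloudFormation StackSets with service-managed permissions can target OUs so that existing and newly created accounts receive the baseline automatically. The following is a minimal sketch using boto3 from the management account (or a StackSets delegated administrator); the template file, stack set name, OU ID, and Regions are placeholder assumptions, and trusted access for StackSets must already be enabled in AWS Organizations.

```python
import boto3

# Baseline template (placeholder file name) that defines the resources every account should receive.
with open("baseline.yaml") as f:
    template_body = f.read()

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Service-managed stack set: automatically deploys to accounts added under the targeted OUs.
cfn.create_stack_set(
    StackSetName="org-account-baseline",  # placeholder name
    TemplateBody=template_body,
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Deploy the baseline to all current accounts under the targeted OU(s).
cfn.create_stack_instances(
    StackSetName="org-account-baseline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-xxxx-11111111"]},  # placeholder OU ID
    Regions=["us-east-1", "us-west-2"],  # placeholder Regions
)
```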

      It is important to understand the different organizational constructs that will be used to manage your AWS environment.

      AWS account: If you are starting off on your Cloud Foundation journey, you will need an AWS account. An AWS account acts as an identity and access management isolation boundary. When you need to share resources and data between two accounts, you must explicitly allow this access. By default, no access is allowed between accounts. For example, if you designate different accounts to contain your production and non-production resources and data, by default, no access is allowed between those environments.

      Management account: As you follow this guidance, the AWS account that you start from will become the Management account of the Organization. The Management account (also called the AWS Organization Management account or Org Management account) is unique, and differentiated from every other account in AWS Organizations. From this account, you can create AWS accounts in the AWS Organization, invite other existing accounts to the AWS Organization (both types are considered member accounts), remove accounts from the AWS Organization, and apply Organizational policies to accounts within the AWS Organization. Due to this and the importance of the management account, we recommend that you use the management account and its users and roles only for tasks that need to be performed by that account. For more information on management account best practices, refer to Best practices for the management account.

      Note: The account where you enable AWS Control Tower or AWS Organizations becomes the Management account of the organization. If you already have an account to build your Cloud Foundations, you do not need to create a management account.

      Member accounts: Member accounts make up the rest of the accounts in an organization. An account can be a member of only one organization at a time. You can attach a policy to an account to apply controls to only that one account.

      Organizational Unit (OU): An OU is a virtual container used by AWS Organizations to classify and organize accounts and manage them as a single unit. You use organizational units to organize your AWS accounts.

      Foundational OU: The Foundational OU contains accounts, workloads, and other AWS resources that provide common security and infrastructure capabilities to secure and support your overall AWS environment. It is important to establish the foundational OU and accounts early on. This guidance will explain how to create and use the Foundational OU and accounts. Accounts, workloads, and data residing in the foundational OUs are typically owned by your centralized Cloud Platform or Cloud Engineering teams, made up of cross-functional representatives from your Security, Infrastructure, and Operations teams.

      Create the following foundational OUs:
      Security OU
      Description: The Security OU contains security accounts, and no application accounts or application workloads should exist within this OU. Your security organization should own and manage this OU, along with any child OUs and associated accounts. Common use cases for this OU include security tooling accounts, centralized management of security tools, and a centralized log aggregation account.
      How to create: For Control Tower users, this OU is automatically created. Alternatively, refer to the AWS Organizations user guide on how to create an organization and on creating OUs to create the Security OU.

      Infrastructure OU
      Description: The Infrastructure OU is intended to contain shared infrastructure services. The accounts in this OU are also considered administrative, and your infrastructure and operations teams should own and manage this OU, any child OUs, and associated accounts. The Infrastructure OU is used to hold AWS accounts containing AWS infrastructure resources shared or utilized by the rest of the organization; these are considered core accounts. No application accounts or application workloads are intended to exist within this OU. Common use cases for this OU include accounts that centralize management of resources. For example, a Network account might be used to centralize your AWS network, or an operations tooling account might centralize your operational tooling.
      How to create: For Control Tower users, follow the Control Tower user guide to create an OU named "Infrastructure". Alternatively, refer to the AWS Organizations user guide on creating OUs to create the Infrastructure OU.
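      For the AWS Organizations path, OU creation can also be scripted. A minimal boto3 sketch, assuming it runs with administrative credentials in the management account; the OU names follow the structure described above.

```python
import boto3

org = boto3.client("organizations")

# The organization root is the parent for top-level OUs.
root_id = org.list_roots()["Roots"][0]["Id"]

# Create the foundational OUs directly under the root.
security_ou = org.create_organizational_unit(ParentId=root_id, Name="Security")
infrastructure_ou = org.create_organizational_unit(ParentId=root_id, Name="Infrastructure")

print(security_ou["OrganizationalUnit"]["Id"], infrastructure_ou["OrganizationalUnit"]["Id"])
```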
      Create foundational accounts:
      Log Archive account (Security OU)
      Description: The log archive is an account that acts as a consolidation point for log data gathered from all the accounts in the organization, and is primarily used by your security, operations, audit, and compliance teams. This account contains a centralized storage location for copies of every account's audit, configuration compliance, and operational logs. It also provides a storage location for any other audit/compliance logs, as well as application/OS logs.
      How to create: For Control Tower users, this account is created in the Security OU automatically. Alternatively, you can create this account using AWS Organizations and move it to the Security OU.

      Security Tooling account (Security OU)
      Description: This account, which is referenced as the "Audit" account in Control Tower documentation, is used to provide centralized, delegated admin access to AWS security tooling and consoles, as well as view-only access for investigative purposes into all accounts in the organization. Access to the security tooling account should be restricted to authorized security and compliance personnel and related security teams. This account can be used as a central management and administration point for AWS security services, including Security Hub, GuardDuty, Amazon Macie, AWS AppConfig, Firewall Manager, Detective, Amazon Inspector, Config aggregation, and IAM Access Analyzer.
      How to create: For Control Tower users, this account is created in the Security OU automatically. Alternatively, you can create this account using AWS Organizations and move it to the Security OU.

      Network account (Infrastructure OU)
      Description: The Network account serves as the central hub for your network on AWS. You can manage your networking resources and route traffic between accounts in your environment, your on-premises network, and egress/ingress traffic to the internet. Within this account, your networking administrators can manage and build security measures to protect outbound and inbound traffic in your environment, centralizing AWS Site-to-Site VPN connections, AWS Direct Connect integrations, AWS Transit Gateway configurations, DNS services, Amazon VPC endpoints, and shared VPCs and subnets. More advanced use cases include the use of an ingress/egress or perimeter security account to host network security stacks, which provide centralized inbound and outbound internet traffic inspection, proxying, and filtering.
      How to create: For Control Tower users, create this account in Control Tower and move it to the Infrastructure OU. Alternatively, create this account using AWS Organizations and move it to the Infrastructure OU.

      Operations Tooling account (Infrastructure OU)
      Description: Operations tooling accounts can be used for day-to-day operational activities across your organization. The operations tooling account hosts the tools, dashboards, and services needed to centralize operations (including traditional syslog tooling or ITSM), where operational observability monitoring and metric tracking are hosted. These tools help the central operations team interact with the environment from a central location. Examples of services that can be delegated and centrally accessed in the Operations Tooling account include AWS Systems Manager, CloudFormation, and CloudWatch dashboards, metrics, and alarms.
      How to create: For Control Tower users, create this account in Control Tower and move it to the Infrastructure OU. Alternatively, create this account using AWS Organizations and move it to the Infrastructure OU.

      Shared Services account (Infrastructure OU)
      Description: The Shared Services account(s) can be used to expand other services provided to the entire organization, where Operations, Security, Infrastructure, and Finance teams build, deploy, and share resources and products across all the accounts in the organization. Examples of services that can be delegated to and centrally accessed in the Shared Services account include IAM Identity Center, AWS License Manager, and SSH or RDP bastions for your environment.
      How to create: For Control Tower users, create this account in Control Tower and move it to the Infrastructure OU. Alternatively, create this account using AWS Organizations and move it to the Infrastructure OU.
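      For the "create this account using AWS Organizations and move it" path, account creation is asynchronous: request the account, poll its status, then move it into the target OU. A hedged sketch with boto3; the email address, account name, and OU ID are placeholders.

```python
import time
import boto3

org = boto3.client("organizations")

# Request the new member account (asynchronous operation).
request = org.create_account(
    Email="company-prod-network@example.com",  # placeholder; must be unique
    AccountName="Network",                     # placeholder display name
)
request_id = request["CreateAccountStatus"]["Id"]

# Poll until account creation finishes.
while True:
    status = org.describe_create_account_status(CreateAccountRequestId=request_id)
    state = status["CreateAccountStatus"]["State"]
    if state != "IN_PROGRESS":
        break
    time.sleep(10)

if state == "SUCCEEDED":
    account_id = status["CreateAccountStatus"]["AccountId"]
    root_id = org.list_roots()["Roots"][0]["Id"]
    # Move the new account from the organization root into the Infrastructure OU.
    org.move_account(
        AccountId=account_id,
        SourceParentId=root_id,
        DestinationParentId="ou-xxxx-infra1111",  # placeholder Infrastructure OU ID
    )
```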
      Create workload environments

      Create an OU to hold the workload accounts or workload sub-OUs. The Workloads OU is intended to house most of your business-specific workloads, including both production and non-production environments. These workloads can be a mix of commercial off-the-shelf (COTS) applications and your own internally developed custom applications and data services. Workloads, and therefore AWS accounts, should be separated by Software Development Life Cycle (SDLC) environment (Prod, Dev, Test), and based on policy requirements, these accounts should be further separated into OUs by SDLC environment. It is common for production environments to be more restrictive than development environments, so those accounts should be separated into their own OU.

      In the following example, workload accounts are organized by Test and Production environments.

      Note: For Control Tower users, follow the AWS Control Tower user guide to create the Workloads OU.

      Alternatively, refer to the AWS Organizations user guide on creating OUs to create the Workload OU, and nested OUs (based on SDLC environment).
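      The same create_organizational_unit call used for the foundational OUs can build the Workloads OU and its nested SDLC OUs by passing the parent OU ID instead of the organization root; a short sketch, with the OU names as assumptions.

```python
import boto3

org = boto3.client("organizations")
root_id = org.list_roots()["Roots"][0]["Id"]

# Top-level Workloads OU, then nested SDLC OUs beneath it.
workloads = org.create_organizational_unit(ParentId=root_id, Name="Workloads")
workloads_id = workloads["OrganizationalUnit"]["Id"]
for env in ["Prod", "Test"]:
    org.create_organizational_unit(ParentId=workloads_id, Name=env)
```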

      Create environments for specific use cases

      As your environment grows, you may need to solve for new use cases. We recommend that you create different OUs to group accounts that will help you solve for these use cases. Some recommended OUs follow. For a full description of each of these organizational units, refer to the Organizing Your AWS Environment Using Multiple Accounts whitepaper.

      Sandbox OU: The Sandbox OU contains accounts in which your builders are generally free to explore and experiment with AWS services and other tools and services, subject to your acceptable use policies. These environments are typically disconnected from your internal networks and internal services. Sandbox accounts are not promoted to any other type of account or environment within the Workloads OU.

      Policy Staging OU: The Policy Staging OU is intended to help teams that manage overall policies for your AWS environment to safely test potentially broadly impacting policy changes before applying them to the intended OUs or accounts. For example, SCPs and tag policies should be tested prior to applying them to the intended OUs or accounts. Similarly, broadly applicable account baseline IAM roles and policies should also be tested using the Policy Staging OU.

      Suspended OU: The Suspended OU is used as a temporary holding area for accounts that are required to have their use suspended either temporarily or permanently. Moving an account to this OU doesn’t automatically change the overall status of the account. For example, in cases where you intend to permanently stop using an account, you would follow the Closing an account process to permanently close the account.

      Exceptions OU: The Exceptions OU contains accounts that require an exception to the security policies that are applied to your Workloads OU. Normally, there should be a minimal number of accounts, if any, in this OU. Given the unique nature of the exceptions, SCPs are typically applied at the account level in this OU. Due to the customized security controls that apply to these accounts, owners of these accounts can expect to experience greater scrutiny from security monitoring systems.

      Deployments OU: The Deployments OU contains resources and workloads that support how you build, validate, promote, and release changes to your workloads. If you intend to deploy and/or manage your own CI/CD capabilities in AWS or use AWS managed CI/CD services, we recommend that you use a set of production deployment accounts within the Deployments OU to house the CI/CD management capabilities.

      Transitional OU: The Transitional OU is intended as a temporary holding area for existing accounts and workloads that you move to your Organization before you formally integrate them into your more standardized areas of your AWS environment structure.

      Individual Business Users OU: The Individual Business Users OU houses accounts for individual business users and teams who need access to directly manage AWS resources outside the context of resources managed within your Workloads OU.

      In some cases, you can consider a small number of AWS resources as something other than a workload. For example, a business team might require write access to Amazon S3 buckets to share marketing videos and data with a business partner. In these cases, you might choose to manage these resources in accounts within the individual business users OU rather than in accounts in the Workloads OU.

    • Scenario
    • Provision process for isolated environments

      • Establish isolated resource environment request and provisioning processes
    • Overview
    • At a smaller scale, request and review processes for creating isolated resource environments often start with manual inputs to a cloud management team through a ticketing system or other request process. The cloud team then analyzes the request and ensures that a set of information is included, such as owner, email address, cost center, project number, and SDLC environment. Additional customer-specific metadata may also be collected based on requirements. If an application review board or enterprise architecture board is present within the customer environment, it may also review the request. For certain types of environments, it may be decided that approvals are not required, or an auto-approval mechanism may be put in place.

      Once approved, the isolated environment is created with a baseline configuration and controls that apply to all environments. Configuration and controls may also be applied based on the metadata within the request. Non-production or production environments, for example, may have different requirements and controls applied. Baseline configurations might include the removal of default password configurations, authentication methods, integration with an existing identity provider, IP allocation, network segmentation and connectivity, logging, and security tooling. Additional customer-required controls will be implemented at this point, often driven by industry-specific regulatory requirements or cloud control best practice guidance from common frameworks such as the Cloud Security Alliance Cloud Controls Matrix or the NIST Cybersecurity Framework (CSF) Reference Tool.

      As your environment matures, request, review, and approval processes can be automated using integration with existing IT service management or human workflow tools. Deployment processes can use automation through GitOps deployment pipelines and infrastructure as code.

    • Implementation
    • Create AWS accounts for each of your workloads or any of the use cases you are working on. AWS account creation can be automated using managed services such as AWS Control Tower, or through the 'create-account' API call. Account request and creation automation can be integrated with your ITSM solution, such as ServiceNow or Jira, using the AWS Service Management Connector.

      As part of the request process, it is often necessary to collect additional information to help make decisions on applicable controls, policies, roles, network connectivity, naming conventions, or services that are necessary for that account. For example:

      • SDLC Environment (such as Test, Dev, Prod)
      • Project name
      • Will the account hold sensitive data?
      • Will public network access be required?

      When you are creating new accounts, the following information is required:

      Account email: This is a unique email ID for the new account you are creating. This email account can be used to set the root password and ultimately log in as root; therefore, ensure that access to the email account is tightly controlled. It is strongly recommended to have an email naming convention so that each account can be easily identified. For example, company-prod-security-tooling@example.com clearly identifies the production security tooling account for your company.

      Display name: This is the account alias that will be shown in the Organizations console. This alias will also be displayed in the Control Tower and IAM Identity Center consoles.

      Note: If you are using Control Tower, refer to the AWS Control Tower user guide for additional information required to create new accounts.

      Once a new account is created, we recommend you set a password and configure multifactor authentication (MFA) for the root user. Initiating a password reset sends an email to the account email address, which allows the root user to set a password and then sign in to the account. By default, the root user has complete access to all services and resources (although Service Control Policies do apply to the root user of member accounts), so it is strongly recommended that you do not use the root user for any task where it is not absolutely required. Refer to Tasks that require root user credentials for more information on when the root user may need to be used. We recommend logging in to the account as the root user, configuring the password following your company's password complexity requirements, configuring MFA for the root user, and storing the credentials and MFA token in a secure location.

    • Scenario
    • Implement controls on isolated resource environments

      • Deploy preventative controls explicitly denying non-compliant actions for unauthorized users
      • Deploy detective controls to track and report non-compliant architectures
    • Overview
    • Implementing controls on isolated resource environments provides cloud consumers with the autonomy to build workloads that meet their business objectives, while keeping the organization compliant with standards and regulations. Controls can be either preventative or detective.

      Preventative controls prevent events from occurring based on a variety of criteria, including the event type, action, resource, caller identity, and more. For example, a preventative control might deny all users from deleting a specific resource.

      Detective controls are designed to detect, log, and alert after an event has occurred. These controls will not block actions from occurring; instead, they can be used to provide notifications, generate incidents, or initiate automation to address the violation. Detective controls can be helpful in validating whether preventative controls are working, auditing an environment, building automated responses to address violations, or measuring the success of process changes.

      The sooner a guardrail can be evaluated in the deployment process, the better. Implementing preventative guardrails in the deployment process allows compliance risks to be caught earlier, before an attempt to deploy resources even occurs. Identifying risk earlier also saves time during deployments. Building controls into deployment workflows allows code to be inspected against standards, which can enforce guardrail compliance as resources are being provisioned.

    • Implementation
    • Deploy preventative controls explicitly denying non-compliant actions for unauthorized users

      Service control policies (SCPs) are a type of Organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines.

      SCPs allow you to specify the maximum permissions for member accounts in the Organization. Using SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access. You can also define conditions for when to restrict access to AWS services, resources, and API actions. SCPs are managed (created, attached, detached, and deleted) from the Management account of the Organization. Refer to the AWS Organizations user guide to manage SCPs or for a set of example SCPs.
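      To illustrate how an SCP is created and attached with the Organizations API, the following sketch denies disabling CloudTrail in every account under a target OU. The policy content, policy name, and OU ID are example assumptions rather than a prescribed baseline, and the calls must run from the management account.

```python
import json
import boto3

org = boto3.client("organizations")

# Example policy: prevent principals in member accounts from stopping or deleting trails.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudTrailDisable",
            "Effect": "Deny",
            "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-cloudtrail-disable",  # placeholder name
    Description="Protect organization CloudTrail logging",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to the Workloads OU so every account beneath it inherits the control.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-workloads1",  # placeholder OU ID
)
```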

      Note: AWS Control Tower provides a set of managed guardrails that you can apply to the OUs in your environment. AWS Control Tower preventative guardrails are implemented using SCPs. AWS Control Tower mandatory preventative guardrails are enabled and enforced by default, and protect the resources and controls that Control Tower creates and manages. Additionally, strongly recommended and elective guardrails can be enabled on desired OUs through the Control Tower console.

      Workload SCP inheritance example:

      Service Control Policies are managed from the management account of the AWS Organization.

      Organization Root: SCPs applied at the root of the organization will apply to all accounts within the organization except for the management account. Because these policies are inherited by all OUs and accounts (except the management account), they should be designed and applied carefully.

      Top Level OU: SCPs attached at the root level (2) are inherited by this OU. SCPs attached at the Workload OU level are inherited by all OUs and accounts within this OU, and should therefore contain policies that apply to all accounts within this OU.

      Nested OU: SCPs attached at the root level (2) and top-level OU (3) are inherited by this OU. SCPs attached at the Workload/prod and Workload/test OUs (or other Workload/SDLC OUs) are specific to the SDLC environment to which they are applied. For example, it is common to have stricter policies on production accounts than on test accounts; therefore, stricter SCPs would be applied to the Workload/prod OU and inherited by all production workload accounts that exist below the Workload/prod OU.

      Account Level: SCPs attached at the root level (2), top-level OU (3), and nested OU level (4) are inherited by the account. SCPs can also be attached at the account level. Because account-level SCPs apply only to a specific account, they should be limited to special cases, requirements, or exceptions.

      Other OUs: The same strategy of applying broad SCPs at a higher level down to specific SCPs will apply to all other OUs and accounts within the organization.

      • SCPs don't affect users or roles in the management account. They affect only the member accounts in your Organization.
      • SCPs affect all users and roles in attached accounts, including the root user.
      • SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs.
      Deploy detective controls to track and report non-compliant architectures

      As you scale your AWS environment, it becomes increasingly necessary to track, assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations, and allows you to automate the evaluation of recorded configurations against desired configurations. With AWS Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.

      Generally, it is recommended that you enable AWS Config in all accounts and Regions where you will have resources. It is not necessary to deploy AWS Config to Regions where you are denying actions (API calls) using the Control Tower Region deny guardrail or Region deny SCPs. You may decide that some accounts do not require AWS Config because they contain non-business-related resources and data (sandbox environments, for example) or highly ephemeral resources (high-velocity creation and deletion of AWS resources).

      AWS Config can be enabled in all targeted accounts and Regions using AWS CloudFormation StackSets or, if you use Control Tower, AWS Config is enabled in all Control Tower accounts and Control Tower-managed Regions.

      Control Tower detective guardrails are implemented using AWS Config rules. Mandatory detective guardrails are enabled and enforced by default to protect the resources and controls that Control Tower creates and manages. Additional strongly recommended and elective guardrails can be enabled through the Control Tower console. You can view the compliance of organizational resources in the Control Tower dashboard or in the Config Aggregator console of the management and Security Tooling (also called the Audit) account.

      Additional or custom Config rules can be deployed as managed Config rules, custom Config rules, or AWS Config conformance packs, which can be deployed from the AWS Config console, CLI, or API. Config rules can also be deployed to the organization's accounts from the management account or a delegated administrator account. Config rules that protect global resources need to be enabled in every account but should be enabled in the home Region only. Config rules that protect Region-specific resources need to be enabled within each account and each Region. Because of this, Config rules should be deployed through automation, conformance packs, or organization Config rules to ensure that the rules are applied in a standardized and consistent manner across accounts and Regions.
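      One way to deploy a rule consistently across the organization is the organization Config rules API, called from the management account or the Config delegated administrator account. A minimal sketch using an AWS managed rule identifier; the rule name and any excluded accounts are assumptions.

```python
import boto3

config = boto3.client("config")

# Deploy an AWS managed rule to every account in the organization.
config.put_organization_config_rule(
    OrganizationConfigRuleName="org-s3-public-read-prohibited",  # placeholder name
    OrganizationManagedRuleMetadata={
        "RuleIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        "Description": "Checks that S3 buckets do not allow public read access",
    },
    # ExcludedAccounts=["111122223333"],  # optionally skip sandbox-style accounts
)
```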

      Conformance packs are a collection of AWS Config rules and remediation actions that can be deployed as a single entity in an account and a Region, or across an organization in AWS Organizations. AWS Config conformance packs typically align to a common regulatory control framework such as the Payment Card Industry (PCI) or National Institute of Standards and Technology (NIST) frameworks. Conformance packs provide a mapping of regulatory control definitions to AWS Config rule code.

      To view your resources, resource configurations, and resource compliance in one place, we recommend that you create an AWS Config aggregator, which aggregates all account and Region data to the Config Aggregator console in the account and Region where you deploy it. We recommend deploying a Config aggregator in the Security Tooling account, in addition to any other account where a comprehensive view of resources and configuration is needed (for example, it is common to deploy a Config aggregator in the Operations Tooling account).
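      An organization-wide Config aggregator can be created in the Security Tooling account with a single API call. In this sketch the aggregator name and the IAM role ARN (a role that Config can assume to read the organization structure) are placeholders.

```python
import boto3

config = boto3.client("config", region_name="us-east-1")

# Aggregate Config data from every account and Region in the organization.
config.put_configuration_aggregator(
    ConfigurationAggregatorName="org-aggregator",  # placeholder name
    OrganizationAggregationSource={
        "RoleArn": "arn:aws:iam::111122223333:role/ConfigAggregatorRole",  # placeholder role ARN
        "AllAwsRegions": True,
    },
)
```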

      Note: AWS Control Tower automatically configures a Config aggregator in the Security Tooling and management accounts.

    • Scenario
    • Provision baseline standards to isolated resource environments

      • Enable audit logging in isolated resource environments
      • Deploy roles and access to isolated resource environments
      • Deploy security services to isolated resource environments
    • Overview
    • In addition to provisioning guardrails to isolated resource environments, it is also necessary to provision other baseline requirements within the environment. This can include roles, network components and connectivity, and security applications or services.

      To maintain consistency in configuration across different resource environments, the baseline configuration should be deployed in an automated fashion to existing or newly created isolated resource environments. This can include roles, network connectivity, or operational or security services. It is often helpful to be able to apply baseline configurations globally (to all isolated resource environments) or to logical groupings of isolated resource environments. Global baselines apply to all isolated resource environments and should therefore be considered carefully. Isolated resource environments should be grouped and organized in a way that allows baseline configurations to be applied to a logical grouping of environments. For example, it is common to apply stricter controls on production environments than on development environments, so environments should be grouped in a way that allows different baseline configurations to be applied to production and development environments.

    • Implementation
    • Enable audit logging in isolated resource environments

      Visibility into your AWS account activity is a key aspect of security and operational best practices. AWS CloudTrail enables auditing, security monitoring, and operational troubleshooting by tracking user activity and API usage. CloudTrail continuously monitors and retains account activity related to actions across your AWS infrastructure, giving you control over storage, analysis, and remediation actions.

      We recommend that you enable an organization trail, a trail that logs all events for all AWS accounts in your organization. This trail is created and configured in your management account, and should be encrypted and sent to your log storage Amazon Simple Storage Service (Amazon S3) bucket. For additional information on the requirements to set up an audit trail for your organization, review the Cloud Foundations Log Storage capability. AWS Control Tower can optionally deploy an organization trail for all the accounts under its governance, and these logs are delivered to the Control Tower-created log storage S3 bucket in the Log Archive account.
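      If you create the organization trail yourself rather than relying on Control Tower, it is a single CloudTrail call from the management account (trusted access for CloudTrail must be enabled in AWS Organizations, and the destination bucket policy must allow CloudTrail delivery). A sketch with placeholder trail and bucket names:

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Create a multi-Region organization trail that delivers logs to the Log Archive bucket.
cloudtrail.create_trail(
    Name="organization-trail",                   # placeholder trail name
    S3BucketName="example-org-cloudtrail-logs",  # placeholder Log Archive bucket
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    EnableLogFileValidation=True,
)

# A new trail does not record events until logging is started.
cloudtrail.start_logging(Name="organization-trail")
```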

      Note: If you are an AWS Control Tower user, and your CloudTrail requirements differ from the Control Tower created CloudTrail, you can choose to not use the Control Tower Trail and create your own.

      If you need more granular options for an audit trail, you can create a multi-Region trail in each account, or create an individual trail in each Region of each account, which you can automate by deploying CloudTrail with CloudFormation StackSets (see the CloudFormation CloudTrail example template).

      If you require additional visibility into data events of a specific resource, you can create a CloudTrail data event trail.

      Data events are also known as data plane operations, and are often high-volume activities. If you already have a Management Event Trail configured for your organization (for example, if you are using Control Tower), and want to create a Data Event Trail, you may decide that you only want to record data events for certain accounts within your organization.
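      Data events are enabled per trail with event selectors. The following sketch, using advanced event selectors and placeholder trail and bucket names, records S3 object-level data events for a single bucket only, which helps keep event volume down.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Record S3 data events only for objects in one specific bucket.
cloudtrail.put_event_selectors(
    TrailName="data-event-trail",  # placeholder trail name
    AdvancedEventSelectors=[
        {
            "Name": "S3 data events for one bucket",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
                {"Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::example-bucket/"]},
            ],
        }
    ],
)
```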

      Deploy roles and access to isolated resource environments

      For federated access to your AWS accounts, we recommend using AWS IAM Identity Center (successor to AWS Single Sign-On), a cloud-based service that allows you to set up and manage access to the accounts in your AWS Organization from a single location. Additionally, you can integrate IAM Identity Center with an existing Identity Source. For more information, refer to AWS IAM Identity Center (successor to AWS Single Sign-On) User Guide.

      Note: Control Tower enables IAM Identity Center with a preconfigured directory that helps you manage user identities and single sign-on, so that your users have federated access across accounts. When you set up your Control Tower landing zone, this default directory is created to contain user groups and permission sets.
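      Once groups and permission sets exist, access assignments in IAM Identity Center can also be automated. This sketch grants an existing group access to one account through a permission set; the instance ARN, permission set ARN, group ID, and account ID are placeholder assumptions.

```python
import boto3

sso_admin = boto3.client("sso-admin", region_name="us-east-1")

# Grant an Identity Center group access to a single account via a permission set.
sso_admin.create_account_assignment(
    InstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",                       # placeholder
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",  # placeholder
    PrincipalType="GROUP",
    PrincipalId="a1b2c3d4-example-group-id",                                   # placeholder group ID
    TargetType="AWS_ACCOUNT",
    TargetId="111122223333",                                                   # placeholder account ID
)
```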

      Deploy security services to isolated resource environments

      AWS Organizations integrates with many security services, which allows for the centralized deployment and administration of those services. AWS Organizations Trusted Access is a feature that enables a supported AWS service that you specify to perform tasks in your organization and its accounts on your behalf.

      The AWS Organizations delegated administrator feature allows you to delegate the administration of a service to a specified account.

      If possible, you should avoid using the management account as the centralized account for security services, to reduce the number of required log-ins to the management account. Instead, use the delegated administrator feature to delegate administration to a dedicated Security Tooling account. The following are examples of security services that can be delegated to the Security Tooling account: Security Hub, GuardDuty, Macie, AWS AppConfig, Firewall Manager, Detective, Amazon Inspector, and IAM Access Analyzer. Check the AWS Organizations services support page for details on other services that work with trusted access and delegated administrator.
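      The mechanics vary by service: some services are delegated through their own API (for example, GuardDuty and Security Hub expose EnableOrganizationAdminAccount), while others use the generic AWS Organizations calls. A hedged sketch run from the management account; the account ID and the service principals shown are examples, so check each service's documentation for the supported flow.

```python
import boto3

SECURITY_TOOLING_ACCOUNT = "111122223333"  # placeholder Security Tooling account ID

# Services with their own delegation API.
boto3.client("guardduty", region_name="us-east-1").enable_organization_admin_account(
    AdminAccountId=SECURITY_TOOLING_ACCOUNT
)
boto3.client("securityhub", region_name="us-east-1").enable_organization_admin_account(
    AdminAccountId=SECURITY_TOOLING_ACCOUNT
)

# Generic Organizations flow used by other supported services
# (the service principal here is an example).
org = boto3.client("organizations")
org.enable_aws_service_access(ServicePrincipal="config.amazonaws.com")
org.register_delegated_administrator(
    AccountId=SECURITY_TOOLING_ACCOUNT,
    ServicePrincipal="config.amazonaws.com",
)
```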

      The Multi Account Security Governance Workshop will help you get familiar with a common pattern for centralized security governance across a multi-account AWS deployment.

      Advanced: Automate the deployment of Infrastructure as Code
      Landing Zone Accelerator on AWS (LZA)
      Infrastructure as code: CDK, TypeScript
      Description: The Landing Zone Accelerator on AWS helps you quickly deploy a secure, resilient, scalable, and fully automated cloud foundation that accelerates your readiness for your cloud compliance program using the CDK.

      Control Tower Account Factory for Terraform (AFT)
      Infrastructure as code: Terraform
      Description: AWS Control Tower Account Factory for Terraform (AFT) sets up a Terraform pipeline that helps you provision and customize your accounts in AWS Control Tower.

      Customizations for Control Tower (CfCT)
      Infrastructure as code: CloudFormation, YAML manifest
      Description: The Customizations for AWS Control Tower (CfCT) solution combines AWS Control Tower and other highly available, trusted AWS services. CfCT helps customers more quickly set up a secure, multi-account AWS environment based on AWS best practices, using CloudFormation, a deployment pipeline, and a manifest that declares where and how to deploy the CloudFormation templates.

      CloudFormation StackSets (CFN)
      Infrastructure as code: CloudFormation
      Description: AWS CloudFormation StackSets extends the capability of stacks by enabling you to create, update, or delete stacks across multiple accounts and AWS Regions with a single operation.

      Due to the critical nature of foundational code, the deployment process of foundational services and resources should be separate from workload provisioning processes.

      In addition to deploying and configuring the desired services, you should implement controls (preventative or detective) to ensure that the deployed foundational resources and configuration are protected.

    • Scenario
    • Provision pre-approved deployable architectures

      • Provision pre-approved deployable architectures to workload accounts
    • Overview
    • Provision pre-approved deployable architectures

      As you adopt the cloud, you will see common workload patterns emerge, which may include standard three-tier web applications, serverless frameworks, or many other architecture patterns. At the same time, certain skill sets may vary as teams learn to use the cloud. Common patterns may be implemented with any number of variations, which increases the risk of deploying non-compliant workloads. Pre-approving patterns and building a pattern library provides a common and repeatable mechanism to maintain controls and boundaries around workloads. Instead of application teams relying on runbooks, process documentation, or lengthy review processes, cloud platform teams can publish ready-to-deploy patterns that include common resources along with control guardrails. For example, a three-tier web app includes a typical set of compute instances placed within the appropriate public and private network segments to prevent accidental exposure of internal systems.

    • Implementation
    • Provision pre-approved deployable architectures

      To facilitate consistency, compliance, and ease of adoption, it is recommended that organizations create and manage catalogs of IT services that are approved for use, and simplify the discovery and deployment process. The catalogs are composed of templates that are made available in a service catalog and are commonly referred to as products. The products can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. These products are built from templated solutions that support versioning and full lifecycle management by the end user, including creating, updating, and deleting the product.
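      With a catalog in place (for example, AWS Service Catalog), application teams can launch a pre-approved product without needing permissions on the underlying resources. A minimal sketch; the product ID, provisioning artifact ID, and parameters are placeholders.

```python
import boto3

sc = boto3.client("servicecatalog", region_name="us-east-1")

# Launch a pre-approved pattern (for example, a three-tier web app template)
# published by the cloud platform team.
sc.provision_product(
    ProductId="prod-examplepattern1",             # placeholder product ID
    ProvisioningArtifactId="pa-exampleversion1",  # placeholder product version ID
    ProvisionedProductName="team-a-web-app",      # placeholder provisioned product name
    ProvisioningParameters=[
        {"Key": "Environment", "Value": "test"},  # placeholder parameter
    ],
)
```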

    • Scenario
    • Decommission process for isolated environments

      • Establish isolated resource environment decommission process
    • Overview
    • Decommission process for isolated resource environments

      There are various situations when your organization may need to decommission one or more isolated resource environments. For example, you may have a resource environment exclusively designed for and used by a single application that is being decommissioned, you may be using sandbox (disposable) environments, or you might have misconfigured a resource environment during your testing or development phases and determined that the best path is to delete the entire environment. For any situation that requires the decommissioning of isolated resource environments, you will want to ensure a consistent decommissioning workflow is in place to prevent unintended charges for resources that are no longer needed. Additionally, you will want to disable access to the resource environment during any interim waiting period while it is being decommissioned.

      Similar to how you would approach provisioning isolated resource environments, you need a request process in place for decommissioning isolated resource environments. You should ensure that any dependencies with other isolated resource environments are no longer needed before decommissioning. Internal policies and compliance needs should guide how persistent data within the environment is either deleted, or safely transitioned into a separate environment. We recommend that you carefully consider what assets from the environment should be retained, and how they should be retained. For example, consider retaining assets that could add value to your business in the future or that could augment future projects by transitioning them out of the resource environment prior to termination of the environment.

      The process of environment decommissioning should typically include a manual step. Aspects of the decommissioning can and should be automated as you mature your process, but a manual step or approval should remain for critical environments. An IT Service Management ticketing process is a common way to initiate an isolated resource environment decommissioning workflow. Manual approval step(s) should be built into the workflow. As your environment matures, automation can be built into certain aspects of the approval process by using metadata about the resource environment to assess the risk of decommissioning that environment. For example, a manual approval step may not be needed to terminate a sandbox environment.

      The decommissioning process should be documented and include guidance on the required steps in the workflow. This might include:

      • A method for determining if any assets or data within the environment should be retained.
      • Applying restrictive controls on the isolated resource environment for a period of time to prevent the use of the resources but allow a recovery process, if necessary.
      • A process or automation for disabling resources or deleting data.
    • Implementation
    • Establish isolated resource environment decommission process

      Account decommission requests can be made through your existing ITSM solution. It is critical to include a manual approval step within your account decommissioning process, as decommissioned accounts are not recoverable; therefore, any decommission request should be closely analyzed.

      Requests to decommission accounts should include, at a minimum, the following information:

      • Account number
      • Account email
      • SDLC environment type (such as Dev/Test/Prod)
      • Business justification for decommission request
      • Known dependencies with other accounts/workloads
      • Known data or assets to be retained
      • Business justification for data or assets to be retained

      Upon formal review, and approval of account decommission, the task of decommissioning the account should be delegated to the appropriate team. Typically, this task will be handled by the Cloud Engineering team within either the Infrastructure or Operations functional areas.

      Within AWS Organizations, you can use a Suspended OU as a holding area for the account as it moves through the account decommissioning process. The Suspended OU should have a deny-all Service Control Policy that disallows any actions on the accounts that reside in it. The Suspended OU serves the purpose of holding the account for a period of time to allow for the recovery of data, applications, or configurations. Prior to moving the account into the Suspended OU, verify that this action will not impact any other accounts or workloads. Based on your compliance needs and what data or assets need to be retained or deleted within the account, you may wish to delete resources or retain them in place, allowing existing SCPs and guardrails to continue to apply during this process.
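      A hedged sketch of the suspension flow with boto3 for accounts that are not managed by Control Tower (for Control Tower-managed accounts, first remove the account from Control Tower management as described in the notes that follow): attach a deny-all SCP to the Suspended OU, move the account into it, and, after the retention period and final approvals, close the account. The OU IDs, account ID, and policy name are placeholders.

```python
import json
import boto3

org = boto3.client("organizations")

SUSPENDED_OU = "ou-xxxx-suspended1"    # placeholder Suspended OU ID
CURRENT_PARENT = "ou-xxxx-workloads1"  # placeholder current parent OU ID
ACCOUNT_ID = "111122223333"            # placeholder account being decommissioned

# Deny-all SCP that blocks every action in accounts under the Suspended OU.
deny_all = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
}
policy = org.create_policy(
    Name="suspended-deny-all",  # placeholder policy name
    Description="Blocks all actions in suspended accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_all),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=SUSPENDED_OU,
)

# Move the account into the Suspended OU for the retention period.
org.move_account(
    AccountId=ACCOUNT_ID,
    SourceParentId=CURRENT_PARENT,
    DestinationParentId=SUSPENDED_OU,
)

# After the retention period and final approval, close the account.
# (Closed accounts remain in a suspended state for 90 days before permanent closure.)
# org.close_account(AccountId=ACCOUNT_ID)
```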

      Note: If you are using Control Tower, remove the account from Control Tower management by deleting the provisioned product associated with the account in Service Catalog. After successful deletion of the Service Catalog product, the account is removed from Control Tower management and appears within the Root OU. Once the account is no longer managed by Control Tower, you can move it to the Suspended OU, which should be an OU that is not governed by Control Tower.

      Note: Deleting the provisioned product associated with the account in Service Catalog should remove the majority of stack instances within the account that were deployed by Control Tower. Any remaining stack instances should be deleted manually.

      Leave the account in the Suspended OU for a set number of days (such as 30), and notify the requester that they have X days remaining before the account is permanently closed. Upon the expiration of X days, you may want to delete all of the data and resources (if you have not already done so) prior to closing the account. To do this, you will need to move the account into an OU that has an SCP that allows you to implement the deletion of resources and data (for example, the Transitional OU). You can follow the steps, or create and run automation, to fully close the account following this process: https://aws.amazon.com/premiumsupport/knowledge-center/close-aws-account/. The account will be in a suspended state for 90 days before it is permanently closed. Resolve any outstanding ITSM tickets and notify the requester.

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
