The Log Storage capability enables you to collect and store your environment logs centrally and securely in tamper-resistant storage. This enables you to evaluate, monitor, alert on, and audit access and actions performed on your cloud resources and objects.

Architecture Diagram

Download the architecture diagram PDF 

Implementation Resources

This capability should be implemented in accordance with your governance needs by the Security team.

Utilizing separate log storage enables you to build a safe area where the logs serve as the source of truth for the security- and operations-relevant activities and events occurring in your environment. Examples include account access and infrastructure changes.

Log storage must be tamper-resistant and encrypted, and accessed only by controlled and monitored mechanisms, based on least privilege access by role. Controls need to be implemented around the log storage to protect the integrity and availability of the logs and their management process. For in-depth technical information, refer to the Establishing your Cloud Foundation on AWS whitepaper.

    • Scenario
    • Build a secure and resilient log storage

      • Secure access to log storage
      • Configure default encryption on log storage
      • Create a place to store your logs
    • Overview
    • Having separate log storage allows you to establish a secure location where the logs become the source of truth for the actions and events happening in your environment that are relevant to security and operations. For example, access to different accounts, or infrastructure updates.

    • Implementation
    • We recommend that you establish your log storage in the Log Archive account, which stores logs from different workloads across the environment and contains security logs. The logs in this account are used as a source of truth to analyze, evaluate, and monitor activity within the AWS environment, and can be used to demonstrate compliance and policy requirements to auditors.

      Secure access to log storage

      In this account, you should have a LogArchive-Administrator role, defined in the Identity Management & Access Control (IMAC) capability, that you will use to set up the necessary components of the Log Storage capability. Once the log storage is established, this role will rarely be used, and will be reserved for break-glass scenarios when access to the log storage is not available through any other means.
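      As a purely illustrative sketch, the trust policy on such a break-glass role could restrict who can assume it and require multi-factor authentication. The principal shown here is an assumption for illustration, not a requirement defined by this capability:

      {
          "Version": "2012-10-17",
          "Statement": [{
              "Sid": "AllowBreakGlassAssumeRoleWithMFA",
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::[management account ID]:root"
              },
              "Action": "sts:AssumeRole",
              "Condition": {
                  "Bool": {
                      "aws:MultiFactorAuthPresent": "true"
                  }
              }
          }]
      }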

      Configure default encryption on log storage

      First, you need to create the AWS KMS encryption key in your Security Tooling account, which is managed by the Security team. When you create this key, it needs to be shared with the Log Archive account, so Amazon S3 can use it to encrypt the log bucket. The following example statement, added to the key policy, allows the AWS CloudTrail service to use the key to encrypt the logs it delivers:

      {
          "Sid": "Allow an external account to use this CMK",
          "Effect": "Allow",
          "Principal": {
              "Service": [
                  "cloudtrail.amazonaws.com",
              ]
          },
          "Action": [
              "kms:Encrypt",
              "kms:Decrypt",
              "kms:ReEncrypt*",
              "kms:GenerateDataKey*",
              "kms:DescribeKey"
          ],
          "Resource": "*"
      }

      Note for AWS Control Tower users: The AWS Organizations trail in AWS CloudTrail and the Log Archive Amazon Simple Storage Service (Amazon S3) bucket are created for you by AWS Control Tower.

      Create a place to store your logs

      Now that you have an encryption key, create an S3 bucket in your home Region in the Log Archive account to store the logs for your organization. Your home Region is the Region you designated as the one from which you will centrally operate your resources; it is typically the closest one to your customers and to your geographical location. For additional information, refer to the Governance capability. Use the key you created earlier in your Security Tooling account to encrypt the objects stored in your S3 bucket.
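      For example, default encryption can be applied to the bucket with a server-side encryption configuration such as the following sketch, which uses the shape accepted by the S3 PutBucketEncryption API (the key ARN is a placeholder for the key you created earlier):

      {
          "Rules": [{
              "ApplyServerSideEncryptionByDefault": {
                  "SSEAlgorithm": "aws:kms",
                  "KMSMasterKeyID": "arn:aws:kms:[Region]:[Security Tooling account ID]:key/[key ID]"
              },
              "BucketKeyEnabled": true
          }]
      }

      Enabling the S3 Bucket Key reduces the number of requests made to AWS KMS, which lowers cost for high-volume log delivery.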

      We recommend naming your bucket following this namespace to avoid conflicts with other implementations:

      aws-[prefix]-log-storage-[account ID]-[Region]

      Once the S3 bucket is created, ensure that block public access is enabled on the bucket in the Log Archive account; a configuration sketch follows. Next, add a bucket policy that gives AWS CloudTrail and AWS Config appropriate access to the bucket.
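      The following sketch shows the configuration shape accepted by the S3 PutPublicAccessBlock API, with all four block public access settings turned on:

      {
          "BlockPublicAcls": true,
          "IgnorePublicAcls": true,
          "BlockPublicPolicy": true,
          "RestrictPublicBuckets": true
      }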

      The following is an example of what this policy looks like for AWS CloudTrail and AWS Config:

      {
          "Version": "2012-10-17",
          "Statement": [{
                  "Sid": "AWSCloudTrailAclCheck20150319",
                  "Effect": "Allow",
                  "Principal": {
                      "Service": [
                          "cloudtrail.amazonaws.com",
                          "config.amazonaws.com"
                      ]
                  },
                  "Action": "s3:GetBucketAcl",
                  "Resource": "arn:aws:s3:::aws-[prefix]-log-storage-[account ID]-[Region]"
              },
              {
                  "Sid": "AWSCloudTrailWriteFullControl",
                  "Effect": "Allow",
                  "Principal": {
                      "Service": [
                          "cloudtrail.amazonaws.com",
                          "config.amazonaws.com"
                      ]
                  },
                  "Action": "s3:PutObject",
                  "Resource": "arn:aws:s3:::aws-[prefix]-log-storage-[account ID]-[Region]/[optional prefix]/AWSLogs/myAccountID/*",
                  "Condition": 
                  {
                      "StringEquals": {
                          "s3:x-amz-acl": "bucket-owner-full-control"
                      }
                  }
              },
              {
                  "Sid": "DenyUnencryptedObjectUploads",
                  "Effect": "Deny",
                  "Principal": "*",
                  "Action": "s3:PutObject",
                  "Resource": "arn:aws:s3:::aws-[prefix]-log-storage-[account ID]-[Region]/[optional prefix]/AWSLogs/myAccountID/*",
                  "Condition": {
                      "Null": {
                          "s3:x-amz-server-side-encryption": "true"
                      }
                  }
              },
              {
                  "Sid": "DenyOutdatedTLS",
                  "Effect": "Deny",
                  "Principal": "*",
                  "Action": "s3:PutObject",
                  "Resource": "arn:aws:s3:::aws-[prefix]-log-storage-[account ID]-[Region]/[optional prefix]/AWSLogs/myAccountID/*",
                  "Condition": {
                      "NumericLessThan": {
                          "s3:TlsVersion": "1.2"
                      }
                  }
              }
          ]   
      }

      This bucket policy allows AWS CloudTrail and AWS Config to deliver logs to your S3 bucket and grants the bucket owner full control of the objects. To allow read access from different accounts later, the bucket owner and the object owner need to be the same. To ensure this, enable the Bucket Owner Preferred or Bucket Owner Enforced setting on the S3 bucket. If other services, principals, or roles need to deliver logs to this bucket, they can be added to the bucket policy, and the logs will be delivered directly to the bucket. However, we recommend that you use this bucket only to store AWS security-related logs, and use other buckets in the account for other types of logs. If you need additional storage for workload or operational logs, we recommend that you create additional buckets and narrow down the specific permissions for those workloads.
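      As a sketch, Bucket Owner Enforced can be applied through the bucket's ownership controls, using the shape accepted by the S3 PutBucketOwnershipControls API:

      {
          "Rules": [{
              "ObjectOwnership": "BucketOwnerEnforced"
          }]
      }

      With Bucket Owner Enforced, ACLs are disabled and the bucket owner automatically owns every object; uploads that set the bucket-owner-full-control ACL, such as those required by the policy above, continue to succeed.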

    • Scenario
    • Store logs centrally

      • Capture audit trail logs in a central location
      • Capture resource configuration changes in a central location
      • Generate logs at an organization-defined frequency
    • Overview
    • As your environment scales to meet your business demands, centralizing all logs throughout your environment simplifies log analysis and monitoring. It also makes it easy to retrieve environment records and regulate who may consume them. This enables you to construct customized dashboards and tools for your logging needs.  

      Create a central location for your logs

      Logs should be collected and stored in a centralized location for long-term storage and analysis. This allows you to monitor your environment centrally and streamlines your operations. It also establishes a single source of truth for your resource, security, and operations logs. Furthermore, it reduces the possibility of log loss and helps ensure that your environment is continuously tracked.

      Securing your centralized logs

      When your environment's logs are stored in a centralized location, it is simpler to implement comprehensive controls to safeguard the environment. We recommend implementing a monitoring system that generates an alert whenever the log storage is accessed with write or administrative privileges; a sketch of such a rule follows.
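      As one possible sketch, an Amazon EventBridge rule with a pattern like the following matches administrative S3 API calls against the log bucket as recorded by CloudTrail (the bucket name is a placeholder; alerting on object-level writes would additionally require CloudTrail data events):

      {
          "source": ["aws.s3"],
          "detail-type": ["AWS API Call via CloudTrail"],
          "detail": {
              "eventSource": ["s3.amazonaws.com"],
              "eventName": ["PutBucketPolicy", "PutBucketAcl", "DeleteBucketPolicy", "DeleteBucket"],
              "requestParameters": {
                  "bucketName": ["aws-[prefix]-log-storage-[account ID]-[Region]"]
              }
          }
      }

      The rule's target could be, for example, an Amazon SNS topic that notifies the security team.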

    • Implementation
    • Enabling AWS CloudTrail

      Now that you have created and configured the S3 bucket and the AWS KMS key, you can create the organization trail. When creating an AWS Organizations trail in CloudTrail, you need to provide the name of the S3 bucket and the alias of the KMS key that you created for your CloudTrail log storage.
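      A minimal AWS CloudFormation sketch of such a trail, assuming the bucket name and key alias from earlier (the trail name is a placeholder, and the template must be deployed in the account that owns the organization trail), could look like this:

      {
          "Resources": {
              "OrganizationTrail": {
                  "Type": "AWS::CloudTrail::Trail",
                  "Properties": {
                      "TrailName": "organization-trail",
                      "IsOrganizationTrail": true,
                      "IsMultiRegionTrail": true,
                      "IsLogging": true,
                      "EnableLogFileValidation": true,
                      "S3BucketName": "aws-[prefix]-log-storage-[account ID]-[Region]",
                      "KMSKeyId": "alias/[your key alias]"
                  }
              }
          }
      }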

      Enabling log access

      In the Security Tooling account, create a read-only role, such as LogsReadOnlyFrequencyRole, that has the permissions required to retrieve the relevant objects from the Log Archive account for a specific timeframe. In the Log Archive account, you also need to add a statement to the bucket policy describing what this role is allowed to access in order to extract logs and create reports.

      The following is an example policy for one of the S3 buckets in your Log Archive account:

      {
          "Sid": "AllowReadOnlyAccessToASpecificPath",
          "Effect": "Allow",
          "Principal": {
              "AWS": [
                  "arn:aws:iam::[account ID]:role/LogsReadOnlyFrequencyRole"
              ]
          },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::aws-[prefix]-log-storage-[account ID]-[Region]/*"
      }
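      The bucket policy alone is not sufficient: the role in the Security Tooling account also needs an identity-based policy that allows it to read the objects and decrypt them with the KMS key. A minimal sketch, with placeholder ARNs:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "ReadLogObjects",
                  "Effect": "Allow",
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::aws-[prefix]-log-storage-[account ID]-[Region]/*"
              },
              {
                  "Sid": "DecryptLogObjects",
                  "Effect": "Allow",
                  "Action": "kms:Decrypt",
                  "Resource": "arn:aws:kms:[Region]:[Security Tooling account ID]:key/[key ID]"
              }
          ]
      }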
    • Scenario
    • Ensuring the integrity of logs within your log storage

      • Set preventive controls to prevent unauthorized access and modification of the logs
      • Generate and store access logs to your log storage
    • Overview
    • All of your logs should be stored in the same isolated environment protected by centralized controls. Controls should be configured to protect your log storage environment using both preventative and detective controls.

      • Preventive controls allow you to stop actions from occurring in your environment. Using preventive controls, access to and activity against the log storage can be restricted by role, action type, service, or Region.
      • Detective controls are implemented to actively monitor the environment. They allow you to create alerts based on unwanted or unexpected actions taken within the environment. Optionally, remediation actions can be invoked automatically to mitigate risks within the log storage environment.
    • Implementation
    • Set preventive controls to prevent unauthorized access and modification of logs

      You need to protect the log storage you have just created by applying a policy at the organization level, using a service control policy (SCP) to restrict access. The following example denies AWS Config, AWS CloudTrail, and Amazon S3 actions against the resources in the Log Archive account unless the specified role is used to make these changes:

      {
          "Version": "2012-10-17",
          "Statement": [{
          "Condition": {
              "ArnNotLike": {
                  "aws:PrincipalARN": "arn:aws:iam::*:role/<Org_admin_role>"
              }
          },
          "Action": [
              "cloudtrail:DeleteTrail",
              "cloudtrail:PutEventSelectors",
              "cloudtrail:StopLogging",
              "cloudtrail:UpdateTrail"
          ],
          "Resource": [
              "arn:aws:cloudtrail:*:*:trail/<Organizations-trail>"
          ],
          "Effect": "Deny",
          "Sid": "CLOUDTRAILENABLED"
      },
      {
          "Condition": {
              "ArnNotLike": {
                  "aws:PrincipalARN": "arn:aws:iam::*:role/<Log_Storage_Role>"
              }
          },
          "Action": [
              "config:DeleteConfigurationRecorder",
              "config:DeleteDeliveryChannel",
              "config:DeleteRetentionConfiguration",
              "config:PutConfigurationRecorder",
              "config:PutDeliveryChannel",
              "config:PutRetentionConfiguration",
              "config:StopConfigurationRecorder",
              "config:PutConfigRule",
              "config:DeleteConfigRule",
              "config:DeleteEvaluationResults",
              "config:DeleteConfigurationAggregator",
              "config:PutConfigurationAggregator"
          ],
          "Resource": [
              "*"
          ],
          "Effect": "Deny",
          "Sid": "CONFIGENABLED"
      },
      {
          "Condition": {
              "ArnNotLike": {
                  "aws:PrincipalARN": "arn:aws:iam::*:role/<Log_Storage_Role>"
              }
          },
          "Action": [
              "s3:PutBucketPolicy",
              "s3:PutLifecycleConfiguration",
              "s3:PutBucketLogging",
              "s3:DeleteBucket*"
          ],
          "Resource": [
              "*"
          ],
          "Effect": "Deny",
          "Sid": "BUCKETCHANGESFORBIDDEN"
          }]
      }
    • Scenario
    • Managing your logs in your log storage

      • Create a policy document that specifies the length of time logs must be kept
      • Automate log rotation and archiving to the proper storage tier (from frequently accessed to archival)
      • Define time period to retain regular access to logs or to rotate to the archive
      • Establish a process exception mechanism when log deletion or archiving may be required
    • Overview
      Each type of gathered log may necessitate a distinct log storage approach. The strategy will vary based on the log type, frequency, retention, size, volume, compliance requirements, and access patterns. Network logs, access logs, finance logs, DNS logs, inventory records, and change management records are examples of typical log types. A typical lifecycle pattern for logs involves storing them in regular storage, then cold storage, then archive storage, and eventually deleting them.

      Audit logs

      We recommend that you protect your organization with a wide array of preventative controls to help you inhibit non-compliant changes. However, given the degree of self-service and agility often required by modern business, you need to ensure full transparency of changes made to at least production aspects of your environment, workloads, and data so that detective and corrective controls can be employed.

      A secure, centralized repository of logs should represent the single source of truth and be tamper-resistant. Centralizing your audit logs gives you a clear understanding of what has transpired in your environment and when, and eases access to audit log data, for instance during forensic investigations.

      Auditors' use of audit logs

      If you work in a regulated industry, you will engage the services of an external auditing company to regularly attest to your adherence to applicable requirements. Your auditor might have their own accounts as part of their own organization. As part of their auditing process, they will need to examine your log data to establish whether you have stayed compliant since their last inspection. A benefit for both you and the auditor is to grant an account within their organization read-only access to your log archive bucket(s). This allows your auditor to examine and analyze your logs in their environment before engaging in other audit tasks, such as examining paperwork and interviewing operations staff.

      Your internal security team may need to perform a security assurance role as part of the auditing process. This involves doing internal dry runs of external audits to reduce the risk that the external audit does not progress successfully. This procedure may be carried out by your security team; however, they may prefer to split security assurance-specific tasks into a distinct account to isolate them from routine security operations. If you have a security assurance team that is distinct from your security team, its work should be performed in a different account to ensure separation of duties.

      Configuration logs

      Configuration logs contain detailed information about changes in your infrastructure or applications. Configuration logs also provide a current and historical view of infrastructure or application configurations. The length of time to keep configuration logs in each lifecycle phase will heavily depend on requirements, business policies, and applicable regulations.

      Networking logs

      Networking logs give you an overview of what is happening on your network. They can help you monitor traffic in your environment and diagnose network-related issues. Due to the volume and frequency at which networking logs are generated, it's common to keep them in accessible storage for a much shorter time compared to other logs. A best practice is to define the lifecycle strategy for your networking logs based on technical requirements, cost considerations, and the criticality of the infrastructure.

    • Implementation
    • Automate the rotation and archival of logs to the appropriate storage tier

      Depending on your compliance and governance needs, you can configure your S3 bucket to move objects to a lower-cost storage class, such as Amazon S3 Glacier, or delete them after a period of time if there is no longer a need to retain these records. The S3 lifecycle configuration enables you to establish policies for transitioning and expiring objects.
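      As an example, the following lifecycle configuration sketch (the shape accepted by the S3 PutBucketLifecycleConfiguration API; the durations are illustrative, not recommendations) transitions objects to S3 Glacier after 90 days and expires them after one year:

      {
          "Rules": [{
              "ID": "ArchiveThenExpireLogs",
              "Status": "Enabled",
              "Filter": {
                  "Prefix": ""
              },
              "Transitions": [{
                  "Days": 90,
                  "StorageClass": "GLACIER"
              }],
              "Expiration": {
                  "Days": 365
              }
          }]
      }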

    • Scenario
    • Adding new logs into the log storage

      • Define a process to add new logs to the log storage
      • Build a mechanism to store new logs into the log storage
      • Automate the addition of new logs to your log storage
    • Overview
    • As your environment grows, you may need to store different types of logs described in the previous section. These logs need to be added and protected separately from one another, to provide fault resistance and the ability to control who has access to the various types of logs.

      You need to provide a way for owners of distinct work-streams or isolated workloads in your environment to request the addition of logs to the central log store for consumption. When requesting new logs, the owners of these logs will provide you with all the requirements necessary to store, archive, and access these logs. This will allow you to customize the different security policies and granularly set controls, as well as separate logs from production and non-production environments.

    • Implementation
    • Define a process to add new logs to the log storage

      New log streams may require their own storage space under the Log Archive account. To keep logs distinct, we advocate creating a new S3 bucket for each isolated group of resources, and granularly setting controls on who can read the logs of your production workloads (since they may contain sensitive information).

      If you need to centrally store the logs from your development and staging environments, we recommend creating two new S3 buckets, one for development and one for staging, where you can establish subfolders and store the logs for the respective workloads.

      Build a mechanism to store new logs into the Log Storage

      The customer (the team or owner of the workload) submits a request to the security team to create a specific location to store the logs for the workload, including all of the required specifications, such as governance and retention policies. The request should also indicate whether the logs contain personally identifiable information (PII) (as defined by your policy) or are subject to specific regulations, as well as the role or service that the workload will use to deliver the logs to the storage. You then create the S3 bucket in the Log Archive account based on these parameters and provide the S3 URI to the customer so they can begin sending their logs to this bucket.
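      For example, once the new bucket exists, a bucket policy statement such as the following sketch (the bucket name, role name, and prefix are placeholders supplied by the requesting team) allows the workload's delivery role to write only into its own prefix:

      {
          "Sid": "AllowWorkloadLogDelivery",
          "Effect": "Allow",
          "Principal": {
              "AWS": "arn:aws:iam::[workload account ID]:role/[log delivery role]"
          },
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::[workload log bucket]/[workload prefix]/*"
      }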

    • Scenario
    • Granting access to the logs

      • Grant read-only access by role/user using the Principle of Least Privilege (PoLP)
      • Provide read access to the logs in order to perform analysis or an audit (internal/external)
      • Automate temporary read-only access to specific logs in the log storage by owner or use case
    • Overview
    • As additional logs are stored in your log storage, you may need to create tools that allow you to analyze the data and present it in a human-readable dashboard that delivers insights about your environment. Additionally, workload owners who want to view historical data and access your log storage can request permission to examine the logs.

      Once stored, your logs are of the "write once, read many" variety. This means that you should grant read-only access to the appropriate logs for the stakeholders who require them. Following a shared responsibility model, the administrators of your log storage enable particular users to access the data they need, and are responsible for creating and maintaining the permissions granted to each of these users so that the data is accessed responsibly.

    • Implementation
    • Provide read access to the logs in order to perform analysis or an audit (internal/external)

      When granting external read-only access to your logs, make sure that the activities are restricted to reading only the objects from the S3 buckets that your customer intends to view. To do this, add one or more statements to your S3 bucket policy that allow the required roles to access the data they need to read, while restricting s3:GetObject rights to the exact location where those logs are kept.

      The following is an example of the statement that needs to be added to your bucket policy to allow a specific role, from a specific account, to read logs from a requested path.

      {
          "Sid": "AllowReadOnlyAccessToASpecificPath",
          "Effect": "Allow",
          "Principal": {
              "AWS": [
                  "arn:aws:iam::<account where the role lives>:role/<role_requesting_access>"
              ]
          },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::aws-[prefix]-log-storage-[account ID]-[Region]/path/to/requested/logs/*"
      }

      The role that is being used to access the logs needs sufficient IAM permissions to call s3:GetObject on the S3 bucket in the Log Archive account. You can use this example to grant cross-account access to an S3 bucket.

      Additionally, since the objects in the bucket are encrypted with a KMS key, you need to modify the KMS key policy to allow the role that needs access to the logs to decrypt them.
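      A sketch of such a key policy statement, reusing the placeholder role ARN from the bucket policy above:

      {
          "Sid": "AllowLogReadersToDecrypt",
          "Effect": "Allow",
          "Principal": {
              "AWS": "arn:aws:iam::<account where the role lives>:role/<role_requesting_access>"
          },
          "Action": [
              "kms:Decrypt",
              "kms:DescribeKey"
          ],
          "Resource": "*"
      }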

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
