AWS for Industries

FSI Services Spotlight: Featuring Amazon Simple Storage Service (Amazon S3)

Welcome back to the Financial Services Industry (FSI) Service Spotlight monthly blog series. Each month we look at five key considerations that FSI customers should focus on to help streamline cloud service approval for one particular service. Each of the five key considerations includes specific guidance, suggested reference architectures, and technical code that can be used to streamline service approval for the featured service. This guidance should be adapted to suit your own specific use case and environment.

This month we are covering Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps.

Amazon S3 provides a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere. Using this service, you can easily build applications that utilize cloud native storage. Since Amazon S3 is highly scalable, and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability.

Amazon S3 is also designed to be highly flexible. Store any type and amount of data that you want, read the same piece of data a million times or only for emergency disaster recovery, or build a simple FTP application or a sophisticated web application such as the Amazon.com retail web site. Amazon S3 frees developers to focus on innovation instead of figuring out how to store their data.

Amazon S3 use cases in the FSI

Nasdaq is home to more than 4,000 company listings and is the market technology provider to over 100 marketplaces in 50 countries worldwide. Nasdaq keeps some of its most critical data on Amazon S3 and Amazon S3 Glacier, and AWS has been a trusted partner for many years. According to Robert Hunt, Vice President of Software Engineering at Nasdaq: “We were able to easily support the jump from 30 billion records to 70 billion records a day because of the flexibility and scalability of Amazon S3 and Amazon Redshift.”

Monzo has grown from an idea to a fully regulated bank on the AWS Cloud. As a bank that “lives on your smartphone,” Monzo has already handled £1 billion worth of transactions for half a million customers in the UK. Monzo runs more than 1,600 core-banking microservices on AWS, using services including Amazon Elastic Compute Cloud (Amazon EC2) and Amazon S3. “By using AWS, we can run a bank with more than 4 million customers with just eight people on our infrastructure and reliability team.”

Union Bank of the Philippines (UnionBank) aims to improve what it calls “prosperity inclusion”. A crucial part of this objective is its digital transformation on AWS. Since moving to Amazon S3 and Amazon S3 Glacier, the bank is saving 20 million pesos (US$380,500) annually, a figure that will double when it completely migrates its Tier 1 workloads.

Achieving compliance with Amazon S3

Amazon S3 is an AWS managed service, and third-party auditors regularly assess its security and compliance as part of multiple AWS compliance programs. Under the AWS shared responsibility model, security and compliance are shared responsibilities between AWS and the customer. Amazon S3 is in scope for the following compliance programs:

  • SOC 1, 2, and 3
  • PCI
  • ISO/IEC 27001:2013, 27017:2015, 27018:2019, 27701:2019, 22301:2019, 9001:2015 and CSA STAR CCM v4.0
  • ISMAP
  • FedRAMP
  • DoD CC SRG
  • HIPAA
  • IRAP
  • MTCS (Regions: US-East (Ohio and N. Virginia), US-West (Oregon and N. California), Singapore, Seoul)
  • C5
  • K-ISMS
  • ENS High
  • OSPAR
  • HITRUST CSF
  • FINMA
  • GSMA (Regions: US-East (Ohio) and Europe (Paris))
  • PiTuKri
  • CCCS MEDIUM
  • GNS National Restricted Certification
  • IAR (United Arab Emirates Information Assurance Regulation)

You can obtain corresponding compliance reports under an AWS non-disclosure agreement (NDA) through AWS Artifact. Note that the Amazon S3 compliance status doesn’t automatically apply to applications that you run in the AWS Cloud; you must make sure that your use of AWS services complies with the applicable standards.

Your scope of the shared responsibility model when using Amazon S3 is determined by the sensitivity of your data, your organization’s compliance objectives, and applicable laws and regulations. If your use of Amazon S3 is subject to compliance with standards like HIPAA, PCI, or FedRAMP, then AWS provides resources to help.

Encryption with Amazon S3

At AWS, we recommend that encryption is applied to complement other access controls that are already in place. Data protection refers to protecting data in transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit using Secure Sockets Layer/Transport Layer Security (SSL/TLS) or client-side encryption. You have the following options for protecting data at rest in Amazon S3:

Server-side encryption

Amazon S3 applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. With this default encryption behavior, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. Customers can choose to update this default configuration to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) or customer-provided keys (SSE-C).

The following are the three mutually exclusive options, depending on how you choose to manage the encryption keys.

SSE-S3: Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a key that it rotates regularly. Amazon S3 server-side encryption uses one of the strongest block ciphers available to encrypt your data – 256-bit Advanced Encryption Standard (AES-256). There are no additional fees for using server-side encryption with Amazon S3-managed keys (SSE-S3).

SSE-KMS: AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. Amazon S3 uses server-side encryption with AWS KMS (SSE-KMS) to encrypt your Amazon S3 object data. There are additional charges for using AWS KMS keys. For more information, see AWS KMS key concepts in the AWS Key Management Service Developer Guide and AWS KMS pricing. With SSE-KMS, you can:

  • Centrally create, view, edit, monitor, enable or disable, rotate, and schedule the deletion of KMS keys.
  • Define the policies that control how and by whom KMS keys can be used.
  • Audit key usage to prove that keys are being used correctly.

S3 Bucket Keys for SSE-KMS: If you decide to use SSE-KMS with a workload that accesses millions or billions of objects, then you may consider using S3 Bucket Keys for SSE-KMS. AWS generates a short-lived bucket-level key from AWS KMS, and then temporarily keeps it in Amazon S3. This bucket-level key will create data keys for new objects during its lifecycle. S3 Bucket Keys are used for a limited time period within Amazon S3, thereby reducing the need for Amazon S3 to make requests to AWS KMS to complete encryption operations. This reduces traffic from Amazon S3 to AWS KMS, thereby allowing you to access AWS KMS-encrypted objects in Amazon S3 at a fraction of the previous cost.
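
If you choose SSE-KMS with S3 Bucket Keys as the bucket default, the configuration can be applied with a single API call. The following is a minimal sketch of the server-side encryption configuration JSON that could be passed to the s3api put-bucket-encryption CLI command; the KMS key ARN is a placeholder that you would replace with your own key:

{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/<key-id>"
      },
      "BucketKeyEnabled": true
    }
  ]
}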

SSE-C: With server-side encryption with customer-provided keys (SSE-C), you manage your own encryption keys. Using the encryption key that you provide as part of your request, Amazon S3 manages data encryption as it writes to disks and data decryption when you access your objects. Therefore, you don’t need to maintain any code to perform data encryption and decryption. The only thing that you need to do is manage the encryption keys that you provide.

When designing for FSI workloads, SSE-KMS is typically recommended to meet encryption-related compliance requirements, as it provides the ability to manage the AWS KMS encryption keys separate from the data itself within the fully-managed AWS KMS service. This lets customers define key policies that control how and by whom the AWS KMS key can be used. It also enables separate auditability of the AWS KMS encryption key usage and Amazon S3 data usage.
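
As an illustration of such a key policy, a statement like the following could be added to the AWS KMS key policy to allow an application role to use the key only through Amazon S3, via the kms:ViaService condition key. This is an illustrative fragment rather than a complete key policy (a full key policy also needs key administration statements), and the role ARN and Region are placeholders:

{
  "Sid": "AllowUseOfTheKeyViaS3Only",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/ExampleAppRole"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:GenerateDataKey*"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "kms:ViaService": "s3.us-east-1.amazonaws.com"
    }
  }
}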

If FSI customers have regulatory needs to provide their own key material, then SSE-C provides benefits similar to those of SSE-KMS.

Client-side encryption

Client-side encryption is the act of encrypting your data locally before sending it to Amazon S3. The Amazon S3 service receives your already-encrypted data and doesn’t play a role in encrypting or decrypting it. If you encrypt data client-side, then you can choose to use a key stored within your application or a key stored in AWS KMS. See more on how to set up this option here.

Aside from encrypting your objects, we recommend enabling Amazon S3 Versioning on your buckets if the workload requires the ability to easily recover from both unintended user actions and application failures. Similarly, consider enabling Amazon S3 Object Lock to prevent accidental or malicious deletions and overwrites of data. Customers can use Object Lock to help meet regulatory requirements that call for write-once-read-many (WORM) storage, or simply to add another layer of protection against object changes and deletion. Object Lock provides two ways to manage object retention: retention periods and legal holds.
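
As an example, a default retention rule can be applied to an Object Lock-enabled bucket (the bucket must have versioning enabled). The following is a sketch of the configuration JSON that could be passed to the s3api put-object-lock-configuration CLI command; the 365-day COMPLIANCE-mode retention is an example value, and GOVERNANCE mode can be used instead where authorized users need the ability to override retention:

{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Days": 365
    }
  }
}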

Isolation of environments with Amazon S3

As a managed service, Amazon S3 is protected by the AWS global network security procedures that are described in the security pillar of the AWS Well-Architected Framework.

Access to Amazon S3 via the network is through AWS published APIs. Clients must support Transport Layer Security (TLS) 1.0 or later; we recommend TLS 1.2 or later. Clients must also support cipher suites with Perfect Forward Secrecy (PFS), such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Diffie-Hellman Ephemeral (ECDHE). Additionally, requests must be signed using AWS Signature V4 or AWS Signature V2, which requires valid credentials. You can enforce TLS 1.2 or higher for all connections to your S3 buckets by using a resource-based policy attached to your bucket, as in the following example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceTLSv12orHigher",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:*"
      ],
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET"
      ],
      "Condition": {
        "NumericLessThan": {
          "s3:TlsVersion": 1.2
        }
      }
    }
  ]
}

Although these APIs can be called from any network location, Amazon S3 supports resource-based access policies, which can include restrictions based on the source IP address. You can also use Amazon S3 bucket policies to control access to buckets from specific virtual private cloud (VPC) endpoints (explored in a later section) or from specific VPCs. This restricts network access to a given Amazon S3 bucket to only the specified VPC within the AWS network.

Access points can be used to provide secure access to multi-tenant S3 buckets. Access points are named network endpoints that can be assigned to tenants sharing the same bucket so that each tenant can perform Amazon S3 object operations, such as GetObject and PutObject, only on its own data, which could be specific objects, tags, prefixes, or the entire bucket, as desired. Like Amazon S3 bucket policies, access point policies provide the capability to restrict access to the access point from resources within a specific VPC. Note that access points allow operations on objects in a bucket, not on the bucket itself. For a complete list of Amazon S3 operations and feature support, refer to the compatible Amazon S3 operations.

When an S3 access point is created for a bucket, Amazon S3 automatically generates an access point alias, a name that can be used in any operation that requires a bucket name. The access point can be referenced using either its full Amazon Resource Name (ARN) or its alias.
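
For example, an access point policy along the following lines could restrict a tenant’s role to object operations under its own prefix. The role name, access point name, account ID, and Region are illustrative placeholders; note the /object/ segment in the access point ARN, which scopes the resource to object keys:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TenantXObjectAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/TenantXRole"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/tenant-x-ap/object/TenantX/*"
    }
  ]
}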

Figure 1: Example architecture using S3 Access Points to abstract access to S3 bucket data

You can use access points in conjunction with Amazon S3 Object Lambda to perform specific business logic when the access point is called. S3 Object Lambda works with your existing applications and uses AWS Lambda functions to automatically process and transform your data as it is being retrieved from Amazon S3. The Lambda function is invoked in-line with a standard S3 GET request, so you don’t need to change your application code. You can add business logic within the Lambda function to perform certain tasks upon the GET request; one example is detecting and redacting personally identifiable information (PII).

Automating audits with APIs with Amazon S3

There are different ways to gain insights into Amazon S3 activity in your AWS accounts.

Amazon S3 is integrated with AWS CloudTrail, a service that records user activity and API calls on resources within your AWS accounts. CloudTrail captures a subset of API calls for Amazon S3 as events, including calls from the Amazon S3 console and calls from code to the Amazon S3 APIs. If you create a trail, then you can enable continuous delivery of CloudTrail events, including events for Amazon S3, to an S3 bucket. If you don’t configure a trail, then you can still view the most recent events in the CloudTrail console in Event history. CloudTrail logging provides identity information for events, such as:

  • Whether the request was made with root or AWS Identity and Access Management (IAM) user credentials.
  • Whether the request was made with temporary security credentials for a role or federated user.
  • Whether the request was made by another AWS service.

By default, CloudTrail logs S3 bucket-level API calls made in the last 90 days, but it does not log requests made to objects. You can also get CloudTrail logs for object-level Amazon S3 actions. To do this, enable data events for specific S3 buckets or for all buckets in your account. When an object-level action occurs in your account, CloudTrail evaluates your trail settings; if the event matches an object that you specified in a trail, then the event is logged. To learn more about CloudTrail, see the AWS CloudTrail User Guide.
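
The following is a minimal sketch of an event selector configuration that could be passed to the cloudtrail put-event-selectors CLI command to enable object-level (data event) logging for a single bucket; the bucket name is a placeholder, and the trailing slash scopes logging to all objects in that bucket:

[
  {
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [
      {
        "Type": "AWS::S3::Object",
        "Values": [
          "arn:aws:s3:::DOC-EXAMPLE-BUCKET/"
        ]
      }
    ]
  }
]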

The following example shows a CloudTrail log entry for the CreateBucket action.

{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "<IAMuser>",
    "arn": "<user-arn>",
    "accountId": "<accountId>",
    "accessKeyId": "<accessKeyId>",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "<principalId>",
        "arn": "<principal-arn>",
        "accountId": "<accountId>",
        "userName": "<user-Name>"
      },
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "2022-12-19T15:14:06Z",
        "mfaAuthenticated": "false"
      }
    }
  },
  "eventTime": "2022-12-19T15:16:44Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "CreateBucket",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "<sourceIPAddress>",
  "userAgent": "[AWSCloudTrail, aws-internal/3 aws-sdk-java/1.11.1030 Linux/5.10.157-122.673.amzn2int.x86_64 OpenJDK_64-Bit_Server_VM/25.352-b10 java/1.8.0_352 vendor/Oracle_Corporation cfg/retry-mode/standard]",
  "requestParameters": {
    "bucketName": "<cloudtrail-logs-target-bucket>",
    "Host": "s3-external-1.amazonaws.com"
  },
  "responseElements": null,
  "additionalEventData": {
    "SignatureVersion": "SigV4",
    "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
    "bytesTransferredIn": 0,
    "AuthenticationMethod": "AuthHeader",
    "x-amz-id-2": "<x-amz-id-2>",
    "bytesTransferredOut": 0
  },
  "requestID": "requestID",
  "eventID": "eventID",
  "readOnly": false,
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "<accountId>",
  "vpcEndpointId": "<vpcEndpointId>",
  "eventCategory": "Management",
  "tlsDetails": {
    "tlsVersion": "TLSv1.2",
    "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
    "clientProvidedHostHeader": "s3-external-1.amazonaws.com"
  }
}

Access requests to Amazon S3 buckets can be audited using server access logs. Logging is disabled by default, but it can be enabled to collect log information that is useful in security and access audits. It also provides valuable insights into access patterns to S3 buckets from your customer-facing applications, helping you understand your customer base.

When you enable logging, Amazon S3 delivers access logs for a source bucket to a target bucket that you choose. The target bucket must be in the same AWS Region and AWS account as the source bucket, and it must not have a default retention period configuration. For simpler log management, we recommend that you save access logs in a different bucket.
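
As an example, server access logging could be enabled with a bucket logging status like the following, passed to the s3api put-bucket-logging CLI command. The target bucket name and prefix are placeholders, and the target bucket must grant the Amazon S3 logging service permission to deliver the log objects:

{
  "LoggingEnabled": {
    "TargetBucket": "DOC-EXAMPLE-LOGS-BUCKET",
    "TargetPrefix": "access-logs/"
  }
}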

Amazon S3 Storage Lens is a cloud storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. S3 Storage Lens metrics can be used to generate summary insights into how much storage you have across your entire organization, identify the fastest-growing buckets and prefixes, identify cost-optimization opportunities, implement data-protection and access-management best practices, and improve the performance of application workloads. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data-protection best practices. In addition to viewing the dashboard on the Amazon S3 console, you can export metrics in CSV or Parquet format to an S3 bucket for further analysis with the analytics tool of your choice. To learn more, see the S3 Storage Lens user guide.

AWS Config can help automate governance of Amazon S3 configurations and take automated remediation actions. There is an ‘Operational Best Practices for Amazon S3’ Conformance Pack that we recommend enabling to make sure that you’re tracking the governance posture of your Amazon S3 usage. The full Conformance Pack template can be found here. The pack includes managed Config rules such as:

  • s3-bucket-public-read-prohibited
  • s3-bucket-public-write-prohibited
  • s3-bucket-server-side-encryption-enabled
  • s3-bucket-ssl-requests-only
  • s3-bucket-versioning-enabled
  • s3-bucket-logging-enabled

Access control and security with Amazon S3

Amazon S3 is used by FSI customers globally and is secure by default. By default, all Amazon S3 resources – buckets, objects, and related sub-resources (for example, lifecycle configuration and website configuration) – are private. Only the resource owner, the AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. Additionally, the security and compliance of Amazon S3 is assessed by third-party auditors as part of multiple AWS compliance programs, as discussed in the above section.

However, customers can choose to further configure access controls depending on their workloads. Operational access to Amazon S3 buckets is typically controlled by a combination of resource-based policies and identity-based policies.

A bucket policy is a resource-based policy that is used to grant access to a bucket and objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. These permissions do not apply to objects that are owned by other AWS accounts.

The following example denies all users permission to perform any Amazon S3 operations on objects in the specified bucket unless the request originates from the specified range of IP addresses (for example, 192.0.2.0/24):

{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "192.0.2.0/24"
        }
      }
    }
  ]
}

With identity-based policies (IAM user policies), you can attach permissions to IAM users, groups, and roles to perform various operations on S3 buckets.

The following is an example of an identity-based policy that grants the attached IAM identity permission to perform specific object-level operations within a specific S3 bucket prefix (here, a per-tenant prefix):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/TenantX/*"
    }
  ]
}

You should implement IAM policies that follow the principle of least privilege: allow specific actions on specific S3 buckets and objects, and grant users only the permissions required to perform their tasks.

Starting April 2023, all newly created buckets in the AWS Regions targeted by the change have Amazon S3 Block Public Access enabled and access control lists (ACLs) disabled by default. Both of these options are already console defaults and have long been recommended as best practices. The options are now the default for buckets created using the S3 API, the AWS CLI, the AWS SDKs, or AWS CloudFormation templates. Customers can adjust these settings after creating their buckets. More information can be found here.

With AWS PrivateLink for Amazon S3, you can provision interface VPC endpoints in your VPC. These endpoints are directly accessible from applications that are on-premises over VPN and AWS Direct Connect, or in a different AWS Region over VPC peering. AWS PrivateLink enables customers to access services hosted on AWS in a highly available and scalable manner, while keeping all the network traffic within the AWS network. Using VPC endpoints improves the security posture of S3 bucket access in multiple ways:

  • You can control the requests, users, or groups that are allowed through a specific VPC endpoint by using a VPC endpoint policy (see the example after this list).
  • You can only allow specific VPCs or VPC endpoints to have access to your S3 buckets by using S3 bucket policies.
  • You can help prevent data exfiltration by using a VPC that doesn’t have an Internet Gateway (IGW).
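
The following is a sketch of a VPC endpoint policy that only allows traffic through the endpoint to a specific bucket; the bucket name is a placeholder, and the actions shown are an illustrative minimum rather than a definitive set:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Access-to-specific-bucket-only",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ]
    }
  ]
}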

The following is an example of an S3 bucket policy that restricts access to a bucket from a specific VPC endpoint:

{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::awsexamplebucket1",
        "arn:aws:s3:::awsexamplebucket1/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}

The following is another example of an S3 bucket policy that restricts access to a bucket from a specific VPC; it applies automatically to all VPC endpoints within that VPC:

{
  "Version": "2012-10-17",
  "Id": "Policy1415115909153",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::awsexamplebucket1",
        "arn:aws:s3:::awsexamplebucket1/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-111bbb22"
        }
      }
    }
  ]
}

Conclusion

In this post, we reviewed Amazon S3 and highlighted key information that can help FSI customers accelerate the approval of the service within these five categories: achieving compliance, encryption (including S3 Object Lock), isolation of environments, automating audits with APIs, and access control and security. Although not a one-size-fits-all approach, this guidance can be adapted to meet your organization’s security and compliance requirements and provides a consolidated list of key areas for Amazon S3.

In the meantime, visit our AWS Financial Services Industry blog channel and stay tuned for more FSI news and best practices.

Any discussion of reference architectures in this post is illustrative and for informational purposes only. It is based on the information available at the time of publication. Any steps/recommendations are meant for educational purposes and initial proof of concepts, and not a full-enterprise solution.

Anthony Pasquariello

Anthony is a Senior Solutions Architect at AWS based in New York City. He specializes in modernization and security for our advanced enterprise customers. Anthony enjoys writing and speaking about all things cloud. He’s pursuing an MBA, and received his MS and BS in Electrical & Computer Engineering.

Sayan Chakraborty

Sayan is a Sr. Solutions Architect at AWS. He helps large enterprises build secure, scalable, and performant solutions in the AWS Cloud. With a background in enterprise and technology architecture, he has experience delivering large-scale digital transformation programs across a wide range of industry verticals. He holds a B. Tech. degree in Computer Engineering from Manipal University, Sikkim, India.