How do I troubleshoot 403 Access Denied errors from Amazon S3?

Last updated: 2022-05-13

My users are trying to access objects in my Amazon Simple Storage Service (Amazon S3) bucket, but Amazon S3 is returning the 403 Access Denied error. How can I troubleshoot this error?

Resolution

Use the AWS Systems Manager automation document

Use the AWSSupport-TroubleshootS3PublicRead automation document on AWS Systems Manager. This automation document helps you diagnose issues reading objects from a public S3 bucket that you specify.

Check bucket and object ownership

For AccessDenied errors from GetObject or HeadObject requests, check whether the object is also owned by the bucket owner. Also, verify whether the bucket owner has read or full control access control list (ACL) permissions.

Confirm the account that owns the objects

By default, an S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account. If other accounts can upload objects to your bucket, then verify the account that owns the objects that your users can't access.

Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.

1.    Run the list-buckets AWS Command Line Interface (AWS CLI) command to get the Amazon S3 canonical ID for your account by querying the Owner ID.

aws s3api list-buckets --query "Owner.ID"

2.    Run the list-objects command to get the Amazon S3 canonical ID of the account that owns the object that users can't access. Replace DOC-EXAMPLE-BUCKET with the name of your bucket and exampleprefix with your prefix value.

aws s3api list-objects --bucket DOC-EXAMPLE-BUCKET --prefix exampleprefix

Tip: The list-objects command returns multiple objects per call, so you can use it to check the ownership of several objects at once.
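For example, to compare owners directly, you can add a --query expression that lists each key together with its owner's canonical ID (the bucket name and prefix are placeholders; this command requires AWS credentials with permission to list the bucket):

```shell
aws s3api list-objects --bucket DOC-EXAMPLE-BUCKET --prefix exampleprefix --query "Contents[].[Key, Owner.ID]"
```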

3.    If the canonical IDs don't match, then you don't own the object. The object owner can grant you full control of the object by running the put-object-acl command. Replace DOC-EXAMPLE-BUCKET with the name of the bucket that contains the objects. Replace exampleobject.jpg with your key name.

aws s3api put-object-acl --bucket DOC-EXAMPLE-BUCKET --key exampleobject.jpg --acl bucket-owner-full-control

4.    After the object owner changes the object's ACL to bucket-owner-full-control, the bucket owner can access the object. However, the ACL change alone doesn't change ownership of the object. To change the object owner to the bucket's account, run the cp command from the bucket's account to copy the object over itself.
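As a sketch, the copy-in-place command can look like the following. Amazon S3 rejects a copy of an object onto itself unless something changes, so the example replaces the object's metadata during the copy (the bucket and key names are placeholders):

```shell
aws s3 cp s3://DOC-EXAMPLE-BUCKET/exampleobject.jpg s3://DOC-EXAMPLE-BUCKET/exampleobject.jpg --metadata-directive REPLACE
```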

Copy all new objects to a bucket in another account

1.    Set a bucket policy that requires objects to be uploaded with the bucket-owner-full-control ACL.

2.    In the AWS Management Console, set S3 Object Ownership to bucket owner preferred.

The object's owner is then automatically updated to the bucket owner when the object is uploaded with the bucket-owner-full-control ACL.
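The bucket policy in step 1 can look like the following example, which denies PutObject requests that don't specify the bucket-owner-full-control ACL (the Sid and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireBucketOwnerFullControl",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
```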

Create an IAM role with permissions to your bucket

For ongoing cross-account permissions, create an IAM role in your account with permissions to your bucket. Then, grant another AWS account the permission to assume that IAM role. For more information, see Tutorial: Delegate access across AWS accounts using IAM roles.
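As a sketch, the role's trust policy allows the other account to assume the role (the account ID 444455556666 is a placeholder for the other account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::444455556666:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```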

Check the bucket policy or IAM user policies

Review the bucket policy or associated IAM user policies for any statements that might be denying access. Verify that the requests to your bucket meet any conditions in the bucket policy or IAM policies. Check for any incorrect deny statements, missing actions, or incorrect spacing in a policy.

Deny statement conditions

Check deny statements for conditions that block access based on the following:

  • multi-factor authentication (MFA)
  • encryption keys
  • specific IP address
  • specific VPCs or VPC endpoints
  • specific IAM users or roles

Note: If you require MFA and users send requests through the AWS CLI, then make sure that the users configure the AWS CLI to use MFA.
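For example, a deny statement that blocks requests made without MFA can use the aws:MultiFactorAuthPresent condition key, as in the following statement (the Sid and bucket name are placeholders):

```json
{
  "Sid": "DenyRequestsWithoutMFA",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
  "Condition": {
    "BoolIfExists": {
      "aws:MultiFactorAuthPresent": "false"
    }
  }
}
```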

For example, in the following bucket policy, Statement1 allows public access to download objects (s3:GetObject) from DOC-EXAMPLE-BUCKET. However, Statement2 explicitly denies everyone access to download objects from DOC-EXAMPLE-BUCKET unless the request is from the VPC endpoint vpce-1a2b3c4d. In this case, the deny statement takes precedence. This means that users who try to download objects from outside of vpce-1a2b3c4d are denied access.

{
  "Id": "Policy1234567890123",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
      "Principal": "*"
    },
    {
      "Sid": "Statement2",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1a2b3c4d"
        }
      },
      "Principal": "*"
    }
  ]
}

Bucket policies or IAM policies

Check that the bucket policy or IAM policies allow the Amazon S3 actions that your users need. For example, the following bucket policy doesn’t include permission to the s3:PutObjectAcl action. If the IAM user tries to modify the access control list (ACL) of an object, then the user gets an Access Denied error.

{
  "Id": "Policy1234567890123",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1234567890123",
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:user/Dave"
        ]
      }
    }
  ]
}

Other policy errors

Check that there aren’t any extra spaces or incorrect ARNs in the bucket policy or IAM user policies.

For example, suppose that an IAM policy has an extra space in the Amazon Resource Name (ARN), as follows: arn:aws:s3::: DOC-EXAMPLE-BUCKET/*. The ARN is then incorrectly evaluated as arn:aws:s3:::%20DOC-EXAMPLE-BUCKET/*, and the IAM user gets an Access Denied error.
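To illustrate why the stray space breaks the match, the following sketch (plain Python, not AWS-specific) shows how a leading space in the bucket name is URL-encoded into the evaluated ARN:

```python
from urllib.parse import quote

# A stray space before the bucket name becomes part of the resource name.
bucket_with_space = " DOC-EXAMPLE-BUCKET"

# URL encoding turns the space into %20, so the evaluated ARN no longer
# matches the real bucket ARN arn:aws:s3:::DOC-EXAMPLE-BUCKET.
print("arn:aws:s3:::" + quote(bucket_with_space) + "/*")
# prints arn:aws:s3:::%20DOC-EXAMPLE-BUCKET/*
```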

Confirm that IAM permissions boundaries allow access to Amazon S3

Review the IAM permissions boundaries that are set on the IAM identities that are trying to access the bucket. Confirm that the IAM permissions boundaries allow access to Amazon S3.

Check the bucket's Amazon S3 Block Public Access settings

If you're getting Access Denied errors on public read requests that are allowed, check the bucket's Amazon S3 Block Public Access settings.

Review the S3 Block Public Access settings at both the account and bucket level. These settings can override permissions that allow public read access. Amazon S3 Block Public Access can apply to individual buckets or AWS accounts.
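To review the bucket-level settings from the AWS CLI, you can run the get-public-access-block command (the bucket name is a placeholder; the command requires appropriate permissions):

```shell
aws s3api get-public-access-block --bucket DOC-EXAMPLE-BUCKET
```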

Review user credentials

Review the credentials that your users have configured to access Amazon S3. AWS SDKs and the AWS CLI must be configured to use the credentials of the IAM user or role with access to your bucket.

For the AWS CLI, run the configure command to check the configured credentials:

aws configure list

If users access your bucket through an Amazon Elastic Compute Cloud (Amazon EC2) instance, then verify that the instance is using the correct role. Connect to the instance, then run the get-caller-identity command:

aws sts get-caller-identity

Review temporary security credentials

If users receive Access Denied errors from temporary security credentials granted using AWS Security Token Service (AWS STS), then review the associated session policy. When an administrator creates temporary security credentials using the AssumeRole API call, or the assume-role command, they can pass session-specific policies.

To find the session policies associated with the Access Denied errors from Amazon S3, look for AssumeRole events within the AWS CloudTrail event history. Make sure to look for AssumeRole events in the same timeframe as the failed requests to access Amazon S3. Then, review the requestParameters field in the relevant CloudTrail logs for any policy or policyArns parameters. Confirm that the associated policy or policy ARN grants the necessary Amazon S3 permissions.
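For example, you can use the lookup-events AWS CLI command to list recent AssumeRole events, then inspect each event's requestParameters field:

```shell
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRole --max-results 10
```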

For example, the following snippet of a CloudTrail log shows that the temporary credentials include an inline session policy that grants s3:GetObject permissions to DOC-EXAMPLE-BUCKET:

"requestParameters": {
    "roleArn": "arn:aws:iam::123412341234:role/S3AdminAccess",
    "roleSessionName": "s3rolesession",
    "policy": "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"s3:GetObject\"\n      ],\n      \"Resource\": [\n        \"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*\"\n      ]\n    }\n  ]\n}"
}

Confirm that the Amazon VPC endpoint policy includes the correct permissions to access your S3 buckets and objects

If users access your bucket with an EC2 instance routed through a VPC endpoint, then check the VPC endpoint policy.

For example, the following VPC endpoint policy allows access only to DOC-EXAMPLE-BUCKET. Users who send requests through this VPC endpoint can’t access any other bucket.

{
  "Id": "Policy1234567890123",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1234567890123",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ],
      "Principal": "*"
    }
  ]
}

Review your Amazon S3 access point's IAM policy

If you use an Amazon S3 access point to manage access to your bucket, then review the access point's IAM policy.

Permissions granted in an access point policy are effective only if the underlying bucket policy also allows the same access. Confirm that both the bucket policy and the access point policy grant the correct permissions.

Confirm that the object isn't missing and doesn't contain special characters

Check whether the requested object exists in the bucket. If the object doesn't exist, and you don't have s3:ListBucket permissions on the bucket, then Amazon S3 returns an Access Denied error instead of a 404 Not Found error.

An object key that contains special characters (such as spaces) requires special handling to retrieve.

Run the head-object AWS CLI command to check if an object exists in the bucket. Replace DOC-EXAMPLE-BUCKET with the name of the bucket that you want to check.

aws s3api head-object --bucket DOC-EXAMPLE-BUCKET --key exampleobject.jpg

If the object exists in the bucket, then the Access Denied error isn't masking a 404 Not Found error. Check other configuration requirements to resolve the Access Denied error.

If the object isn’t in the bucket, then the Access Denied error is masking a 404 Not Found error. Resolve the issue related to the missing object.

Check the AWS KMS encryption configuration

Note the following about AWS KMS (SSE-KMS) encryption:

  • If an IAM user can’t access an object that the user has full permissions to, then check if the object is encrypted by SSE-KMS. You can use the Amazon S3 console to view the object’s properties, which include the object’s server-side encryption information.
  • If the object is SSE-KMS encrypted, then make sure that the KMS key policy grants the IAM user the minimum required permissions for using the key. For example, if the IAM user is using the key only for downloading an S3 object, then the IAM user must have kms:Decrypt permissions. For more information, see Allows access to the AWS account and enables IAM policies.
  • If the IAM identity and key are in the same account, then kms:Decrypt permissions should be granted using the key policy. The key policy must reference the same IAM identity as the IAM policy.
  • If the IAM user belongs to a different account than the AWS KMS key, then these permissions must also be granted on the IAM policy. For example, to download the SSE-KMS encrypted objects, the kms:Decrypt permissions must be specified in both the key policy and IAM policy. For more information about cross-account access between the IAM user and KMS key, see Allowing users in other accounts to use a KMS key.
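For example, a key policy statement that grants an IAM user the kms:Decrypt permission for downloading SSE-KMS encrypted objects can look like the following (the account ID and user name are placeholders). In a key policy, "Resource": "*" refers to the KMS key that the policy is attached to:

```json
{
  "Sid": "AllowDecryptForS3Downloads",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:user/Dave"
  },
  "Action": "kms:Decrypt",
  "Resource": "*"
}
```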

Confirm that the request-payer parameter is specified by users (if you're using Requester Pays)

If your bucket has Requester Pays activated, then users from other accounts must specify the request-payer parameter when they send requests to your bucket. To check whether Requester Pays is enabled, use the Amazon S3 console to view your bucket’s properties.
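From the AWS CLI, you can also check the bucket's payer configuration with the get-bucket-request-payment command, which returns Requester or BucketOwner (the bucket name is a placeholder):

```shell
aws s3api get-bucket-request-payment --bucket DOC-EXAMPLE-BUCKET
```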

The following example AWS CLI command includes the correct parameter to access a cross-account bucket with Requester Pays:

aws s3 cp exampleobject.jpg s3://DOC-EXAMPLE-BUCKET/exampleobject.jpg --request-payer requester

Check your AWS Organizations service control policy

If you're using AWS Organizations, then check the service control policies to make sure that access to Amazon S3 is allowed. Service control policies specify the maximum permissions for the affected accounts. For example, the following policy explicitly denies access to Amazon S3 and results in an Access Denied error:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

For more information on the features of AWS Organizations, see Enabling all features in your organization.

