How can I troubleshoot connectivity failures and errors for an AWS DMS task that uses Amazon Redshift as the target endpoint?
Last updated: 2022-01-21
How can I troubleshoot connectivity failures and errors for an AWS Database Migration Service (AWS DMS) task that uses Amazon Redshift as the target endpoint?
When you test the connectivity to an Amazon Redshift endpoint, the test can fail if you haven't met the prerequisites for using an Amazon Redshift database as a target for AWS Database Migration Service. This can happen if you haven't created and configured the required AWS Identity and Access Management (IAM) role, or if the Amazon Simple Storage Service (Amazon S3) bucket name for an endpoint ARN is already in use. The required IAM role is created automatically when you use the AWS DMS console, but it isn't created if you use the AWS DMS API or the AWS Command Line Interface (AWS CLI).
A connectivity test can also fail if there are problems with the network configuration of the AWS DMS task. To troubleshoot endpoint connectivity errors, see How can I troubleshoot AWS DMS endpoint connectivity failures?
If the required IAM role isn't created and configured correctly, you might receive an error similar to the following:
Role 'dms-access-for-endpoint' is not configured properly
Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.
Resolve Role 'dms-access-for-endpoint' is not configured properly errors
To resolve this error, confirm that the dms-access-for-endpoint IAM role is created and configured correctly. For information about the configuration of this role, see Creating the IAM Roles to use with the AWS CLI and AWS DMS API.
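If you used the AWS CLI or the AWS DMS API to create your endpoints, you must create the role yourself. A minimal sketch with the AWS CLI, assuming the trust policy that the AWS DMS documentation describes for Amazon Redshift targets (it allows both AWS DMS and Amazon Redshift to assume the role):

```shell
# Create the dms-access-for-endpoint role with a trust policy that lets
# AWS DMS (and, for Redshift targets, Amazon Redshift) assume the role.
aws iam create-role \
  --role-name dms-access-for-endpoint \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "Service": ["dms.amazonaws.com", "redshift.amazonaws.com"] },
        "Action": "sts:AssumeRole"
      }
    ]
  }'

# Verify that the role exists and inspect its trust policy.
aws iam get-role --role-name dms-access-for-endpoint
```

After the role exists, you must still attach the required Amazon S3 permissions to it, as described in the next section.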
If the AWS managed policy isn't configured correctly, you might receive an error similar to the following:
Unable to create S3 bucket for Redshift. Bucket Name for endpoint ARN is in use.
This error occurs when:
- The AWS managed policy (AmazonDMSRedshiftS3Role, or a similar custom policy) isn't attached to the dms-access-for-endpoint IAM role.
- The dms-access-for-endpoint IAM role policy has an explicit deny for Amazon S3.
- The preconfigured Amazon S3 bucket policy that AWS DMS created automatically and associated with the Amazon Redshift endpoint has been modified with an explicit restriction.
To resolve this error, attach the default managed policy (AmazonDMSRedshiftS3Role) or similar custom policy to the dms-access-for-endpoint IAM role. Then, confirm that the default Amazon S3 bucket policy associated by AWS DMS hasn't been modified. For more information, see Amazon S3 bucket settings.
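For example, you can attach the managed policy with the AWS CLI. This is a sketch; the policy ARN below is the standard one for AmazonDMSRedshiftS3Role in the `aws` partition, so verify it for your partition:

```shell
# Attach the managed policy that grants DMS access to the intermediate
# S3 bucket used for the Redshift target endpoint.
aws iam attach-role-policy \
  --role-name dms-access-for-endpoint \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonDMSRedshiftS3Role

# Confirm the attachment, then review the attached policies for any
# explicit deny statements on Amazon S3 actions.
aws iam list-attached-role-policies --role-name dms-access-for-endpoint
```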
Migrate data to an Amazon Redshift endpoint
When you migrate data to an Amazon Redshift target endpoint, AWS DMS uses a default Amazon S3 bucket as intermediate task storage and then copies the migrated data to Amazon Redshift. When you run the test connection for the target Amazon Redshift endpoint, AWS DMS automatically creates an S3 bucket with the following naming convention:
dms-'Redshift endpoint ARN'
You can choose a custom S3 bucket for this intermediate storage. For more information, see Using an Amazon Redshift database as a target for AWS Database Migration Service.
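As a sketch, you can point an existing Amazon Redshift target endpoint at your own bucket through the endpoint's Redshift settings. The endpoint ARN, role ARN, bucket name, and folder below are placeholders:

```shell
# Use a custom S3 bucket and folder as intermediate storage for the
# Redshift target endpoint (replace the ARNs and names with your own).
aws dms modify-endpoint \
  --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE \
  --redshift-settings '{
    "BucketName": "my-dms-staging-bucket",
    "BucketFolder": "redshift-staging",
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-access-for-endpoint"
  }'
```

The service access role that you specify must be able to read and write objects in the custom bucket.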
Resolve Amazon S3; Status Code: 400; Error Code: TooManyBuckets errors
If your account has reached the limit for Amazon S3, you might receive an error similar to the following when you test your endpoint:
- Service: Amazon S3; Status Code: 400; Error Code: TooManyBuckets; Request ID: xxxxxxxxxxx; S3 Extended Request ID: xxxxxxxxxxxxxx; Proxy: null
To resolve this error, delete unused buckets from your account, and test the endpoint again.
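To check how close your account is to the bucket quota and to remove unused buckets, a sketch with the AWS CLI (the bucket name is a placeholder):

```shell
# Count the buckets in the account; the default S3 quota is 100 buckets.
aws s3api list-buckets --query 'length(Buckets)'

# Delete an unused, empty bucket.
aws s3 rb s3://my-unused-bucket

# Delete a non-empty bucket; this removes all of its objects first,
# so confirm the bucket is safe to delete before running it.
aws s3 rb s3://my-unused-bucket --force
```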
Resolve <NoSuchBucket> The specified bucket does not exist errors
If you delete an Amazon S3 bucket that AWS DMS created during the migration task, you might receive an error similar to the following in the task logs:
- <NoSuchBucket> The specified bucket does not exist.
To resolve this issue, test the connection for your Amazon Redshift endpoint, and then restart or resume the task. If you configured your DMS endpoint to use a custom bucket, make sure that the bucket exists in Amazon S3 before you start the task again.
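The recovery steps above can be sketched with the AWS CLI. All ARNs and the bucket name are placeholders:

```shell
# If the endpoint uses a custom bucket, confirm that the bucket exists.
aws s3api head-bucket --bucket my-dms-staging-bucket

# Re-test the Redshift endpoint so that DMS can recreate its default
# intermediate bucket if needed.
aws dms test-connection \
  --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE \
  --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE

# Check the connection status, then resume the task from where it stopped.
aws dms describe-connections \
  --filters Name=endpoint-arn,Values=arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE
aws dms start-replication-task \
  --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
  --start-replication-task-type resume-processing
```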