When I try to import data into an Amazon DynamoDB table using Hive on Amazon EMR, I get an error message similar to the following:

The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code

This is an HTTP 400 error, which indicates a client-side problem. The most common causes are an incorrect schema, corrupt data, or mismatched data. Be sure that the key mapping in your request matches the Amazon DynamoDB key structure: the key attributes, and their types, must match the table's key schema exactly. For more information, see Incorrect DynamoDB key mapping.
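As a hedged sketch of what a correct mapping looks like, the following HiveQL defines a Hive table backed by a DynamoDB table. The table and column names here are illustrative, not from the original article; the point is that the columns mapped to the DynamoDB partition and sort keys must exist and must use compatible Hive types.

```sql
-- Hypothetical example: the DynamoDB table "Orders" has a String
-- partition key "id" and a Number sort key "ts". The Hive columns
-- mapped to those keys must match that schema exactly.
CREATE EXTERNAL TABLE ddb_orders (
  id      string,
  ts      bigint,
  payload string
)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
  "dynamodb.table.name"     = "Orders",
  -- Format is hiveColumn:dynamodbAttribute. A missing or mistyped
  -- key attribute here can trigger the schema error above.
  "dynamodb.column.mapping" = "id:id,ts:ts,payload:payload"
);
```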

If you still get the error after ruling out these common causes, check the Hive application logs in the /mnt/var/log/hive directory on the master node of your Amazon EMR cluster. Look for entries similar to the following:

2018-02-01 08:17:27,782 [INFO] [TezChild] |s3n.S3NativeFileSystem|: Opening 's3://bucket/folder/ddb_hive.sql' for reading
2018-02-01 08:17:27,817 [INFO] [TezChild] |exec.Utilities|: PLAN PATH = hdfs://ip-172-31-xx-xxx.ec2.internal:8020/tmp/hive/hadoop/e27c0150-663a-4935-ba7c-a321a6b077bd/hive_2018-02-01_08-14-39_999_8671394604140937828-1/hadoop/_tez_scratch_dir/208eda2e-6ec4-40da-a3b0-79db50920ec4/map.xml
2018-02-01 08:17:27,817 [INFO] [TezChild] |lazy.LazyStruct|: Missing fields! Expected 3 fields but only got 2! Ignoring similar problems.

In this example, the Hive script is stored in the same Amazon Simple Storage Service (Amazon S3) location as the input files. As a result, the import job reads the script itself as input data and tries to write it to the DynamoDB table, in addition to running it. To resolve this problem, move the Hive script to a different location in Amazon S3.
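To see why this happens, consider a hedged HiveQL sketch of the S3 side of the import. The table and column names are hypothetical; the S3 path matches the one in the log output above. An external table's LOCATION is a prefix: Hive reads every object under it, including a stray .sql file, whose lines then fail DynamoDB's key schema check.

```sql
-- Hypothetical example: the S3-backed input table for the import job.
-- If ddb_hive.sql also sits under s3://bucket/folder/, Hive treats the
-- script's text as rows of input data. Keep the script outside this
-- prefix (for example, under s3://bucket/scripts/).
CREATE EXTERNAL TABLE s3_orders (
  id      string,
  ts      bigint,
  payload string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://bucket/folder/';

-- The import reads every object under LOCATION, not just the data files.
INSERT OVERWRITE TABLE ddb_orders SELECT * FROM s3_orders;
```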


Published: 2018-07-17