Why is Kinesis Data Streams trigger unable to invoke my Lambda function?

Last updated: 2020-06-30

I've integrated AWS Lambda with Amazon Kinesis Data Streams as an event source to process my Amazon Kinesis data stream. However, the Lambda function is not being invoked. Why is this happening and how do I resolve this?

Short description

Lambda function errors are often caused by the following:

  • Insufficient permissions in the Lambda function's execution role.
  • No incoming data into the Kinesis data stream.
  • Inactive event source mapping caused by the recreation of a Kinesis data stream, Lambda function, or Lambda execution role.
  • A Lambda function that exceeds its configured timeout, causing the invocation to fail with a timeout error.
  • Lambda reaching its limit of concurrent executions. For more information, see AWS Lambda limits.

If there is a Lambda function error, your function can't successfully process the records from the batch. The error can cause Lambda to retry the batch of records until the processing succeeds or the batch expires. For more information about Lambda function and Kinesis errors, see Using AWS Lambda with Amazon Kinesis.

Resolution

To identify why your Lambda function is not invoked, perform the following steps:

1.    Check the Invocations metric in Amazon CloudWatch with the statistic set to Sum for the Lambda function. The Invocations metric can help you verify whether the Lambda function is being invoked.

2.    Check the IteratorAge metric to see how old the last record in the batch was when processing completed. When Lambda is unable to invoke your consumer function, the iterator age of your stream increases.
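
As a quick check, you can query both metrics programmatically. The following is a minimal sketch using the AWS SDK for Python (Boto3); the function name, time window, and period are placeholder values.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

FUNCTION_NAME = "my-kinesis-consumer"  # placeholder function name

end = datetime.utcnow()
start = end - timedelta(hours=1)

# Sum of invocations over the last hour
invocations = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Sum"],
)

# Maximum iterator age (in milliseconds) over the last hour
iterator_age = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="IteratorAge",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Maximum"],
)

print(sorted(invocations["Datapoints"], key=lambda d: d["Timestamp"]))
print(sorted(iterator_age["Datapoints"], key=lambda d: d["Timestamp"]))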

3.    Check the Lambda function's CloudWatch logs, which are named in the /aws/lambda/<function name> format. Look for log entries that correspond to the function error. For example, if the error occurs because of the Lambda execution role's permissions, then modify the policy to grant proper access to the AWS Identity and Access Management (IAM) role.
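
To scan the log group for errors without opening the console, you can filter the log events with Boto3. This is a minimal sketch; the function name and filter pattern are examples only.

import time

import boto3

logs = boto3.client("logs")

# Lambda log groups use the /aws/lambda/<function name> format
response = logs.filter_log_events(
    logGroupName="/aws/lambda/my-kinesis-consumer",  # placeholder function name
    filterPattern="?ERROR ?Exception",               # match either term
    startTime=int((time.time() - 3600) * 1000),      # last hour, in milliseconds
)

for log_event in response["events"]:
    print(log_event["timestamp"], log_event["message"].strip())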

4.    Confirm that your IAM execution role has the proper permissions to write to CloudWatch Logs:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
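
If the role is missing these permissions, one way to add them is to attach the policy above as an inline policy with Boto3. This is a sketch only; the role name and policy name below are placeholders.

import json

import boto3

iam = boto3.client("iam")

logging_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "*",
        }
    ],
}

iam.put_role_policy(
    RoleName="my-lambda-execution-role",       # placeholder role name
    PolicyName="lambda-cloudwatch-logging",    # placeholder policy name
    PolicyDocument=json.dumps(logging_policy),
)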

5.    (Optional) If you experience a permissions error, then update the policy on your Lambda function's execution role to grant it access to Kinesis Data Streams:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesis:DescribeStream",
                "kinesis:DescribeStreamSummary",
                "kinesis:GetRecords",
                "kinesis:GetShardIterator",
                "kinesis:ListShards",
                "kinesis:ListStreams",
                "kinesis:SubscribeToShard",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}

Note: The AWSLambdaKinesisExecutionRole policy includes these permissions.
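
Because recreating the stream, function, or execution role can leave the event source mapping inactive, it's also worth confirming that the mapping exists and is in the Enabled state. The following Boto3 sketch lists the mappings for a placeholder function name and stream ARN.

import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.list_event_source_mappings(
    FunctionName="my-kinesis-consumer",  # placeholder function name
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-stream",  # placeholder stream ARN
)

for mapping in response["EventSourceMappings"]:
    # State should be "Enabled"; LastProcessingResult can hint at recent failures
    print(mapping["UUID"], mapping["State"], mapping.get("LastProcessingResult"))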

Additional troubleshooting:

  • If the error is related to the Lambda function timing out, then increase the function's timeout value to allow more time for processing.
  • If your Kinesis data stream is encrypted using AWS Key Management Service (AWS KMS), the consumer and producer must have proper access. The Kinesis data stream must be able to access the AWS KMS keys that are used for encryption and decryption. The Lambda function's execution role must also have read access to the KMS key to successfully read data from the Kinesis data stream.
  • If the error is caused by an internal Lambda function error, then it indicates an issue with stream processing. To avoid breaching the Kinesis and Amazon DynamoDB control plane limits, restrict each stream to 4 to 5 event source mappings.
  • If you are getting a connection timeout error, then add logging statements before and after the API calls made in your code, as in the error-handling sketch after this list. You can then identify the exact line of code where the function begins to fail.
  • If you are experiencing slow or stalled shards, then configure the event source mapping to retry with a smaller batch size. You can also limit the number of retries or discard records that are too old. See the event source mapping sketch after this list.
  • If you see a "memory used" error message in your CloudWatch logs, then increase the memory of your Lambda function.
  • If your Lambda function exceeds its maximum timeout, then modify the client library and client timeouts. To set timeouts based on the time remaining in the invocation, use the context.GetRemainingTimeInMillis function, which returns the amount of time left before the function times out. See the timeout sketch after this list.
  • If you receive errors from the Lambda function's code, then your Lambda function might get stuck retrying the same record. Use a try-catch block to catch the failed data, and then record it using an Amazon SQS queue or Amazon SNS topic, as shown in the error-handling sketch after this list. You can also add a Lambda trigger to the Amazon SQS queue with the processing logic to retry the failed requests separately.
  • Set up a dead-letter queue (DLQ) so that you can manually re-invoke the Lambda function for failed events. Configure the DeadLetterConfig property when you create or update your Lambda function. You can provide an Amazon Simple Queue Service (Amazon SQS) queue or an Amazon Simple Notification Service (Amazon SNS) topic as the TargetArn for your DLQ. Lambda then writes the event object that invoked the function to the specified endpoint after the standard retry policy is exhausted.
  • Use AWS Lambda with AWS X-Ray to detect, analyze, and optimize performance issues with Lambda applications. AWS X-Ray collects metadata from the Lambda service and generates graphs that depict issues impacting the performance of your Lambda application. For example, if a call is taking a long time to execute, then you can use AWS X-Ray to confirm the issue.
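
Event source mapping sketch: the following Boto3 call shows one way to apply a smaller batch size, a retry limit, and a maximum record age to an existing mapping. The UUID and values are placeholders; you can get the UUID from list_event_source_mappings.

import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_event_source_mapping(
    UUID="12345678-1234-1234-1234-123456789012",  # placeholder mapping UUID
    BatchSize=50,                     # smaller batches
    MaximumRetryAttempts=3,           # cap retries instead of retrying until the records expire
    MaximumRecordAgeInSeconds=3600,   # discard records older than one hour
    BisectBatchOnFunctionError=True,  # split a failing batch to isolate bad records
)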
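
Timeout sketch: in the Python runtime the context method is get_remaining_time_in_millis. The following handler derives an SDK client timeout from the time remaining in the invocation; the five-second safety margin is an arbitrary example.

import boto3
from botocore.config import Config

def lambda_handler(event, context):
    # Leave a margin so downstream calls fail before the function itself times out
    remaining_seconds = context.get_remaining_time_in_millis() / 1000
    timeout = max(1, int(remaining_seconds - 5))

    kinesis = boto3.client(
        "kinesis",
        config=Config(connect_timeout=timeout, read_timeout=timeout, retries={"max_attempts": 2}),
    )
    # ... process event["Records"] using the client above ...
    return {"batchSize": len(event["Records"])}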
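
Error-handling sketch: the following Python handler logs each record before and after processing, and sends any record that raises an exception to an Amazon SQS queue so that the batch doesn't block the shard. The queue URL and the process function are placeholders.

import base64
import json
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

sqs = boto3.client("sqs")
FAILED_RECORDS_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/failed-records"  # placeholder

def process(payload):
    # Placeholder for the real processing logic
    pass

def lambda_handler(event, context):
    for record in event["Records"]:
        sequence_number = record["kinesis"]["sequenceNumber"]
        payload = base64.b64decode(record["kinesis"]["data"])
        logger.info("Processing record %s", sequence_number)
        try:
            process(payload)
            logger.info("Processed record %s", sequence_number)
        except Exception:
            logger.exception("Failed record %s", sequence_number)
            # Capture the failed data for separate retry or analysis
            sqs.send_message(
                QueueUrl=FAILED_RECORDS_QUEUE_URL,
                MessageBody=json.dumps({
                    "sequenceNumber": sequence_number,
                    "data": payload.decode("utf-8", errors="replace"),
                }),
            )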
