How can I push Amazon CloudWatch Logs cross-account to Kinesis Data Firehose?

Last updated: 2020-11-17

I want to stream Amazon CloudWatch Logs to an Amazon Kinesis Data Firehose stream in another account and a different Region. How do I set up a cross-account stream from Kinesis Data Firehose to an Amazon Simple Storage Service (Amazon S3) bucket in another Region?

Short description

Sending Amazon CloudWatch Logs to a Kinesis Data Firehose stream in a different AWS account and Region fails unless you set up a cross-account CloudWatch Logs destination and the required IAM permissions.

Important: Make sure your Region supports Kinesis Data Firehose.

To establish cross-account and cross-Region streaming using Kinesis Data Firehose, perform the following steps:

1.    Create an S3 bucket in the target account. Create an AWS Identity and Access Management (IAM) role, and then attach the required permission for Kinesis Data Firehose to push data to S3.

2.    In the destination account, create an IAM role that allows the Amazon CloudWatch Logs service to put data into Kinesis Data Firehose. Then, create a CloudWatch Logs destination that points to your delivery stream.

3.    Enable VPC Flow Logs and push the logs to Amazon CloudWatch in the source account.

4.    Create a subscription filter in your source account that points to the destination account.

5.    Validate the flow of log events in the S3 bucket in your destination account. 

Resolution

Setting up the destination account

1.    Create an Amazon S3 bucket:

aws s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=us-west-2 --region us-west-2

The location constraint indicates that the bucket will be created in the us-west-2 Region.
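To confirm the bucket's Region after creation, you can run the standard get-bucket-location command:

aws s3api get-bucket-location --bucket my-bucket

The output returns the LocationConstraint value (us-west-2 in this example).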

2.    Create a trust policy (TrustPolicyForFirehose.json) that grants Kinesis Data Firehose permission to assume an IAM role:

{
    "Statement": {
        "Effect": "Allow",
        "Principal": {
            "Service": "firehose.amazonaws.com"
        },
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {
                "sts:ExternalId": "111111111111"
            }
        }
    }
}

The permission settings must allow Kinesis Data Firehose to put data into your Amazon S3 bucket. Replace "111111111111" with the AWS account ID of the destination account.

3.    Create the IAM role and specify the trust policy file:

aws iam create-role \
    --role-name FirehosetoS3Role \
    --assume-role-policy-document file://~/TrustPolicyForFirehose.json

Note: You'll need to use the returned Role_Arn value in a later step.
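If you didn't record the ARN from the create-role output, one way to retrieve it later is the standard get-role command with a query filter:

aws iam get-role --role-name FirehosetoS3Role --query 'Role.Arn' --output text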

4.    Create a permissions policy in a JSON file (PermissionsForFirehose.json) to define which actions Kinesis Data Firehose can perform within your account:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObjectAcl",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}

5.    Associate the permissions policy with the IAM role:

aws iam put-role-policy --role-name FirehosetoS3Role --policy-name Permissions-Policy-For-Firehose --policy-document file://~/PermissionsForFirehose.json

6.    Create a destination delivery stream for Kinesis Data Firehose:

aws firehose create-delivery-stream --delivery-stream-name 'my-delivery-stream' --s3-destination-configuration RoleARN='arn:aws:iam::111111111111:role/FirehosetoS3Role',BucketARN='arn:aws:s3:::my-bucket' --region us-east-1

Replace "RoleARN" and "BucketARN" with the role and bucket Amazon Resource Names (ARNs) that you created.

Note: When Kinesis Data Firehose delivers objects to Amazon S3, it automatically adds a UTC time prefix in the format yyyy/MM/dd/HH. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a forward slash (/), it appears as a folder in your S3 bucket.
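For example, a sketch of the same create-delivery-stream call with a custom prefix added (shown for illustration only; you would include the prefix when you first create the stream, and the value vpc-flow-logs/ is just an example):

aws firehose create-delivery-stream --delivery-stream-name 'my-delivery-stream' --extended-s3-destination-configuration RoleARN='arn:aws:iam::111111111111:role/FirehosetoS3Role',BucketARN='arn:aws:s3:::my-bucket',Prefix='vpc-flow-logs/' --region us-east-1

With this configuration, objects are delivered under vpc-flow-logs/yyyy/MM/dd/HH/ in the bucket.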

7.    Use the describe-delivery-stream command to check the DeliveryStreamDescription.DeliveryStreamStatus property:

aws firehose describe-delivery-stream --delivery-stream-name "my-delivery-stream" --region us-east-1

Check the describe-delivery-stream command output to confirm that the stream is active:

{
    "DeliveryStreamDescription": {
        "DeliveryStreamType": "DirectPut", 
        "HasMoreDestinations": false, 
        "DeliveryStreamEncryptionConfiguration": {
            "Status": "DISABLED"
        }, 
        "VersionId": "1", 
        "CreateTimestamp": 1604484348.804, 
        "DeliveryStreamARN": "arn:aws:firehose:us-east-1:707800713736:deliverystream/my-delivery-stream", 
        "DeliveryStreamStatus": "ACTIVE", 
        "DeliveryStreamName": "my-delivery-stream", 
        "Destinations": [
            {
                "DestinationId": "destinationId-000000000001", 
                "ExtendedS3DestinationDescription": {
                    "RoleARN": "arn:aws:iam::111111111111:role/FirehosetoS3Role2test", 
                    "BufferingHints": {
                        "IntervalInSeconds": 300, 
                        "SizeInMBs": 5
                    }, 
                    "EncryptionConfiguration": {
                        "NoEncryptionConfig": "NoEncryption"
                    }, 
                    "CompressionFormat": "UNCOMPRESSED", 
                    "S3BackupMode": "Disabled", 
                    "CloudWatchLoggingOptions": {
                        "Enabled": false
                    }, 
                    "BucketARN": "arn:aws:s3:::kirubha-knowledge-article-test"
                }, 
                "S3DestinationDescription": {
                    "RoleARN": "arn:aws:iam::111111111111:role/FirehosetoS3Role2test", 
                    "BufferingHints": {
                        "IntervalInSeconds": 300, 
                        "SizeInMBs": 5
                    }, 
                    "EncryptionConfiguration": {
                        "NoEncryptionConfig": "NoEncryption"
                    }, 
                    "CompressionFormat": "UNCOMPRESSED", 
                    "CloudWatchLoggingOptions": {
                        "Enabled": false
                    }, 
                    "BucketARN": "arn:aws:s3:::my-bucket"
                }
            }
        ]
    }
}

Note: You'll need to use the DeliveryStreamDescription.DeliveryStreamARN value in a later step.
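To extract only the ARN, you can add a --query filter to the same describe-delivery-stream command:

aws firehose describe-delivery-stream --delivery-stream-name "my-delivery-stream" --query 'DeliveryStreamDescription.DeliveryStreamARN' --output text --region us-east-1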

8.    Create a trust policy (TrustPolicyForCWL.json) that grants CloudWatch Logs the permission to assume an IAM role in the destination account. Make sure to list the Regions that the logs are pushed from:

{
  "Statement": {
    "Effect": "Allow",
    "Principal": {
      "Service": [
        "logs.us-east-1.amazonaws.com",
        "logs.us-east-2.amazonaws.com"
      ]
    },
    "Action": "sts:AssumeRole"
  }
}

9.    Use the create-role command to create your IAM role and specify the trust policy file:

aws iam create-role \
    --role-name CWLtoKinesisFirehoseRole \
    --assume-role-policy-document file://~/TrustPolicyForCWL.json

Note: You'll need to use the returned Role_Arn value in a later step.

10.    Create a permissions policy (PermissionsForCWL.json) to define which actions CloudWatch Logs can perform in your account:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:*"],
      "Resource": ["arn:aws:firehose:us-east-1:111111111111:*"]
    },
    {
      "Effect": "Allow",
      "Action": ["iam:PassRole"],
      "Resource": ["arn:aws:iam::111111111111:role/CWLtoKinesisFirehoseRole"]
    }
  ]
}

In the Resource elements, you can use the DeliveryStreamDescription.DeliveryStreamARN value from step 7 and the Role_Arn value from step 9 to scope the policy to the specific stream and role.
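For example, a sketch of the first statement scoped down to the specific delivery stream from step 6 (the firehose:PutRecord and firehose:PutRecordBatch actions shown here are a common minimal choice; keep "firehose:*" if you're unsure):

{
  "Effect": "Allow",
  "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
  "Resource": ["arn:aws:firehose:us-east-1:111111111111:deliverystream/my-delivery-stream"]
}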

11.    Associate the permissions policy with the role using the put-role-policy command:

aws iam put-role-policy --role-name CWLtoKinesisFirehoseRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json

12.    Create a destination in the target account. The source account uses this destination's ARN to create the subscription filter:

aws logs put-destination --destination-name "myDestination" --target-arn "arn:aws:firehose:us-east-1:111111111111:deliverystream/my-delivery-stream" --role-arn "arn:aws:iam::111111111111:role/CWLtoKinesisFirehoseRole" --region us-east-2

Note: The destination can point to a delivery stream in any Region where Kinesis Data Firehose is supported. However, you must create the destination itself in the same Region as the log source (us-east-2 in this example).

13.    Create an access policy for the Amazon CloudWatch destination:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "222222222222"
      },
      "Action": "logs:PutSubscriptionFilter",
      "Resource": "arn:aws:logs:us-east-2:111111111111:destination:myDestination"
    }
  ]
}

Replace "222222222222" with the AWS source account where your Amazon Virtual Private Cloud (Amazon VPC) is located.

14.    Associate your access policy with the Amazon CloudWatch destination:

aws logs put-destination-policy --destination-name "myDestination" --access-policy file://~/AccessPolicy.json --region us-east-2

15.    Verify your destination by running the following command:

aws logs describe-destinations --region us-east-2
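To return only the destination that you created, you can add a name prefix and a query filter (both are standard describe-destinations options):

aws logs describe-destinations --destination-name-prefix "myDestination" --query 'destinations[0].arn' --output text --region us-east-2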

Setting up the source account

Note: You must be signed in as an IAM administrator or the root user of the source account.

1.    Create a trust policy (TrustPolicyForVPCFlowLogs.json) that grants VPC Flow Logs the permissions to send data to the CloudWatch Logs log group:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "vpc-flow-logs.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

2.    Use the following command to create the IAM role and specify the trust policy file you created:

aws iam create-role --role-name PublishFlowLogs --assume-role-policy-document file://~/TrustPolicyForVPCFlowLogs.json

Note: You'll need to use the returned ARN value to be passed on to VPC Flow Logs in a later step.

3.    Create a permissions policy (PermissionsForVPCFlowLogs.json) to define which actions VPC Flow Logs can perform in the source account:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

4.    Associate the permissions policy with the IAM role by running the following command:

aws iam put-role-policy --role-name PublishFlowLogs --policy-name Permissions-Policy-For-VPCFlowLogs --policy-document file://~/PermissionsForVPCFlowLogs.json

5.   Create a CloudWatch Logs log group that will be used to configure the destination for your VPC Flow Logs:

aws logs create-log-group --log-group-name vpc-flow-logs --region us-east-2
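To confirm that the log group exists, you can list it by prefix:

aws logs describe-log-groups --log-group-name-prefix "vpc-flow-logs" --region us-east-2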

6.    Enable VPC Flow Logs by running the following command:

aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-12345678 --traffic-type ALL --log-group-name vpc-flow-logs --deliver-logs-permission-arn arn:aws:iam::222222222222:role/PublishFlowLogs --region us-east-2

Replace the --resource-ids and --deliver-logs-permission-arn placeholder values with your VPC ID and VPC Flow Logs role.
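To verify that the flow log is active, you can run the standard describe-flow-logs command, filtered to your VPC:

aws ec2 describe-flow-logs --filter "Name=resource-id,Values=vpc-12345678" --region us-east-2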

7.    Subscribe your CloudWatch Logs log group to the destination that you created in the destination account:

aws logs put-subscription-filter --log-group-name "vpc-flow-logs" --filter-name "AllTraffic" --filter-pattern "" --destination-arn "arn:aws:logs:us-east-2:111111111111:destination:myDestination" --region us-east-2

Update the --destination-arn value and replace "111111111111" with the destination account number.

8.    Check your S3 bucket to confirm that the logs have been published.
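For example, one way to list delivered objects from the destination account is:

aws s3 ls s3://my-bucket/ --recursive

Objects appear under the yyyy/MM/dd/HH time prefix, plus any custom prefix that you configured.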

