Audit AWS service events with Amazon EventBridge and Amazon Kinesis Data Firehose
Amazon EventBridge is a serverless event bus that makes it easy to build event-driven applications at scale using events generated from your applications, integrated software as a service (SaaS) applications, and AWS services. Many AWS services generate EventBridge events. When an AWS service in your account emits an event, it goes to your account’s default event bus.
The following are a few event examples:
- Amazon EC2 Auto Scaling events
- Amazon CloudWatch alarm status change events
- Events from AWS developer tools such as AWS CodeCommit and AWS CodeDeploy
- AWS Step Functions state change events
- Events from AWS analytics services such as AWS Glue, Amazon Redshift, and Amazon EMR
By default, these AWS service-generated events are transient and therefore not retained. This post shows how you can forward AWS service-generated events or custom events to Amazon Simple Storage Service (Amazon S3) for long-term storage, analysis, and auditing purposes using EventBridge rules and Amazon Kinesis Data Firehose.
In this post, we provide a working example of AWS service-generated events ingested to Amazon S3. To make sure we have some service events available on the default event bus, we use Parameter Store, a capability of AWS Systems Manager, to manually store new parameters. This action generates a new event, which is ingested by the following pipeline.
The pipeline includes the following steps:
- AWS service-generated events (for example, a new parameter created in Parameter Store) go to the default event bus in EventBridge.
- The EventBridge rule matches all events and forwards them to Kinesis Data Firehose.
- Kinesis Data Firehose delivers events to the S3 bucket partitioned by detail-type and receipt time using its dynamic partitioning capability.
- The S3 bucket stores the delivered events, and their respective event schema is registered to the AWS Glue Data Catalog using an AWS Glue crawler.
- You query events using Amazon Athena.
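The rule-to-Firehose wiring in the pipeline above can be sketched with plain data structures. This is a hypothetical sketch, not the template's exact configuration; the account ID, Region, and stream name are placeholders, and in practice you would pass these payloads to the EventBridge `put_rule` and `put_targets` APIs (for example, via boto3):

```python
import json

# Placeholder identifiers -- substitute your own account and Region.
ACCOUNT_ID = "123456789012"
REGION = "us-east-1"

# An event pattern that matches every event this account emits on the
# default event bus (the stack uses a similar match-all rule).
event_pattern = {"account": [ACCOUNT_ID]}

# The rule's target: the Firehose delivery stream that writes to S3.
# The stream name here is illustrative.
target = {
    "Id": "firehose-target",
    "Arn": f"arn:aws:firehose:{REGION}:{ACCOUNT_ID}:deliverystream/event-audit-stream",
}

print(json.dumps(event_pattern))
```

Matching on the account ID is a common way to express "all events" because every event on the default bus carries your account number.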
Deploy resources using AWS CloudFormation
We use AWS CloudFormation templates to create all the necessary resources for the ingestion pipeline. This removes opportunities for manual error, increases efficiency, and provides consistent configurations over time. The template is also available on GitHub.
Complete the following steps:
- Choose Launch Stack to deploy the CloudFormation template in your AWS account.
- Acknowledge that the template may create AWS Identity and Access Management (IAM) resources.
- Choose Create stack.
The template takes about 10 minutes to complete and creates the following resources in your AWS account:
- An S3 bucket to store event data.
- A Firehose delivery stream with dynamic partitioning configuration. Dynamic partitioning enables you to continuously partition streaming data in Kinesis Data Firehose by using keys within the data (for example, transaction_id) and then deliver the data grouped by these keys into corresponding S3 prefixes.
- An EventBridge rule that forwards all events from the default event bus to Kinesis Data Firehose.
- An AWS Glue crawler that references the path to the event data in the S3 bucket. The crawler inspects the data landed in Amazon S3 and registers tables in the AWS Glue Data Catalog according to the discovered schema.
- Athena named queries for you to query the data processed by this example.
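Dynamic partitioning extracts partition keys from each record (typically with JQ expressions) and substitutes them into the delivery prefix. The following is a rough Python approximation of the idea for a detail-type and receipt-date prefix; the key names and prefix layout are illustrative, not the template's exact configuration:

```python
from datetime import datetime, timezone

def s3_prefix_for(event: dict) -> str:
    """Approximate Firehose dynamic partitioning: derive an S3 prefix
    from the event's detail-type and the time it was received."""
    # Firehose would extract this key with a JQ expression such as
    # '.["detail-type"]'; spaces are replaced to keep prefixes tidy.
    detail_type = event["detail-type"].replace(" ", "_")
    received = datetime.now(timezone.utc)
    return f"{detail_type}/{received:%Y/%m/%d}/"

event = {"detail-type": "Parameter Store Change", "source": "aws.ssm"}
print(s3_prefix_for(event))  # e.g. Parameter_Store_Change/2024/05/01/
```

Partitioning by detail-type keeps each event family under its own prefix, which is what lets the crawler register one table per event schema later on.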
Trigger a service event
After you create the CloudFormation stack, you trigger a service event.
- On the AWS CloudFormation console, navigate to the Outputs tab for the stack.
- Choose the link for the key CreateParameter.
You’re redirected to the Systems Manager console to create a new parameter.
- For Name, enter a name for the parameter.
- For Value, enter a test value of your choice.
- Leave everything else as default and choose Create parameter.
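The console steps above amount to a single `put_parameter` call. The following sketch shows the request shape; the parameter name and value are examples only, and the boto3 call itself is left commented out because it needs AWS credentials:

```python
# Request body for SSM put_parameter -- name and value are examples.
put_parameter_request = {
    "Name": "/blog/test-parameter",  # hypothetical parameter name
    "Value": "test-value",           # any test value works
    "Type": "String",
}

# With credentials configured, you would send it like this:
# import boto3
# boto3.client("ssm").put_parameter(**put_parameter_request)
print(put_parameter_request["Name"])
```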
This step saves the new Systems Manager parameter and pushes a parameter-created event to the default EventBridge event bus.
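A Parameter Store change event on the default bus looks roughly like the following. This is a representative sketch rather than captured output; the event ID, account number, Region, timestamp, and parameter name are placeholders:

```python
import json

# Representative Parameter Store "Create" event -- placeholder values.
sample_event = {
    "version": "0",
    "id": "00000000-0000-0000-0000-000000000000",
    "detail-type": "Parameter Store Change",
    "source": "aws.ssm",
    "account": "123456789012",
    "time": "2024-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:ssm:us-east-1:123456789012:parameter/blog/test-parameter"
    ],
    "detail": {
        "name": "/blog/test-parameter",
        "type": "String",
        "operation": "Create",
    },
}

print(json.dumps(sample_event, indent=2))
```

The `detail-type` field is what the Firehose dynamic partitioning keys on, and the `detail` block is where the parameter name and operation live for later querying.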
Discover the event schema
After the event is triggered by saving the parameter, wait at least 2 minutes for the event to be ingested via Kinesis Data Firehose to the S3 bucket. Now complete the following steps to run an AWS Glue crawler to discover and register the event schema in the Data Catalog:
- On the AWS Glue console, choose Crawlers in the navigation pane.
- Select the crawler with the name starting with S3EventDataCrawler.
- Choose Run crawler.
This step runs the crawler, which takes about 2 minutes to complete. The crawler discovers the schema from all events and registers it as tables in the Data Catalog.
Query the event data
When the crawler run is complete, you can start querying the event data. To query the event, complete the following steps:
- On the AWS CloudFormation console, navigate to the Outputs tab for your stack.
- Choose the link for the key AthenaQueries.
You’re redirected to the Saved queries tab on the Athena console. If you’re running Athena queries for the first time, set up your S3 output bucket. For instructions, see Working with Query Results, Recent Queries, and Output Files.
- Search for Blog to find the queries created by this post.
- Choose the query Blog – Query Parameter Store Events.
The query opens on the Athena console.
- Choose Run query.
You can update the query to search the event you created earlier.
- Apply a WHERE clause that filters on the parameter name you selected earlier.
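Assuming the crawler registered a table for the Parameter Store events, the filter could look like the query below. The table and column names here are illustrative; take the real ones from the saved Blog – Query Parameter Store Events query on the Athena console:

```python
# Hypothetical table and column names -- check the saved Athena query
# for the names the crawler actually registered.
parameter_name = "/blog/test-parameter"  # the name you entered earlier

query = f"""
SELECT time, region, detail.operation, detail.name
FROM parameter_store_change
WHERE detail.name = '{parameter_name}'
ORDER BY time DESC
""".strip()

print(query)
```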
You can also choose the link next to the key CuratedBucket from the CloudFormation stack outputs to see paths and the objects loaded to the S3 bucket from other event sources. Similarly, you can query them via Athena.
Clean up
Complete the following steps to delete your resources and stop incurring costs:
- On the AWS CloudFormation console, select the stack you created and choose Delete.
- On the Amazon S3 console, find the S3 bucket created by the stack.
- Select the bucket and choose Empty.
- Enter permanently delete to confirm the choice.
- Select the bucket again and choose Delete.
- Confirm the action by entering the bucket name when prompted.
- On the Systems Manager console, go to the parameter store and delete the parameter you created earlier.
Conclusion
This post demonstrated how to use an EventBridge rule to redirect AWS service-generated events or custom events to Amazon S3 with Kinesis Data Firehose for long-term storage, analysis, querying, and audit purposes.
About the Author
Anand Shah is a Big Data Prototyping Solution Architect at AWS. He works with AWS customers and their engineering teams to build prototypes using AWS analytics services and purpose-built databases. Anand helps customers solve their most challenging problems with art-of-the-possible technology. He enjoys the beach in his leisure time.