AWS Big Data Blog
Integrating Datadog data with AWS using Amazon AppFlow for intelligent monitoring
Infrastructure and operations teams are often challenged to get a full view of their IT environments for monitoring and troubleshooting. New monitoring technologies are needed to provide an integrated view of all the components of an IT infrastructure and application system.
Datadog provides intelligent application and service monitoring by bringing together data from servers, databases, containers, and third-party services in the form of a software as a service (SaaS) offering. It gives operations and development professionals the ability to measure application and infrastructure performance, visualize metrics on a unified dashboard, and create alerts and notifications.
Amazon AppFlow is a fully managed service that provides integration capabilities by enabling you to transfer data between SaaS applications like Datadog, Salesforce, Marketo, and Slack and AWS services like Amazon Simple Storage Service (Amazon S3) and Amazon Redshift. It provides capabilities to transform, filter, and validate data to generate enriched and usable data in a few easy steps.
In this post, I walk you through the process of extracting log data from Datadog using Amazon AppFlow, storing it in Amazon S3, and querying it with Amazon Athena.
Solution overview
The following diagram shows the flow of our solution.
The Datadog Agent is lightweight software that can be installed on many different platforms, either directly or as a containerized version. It collects events and metrics from hosts and sends them to Datadog. Amazon AppFlow extracts the log data from Datadog and stores it in Amazon S3, which is then queried using Athena.
To implement the solution, you complete the following steps:
- Install and configure the Datadog Agent.
- Create a new Datadog application key.
- Create an Amazon AppFlow connection for Datadog.
- Create a flow in Amazon AppFlow.
- Run the flow and query the data.
Prerequisites
The walkthrough requires the following:
- An AWS account
- A Datadog account
Installing and configuring the Datadog Agent
The Datadog Agent is lightweight software installed on your hosts. With additional setup, the Agent can report live processes, logs, and traces. The Agent needs an API key, which is used to associate the Agent’s data with your organization. Complete the following steps to install and configure the Datadog Agent:
- Create a Datadog account if you haven’t already.
- Log in to your account.
- Under Integrations, choose APIs.
- Copy the API key.
- Download the Datadog Agent software for the selected platform.
- Install the Agent on the hosts using the API key you copied.
Collecting logs is disabled by default in the Datadog Agent. To enable Agent log collection and configure custom log collection, perform the following steps on your host:
- Update the Datadog Agent’s main configuration file (datadog.yaml) with the first snippet shown after this list. On Windows, this file is located in C:\ProgramData\Datadog.
- Create custom log collection by customizing the conf.yaml file. For example, on Windows this file is in the path C:\ProgramData\Datadog\conf.d\win32_event_log.d. The second snippet after this list is a sample conf.yaml entry that enables collection of Windows security events.
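The following is a minimal sketch of the datadog.yaml change; logs_enabled is the setting that turns on Agent log collection, and the rest of your configuration file can stay unchanged.

```yaml
# datadog.yaml -- turn on the Agent's log collection feature
logs_enabled: true
```

The next snippet sketches a conf.yaml entry for the Windows event log integration that collects Security channel events as logs. The source and service values are illustrative tags (the service name windowsOS matches the filter example used later in this post); adjust them to your own conventions.

```yaml
# conf.d/win32_event_log.d/conf.yaml -- forward Windows Security events as logs
logs:
  - type: windows_event
    channel_path: Security
    source: Security
    service: windowsOS
```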
Getting the Datadog application key
Application keys, in conjunction with your organization’s API key, give you full access to Datadog’s programmatic API. Application keys are associated with the user account that created them. The application key is used to log all requests made to the API. Get your application key with the following steps:
- Log in to your Datadog account.
- Under Integrations, choose APIs.
- Expand Application Keys.
- For Application key name, enter a name.
- Choose Create Application key.
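To confirm the keys work before wiring them into Amazon AppFlow, you can call the Datadog API directly. The sketch below assumes the Python requests library and placeholder key values; it sends the keys as the DD-API-KEY and DD-APPLICATION-KEY headers that Datadog’s API expects and checks the API key against the key-validation endpoint.

```python
import requests

# Placeholder values -- substitute the keys you created in the steps above.
DD_API_KEY = "<your-api-key>"
DD_APP_KEY = "<your-application-key>"

# Datadog authenticates API requests with these two headers.
headers = {
    "DD-API-KEY": DD_API_KEY,
    "DD-APPLICATION-KEY": DD_APP_KEY,
}

# Validate the API key (this particular endpoint needs only DD-API-KEY).
resp = requests.get("https://api.datadoghq.com/api/v1/validate", headers=headers)
print(resp.status_code, resp.json())
```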
Creating an Amazon AppFlow connection for Datadog
A connection defines the source or destination to use in a flow. To create a new connection for Datadog, complete the following steps:
- On the Amazon AppFlow console, in the navigation pane, choose Connections.
- For Connectors, choose Datadog.
- Choose Create Connection.
- For API key and Application Key, enter the keys you obtained in the previous steps.
- For Connection Name, enter a name; for example, myappflowconnection.
- Choose Connect.
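If you prefer to script this step, the AWS SDK exposes the same operation. The following boto3 sketch is illustrative rather than definitive: the keys are placeholders, and the instanceUrl value is an assumption that should be set to the Datadog site your account uses.

```python
import boto3

appflow = boto3.client("appflow")

# Placeholder keys and an assumed Datadog site URL -- replace with your own values.
response = appflow.create_connector_profile(
    connectorProfileName="myappflowconnection",
    connectorType="Datadog",
    connectionMode="Public",
    connectorProfileConfig={
        "connectorProfileProperties": {
            "Datadog": {"instanceUrl": "https://api.datadoghq.com"}
        },
        "connectorProfileCredentials": {
            "Datadog": {
                "apiKey": "<your-api-key>",
                "applicationKey": "<your-application-key>",
            }
        },
    },
)
print(response["connectorProfileArn"])
```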
Creating a flow in Amazon AppFlow
After you create the data connection, you can create a flow that uses the connection and defines the destination, data mapping, transformation, and filters.
Creating an S3 bucket
Create an S3 bucket as your Amazon AppFlow transfer destination.
- On the Amazon S3 console, choose Create bucket.
- Enter a name for your bucket; for example, mydatadoglogbucket.
- Ensure that Block all public access is selected.
- Enable bucket versioning and encryption (optional).
- Choose Create bucket.
- Enable Amazon S3 server access logging (optional).
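The same bucket setup can be scripted if you prefer. This is a minimal boto3 sketch that assumes the example bucket name and the us-east-1 Region (other Regions need a CreateBucketConfiguration); the versioning and encryption calls correspond to the optional steps above.

```python
import boto3

s3 = boto3.client("s3")  # assumes us-east-1; other Regions need CreateBucketConfiguration
bucket = "mydatadoglogbucket"  # example name from this post; bucket names must be globally unique

s3.create_bucket(Bucket=bucket)

# Block all public access (matches the console setting above).
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Optional: enable versioning and default encryption.
s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```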
Configuring the flow source
After you create the Datadog connection and the S3 bucket, complete the following steps to create a flow:
- On the Amazon AppFlow console, in the navigation pane, choose Flows.
- Choose Create flow.
- For Flow name, enter a name for your flow; for example, mydatadogflow.
- For Source name, choose Datadog.
- For Choose Datadog connection, choose the connection created earlier.
- For Choose Datadog object, choose Logs.
Choosing a destination
In the Destination details section, provide the following information:
- For Destination name, choose Amazon S3.
- For Bucket details, choose the name of the S3 bucket created earlier.
This step creates a folder within the bucket, named after the flow you specified, to store the logs.
Additional settings
You can provide additional settings for data format (JSON, CSV, Parquet), data transfer preference, filename preference, flow trigger, and transfer mode. Leave all settings at their defaults:
- For Data format preference, choose JSON format.
- For Data transfer preference, choose No aggregation.
- For Filename preference, choose No timestamp.
- For Folder structure preference, choose No timestamped folder.
Adding a flow trigger
Flows can be run on a schedule, based on an event or on demand. For this post, we choose Run on demand.
Mapping data fields
You can map manually or using a CSV file. This determines how data is transferred from source to destination. You can apply transformations like concatenation, masking, and truncation to the mappings.
- In the Map data fields section, for Mapping method, choose Manually map fields.
- For Source field name, choose Map all fields directly.
- Choose Next.
Validation
You can add validation to perform certain actions based on conditions on field values.
- In the Validations section, for Field name, choose Content.
- For Condition, choose Values are missing or null.
- For Action, choose Ignore record.
Filters
Filters specify which records to transfer. You can add multiple filters, each with its own criteria. For the Datadog data source, it’s mandatory to specify filters for Date_Range and Query. The formats for specifying the filter Query for metrics and for logs are different.
- In the Add filters section, for Field name, choose Date_Range.
- For Condition, choose is between.
- For Criterion 1 and Criterion 2, enter start and end dates for log collection.
- Choose Add filter.
- For your second filter, for Field name, choose Query.
- For Condition, enter host:<yourhostname> AND service:(windowsOS OR LinuxOS).
- Choose Save.
The service names specified in the filter should have Datadog logs enabled (refer to the earlier step when you installed and configured the Datadog Agent).
The following are some examples of the filter Query for metrics:
load.1{*} by {host}
avg:system.cpu.idle{*}
avg:system.cpu.system{*}
avg:system.cpu.user{*}
avg:system.cpu.guest{*}
avg:system.cpu.user{host:yourhostname}
The following are some examples of the filter Query for logs:
service:servicename
host:myhostname
host:hostname1 AND service:(servicename1 OR servicename2)
Running the flow and querying the data
If a flow is based on a trigger, you can activate or deactivate it. If it’s on demand, you must run it each time data needs to be transferred. When you run the flow, the logs or metrics are pulled into files residing in Amazon S3. In this example, the data is in the form of nested JSON. Use AWS Glue and Athena to create a schema and query the log data.
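Because the flow in this post runs on demand, each transfer must be started explicitly. You can choose Run flow on the console, or start it programmatically, as in this boto3 sketch that reuses the example flow name from earlier.

```python
import boto3

appflow = boto3.client("appflow")

# Start an on-demand run of the flow created earlier in this post.
response = appflow.start_flow(flowName="mydatadogflow")
print(response["flowStatus"], response.get("executionId"))
```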
Querying data with Athena
When the Datadog data is in AWS, there are a host of possibilities to store, process, integrate with other data sources, and perform advanced analytics. One such method is to use Athena to query the data directly from Amazon S3.
- On the AWS Glue console, in the navigation pane, choose Databases.
- Choose Add database.
- For Database name, enter a name such as mydatadoglogdb.
- Choose Create.
- In the navigation pane, choose Crawlers.
- Choose Add Crawler.
- For Crawler name, enter a name, such as mylogcrawler.
- Choose Next.
- For Crawler source type, select Data stores.
- Choose Next.
- In the Add a data store section, choose S3 for the data store.
- Enter the path to the S3 folder that has the log files; for example, s3://mydatadoglogbucket/logfolder/.
- In the Choose an IAM role section, select Create an IAM role and provide a name.
- For Frequency select Run on demand.
- In the Configure the crawler’s output section, for Database, select the database created previously.
- Choose Next.
- Review and choose Finish.
- When the crawler’s status changes to Active, select it and choose Run Crawler.
When the crawler finishes running, it creates the tables and populates them with data based on the schema it infers from the JSON log files.
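If you want to automate this part, the database and crawler can also be created with boto3, as in the sketch below. It reuses the example names from this walkthrough; the IAM role ARN is a placeholder for a role that has the AWSGlueServiceRole managed policy plus read access to the log bucket.

```python
import boto3

glue = boto3.client("glue")

# Placeholder role ARN -- the role needs the AWSGlueServiceRole policy and read access to the bucket.
crawler_role_arn = "arn:aws:iam::123456789012:role/MyGlueCrawlerRole"

glue.create_database(DatabaseInput={"Name": "mydatadoglogdb"})

glue.create_crawler(
    Name="mylogcrawler",
    Role=crawler_role_arn,
    DatabaseName="mydatadoglogdb",
    Targets={"S3Targets": [{"Path": "s3://mydatadoglogbucket/logfolder/"}]},
)

# Run the crawler on demand; it infers the schema from the JSON log files.
glue.start_crawler(Name="mylogcrawler")
```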
- On the Athena console, choose Settings.
- Select an S3 bucket and folder where Athena results are stored.
- In the Athena query window, enter your query (a sample is shown after this list).
- Choose Run Query.
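The following is a sketch of such a query. The table name (which the crawler derives from the S3 folder, logfolder in this example) and the nested column path are assumptions; adjust them to match the schema your crawler actually created.

```sql
-- Table and column names are illustrative; check the table created in mydatadoglogdb.
SELECT *
FROM mydatadoglogdb.logfolder
WHERE content.attributes.level = 'Information';
```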
This sample query gets all the log entries where the level is Information. We traverse the nested JSON object in the Athena query simply with dot notation.
Summary
In this post, I demonstrated how to bring Datadog data into AWS. Doing so opens up a host of opportunities to use the tools available in AWS to drive advanced analytics and monitoring while integrating with data from other sources.
With Amazon AppFlow, you can integrate applications in a few minutes, transfer data at massive scale, and enrich the data as it flows, using mapping, merging, masking, filtering, and validation. For more information about integrating SaaS applications and AWS, see Amazon AppFlow.
About the Author
Gopalakrishnan Ramaswamy is a Solutions Architect at AWS based out of India with an extensive background in databases, analytics, and machine learning. He helps customers of all sizes solve complex challenges by providing solutions using AWS products and services. Outside of work, he likes the outdoors, physical activities, and spending time with friends and family.