AWS Compute Blog
Orchestrating AWS Glue crawlers using AWS Step Functions
This blog post is written by Justin Callison, General Manager, AWS Workflow.
Organizations generate terabytes of data every day in a variety of semi-structured formats. AWS Glue and Amazon Athena can give you a simpler and more cost-effective way to analyze this data with no infrastructure to manage. AWS Glue crawlers identify the schema of your data and manage the metadata required to analyze the data in place, without the need to transform the data and load it into a data warehouse.
The timing of when your crawlers run and complete is important. You must ensure that a crawler runs after your data has updated and before you query it with Athena or analyze it with an AWS Glue job. Otherwise, your analysis may experience errors or return incomplete results.
In this blog post, you learn how to use AWS Step Functions, a low-code visual workflow service that integrates with over 220 AWS services, to orchestrate your crawlers: you control when they start, confirm their completion, and combine them into end-to-end, serverless data processing workflows.
Using Step Functions to orchestrate multiple AWS Glue crawlers provides a number of benefits compared to implementing a solution directly with code. First, the workflow provides an instant visual understanding of the application, and of any errors that occur during execution. Second, Step Functions’ ability to run nested workflows inside a Map state helps to decouple and reuse application components with native array iteration. Finally, the Step Functions Wait state lets the workflow periodically poll the status of a crawl job without incurring additional cost for idle wait time.
Deploying the example
With this example, you create three datasets in Amazon S3, then use Step Functions to orchestrate AWS Glue crawlers to analyze the datasets and make them available to query using Athena.
You deploy the example with AWS CloudFormation using the following steps:
1. Download the template.yaml file from here.
2. Log in to the AWS Management Console and go to AWS CloudFormation.
3. Navigate to Stacks -> Create stack and select With new resources (standard).
4. Select Template is ready and Upload a template file, then Choose File, select the template.yaml file that you downloaded in Step 1, and choose Next.
5. Enter a stack name, such as glue-stepfunctions-demo, and choose Next.
6. Choose Next, check the acknowledgement boxes in the Capabilities and transforms section, then choose Create stack.
7. After deployment, the status updates to CREATE_COMPLETE.
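If you prefer to script the deployment instead of using the console, a minimal sketch with the AWS SDK for Python (boto3) might look like the following. The local template path is an assumption, and the Capabilities values correspond to the acknowledgement boxes in the console.

```python
# Sketch: deploy the example stack with boto3 instead of the console.
# Assumes template.yaml has been downloaded to the working directory.
import boto3

cloudformation = boto3.client("cloudformation")

with open("template.yaml") as f:
    template_body = f.read()

cloudformation.create_stack(
    StackName="glue-stepfunctions-demo",
    TemplateBody=template_body,
    # Equivalent to checking the acknowledgement boxes in the console.
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
)

# Block until the stack reaches CREATE_COMPLETE.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="glue-stepfunctions-demo")
```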
Create your datasets
Navigate to Step Functions in the AWS Management Console and select the create-dataset state machine from the list. This state machine uses Express Workflows and the Parallel state to build three datasets concurrently in S3. The first two datasets include information by user and by location, respectively, and include files per day over the five-year period from 2016 to 2020. The third dataset is a simpler, all-time summary of data by location.
To create the datasets, choose Start execution from the toolbar for the create-dataset state machine, then choose Start execution again in the dialog box. This runs the state machine and creates the datasets in S3.
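You can also start the state machine programmatically. A minimal boto3 sketch follows; the state machine ARN is a placeholder that you would copy from the Step Functions console.

```python
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN: copy the real one from the Step Functions console.
state_machine_arn = (
    "arn:aws:states:us-east-1:111122223333:stateMachine:create-dataset"
)

# Start the Express Workflow; the datasets are written to S3
# as the execution runs.
sfn.start_execution(stateMachineArn=state_machine_arn)
```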
Navigate to the S3 console and view the glue-demo-databucket bucket created for this example. In this bucket, a folder named data contains three subfolders, each holding a dataset.
The all-time-location-summaries folder contains a set of JSON files, one for each location.
The daily-user-summaries and daily-location-summaries folders contain a nested folder structure with a level for each year, month, and date. In addition to making the data easier to navigate in the console, this folder structure provides hints that AWS Glue can use to partition the dataset and make it more efficient to query.
Crawling
You now use AWS Glue crawlers to analyze these datasets and make them available to query. Navigate to the AWS Glue console and select Crawlers to see the list of crawlers that you created when you deployed this example. Select the daily-user-summaries crawler to view its details, and note that the crawlers have tags assigned to indicate metadata such as the datatype of the data and whether the dataset is-partitioned.
Now, return to the Step Functions console and view the run-crawlers-with-tags state machine. This state machine uses AWS SDK service integrations to get a list of all crawlers matching the tag criteria you enter. It then uses the Map state and the optimized service integration for Step Functions to run the run-crawler state machine for each of the matching crawlers concurrently. The run-crawler state machine starts each crawler and monitors its status until the crawler completes. Once each of the individual crawlers has completed, the run-crawlers-with-tags state machine also completes.
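Conceptually, the run-crawler state machine implements the same start-then-poll loop you could write with the AWS SDK, except that a Wait state replaces the sleep and costs nothing while idle. A rough boto3 equivalent, for illustration only:

```python
import time
import boto3

glue = boto3.client("glue")

def run_crawler(name: str, poll_seconds: int = 30) -> None:
    """Start a crawler and block until it returns to the READY state."""
    glue.start_crawler(Name=name)
    while True:
        state = glue.get_crawler(Name=name)["Crawler"]["State"]
        if state == "READY":  # the crawl has finished
            return
        # The state machine uses a Wait state here instead of sleeping,
        # so it incurs no cost for the idle time.
        time.sleep(poll_seconds)

run_crawler("daily-user-summaries")
```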
To initiate the crawlers:
- Choose Start execution from the top of the page when viewing the run-crawlers-with-tags state machine.
- Provide the following as Input: {"tags": {"datatype": "json"}}
- Choose Start execution.
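The same tag filter is available through the SDK, which is what the state machine's AWS SDK integration calls. A hedged sketch, where the state machine ARN is a placeholder:

```python
import json
import boto3

glue = boto3.client("glue")
sfn = boto3.client("stepfunctions")

# ListCrawlers accepts a tag filter, mirroring the state machine's lookup.
matching = glue.list_crawlers(Tags={"datatype": "json"})["CrawlerNames"]
print("Crawlers to run:", matching)

# Or start the orchestration itself with the same input as the console.
sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:111122223333:"
                    "stateMachine:run-crawlers-with-tags",  # placeholder
    input=json.dumps({"tags": {"datatype": "json"}}),
)
```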
After 2-3 minutes, the execution finishes with a Succeeded status once all three crawlers have completed. During this time, you can navigate to the run-crawler state machine to view the individual, nested executions per crawler, or to the AWS Glue console to see the status of the crawlers.
Querying the data using Amazon Athena
Now, navigate to the Athena console, where you can see the database and tables created by your crawlers. Note that AWS Glue recognized the partitioning scheme and included fields for year, month, and date in addition to the user and usage fields for the data contained in the JSON files.
If you have not used Athena in this account before, you see a message instructing you to set a query result location. Choose View settings -> Manage -> Browse S3, select the athena-results bucket that you created when you deployed the example, and choose Save. Then return to the Editor to continue.
You can now run queries such as the following to calculate the total usage for all users over five years.
SELECT SUM(usage) all_time_usage FROM "daily_user_summaries"
You can also add filters, as shown in the following example, which limits results to those from 2016.
SELECT SUM(usage) all_time_usage FROM "daily_user_summaries" WHERE year = '2016'
Note that this second query scanned only 17% as much data (133 KB vs. 797 KB) and completed faster. This is because Athena used the partitioning information to avoid scanning the full dataset. While the differences in this example are small, for real-world datasets with terabytes of data, the cost and latency savings from partitioning can be substantial.
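You can run the same partition-pruned query with the Athena SDK and confirm the data-scanned figure programmatically. A sketch, assuming a placeholder database name and the athena-results bucket from the stack:

```python
import time
import boto3

athena = boto3.client("athena")

query_id = athena.start_query_execution(
    QueryString=(
        'SELECT SUM(usage) all_time_usage FROM "daily_user_summaries" '
        "WHERE year = '2016'"
    ),
    QueryExecutionContext={"Database": "glue-demo-database"},  # placeholder
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)["QueryExecutionId"]

# Poll until the query finishes, then inspect how much data was scanned.
while True:
    execution = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]
    if execution["Status"]["State"] in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

print(execution["Statistics"]["DataScannedInBytes"])
```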
The disadvantage of a partitioning scheme is that new folders are not included in query results until you add the new partitions. Re-running your crawler identifies and adds the new partitions, and using Step Functions to orchestrate these crawlers makes that task simpler.
Extending the example
You can use these example state machines as they are in your AWS accounts to manage your existing crawlers. You can use Amazon S3 event notifications with Amazon EventBridge to trigger crawlers based on data changes. With the optimized service integration for Amazon Athena, you can extend your workflows to execute queries against these crawled datasets. And you can use these examples to integrate crawler execution into your end-to-end data processing workflows, creating reliable, auditable workflows from ingestion through to analysis.
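As one example of extending the workflow, here is a hedged sketch of an EventBridge rule that starts run-crawlers-with-tags whenever an object lands in the data bucket. The bucket name, state machine ARN, and IAM role are placeholders, and the bucket must have EventBridge notifications enabled.

```python
import json
import boto3

events = boto3.client("events")

# Match S3 "Object Created" events from the data bucket.
events.put_rule(
    Name="run-crawlers-on-new-data",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["glue-demo-databucket"]}},  # placeholder
    }),
)

# Target the orchestration state machine; EventBridge needs a role
# that is allowed to call states:StartExecution.
events.put_targets(
    Rule="run-crawlers-on-new-data",
    Targets=[{
        "Id": "start-crawlers",
        "Arn": "arn:aws:states:us-east-1:111122223333:"
               "stateMachine:run-crawlers-with-tags",  # placeholder
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeToSfnRole",
        "Input": json.dumps({"tags": {"datatype": "json"}}),
    }],
)
```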
Conclusion
In this blog post, you learn how to use Step Functions to orchestrate AWS Glue crawlers. You deploy an example that generates three datasets, then uses Step Functions to start and coordinate crawler runs that analyze this data and make it available to query using Athena.
To learn more about Step Functions, visit Serverless Land.