Introducing AWS Glue Data Quality anomaly detection
Thousands of organizations build data integration pipelines to extract and transform data. They establish data quality rules to ensure the extracted data is of high quality for accurate business decisions. These rules commonly assess the data based on fixed criteria reflecting the current business state. However, when the business environment changes, data properties shift, rendering these fixed criteria outdated and causing poor data quality.
For example, a data engineer at a retail company established a rule that validates daily sales must exceed a 1-million-dollar threshold. After a few months, daily sales surpassed 2 million dollars, rendering the threshold obsolete. The data engineer couldn't update the rule to reflect the latest threshold because nothing notified them that it had become stale, and updating it would have required manual analysis. Later in the month, business users noticed a 25% drop in their sales. After hours of investigation, the data engineers discovered that an extract, transform, and load (ETL) pipeline responsible for extracting data from some stores had failed without generating errors. The rule with outdated thresholds continued to pass without detecting this anomaly.
Also, breaks or gaps that significantly deviate from a seasonal pattern can point to data quality issues. For instance, retail sales may be highest on weekends and during holiday seasons, while relatively low on weekdays. Divergence from this pattern may indicate data quality issues, such as missing data from a store, or shifts in business circumstances. Data quality rules with fixed criteria can't detect seasonal patterns, because doing so requires advanced algorithms that can learn from past patterns and capture seasonality to detect deviations. You need the ability to spot anomalies with ease, so you can proactively detect data quality issues and make confident business decisions.
To address these challenges, we are excited to announce the general availability of anomaly detection capabilities in AWS Glue Data Quality. In this post, we demonstrate how this feature works with an example. We provide an AWS CloudFormation template to deploy this setup and experiment with this feature.
For completeness and ease of navigation, you can explore all the following AWS Glue Data Quality blog posts. This will help you understand all the other capabilities of AWS Glue Data Quality, in addition to anomaly detection.
- Part 1: Getting started with AWS Glue Data Quality from the AWS Glue Data Catalog
- Part 2: Getting started with AWS Glue Data Quality for ETL Pipelines
- Part 3: Set up data quality rules across multiple datasets using AWS Glue Data Quality
- Part 4: Set up alerts and orchestrate data quality rules with AWS Glue Data Quality
- Part 5: Visualize data quality score and metrics generated by AWS Glue Data Quality
- Part 6: Measure performance of AWS Glue Data Quality for ETL pipelines
- Part 7: Get started with AWS Glue Data Quality dynamic rules for ETL pipelines
- Part 8: Introducing AWS Glue Data Quality anomaly detection
Solution overview
For our use case, a data engineer wants to measure and monitor the data quality of the New York taxi ride dataset. The data engineer knows a few rules up front, but wants to monitor critical columns and be notified about any anomalies in them. These columns include fare amount, where the data engineer wants to be notified about any major deviations. Another attribute is the number of rides, which varies across peak hours, mid-day hours, and night hours. Also, as the city grows, there will be a gradual increase in the overall number of rides. We use anomaly detection to help set up and maintain rules for this seasonality and growing trend.
We demonstrate this feature with the following steps:
- Deploy a CloudFormation template that will generate 7 days of NYC taxi data.
- Create an AWS Glue ETL job and configure the anomaly detection capability.
- Run the job for 6 days and explore how AWS Glue Data Quality learns from data statistics and detects anomalies.
Set up resources with AWS CloudFormation
This post includes a CloudFormation template for a quick setup. You can review and customize it to suit your needs. The template generates the following resources:
- An Amazon Simple Storage Service (Amazon S3) bucket (anomaly-detection-blog-<account-id>-<region>)
- An AWS Identity and Access Management (IAM) policy to associate with the S3 bucket (anomaly-detection-blog-<account-id>-<region>)
- An IAM role with AWS Glue run permission as well as read and write permission on the S3 bucket (anomaly_detection_blog_GlueServiceRole)
- An AWS Glue database to catalog the data (anomaly_detection_blog_db)
- An AWS Glue visual ETL job to generate sample data (anomaly_detection_blog_data_generator_job)
To create your resources, complete the following steps:
- Launch your CloudFormation stack in us-east-1.
- Keep all settings as default.
- Select I acknowledge that AWS CloudFormation might create IAM resources and choose Create stack.
- When the stack is complete, copy the AWS Glue script to the S3 bucket anomaly-detection-blog-<account-id>-<region>.
- Open AWS CloudShell.
- Run the following command, replacing account-id and region as appropriate:
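The exact source location depends on where the generator script is published, so treat the source path below as a placeholder and take the actual location from the stack's outputs:

```
# Source path is a placeholder -- substitute the published location of the
# data generator script; the destination is the bucket created by the stack
aws s3 cp s3://<source-bucket>/<path>/data_generator_script.py \
  s3://anomaly-detection-blog-<account-id>-<region>/scripts/
```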
Run the data generator job
As part of the CloudFormation template, a data generator AWS Glue job is provisioned in your AWS account. Complete the following steps to run the job:
- On the AWS Glue console, choose ETL jobs in the navigation pane.
- Choose the job anomaly_detection_blog_data_generator_job.
- Review the script on the Script tab.
- On the Job details tab, verify the job run parameters in the Advanced section:
- bucket_name – The S3 bucket name where you want the data to be generated.
- bucket_prefix – The prefix in the S3 bucket.
- gluecatalog_database_name – The database name in the AWS Glue Data Catalog that was created by the CloudFormation template.
- gluecatalog_table_name – The table name to be created in the Data Catalog in the database.
- Choose Run to run this job.
- On the Runs tab, monitor the job until the Run status column shows as Succeeded.
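Alternatively, you can start the same job from the AWS CLI in CloudShell. The following is a sketch using the parameters described above; the bucket_prefix value is a placeholder:

```
# Start the data generator job with explicit arguments (prefix is illustrative)
aws glue start-job-run \
  --job-name anomaly_detection_blog_data_generator_job \
  --arguments '{
    "--bucket_name": "anomaly-detection-blog-<account-id>-<region>",
    "--bucket_prefix": "<prefix>",
    "--gluecatalog_database_name": "anomaly_detection_blog_db",
    "--gluecatalog_table_name": "nyctaxi_raw"
  }'
```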
When the job is complete, it will have generated the NYC taxi dataset for the date range of May 1, 2024, to May 7, 2024, in the specified S3 bucket, and cataloged the table and partitions in the Data Catalog for year, month, day, and hour. This dataset contains 7 days of hourly rides that fluctuate between high and low on alternate days. For instance, on Monday there are approximately 1,400 rides, on Tuesday around 700 rides, and this pattern continues. Of the 7 days, the first 5 days of data are non-anomalous. However, on the sixth day, an anomaly occurs: the number of rows jumps to around 2,200, and fare_amount is set to an unusually high value of 95 for mid-day traffic.
Create an AWS Glue visual ETL job
Complete the following steps:
- On the AWS Glue console, create a new AWS Glue visual job named anomaly-detection-blog-visual.
- On the Job details tab, provide the IAM role created by the CloudFormation stack.
- On the Visual tab, add an S3 node for the data source.
- Provide the following parameters:
  - For Database, choose anomaly_detection_blog_db.
  - For Table, choose nyctaxi_raw.
  - For Partition predicate, enter year==2024 AND month==5 AND day==1.
- Add the Evaluate Data Quality transform and use the following rule for fare_amount:
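A wide-range rule in DQDL might look like the following; the bounds are illustrative, and anything generous enough to pass while statistics accumulate will do:

```
Rules = [
    ColumnValues "fare_amount" between 0 and 100
]
```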
Because we’re still trying to understand the statistics on this metric, we start with a wide range rule, and after a few runs, we will analyze the results and fine-tune as needed.
Next, we add two analyzers: one for RowCount and another for distinct values of pulocationid.
- On the Anomaly detection tab, choose Add analyzer.
- For Statistics, enter RowCount.
- Add a second analyzer.
- For Statistics, enter DistinctValuesCount, and for Columns, enter pulocationid.
Your final ruleset should look like the following code:
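A sketch of the complete ruleset in DQDL, with the same illustrative bounds:

```
Rules = [
    ColumnValues "fare_amount" between 0 and 100
]
Analyzers = [
    RowCount,
    DistinctValuesCount "pulocationid"
]
```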
- Save the job.
We have now generated a synthetic NYC taxi dataset and authored an AWS Glue visual ETL job to read from this dataset and perform analysis with one rule and two analyzers.
Run and evaluate the visual ETL job
Before we run the job, let's look at how anomaly detection works. In this example, we have configured one rule and two analyzers. Rules have thresholds that define what good looks like. Sometimes you might know the critical columns, but not their specific thresholds. Rules and analyzers gather data statistics, or data profiles. In this example, AWS Glue Data Quality will gather four statistics: a ColumnValues rule will gather two statistics (the minimum and maximum fare amount), and the two analyzers will gather two statistics. After gathering three data points from three runs, AWS Glue Data Quality will predict the fourth run, along with upper and lower bounds. It will then compare the predicted value with the actual value. When the actual value breaches the predicted upper or lower bound, it creates an anomaly. For example, if the first three runs recorded row counts of roughly 1,400, 700, and 1,400, a fourth-run value of 2,200 would breach the predicted upper bound and be flagged.
Let’s see this in action.
Run the job for 5 days and analyze results
Because the first 5 days of data are non-anomalous, they set a baseline with seasonality for training the model. Complete the following steps to run the job five times, once for each day's partition:
- Choose the S3 node on the Visual tab and go to its properties.
- Set the day field in the partition predicate to 1.
- Choose Run to run this job.
- Monitor the job on the Runs tab until its status shows Succeeded.
- Repeat these steps four more times, each time incrementing the day field in the partition predicate. Run the jobs at more or less regular intervals to get a clean graph that simulates an automated, scheduled pipeline.
- After five successful runs, go to the Data quality tab, where you should see the statistics gathered for fare_amount and RowCount.
The anomaly detection algorithm takes a minimum of three data points to learn and start predicting. After three runs, you may see multiple anomalies detected in your dataset. This is expected, because every new trend is seen as an anomaly at first. As the algorithm processes more records, it learns from them and sets the upper and lower bounds on your data more accurately. The upper and lower bound predictions depend on the interval between job runs.
Also, we can observe that the data quality score is always 100%, based on the generic fare_amount rule we set up. You can explore the statistics by choosing the View trends link for each metric to dive deeper into the values. For example, the following screenshot shows the values for the minimum fare_amount over a set of runs.
The model has predicted the upper bound to be around 1.4 and the lower bound to be around 1.2 for the minimum statistic of the fare_amount metric. When these bounds are breached, it is considered an anomaly.
Run the job for the sixth (anomalous) day and analyze results
For the sixth day, we process a file that has two known anomalies. With this run, you should see anomalies detected on the graph. Complete the following steps:
- Choose the S3 node on the Visual tab and go to its properties.
- Set the day field in the partition predicate to 6.
- Choose Run to run this job.
- Monitor the job on the Runs tab until its status shows Succeeded.
You should see results like the following screenshot, where two anomalies are detected as expected: one for fare_amount with a high value of 95, and one for RowCount with a value of 2,776.
Notice that even though the fare_amount statistic was anomalous and high, the data quality score is still 100%. We will fix this later.
Let's investigate the RowCount anomaly further. As shown in the following screenshot, if you expand the anomaly record, you can see how the predicted upper bound was breached to cause this anomaly.
Up to this point, we saw how a baseline was set for model training and how statistics were collected. We also saw how an anomalous value in our dataset was flagged as an anomaly by the model.
Update data quality rules based on findings
Now that we understand the statistics, let's adjust our ruleset so that when the rules fail, the data quality score is impacted. We take rule recommendations from the anomaly detection feature and add them to the ruleset.
As shown earlier, when an anomaly is detected, rule recommendations appear to the right of the graph. In this case, the recommendation states that the RowCount metric should be between 275.0 and 1966.0. Let's update our visual job.
- Copy the rule under Rule Recommendations for RowCount.
- On the Visual tab, choose the Evaluate Data Quality node, go to its properties, and enter the rule in the rules editor.
- Repeat these steps for fare_amount.
- Adjust your final ruleset to look as follows:
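The RowCount bounds below come from the recommendation; the fare_amount bounds are illustrative and should likewise be taken from its own recommendation in the console:

```
Rules = [
    RowCount between 275 and 1966,
    ColumnValues "fare_amount" between 1 and 50
]
Analyzers = [
    DistinctValuesCount "pulocationid"
]
```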
- Save the job, but don’t run it yet.
So far, we have learned how to use the collected statistics to adjust the rules and make sure our data quality score is accurate. But there is a problem: the anomalous values influence the model training, forcing the upper and lower bounds to adjust to the anomaly. We need to exclude those data points.
Exclude the RowCount anomaly
When an anomaly is detected in your dataset, the upper and lower bound predictions will adjust to it, because the model assumes it is seasonality by default. After investigation, if you believe it is indeed an anomaly and not seasonality, you should exclude it so it doesn't affect future predictions.
Because our sixth run is an anomaly, you can complete the following steps to exclude it:
- On the Anomalies tab, select the anomaly row you want to exclude.
- On the Edit training inputs menu, choose Exclude anomaly.
- Choose Save and retrain.
- Choose the refresh icon.
If you need to view previous anomalous runs, navigate to the Data quality trend graph, hover over the anomaly data point, and choose View selected run results. This will take you to the job run on a new tab where you can follow the preceding steps to exclude the anomaly.
Alternatively, if you ran the job over a period of time and need to exclude multiple data points, you can do so from the Statistics tab:
- On the Data quality tab, go to the Statistics tab and choose View trends for RowCount.
- Select the value you want to exclude.
- On the Edit training inputs menu, choose Exclude anomaly.
- Choose Save and retrain.
- Choose the refresh icon.
It may take a few seconds to reflect the change.
The following figure shows how the model adjusted to the anomalies before exclusion.
The following figure shows how the model retrained itself after the anomalies were excluded.
Now that the predictions are adjusted, all future out-of-range values will be detected as anomalies again.
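If you have many data points to exclude, you can also script the annotation instead of clicking through the console. AWS Glue exposes statistic-annotation APIs for this; the following CLI sketch is based on the BatchPutDataQualityStatisticAnnotation API, with placeholder IDs, so verify the exact parameter shape against the AWS Glue API reference:

```
# Assumption: annotate a single statistic data point as excluded from training.
# The profile and statistic IDs are placeholders obtained from the
# Glue data quality statistics APIs.
aws glue batch-put-data-quality-statistic-annotation \
  --inclusion-annotations '[{
    "ProfileId": "<profile-id>",
    "StatisticId": "<statistic-id>",
    "InclusionAnnotation": "EXCLUDE"
  }]'
```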
Now you can run the job for day 7, which has non-anomalous data, and explore the trends.
Add an anomaly detection rule
It can be challenging to keep rule values current as business trends grow. For example, at some point in the future, the NYC taxi row count will exceed the now-anomalous RowCount value of 2,200. As you run the job over a longer period, the model matures and fine-tunes itself to the incoming data. At that point, you can make anomaly detection a rule by itself, so you don't have to keep updating threshold values, and you can still stop jobs or decrease the data quality score. When there is an anomaly in the dataset, the quality of the data is not good, and the data quality score should reflect that. Let's add a DetectAnomalies rule for the RowCount metric.
- On the Visual tab, choose the Evaluate Data Quality node.
- For Rule types, search for and choose DetectAnomalies, then add the rule.
Your final ruleset should look like the following screenshot. Notice that you don't specify any threshold values for RowCount.
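Expressed in DQDL, the ruleset now reads along these lines; this is a sketch that assumes DetectAnomalies takes the statistic name, with the fare_amount bounds illustrative as before:

```
Rules = [
    DetectAnomalies "RowCount",
    ColumnValues "fare_amount" between 1 and 50
]
Analyzers = [
    DistinctValuesCount "pulocationid"
]
```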
This is the real power of anomaly detection in your ETL pipeline.
Seasonality use case
The following screenshot shows an example of a trend with more granular seasonality. The NYC taxi dataset has a varying number of rides throughout the day, depending on peak hours, mid-day hours, and night hours. The following anomaly detection job ran every hour on the current timestamp's data to capture the seasonality of the day, and the upper and lower bounds have adjusted to it. When the number of rides drops unexpectedly within that seasonal trend, it is detected as an anomaly.
We saw how a data engineer can build anomaly detection into a pipeline that processes incoming data at regular intervals. We also learned how, after the model matures, you can turn anomaly detection into a rule and fail the job when an anomaly is detected, to avoid redundant downstream processing.
Clean up
To clean up your resources, complete the following steps:
- On the Amazon S3 console, empty the S3 bucket created by the CloudFormation stack.
- On the AWS Glue console, delete the anomaly-detection-blog-visual AWS Glue job you created.
- If you deployed the CloudFormation stack, delete the stack on the AWS CloudFormation console.
Conclusion
This post demonstrated the new anomaly detection feature in AWS Glue Data Quality. Although static and dynamic data quality rules are very useful, they can't capture data seasonality or how data changes as your business evolves. A machine learning model supporting anomaly detection can understand these complex changes and inform you of anomalies in the dataset. Also, the rule recommendations provided can help you author accurate data quality rules. You can also enable anomaly detection as a rule after the model has been trained over a longer period on a sufficient amount of data.
To learn more about AWS Glue Data Quality, check out AWS Glue Data Quality. If you have any comments or feedback, leave them in the comments section.
About the authors
Noah Soprala is a Solutions Architect based out of Dallas. He is a trusted advisor to his customers in the ISV industry and helps them build innovative solutions using AWS technologies. Noah has over 20 years of experience in consulting, development, and solution delivery.
Shovan Kanjilal is a Senior Analytics and Machine Learning Architect with Amazon Web Services. He is passionate about helping customers build scalable, secure and high-performance data solutions in the cloud.
Shiv Narayanan is a Technical Product Manager for AWS Glue’s data management capabilities like data quality, sensitive data detection and streaming capabilities. Shiv has over 20 years of data management experience in consulting, business development and product management.
Jesus Max Hernandez is a Software Development Engineer at AWS Glue. He joined the team after graduating from The University of Texas at El Paso, and the majority of his work has been in frontend development. Outside of work, you can find him practicing guitar or playing flag football.
Tyler McDaniel is a software development engineer on the AWS Glue team with diverse technical interests, including high-performance computing and optimization, distributed systems, and machine learning operations. He has eight years of experience in software and research roles.
Andrius Juodelis is a Software Development Engineer at AWS Glue with a keen interest in AI, designing machine learning systems, and data engineering.