Q: What is AWS Glue?
AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all the capabilities needed for data integration, so you can start analyzing your data and putting it to use in minutes instead of months. AWS Glue provides both visual and code-based interfaces to make data integration easier. Users can easily find and access data using the AWS Glue Data Catalog. Data engineers and ETL (extract, transform, and load) developers can visually create, run, and monitor ETL workflows with a few clicks in AWS Glue Studio. Data analysts and data scientists can use AWS Glue DataBrew to visually enrich, clean, and normalize data without writing code.
Q: How do I get started with AWS Glue?
To start using AWS Glue, simply sign into the AWS Management Console and navigate to “Glue” under the “Analytics” category. You can follow one of our guided tutorials that will walk you through an example use case for AWS Glue. You can also find sample ETL code in our GitHub repository under AWS Labs.
Q: What are the main components of AWS Glue?
AWS Glue consists of a Data Catalog, which is a central metadata repository; an ETL engine that can automatically generate Scala or Python code; a flexible scheduler that handles dependency resolution, job monitoring, and retries; and AWS Glue DataBrew for cleaning and normalizing data with a visual interface. Together, these automate much of the undifferentiated heavy lifting involved with discovering, categorizing, cleaning, enriching, and moving data, so you can spend more time analyzing your data.
Q: When should I use AWS Glue?
You should use AWS Glue to discover properties of the data you own, transform it, and prepare it for analytics. Glue can automatically discover both structured and semi-structured data stored in your data lake on Amazon S3, data warehouse in Amazon Redshift, and various databases running on AWS. It provides a unified view of your data via the Glue Data Catalog that is available for ETL, querying and reporting using services like Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. Glue automatically generates Scala or Python code for your ETL jobs that you can further customize using tools you are already familiar with. You can use AWS Glue DataBrew to visually clean up and normalize data without writing code.
Q: What data sources does AWS Glue support?
AWS Glue can integrate with more than 80 data sources on AWS, on premises, and on other clouds. The service natively supports the following data stores, including databases running in your Amazon Virtual Private Cloud (Amazon VPC) on Amazon EC2:
- Amazon Aurora
- Amazon RDS for MySQL
- Amazon RDS for Oracle
- Amazon RDS for PostgreSQL
- Amazon RDS for SQL Server
- Amazon Redshift
- Amazon DynamoDB
- Amazon S3
- MySQL, Oracle, Microsoft SQL Server, and PostgreSQL
AWS Glue also supports data streams from Amazon MSK, Amazon Kinesis Data Streams, and Apache Kafka. You can add connectors, including Snowflake, Google BigQuery, and Teradata, from the AWS Marketplace.
You can also write custom Scala or Python code and import custom libraries and Jar files into your AWS Glue ETL jobs to access data sources not natively supported by AWS Glue. For more details on importing custom libraries, refer to our documentation.
Q: How does AWS Glue relate to AWS Lake Formation?
Lake Formation shares infrastructure with AWS Glue, including console controls, ETL code creation and job monitoring, a common data catalog, and a serverless architecture. While AWS Glue remains focused on these types of functions, Lake Formation incorporates AWS Glue's features and provides additional capabilities designed to help build, secure, and manage a data lake. See the AWS Lake Formation pages for more details.
AWS Glue Data Catalog
Q: What is the AWS Glue Data Catalog?
The AWS Glue Data Catalog is a central repository to store structural and operational metadata for all your data assets. For a given data set, you can store its table definition, physical location, add business relevant attributes, as well as track how this data has changed over time.
The AWS Glue Data Catalog is Apache Hive Metastore compatible and is a drop-in replacement for the Apache Hive Metastore for Big Data applications running on Amazon EMR. For more information on setting up your EMR cluster to use AWS Glue Data Catalog as an Apache Hive Metastore, click here.
The AWS Glue Data Catalog also provides out-of-box integration with Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. Once you add your table definitions to the Glue Data Catalog, they are available for ETL and also readily available for querying in Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum so that you can have a common view of your data between these services.
Q: How do I get my metadata into the AWS Glue Data Catalog?
AWS Glue provides a number of ways to populate metadata into the AWS Glue Data Catalog. Glue crawlers scan various data stores you own to automatically infer schemas and partition structure and populate the Glue Data Catalog with corresponding table definitions and statistics. You can also schedule crawlers to run periodically so that your metadata is always up-to-date and in-sync with the underlying data. Alternately, you can add and update table details manually by using the AWS Glue Console or by calling the API. You can also run Hive DDL statements via the Amazon Athena Console or a Hive client on an Amazon EMR cluster. Finally, if you already have a persistent Apache Hive Metastore, you can perform a bulk import of that metadata into the AWS Glue Data Catalog by using our import script.
Q: What are AWS Glue crawlers?
An AWS Glue crawler connects to a data store, progresses through a prioritized list of classifiers to extract the schema of your data and other statistics, and then populates the Glue Data Catalog with this metadata. Crawlers can run periodically to detect the availability of new data as well as changes to existing data, including table definition changes. Crawlers automatically add new tables, new partitions to existing tables, and new versions of table definitions. You can customize Glue crawlers to classify your own file types.
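The classifier-driven schema inference described above can be sketched in miniature. The following is a deliberately simplified illustration, not the actual AWS Glue crawler implementation: each value is tested against progressively more specific types, and each column is widened to the most general type observed.

```python
# Toy sketch of crawler-style schema inference (not the actual AWS Glue
# crawler implementation): test each value against progressively more
# specific types and keep the most general type seen per column.
def infer_type(value: str) -> str:
    for cast, type_name in ((int, "bigint"), (float, "double")):
        try:
            cast(value)
            return type_name
        except ValueError:
            pass
    return "string"

def infer_schema(rows: list) -> dict:
    """Widen each column to the most general type observed."""
    rank = {"bigint": 0, "double": 1, "string": 2}
    schema = {}
    for row in rows:
        for col, val in row.items():
            t = infer_type(val)
            if col not in schema or rank[t] > rank[schema[col]]:
                schema[col] = t
    return schema
```

Running `infer_schema` over sample records mimics, at a very small scale, how a crawler derives table definitions before writing them to the Data Catalog.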
Q: How do I import data from my existing Apache Hive Metastore to the AWS Glue Data Catalog?
You simply run an ETL job that reads from your Apache Hive Metastore, exports the data to an intermediate format in Amazon S3, and then imports that data into the AWS Glue Data Catalog.
Q: Do I need to maintain my Apache Hive Metastore if I am storing my metadata in the AWS Glue Data Catalog?
No. AWS Glue Data Catalog is Apache Hive Metastore compatible. You can point to the Glue Data Catalog endpoint and use it as an Apache Hive Metastore replacement. For more information on how to configure your cluster to use AWS Glue Data Catalog as an Apache Hive Metastore, please read our documentation here.
Q: If I am already using Amazon Athena or Amazon Redshift Spectrum and have tables in Amazon Athena’s internal data catalog, how can I start using the AWS Glue Data Catalog as my common metadata repository?
Before you can start using AWS Glue Data Catalog as a common metadata repository between Amazon Athena, Amazon Redshift Spectrum, and AWS Glue, you must upgrade your Amazon Athena data catalog to AWS Glue Data Catalog. The steps required for the upgrade are detailed here.
Q: What analytics services use the AWS Glue Data Catalog?
The metadata stored in the AWS Glue Data Catalog can be readily accessed from Glue ETL, Amazon Athena, Amazon EMR, Amazon Redshift Spectrum, and third-party services.
AWS Glue Schema Registry
Q: What is the AWS Glue Schema Registry?
AWS Glue Schema Registry, a serverless feature of AWS Glue, enables you to validate and control the evolution of streaming data using schemas registered in Apache Avro and JSON Schema data formats, at no additional charge. Through Apache-licensed serializers and deserializers, the Schema Registry integrates with Java applications developed for Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), Amazon Kinesis Data Streams, Apache Flink, Amazon Kinesis Data Analytics for Apache Flink, and AWS Lambda. When data streaming applications are integrated with the Schema Registry, you can improve data quality and safeguard against unexpected changes using compatibility checks that govern schema evolution. Additionally, you can create or update AWS Glue tables and partitions using Apache Avro schemas stored within the registry.
Q: Why should I use AWS Glue Schema Registry?
With the AWS Glue Schema Registry, you can:
- Validate schemas. When data streaming applications are integrated with AWS Glue Schema Registry, schemas used for data production are validated against schemas within a central registry, allowing you to centrally control data quality.
- Safeguard schema evolution. You can set rules on how schemas can and cannot evolve using one of eight compatibility modes.
- Improve data quality. Serializers validate schemas used by data producers against those stored in the registry, improving data quality when it originates and reducing downstream issues from unexpected schema drift.
- Save costs. Serializers convert data into a binary format and can compress it before it is delivered, reducing data transfer and storage costs.
- Improve processing efficiency. In many cases, a data stream contains records of different schemas. The Schema Registry enables applications that read from data streams to selectively process each record based on the schema without having to parse its contents, which increases processing efficiency.
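The selective-processing point above can be illustrated with a toy example. The record layout below (a 4-byte schema ID prefix) is invented for illustration and is not the Schema Registry's actual wire format:

```python
import struct

# Invented record layout for illustration (not the Schema Registry's real
# wire format): a schema ID prefix lets a consumer route each record by
# schema without parsing payloads it does not care about.
def encode(schema_id: int, payload: bytes) -> bytes:
    return struct.pack(">I", schema_id) + payload

def dispatch(record: bytes, handlers: dict):
    schema_id = struct.unpack_from(">I", record)[0]
    handler = handlers.get(schema_id)
    if handler is None:
        return None  # cheaply skip records whose schema we don't process
    return handler(record[4:])
```

A reader interested only in one schema inspects four bytes per record and skips the rest, which is the efficiency gain described above.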
Q: What data format, client language, and integrations are supported by AWS Glue Schema Registry?
The Schema Registry supports Apache Avro and JSON Schema data formats and Java client applications. We plan to continue expanding support for other data formats and non-Java clients. The Schema Registry integrates with applications developed for Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), Amazon Kinesis Data Streams, Apache Flink, Amazon Kinesis Data Analytics for Apache Flink, and AWS Lambda.
Q: What kinds of evolution rules does AWS Glue Schema Registry support?
The following compatibility modes are available for you to manage your schema evolution: Backward, Backward All, Forward, Forward All, Full, Full All, None, and Disabled. Visit the Schema Registry user documentation to learn more about compatibility rules.
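As a rough illustration of two of these modes, consider a simplified field-level check. This is not the registry's actual algorithm, and real Avro compatibility rules are more involved; schemas here are modeled as plain dictionaries for illustration:

```python
# Simplified sketch of two compatibility modes (not the Schema Registry's
# actual algorithm). Schemas are modeled as {field_name: {"type", "default"}}.
def is_backward_compatible(old: dict, new: dict) -> bool:
    """BACKWARD: consumers on `new` can read data written with `old`,
    so any field added in `new` must carry a default."""
    added = set(new) - set(old)
    return all("default" in new[f] for f in added)

def is_forward_compatible(old: dict, new: dict) -> bool:
    """FORWARD: data written with `new` is readable by consumers on `old`,
    so any field removed from `old` must have had a default there."""
    removed = set(old) - set(new)
    return all("default" in old[f] for f in removed)
```

Modes like Backward All and Forward All extend the same idea across every prior schema version rather than just the latest one.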
Q: How does AWS Glue Schema Registry maintain high availability for my applications?
The Schema Registry storage and control plane is designed for high availability and is backed by the AWS Glue SLA, and the serializers and deserializers leverage best-practice caching techniques to maximize schema availability within clients.
Q: Is AWS Glue Schema Registry open-source?
AWS Glue Schema Registry storage is an AWS service, while the serializers and deserializers are Apache-licensed open-source components.
Q: Does AWS Glue Schema Registry provide encryption at rest and in transit?
Yes. Your clients communicate with the Schema Registry via API calls that encrypt data in transit using TLS over HTTPS. Schemas stored in the Schema Registry are always encrypted at rest using a service-managed AWS KMS key.
Q: How can I privately connect to AWS Glue Schema Registry?
You can use AWS PrivateLink to connect your data producer’s VPC to AWS Glue by defining an interface VPC endpoint for AWS Glue. When you use a VPC interface endpoint, communication between your VPC and AWS Glue is conducted entirely within the AWS network. For more information, please visit the user documentation.
Q: How can I monitor my AWS Glue Schema Registry usage?
Amazon CloudWatch metrics are available as part of CloudWatch’s free tier. You can access these metrics in the CloudWatch console. Visit the AWS Glue Schema Registry user documentation for more information.
Q: Does AWS Glue Schema Registry provide tools to manage user authorization?
Yes, the Schema Registry supports both resource-level permissions and identity-based IAM policies.
Q: How do I migrate from an existing schema registry to the AWS Glue Schema Registry?
Steps to migrate from a third-party schema registry to AWS Glue Schema Registry are available in the user documentation.
Extract, transform, and load (ETL)
Q: Does AWS Glue have a no-code interface for visual ETL?
Yes. AWS Glue Studio offers a graphical interface for authoring Glue jobs to process your data. After you define the flow of your data sources, transformations, and targets in the visual interface, AWS Glue Studio generates Apache Spark code on your behalf.
Q: What programming language can I use to write my ETL code for AWS Glue?
You can use either Scala or Python.
Q: How can I customize the ETL code generated by AWS Glue?
AWS Glue’s ETL script recommendation system generates Scala or Python code. It leverages Glue’s custom ETL library to simplify access to data sources and manage job execution. You can find more details about the library in our documentation. You can write ETL code using AWS Glue’s custom library, or write arbitrary Scala or Python code by editing inline via the AWS Glue Console script editor or by downloading the auto-generated code and editing it in your own IDE. You can also start with one of the many samples hosted in our GitHub repository and customize that code.
Q: Can I import custom libraries as part of my ETL script?
Yes. You can import custom Python libraries and Jar files into your AWS Glue ETL job. For more details, please check our documentation here.
Q: Can I bring my own code?
Yes. You can write your own code using AWS Glue’s ETL library, or write your own Scala or Python code and upload it to a Glue ETL job. For more details, please check our documentation here.
Q: How can I develop my ETL code using my own IDE?
You can create development endpoints and connect your notebooks and IDEs to them.
Q: How can I build end-to-end ETL workflow using multiple jobs in AWS Glue?
In addition to the ETL library and code generation, AWS Glue provides a robust set of orchestration features that allow you to manage dependencies between multiple jobs to build end-to-end ETL workflows. AWS Glue ETL jobs can either be triggered on a schedule or on a job completion event. Multiple jobs can be triggered in parallel or sequentially by triggering them on a job completion event. You can also trigger one or more Glue jobs from an external source such as an AWS Lambda function.
Q: How does AWS Glue monitor dependencies?
AWS Glue manages dependencies between two or more jobs or dependencies on external events using triggers. Triggers can watch one or more jobs as well as invoke one or more jobs. You can either have a scheduled trigger that invokes jobs periodically, an on-demand trigger, or a job completion trigger.
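A job-completion (conditional) trigger can be modeled in a few lines. The following is a toy simulation of the concept, not the AWS Glue trigger API:

```python
# Toy model of a conditional trigger (not the AWS Glue API): the trigger
# fires its actions once every watched job reaches the required state.
def ready_to_fire(trigger: dict, job_states: dict) -> bool:
    return all(job_states.get(job) == wanted
               for job, wanted in trigger["watch"].items())

trigger = {
    "watch": {"extract_orders": "SUCCEEDED", "extract_users": "SUCCEEDED"},
    "actions": ["join_and_load"],  # jobs to start once conditions are met
}
```

A scheduled trigger would replace the `watch` conditions with a cron expression, and an on-demand trigger fires only when explicitly invoked.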
Q: How does AWS Glue handle ETL errors?
AWS Glue monitors job event metrics and errors, and pushes all notifications to Amazon CloudWatch. With Amazon CloudWatch, you can configure a host of actions that can be triggered based on specific notifications from AWS Glue. For example, if you get an error or a success notification from Glue, you can trigger an AWS Lambda function. Glue also provides default retry behavior that will retry all failures three times before sending out an error notification.
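The default retry-then-notify behavior can be sketched as a simple wrapper. This is illustrative only; Glue applies retries at the job level rather than in user code:

```python
# Sketch of retry-then-notify behavior (illustrative, not Glue internals):
# re-run a failing job up to three times before surfacing the error.
def run_with_retries(job, max_retries: int = 3):
    attempt = 0
    while True:
        try:
            return job()
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise  # after the final retry, an error notification goes out
```

With `max_retries=3`, a job is attempted up to four times in total before the failure is reported.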
Q: Can I run my existing ETL jobs with AWS Glue?
Yes. You can run your existing Scala or Python code on AWS Glue. Simply upload the code to Amazon S3 and create one or more jobs that use that code. You can reuse the same code across multiple jobs by pointing them to the same code location on Amazon S3.
Q: How can I use AWS Glue to ETL streaming data?
AWS Glue supports ETL on streams from Amazon Kinesis Data Streams, Apache Kafka, and Amazon MSK. Add the stream to the Glue Data Catalog and then choose it as the data source when setting up your AWS Glue job.
Q: Do I have to use both AWS Glue Data Catalog and Glue ETL to use the service?
No. While we do believe that using both the AWS Glue Data Catalog and ETL provides an end-to-end ETL experience, you can use either one of them independently without using the other.
Q: When should I use AWS Glue Streaming and when should I use Amazon Kinesis Data Analytics?
Both AWS Glue and Amazon Kinesis Data Analytics can be used to process streaming data. AWS Glue is recommended when your use cases are primarily ETL and when you want to run jobs on a serverless Apache Spark-based platform. Amazon Kinesis Data Analytics is recommended when your use cases are primarily analytics and when you want to run jobs on a serverless Apache Flink-based platform.
Streaming ETL in AWS Glue enables advanced ETL on streaming data using the same serverless, pay-as-you-go platform that you currently use for your batch jobs. AWS Glue generates customizable ETL code to prepare your data while in flight and has built-in functionality to process streaming data that is semi-structured or has an evolving schema. Use Glue to apply both its built-in and Spark-native transforms to data streams and load them into your data lake or data warehouse.
Amazon Kinesis Data Analytics enables you to build sophisticated streaming applications to analyze streaming data in real time. It provides a serverless Apache Flink runtime that automatically scales without servers and durably saves application state. Use Amazon Kinesis Data Analytics for real-time analytics and more general stream data processing.
Q: When should I use AWS Glue and when should I use Amazon Kinesis Data Firehose?
Both AWS Glue and Amazon Kinesis Data Firehose can be used for streaming ETL. AWS Glue is recommended for complex ETL, including joining streams, and partitioning the output in Amazon S3 based on the data content. Amazon Kinesis Data Firehose is recommended when your use cases focus on data delivery and preparing data to be processed after it is delivered.
Streaming ETL in AWS Glue enables advanced ETL on streaming data using the same serverless, pay-as-you-go platform that you currently use for your batch jobs. AWS Glue generates customizable ETL code to prepare your data while in flight and has built-in functionality to process streaming data that is semi-structured or has an evolving schema. Use Glue to apply complex transforms to data streams, enrich records with information from other streams and persistent data stores, and then load records into your data lake or data warehouse.
Streaming ETL in Amazon Kinesis Data Firehose enables you to easily capture, transform, and deliver streaming data. Amazon Kinesis Data Firehose provides ETL capabilities including serverless data transformation through AWS Lambda and format conversion from JSON to Parquet. It includes ETL capabilities that are designed to make data easier to process after delivery, but does not include the advanced ETL capabilities that AWS Glue supports.
Q: What kind of problems does the FindMatches ML Transform solve?
FindMatches generally solves Record Linkage and Data Deduplication problems. Deduplication is what you have to do when you are trying to identify records in a database which are conceptually “the same”, but for which you have separate records. This problem is trivial if duplicate records can be identified by a unique key (for instance if products can be uniquely identified by a UPC Code), but becomes very challenging when you have to do a “fuzzy match”.
Record linkage is basically the same problem as data deduplication under the hood, but this term usually means that you are doing a “fuzzy join” of two databases that do not share a unique key rather than deduplicating a single database. As an example, consider the problem of matching a large database of customers to a small database of known fraudsters. FindMatches can be used on both record linkage and deduplication problems.
For instance, AWS Glue's FindMatches ML Transform can help you with the following problems:
- Linking patient records between hospitals so that doctors have more background information and can better treat patients, by using FindMatches on separate databases that both contain common fields such as name, birthday, home address, and phone number.
- Deduplicating a database of movies containing columns like “title”, “plot synopsis”, “year of release”, “run time”, and “cast”. For instance, the same movie might be variously identified as “Star Wars”, “Star Wars: A New Hope”, and “Star Wars: Episode IV—A New Hope (Special Edition)”.
- Automatically grouping all related products together in your storefront by identifying equivalent items in an apparel product catalog, where you define “equivalent” to mean the same item ignoring differences in size and color. Hence “Levi 501 Blue Jeans, size 34x34” is defined to be the same as “Levi 501 Jeans, black, size 32x31”.
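To make the "fuzzy match" idea concrete, here is a deliberately naive sketch using edit-distance similarity with a fixed threshold. FindMatches itself learns a matching model from your labeled examples rather than using a hand-picked threshold like this:

```python
from difflib import SequenceMatcher

# Naive fuzzy grouping (illustration only; FindMatches learns a model
# from labeled examples instead of using a fixed similarity threshold).
def similar(a: str, b: str, threshold: float = 0.55) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def dedupe(titles: list) -> list:
    """Greedily group titles that clear the similarity threshold."""
    groups = []
    for title in titles:
        for group in groups:
            if similar(title, group[0]):
                group.append(title)
                break
        else:
            groups.append([title])
    return groups
```

The hand-tuned threshold is exactly the kind of brittle rule-writing that FindMatches replaces with a learned model.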
Q: How does AWS Glue deduplicate my data?
AWS Glue's FindMatches ML Transform makes it easy to find and link records that refer to the same entity but don’t share a reliable identifier. Before FindMatches, developers would commonly solve data-matching problems deterministically, by writing huge numbers of hand-tuned rules. FindMatches uses machine learning algorithms behind the scenes to learn how to match records according to each developer's own business criteria. FindMatches first identifies records for the customer to label as to whether they match or do not match and then uses machine learning to create an ML Transform. Customers can then execute this Transform on their database to find matching records or they can ask FindMatches to give them additional records to label to push their ML Transform to higher levels of accuracy.
Q: What are ML Transforms?
ML Transforms provide a way to create and manage machine-learned transforms. Once created and trained, these ML Transforms can be executed in standard AWS Glue scripts. Customers select a particular algorithm (for example, the FindMatches ML Transform) and provide input datasets, training examples, and the tuning parameters needed by that algorithm. AWS Glue uses those inputs to build an ML Transform that can be incorporated into a normal ETL job workflow.
Q: How do ML Transforms work?
AWS Glue includes specialized ML-based dataset transformation algorithms customers can use to create their own ML Transforms. These include record de-duplication and match finding.
Customers start by navigating to the ML Transforms tab in the console (or using the ML Transforms service endpoints or accessing ML Transforms training via the CLI) to create their first ML Transform model. The ML Transforms tab provides a user-friendly view for managing user transforms. ML Transforms have workflow requirements distinct from those of other transforms, including the need for separate training, parameter tuning, and execution workflows; the need to estimate the quality of generated transformations; and the need to manage and collect additional truth labels for training and active learning.
To create an ML transform via the console, customers first select the transform type (such as Record Deduplication or Record Matching) and provide the appropriate data sources previously discovered in Data Catalog. Depending on the transform, customers may then be asked to provide ground truth label data for training or additional parameters. Customers can monitor the status of their training jobs and view quality metrics for each transform. (Quality metrics are reported using a hold-out set of the customer-provided label data.)
Once satisfied with the performance, customers can promote ML Transforms models for use in production. ML Transforms can then be used during ETL workflows, both in code autogenerated by the service and in user-defined scripts submitted with other jobs, similar to pre-built transforms offered in other AWS Glue libraries.
Q: Can I see a presentation on using AWS Glue (and AWS Lake Formation) to find matches and deduplicate records?
Yes. The full recording of the AWS Online Tech Talk, "Fuzzy Matching and Deduplicating Data with ML Transforms for AWS Lake Formation," is available here.
AWS Glue Data Quality (Preview)
Q: What is AWS Glue Data Quality?
AWS Glue Data Quality is a feature of AWS Glue that reduces manual data quality effort by automatically measuring and monitoring the quality of data in data lakes and pipelines. AWS Glue Data Quality analyzes data in data lakes and automatically recommends data quality rules. You can modify these rules, add additional rules from built-in rule types, and configure actions to alert teams when quality issues occur. Rules can also be included in AWS Glue data pipelines and scheduled to run periodically. This feature then measures data quality by evaluating these rules and calculates data quality scores. You can view these data quality scores in the AWS Glue Data Catalog. Data quality issues can be remediated by modifying data pipelines and tracking quality score improvements using AWS Glue Data Quality.
Q: What is data quality and why is it important?
Data quality is the measure of how well suited a dataset is to serve its specific purpose, such as analytics to improve operations, business decision making, and planning. Hundreds of thousands of customers use data lakes on AWS, but customers struggle to use their data assets effectively due to poor data quality.
Q: Why should I use AWS Glue Data Quality?
AWS Glue Data Quality reduces the manual effort and time that it takes to set up data quality checks in your data lakes and pipelines. It automates the process of analyzing data to determine the appropriate data quality rules, and then it applies those checks on a schedule that you choose. It also provides built-in options for monitoring and alerting.
Q: What rules does AWS Glue Data Quality support?
AWS Glue Data Quality currently supports 18 built-in rule types under four categories:
- Consistency rules check if data across different columns agrees by looking at column correlations.
- Accuracy rules check if record counts meet a set threshold and if columns are not empty, match certain patterns, have valid data types, and have valid values.
- Integrity rules check if duplicates exist in a dataset.
- Completeness rules check that data in your datasets does not have missing values.
These rule types are high-level groupings of data quality rules that address several use cases. For any missing requirements, you can author custom rules using SQL.
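A toy version of rule evaluation and scoring, where the rule names and checks are invented for illustration and are not AWS Glue Data Quality's actual rule syntax:

```python
# Toy rule evaluation (rule names are illustrative, not AWS Glue Data
# Quality's actual rule syntax): the score is the fraction of rules passing.
def completeness(rows, col):
    return all(r.get(col) not in (None, "") for r in rows)

def uniqueness(rows, col):
    values = [r.get(col) for r in rows]
    return len(values) == len(set(values))

def evaluate(rows, rules):
    results = {name: bool(check(rows)) for name, check in rules.items()}
    score = sum(results.values()) / len(results)
    return results, score
```

The per-rule results map to pass/fail outcomes, and the aggregate fraction mirrors the idea of a data quality score computed over a rule set.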
Q: How can I get started with AWS Glue Data Quality?
To get started, go to Data Quality in the Data Catalog and select a table. Then choose the Data Quality tab to get started. Alternatively, you can set up data quality rules within your pipelines by adding a Data Quality transform on AWS Glue Studio. You can also use APIs to set up data quality rules and run them.
Q: How does AWS Glue Data Quality generate recommendations?
AWS Glue Data Quality uses Deequ, an Amazon-developed open-source framework that many Amazon teams use to manage the quality of internal Amazon datasets at petabyte scale. One Amazon team uses Deequ to check dataset quality in their 60 PB data lake. Deequ uses Apache Spark to gather data statistics, such as averages, correlations, patterns, and other advanced statistics. It then uses these statistics to identify the right set of checks or rules to validate data quality.
Q: How can I edit the recommended rules or add new rules?
You can view and edit recommended rules in the Data Catalog. If you are using other AWS services, you can programmatically access your recommendations using the AWS Glue Data Quality API. You can also add new rules in the Data Catalog.
Q: How does AWS Glue Data Quality verify that my rules are relevant when data changes?
You can schedule the recommendation process to get new recommendations based on recent data. AWS Glue Data Quality will provide new recommendations based on recent data patterns.
Q: What built-in actions are available on AWS Glue Data Quality?
You can use actions to respond to a data quality issue. In the Data Catalog, you can write the metrics to Amazon CloudWatch and set up alerts in CloudWatch to notify you when scores go below a threshold. On AWS Glue Studio, you can fail a job when quality deteriorates, preventing bad data from moving into data lakes.
Q: How can I evaluate my data’s quality?
After you create data quality rules in the Data Catalog, you can create a data quality task and run it immediately or schedule it to run at certain intervals. Data quality rules on your pipelines evaluate your data quality as data is brought into your data lake through your pipelines.
Q: Where can I view AWS Glue Data Quality scores?
You can make confident data-driven decisions using data quality scores. You can view data quality scores on the Data Quality tab of your table from the Data Catalog. You can view your data pipeline scores on AWS Glue Studio by opening an AWS Glue Studio job and choosing Data Quality. You can configure your data quality tasks to write results to an Amazon Simple Storage Service (Amazon S3) bucket. You can then query this data using Amazon Athena or Amazon QuickSight.
Q: What is the difference between data quality rules on AWS Glue DataBrew, AWS Glue Data Catalog, and AWS Glue Studio?
Business analysts and data analysts use DataBrew to transform data without writing any code. Data stewards and data engineers use Data Catalog to manage metadata. Data engineers use AWS Glue Studio to author scalable data integration pipelines. These user types must manage data quality in their workflows. Also, data engineers need more technical data quality rules compared to business analysts who write functional rules. Therefore, data quality features are made available in each of these experiences to meet unique user requirements.
AWS Glue DataBrew
Q: What is AWS Glue DataBrew?
AWS Glue DataBrew is a visual data preparation tool that makes it easy for data analysts and data scientists to prepare data with an interactive, point-and-click visual interface without writing code. With Glue DataBrew, you can easily visualize, clean, and normalize terabytes, and even petabytes of data directly from your data lake, data warehouses, and databases, including Amazon S3, Amazon Redshift, Amazon Aurora, and Amazon RDS. AWS Glue DataBrew is generally available today in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo).
Q: Who can use AWS Glue DataBrew?
AWS Glue DataBrew is built for users who need to clean and normalize data for analytics and machine learning. Data analysts and data scientists are the primary users. For data analysts, examples of job functions are business intelligence analysts, operations analysts, market intelligence analysts, legal analysts, financial analysts, economists, quants, or accountants. For data scientists, examples of job functions are materials scientists, bioanalytical scientists, and scientific researchers.
Q: What types of transformations are supported in AWS Glue DataBrew?
You can choose from over 250 built-in transformations to combine, pivot, and transpose the data without writing code. AWS Glue DataBrew also automatically recommends transformations such as filtering anomalies, correcting invalid, incorrectly classified, or duplicate data, normalizing data to standard date and time values, or generating aggregates for analyses. For complex transformations, such as converting words to a common base or root word, Glue DataBrew provides transformations that use advanced machine learning techniques such as Natural Language Processing (NLP). You can group multiple transformations together, save them as recipes, and apply the recipes directly to the new incoming data.
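The recipe concept, an ordered and reusable list of transformation steps, can be sketched as follows. The steps here are invented for illustration and are not DataBrew's built-in transformations:

```python
# Sketch of the recipe idea: a saved, ordered list of transformation steps
# that can be re-applied to new incoming data. Steps are invented examples.
def strip_spaces(col):
    return lambda row: {**row, col: row[col].strip()}

def lowercase(col):
    return lambda row: {**row, col: row[col].lower()}

def apply_recipe(rows, recipe):
    """Apply each saved step, in order, to every row."""
    for step in recipe:
        rows = [step(r) for r in rows]
    return rows

recipe = [strip_spaces("name"), lowercase("name")]
```

Saving the step list separately from the data is what lets the same cleanup logic be replayed against each new batch of incoming records.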
Q: What file formats does AWS Glue DataBrew support?
For input data, AWS Glue DataBrew supports commonly used file formats, such as comma-separated values (.csv), JSON and nested JSON, Apache Parquet and nested Apache Parquet, and Excel sheets. For output data, AWS Glue DataBrew supports comma-separated values (.csv), JSON, Apache Parquet, Apache Avro, Apache ORC and XML.
Q: Can I try AWS Glue DataBrew for free?
Yes. Sign up for an AWS Free Tier account, then visit the AWS Glue DataBrew Management Console, and get started instantly for free. If you are a first-time user of Glue DataBrew, the first 40 interactive sessions are free. Visit the AWS Glue Pricing page to learn more.
Q: Do I need to use AWS Glue Data Catalog or AWS Lake Formation to use AWS Glue DataBrew?
No. You can use AWS Glue DataBrew without using either the AWS Glue Data Catalog or AWS Lake Formation. However, if you use either the AWS Glue Data Catalog or AWS Lake Formation, DataBrew users can select the data sets available to them from their centralized data catalog.
Q: Can I retain a record of all changes made to my data?
Yes. You can visually track all the changes made to your data in the AWS Glue DataBrew Management Console. The visual view makes it easy to trace the changes and relationships made to the datasets, projects and recipes, and all other associated jobs. In addition, Glue DataBrew keeps all account activities as logs in the AWS CloudTrail.
AWS Glue Flex Jobs
Q: What is Glue Flex?
AWS Glue Flex is a flexible execution job class that allows you to reduce the cost of your non-urgent data integration workloads (e.g., pre-production jobs, testing, data loads, etc.) by up to 35%. Glue has two job execution classes: standard and flexible. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources. The flexible execution class is appropriate for non-urgent jobs whose start and completion times may vary, such as nightly batch ETL jobs, weekend jobs, and one-time bulk data ingestion jobs.
Q: How do AWS Glue’s standard and flexible execution classes differ?
AWS Glue’s standard and flexible execution classes have different execution properties. With the standard execution class, jobs start immediately and have dedicated resources while running. Flexible execution class jobs run on non-dedicated compute resources in AWS that can be reclaimed while a job is running, and their start and completion times vary. As a result, the two execution classes are appropriate for different workloads. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources. The flexible execution class is less expensive and suitable for non-urgent jobs where variance in start and completion times is acceptable.
Q: How do I get started with AWS Glue Flex flexible execution class jobs?
The flexible execution class is available for Glue Spark jobs. To use the flexible execution class, you simply change the default setting of the execution class parameter from “STANDARD” to “FLEX”. You can do this via Glue Studio or the CLI. Visit the AWS Glue user documentation for more information.
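The same switch can be made programmatically when starting a job run. As a hedged sketch, the helper below only builds the request parameters for a flexible-execution run (the job name is a placeholder; uncomment the boto3 lines to actually submit it):

```python
# Sketch: building job-run parameters with the flexible execution class.
# import boto3  # uncomment to submit the run for real

def flex_run_params(job_name, timeout_minutes=120):
    """Request parameters for a flexible-execution job run."""
    return {
        "JobName": job_name,
        "ExecutionClass": "FLEX",    # default is "STANDARD"
        "Timeout": timeout_minutes,  # Glue cancels the run after this timeout
    }

# glue = boto3.client("glue")
# glue.start_job_run(**flex_run_params("nightly-etl"))
```

Setting a generous timeout matters for Flex: as noted below, the longer the timeout value, the greater the chance that your job will be executed.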
Q: What types of data integration and ETL workloads are not appropriate for AWS Glue Flex flexible execution class?
AWS Glue Flex flexible execution class is not appropriate for time-sensitive workloads that require consistent job start and run times, or for jobs that must complete execution by a specific time. AWS Glue Flex is also not recommended for long-running data integration workloads because they are more likely to get interrupted, resulting in frequent cancellations.
Q: How often should I expect jobs running with AWS Glue Flex flexible execution class to be interrupted?
The availability and interruption frequency of AWS Glue Flex depend on several factors, including the Region, Availability Zone (AZ), time of day, and day of week. Resource availability determines whether Glue Flex jobs will start at all. While the interruption rate can be between 5% and 10% during peak hours, we expect the overall failure rate of Glue Flex jobs due to interruption to be under 5%.
Q: Is the flexible execution class always available?
Yes, you can always choose the flexible execution class to run your Glue jobs. However, the ability of AWS Glue to execute these jobs is based on the availability of non-dedicated AWS capacity and the number of workers selected for your job. It is possible that, during peak times, Glue may not have adequate capacity for your job. In that case, your job will not start. You can specify a timeout value after which Glue will cancel the job. The longer the timeout value, the greater the chance that your job will be executed.
Q: What happens if an AWS Glue Flex job is interrupted during execution?
If a Glue Flex job is interrupted because there are no longer sufficient workers to complete the job based on the number of workers specified, the job will fail. Glue will retry failed jobs up to the specified maximum number of retries on the job definition before cancelling the job. You should not use flexible execution class for any job that has a downstream dependency on other systems or processes.
Q: What types of AWS Glue jobs are supported by the flexible execution class?
The flexible execution class supports only Glue Spark jobs; Python shell and streaming jobs are not supported. AWS Glue Flex is supported on Glue version 3.0 and later.
AWS Glue for Ray (Preview)
Q: What is AWS Glue for Ray?
AWS Glue for Ray is an engine option that data engineers can use to process large datasets using Python and popular Python libraries. AWS Glue for Ray combines the AWS Glue serverless data integration service with Ray (ray.io), a popular new open-source framework that helps scale Python workloads. You pay only for the resources that you use while running code and don’t need to configure or tune any resources.
Q: Why should I use AWS Glue for Ray?
With AWS Glue for Ray, you use the same data processing tools that you currently use (for example, Python libraries for data cleansing, computation, and machine learning [ML]) on large datasets. You don’t need to switch to other big data frameworks or rewrite your code to work on large datasets. AWS Glue for Ray helps you run distributed Python scripts over multi-node clusters. It also simplifies the process of orchestrating large numbers of tasks that must be run in parallel.
Q. What is Ray?
Ray (ray.io) is an open-source distributed compute framework that scales Python applications from a laptop to a cluster consisting of hundreds of compute nodes. It provides simplified primitive types for building and running distributed applications. You can parallelize single-machine code with a few additional lines of code. You can also build complex applications using a straightforward programming model (Ray Core) and a collection of high-level libraries and tools.
Q: How do I start using AWS Glue for Ray?
You can create and run Ray jobs by using the existing AWS Glue jobs, command line interface (CLI), and APIs, and by selecting the Ray engine through notebooks (Amazon SageMaker or a local notebook) or AWS Glue Studio. When a Ray job is ready, you can run it manually or on a schedule.
Q: What infrastructure do I need to manage to support AWS Glue for Ray users?
AWS Glue for Ray is fully serverless, so there is no infrastructure to manage. However, administrators can manage how much infrastructure is provisioned for users by setting defaults and limits for the size of AWS Glue for Ray clusters on a per-account, per-user, and per-role basis. They can also set usage limits that will automatically initiate alerts and stop code from running when usage thresholds are exceeded.
AWS Product Integrations
Q: When should I use AWS Glue vs. AWS Data Pipeline?
AWS Glue provides a managed ETL service that runs on a serverless Apache Spark environment. This allows you to focus on your ETL job and not worry about configuring and managing the underlying compute resources. AWS Glue takes a data first approach and allows you to focus on the data properties and data manipulation to transform the data to a form where you can derive business insights. It provides an integrated data catalog that makes metadata available for ETL as well as querying via Amazon Athena and Amazon Redshift Spectrum.
AWS Data Pipeline provides a managed orchestration service that gives you greater flexibility in terms of the execution environment, access and control over the compute resources that run your code, as well as the code itself that does data processing. AWS Data Pipeline launches compute resources in your account allowing you direct access to the Amazon EC2 instances or Amazon EMR clusters.
Furthermore, AWS Glue ETL jobs are Scala or Python based. If your use case requires you to use an engine other than Apache Spark or if you want to run a heterogeneous set of jobs that run on a variety of engines like Hive, Pig, etc., then AWS Data Pipeline would be a better choice.
Q: When should I use AWS Glue vs. Amazon EMR?
AWS Glue works on top of the Apache Spark environment to provide a scale-out execution environment for your data transformation jobs. AWS Glue infers, evolves, and monitors your ETL jobs to greatly simplify the process of creating and maintaining jobs. Amazon EMR provides you with direct access to your Hadoop environment, affording you lower-level access and greater flexibility in using tools beyond Spark.
Q: When should I use AWS Glue vs AWS Database Migration Service?
AWS Database Migration Service (DMS) helps you migrate databases to AWS easily and securely. For use cases which require a database migration from on-premises to AWS or database replication between on-premises sources and sources on AWS, we recommend you use AWS DMS. Once your data is in AWS, you can use AWS Glue to move, combine, replicate, and transform data from your data source into another database or data warehouse, such as Amazon Redshift.
Q: When should I use AWS Glue vs AWS Batch?
AWS Batch enables you to easily and efficiently run any batch computing job on AWS regardless of the nature of the job. AWS Batch creates and manages the compute resources in your AWS account, giving you full control and visibility into the resources being used. AWS Glue is a fully-managed ETL service that provides a serverless Apache Spark environment to run your ETL jobs. For your ETL use cases, we recommend you explore using AWS Glue. For other batch-oriented use cases, including some ETL use cases, AWS Batch might be a better fit.
Pricing and billing
Q: How am I charged for AWS Glue?
You will pay a simple monthly fee, above the AWS Glue Data Catalog free tier, for storing and accessing the metadata in the AWS Glue Data Catalog. You will pay an hourly rate, billed per second, for the crawler run with a 10-minute minimum. If you choose to use a development endpoint to interactively develop your ETL code, you will pay an hourly rate, billed per second, for the time your development endpoint is provisioned, with a 10-minute minimum. Additionally, you will pay an hourly rate, billed per second, for the ETL job with either a 1-minute minimum or 10-minute minimum based on the Glue version you select. For more details, please refer to our pricing page.
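To illustrate how per-second billing with a minimum works, the sketch below computes an ETL job's cost from a hypothetical DPU-hour rate; the rate and the 1-minute minimum here are illustrative assumptions, so check the AWS Glue pricing page for the actual figures for your Region and Glue version:

```python
def job_cost(dpus, runtime_seconds, rate_per_dpu_hour=0.44, minimum_seconds=60):
    """Illustrative ETL job cost: per-second billing above a minimum duration.

    The $0.44/DPU-hour rate and 1-minute minimum are placeholder values.
    """
    billed_seconds = max(runtime_seconds, minimum_seconds)
    return dpus * (billed_seconds / 3600) * rate_per_dpu_hour

# A 10-minute job on 10 DPUs at the placeholder rate:
print(round(job_cost(10, 600), 4))  # 0.7333
```

Note that a 10-second job is billed the same as a 60-second job under a 1-minute minimum, which is why the `max()` is applied before the rate.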
Q: When does billing for my AWS Glue jobs begin and end?
Billing commences as soon as the job is scheduled for execution and continues until the entire job completes. With AWS Glue, you only pay for the time for which your job runs and not for the environment provisioning or shutdown time.
Security and availability
Q: How does AWS Glue keep my data secure?
We provide server-side encryption for data at rest and SSL for data in motion.
Q: What are the service limits associated with AWS Glue?
Please refer to our documentation to learn more about service limits.
Q: What regions is AWS Glue in?
Please refer to the AWS Region Table for details of AWS Glue service availability by region.
Q: How many DPUs (Data Processing Units) are allocated to the development endpoint?
A development endpoint is provisioned with 5 DPUs by default. You can configure a development endpoint with a minimum of 2 DPUs and a maximum of 5 DPUs.
Q: How do I scale the size and performance of my AWS Glue ETL jobs?
You can simply specify the number of DPUs (Data Processing Units) you want to allocate to your ETL job. A Glue ETL job requires a minimum of 2 DPUs. By default, AWS Glue allocates 10 DPUs to each ETL job.
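As a hedged sketch, the DPU allocation is one parameter on the job definition. The helper below only builds CreateJob-style parameters and enforces the 2-DPU minimum; the role ARN and S3 script path are placeholders:

```python
def etl_job_definition(name, script_location, max_capacity=10):
    """Parameters for a Spark ETL job with an explicit DPU allocation."""
    if max_capacity < 2:
        raise ValueError("A Glue ETL job requires a minimum of 2 DPUs")
    return {
        "Name": name,
        "Role": "arn:aws:iam::123456789012:role/GlueServiceRole",  # placeholder
        "Command": {"Name": "glueetl",
                    "ScriptLocation": script_location},
        "MaxCapacity": max_capacity,  # number of DPUs; Glue's default is 10
    }

job = etl_job_definition("sales-etl", "s3://my-bucket/scripts/sales.py")
print(job["MaxCapacity"])  # 10
```

Raising `MaxCapacity` scales out the underlying Spark environment; lowering it toward the 2-DPU minimum reduces cost for small jobs.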
Q: How do I monitor the execution of my AWS Glue jobs?
AWS Glue provides the status of each job and pushes all notifications to Amazon CloudWatch. You can set up SNS notifications via CloudWatch actions to be informed of job failures or completions.
Service Level Agreement
Q: What does the AWS Glue SLA guarantee?
Our AWS Glue SLA guarantees a Monthly Uptime Percentage of at least 99.9% for AWS Glue.
Q: How do I know if I qualify for an SLA Service Credit?
You are eligible for an SLA credit for AWS Glue under the AWS Glue SLA if more than one Availability Zone in which you are running a task within the same Region has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle.
For full details on all of the terms and conditions of the SLA, as well as details on how to submit a claim, please see the AWS Glue SLA details page.
Explore pricing options for AWS Glue.
Instantly get access to the AWS Free Tier.
Get started building with AWS Glue on the AWS Management Console.