General

Q: What is AWS Glue?

AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all of the capabilities needed for data integration, so you can start analyzing your data and putting it to use in minutes instead of months. AWS Glue provides both visual and code-based interfaces to make data integration easier. Users can easily find and access data using the AWS Glue Data Catalog. Data engineers and ETL (extract, transform, and load) developers can visually create, run, and monitor ETL workflows with a few clicks in AWS Glue Studio. Data analysts and data scientists can use AWS Glue DataBrew to visually enrich, clean, and normalize data without writing code. With AWS Glue Elastic Views, application developers can use familiar Structured Query Language (SQL) to combine and replicate data across different data stores.

Q: How do I get started with AWS Glue?

To start using AWS Glue, sign in to the AWS Management Console and navigate to “Glue” under the “Analytics” category. You can follow one of our guided tutorials that walks you through an example use case for AWS Glue. You can also find sample ETL code in our GitHub repository under AWS Labs. To register for the AWS Glue Elastic Views preview, see the AWS Glue Elastic Views page.

Q: What are the main components of AWS Glue?

AWS Glue consists of a Data Catalog, which is a central metadata repository; an ETL engine that can automatically generate Scala or Python code; a flexible scheduler that handles dependency resolution, job monitoring, and retries; AWS Glue DataBrew for cleaning and normalizing data with a visual interface; and AWS Glue Elastic Views for combining and replicating data across multiple data stores. Together, these automate much of the undifferentiated heavy lifting involved with discovering, categorizing, cleaning, enriching, and moving data, so you can spend more time analyzing your data.

Q: When should I use AWS Glue?

You should use AWS Glue to discover properties of the data you own, transform it, and prepare it for analytics. Glue can automatically discover both structured and semi-structured data stored in your data lake on Amazon S3, data warehouse in Amazon Redshift, and various databases running on AWS. It provides a unified view of your data via the Glue Data Catalog that is available for ETL, querying and reporting using services like Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. Glue automatically generates Scala or Python code for your ETL jobs that you can further customize using tools you are already familiar with. You can use AWS Glue DataBrew to visually clean up and normalize data without writing code. You should use AWS Glue Elastic Views to combine and continuously replicate data across multiple data stores in near-real time. AWS Glue is serverless, so there are no compute resources to configure and manage.

Q: What data sources does AWS Glue support?

AWS Glue natively supports data stored in Amazon Aurora, Amazon RDS for MySQL, Amazon RDS for Oracle, Amazon RDS for PostgreSQL, Amazon RDS for SQL Server, Amazon Redshift, Amazon DynamoDB, and Amazon S3, as well as MySQL, Oracle, Microsoft SQL Server, and PostgreSQL databases in your Virtual Private Cloud (Amazon VPC) running on Amazon EC2. AWS Glue also supports data streams from Amazon MSK, Amazon Kinesis Data Streams, and Apache Kafka.

You can also write custom Scala or Python code and import custom libraries and JAR files into your AWS Glue ETL jobs to access data sources not natively supported by AWS Glue. For more details on importing custom libraries, refer to our documentation.

The AWS Glue Elastic Views preview currently supports Amazon DynamoDB as a source, with support for Amazon Aurora and Amazon RDS to follow. Currently supported targets are Amazon Redshift, Amazon S3, and Amazon Elasticsearch Service, with support for Amazon Aurora, Amazon RDS, and Amazon DynamoDB to follow.

Q: How does AWS Glue relate to AWS Lake Formation?

Lake Formation shares infrastructure with AWS Glue, including console controls, ETL code creation and job monitoring, a common Data Catalog, and a serverless architecture. While AWS Glue remains focused on these types of functions, Lake Formation encompasses AWS Glue features and adds capabilities designed to help build, secure, and manage a data lake. See the AWS Lake Formation pages for more details.

AWS Glue Data Catalog

Q: What is the AWS Glue Data Catalog?

The AWS Glue Data Catalog is a central repository that stores structural and operational metadata for all your data assets. For a given data set, you can store its table definition and physical location, add business-relevant attributes, and track how the data has changed over time.

The AWS Glue Data Catalog is Apache Hive Metastore compatible and is a drop-in replacement for the Apache Hive Metastore for big data applications running on Amazon EMR. For more information on setting up your EMR cluster to use the AWS Glue Data Catalog as an Apache Hive Metastore, see our documentation.

The AWS Glue Data Catalog also provides out-of-box integration with Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. Once you add your table definitions to the Glue Data Catalog, they are available for ETL and also readily available for querying in Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum so that you can have a common view of your data between these services.

Q: How do I get my metadata into the AWS Glue Data Catalog?

AWS Glue provides a number of ways to populate metadata into the AWS Glue Data Catalog. Glue crawlers scan various data stores you own to automatically infer schemas and partition structure and populate the Glue Data Catalog with corresponding table definitions and statistics. You can also schedule crawlers to run periodically so that your metadata stays up to date and in sync with the underlying data. Alternatively, you can add and update table details manually by using the AWS Glue Console or by calling the API. You can also run Hive DDL statements via the Amazon Athena Console or a Hive client on an Amazon EMR cluster. Finally, if you already have a persistent Apache Hive Metastore, you can perform a bulk import of that metadata into the AWS Glue Data Catalog by using our import script.
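
For illustration, here is a minimal sketch using the AWS SDK for Python (boto3) that creates and starts a crawler; the crawler name, IAM role, database, S3 path, and schedule are placeholder values:

    import boto3

    glue = boto3.client("glue")

    # Create a crawler that scans a hypothetical S3 location and writes the
    # inferred table definitions into the "sales_db" catalog database.
    glue.create_crawler(
        Name="sales-data-crawler",  # example name
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # example role
        DatabaseName="sales_db",
        Targets={"S3Targets": [{"Path": "s3://example-bucket/sales/"}]},
        Schedule="cron(0 2 * * ? *)",  # run daily at 02:00 UTC
    )

    # Start it immediately rather than waiting for the schedule.
    glue.start_crawler(Name="sales-data-crawler")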

Q: What are AWS Glue crawlers?

An AWS Glue crawler connects to a data store, progresses through a prioritized list of classifiers to extract the schema of your data and other statistics, and then populates the Glue Data Catalog with this metadata. Crawlers can run periodically to detect the availability of new data as well as changes to existing data, including table definition changes. Crawlers automatically add new tables, new partitions to existing tables, and new versions of table definitions. You can customize Glue crawlers to classify your own file types.

Q: How do I import data from my existing Apache Hive Metastore to the AWS Glue Data Catalog?

You simply run an ETL job that reads from your Apache Hive Metastore, exports the data to an intermediate format in Amazon S3, and then imports that data into the AWS Glue Data Catalog.

Q: Do I need to maintain my Apache Hive Metastore if I am storing my metadata in the AWS Glue Data Catalog?

No. The AWS Glue Data Catalog is Apache Hive Metastore compatible. You can point to the Glue Data Catalog endpoint and use it as an Apache Hive Metastore replacement. For more information on how to configure your cluster to use the AWS Glue Data Catalog as an Apache Hive Metastore, please read our documentation.
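
As a rough sketch of that configuration using boto3 (the cluster name, release label, instance settings, and roles are placeholders), the hive-site classification below is what points Hive at the Glue Data Catalog:

    import boto3

    emr = boto3.client("emr")

    # Launch a small EMR cluster whose Hive metastore is the Glue Data Catalog.
    emr.run_job_flow(
        Name="hive-on-glue-catalog",  # example cluster name
        ReleaseLabel="emr-6.2.0",     # example release
        Applications=[{"Name": "Hive"}],
        Configurations=[{
            "Classification": "hive-site",
            "Properties": {
                "hive.metastore.client.factory.class":
                    "com.amazonaws.glue.catalog.metastore."
                    "AWSGlueDataCatalogHiveClientFactory"
            },
        }],
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )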

Q: If I am already using Amazon Athena or Amazon Redshift Spectrum and have tables in Amazon Athena’s internal data catalog, how can I start using the AWS Glue Data Catalog as my common metadata repository?

Before you can start using AWS Glue Data Catalog as a common metadata repository between Amazon Athena, Amazon Redshift Spectrum, and AWS Glue, you must upgrade your Amazon Athena data catalog to AWS Glue Data Catalog. The steps required for the upgrade are detailed here.

Q: What analytics services use the AWS Glue Data Catalog?

The metadata stored in the AWS Glue Data Catalog can be readily accessed from Glue ETL, Amazon Athena, Amazon EMR, Amazon Redshift Spectrum, and third-party services.

AWS Glue Schema Registry

Q: What is the AWS Glue Schema Registry?

AWS Glue Schema Registry, a serverless feature of AWS Glue, enables you to validate and control the evolution of streaming data using registered Apache Avro schemas, at no additional charge. Through Apache-licensed serializers and deserializers, the Schema Registry integrates with Java applications developed for Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), Amazon Kinesis Data Streams, Apache Flink, Amazon Kinesis Data Analytics for Apache Flink, and AWS Lambda. When data streaming applications are integrated with the Schema Registry, you can improve data quality and safeguard against unexpected changes using compatibility checks that govern schema evolution. Additionally, you can create or update AWS Glue tables and partitions using schemas stored within the registry.

Q: Why should I use AWS Glue Schema Registry?

With the AWS Glue Schema Registry, you can:

  1. Validate schemas. When data streaming applications are integrated with AWS Glue Schema Registry, schemas used for data production are validated against schemas within a central registry, allowing you to centrally control data quality.
  2. Safeguard schema evolution. You can set rules on how schemas can and cannot evolve using one of eight compatibility modes.
  3. Improve data quality. Serializers validate schemas used by data producers against those stored in the registry, improving data quality when it originates and reducing downstream issues from unexpected schema drift.
  4. Save costs. Serializers convert data into a binary format and can compress it before it is delivered, reducing data transfer and storage costs.
  5. Improve processing efficiency. In many cases, a data stream contains records of different schemas. The Schema Registry enables applications that read from data streams to selectively process each record based on the schema without having to parse its contents, which increases processing efficiency.

Q: What data format, client language, and integrations are supported by AWS Glue Schema Registry?

The Schema Registry supports Apache Avro data schemas and Java client applications, and we plan to expand support to non-Avro and non-Java clients. The Schema Registry integrates with applications developed for Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), Amazon Kinesis Data Streams, Apache Flink, Amazon Kinesis Data Analytics for Apache Flink, and AWS Lambda.

Q: What kinds of evolution rules does AWS Glue Schema Registry support?

The following compatibility modes are available for you to manage your schema evolution: Backward, Backward All, Forward, Forward All, Full, Full All, None, and Disabled. Visit the Schema Registry user documentation to learn more about compatibility rules.
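
To illustrate, the following boto3 sketch registers an Avro schema under the Backward compatibility mode; the registry and schema names are example values:

    import boto3

    glue = boto3.client("glue")

    # Create a registry, then register an Avro schema in it.
    glue.create_registry(RegistryName="orders-registry")  # example name

    avro_schema = """{
      "type": "record", "name": "Order", "namespace": "example",
      "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount",   "type": "double"}
      ]
    }"""

    glue.create_schema(
        RegistryId={"RegistryName": "orders-registry"},
        SchemaName="order-event",   # example schema name
        DataFormat="AVRO",
        Compatibility="BACKWARD",   # one of the eight modes listed above
        SchemaDefinition=avro_schema,
    )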

Q: How does AWS Glue Schema Registry maintain high availability for my applications?

The Schema Registry storage and control plane are designed for high availability and are backed by the AWS Glue SLA, and the serializers and deserializers use best-practice caching techniques to maximize schema availability within clients.

Q: Is AWS Glue Schema Registry open-source?

AWS Glue Schema Registry storage is an AWS service, while the serializers and deserializers are Apache-licensed open-source components.

Q: Does AWS Glue Schema Registry provide encryption at rest and in-transit?

Yes. Your clients communicate with the Schema Registry via API calls that encrypt data in transit using TLS over HTTPS. Schemas stored in the Schema Registry are always encrypted at rest using a service-managed AWS KMS key.

Q: How can I privately connect to AWS Glue Schema Registry?

You can use AWS PrivateLink to connect your data producer’s VPC to AWS Glue by defining an interface VPC endpoint for AWS Glue. When you use a VPC interface endpoint, communication between your VPC and AWS Glue is conducted entirely within the AWS network. For more information, please visit the user documentation.
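
A minimal boto3 sketch of creating such an interface endpoint, with placeholder VPC, subnet, and security group IDs:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create an interface endpoint so AWS Glue API calls (including Schema
    # Registry calls) from this VPC stay on the AWS network.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",                 # placeholder IDs
        ServiceName="com.amazonaws.us-east-1.glue",
        SubnetIds=["subnet-0abc1234"],
        SecurityGroupIds=["sg-0abc1234"],
        PrivateDnsEnabled=True,
    )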

Q: How can I monitor my AWS Glue Schema Registry usage?

Amazon CloudWatch metrics for the Schema Registry are available as part of CloudWatch’s free tier. You can access these metrics in the CloudWatch Console. Visit the AWS Glue Schema Registry user documentation for more information.

Q: Does AWS Glue Schema Registry provide tools to manage user authorization?

Yes, the Schema Registry supports both resource-level permissions and identity-based IAM policies.

Q: How do I migrate from an existing schema registry to the AWS Glue Schema Registry?

Steps to migrate from a third-party schema registry to AWS Glue Schema Registry are available in the user documentation.

Extract, transform, and load (ETL)

Q: Does AWS Glue have a no-code interface for visual ETL?

Yes. AWS Glue Studio offers a graphical interface for authoring Glue jobs to process your data. After you define the flow of your data sources, transformations, and targets in the visual interface, AWS Glue Studio generates Apache Spark code on your behalf.

Q: What programming language can I use to write my ETL code for AWS Glue?

You can use either Scala or Python.

Q: How can I customize the ETL code generated by AWS Glue?

AWS Glue’s ETL script recommendation system generates Scala or Python code. It leverages Glue’s custom ETL library to simplify access to data sources and manage job execution. You can find more details about the library in our documentation. You can write ETL code using AWS Glue’s custom library, or write arbitrary Scala or Python code by editing inline via the AWS Glue Console script editor, or by downloading the auto-generated code and editing it in your own IDE. You can also start with one of the many samples hosted in our GitHub repository and customize that code.
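
To give a sense of the shape of this code, below is a minimal PySpark sketch in the style of a Glue-generated job; the database, table, mappings, and output path are assumptions for the example:

    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glueContext = GlueContext(SparkContext.getOrCreate())
    job = Job(glueContext)
    job.init(args["JOB_NAME"], args)

    # Read a table that a crawler registered in the Glue Data Catalog.
    source = glueContext.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="raw_orders")  # example names

    # Rename and retype columns; steps like this are what you customize.
    mapped = ApplyMapping.apply(frame=source, mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount",   "string", "amount",   "double"),
    ])

    # Write the result to S3 as Parquet.
    glueContext.write_dynamic_frame.from_options(
        frame=mapped, connection_type="s3",
        connection_options={"path": "s3://example-bucket/clean/orders/"},
        format="parquet")

    job.commit()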

Q: Can I import custom libraries as part of my ETL script?

Yes. You can import custom Python libraries and JAR files into your AWS Glue ETL job. For more details, please check our documentation.

Q: Can I bring my own code?

Yes. You can write your own code using AWS Glue’s ETL library, or write your own Scala or Python code and upload it to a Glue ETL job. For more details, please check our documentation.
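
For example, here is a boto3 sketch that registers a job pointing at your own script in Amazon S3 and attaches a custom library via the --extra-py-files argument; the job name, role, and S3 paths are placeholders:

    import boto3

    glue = boto3.client("glue")

    glue.create_job(
        Name="orders-etl",  # example job name
        Role="arn:aws:iam::123456789012:role/GlueJobRole",  # example role
        Command={
            "Name": "glueetl",
            "ScriptLocation": "s3://example-bucket/scripts/orders_etl.py",
            "PythonVersion": "3",
        },
        DefaultArguments={
            # Ship a custom Python library alongside the job.
            "--extra-py-files": "s3://example-bucket/libs/mylib.zip",
        },
        GlueVersion="2.0",
    )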

Q: How can I develop my ETL code using my own IDE?

You can create development endpoints and connect your notebooks and IDEs to them.

Q: How can I build an end-to-end ETL workflow using multiple jobs in AWS Glue?

In addition to the ETL library and code generation, AWS Glue provides a robust set of orchestration features that allow you to manage dependencies between multiple jobs to build end-to-end ETL workflows. AWS Glue ETL jobs can either be triggered on a schedule or on a job completion event. Multiple jobs can be triggered in parallel or sequentially by triggering them on a job completion event. You can also trigger one or more Glue jobs from an external source such as an AWS Lambda function.

Q: How does AWS Glue monitor dependencies?

AWS Glue manages dependencies between two or more jobs, or dependencies on external events, using triggers. Triggers can watch one or more jobs as well as invoke one or more jobs. You can have a scheduled trigger that invokes jobs periodically, an on-demand trigger, or a job completion trigger.
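
As an illustration, this boto3 sketch creates a conditional trigger that starts one job when another succeeds; the job and trigger names are placeholders:

    import boto3

    glue = boto3.client("glue")

    # Run "load-job" only after "transform-job" completes successfully.
    glue.create_trigger(
        Name="run-load-after-transform",  # example trigger name
        Type="CONDITIONAL",
        Predicate={"Conditions": [{
            "LogicalOperator": "EQUALS",
            "JobName": "transform-job",
            "State": "SUCCEEDED",
        }]},
        Actions=[{"JobName": "load-job"}],
        StartOnCreation=True,
    )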

Q: How does AWS Glue handle ETL errors?

AWS Glue monitors job event metrics and errors, and pushes all notifications to Amazon CloudWatch. With Amazon CloudWatch, you can configure a host of actions that can be triggered based on specific notifications from AWS Glue. For example, if you get an error or a success notification from Glue, you can trigger an AWS Lambda function. Glue also provides default retry behavior that will retry all failures three times before sending out an error notification.
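
One common pattern, sketched below with boto3, is an EventBridge (CloudWatch Events) rule that routes failed Glue job events to a hypothetical Lambda function:

    import json
    import boto3

    events = boto3.client("events")

    # Match Glue job state-change events that report a failure or timeout.
    events.put_rule(
        Name="glue-job-failed",  # example rule name
        EventPattern=json.dumps({
            "source": ["aws.glue"],
            "detail-type": ["Glue Job State Change"],
            "detail": {"state": ["FAILED", "TIMEOUT"]},
        }),
    )

    # Route matching events to a placeholder Lambda function. The function
    # also needs a resource policy allowing events.amazonaws.com to invoke it.
    events.put_targets(
        Rule="glue-job-failed",
        Targets=[{
            "Id": "notify",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:on-glue-failure",
        }],
    )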

Q: Can I run my existing ETL jobs with AWS Glue?

Yes. You can run your existing Scala or Python code on AWS Glue. Simply upload the code to Amazon S3 and create one or more jobs that use that code. You can reuse the same code across multiple jobs by pointing them to the same code location on Amazon S3.

Q: How can I use AWS Glue to ETL streaming data?

AWS Glue supports ETL on streams from Amazon Kinesis Data Streams, Apache Kafka, and Amazon MSK. Add the stream to the Glue Data Catalog and then choose it as the data source when setting up your AWS Glue job.
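
A minimal sketch of a streaming job body, assuming a Kinesis-backed catalog table and placeholder S3 paths (the window size and starting position are example settings):

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glueContext = GlueContext(SparkContext.getOrCreate())

    # Read the stream through its Glue Data Catalog table.
    stream_df = glueContext.create_data_frame.from_catalog(
        database="streams_db", table_name="clickstream",  # example names
        additional_options={"startingPosition": "TRIM_HORIZON"})

    def process_batch(data_frame, batch_id):
        # Transform each micro-batch here, then append it to the data lake.
        data_frame.write.mode("append").parquet(
            "s3://example-bucket/clickstream/")

    glueContext.forEachBatch(
        frame=stream_df,
        batch_function=process_batch,
        options={"windowSize": "100 seconds",
                 "checkpointLocation": "s3://example-bucket/checkpoints/"})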

Q: Do I have to use both AWS Glue Data Catalog and Glue ETL to use the service?

No. While we do believe that using both the AWS Glue Data Catalog and ETL provides an end-to-end ETL experience, you can use either one of them independently without using the other.

Q: When should I use AWS Glue Streaming and when should I use Amazon Kinesis Data Analytics?

Both AWS Glue and Amazon Kinesis Data Analytics can be used to process streaming data. AWS Glue is recommended when your use cases are primarily ETL and when you want to run jobs on a serverless Apache Spark-based platform. Amazon Kinesis Data Analytics is recommended when your use cases are primarily analytics and when you want to run jobs on a serverless Apache Flink-based platform.

Streaming ETL in AWS Glue enables advanced ETL on streaming data using the same serverless, pay-as-you-go platform that you currently use for your batch jobs. AWS Glue generates customizable ETL code to prepare your data while in flight and has built-in functionality to process streaming data that is semi-structured or has an evolving schema. Use Glue to apply both its built-in and Spark-native transforms to data streams and load them into your data lake or data warehouse.

Amazon Kinesis Data Analytics enables you to build sophisticated streaming applications to analyze streaming data in real time. It provides a serverless Apache Flink runtime that automatically scales without servers and durably saves application state. Use Amazon Kinesis Data Analytics for real-time analytics and more general stream data processing.

Q: When should I use AWS Glue and when should I use Amazon Kinesis Data Firehose?

Both AWS Glue and Amazon Kinesis Data Firehose can be used for streaming ETL. AWS Glue is recommended for complex ETL, including joining streams, and partitioning the output in Amazon S3 based on the data content. Amazon Kinesis Data Firehose is recommended when your use cases focus on data delivery and preparing data to be processed after it is delivered.

Streaming ETL in AWS Glue enables advanced ETL on streaming data using the same serverless, pay-as-you-go platform that you currently use for your batch jobs. AWS Glue generates customizable ETL code to prepare your data while in flight and has built-in functionality to process streaming data that is semi-structured or has an evolving schema. Use Glue to apply complex transforms to data streams, enrich records with information from other streams and persistent data stores, and then load records into your data lake or data warehouse.

Streaming ETL in Amazon Kinesis Data Firehose enables you to easily capture, transform, and deliver streaming data. Amazon Kinesis Data Firehose provides ETL capabilities including serverless data transformation through AWS Lambda and format conversion from JSON to Parquet. It includes ETL capabilities that are designed to make data easier to process after delivery, but does not include the advanced ETL capabilities that AWS Glue supports.

Deduplicate data

Q: What kind of problems does the FindMatches ML Transform solve?

FindMatches generally solves record linkage and data deduplication problems. Deduplication is what you have to do when you are trying to identify records in a database that are conceptually “the same” but for which you have separate records. This problem is trivial if duplicate records can be identified by a unique key (for instance, if products can be uniquely identified by a UPC code), but becomes very challenging when you have to do a “fuzzy match”.

Record linkage is basically the same problem as data deduplication under the hood, but this term usually means that you are doing a “fuzzy join” of two databases that do not share a unique key rather than deduplicating a single database. As an example, consider the problem of matching a large database of customers to a small database of known fraudsters. FindMatches can be used on both record linkage and deduplication problems.

For instance, AWS Glue's FindMatches ML Transform can help you with the following problems:

Linking patient records between hospitals, so that doctors have more background information and are better able to treat patients, by using FindMatches on separate databases that both contain common fields such as name, birthday, home address, and phone number.

Deduplicating a database of movies containing columns like “title”, “plot synopsis”, “year of release”, “run time”, and “cast”. For instance, the same movie might be variously identified as “Star Wars”, “Star Wars: A New Hope”, and “Star Wars: Episode IV—A New Hope (Special Edition)”.

Grouping all related products together in your storefront by identifying equivalent items in an apparel product catalog, where you want “equivalent” to mean that items are the same ignoring differences in size and color. Hence “Levi 501 Blue Jeans, size 34x34” is defined to be the same as “Levi 501 Jeans, black, size 32x31”.

Q: How does AWS Glue deduplicate my data?

AWS Glue's FindMatches ML Transform makes it easy to find and link records that refer to the same entity but don’t share a reliable identifier. Before FindMatches, developers would commonly solve data-matching problems deterministically, by writing huge numbers of hand-tuned rules. FindMatches uses machine learning algorithms behind the scenes to learn how to match records according to each developer's own business criteria. FindMatches first identifies records for the customer to label as matching or not matching, and then uses machine learning to create an ML Transform. Customers can then execute this Transform on their database to find matching records, or they can ask FindMatches to give them additional records to label to push their ML Transform to higher levels of accuracy.

Q: What are ML Transforms?

ML Transforms provide a way to create and manage machine-learned transforms. Once created and trained, these ML Transforms can be executed in standard AWS Glue scripts. Customers select a particular algorithm (for example, the FindMatches ML Transform) and provide input datasets, training examples, and the tuning parameters needed by that algorithm. AWS Glue uses those inputs to build an ML Transform that can be incorporated into a normal ETL job workflow.
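
For example, a boto3 sketch that creates a FindMatches transform over a cataloged table; the table, role, primary key column, and tuning value are assumptions for illustration:

    import boto3

    glue = boto3.client("glue")

    glue.create_ml_transform(
        Name="dedupe-customers",  # example transform name
        Role="arn:aws:iam::123456789012:role/GlueMLRole",  # example role
        InputRecordTables=[{
            "DatabaseName": "crm_db",
            "TableName": "customers",
        }],
        Parameters={
            "TransformType": "FIND_MATCHES",
            "FindMatchesParameters": {
                "PrimaryKeyColumnName": "customer_id",
                "PrecisionRecallTradeoff": 0.9,  # lean toward precision
            },
        },
        GlueVersion="1.0",
        MaxCapacity=10.0,
    )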

Q: How do ML Transforms work?

AWS Glue includes specialized ML-based dataset transformation algorithms customers can use to create their own ML Transforms. These include record de-duplication and match finding.

Customers start by navigating to the ML Transforms tab in the console (or by using the ML Transforms service endpoints or the CLI) to create their first ML Transform model. The ML Transforms tab provides a user-friendly view for managing user transforms. ML Transforms have distinct workflow requirements from other transforms, including the need for separate training, parameter tuning, and execution workflows; the need to estimate the quality metrics of generated transformations; and the need to manage and collect additional truth labels for training and active learning.

To create an ML transform via the console, customers first select the transform type (such as Record Deduplication or Record Matching) and provide the appropriate data sources previously discovered in Data Catalog. Depending on the transform, customers may then be asked to provide ground truth label data for training or additional parameters. Customers can monitor the status of their training jobs and view quality metrics for each transform. (Quality metrics are reported using a hold-out set of the customer-provided label data.)

Once satisfied with the performance, customers can promote ML Transforms models for use in production. ML Transforms can then be used during ETL workflows, both in code autogenerated by the service and in user-defined scripts submitted with other jobs, similar to pre-built transforms offered in other AWS Glue libraries.

Q: Can I see a presentation on using AWS Glue (and AWS Lake Formation) to find matches and deduplicate records?

Yes. The full recording of the AWS Online Tech Talk, "Fuzzy Matching and Deduplicating Data with ML Transforms for AWS Lake Formation," is available here.

AWS Glue DataBrew

Q: What is AWS Glue DataBrew?

AWS Glue DataBrew is a visual data preparation tool that makes it easy for data analysts and data scientists to prepare data with an interactive, point-and-click visual interface, without writing code. With Glue DataBrew, you can easily visualize, clean, and normalize terabytes, and even petabytes, of data directly from your data lake, data warehouses, and databases, including Amazon S3, Amazon Redshift, Amazon Aurora, and Amazon RDS. AWS Glue DataBrew is generally available today in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo).

Q: Who can use AWS Glue DataBrew?

AWS Glue DataBrew is built for users who need to clean and normalize data for analytics and machine learning. Data analysts and data scientists are the primary users. For data analysts, examples of job functions are business intelligence analysts, operations analysts, market intelligence analysts, legal analysts, financial analysts, economists, quants, or accountants. For data scientists, examples of job functions are materials scientists, bioanalytical scientists, and scientific researchers.

Q: What types of transformations are supported in AWS Glue DataBrew?

You can choose from over 250 built-in transformations to combine, pivot, and transpose data without writing code. AWS Glue DataBrew also automatically recommends transformations such as filtering anomalies; correcting invalid, incorrectly classified, or duplicate data; normalizing data to standard date and time values; or generating aggregates for analyses. For complex transformations, such as converting words to a common base or root word, Glue DataBrew provides transformations that use advanced machine learning techniques such as Natural Language Processing (NLP). You can group multiple transformations together, save them as recipes, and apply the recipes directly to new incoming data.
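
Recipes can also be created programmatically. Below is a boto3 sketch of a two-step recipe; the recipe name and columns are placeholders, and the exact operation parameters should be checked against the DataBrew recipe action reference:

    import boto3

    databrew = boto3.client("databrew")

    # Rename a column, then upper-case its values.
    databrew.create_recipe(
        Name="clean-customer-names",  # example recipe name
        Steps=[
            {"Action": {"Operation": "RENAME",
                        "Parameters": {"sourceColumn": "cust_nm",
                                       "targetColumn": "customer_name"}}},
            {"Action": {"Operation": "UPPER_CASE",
                        "Parameters": {"sourceColumn": "customer_name"}}},
        ],
    )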

Q: What file formats does AWS Glue DataBrew support?

For input data, AWS Glue DataBrew supports commonly used file formats, such as comma-separated values (.csv), JSON and nested JSON, Apache Parquet and nested Apache Parquet, and Excel sheets. For output data, AWS Glue DataBrew supports comma-separated values (.csv), JSON, Apache Parquet, Apache Avro, Apache ORC and XML.

Q: Can I try AWS Glue DataBrew for free?

Yes. Sign up for an AWS Free Tier account, then visit the AWS Glue DataBrew Management Console, and get started instantly for free. If you are a first-time user of Glue DataBrew, the first 40 interactive sessions are free. Visit the AWS Glue Pricing page to learn more.

Q: Do I need to use AWS Glue Data Catalog or AWS Lake Formation to use AWS Glue DataBrew?

No. You can use AWS Glue DataBrew without using either AWS Glue Data Catalog or AWS Lake Formation. If you use Glue Data Catalog to store schema and metadata, Glue DataBrew automatically infers schema from the Glue Data Catalog. If your data is centralized and secured in AWS Lake Formation, DataBrew users can use all data sets available to them from its centralized data catalog.

Q: Can I retain a record of all changes made to my data?

Yes. You can visually track all the changes made to your data in the AWS Glue DataBrew Management Console. The visual view makes it easy to trace the changes and relationships among datasets, projects, recipes, and all other associated jobs. In addition, Glue DataBrew logs all account activities in AWS CloudTrail.

AWS Glue Elastic Views (Preview)

Q: What is AWS Glue Elastic Views?

AWS Glue Elastic Views makes it easy to build materialized views that combine and replicate data across multiple data stores without you having to write custom code. With AWS Glue Elastic Views, you can use familiar Structured Query Language (SQL) to quickly create a virtual table—a materialized view—from multiple different source data stores. AWS Glue Elastic Views copies data from each source data store and creates a replica in a target data store. AWS Glue Elastic Views continuously monitors for changes to data in your source data stores, and provides updates to the materialized views in your target data stores automatically, ensuring data accessed through the materialized view is always up-to-date. AWS Glue Elastic Views supports many AWS databases and data stores, including Amazon DynamoDB, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, with support for Amazon RDS, Amazon Aurora, and others to follow. AWS Glue Elastic Views is serverless and scales capacity up or down automatically based on demand, so there’s no infrastructure to manage. AWS Glue Elastic Views is available in preview today.

Q: Why should I use AWS Glue Elastic Views?

You should use AWS Glue Elastic Views to combine and continuously replicate data across multiple data stores in near-real time. This frequently applies when building new application functionality where the application needs to access data from one or more existing data stores. For example, an organization might use a customer relationship management (CRM) application to track their customer contacts and an e-commerce website for online sales. These applications would use one or more data stores to store information. Now, the company builds a new custom application that creates and displays special offers to active website visitors. To do so, this application combines customer information from the CRM application with the web clickstream data from the e-commerce application. With AWS Glue Elastic Views, a developer can build the new functionality in three steps. First, they connect the CRM and e-commerce application data stores with AWS Glue Elastic Views. Next, they use SQL to choose the right data from the CRM and e-commerce data stores. Finally, they connect the custom application’s data store to store the results.

Q: How does AWS Glue Elastic Views work with other AWS services?

AWS Glue Elastic Views lets you connect to multiple data store sources in AWS and create views over these sources using familiar SQL. You can materialize these views into target data stores. As an example, you can create views that access restaurant information in Amazon Aurora and customer reviews in Amazon DynamoDB and materialize those views to Amazon Redshift. You can then build an application combining food preferences and popular restaurants on top of Amazon Redshift. Also, because AWS Glue Elastic Views sources are separate from targets, if you have read-heavy applications, you can offload read requests to an AWS Glue Elastic Views target that maintains a consistent copy of the source. You can visualize the data in AWS Glue Elastic Views target data stores using services like Amazon QuickSight or partner visualization tools like Tableau.

Q: Can I use AWS Glue Elastic Views for both operational and analytical workloads?

Yes. With AWS Glue Elastic Views, you can replicate data from one data store to another in near-real time. This enables high performance operational applications that need access to up-to-date data from multiple data stores. AWS Glue Elastic Views also enables you to integrate your operational and analytical systems without having to build and maintain complex data integration pipelines. Using AWS Glue Elastic Views, you can create database views over data in your operational databases and materialize those views in your data warehouse or data lake. AWS Glue Elastic Views keeps track of changes in your operational databases and ensures that data in your data warehouse and data lake is kept in sync. You can now run analytical queries on your most recent operational data.

Q: Which sources and targets does AWS Glue Elastic Views support today?

Currently supported sources for the preview include Amazon DynamoDB, with support for Amazon Aurora MySQL, Amazon Aurora PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for PostgreSQL to follow. Currently supported targets are Amazon Redshift, Amazon S3, and Amazon Elasticsearch Service, with support for Amazon Aurora MySQL, Amazon Aurora PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for PostgreSQL to follow.

Q: How does AWS Glue Elastic Views relate to a data lake?

A data lake is a scalable centralized repository in Amazon S3 that is optimized to make data from many diverse data stores accessible in one place to support analytical applications and queries. A data lake enables analytics and machine learning across all of your organization’s data for improved business insights and decision making. AWS Glue Elastic Views, on the other hand, is a service that enables you to combine and replicate data across multiple databases and your Amazon S3 data lake. If you are building application functionality that needs to access specific data from one or more existing data stores in near-real time, AWS Glue Elastic Views enables you to replicate data from multiple data stores and keep the data up-to-date. You can also use AWS Glue Elastic Views to load data from operational databases into a data lake by creating views over your operational databases and materializing them into your data lake.

AWS Product Integrations

Q: When should I use AWS Glue vs. AWS Data Pipeline?

AWS Glue provides a managed ETL service that runs on a serverless Apache Spark environment. This allows you to focus on your ETL job and not worry about configuring and managing the underlying compute resources. AWS Glue takes a data-first approach and allows you to focus on data properties and data manipulation to transform the data into a form where you can derive business insights. It provides an integrated Data Catalog that makes metadata available for ETL as well as for querying via Amazon Athena and Amazon Redshift Spectrum.

AWS Data Pipeline provides a managed orchestration service that gives you greater flexibility in terms of the execution environment, access and control over the compute resources that run your code, as well as the code itself that does data processing. AWS Data Pipeline launches compute resources in your account allowing you direct access to the Amazon EC2 instances or Amazon EMR clusters.

Furthermore, AWS Glue ETL jobs are Scala- or Python-based. If your use case requires you to use an engine other than Apache Spark, or if you want to run a heterogeneous set of jobs that run on a variety of engines like Hive, Pig, etc., then AWS Data Pipeline would be a better choice.

Q: When should I use AWS Glue vs. Amazon EMR?

AWS Glue works on top of the Apache Spark environment to provide a scale-out execution environment for your data transformation jobs. AWS Glue infers, evolves, and monitors your ETL jobs to greatly simplify the process of creating and maintaining jobs. Amazon EMR provides you with direct access to your Hadoop environment, affording you lower-level access and greater flexibility in using tools beyond Spark.

Q: When should I use AWS Glue vs. AWS Database Migration Service?

AWS Database Migration Service (DMS) helps you migrate databases to AWS easily and securely. For use cases which require a database migration from on-premises to AWS or database replication between on-premises sources and sources on AWS, we recommend you use AWS DMS. Once your data is in AWS, you can use AWS Glue to move, combine, replicate, and transform data from your data source into another database or data warehouse, such as Amazon Redshift.

Q: When should I use AWS Glue vs. AWS Batch?

AWS Batch enables you to easily and efficiently run any batch computing job on AWS regardless of the nature of the job. AWS Batch creates and manages the compute resources in your AWS account, giving you full control and visibility into the resources being used. AWS Glue is a fully-managed ETL service that provides a serverless Apache Spark environment to run your ETL jobs. For your ETL use cases, we recommend you explore using AWS Glue. For other batch oriented use cases, including some ETL use cases, AWS Batch might be a better fit.

Pricing and billing

Q: How am I charged for AWS Glue?

You will pay a simple monthly fee, above the AWS Glue Data Catalog free tier, for storing and accessing the metadata in the AWS Glue Data Catalog. You will pay an hourly rate, billed per second, for crawler runs with a 10-minute minimum. If you choose to use a development endpoint to interactively develop your ETL code, you will pay an hourly rate, billed per second, for the time your development endpoint is provisioned, with a 10-minute minimum. Additionally, you will pay an hourly rate, billed per second, for the ETL job with either a 1-minute minimum or 10-minute minimum based on the Glue version you select. For more details, please refer to our pricing page.
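
As an illustrative calculation only (the rate below is a placeholder; see the pricing page for current figures), ETL job cost scales with DPU-hours:

    # Illustrative cost calculation for one ETL job run. The rate is an
    # example; consult the AWS Glue pricing page for current figures.
    dpus = 10                    # DPUs allocated to the job
    runtime_minutes = 15         # billed per second after the minimum
    rate_per_dpu_hour = 0.44     # placeholder rate

    dpu_hours = dpus * runtime_minutes / 60   # 2.5 DPU-hours
    cost = dpu_hours * rate_per_dpu_hour      # 1.10 for this example run
    print(f"{dpu_hours} DPU-hours -> ${cost:.2f}")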

Q: When does billing for my AWS Glue jobs begin and end?

Billing commences as soon as the job is scheduled for execution and continues until the entire job completes. With AWS Glue, you only pay for the time for which your job runs and not for the environment provisioning or shutdown time.

Security and availability

Q: How does AWS Glue keep my data secure?

We provide server-side encryption for data at rest and SSL for data in motion.

Q: What are the service limits associated with AWS Glue?

Please refer to our documentation to learn more about service limits.

Q: What regions is AWS Glue in?

Please refer to the AWS Region Table for details of AWS Glue service availability by region.

Q: How many DPUs (Data Processing Units) are allocated to the development endpoint?

A development endpoint is provisioned with 5 DPUs by default. You can configure a development endpoint with a minimum of 2 DPUs and a maximum of 5 DPUs.

Q: How do I scale the size and performance of my AWS Glue ETL jobs?

You can simply specify the number of DPUs (Data Processing Units) you want to allocate to your ETL job. A Glue ETL job requires a minimum of 2 DPUs. By default, AWS Glue allocates 10 DPUs to each ETL job.
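
For example, one way to size a single run with boto3 (on Glue version 2.0, capacity is expressed as a worker type and count; names and values are placeholders):

    import boto3

    glue = boto3.client("glue")

    # Override capacity for one run of an example job.
    glue.start_job_run(
        JobName="orders-etl",  # example job name
        WorkerType="G.1X",     # each G.1X worker maps to 1 DPU
        NumberOfWorkers=10,
    )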

Q: How do I monitor the execution of my AWS Glue jobs?

AWS Glue provides the status of each job and pushes all notifications to Amazon CloudWatch. You can set up Amazon SNS notifications via CloudWatch actions to be informed of job failures or completions.

Service Level Agreement

Q: What does the AWS Glue SLA guarantee?

Our AWS Glue SLA guarantees a Monthly Uptime Percentage of at least 99.9% for AWS Glue.

Q: How do I know if I qualify for an SLA Service Credit?

You are eligible for an SLA credit for AWS Glue under the AWS Glue SLA if more than one Availability Zone in which you are running a task within the same region has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle.

For full details on all of the terms and conditions of the SLA, as well as details on how to submit a claim, please see the AWS Glue SLA details page.
