AWS Big Data Blog

How 1Strategy simplified their spreadsheet ETL process using AWS Glue DataBrew

This is a guest blog post by Pat Reilly and Gary Houk at 1Strategy. In their own words, “1Strategy is an APN Premier Consulting Partner focusing exclusively on AWS solutions. 1Strategy consultants help businesses architect, migrate, and optimize their workloads on AWS, creating scalable, cost-effective, secure, and reliable solutions. 1Strategy holds the AWS DevOps, Migration, Data & Analytics, and Machine Learning Competencies, and is a member of the AWS Well-Architected and the AWS Public Sector partner programs.”


Accurately reporting hours billed to each customer is critical to 1Strategy’s business operations. Each consultant is responsible for keeping records up to date. To promote visibility into work completed and hours used vs. hours remaining, 1Strategy provides customers with a weekly report.

Historically, this data was in the hands of a select few managers who alerted their teams when hours were getting low or the contract was nearing expiry. This data was delivered in the form of an Excel spreadsheet as an extract from the system of record for billable hours and sent every morning to those managers. This top-down method of sharing information meant that the people delivering the work were often the last to receive it. Frequently, reports were received too late to add any meaningful support to a customer conversation.

As part of an AWS Data Lab, the 1Strategy team built a data lake to empower reporting across consultants within the company. The main data source was a spreadsheet, which held billable daily hours for every employee in our company. At the time, the only method available for ingesting this spreadsheet was AWS Glue. It was better than nothing, but it introduced more complexity than the use case warranted. For example, AWS Glue doesn’t support parsing Excel file formats out of the box, so you have to provide your own Python libraries for the job to use.

For this reason, we used two AWS Glue jobs: one to parse the spreadsheet into CSV format, and another to convert the CSV to Parquet. Coupled with AWS Glue crawlers to keep our Data Catalog current, this process can take up to 40 minutes to complete and spans multiple Amazon Simple Storage Service (Amazon S3) buckets.
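To give a sense of the overhead involved, the following is a minimal sketch of the kind of logic the first AWS Glue job needed; the bucket names, keys, and local paths are hypothetical, and pandas and openpyxl had to be supplied to the job as additional Python modules because AWS Glue doesn’t bundle Excel support.

```python
import boto3
import pandas as pd  # supplied to the Glue job as an additional Python module

s3 = boto3.client("s3")

# Hypothetical locations standing in for the real billing extract and staging bucket
RAW_BUCKET = "example-raw-bucket"
RAW_KEY = "billing/hours.xlsx"
STAGE_BUCKET = "example-stage-bucket"
STAGE_KEY = "billing/hours.csv"

# Download the Excel extract, parse it with pandas, and write it back out as CSV
s3.download_file(RAW_BUCKET, RAW_KEY, "/tmp/hours.xlsx")
df = pd.read_excel("/tmp/hours.xlsx", engine="openpyxl")
df.to_csv("/tmp/hours.csv", index=False)
s3.upload_file("/tmp/hours.csv", STAGE_BUCKET, STAGE_KEY)
```

A second Glue job (not shown) then converted the CSV output to Parquet, and crawlers updated the Data Catalog after each run.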

AWS Glue DataBrew changes that. AWS Glue DataBrew is a new visual data preparation tool that enables customers to clean and normalize data without writing code. DataBrew makes extract, transform, and load (ETL) jobs against an Excel spreadsheet far easier and faster to build than anything previously available in the AWS data landscape. Aside from simplifying the engineering, the interface is built to be friendly to less technical end users. In the context of the 1Strategy data lake, this lowers the barrier for people who want to use the data by taking the development out of the hands of engineers, who can spend their time on more critical or more complex problems.

Solution overview

As an illustration of how you can use DataBrew to write ETL, let’s walk through how to do this using a sample dataset from the English Premier League (EPL). The dataset consists of 380 rows by 58 columns of individual football match data in an Excel spreadsheet. The following screenshot shows a small sample of the raw data.

To implement our solution, you complete the following high-level steps:

  1. Create your dataset.
  2. Create a project.
  3. Add recipe steps and fill in missing values.
  4. Concatenate columns.
  5. Format and split columns.
  6. Rename the columns.
  7. Create your DataBrew job.
  8. Review the results in Amazon S3.
  9. Analyze the results.

Creating your dataset

We start by creating a dataset.

  1. Convert the CSV file into an Excel spreadsheet.

DataBrew supports CSV, but for this post, we use an Excel file.

  2. Upload your spreadsheet to an S3 location.

To give DataBrew the data, we create a dataset, which can take the form of a file upload, an S3 object or path, or metadata stored in your Data Catalog. For this post, we use an Excel spreadsheet stored in Amazon S3.

  3. On the DataBrew console, in the navigation pane, choose Datasets.
  4. Choose Connect new dataset.
  5. For Dataset name, enter a name.
  6. For Enter your source from S3, enter the full S3 object key name.
  7. Choose Create dataset.
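If you prefer to script dataset creation instead of using the console, the same step can be expressed with the AWS SDK. The following is a minimal sketch using boto3; the dataset name, bucket, and key are placeholders for your own values.

```python
import boto3

databrew = boto3.client("databrew")

# Register the Excel file in S3 as a DataBrew dataset (names are placeholders)
databrew.create_dataset(
    Name="epl-matches",
    Format="EXCEL",
    FormatOptions={"Excel": {"SheetIndexes": [0], "HeaderRow": True}},
    Input={
        "S3InputDefinition": {
            "Bucket": "example-databrew-bucket",
            "Key": "raw/epl-matches.xlsx",
        }
    },
)
```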

Creating a project

Projects are a collection of transformation steps called recipes, which are run on your data. They offer a user interface for interacting with your data using functionality that is familiar to anyone who regularly works with spreadsheets.

  1. On the DataBrew console, in the navigation pane, choose Projects.
  2. Choose Create project.
  3. For Project name, enter a name for your project.
  4. For Select a dataset, select My datasets.
  5. Search for and select your dataset.
  6. In the Permissions section, for Role name, choose Create new IAM role.

This step creates an AWS Identity and Access Management (IAM) role to give DataBrew the necessary permissions to access your data.

  7. For New IAM role suffix, add an optional suffix.
  8. Choose Create project.
  9. Wait for DataBrew to parse your spreadsheet and generate statistics.

When DataBrew is done parsing your spreadsheet, you’re presented with the project workspace. The menu bar contains familiar spreadsheet-like functions. You can apply these as transformations to your data and add them as steps to your recipe.

DataBrew also profiles your sample data automatically to describe the distribution of values so you can more quickly understand your data.
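Project creation can also be scripted. The sketch below uses placeholder names for the project, dataset, recipe, and IAM role, and assumes a recipe already exists because the API requires one (the console creates an empty working recipe for you).

```python
import boto3

databrew = boto3.client("databrew")

# Names and the role ARN are placeholders; the API requires an existing recipe,
# whereas the console creates an empty working recipe on your behalf.
databrew.create_project(
    Name="epl-matches-project",
    DatasetName="epl-matches",
    RecipeName="epl-matches-recipe",
    RoleArn="arn:aws:iam::123456789012:role/AWSGlueDataBrewServiceRole-example",
    Sample={"Type": "FIRST_N", "Size": 500},
)
```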

Adding recipe steps and filling in missing values

Recipes are the lists of transformation steps your data goes through to meet your business requirements. They also serve as the logic that runs every time a job built from the recipe is run.

We start by replacing null values in the Full-Time Home Goals (FTHG) and Full-Time Away Goals (FTAG) columns.

  1. On your project page, choose the icon for the column to adjust.
  2. Choose Remove or fill missing values.
  3. Choose Fill with custom value.
  4. Use the custom value 0.
  5. Choose Preview changes to verify the changes.
  6. To confirm the changes, choose Apply.

You should now see zeroes instead of null values in your columns.
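For readers who want to sanity-check the transformation logic outside DataBrew, here is a rough pandas equivalent of this step (this is not the recipe DataBrew generates, just the same logic expressed locally; the file path is a placeholder):

```python
import pandas as pd

# Load the same spreadsheet locally (path is a placeholder)
df = pd.read_excel("epl-matches.xlsx", engine="openpyxl")

# Replace null full-time home/away goals with 0, mirroring the recipe step
df[["FTHG", "FTAG"]] = df[["FTHG", "FTAG"]].fillna(0)
```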

Concatenating columns

Now you create a Full-Time Score column by concatenating the FTHG and FTAG columns with a hyphen. The result should look something like “3-2”, where 3 is the FTHG value and 2 is the FTAG value.

  1. In the Recipe pane, choose Add step.

This appends your step to the end of your current list. Alternatively, you can import an existing recipe to add its steps to your recipe by choosing the icon and then Import recipe. We don’t use this method in this post.

  2. In the Add step section, choose the merge step.
  3. Choose Merge columns.
  4. For Source column, choose FTHG and FTAG.
  5. For Separator, enter - (hyphen).
  6. For New column name, enter FT_SCORE.
  7. To confirm the changes, choose Apply.
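Continuing the rough pandas equivalent from earlier, the merge step boils down to a string concatenation (again, a local sketch rather than the recipe DataBrew stores):

```python
# Build the FT_SCORE column from the two goal columns with a hyphen separator,
# e.g. 3 and 2 become "3-2"
df["FT_SCORE"] = (
    df["FTHG"].astype(int).astype(str) + "-" + df["FTAG"].astype(int).astype(str)
)
```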

Formatting and splitting columns

Now we split the date column into its year, month, and day parts so that each is its own column. We use these columns to partition our data in our job output.

  1. Choose Format column.
  2. For Source column, choose Date.
  3. For Format column to, choose Date-time format.
  4. For Choose date-time format, select mm/dd/yyyy.
  5. For Apply transform to, select All rows.
  6. Choose Apply.

You now split the column using a single delimiter.

  1. On the Date column, choose the icon.
  2. Choose Format.
  3. Choose date-time formats.
  4. Choose mm/dd/yyyy.
  5. For Split column, choose Date.
  6. For Split column options, choose Using delimiters.
  7. For Split column, select On a single delimiter.
  8. For Delimiter, select Enter custom value.
  9. Enter / (forward slash).

We use the forward slash as our delimiter because our date format always has two slashes.

  10. For Number of times to split, choose 2.
  11. Choose Apply.

Renaming columns

To rename your columns, complete the following steps:

  1. On the source column, choose the icon.
  2. Choose Rename.
  3. Rename each new column to its unit of date.

Your recipe steps should look like the following screenshot when you’re finished. Note the naming of the new day, month, and year columns.
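Taken together, the format, split, and rename steps produce year, month, and day columns derived from the Date column. A rough local equivalent in pandas, continuing the earlier sketch, looks like the following (the column names mirror the renames above rather than the intermediate names DataBrew generates before you rename them):

```python
# Parse the Date column (mm/dd/yyyy) and derive the year, month, and day columns
df["Date"] = pd.to_datetime(df["Date"], format="%m/%d/%Y")
df["year"] = df["Date"].dt.year
df["month"] = df["Date"].dt.month
df["day"] = df["Date"].dt.day
```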

Creating a DataBrew job

Now we create the DataBrew job that runs our recipe steps in succession and outputs the result set to Amazon S3 in Snappy-compressed Parquet format.

  1. In the navigation pane, choose Jobs.
  2. Choose Create job.
  3. For Job name, enter a name for your job.
  4. For Job type, select Create a recipe job.
  5. For Job input, select Project.
  6. For Select a project, choose the project you created.
  7. In the Job output settings section, for File type, choose Parquet.

Apache Parquet is an open-source columnar storage file format that is optimized for use with the Hadoop-based frameworks found in Amazon EMR, AWS Glue, and more. It’s also well supported by Amazon Redshift. Other commonly used file types are also available.

  8. For Compression, choose Snappy.

At scale, compressing our output reduces Amazon S3 storage costs.

  9. Under Additional configuration, for Custom partition by column values, add the year, month, and day columns in that order.

This step writes the data into separate S3 prefixes based on the date of the match. Partitioning by date is a common approach to organizing data at scale because it reduces how much data each query scans. Queries that use these partitions scan only the prefixes relevant to their filter or WHERE clause.

  10. In the Permissions section, for Role name, choose the DataBrew IAM role we created.
  11. Choose Create and run job.
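The job can also be created and started from the SDK. The following sketch reuses the project and role from the earlier examples and uses placeholder names for the job and output location; the output is written as Snappy-compressed Parquet partitioned by the year, month, and day columns.

```python
import boto3

databrew = boto3.client("databrew")

# Create a recipe job from the project; names, bucket, and role ARN are placeholders
databrew.create_recipe_job(
    Name="epl-matches-job",
    ProjectName="epl-matches-project",
    RoleArn="arn:aws:iam::123456789012:role/AWSGlueDataBrewServiceRole-example",
    Outputs=[
        {
            "Location": {
                "Bucket": "example-databrew-bucket",
                "Key": "output/epl-matches/",
            },
            "Format": "PARQUET",
            "CompressionFormat": "SNAPPY",
            "PartitionColumns": ["year", "month", "day"],
            "Overwrite": True,
        }
    ],
)

# Start a run of the job
databrew.start_job_run(Name="epl-matches-job")
```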

Viewing results in Amazon S3

Verify that the data is stored in your S3 output location and partitioned by the columns you specified. The files should have the .snappy.parquet extension.
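A quick way to check is to list the job’s output prefix; the bucket and prefix below are placeholders. You should see the partition prefixes you configured, each containing Parquet files.

```python
import boto3

s3 = boto3.client("s3")

# List the job output (first page only) to confirm the partitioned Parquet files exist
resp = s3.list_objects_v2(Bucket="example-databrew-bucket", Prefix="output/epl-matches/")
for obj in resp.get("Contents", []):
    print(obj["Key"])
```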

Analyzing the results

You’ve completed the DataBrew portion of this post. You can now add an AWS Glue crawler to infer the schema of your data, and query it using Amazon Athena. The crawler automatically infers the schema, and our new column FT_SCORE is now available. Because adding the crawler is beyond the scope of this article, we only present the results of this process in the following screenshot.
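Once the crawler has cataloged the output, you can query it with Athena and take advantage of the partitions. The sketch below runs a partition-aware query from Python; the database name, table name, partition values, and results location are placeholders for what your crawler and account actually use.

```python
import boto3

athena = boto3.client("athena")

# Query a single day's partition; database, table, values, and output location are placeholders
query = """
    SELECT *
    FROM epl_matches
    WHERE year = '2019' AND month = '8' AND day = '9'
    LIMIT 10
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "epl_database"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```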

Conclusion

In this post, we created a basic DataBrew job with a recipe that transforms our spreadsheet. This job outputs our data to Amazon S3 in Snappy-compressed Parquet format, and you can schedule the job using DataBrew or other AWS services like AWS Step Functions (for more information, see Orchestrating an AWS Glue DataBrew job and Amazon Athena query with AWS Step Functions).

DataBrew makes the development process for common ETL transformations simpler and easier to implement across your data lake. This empowers you to write logic to your exact specifications and lowers the barriers to developing ETL from nearly any data source.

About AWS Glue DataBrew

AWS Glue DataBrew is a visual data preparation tool for AWS Glue that allows data analysts and data scientists to clean and transform data with an interactive, point-and-click visual interface, without writing any code. With DataBrew, you can easily access and visually explore any amount of data across your organization directly from your S3 data lake, Amazon Redshift data warehouse, and Amazon Aurora and Amazon Relational Database Service (Amazon RDS) databases. You can choose from over 250 built-in functions to combine, pivot, and transpose the data without writing code. DataBrew recommends data cleaning and normalization steps like filtering anomalies, normalizing data to standard date and time values, generating aggregates for analyses, and correcting invalid, misclassified, or duplicative data. For complex tasks like converting words to a common base or root word (such as converting “yearly” and “yearlong” to “year”), DataBrew also provides transformations that use advanced machine learning (ML) techniques like natural language processing (NLP). You can then save these cleaning and normalization steps into a workflow (called a recipe) and apply them automatically to future incoming data. If changes need to be made to the workflow, data analysts and data scientists simply update the cleaning and normalization steps in the recipe, and the steps are automatically applied to new data as it arrives. DataBrew publishes the prepared data to Amazon S3, which makes it easy for you to immediately use it in analytics and ML. DataBrew is serverless and fully managed, so you never need to configure, provision, or manage any compute resources.


About the Authors

Pat Reilly is a Sr. Data Engineer for 1Strategy, an AWS Premier Consulting Partner, and specializes in Data Engineering and Analytics. He builds data pipelines and event-driven data architectures to support data-driven decision making. He is not a former NBA basketball coach, but he is passionate about soccer and spending time with his family.

Gary Houk is a Data Science and Cloud Engineering Consultant. He researches, designs, and productionizes well-architected AWS Machine Learning (ML) solutions in partnership with clients in order to help them develop the skills necessary to maintain and expand their ML capabilities. He is passionate about running and Scrabble.