AWS Big Data Blog

Run Apache Spark and Apache Iceberg write jobs 2x faster with Amazon EMR

Amazon EMR runtime for Apache Spark offers a high-performance runtime environment while maintaining API compatibility with open source Apache Spark and the Apache Iceberg table format. Amazon EMR on EC2, Amazon EMR Serverless, Amazon EMR on Amazon EKS, Amazon EMR on AWS Outposts, and AWS Glue use these optimized runtimes.

In this post, we demonstrate the write performance benefits of using the Amazon EMR 7.12 runtime for Spark and Iceberg compared to open source Spark 3.5.6 with Iceberg 1.10.0 tables on a 3TB merge workload.

Write benchmark methodology

Our benchmarks demonstrate that Amazon EMR 7.12 can run 3TB merge workloads over 2 times faster than open source Spark 3.5.6 with Iceberg 1.10.0, delivering significant improvements for data ingestion and ETL pipelines while providing the advanced features of Iceberg including ACID transactions, time travel, and schema evolution.

Benchmark workload

To evaluate the write performance improvements in Amazon EMR 7.12, we chose a merge workload that reflects common data ingestion and ETL patterns. The benchmark consists of 37 basic merge operations on TPC-DS 3TB tables, testing the performance of INSERT, UPDATE, and DELETE operations. The workload is inspired by established benchmarking approaches from the open source community, including Delta Lake’s merge benchmark methodology and the LST-Bench framework. We combined and adapted these approaches to create a comprehensive test of Iceberg write performance on AWS, with an initial focus on copy-on-write performance only.

Workload characteristics

The benchmark executes 37 basic sequential merge queries that modify TPC-DS fact tables. The 37 queries are organized into three categories:

  • Inserts (queries m1-m6): Adding new records to tables with varying data volumes. These queries use source tables with 5-100% new records and zero matches, testing pure insert performance at different scales.
  • Upserts (queries m8-m16): Modifying existing records while inserting new ones. These upsert operations combine different ratios of matched and non-matched records—for example, 1% matches with 10% inserts, or 99% matches with 1% inserts—representing typical scenarios where data is both updated and augmented.
  • Deletes (queries m7, m17-m37): Removing records with varying selectivity. These range from small, targeted deletes affecting 5% of files and rows to large-scale deletions, including partition-level deletes that can be optimized to metadata-only operations.

The queries operate on the table state created by previous operations, simulating real ETL pipelines where subsequent steps depend on earlier transformations. For example, the first six queries insert between 607,000 and 11.9 million records into the web_returns table. Later queries then update and delete from this modified table, testing read-after-write performance. Source tables were generated by sampling the TPC-DS web_returns table with controlled match/non-match ratios for consistent test conditions across the benchmark runs.
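To make the upsert pattern concrete, the following is a minimal PySpark sketch of the kind of MERGE statement these queries run against an Iceberg table. The catalog, database, and source table names (hadoop_cat, tpcds, web_returns_source) are placeholders for illustration, not the exact statements used by the benchmark; the join keys are the natural key columns of the TPC-DS web_returns table.

```python
from pyspark.sql import SparkSession

# Assumes an Iceberg-enabled session (see the configuration sketch later in
# this post); catalog and table names below are illustrative placeholders.
spark = SparkSession.builder.appName("iceberg-merge-sketch").getOrCreate()

# Upsert: update rows that match on the natural key, insert the rest.
spark.sql("""
    MERGE INTO hadoop_cat.tpcds.web_returns AS t
    USING hadoop_cat.tpcds.web_returns_source AS s
    ON t.wr_order_number = s.wr_order_number
       AND t.wr_item_sk = s.wr_item_sk
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```

Because the benchmark runs in copy-on-write mode, matched rows cause the affected data files to be rewritten, which is why merge performance is dominated by the write path.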

The merge operations vary in scale and complexity:

  • Small operations affecting 607,000 records
  • Large operations modifying over 12 million records
  • Selective deletes requiring file rewrites
  • Partition-level deletes optimized to metadata operations

Benchmark configuration

We ran the benchmark on identical hardware for both Amazon EMR 7.12 and open source Spark 3.5.6 with Iceberg 1.10.0 (a minimal session configuration sketch follows the list):

  • Cluster: 9 r5d.4xlarge instances (1 primary, 8 workers)
  • Compute: 144 total vCPUs, 1,152 GB memory
  • Storage: 2 x 300 GB NVMe SSD per instance
  • Catalog: Hadoop Catalog
  • Data format: Parquet files on Amazon S3
  • Table format: Apache Iceberg (default: copy-on-write mode)
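For reference, the following is a minimal sketch of how a comparable Spark session can be configured for Iceberg with a Hadoop catalog on Amazon S3 and copy-on-write tables. The catalog name, warehouse path, and explicit table properties are illustrative assumptions rather than the exact benchmark settings (copy-on-write is already the Iceberg default).

```python
from pyspark.sql import SparkSession

# On open source Spark, the iceberg-spark-runtime jar must also be on the classpath.
spark = (
    SparkSession.builder
    .appName("iceberg-cow-benchmark")
    # Enable Iceberg's SQL extensions (MERGE INTO, DELETE FROM, and so on).
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register a Hadoop catalog with an S3 warehouse (placeholder bucket).
    .config("spark.sql.catalog.hadoop_cat", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.hadoop_cat.type", "hadoop")
    .config("spark.sql.catalog.hadoop_cat.warehouse",
            "s3://<YOUR_S3_BUCKET>/iceberg-warehouse/")
    .getOrCreate()
)

# Make the copy-on-write mode explicit on a table (it is also the default).
spark.sql("""
    ALTER TABLE hadoop_cat.tpcds.web_returns SET TBLPROPERTIES (
        'write.delete.mode' = 'copy-on-write',
        'write.update.mode' = 'copy-on-write',
        'write.merge.mode' = 'copy-on-write'
    )
""")
```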

Benchmark results

We compared benchmark results for Amazon EMR 7.12 against open source Spark 3.5.6 with Iceberg 1.10.0. We ran the 37 merge queries in three sequential iterations and averaged the runtime across the iterations for comparison. The following table shows the averaged results:

Amazon EMR 7.12 (seconds) | Open source Spark 3.5.6 + Iceberg 1.10.0 (seconds) | Speedup
443.58 | 926.63 | 2.08x

The average runtime for the three iterations on Amazon EMR 7.12 with Iceberg enabled was 443.58 seconds, demonstrating a 2.08x speed increase compared to open source Spark 3.5.6 and Iceberg 1.10.0. The following figure presents the total runtimes in seconds.

The following table summarizes the metrics.

Metric | Amazon EMR 7.12 on EC2 | Open source Spark 3.5.6 and Iceberg 1.10.0
Average runtime in seconds | 443.58 | 926.63
Geometric mean over queries in seconds | 6.40746 | 18.50945
Cost* | $1.58 | $2.68

*Detailed cost estimates are discussed later in this post.

The following chart shows the per-query performance improvement of Amazon EMR 7.12 relative to open source Spark 3.5.6 and Iceberg 1.10.0. The speedup varies from query to query, with Amazon EMR outperforming open source Spark with Iceberg tables by as much as 13.3 times on query m31. The horizontal axis arranges the 37 merge queries in descending order of the performance improvement seen with Amazon EMR, and the vertical axis shows that speedup as a ratio.

Performance optimizations in Amazon EMR

Amazon EMR 7.12 achieves over 2x faster write performance through systematic optimizations across the write execution pipeline. These improvements span multiple areas:

  • Metadata-only delete operations: When deleting entire partitions, EMR can now optimize these operations to metadata-only changes, eliminating the need to rewrite data files. This significantly reduces the time and cost of partition-level delete operations (see the sketch after this list).
  • Bloom filter joins for merge operations: Enhanced join strategies using bloom filters reduce the amount of data that needs to be read and processed during merge operations, particularly benefiting queries with selective predicates.
  • Parallel file write out: Optimized parallelism during the write phase of merge operations improves throughput when writing filtered results back to Amazon S3, reducing overall merge operation time. We balanced the parallelism with read performance for overall optimized performance on the entire workload.
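To illustrate the first optimization, the following hedged sketch contrasts two delete shapes. The partition column shown (wr_returned_date_sk) and the predicates are assumptions for this example; the key point is that a predicate aligned with partition boundaries can be committed by dropping whole data files in metadata, while a row-level predicate under copy-on-write forces the files containing matching rows to be rewritten.

```python
from pyspark.sql import SparkSession

# Uses an Iceberg-enabled session as in the earlier configuration sketch.
spark = SparkSession.builder.appName("iceberg-delete-sketch").getOrCreate()

# Partition-aligned delete: every row in the affected partition matches, so
# the commit can drop data files in metadata without rewriting Parquet files.
spark.sql("""
    DELETE FROM hadoop_cat.tpcds.web_returns
    WHERE wr_returned_date_sk = 2452600
""")

# Row-level delete: only some rows in each file match, so copy-on-write
# rewrites the affected data files.
spark.sql("""
    DELETE FROM hadoop_cat.tpcds.web_returns
    WHERE wr_return_amt > 10000
""")
```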

These optimizations work together to deliver consistent performance improvements across diverse write patterns. The result is significantly faster data ingestion and ETL pipeline execution while maintaining Iceberg’s ACID guarantees and data consistency.

Cost comparison

Our benchmark provides the total runtime and geometric mean data to assess the performance of Spark and Iceberg in a complex, real-world decision support scenario. For additional insight, we also examine cost. We calculate cost estimates using formulas that account for EC2 On-Demand instances, Amazon Elastic Block Store (Amazon EBS), and Amazon EMR charges; a short calculation after the following list shows how the numbers are derived.

  • Amazon EC2 cost (includes SSD cost) = number of instances * r5d.4xlarge hourly rate * job runtime in hours
    • r5d.4xlarge hourly rate = $1.152 per hour
  • Root Amazon EBS cost = number of instances * Amazon EBS per GB-hourly rate * root EBS volume size * job runtime in hours
  • Amazon EMR cost = number of instances * r5d.4xlarge Amazon EMR cost * job runtime in hours
    • r5d.4xlarge Amazon EMR cost = $0.27 per hour
  • Total cost = Amazon EC2 cost + root Amazon EBS cost + Amazon EMR cost
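The short Python sketch below reproduces the cost figures from these formulas. The EBS rate (gp2 at $0.10 per GB-month, approximated as 730 hours per month) is an assumption used here for illustration; the instance and Amazon EMR rates are the values listed above.

```python
# Cost model from the formulas above (USD, On-Demand pricing).
INSTANCES = 9                # 1 primary + 8 workers
EC2_RATE = 1.152             # r5d.4xlarge, per instance-hour
EMR_RATE = 0.27              # Amazon EMR uplift for r5d.4xlarge, per instance-hour
EBS_GB_HOUR = 0.10 / 730     # assumed gp2 rate of $0.10 per GB-month
ROOT_EBS_GB = 20             # root volume size per instance

def job_cost(runtime_seconds: float, include_emr_uplift: bool) -> float:
    hours = runtime_seconds / 3600
    ec2 = INSTANCES * EC2_RATE * hours
    ebs = INSTANCES * EBS_GB_HOUR * ROOT_EBS_GB * hours
    emr = INSTANCES * EMR_RATE * hours if include_emr_uplift else 0.0
    return ec2 + ebs + emr

emr_total = job_cost(443.58, include_emr_uplift=True)    # ~$1.58
oss_total = job_cost(926.63, include_emr_uplift=False)   # ~$2.68
print(f"EMR 7.12: ${emr_total:.2f}  OSS Spark: ${oss_total:.2f}  "
      f"ratio: {oss_total / emr_total:.1f}x")
```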

The calculations reveal that the Amazon EMR 7.12 benchmark yields a 1.7x cost efficiency improvement over open source Spark 3.5.6 and Iceberg 1.10.0 in running the benchmark job.

Metric | Amazon EMR 7.12 | Open source Spark 3.5.6 and Iceberg 1.10.0
Runtime in seconds | 443.58 | 926.63
Number of EC2 instances (includes primary node) | 9 | 9
Amazon EBS root volume size | 20 GB | 20 GB
Amazon EC2 cost (total runtime) | $1.28 | $2.67
Amazon EBS cost | $0.00 | $0.01
Amazon EMR cost | $0.30 | $0.00
Total cost | $1.58 | $2.68
Cost savings | Amazon EMR 7.12 is 1.7 times better | Baseline

Run open source Spark benchmarks on Iceberg tables

We used separate EC2 clusters, each with nine r5d.4xlarge instances, to test open source Spark 3.5.6 and Amazon EMR 7.12 on the Iceberg workload. The primary node had 16 vCPUs and 128 GB of memory, and the eight worker nodes together had 128 vCPUs and 1,024 GB of memory. We conducted the tests using the Amazon EMR default settings to showcase the typical user experience, and minimally adjusted the Spark and Iceberg settings to maintain a balanced comparison.

The following table summarizes the Amazon EC2 configurations for the primary node and eight worker nodes of type r5d.4xlarge.

EC2 instance | vCPU | Memory (GiB) | Instance storage (GB) | EBS root volume (GB)
r5d.4xlarge | 16 | 128 | 2 x 300 NVMe SSD | 20

Benchmarking instructions

Follow the steps below to run the benchmark:

  1. For the open source run, create a Spark cluster on Amazon EC2 using Flintrock with the configuration described previously.
  2. Set up the TPC-DS source data with Iceberg in your S3 bucket.
  3. Build the benchmark application jar from the source to run the benchmarking and get the results.

Detailed instructions are provided in the emr-spark-benchmark GitHub repository.
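As a rough illustration of step 2, the sketch below registers existing TPC-DS Parquet data as an Iceberg table with a CREATE TABLE AS SELECT statement. The bucket path, catalog, and table names are placeholders; the repository’s setup scripts are the authoritative instructions.

```python
from pyspark.sql import SparkSession

# Assumes the Iceberg Hadoop catalog configuration shown earlier in this post.
spark = SparkSession.builder.appName("tpcds-iceberg-setup").getOrCreate()

# Placeholder location of TPC-DS Parquet data generated at 3TB scale.
source_path = "s3://<YOUR_S3_BUCKET>/tpcds-3tb-parquet/web_returns/"

# Register the data as an Iceberg table in the warehouse.
spark.sql(f"""
    CREATE TABLE hadoop_cat.tpcds.web_returns
    USING iceberg
    AS SELECT * FROM parquet.`{source_path}`
""")
```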

Summarize the results

After the Spark job finishes, retrieve the test result file from the output S3 bucket at s3://<YOUR_S3_BUCKET>/benchmark_run/timestamp=xxxx/summary.csv/xxx.csv. You can do this through the Amazon S3 console by navigating to the specified bucket location, or by using the AWS Command Line Interface (AWS CLI). The Spark benchmark application organizes the data by creating a timestamp folder and placing a summary file within a folder labeled summary.csv. The output CSV files contain four columns without headers:

  • Query name
  • Median time
  • Minimum time
  • Maximum time

Using the data from three separate test runs (one iteration each), we can calculate the average and geometric mean of the benchmark runtimes.
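As a minimal sketch, assuming the summary CSV files have been downloaded locally under a placeholder directory, the average total runtime and the geometric mean over query times can be computed with a few lines of Python; the column order follows the list above.

```python
import csv
import glob
import math

run_totals = []        # total runtime of each test run (sum of its query times)
all_query_times = []   # per-query times across runs, for the geometric mean

# Placeholder path pattern for locally downloaded summary files, one per run.
for path in glob.glob("benchmark_run/*/summary.csv/*.csv"):
    with open(path, newline="") as f:
        # Columns, in order: query name, median, minimum, maximum time.
        times = [float(median) for _, median, _, _ in csv.reader(f)]
    run_totals.append(sum(times))
    all_query_times.extend(times)

average_runtime = sum(run_totals) / len(run_totals)
geomean = math.exp(sum(math.log(t) for t in all_query_times) / len(all_query_times))
print(f"average runtime: {average_runtime:.2f}s  geometric mean: {geomean:.2f}s")
```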

Clean up

To help prevent future charges, delete the resources you created by following the instructions provided in the Cleanup section of the GitHub repository.

Summary

Amazon EMR continues to enhance the EMR runtime for Spark when used with Iceberg tables. With Amazon EMR 7.12, write performance on 3TB merge workloads is over 2 times faster than open source Spark 3.5.6 with Iceberg 1.10.0. This represents a significant improvement for data ingestion and ETL pipelines, delivering 1.7x lower cost while maintaining Iceberg’s ACID guarantees. We encourage you to keep up to date with the latest Amazon EMR releases to fully benefit from ongoing performance improvements.

To stay informed, subscribe to the RSS feed for the AWS Big Data Blog, where you can find updates on the EMR runtime for Spark and Iceberg, as well as tips on configuration best practices and tuning recommendations.


About the authors

Atul Felix Payapilly is a software development engineer for Amazon EMR at Amazon Web Services.

Akshaya KP is a software development engineer for Amazon EMR at Amazon Web Services.

Hari Kishore Chaparala is a software development engineer for Amazon EMR at Amazon Web Services.

Giovanni Matteo is the Senior Manager for the Amazon EMR Spark and Iceberg group.