
Run Apache Spark and Iceberg 4.5x faster than open source Spark with Amazon EMR


This post shows how Amazon EMR 7.12 can make your Apache Spark and Iceberg workloads up to 4.5x faster.

The Amazon EMR runtime for Apache Spark provides a high-performance runtime environment with full API compatibility with open source Apache Spark and Apache Iceberg. Amazon EMR on EC2, Amazon EMR Serverless, Amazon EMR on Amazon EKS, Amazon EMR on AWS Outposts, and AWS Glue use the optimized runtimes.

Our benchmarks show Amazon EMR 7.12 runs TPC-DS 3 TB workloads 4.5x faster than open source Spark 3.5.6 with Iceberg 1.10.0.

Performance improvements include optimizations for metadata caching, parallel I/O, adaptive query planning, data type handling, and fault tolerance. We also identified and fixed some Iceberg-specific regressions around data scans.

These optimizations let you match Parquet performance on Amazon EMR while keeping Iceberg's key features: ACID transactions, time travel, and schema evolution.

Benchmark results compared to open source

To assess the performance of the Spark engine with the Iceberg table format, we performed benchmark tests using the 3 TB TPC-DS dataset, version 2.13, a popular industry-standard benchmark. Benchmark tests for the Amazon EMR runtime for Apache Spark and Apache Iceberg were conducted on Amazon EMR 7.12 EC2 clusters and compared against open source Apache Spark 3.5.6 and Apache Iceberg 1.10.0 on EC2 clusters.

Note: Our results derived from the TPC-DS dataset are not directly comparable to the official TPC-DS results due to setup differences.

The setup instructions and technical details are available in our GitHub repository. To minimize the impact of external catalogs like AWS Glue and Hive, we used the Hadoop catalog for the Iceberg tables. This uses the underlying file system, specifically Amazon S3, as the catalog. We can define this setup by configuring the property spark.sql.catalog.<catalog_name>.type. The fact tables used the default partitioning by the date column, which have 200–2,100 partitions. No precomputed statistics were used for these tables.
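For reference, a minimal sketch of this catalog configuration as spark-submit options (the catalog name hadoop_catalog, the bucket, and the warehouse path are placeholders):

--conf spark.sql.catalog.hadoop_catalog=org.apache.iceberg.spark.SparkCatalog \
--conf spark.sql.catalog.hadoop_catalog.type=hadoop \
--conf spark.sql.catalog.hadoop_catalog.warehouse=s3://<bucket>/<warehouse-path>/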

We ran a total of 104 SparkSQL queries in 3 sequential rounds, and the average runtime of each query across these rounds was taken for comparison. The average runtime for the 3 rounds on Amazon EMR 7.12 with Iceberg enabled was 0.37 hours, demonstrating a 4.5x speedup compared to open source Spark 3.5.6 and Iceberg 1.10.0. The following figure shows the total runtimes in seconds.

The following table summarizes the metrics.

Metric | Amazon EMR 7.12 on EC2 | Amazon EMR 7.5 on EC2 | Open source Apache Spark 3.5.6 and Apache Iceberg 1.10.0
Average runtime in seconds | 1349.62 | 1535.62 | 6113.92
Geometric mean over queries in seconds | 7.45910 | 8.30046 | 22.31854
Cost* | $4.81 | $5.47 | $17.65

*Detailed cost estimates are discussed later in this post.

The following chart shows the per-query performance improvement of Amazon EMR 7.12 relative to open source Spark 3.5.6 and Iceberg 1.10.0. The magnitude of the speedup varies from one query to another, up to 13.6x faster for q23b, with Amazon EMR outperforming open source Spark with Iceberg tables. The horizontal axis arranges the TPC-DS 3 TB benchmark queries in descending order based on the performance improvement seen with Amazon EMR, and the vertical axis shows the magnitude of this speedup as a ratio.

Cost comparison breakdown

Our benchmark provides the total runtime and geometric mean data to assess the performance of Spark and Iceberg in a complex, real-world decision support scenario. For additional insight, we also examine the cost aspect. We calculate cost estimates using formulas that account for EC2 On-Demand instances, Amazon Elastic Block Store (Amazon EBS), and Amazon EMR expenses.

  • Amazon EC2 cost (includes SSD cost) = number of instances * r5d.4xlarge hourly rate * job runtime in hours
    • r5d.4xlarge hourly rate = $1.152 per hour
  • Root Amazon EBS cost = number of instances * Amazon EBS per GB-hour rate * root EBS volume size * job runtime in hours
  • Amazon EMR cost = number of instances * r5d.4xlarge Amazon EMR price * job runtime in hours
    • r5d.4xlarge Amazon EMR price = $0.27 per hour
  • Total cost = Amazon EC2 cost + root Amazon EBS cost + Amazon EMR cost
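To make the formulas concrete, the following minimal sketch reproduces the Amazon EMR 7.12 totals from these formulas. The EBS GB-hour rate is our assumption ($0.10 per GB-month, roughly 730 hours per month); the other rates are listed above.

# Sketch: apply the cost formulas to the EMR 7.12 run (9 instances, 1349.62 s).
awk 'BEGIN {
  instances = 9
  runtime_h = 1349.62 / 3600                        # job runtime in hours
  ec2 = instances * 1.152 * runtime_h               # EC2 On-Demand, includes SSD
  ebs = instances * (0.10 / 730) * 20 * runtime_h   # 20 GB root EBS volume, assumed rate
  emr = instances * 0.27 * runtime_h                # EMR uplift
  printf "EC2: $%.2f  EBS: $%.2f  EMR: $%.2f  Total: $%.2f\n", ec2, ebs, emr, ec2 + ebs + emr
}'

This prints EC2: $3.89, EBS: $0.01, EMR: $0.91, Total: $4.81, matching the table that follows.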

The calculations show that the Amazon EMR 7.12 benchmark yields a 3.6x cost efficiency improvement over open source Spark 3.5.6 and Iceberg 1.10.0 in running the benchmark job.

Metric | Amazon EMR 7.12 | Amazon EMR 7.5 | Open source Apache Spark 3.5.6 and Apache Iceberg 1.10.0
Runtime in seconds | 1349.62 | 1535.62 | 6113.92
Number of EC2 instances (includes primary node) | 9 | 9 | 9
Amazon EBS size | 20 GB | 20 GB | 20 GB
Amazon EC2 (total runtime cost) | $3.89 | $4.42 | $17.61
Amazon EBS cost | $0.01 | $0.01 | $0.04
Amazon EMR cost | $0.91 | $1.04 | $0
Total cost | $4.81 | $5.47 | $17.65
Cost savings | Amazon EMR 7.12 is 3.6x better | Amazon EMR 7.5 is 3.2x better | Baseline

In addition to the time-based metrics discussed so far, data from Spark event logs show that Amazon EMR scanned approximately 4.3x less data from Amazon S3 and 5.3x fewer files than the open source version in the TPC-DS 3 TB benchmark. This reduction in Amazon S3 data scanning contributes directly to cost savings for Amazon EMR workloads.

Run open source Apache Spark benchmarks on Apache Iceberg tables

We used separate EC2 clusters, each equipped with 9 r5d.4xlarge instances, to test both open source Spark 3.5.6 and Amazon EMR 7.12 on the Iceberg workload. The primary node had 16 vCPUs and 128 GB of memory, and the 8 worker nodes collectively had 128 vCPUs and 1,024 GB of memory. We conducted tests using the Amazon EMR default settings to showcase the typical user experience and minimally adjusted the settings of Spark and Iceberg to maintain a balanced comparison.

The following table summarizes the Amazon EC2 configuration for the primary node and eight worker nodes of type r5d.4xlarge.

EC2 Instance | vCPU | Memory (GiB) | Instance storage (GB) | EBS root volume (GB)
r5d.4xlarge | 16 | 128 | 2 x 300 NVMe SSD | 20

Prerequisites

The following prerequisites are required to run the benchmark:

  1. Using the instructions in the emr-spark-benchmark GitHub repository, set up the TPC-DS source data in your S3 bucket and on your local computer.
  2. Build the benchmark application following the steps provided in Steps to build spark-benchmark-assembly application and copy the benchmark application to your S3 bucket. Alternatively, copy spark-benchmark-assembly-3.5.6.jar to your S3 bucket.
  3. Create Iceberg tables from the TPC-DS source data. Follow the instructions on GitHub to create Iceberg tables using the Hadoop catalog. For example, the following code uses an Amazon EMR 7.12 cluster with Iceberg enabled to create the tables:
aws emr add-steps --cluster-id <cluster-id> --steps Type=Spark,Name="Create Iceberg Tables",\
Args=[--class,com.amazonaws.eks.tpcds.CreateIcebergTables,--conf,spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,\
--conf,spark.sql.catalog.hadoop_catalog=org.apache.iceberg.spark.SparkCatalog,\
--conf,spark.sql.catalog.hadoop_catalog.type=hadoop,\
--conf,spark.sql.catalog.hadoop_catalog.warehouse=s3://<bucket>/<warehouse-path>/,\
--conf,spark.sql.catalog.hadoop_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO,\
s3://<bucket>/<jar-path>/spark-benchmark-assembly-3.5.6.jar,s3://blogpost-sparkoneks-us-east-1/blog/BLOG_TPCDS-TEST-3T-partitioned/,\
/home/hadoop/tpcds-kit/tools,parquet,3000,true,<database-name>,true,true],ActionOnFailure=CONTINUE --region <region>

Note the Hadoop catalog warehouse location and database name from the preceding step. We use the same Iceberg tables to run benchmarks with Amazon EMR 7.12 and open source Spark.

This benchmark application is built from the branch tpcds-v2.13_iceberg. If you're building a new benchmark application, switch to the correct branch after downloading the source code from the GitHub repository.
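If you do build from source, the checkout looks like the following sketch (the repository URL is an assumption; use the emr-spark-benchmark repository referenced in this post):

git clone https://github.com/aws-samples/emr-spark-benchmark.git
cd emr-spark-benchmark
git checkout tpcds-v2.13_iceberg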

Create and configure a YARN cluster on Amazon EC2

To compare Iceberg performance between Amazon EMR on Amazon EC2 and open source Spark on Amazon EC2, follow the instructions in the emr-spark-benchmark GitHub repository to create an open source Spark cluster on Amazon EC2 using Flintrock with 8 worker nodes.
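A minimal launch sketch, assuming the instance type (r5d.4xlarge), AMI, key pair, and Region are already set in Flintrock's config.yaml:

flintrock launch $CLUSTER_NAME --num-slaves 8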

Based on the cluster selection for this test, the configurations from the emr-spark-benchmark GitHub repository are used.

Make sure to replace the placeholder <primary-node-ip> in the yarn-site.xml file with the primary node's IP address of your Flintrock cluster.
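For example, a minimal substitution sketch, assuming the placeholder appears literally as <primary-node-ip> in the file:

PRIMARY_IP=172.31.0.10   # your Flintrock primary node's IP address
sed -i "s/<primary-node-ip>/${PRIMARY_IP}/g" yarn-site.xml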

Run the TPC-DS benchmark with Apache Spark 3.5.6 and Apache Iceberg 1.10.0

Complete the following steps to run the TPC-DS benchmark:

  1. Log in to the open source cluster primary node using flintrock login $CLUSTER_NAME.
  2. Submit your Spark job:
    1. Choose the correct Iceberg catalog warehouse location and database that has the created Iceberg tables.
    2. The results are created in s3://<bucket>/benchmark_run.
    3. You can track progress in /media/ephemeral0/spark_run.log.
spark-submit \
--master yarn \
--deploy-mode client \
--class com.amazonaws.eks.tpcds.BenchmarkSQL \
--conf spark.driver.cores=4 \
--conf spark.driver.memory=10g \
--conf spark.executor.cores=16 \
--conf spark.executor.memory=100g \
--conf spark.executor.instances=8 \
--conf spark.network.timeout=2000 \
--conf spark.executor.heartbeatInterval=300s \
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
--conf spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--conf spark.jars.packages=org.apache.hadoop:hadoop-aws:3.3.4,org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.10.0,org.apache.iceberg:iceberg-aws-bundle:1.10.0 \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
--conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
--conf spark.sql.catalog.local.type=hadoop \
--conf spark.sql.catalog.local.warehouse=s3a://<bucket>/<warehouse-path>/ \
--conf spark.sql.defaultCatalog=local \
--conf spark.sql.catalog.local.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
spark-benchmark-assembly-3.5.6.jar \
s3://<bucket>/benchmark_run 3000 1 false \
'q1-v2.13,q10-v2.13,q11-v2.13,q12-v2.13,q13-v2.13,q14a-v2.13,q14b-v2.13,q15-v2.13,q16-v2.13,q17-v2.13,q18-v2.13,q19-v2.13,q2-v2.13,q20-v2.13,q21-v2.13,q22-v2.13,q23a-v2.13,q23b-v2.13,q24a-v2.13,q24b-v2.13,q25-v2.13,q26-v2.13,q27-v2.13,q28-v2.13,q29-v2.13,q3-v2.13,q30-v2.13,q31-v2.13,q32-v2.13,q33-v2.13,q34-v2.13,q35-v2.13,q36-v2.13,q37-v2.13,q38-v2.13,q39a-v2.13,q39b-v2.13,q4-v2.13,q40-v2.13,q41-v2.13,q42-v2.13,q43-v2.13,q44-v2.13,q45-v2.13,q46-v2.13,q47-v2.13,q48-v2.13,q49-v2.13,q5-v2.13,q50-v2.13,q51-v2.13,q52-v2.13,q53-v2.13,q54-v2.13,q55-v2.13,q56-v2.13,q57-v2.13,q58-v2.13,q59-v2.13,q6-v2.13,q60-v2.13,q61-v2.13,q62-v2.13,q63-v2.13,q64-v2.13,q65-v2.13,q66-v2.13,q67-v2.13,q68-v2.13,q69-v2.13,q7-v2.13,q70-v2.13,q71-v2.13,q72-v2.13,q73-v2.13,q74-v2.13,q75-v2.13,q76-v2.13,q77-v2.13,q78-v2.13,q79-v2.13,q8-v2.13,q80-v2.13,q81-v2.13,q82-v2.13,q83-v2.13,q84-v2.13,q85-v2.13,q86-v2.13,q87-v2.13,q88-v2.13,q89-v2.13,q9-v2.13,q90-v2.13,q91-v2.13,q92-v2.13,q93-v2.13,q94-v2.13,q95-v2.13,q96-v2.13,q97-v2.13,q98-v2.13,q99-v2.13,ss_max-v2.13' \
true > /media/ephemeral0/spark_run.log 2>&1 &

Summarize the results

After the Spark job finishes, retrieve the test result file from the output S3 bucket at s3://<bucket>/benchmark_run/timestamp=xxxx/summary.csv/xxx.csv. This can be done either through the Amazon S3 console by navigating to the specified bucket location or by using the AWS Command Line Interface (AWS CLI). The Spark benchmark application organizes the data by creating a timestamp folder and placing a summary file inside a folder labeled summary.csv. The output CSV files contain four columns without headers:

  • Query name
  • Median time
  • Minimum time
  • Maximum time

With the data from 3 separate test runs with 1 iteration each time, we can calculate the average and geometric mean of the benchmark runtimes.
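A minimal sketch of this calculation, assuming the three summary files were downloaded locally as run1.csv, run2.csv, and run3.csv:

# Average each query's median runtime (column 2) across the three runs, then
# report the total of the per-query averages and their geometric mean.
awk -F',' '{ sum[$1] += $2; n[$1]++ }
END {
  for (q in sum) {
    avg = sum[q] / n[q]
    total += avg
    logsum += log(avg)
    count++
  }
  printf "Average total runtime: %.2f s\n", total
  printf "Geometric mean: %.5f s\n", exp(logsum / count)
}' run1.csv run2.csv run3.csv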

Run the TPC-DS benchmark with Amazon EMR runtime for Apache Spark

Most of the instructions are similar to Steps to run Spark Benchmarking, with a few Iceberg-specific details.

Prerequisites

Complete the following prerequisite steps:

  1. Run aws configure to configure the AWS CLI shell to point to the benchmarking AWS account. Refer to Configure the AWS CLI for instructions.
  2. Upload the benchmark application JAR file to Amazon S3, as shown in the sketch after this list.
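A minimal sketch of the upload (the bucket name is a placeholder):

aws s3 cp spark-benchmark-assembly-3.5.6.jar s3://<bucket>/spark-benchmark-assembly-3.5.6.jar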

Deploy an Amazon EMR cluster and run the benchmark job

Complete the following steps to run the benchmark job:

  1. Use the AWS CLI command as shown in Deploy EMR on EC2 Cluster and run benchmark job to deploy an Amazon EMR on EC2 cluster. Make sure to enable Iceberg. See Create an Iceberg cluster for more details. Choose the correct Amazon EMR version, root volume size, and the same resource configuration as the open source Flintrock setup. Refer to create-cluster for a detailed description of the AWS CLI options.
  2. Store the cluster ID from the response. We need it for the next step.
  3. Submit the benchmark job in Amazon EMR using add-steps from the AWS CLI:
    1. Substitute <cluster-id> with the cluster ID from Step 2.
    2. The benchmark application is at s3://<bucket>/spark-benchmark-assembly-3.5.6.jar.
    3. Choose the correct Iceberg catalog warehouse location and database that has the created Iceberg tables. This should be the same as the one used for the open source TPC-DS benchmark run.
    4. The results will be in s3://<bucket>/benchmark_run.
aws emr add-steps --cluster-id <cluster-id> \
--steps Type=Spark,Name="SPARK Iceberg EMR TPCDS Benchmark Job",\
Args=[--class,com.amazonaws.eks.tpcds.BenchmarkSQL,\
--conf,spark.driver.cores=4,\
--conf,spark.driver.memory=10g,\
--conf,spark.executor.cores=16,\
--conf,spark.executor.memory=100g,\
--conf,spark.executor.instances=8,\
--conf,spark.network.timeout=2000,\
--conf,spark.executor.heartbeatInterval=300s,\
--conf,spark.dynamicAllocation.enabled=false,\
--conf,spark.shuffle.service.enabled=false,\
--conf,spark.sql.iceberg.data-prefetch.enabled=true,\
--conf,spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,\
--conf,spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog,\
--conf,spark.sql.catalog.local.type=hadoop,\
--conf,spark.sql.catalog.local.warehouse=s3://<bucket>/<warehouse-path>/,\
--conf,spark.sql.defaultCatalog=local,\
--conf,spark.sql.catalog.local.io-impl=org.apache.iceberg.aws.s3.S3FileIO,\
s3://<bucket>/spark-benchmark-assembly-3.5.6.jar,\
s3://<bucket>/benchmark_run,3000,1,false,\
'q1-v2.13,q10-v2.13,q11-v2.13,q12-v2.13,q13-v2.13,q14a-v2.13,q14b-v2.13,q15-v2.13,q16-v2.13,q17-v2.13,q18-v2.13,q19-v2.13,q2-v2.13,q20-v2.13,q21-v2.13,q22-v2.13,q23a-v2.13,q23b-v2.13,q24a-v2.13,q24b-v2.13,q25-v2.13,q26-v2.13,q27-v2.13,q28-v2.13,q29-v2.13,q3-v2.13,q30-v2.13,q31-v2.13,q32-v2.13,q33-v2.13,q34-v2.13,q35-v2.13,q36-v2.13,q37-v2.13,q38-v2.13,q39a-v2.13,q39b-v2.13,q4-v2.13,q40-v2.13,q41-v2.13,q42-v2.13,q43-v2.13,q44-v2.13,q45-v2.13,q46-v2.13,q47-v2.13,q48-v2.13,q49-v2.13,q5-v2.13,q50-v2.13,q51-v2.13,q52-v2.13,q53-v2.13,q54-v2.13,q55-v2.13,q56-v2.13,q57-v2.13,q58-v2.13,q59-v2.13,q6-v2.13,q60-v2.13,q61-v2.13,q62-v2.13,q63-v2.13,q64-v2.13,q65-v2.13,q66-v2.13,q67-v2.13,q68-v2.13,q69-v2.13,q7-v2.13,q70-v2.13,q71-v2.13,q72-v2.13,q73-v2.13,q74-v2.13,q75-v2.13,q76-v2.13,q77-v2.13,q78-v2.13,q79-v2.13,q8-v2.13,q80-v2.13,q81-v2.13,q82-v2.13,q83-v2.13,q84-v2.13,q85-v2.13,q86-v2.13,q87-v2.13,q88-v2.13,q89-v2.13,q9-v2.13,q90-v2.13,q91-v2.13,q92-v2.13,q93-v2.13,q94-v2.13,q95-v2.13,q96-v2.13,q97-v2.13,q98-v2.13,q99-v2.13,ss_max-v2.13',\
true],ActionOnFailure=CONTINUE --region <region>
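While the step runs, you can poll its status from the AWS CLI (a sketch; substitute the cluster ID, the step ID returned by add-steps, and your Region):

aws emr describe-step --cluster-id <cluster-id> --step-id <step-id> \
  --query 'Step.Status.State' --region <region>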

Summarize the results

After the step is complete, you can see the summarized benchmark result at s3://<bucket>/benchmark_run/timestamp=xxxx/summary.csv/xxx.csv in the same way as the previous run and compute the average and geometric mean of the query runtimes.

Clean up

To help prevent future charges, delete the resources you created by following the instructions provided in the Cleanup section of the GitHub repository.

Summary

Amazon EMR optimizes the Spark runtime when used with Iceberg tables, achieving 4.5x faster performance than open source Apache Spark 3.5.6 and Apache Iceberg 1.10.0 with Amazon EMR 7.12 on TPC-DS 3 TB, v2.13. This represents a significant advance over Amazon EMR 7.5, which delivered 3.6x faster performance, and closes the gap to Parquet performance on Amazon EMR so customers can use the benefits of Iceberg without a performance penalty.

We encourage you to stay up to date with the latest Amazon EMR releases to fully benefit from ongoing performance improvements.

To stay informed, subscribe to the RSS feed for the AWS Big Data Blog, where you can find updates on the Amazon EMR runtime for Spark and Iceberg, as well as configuration best practices and tuning recommendations.


About the authors

Atul Felix Payapilly is a software development engineer for Amazon EMR at Amazon Web Services.

Akshaya KP is a software development engineer for Amazon EMR at Amazon Web Services.

Hari Kishore Chaparala is a software development engineer for Amazon EMR at Amazon Web Services.

Giovanni Matteo is the Senior Manager for the Amazon EMR Spark and Iceberg team.
