Hundreds of thousands of customers build artificial intelligence and machine learning (AI/ML) and analytics applications on AWS, frequently transforming data through multiple stages for improved query performance: from raw data to processed datasets to final analytical tables. Data engineers must solve complex problems, including detecting what data has changed in base tables, writing and maintaining transformation logic, scheduling and orchestrating workflows across dependencies, provisioning and managing compute infrastructure, and troubleshooting failures while monitoring pipeline health. Consider an ecommerce company where data engineers need to continuously merge clickstream logs with orders data for analytics. Each transformation requires building robust change detection mechanisms, writing complex joins and aggregations, coordinating multiple workflow steps, scaling compute resources appropriately, and maintaining operational oversight, all while supporting data quality and pipeline reliability. This complexity demands months of dedicated engineering effort and ongoing maintenance, making data transformation costly and time-intensive for organizations seeking to unlock insights from their data.
To address these challenges, AWS announced a new materialized view capability for Apache Iceberg tables in the AWS Glue Data Catalog. The new materialized view capability simplifies data pipelines and accelerates data lake query performance. A materialized view is a managed table in the AWS Glue Data Catalog that stores the pre-computed results of a query in Iceberg format and is incrementally updated to reflect changes to the underlying datasets. This alleviates the need to build and maintain complex data pipelines to generate transformed datasets and accelerates query performance. Apache Spark engines across Amazon Athena, Amazon EMR, and AWS Glue support the new materialized views and intelligently rewrite queries to use them, speeding up performance while reducing compute costs.
In this post, we show you how Iceberg materialized views work and how to get started.
How Iceberg materialized views work
Iceberg materialized views offer a simple, managed solution built on familiar SQL syntax. Instead of building complex pipelines, you can create materialized views using standard SQL queries from Spark, transforming data with aggregates, filters, and joins without writing custom data pipelines. Change detection, incremental updates, and monitoring of source tables are automatically handled in the AWS Glue Data Catalog, which refreshes materialized views as new data arrives, alleviating the need for manual pipeline orchestration. Data transformations run on fully managed compute infrastructure, removing the burden of provisioning, scaling, or maintaining servers.
The resulting pre-computed data is stored as Iceberg tables in an Amazon Simple Storage Service (Amazon S3) general purpose bucket, or in Amazon S3 Tables buckets within your account, making transformed data immediately accessible to multiple query engines, including Athena, Amazon Redshift, and the AWS optimized Spark runtime. Spark engines across Athena, Amazon EMR, and AWS Glue support automatic query rewrite functionality that intelligently uses materialized views, delivering automatic performance improvements for data processing jobs and interactive notebook queries.
In the following sections, we walk through the steps to create, query, and refresh materialized views.
Prerequisites
To follow along with this post, you must have an AWS account.
To run the instructions on Amazon EMR, complete the following steps to configure the cluster:
- Launch an Amazon EMR cluster with release 7.12.0 or higher.
- SSH into the primary node of your Amazon EMR cluster, and run the following command to start a Spark application with the required configurations:
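The exact command isn't reproduced above; the following is a minimal sketch of a spark-sql invocation with the Iceberg session extensions and an Iceberg catalog backed by the AWS Glue Data Catalog. The catalog name glue_catalog matches the identifiers used later in this post; the account ID and warehouse bucket are placeholders.

```bash
spark-sql \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.glue_catalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.glue_catalog.type=glue \
  --conf spark.sql.catalog.glue_catalog.glue.id=<your-aws-account-id> \
  --conf spark.sql.catalog.glue_catalog.warehouse=s3://<your-bucket>/warehouse/
```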
To run the instructions on AWS Glue for Spark, complete the following steps to configure the job:
- Create an AWS Glue version 5.1 job or higher.
- Configure the following job parameter:
  - Key: --conf
  - Value: spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
- Configure your job with the following script:
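The original script isn't shown above; the following is a minimal sketch of a Glue for Spark job skeleton, under the assumption that SQL statements are issued through spark.sql() as described in the next step. The database name is a placeholder matching the identifiers used later in this post.

```python
from pyspark.context import SparkContext
from awsglue.context import GlueContext

# Minimal Glue for Spark job skeleton: obtain a Spark session from the
# GlueContext, then issue Spark SQL statements against the Data Catalog.
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session

# Example: list tables in the database used throughout this post.
spark.sql("SHOW TABLES IN glue_catalog.iceberg_mv").show()
```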
- Run the following queries using Spark SQL to set up a base table. In AWS Glue, you can run them through spark.sql("QUERY STATEMENT").
In the next sections, we create a materialized view on this base table.
If you want to store your materialized views in Amazon S3 Tables instead of a general purpose Amazon S3 bucket, refer to Appendix 1 at the end of this post for the configuration details.
Create a materialized view
To create a materialized view, run the following command:
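The original statement isn't reproduced above; the following is a hedged sketch that matches the view name (glue_catalog.iceberg_mv.mv) and the pre-computed columns (mv_order_count, mv_total_amount) referenced in the EXPLAIN output later in this post:

```sql
-- Pre-compute per-customer order counts and totals from the base table.
CREATE MATERIALIZED VIEW glue_catalog.iceberg_mv.mv AS
SELECT
    customer_id,
    COUNT(*)    AS mv_order_count,
    SUM(amount) AS mv_total_amount
FROM glue_catalog.iceberg_mv.base_tbl
GROUP BY customer_id;
```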
After you create a materialized view, Spark's in-memory metadata cache needs time to populate with information about the new materialized view. During this cache population period, queries against the base table run normally without using the materialized view. After the cache is fully populated (typically within tens of seconds), Spark automatically detects that the materialized view can satisfy the query and rewrites it to use the pre-computed materialized view instead, improving performance.
To see this behavior, run the following EXPLAIN command immediately after creating the materialized view:
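The exact query isn't shown above; the following is an assumed example consistent with the plans described below: an overall aggregation over the base table that the optimizer can answer from the materialized view by summing its pre-computed values.

```sql
EXPLAIN
SELECT COUNT(*) AS order_count, SUM(amount) AS total_amount
FROM glue_catalog.iceberg_mv.base_tbl;
```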
The following output shows the initial result before cache population:
In this initial execution plan, Spark scans base_tbl directly (BatchScan glue_catalog.iceberg_mv.base_tbl) and runs aggregations (COUNT and SUM) on the raw data. This is the behavior before the materialized view metadata cache is populated.
After waiting roughly tens of seconds for the metadata cache to populate, run the same EXPLAIN command again. The following output shows the primary differences in the query optimization plan after cache population:
After the cache is populated, Spark scans the materialized view (BatchScan glue_catalog.iceberg_mv.mv) instead of the base table. The query has been automatically rewritten to read from the pre-computed aggregated data in the materialized view. The output specifically shows that the aggregation functions now simply sum the pre-computed values (sum(mv_order_count) and sum(mv_total_amount)) rather than recalculating COUNT and SUM from the raw data.
Create a materialized view with a scheduled automatic refresh
By default, a newly created materialized view contains the initial query results. It's not automatically updated when the underlying base table data changes. To keep your materialized view synchronized with the base table data, you can configure automatic refresh schedules. To enable automatic refresh, use the REFRESH EVERY clause when creating the materialized view. This clause accepts a time interval and unit, so you can specify how frequently the materialized view is updated.
The following example creates a materialized view that automatically refreshes every 24 hours:
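A hedged sketch of the statement, assuming the REFRESH EVERY clause described above; the view name mv_scheduled is a placeholder:

```sql
CREATE MATERIALIZED VIEW glue_catalog.iceberg_mv.mv_scheduled
REFRESH EVERY 24 HOURS
AS
SELECT
    customer_id,
    COUNT(*)    AS mv_order_count,
    SUM(amount) AS mv_total_amount
FROM glue_catalog.iceberg_mv.base_tbl
GROUP BY customer_id;
```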
You can configure the refresh interval using any of the following time units: SECONDS, MINUTES, HOURS, or DAYS. Choose an appropriate interval based on your data freshness requirements and query patterns.
If you prefer more control over when your materialized view updates, or need to refresh it outside of the scheduled intervals, you can trigger manual refreshes at any time. We provide detailed instructions on manual refresh options, including full and incremental refresh, later in this post.
Query a materialized view
To query a materialized view on your Amazon EMR cluster and retrieve its aggregated data, you can use a standard SELECT statement:
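For example, using the view created earlier in this post:

```sql
SELECT * FROM glue_catalog.iceberg_mv.mv
ORDER BY customer_id;
```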
This query retrieves all rows from the materialized view. The output shows the aggregated customer order counts and total amounts. The result displays three customers with their respective metrics:
Additionally, you can query the same materialized view from Athena SQL. The following screenshot shows the same query run on Athena and the resulting output.

Refresh a materialized view
You can refresh materialized views using two refresh types: full refresh or incremental refresh. Full refresh recomputes the entire materialized view from all base table data. Incremental refresh processes only the changes since the last refresh. Full refresh is ideal when you need consistency or after significant data changes. Incremental refresh is preferred when you need immediate updates. The following examples show both refresh types.
To use full refresh, complete the following steps (a combined SQL sketch follows the list):
- Insert three new records into the base table to simulate new data arriving.
- Query the materialized view to verify that it still shows the old aggregated values.
- Run a full refresh of the materialized view.
- Query the materialized view again to verify that the aggregated values now include the new records.
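The original statements aren't reproduced above; the following hedged sketch walks through the four steps, assuming the sample schema from earlier and a trailing FULL keyword on the REFRESH MATERIALIZED VIEW command:

```sql
-- Step 1: insert three new records to simulate new data arriving.
INSERT INTO glue_catalog.iceberg_mv.base_tbl VALUES
    ('c001', 'o005', 30.0),
    ('c002', 'o006', 20.0),
    ('c003', 'o007', 50.0);

-- Step 2: the materialized view still shows the old aggregated values.
SELECT * FROM glue_catalog.iceberg_mv.mv ORDER BY customer_id;

-- Step 3: run a full refresh, recomputing the view from all base table data.
REFRESH MATERIALIZED VIEW glue_catalog.iceberg_mv.mv FULL;

-- Step 4: the aggregated values now include the new records.
SELECT * FROM glue_catalog.iceberg_mv.mv ORDER BY customer_id;
```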
To use incremental refresh, complete the following steps (a combined SQL sketch follows the list):
- Enable incremental refresh by setting the Spark configuration properties.
- Insert two additional records into the base table.
- Run an incremental refresh using the REFRESH command without the FULL clause. To verify whether incremental refresh is enabled, refer to Appendix 2 at the end of this post.
- Query the materialized view to confirm the incremental changes are reflected in the aggregated results.
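A hedged sketch of the steps; the configuration property in step 1 is a hypothetical placeholder, since the exact property names aren't reproduced above:

```sql
-- Step 1: enable incremental refresh via Spark configuration. The property
-- name below is a hypothetical placeholder, not the documented one.
-- SET spark.sql.materializedView.incrementalRefresh.enabled=true;

-- Step 2: insert two additional records into the base table.
INSERT INTO glue_catalog.iceberg_mv.base_tbl VALUES
    ('c001', 'o008', 10.0),
    ('c002', 'o009', 35.0);

-- Step 3: run an incremental refresh (REFRESH without the FULL clause).
REFRESH MATERIALIZED VIEW glue_catalog.iceberg_mv.mv;

-- Step 4: confirm the incremental changes are reflected.
SELECT * FROM glue_catalog.iceberg_mv.mv ORDER BY customer_id;
```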
In addition to using Spark SQL, you can also trigger manual refreshes through AWS Glue APIs when you need updates outside your scheduled intervals. Run the following AWS CLI command:
The AWS Lake Formation console displays the refresh history for API-triggered updates. Open your materialized view to see the refresh type (INCREMENTAL or FULL), start and end time, status, and so on:

You have learned how to use Iceberg materialized views to make your data processing and queries efficient. You created a materialized view using Spark on Amazon EMR, queried it from both Amazon EMR and Athena, and used two refresh mechanisms: full refresh and incremental refresh. Iceberg materialized views let you transform and optimize your data pipelines effortlessly.
Considerations
There are important aspects to consider for optimal usage of the capability:
- We introduced new SQL syntax to manage materialized views in the AWS optimized Spark runtime engine only. These new SQL commands are available in Spark version 3.5.6 and above across Athena, Amazon EMR, and AWS Glue. Open source Spark is not supported.
- Materialized views are eventually consistent with base tables. When source tables change, the materialized views are updated through background refresh processes as defined by users in the refresh schedule at creation. During the refresh window, queries directly accessing materialized views might see stale data. However, customers who need immediate access to the most up-to-date datasets can run a manual refresh with a simple REFRESH MATERIALIZED VIEW SQL command.
Clean up
To avoid incurring future charges, clean up the resources you created during this walkthrough:
- Run the following commands to delete the materialized view and tables (see the sketch after this list).
- For Amazon EMR, terminate the Amazon EMR cluster.
- For AWS Glue, delete the AWS Glue job.
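A hedged cleanup sketch using the names from this post; drop the scheduled view only if you created it in the automatic refresh section:

```sql
DROP MATERIALIZED VIEW glue_catalog.iceberg_mv.mv;
DROP MATERIALIZED VIEW glue_catalog.iceberg_mv.mv_scheduled;
DROP TABLE glue_catalog.iceberg_mv.base_tbl;
```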
Conclusion
This post demonstrated how Iceberg materialized views facilitate efficient data lake operations on AWS. The new materialized view capability simplifies data pipelines and improves query performance by storing pre-computed results that are automatically updated as base tables change. You can create materialized views using familiar SQL syntax, using both full and incremental refresh mechanisms to maintain data consistency. This solution alleviates the need for complex pipeline maintenance while providing seamless integration with AWS services like Athena, Amazon EMR, and AWS Glue. The automatic query rewrite functionality further optimizes performance by intelligently utilizing materialized views when applicable, making it a powerful tool for organizations looking to streamline their data transformation workflows and accelerate query performance.
Appendix 1: Spark configuration to use Amazon S3 Tables for storing Apache Iceberg materialized views
As mentioned earlier in this post, materialized views can be stored as Iceberg tables in Amazon S3 Tables buckets within your account. When you want to use Amazon S3 Tables as the storage location for your materialized views instead of a general purpose Amazon S3 bucket, you must configure Spark with the Amazon S3 Tables catalog.
The difference from the standard AWS Glue Data Catalog configuration shown in the prerequisites section is the glue.id parameter format. For Amazon S3 Tables, the glue.id value includes the S3 Tables catalog path instead of just the account ID:
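A hedged sketch of the configuration, assuming the account-id:s3tablescatalog/table-bucket-name catalog ID format used by the Amazon S3 Tables integration with the Glue Data Catalog; the catalog name and all bracketed values are placeholders:

```bash
spark-sql \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.s3tables_catalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.s3tables_catalog.type=glue \
  --conf spark.sql.catalog.s3tables_catalog.glue.id=<account-id>:s3tablescatalog/<table-bucket-name>
```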
After you configure Spark with these settings, you can create and manage materialized views using the same SQL commands shown in this post, and the materialized views are stored in your Amazon S3 Tables bucket.
Appendix 2: Verify refreshing a materialized view with Spark SQL
Run SHOW TBLPROPERTIES in Spark SQL to check which refresh method was used:
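For example, against the view used in this post (the specific table property that records the refresh type isn't reproduced here):

```sql
SHOW TBLPROPERTIES glue_catalog.iceberg_mv.mv;
```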
About the authors
