With the rising adoption of open table formats like Apache Iceberg, Amazon Redshift continues to advance its capabilities for open format data lakes. In 2025, Amazon Redshift delivered a number of performance optimizations that improved query performance more than twofold for Iceberg workloads on Amazon Redshift Serverless, delivering exceptional performance and cost-effectiveness for your data lake workloads.
In this post, we describe some of the optimizations that led to these performance gains. Data lakes have become a foundation of modern analytics, helping organizations store vast amounts of structured and semi-structured data in cost-effective formats like Apache Parquet while maintaining flexibility through open table formats. This architecture creates unique performance optimization opportunities across the entire query processing pipeline.
Performance improvements
Our latest improvements span multiple areas of the Amazon Redshift SQL query processing engine, including vectorized scanners that accelerate execution, optimal query plans powered by just-in-time (JIT) runtime statistics, distributed Bloom filters, and new decorrelation rules.
The following chart summarizes the performance improvements achieved so far in 2025, as measured by industry-standard 10 TB TPC-DS and TPC-H benchmarks run on Iceberg tables on an 88 RPU Redshift Serverless endpoint.

Find the best performance for your workloads
The performance results presented in this post are based on benchmarks derived from the industry-standard TPC-DS and TPC-H benchmarks, and have the following characteristics:
- The schema and data of the Iceberg tables are used unmodified from TPC-DS and TPC-H. Tables are partitioned to reflect real-world data organization patterns.
- The queries are generated using the official TPC-DS and TPC-H kits, with query parameters generated using the default random seed of the kits.
- The TPC-DS test consists of all 99 TPC-DS SELECT queries. It doesn't include the maintenance and throughput steps. The TPC-H test consists of all 22 TPC-H SELECT queries.
- Benchmarks are run out of the box: no manual tuning or statistics collection is done for the workloads. A minimal sketch of a query-submission harness follows this list.
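To make the methodology concrete, the following Python sketch shows one way to submit a directory of pre-generated benchmark queries to a Redshift Serverless workgroup through the Redshift Data API and record wall-clock timings. The workgroup name, database name, and query directory are illustrative placeholders, not the values used in our benchmarks, and valid AWS credentials are assumed.

```python
# Hedged sketch: submit benchmark queries to Redshift Serverless via the
# Redshift Data API and record client-side wall-clock timings.
import time
from pathlib import Path

import boto3

client = boto3.client("redshift-data")

WORKGROUP = "iceberg-benchmark-wg"   # hypothetical Serverless workgroup name
DATABASE = "tpcds_iceberg"           # hypothetical database containing Iceberg tables
QUERY_DIR = Path("queries")          # directory of generated TPC-DS/TPC-H .sql files

def run_query(sql: str) -> float:
    """Submit one statement, poll until it completes, and return elapsed seconds."""
    stmt = client.execute_statement(WorkgroupName=WORKGROUP, Database=DATABASE, Sql=sql)
    start = time.monotonic()
    while True:
        desc = client.describe_statement(Id=stmt["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            if desc["Status"] != "FINISHED":
                raise RuntimeError(desc.get("Error", desc["Status"]))
            return time.monotonic() - start
        time.sleep(1)

for sql_file in sorted(QUERY_DIR.glob("*.sql")):
    elapsed = run_query(sql_file.read_text())
    print(f"{sql_file.name}: {elapsed:.1f}s")
```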
In the following sections, we discuss key performance improvements delivered in 2025.
Faster data lake scans
To improve data lake read performance, the Amazon Redshift team built an entirely new scan layer designed from the ground up for data lakes. This new scan layer includes a purpose-built I/O subsystem with smart prefetch capabilities that reduce data access latency. Additionally, the new scan layer is optimized for processing Apache Parquet files, the most commonly used file format for Iceberg, through fast vectorized scans.
This new scan layer also includes sophisticated data pruning mechanisms that operate at both the partition and file level, dramatically reducing the amount of data that needs to be scanned. This pruning capability works in concert with the smart prefetch system, creating a coordinated approach that maximizes efficiency throughout the entire data retrieval process.
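The scan layer itself is internal to the engine, but the idea behind file-level pruning can be illustrated with a short Python sketch that uses PyArrow to inspect Parquet footer statistics and skip files whose min/max ranges cannot satisfy a predicate. The file paths and predicate column below are hypothetical placeholders, not Redshift internals.

```python
# Conceptual sketch of file-level pruning using Parquet footer statistics:
# files whose min/max ranges cannot satisfy the predicate are never read.
import pyarrow.parquet as pq

def file_may_match(path: str, column: str, lo, hi) -> bool:
    """Return False only if footer stats prove no row can satisfy
    `column BETWEEN lo AND hi`; such files can be skipped entirely."""
    meta = pq.read_metadata(path)
    col_idx = meta.schema.names.index(column)
    for rg in range(meta.num_row_groups):
        stats = meta.row_group(rg).column(col_idx).statistics
        if stats is None or not stats.has_min_max:
            return True  # no statistics: must scan the file to be safe
        if stats.min <= hi and stats.max >= lo:
            return True  # this row group may contain matching rows
    return False

# Hypothetical data files and a date-key range predicate for illustration.
data_files = ["part-0001.parquet", "part-0002.parquet"]
to_scan = [f for f in data_files
           if file_may_match(f, "ss_sold_date_sk", 2450815, 2450845)]
print(f"Scanning {len(to_scan)} of {len(data_files)} files")
```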
JIT ANALYZE for Iceberg tables
Unlike traditional data warehouses, data lakes often lack comprehensive table- and column-level statistics about the underlying data, making it challenging for the planner and optimizer in the query engine to choose the best execution plan up front. Sub-optimal plans can lead to slower and less predictable performance.
JIT ANALYZE is a new Amazon Redshift feature that automatically collects and uses statistics for Iceberg tables during query execution, minimizing manual statistics collection while giving the planner and optimizer in the query engine the information they need to generate optimal query plans. The system uses intelligent heuristics to identify queries that can benefit from statistics, performs fast file-level sampling using Iceberg metadata, and extrapolates population statistics using advanced estimation techniques.
JIT ANALYZE delivers out-of-the-box performance nearly equivalent to queries that have pre-calculated statistics, while providing the foundation for many other performance optimizations. Some TPC-DS queries ran 50 times faster with these statistics.
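As an illustration only, the following Python sketch mimics the shape of the approach described above: take exact row counts from per-file metadata, sample a bounded number of files, and extrapolate a table-level estimate for the optimizer. The per-file records, the sampling function, and the simple linear extrapolation are simplified stand-ins for what the feature actually does.

```python
# Hedged sketch of the sampling-and-extrapolation idea behind JIT ANALYZE
# (not the actual implementation). Per-file records are synthetic stand-ins
# for the per-file metadata that Iceberg manifests track.
import random

data_files = [
    {"path": f"part-{i:04d}.parquet", "record_count": 1_000_000 + i * 10_000}
    for i in range(400)
]

# Row counts come directly from metadata, so the total is exact and cheap.
total_rows = sum(f["record_count"] for f in data_files)

def sample_distinct_ratio(path: str) -> float:
    """Placeholder for sampling a file to estimate the fraction of distinct
    values in a column; simulated here with a random ratio."""
    return random.uniform(0.05, 0.08)

# Bounded sampling: inspect only a handful of files regardless of table size.
sampled = random.sample(data_files, k=8)
avg_ratio = sum(sample_distinct_ratio(f["path"]) for f in sampled) / len(sampled)

# Naive linear extrapolation to a population-level NDV estimate; real systems
# use more sophisticated estimators, but the planner consumes the same kind
# of number when choosing join orders and data distribution.
estimated_ndv = int(avg_ratio * total_rows)
print(f"rows={total_rows:,}  estimated NDV~{estimated_ndv:,}")
```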
Query optimizations
For correlated subqueries such as those that contain EXISTS/IN clauses, Amazon Redshift uses decorrelation rules to rewrite the queries. In many cases, these decorrelation rules weren't producing optimal plans, resulting in query execution performance regressions. To address this, we introduced a new internal join type, SEMI JOIN, and a new decorrelation rule based on this join type. This decorrelation rule helps produce more efficient plans, thereby improving execution performance. For instance, one of the TPC-DS queries that contains an EXISTS clause ran 7 times faster with this optimization.
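The benefit of the semi-join rewrite can be seen in a small Python analogy (this is not planner code): a naively evaluated correlated EXISTS rescans the inner relation for every outer row, whereas a hash semi join builds the inner side once, probes it per outer row, and emits each qualifying outer row at most once. The table and column names echo TPC-DS, but the data is synthetic.

```python
# Illustrative analogy only: correlated EXISTS vs. hash semi join.
customers = [{"c_customer_sk": i} for i in range(2_000)]
store_sales = [{"ss_customer_sk": i % 500} for i in range(5_000)]

def exists_correlated():
    # Naive correlated EXISTS: rescans store_sales for every customer,
    # O(|outer| * |inner|) comparisons in the worst case.
    return [c for c in customers
            if any(s["ss_customer_sk"] == c["c_customer_sk"] for s in store_sales)]

def exists_semi_join():
    # Semi join: build a hash set over the inner join keys once, then probe;
    # each qualifying customer is emitted at most once regardless of matches.
    build_side = {s["ss_customer_sk"] for s in store_sales}
    return [c for c in customers if c["c_customer_sk"] in build_side]

assert exists_correlated() == exists_semi_join()
print(f"{len(exists_semi_join())} customers have at least one store sale")
```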
We also introduced a distributed Bloom filter optimization for data lake workloads. Distributed Bloom filters are built locally on each compute node and then distributed to every other node. Distributing Bloom filters can significantly reduce the amount of data that needs to be sent over the network for a join by filtering out non-matching tuples earlier. This provides substantial performance gains for large, complex data lake queries that process and join large amounts of data.
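The following simplified Python sketch, which is not Redshift's implementation, shows the general mechanism: each node builds a Bloom filter over its local build-side join keys, the filters are OR-merged and shared, and probe-side rows that cannot possibly match are dropped before they are shuffled across the network.

```python
# Simplified sketch of the distributed Bloom filter idea.
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1 << 16, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key) -> list[int]:
        digest = hashlib.sha256(repr(key).encode()).digest()
        return [int.from_bytes(digest[i * 4:(i + 1) * 4], "big") % self.size
                for i in range(self.num_hashes)]

    def add(self, key) -> None:
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_contain(self, key) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

    def merge(self, other: "BloomFilter") -> None:
        # OR-merging the per-node filters yields one filter covering all keys.
        self.bits = bytearray(a | b for a, b in zip(self.bits, other.bits))

# Hypothetical per-node build-side join keys.
node_filters = []
for node_keys in ([1, 5, 9], [2, 5, 42]):
    bf = BloomFilter()
    for k in node_keys:
        bf.add(k)
    node_filters.append(bf)

merged = node_filters[0]
for bf in node_filters[1:]:
    merged.merge(bf)

# Probe-side rows that fail the filter never leave their node, so far less
# data crosses the network before the join executes.
probe_rows = [1, 3, 42, 100]
print([r for r in probe_rows if merged.may_contain(r)])
```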
Conclusion
These performance improvements for Iceberg workloads represent a major leap forward in Redshift data lake capabilities. By focusing on out-of-the-box performance, we've made it straightforward to achieve exceptional query performance without complex tuning or optimization.
These improvements demonstrate the power of deep technical innovation combined with practical customer focus. JIT ANALYZE reduces the operational burden of statistics management while providing the optimizer with the information it needs for query planning. The new Redshift data lake query engine on Redshift Serverless was rewritten from the ground up for best-in-class scan performance and lays the groundwork for more advanced performance optimizations. Semi-join optimizations address some of the most challenging query patterns in analytical workloads. You can run complex analytical workloads on your Iceberg data and get fast, predictable query performance.
Amazon Redshift is committed to being the best analytics engine for data lake workloads, and these performance optimizations represent our continued investment in that goal.
To learn more about Amazon Redshift and its performance capabilities, visit the Amazon Redshift product page. To get started, you can try Amazon Redshift Serverless and start querying data in minutes without having to set up and manage data warehouse infrastructure. For more details on performance best practices, see the Amazon Redshift Database Developer Guide. To stay up to date with the latest developments in Amazon Redshift, subscribe to the What's New in Amazon Redshift RSS feed.
Special thanks to this post's contributors: Martin Milenkoski, Gerard Louw, Konrad Werblinski, Mengchu Cai, Mehmet Bulut, Mohammed Alkateb, and Sanket Hase.
