Medidata’s journey to a modern lakehouse architecture on AWS


This post was co-authored by Mike Araujo, Principal Engineer at Medidata Solutions.

The life sciences industry is transitioning from fragmented, standalone tools toward integrated, platform-based solutions. Medidata, a Dassault Systèmes company, is building a next-generation data platform that addresses the complex challenges of modern clinical research. In this post, we show you how Medidata created a unified, scalable, real-time data platform that serves thousands of clinical trials worldwide with AWS services, Apache Iceberg, and a modern lakehouse architecture.

Challenges with legacy architecture

As the Medidata clinical data repository expanded, the team recognized the shortcomings of the legacy data solution in delivering quality data products to their customers across their growing portfolio of data offerings. Several data tenets began to erode. The following diagram shows Medidata’s legacy extract, transform, and load (ETL) architecture.

Built upon a series of scheduled batch jobs, the legacy system proved ill-equipped to provide a unified view of the data across the entire ecosystem. Batch jobs ran at different intervals, often requiring a sufficient scheduling buffer to ensure upstream jobs completed within the expected window. As the data volume expanded, the jobs and their schedules continued to inflate, introducing a latency window between ingestion and processing for dependent consumers. Different consumers operating from various underlying data services further magnified the problem, because pipelines had to be repeatedly built across a variety of data source stacks.

The expanding portfolio of pipelines began to overwhelm existing maintenance operations. With more operations, the opportunity for failure grew and recovery efforts became more complicated. Existing observability systems were inundated with operational data, and identifying the root cause of data quality issues became a multi-day endeavor. Increases in data volume required scaling considerations across the entire data estate.

Additionally, the proliferation of data pipelines and copies of the data across different technologies and storage systems necessitated expanding access controls with enhanced security features to ensure that only the right users had access to the subset of data to which they were permitted. Making sure access control changes were correctly propagated across all systems added a further layer of complexity for consumers and producers.

Solution overview

With the advent of Clinical Data Studio (Medidata’s unified data management and analytics solution for clinical trials) and Data Connect (Medidata’s data solution for acquiring, transforming, and exchanging electronic health record (EHR) data across healthcare organizations), Medidata introduced a new world of data discovery, analysis, and integration to the life sciences industry, powered by open source technologies and hosted on AWS. The following diagram illustrates the solution architecture.

Fragmented batch ETL jobs were replaced by real-time Apache Flink streaming pipelines, an open source, distributed engine for stateful processing, powered by Amazon Elastic Kubernetes Service (Amazon EKS), a fully managed Kubernetes service. The Flink jobs write to Apache Kafka running in Amazon Managed Streaming for Apache Kafka (Amazon MSK), a streaming data service that manages Kafka infrastructure and operations, before landing in Iceberg tables backed by the AWS Glue Data Catalog, a centralized metadata repository for data assets. From this collection of Iceberg tables, a central, single source of data is now accessible to a variety of consumers without additional downstream processing, alleviating the need for custom pipelines to meet the requirements of downstream consumers. Through these fundamental architectural changes, the Medidata team solved the issues presented by the legacy solution.
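The following is a minimal sketch of what such a pipeline can look like with PyFlink. The topic, schema, database, bucket, and table names are illustrative placeholders rather than Medidata’s actual configuration, and it assumes the Flink Kafka and Iceberg connector JARs are available on the classpath.

```python
# Minimal sketch: stream a Kafka (MSK) topic into an Iceberg table
# registered in the AWS Glue Data Catalog. All names are hypothetical.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Kafka source backed by Amazon MSK (JSON-encoded clinical events).
t_env.execute_sql("""
    CREATE TABLE trial_events (
        event_id   STRING,
        trial_id   STRING,
        payload    STRING,
        event_time TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'trial-events',
        'properties.bootstrap.servers' = '<msk-bootstrap-brokers>',
        'format' = 'json',
        'scan.startup.mode' = 'earliest-offset'
    )
""")

# Iceberg catalog backed by the AWS Glue Data Catalog and Amazon S3 storage.
t_env.execute_sql("""
    CREATE CATALOG glue_catalog WITH (
        'type' = 'iceberg',
        'catalog-impl' = 'org.apache.iceberg.aws.glue.GlueCatalog',
        'io-impl' = 'org.apache.iceberg.aws.s3.S3FileIO',
        'warehouse' = 's3://<lakehouse-bucket>/warehouse'
    )
""")

t_env.execute_sql("CREATE DATABASE IF NOT EXISTS glue_catalog.clinical")

t_env.execute_sql("""
    CREATE TABLE IF NOT EXISTS glue_catalog.clinical.trial_events_iceberg (
        event_id   STRING,
        trial_id   STRING,
        payload    STRING,
        event_time TIMESTAMP(3)
    )
""")

# Continuously write the stream into the Iceberg table.
t_env.execute_sql("""
    INSERT INTO glue_catalog.clinical.trial_events_iceberg
    SELECT event_id, trial_id, payload, event_time FROM trial_events
""")
```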

Data availability and consistency

With the introduction of the Flink jobs and Iceberg tables, the team was able to deliver a consistent view of their data across the Medidata data experience. Pipeline latency was reduced from days to minutes, helping Medidata customers realize a 99% performance gain from the data ingestion to the data analytics layers. Because of Iceberg’s interoperability, Medidata consumers saw the same view of the data regardless of where they accessed it, minimizing the need for consumer-driven custom pipelines because Iceberg could plug into existing consumers.
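As a hypothetical illustration of that interoperability (the catalog, database, table, and filter values are assumed placeholders), a downstream consumer could read the same Glue-cataloged Iceberg table directly with PyIceberg, with no custom pipeline in between:

```python
# Sketch: a downstream consumer reads the same Iceberg table through the
# Glue Data Catalog with PyIceberg -- no extra ETL copy required.
from pyiceberg.catalog import load_catalog
from pyiceberg.expressions import EqualTo

catalog = load_catalog("glue_catalog", **{"type": "glue"})
table = catalog.load_table("clinical.trial_events_iceberg")

# Every engine reading this table sees the same committed snapshots.
events = table.scan(row_filter=EqualTo("trial_id", "TRIAL-001")).to_arrow()
print(events.num_rows)
```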

Maintenance and durability

Iceberg’s interoperability provided a single copy of the data to satisfy their use cases, so the Medidata team could focus its observation and maintenance efforts on a subset of operations five times smaller than previously required. Observability was enhanced by tapping into the various metadata components and metrics exposed by Iceberg and the Data Catalog. Quality management transformed from cross-system traces and queries into a single assessment of unified pipelines, with the added benefit of point-in-time data queries thanks to the Iceberg snapshot feature. Data volume increases are handled with out-of-the-box scaling supported by the entire infrastructure stack and by AWS Glue Iceberg optimization features that include compaction, snapshot retention, and orphan file deletion, which provide a set-and-forget experience for addressing a number of common Iceberg frustrations, such as the small file problem, orphan file retention, and query performance.
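As a rough sketch of enabling those managed optimizations with the boto3 AWS Glue table optimizer API (the account ID, database, table, and role ARN are placeholders, not Medidata’s configuration):

```python
# Sketch: enable AWS Glue managed Iceberg optimizers (compaction, snapshot
# retention, orphan file deletion) on a cataloged table. Values are placeholders.
import boto3

glue = boto3.client("glue")

for optimizer_type in ("compaction", "retention", "orphan_file_deletion"):
    glue.create_table_optimizer(
        CatalogId="111122223333",
        DatabaseName="clinical",
        TableName="trial_events_iceberg",
        Type=optimizer_type,
        TableOptimizerConfiguration={
            "roleArn": "arn:aws:iam::111122223333:role/GlueOptimizerRole",
            "enabled": True,
        },
    )
```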

Security

With Iceberg at the center of its solution architecture, the Medidata team no longer had to spend time building custom access control layers with enhanced security features at each data integration point. Iceberg on AWS centralizes the authorization layer using familiar systems such as AWS Identity and Access Management (IAM), providing a single, durable control for data access. The data also remains entirely within the Medidata virtual private cloud (VPC), further reducing the risk of unintended disclosures.
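As a simplified, hypothetical sketch of that centralized model (all ARNs and names are placeholders, and production deployments would typically scope permissions further), a single IAM policy can cover both the Glue Data Catalog metadata and the underlying S3 data for a table:

```python
# Sketch: one IAM policy governing both Glue Data Catalog metadata access and
# the underlying S3 data for a single Iceberg table. All ARNs are placeholders.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CatalogReadOnly",
            "Effect": "Allow",
            "Action": ["glue:GetDatabase", "glue:GetTable", "glue:GetPartitions"],
            "Resource": [
                "arn:aws:glue:us-east-1:111122223333:catalog",
                "arn:aws:glue:us-east-1:111122223333:database/clinical",
                "arn:aws:glue:us-east-1:111122223333:table/clinical/trial_events_iceberg",
            ],
        },
        {
            "Sid": "TableDataReadOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::<lakehouse-bucket>",
                "arn:aws:s3:::<lakehouse-bucket>/warehouse/clinical/trial_events_iceberg/*",
            ],
        },
    ],
}

iam.create_policy(
    PolicyName="ClinicalIcebergReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
```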

Conclusion

In this post, we demonstrated how a legacy universe of consumer-driven custom ETL pipelines can be replaced with a scalable, high-performance streaming lakehouse. By placing Iceberg on AWS at the center of your data operations, you can provide a single source of data for your consumers.

To learn more about Iceberg on AWS, refer to Optimizing Iceberg tables and Using Apache Iceberg on AWS.


About the authors

Mike Araujo

Mike is a Principal Engineer at Medidata Solutions, working on building a next-generation data and AI platform for clinical data and trials. By using the power of open source technologies such as Apache Kafka, Apache Flink, and Apache Iceberg, Mike and his team have enabled the delivery of billions of clinical events and data transformations in near real time to downstream consumers, applications, and AI agents. His core skills focus on architecting and building big data and ETL solutions at scale, as well as their integration into agentic workflows.

Sandeep Adwankar

Sandeep is a Senior Product Manager at AWS who has driven feature launches across Amazon SageMaker, AWS Glue, and AWS Lake Formation. He has led initiatives in Amazon S3 Tables analytics, Iceberg compaction strategies, and AWS Glue Iceberg optimizations. His recent work focuses on generative AI and autonomous systems, including the AWS Glue Data Catalog model context protocol and Amazon Bedrock structured knowledge bases. Based in the California Bay Area, he works with customers around the globe to translate business and technical requirements into products that accelerate their business outcomes.

Ian Beatty

Ian is a Technical Account Manager at AWS, where he specializes in supporting independent software vendor (ISV) customers in the healthcare and life sciences (HCLS) and financial services industry (FSI) sectors. Based in the Rochester, NY area, Ian helps ISV customers navigate their cloud journey by maintaining resilient and optimized workloads on AWS. With over a decade of experience building on AWS since 2014, he brings deep technical expertise from his previous roles as an AWS Architect and DevSecOps team lead for SaaS ISVs before joining AWS more than 3 years ago.

Ashley Chen

Ashley is a Solutions Architect at AWS based in Washington, D.C. She helps independent software vendor (ISV) customers in the healthcare and life sciences industries, focusing on customer enablement, generative AI applications, and container workloads.
