
Analyzing your data catalog: Query SageMaker Catalog metadata with SQL


As your data and machine learning (ML) assets grow, tracking which assets lack documentation or monitoring asset registration trends becomes difficult without custom reporting infrastructure. You need visibility into your catalog's health without the overhead of managing ETL jobs. The metadata export feature of Amazon SageMaker Catalog provides this capability. Converting catalog asset metadata into Apache Iceberg tables stored in Amazon S3 Tables removes the need to build and maintain custom ETL pipelines, so your team can query asset metadata directly using standard SQL tools. You can now answer governance questions about asset registration trends, classification status, and metadata completeness using standard SQL queries through tools like Amazon Athena, Amazon SageMaker Unified Studio notebooks, and BI systems.

This automated approach reduces ETL development time and gives your team visibility into catalog health, compliance gaps, and asset lifecycle patterns. The exported tables include technical metadata, business metadata, project ownership details, and timestamps, partitioned by snapshot date to enable time travel queries and historical analysis. Teams can use this capability to proactively monitor catalog health, identify gaps in documentation, track asset lifecycle patterns, and make sure that governance policies are consistently applied.

How metadata export works

After you enable the metadata export feature, it runs automatically on a daily schedule:

  1. SageMaker Catalog creates the infrastructure — An Amazon Simple Storage Service (Amazon S3) table bucket named aws-sagemaker-catalog is created with an asset_metadata namespace and an empty asset table.
  2. Daily snapshots are captured — A scheduled job runs once per day around midnight (local time per AWS Region) to export updated asset metadata.
  3. Metadata is structured and partitioned — The export captures technical metadata (resource_id, resource_type), business metadata (asset_name, business_description), project ownership details, and timestamps, partitioned by snapshot_date for query performance.
  4. Data becomes queryable — Within 24 hours, the asset table appears in Amazon SageMaker Unified Studio under the aws-sagemaker-catalog bucket and becomes accessible through Amazon Athena, Studio notebooks, or external BI tools.
  5. Teams query using standard SQL — Data teams can now answer questions like "How many assets were registered last month?" or "Which assets lack business descriptions?" without building custom ETL pipelines.

The export evaluates catalog assets and their metadata properties in the Region, converting them into Apache Iceberg table format. The data flows into downstream analytics operations immediately, with no separate ETL or batch processes to maintain. The exported metadata becomes part of a queryable data lake that supports time-travel queries and historical analysis.
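The snapshot model behind those steps can be sketched in a few lines. The following uses an in-memory SQLite table as a simplified stand-in for the exported Iceberg table (the asset IDs, names, and dates are invented for illustration): each daily export contributes a full set of rows tagged with their snapshot date, so reconstructing a historical catalog state is just a filter on that column.

```python
import sqlite3

# Simplified stand-in for the exported asset table: every daily export
# appends a complete snapshot of the catalog, tagged with its snapshot date.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE asset (asset_id TEXT, asset_name TEXT,"
    " business_description TEXT, snapshot_date TEXT)"
)
conn.executemany("INSERT INTO asset VALUES (?, ?, ?, ?)", [
    ("a-1", "orders",    None,           "2026-03-07"),
    ("a-1", "orders",    "Order facts",  "2026-03-08"),  # description added on day 2
    ("a-2", "customers", "Customer dim", "2026-03-08"),  # registered on day 2
])

def catalog_on(day):
    """Return the catalog state as of a given snapshot date."""
    cur = conn.execute(
        "SELECT asset_id, business_description FROM asset WHERE snapshot_date = ?",
        (day,),
    )
    return dict(cur.fetchall())

print(catalog_on("2026-03-07"))  # {'a-1': None}
print(catalog_on("2026-03-08"))  # {'a-1': 'Order facts', 'a-2': 'Customer dim'}
```

Because each snapshot is complete, answering "what did this asset look like on date X" never requires replaying change logs.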

In this post, we demonstrate how you can use the metadata export capability in Amazon SageMaker Catalog and perform analytics on these tables. We explore the following specific use cases:

  • Audit historical changes to investigate what an asset looked like at a specific point in time.
  • Monitor asset growth to view how the data catalog has grown over the last 30 days.
  • Track metadata improvements to see which assets gained descriptions or ownership over time.

Solution overview

AWS Cloud architecture diagram showing data pipeline from Amazon SageMaker Catalog to Amazon S3 Tables with daily export, connecting to query engines including Amazon Athena, Amazon Redshift, and Apache Spark

Figure 1 – SageMaker Catalog export to S3 Tables

The architecture consists of three key components:

  1. Amazon SageMaker Catalog exports asset metadata daily to Amazon S3.
  2. S3 Tables stores metadata as Apache Iceberg tables in the aws-sagemaker-catalog bucket with ACID compliance and time travel.
  3. Query engines (Amazon Athena, Amazon Redshift, and Apache Spark) access metadata using standard SQL from the asset_metadata.asset table.

What metadata is exposed?

SageMaker Catalog exports the following metadata in the asset_metadata.asset table:

Metadata Type | Fields | Description
Technical metadata | resource_id, resource_type_enum, account_id, region | Resource identifiers (ARN), types (GlueTable, RedshiftTable, S3Collection), and location
Namespace hierarchy | catalog, namespace, resource_name | Organizational structure for assets
Business metadata | asset_name, business_description | Human-readable names and descriptions
Ownership | extended_metadata['owningEntityId'] | Asset ownership information
Timestamps | asset_created_time, asset_updated_time, snapshot_time | Creation, last-update, and snapshot times
Custom metadata | extended_metadata['form-name.field-name'] | User-defined metadata forms as key-value pairs

The snapshot_time column supports point-in-time analysis and queries over historical catalog states.
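Because extended_metadata is a key-value map, custom form fields flatten into 'form-name.field-name' keys alongside built-in entries like owningEntityId. A small Python illustration of that layout (the DataQualityForm form and its fields are hypothetical; only owningEntityId appears in the table above):

```python
# Hypothetical exported row: custom metadata forms flatten into the
# extended_metadata map as 'form-name.field-name' keys. The form name
# and field names below are invented for illustration.
row = {
    "asset_name": "orders",
    "extended_metadata": {
        "owningEntityId": "project-123",              # built-in ownership entry
        "DataQualityForm.freshnessSla": "24h",        # custom form field
        "DataQualityForm.steward": "analytics-team",  # custom form field
    },
}

# Reading a field is a plain map lookup, mirroring the SQL form:
#   SELECT extended_metadata['DataQualityForm.steward'] FROM asset_metadata.asset
owner = row["extended_metadata"]["owningEntityId"]
steward = row["extended_metadata"].get("DataQualityForm.steward")
print(owner, steward)  # project-123 analytics-team
```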

Prerequisites

To follow along with this post, you must have the following:

For SageMaker Unified Studio domain setup instructions, refer to the SageMaker Unified Studio Getting started guide.

After you complete the prerequisites, complete the following steps.

  1. Add this policy to your IAM user or role to enable metadata export. If using SageMaker Unified Studio to query the catalog, add this policy to the AmazonSageMakerAdminIAMExecutionRole managed role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "datazone:GetDataExportConfiguration",
        "datazone:PutDataExportConfiguration"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3tables:CreateTableBucket",
        "s3tables:PutTableBucketPolicy"
      ],
      "Resource": "arn:aws:s3tables:*:*:bucket/aws-sagemaker-catalog"
    }
  ]
}
  2. Grant describe and select permissions for SageMaker Catalog with AWS Lake Formation. This step can be performed in the AWS Lake Formation console.
    1. Choose Permissions -> Data permissions and choose Grant.

      AWS Lake Formation Grant Permissions interface showing principal type selection with IAM users and roles option selected and AmazonSageMakerAdminIAMExecutionRole assigned

      Figure 2 – AWS Lake Formation grant permission

    2. Under Principal type, select Principals, IAM users and roles and choose the AWS managed AmazonSageMakerAdminIAMExecutionRole execution role.
    3. Choose Named Data Catalog resources.
    4. Under Catalogs, search for and select :s3tablescatalog/aws-sagemaker-catalog.
    5. Under Databases, select the asset_metadata database.
      AWS Lake Formation Grant Permissions page showing Named Data Catalog resources method with s3tablescatalog/aws-sagemaker-catalog selected, asset_metadata database, and asset table configured

      Figure 3 – AWS Lake Formation catalog, database, and table

      AWS Lake Formation Grant Permissions interface showing table permissions with Select and Describe checked, grantable permissions section, and All data access radio button selected

      Figure 4 – AWS Lake Formation grant permission

    6. For Tables, select asset.
    7. Under Table permissions, check Select and Describe.
    8. Choose Grant to save the permissions.

Enable data export using the AWS CLI

Configure metadata export using the PutDataExportConfiguration API. The Amazon DataZone service automatically creates an S3 table bucket named aws-sagemaker-catalog with an asset_metadata namespace, and schedules a daily export job. Asset metadata is exported once daily around midnight local time per AWS Region.

The SageMaker domain identifier is available on the domain detail page in the AWS Management Console. Accessing the asset table through the S3 Tables console or the Data tab in SageMaker Unified Studio can take up to 24 hours.

AWS CLI command to enable SageMaker Catalog export:

aws datazone put-data-export-configuration --domain-identifier  --region  --enable-export

Use this AWS CLI command to validate that the configuration is enabled:

aws datazone get-data-export-configuration --domain-identifier  --region 
{
    "isExportEnabled": true,
    "status": "COMPLETED",
    "s3TableBucketArn": "arn:aws:s3tables:::bucket/aws-sagemaker-catalog",
    "createdAt": "2025-11-26T18:24:02.150000+00:00",
    "updatedAt": "2026-02-23T19:33:40.987000+00:00"
}
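A setup script could verify this state by parsing the response before moving on. The following is a sketch against the response shape shown above (the readiness check itself is an assumption, not part of the feature; the ARN is copied from the sample output):

```python
import json

# Response shape returned by get-data-export-configuration, as shown above.
response = json.loads("""
{
    "isExportEnabled": true,
    "status": "COMPLETED",
    "s3TableBucketArn": "arn:aws:s3tables:::bucket/aws-sagemaker-catalog",
    "createdAt": "2025-11-26T18:24:02.150000+00:00",
    "updatedAt": "2026-02-23T19:33:40.987000+00:00"
}
""")

# Treat the export as ready only when it is enabled and has completed.
ready = response["isExportEnabled"] and response["status"] == "COMPLETED"
bucket = response["s3TableBucketArn"].rsplit("/", 1)[-1]
print(ready, bucket)  # True aws-sagemaker-catalog
```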

Access the exported asset table

  1. Navigate to Amazon SageMaker domains in the AWS Management Console.
  2. Select your domain and choose Open.
    Amazon SageMaker Domains management page showing an Identity Center based domain with Available status, created February 26, 2026, with Open unified studio button highlighted

    Figure 5 – Open Amazon SageMaker Unified Studio

  3. In SageMaker Unified Studio, choose a project from the Select a project dropdown list.
  4. To query SageMaker Catalog data, select Build in the menu bar and then choose Query Editor. To create a new project, follow the instructions in the Amazon SageMaker Unified Studio User Guide.
    SageMaker Unified Studio project overview dashboard showing IDE and Applications, Data Analysis and Integration with Query Editor highlighted, Orchestration, and Machine Learning and Generative AI categories

    Figure 6 – Open SageMaker Unified Studio Query Editor

The asset_metadata.asset table is available in the Data explorer. Use the Data explorer to view the schema and query the data for analytics.

  1. Expand Catalogs in the Data explorer. Then, select and expand s3tablescatalog, aws-sagemaker-catalog, asset_metadata, and asset.
  2. Test querying the catalog with SELECT * FROM asset_metadata.asset LIMIT 10;.
SageMaker Unified Studio Query Editor with Data Explorer showing Lakehouse hierarchy including s3tablescatalog, aws-sagemaker-catalog, asset_metadata database, and asset table schema with SQL SELECT query

Figure 7 – Query the SageMaker catalog

Queries for observability and analytics

With setup complete, run queries to gain insights into catalog usage and changes. To monitor asset growth and examine how the data catalog has grown over the last 5 days, use the following query:

SELECT
    DATE(snapshot_time) AS date,
    COUNT(*) AS total_assets
FROM asset_metadata.asset
WHERE DATE(snapshot_time) >= CURRENT_DATE - INTERVAL '5' DAY
GROUP BY DATE(snapshot_time)
ORDER BY date DESC;

SageMaker Unified Studio Query Editor showing SQL aggregation query on asset_metadata.asset table with results displaying date and total_assets columns, returning 42 assets for March 7-8, 2026

Figure 8 – Query asset growth
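To sanity-check the aggregation before running it at scale, the same query shape can be exercised on toy data. The sketch below uses an in-memory SQLite table as a stand-in for asset_metadata.asset (asset IDs and dates are invented), keeping just the GROUP BY logic:

```python
import sqlite3

# Toy stand-in for asset_metadata.asset: two snapshot days,
# with the catalog growing from 2 to 3 assets.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE asset (asset_id TEXT, snapshot_time TEXT)")
conn.executemany("INSERT INTO asset VALUES (?, ?)", [
    ("a-1", "2026-03-07 00:05:00"),
    ("a-2", "2026-03-07 00:05:00"),
    ("a-1", "2026-03-08 00:05:00"),
    ("a-2", "2026-03-08 00:05:00"),
    ("a-3", "2026-03-08 00:05:00"),
])

# Same aggregation as the Athena query above (SQLite also has DATE()).
rows = conn.execute("""
    SELECT DATE(snapshot_time) AS date, COUNT(*) AS total_assets
    FROM asset
    GROUP BY DATE(snapshot_time)
    ORDER BY date DESC
""").fetchall()
print(rows)  # [('2026-03-08', 3), ('2026-03-07', 2)]
```

Each day's count is the full catalog size on that day, because every snapshot re-exports all assets.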

Use the catalog to track metadata changes and determine which assets gained descriptions or ownership over time. Use the following query to identify assets that gained business descriptions over the past 5 days by comparing today's snapshot with the earlier snapshot:

SELECT
    t.asset_id,
    t.resource_name,
    p.business_description as description_before,
    t.business_description as description_now
FROM asset_metadata.asset t
JOIN asset_metadata.asset p ON t.asset_id = p.asset_id
WHERE DATE(t.snapshot_time) = CURRENT_DATE
    AND DATE(p.snapshot_time) = CURRENT_DATE - INTERVAL '5' DAY
    AND p.business_description IS NULL
    AND t.business_description IS NOT NULL;
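The self-join's logic (today's row non-null, earlier row null) can likewise be checked on toy data. In this sketch against an in-memory SQLite table (invented IDs, with literal dates in place of CURRENT_DATE), only the asset that actually gained a description is returned:

```python
import sqlite3

# Toy data: a-1 gains a description between the two snapshots,
# a-2 already had one, so only a-1 should match.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE asset (asset_id TEXT, resource_name TEXT,"
    " business_description TEXT, snapshot_time TEXT)"
)
conn.executemany("INSERT INTO asset VALUES (?, ?, ?, ?)", [
    ("a-1", "orders",    None,           "2026-03-03"),
    ("a-2", "customers", "Customer dim", "2026-03-03"),
    ("a-1", "orders",    "Order facts",  "2026-03-08"),
    ("a-2", "customers", "Customer dim", "2026-03-08"),
])

# Same self-join as above, with the two snapshot dates given literally.
rows = conn.execute("""
    SELECT t.asset_id, t.resource_name, t.business_description
    FROM asset t JOIN asset p ON t.asset_id = p.asset_id
    WHERE DATE(t.snapshot_time) = '2026-03-08'
      AND DATE(p.snapshot_time) = '2026-03-03'
      AND p.business_description IS NULL
      AND t.business_description IS NOT NULL
""").fetchall()
print(rows)  # [('a-1', 'orders', 'Order facts')]
```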

Examine asset values at a specific point in time using the following query to retrieve metadata from any snapshot date:

SELECT
     asset_id,
     resource_name,
     business_description,
     extended_metadata['owningEntityId'] as proprietor,
     snapshot_time
FROM asset_metadata.asset
WHERE asset_id = 'your-asset-id'
     AND DATE(snapshot_time) = DATE('2025-11-26');

Clean up resources

To avoid ongoing charges, clean up the resources created in this walkthrough:

  1. Disable the daily metadata export to stop new snapshots:

aws datazone put-data-export-configuration 
  --domain-identifier 

  2. Delete S3 Tables resources:

Optionally, delete the S3 Tables namespace containing the exported metadata to remove historical snapshots and stop storage charges. For instructions on how to delete S3 tables, see Deleting an Amazon S3 table in the Amazon Simple Storage Service User Guide.

Conclusion

In this post, you enabled the metadata export feature of SageMaker Catalog and used SQL queries to gain visibility into your asset inventory. The feature converts asset metadata into Apache Iceberg tables partitioned by snapshot date, so you can perform time-travel queries, monitor catalog growth, track metadata completeness, and audit historical asset states. This provides a repeatable, low-overhead way to maintain catalog health and meet governance requirements over time.

To learn more about Amazon SageMaker Catalog, see the Amazon SageMaker Catalog documentation. To explore Apache Iceberg table formats and time-travel queries, see the Amazon S3 Tables documentation.


About the Authors

Photo of Author Ramesh Singh

Ramesh is a Senior Product Manager Technical (External Services) at AWS in Seattle, Washington, currently with the Amazon SageMaker team. He is passionate about building high-performance ML/AI and analytics products that help enterprise customers achieve their critical goals using cutting-edge technology.

Photo of Author Pradeep Misra

Pradeep is a Principal Analytics and Applied AI Solutions Architect at AWS. He is passionate about solving customer challenges using data, analytics, and Applied AI. Outside of work, he likes exploring new places and playing badminton with his family. He also likes doing science experiments, building LEGOs, and watching anime with his daughters.

Photo of Author - Rohith Kayathi

Rohith is a Senior Software Engineer at Amazon Web Services (AWS) working with the Amazon SageMaker team. He leads enterprise data catalog, generative AI–powered metadata curation, and lineage solutions. He is passionate about building large-scale distributed systems, solving complex problems, and setting the bar for engineering excellence for his team.

Photo of Author - Steve Phillips

Steve is a Principal Technical Account Manager and Analytics specialist at AWS in the North America region. Steve currently focuses on data warehouse architectural design, data lakes, data ingestion pipelines, and cloud distributed architectures.
