7-Eleven’s Data Documentation Dilemma
7-Eleven’s data ecosystem is vast and complex, housing thousands of tables with hundreds of columns across our Databricks environment. This data forms the backbone of our operations, analytics and decision-making processes. Historically, 7-Eleven’s data dictionary and documentation lived in Confluence pages, meticulously maintained by our data team members, who would manually document table and column definitions.
We hit a critical roadblock as we began exploring the AI-powered features on the Databricks Data Intelligence Platform, including AI/BI Genie, intelligent dashboards and other applications. These advanced tools rely heavily on table metadata and comments embedded directly within Databricks to generate insights, answer questions about our data and build automated visualizations. Without proper table and column comments in Databricks itself, we were essentially leaving powerful AI capabilities on the table. For example, when Genie lacks column definitions, it can misinterpret the meaning of bespoke columns, forcing end users to clarify. Once we enriched our metadata, Genie’s contextual understanding improved dramatically: it accurately identified column purposes, surfaced the right tables in response to natural language queries, and produced far more relevant and actionable insights. Simply put, Genie, like any AI agent, gets more thoughtful and more helpful when it has better metadata to work with.
The gap between our well-documented Confluence pages and our “metadata-light” Databricks environment was preventing us from realizing the full potential of our data platform investment.
Manual Migration’s Impossible Scale
When we first considered migrating our documentation from Confluence to Databricks, the scale of the challenge became immediately apparent. With thousands of tables containing hundreds of columns each, a manual migration would require:
- Time-intensive labor: Hundreds of person-hours to copy and paste documentation
- Manual metadata updates: Crafting thousands of individual SQL statements to update metadata (of the kind sketched after this list), or editing each table through the UI
- Project oversight: Implementing a tracking system to ensure all tables were properly updated
- Quality assurance: Creating a validation process to catch inevitable human errors
- Ongoing maintenance: Establishing an ongoing maintenance protocol to keep both systems in sync
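For a sense of what that manual work involves, the snippet below shows the kind of statement that would have to be written by hand for every table and every column. The catalog, schema, table and column names are hypothetical, and this is only a minimal sketch of the Unity Catalog comment syntax as run from a Databricks notebook (where `spark` is predefined).

```python
# Minimal sketch: the per-table and per-column statements a manual migration
# would require. Names are hypothetical; `spark` is the notebook's SparkSession.

spark.sql("""
    COMMENT ON TABLE retail.sales.store_transactions IS
    'Point-of-sale transactions captured per store, per day.'
""")

spark.sql("""
    ALTER TABLE retail.sales.store_transactions
    ALTER COLUMN txn_ts COMMENT 'Timestamp of the transaction in store-local time.'
""")
```

Multiply that by thousands of tables with hundreds of columns each, and the scale problem becomes obvious.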
Human error would be unavoidable even if we devoted significant resources to the effort. Some tables would be missed, comments would be incorrectly formatted, and the process would likely have to be repeated as documentation evolved. Moreover, the tedious nature of the work would likely lead to inconsistent quality across the documentation.
Most concerning was the opportunity cost. While our data team focused on this migration, they couldn’t work on higher-value initiatives. Every day brought further delays in strengthening our Databricks metadata, leaving untapped potential in the AI/BI capabilities already at our fingertips.
The Intelligent Document Processing Pipeline
To solve this challenge, 7-Eleven developed a sophisticated agentic AI workflow, powered by Llama 4 Maverick and deployed through Mosaic AI Model Serving, that automated the entire documentation migration through an intelligent multistage pipeline:
- Discovery phase: The agent uses Databricks APIs to enumerate all tables, table names and column structures (illustrated in the first sketch below).
- Document retrieval: The agent pulls all relevant data dictionary documents from Confluence, creating a corpus of potential documentation sources.
- Reranking and filtering: Using advanced reranking algorithms, the system prioritizes the most relevant documentation for each table, filtering out noise and irrelevant content. This critical step ensures we match tables with their proper documentation even when naming conventions aren’t perfectly consistent.
- Intelligent matching: For each Databricks table, the AI agent analyzes potential documentation matches, using contextual understanding to determine the correct Confluence page even when names don’t match exactly (see the second sketch below).
- Targeted extraction: Once the correct documentation is identified, the agent extracts the relevant descriptions for both tables and their columns, preserving the original meaning while formatting them appropriately for Databricks metadata.
- SQL generation: The system automatically generates properly formatted SQL statements to update the Databricks table and column comments, handling special characters and formatting requirements (see the third sketch below).
- Execution and verification: The agent runs the SQL updates and, through MLflow tracking and evaluation, verifies that metadata was applied correctly, logs results, and surfaces any issues for human review.
- Monitoring and insights: The team also uses an AI/BI Genie dashboard to track project metrics in real time, ensuring transparency, quality control and continuous improvement.
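As a rough illustration of the discovery and document retrieval stages, the sketch below enumerates Unity Catalog tables through the Databricks SDK and pulls candidate data dictionary pages from the Confluence REST API. The catalog, schema, space key, URL and credentials are hypothetical placeholders, and this is a simplified sketch of the workflow described above rather than 7-Eleven’s production code.

```python
# Sketch of the discovery and document-retrieval stages (hypothetical names).
import requests
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # resolves workspace credentials from the environment

# Discovery phase: enumerate tables and their column structures.
tables = {
    t.full_name: [c.name for c in (t.columns or [])]
    for t in w.tables.list(catalog_name="retail", schema_name="sales")
}

# Document retrieval: pull data dictionary pages from a Confluence space.
CONFLUENCE_URL = "https://example.atlassian.net/wiki/rest/api/content"
resp = requests.get(
    CONFLUENCE_URL,
    params={"spaceKey": "DATA", "expand": "body.storage", "limit": 100},
    auth=("svc-confluence@example.com", "api-token"),  # placeholder credentials
    timeout=30,
)
pages = {
    page["title"]: page["body"]["storage"]["value"]
    for page in resp.json().get("results", [])
}
```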
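The matching and extraction stages can be driven by prompting the served model. The sketch below calls a Mosaic AI Model Serving endpoint through the MLflow deployments client; the endpoint name, prompt and JSON contract are assumptions that stand in for the reranking, matching and extraction logic described above.

```python
# Sketch of intelligent matching and targeted extraction via a served LLM.
import json
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

def extract_descriptions(table_name, columns, candidate_pages):
    """candidate_pages: {page_title: page_body}, already narrowed by reranking."""
    prompt = (
        f"Table: {table_name}\nColumns: {', '.join(columns)}\n\n"
        "Candidate documentation pages:\n\n"
        + "\n\n".join(f"## {title}\n{body}" for title, body in candidate_pages.items())
        + "\n\nPick the page that documents this table and return JSON with keys "
          "'table_description' and 'column_descriptions' (a column-to-description map)."
    )
    response = client.predict(
        endpoint="llama-4-maverick",  # hypothetical Model Serving endpoint name
        inputs={
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 1024,
        },
    )
    return json.loads(response["choices"][0]["message"]["content"])
```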
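Finally, the extracted descriptions can be turned into comment statements, applied, and verified inside an MLflow run so every update is logged and reviewable. Again, this is a pared-down sketch under the same assumptions, with minimal escaping and error handling.

```python
# Sketch of SQL generation, execution and MLflow-tracked verification.
import mlflow

def apply_comments(spark, table_name, table_desc, column_descs):
    def q(text):
        return text.replace("'", "''")  # escape single quotes for SQL literals

    statements = [f"COMMENT ON TABLE {table_name} IS '{q(table_desc)}'"] + [
        f"ALTER TABLE {table_name} ALTER COLUMN {col} COMMENT '{q(desc)}'"
        for col, desc in column_descs.items()
    ]

    with mlflow.start_run(run_name=f"comments::{table_name}"):
        mlflow.log_param("table", table_name)
        for stmt in statements:
            spark.sql(stmt)
        mlflow.log_metric("statements_applied", len(statements))

        # Verification: read the column comments back and confirm they took effect.
        described = spark.sql(f"DESCRIBE TABLE {table_name}").collect()
        verified = sum(
            1
            for row in described
            if row["col_name"] in column_descs
            and row["comment"] == column_descs[row["col_name"]]
        )
        mlflow.log_metric("columns_verified", verified)
```

Scheduled as a recurring job, a sketch along these lines is also one way the ongoing Confluence-to-Databricks synchronization described below could run.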
This intelligent pipeline transformed months of tedious, error-prone work into an automated process that completed the initial migration in days. The system’s ability to understand context and make intelligent matches between differently named or structured resources was key to achieving high accuracy.
Since implementing this solution, we plan to migrate documentation for over 90% of our tables, unlocking the full potential of Databricks’ AI/BI features. What began as a lightly used AI assistant has evolved into an everyday tool in our data workflows. Genie’s ability to understand context now mirrors how a human would interpret the data, thanks to the column-level metadata we injected. Our data scientists and analysts can now use natural language queries through AI/BI Genie to explore data, and our dashboards leverage the rich metadata to provide more meaningful visualizations and insights.
The solution continues to provide value as an ongoing synchronization tool, ensuring that as our documentation evolves in Confluence, those changes are reflected in our Databricks environment. This project demonstrated how thoughtfully applied AI agents can solve complex data governance challenges at enterprise scale, turning what seemed like an insurmountable documentation task into an elegant automated solution.
Want to learn more about AI/BI and how it can help unlock value from your data? Learn more here.
