Content
IMPLEMENTATION GUIDE · SEMANTIC LAYER · AI ANALYTICS

Natural Language Analytics

Databricks Genie vs. Snowflake Cortex Analyst

Author: Momenta Analytics
March 2026
Who this is for

Analytics leaders, platform engineers, and technical evaluators planning or deploying Databricks Genie or Snowflake Cortex Analyst. This guide explains the hidden implementation work behind “ask your data in English,” helps you plan scope and staffing realistically, and surfaces the governance risks that product documentation does not emphasize.

SECTION 01

This Is Not Setup. It Is Answer Production.

Both Databricks and Snowflake now offer products that let business users type a question in plain English and get a SQL-generated answer. The pitch sounds similar: self-service analytics without SQL expertise. In practice, both platforms require substantial upfront work that is regularly underestimated.

What may not be obvious from product documentation is how structurally similar the underlying requirements have become. Both have converged on the same core architecture: a native semantic layer object that defines business metrics, dimensions, and relationships, consumed by a natural language interface. In Databricks, that object is a Metric View. In Snowflake, it is a Semantic View.

Governance Risk

Once deployed, the system can serve hundreds of answers per day with minimal review. Because answers come from inside the analytics platform, they are perceived as sanctioned. If the underlying logic is weak, the system will scale that weakness, not correct it.

The Central Stakes

The views, joins, metric definitions, synonyms, and example queries you select during implementation become the raw material from which AI will produce answers for your organization.

This is not configuration. It is answer production at scale. Wrong answers will be delivered confidently. Disputed business definitions will be returned as if they were settled truth. Inconsistent metric logic will be encoded into the system and distributed across every user with access.

Every implementation decision feeds a causal chain that determines the quality of AI answers across the company:

1. Views exposed to the AI define the boundaries of what it can answer.

2. Join paths and grain choices determine whether the AI can combine data correctly or produces silent duplicates and incorrect aggregations.

3. Metric definitions become the canonical formulas the AI applies. If two teams define revenue differently and only one version is codified, the AI serves that version as organizational truth.

4. Metadata, naming, and synonyms teach the AI how to interpret business language. Poor metadata forces the AI to guess. It will guess wrong at scale.

5. Example and benchmark queries are not just tests. They are encoded precedent: they teach the system how your organization prefers to translate business questions into SQL.

6. AI answers produced for end users inherit every flaw in the chain above. Flaws in upstream choices become flaws in downstream answers.

SECTION 02

What Both Platforms Actually Require

Both platforms require the same fundamental inputs. None are optional. Each directly determines the accuracy and trustworthiness of the answers the system will produce:

  • Agreed-upon metric definitions with canonical aggregation formulas. These become the formulas the AI applies to every question.
  • Clean metadata: business-quality descriptions, synonyms, and sample values on every field. This is how the AI interprets business language.
  • Explicit join paths declared in advance. Neither platform supports runtime joins. The joins you define are the only joins the AI can use.
  • Curated example queries that encode how your organization translates business questions into SQL. High-value institutional logic artifacts, not throwaway test cases.
  • Ongoing ownership by someone who understands both the data model and the business context. Without this, the system degrades within months.

In Practice

Both platforms start with a blank definition file and ask you to fill it in. Neither extracts institutional knowledge automatically. The platform turns whatever semantic and business choices your organization gives it into production answer logic. It does not create truth or resolve ambiguity. If you give it contested definitions, it will serve contested answers as if they were settled.

SECTION 03

Databricks: Setup in Practice

The Databricks natural language analytics stack has two layers. The Metric View is a YAML-based semantic layer object in Unity Catalog that defines dimensions, measures, and joins. The Genie Space is the NL configuration layer where example SQL, text instructions, and business context teach Genie how to translate questions. Together, these become the answer substrate for every question a business user asks.

Implementation Steps

Step 1: Scope the Data Domain


Technical

Select the tables that will feed the Metric View and define the analytical domain for the Genie Space.

Plain English

Decide which data the system should answer questions about and limit scope so the AI has fewer possible interpretations.

Identify the business domain the Metric View will serve. Limit scope to one analytical domain per Metric View (sales, marketing, finance). If underlying tables are heavily normalized, create denormalized views or gold-layer Materialized Views. Exclude audit columns, internal IDs, and staging artifacts.

Effort

Several hours with clean existing tables. One to two weeks if denormalization is needed.

Watch Out

Most organizations do not have clean, pre-joined views ready. Teams frequently scope too broadly at the start, then narrow after accuracy problems surface. In our experience, Genie Spaces perform best with five or fewer tables. Every additional table increases the probability of confidently wrong answers.

Step 2: Build the Metric View


Technical

Define dimensions, measures, joins, and metadata in a YAML-based Metric View object in Unity Catalog.

Plain English

This is where you encode the official calculations and relationships the AI uses when it writes SQL. Every formula here becomes a production answer ingredient.

All relationships must be declared in the YAML. The canonical aggregation formula for each metric must be agreed upon before it is codified, because once codified, it is the answer the AI will give.
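The relationships and formulas described here end up in a single YAML definition. The following is an illustrative sketch only: the table, column, and field names (main.gold.orders, net_amount, and so on) are hypothetical, and the structure follows the general shape of a Metric View rather than an exact specification, so verify the schema against current Databricks documentation.

```yaml
# Illustrative Metric View sketch -- hypothetical names, approximate schema.
version: 0.1
source: main.gold.orders            # hypothetical Unity Catalog gold table
joins:
  - name: customers
    source: main.gold.customers
    on: source.customer_id = customers.customer_id   # declared in advance; no runtime joins
dimensions:
  - name: order_month
    expr: DATE_TRUNC('MONTH', order_date)
  - name: region
    expr: customers.region
measures:
  - name: total_revenue
    # The canonical formula. Agree on it BEFORE codifying it,
    # because once codified it is the answer Genie will give.
    expr: SUM(net_amount)
  - name: order_count
    expr: COUNT(DISTINCT order_id)
```

Every `expr` above is a production answer ingredient in the sense described in this step: it is the formula the AI will apply, not a suggestion.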

Effort

Several days for a small domain (5-8 dimensions, 10-15 measures). Two to four weeks for complex domains.

Watch Out

Organizations often discover during this step that different teams define the same metric differently. That conflict must be resolved before anything is codified. If it is not, the system will freeze one version into infrastructure and serve it as organizational truth.

Step 3: Enrich Metadata and Synonyms


Technical

Populate dimension and measure comments, synonyms, and example values inside the Metric View definition.

Plain English

You are teaching the AI what business terms mean and how users will refer to the data.

This metadata is the primary mechanism Genie uses to match questions to fields. Without rich metadata, Genie guesses, and at scale, guessing means wrong answers delivered to users who have no reason to doubt them.

Effort

Several hours for small domains. One to two weeks for complex domains with cross-team terminology.

Step 4: Configure the Genie Space


Technical

Add example SQL queries, SQL expressions, text instructions, and join definitions to the Genie Space.

Plain English

Give the NL interface a library of known-good answers and domain-specific rules. These are encoded precedent that teaches the system how your organization translates business questions into SQL.

The example library is effectively the training data for your domain. Shallow examples create shallow system behavior. The examples that matter most encode how your organization handles ambiguity, edge cases, and contested business logic.

Effort

One to two days for 5-10 examples. Two to four weeks for a comprehensive library of 20-50.

Watch Out

Politically convenient examples that avoid the real contested questions the business cares about are dangerous: they leave the hardest, highest-value questions unanchored.
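An entry in the example library might look like the following hypothetical pairing. The question, tables, and columns are invented for illustration; the pattern is the point: one contested business question anchored to one sanctioned SQL translation.

```sql
-- Hypothetical example-query entry for a Genie Space.
-- Question: "What was net revenue last quarter?"
-- Anchors a contested definition: refunds subtracted, test accounts excluded.
SELECT
  DATE_TRUNC('QUARTER', order_date)              AS quarter,
  SUM(net_amount) - SUM(refund_amount)           AS net_revenue
FROM main.gold.orders
WHERE is_test_account = FALSE
  AND order_date >= DATE_TRUNC('QUARTER', ADD_MONTHS(CURRENT_DATE, -3))
  AND order_date <  DATE_TRUNC('QUARTER', CURRENT_DATE)
GROUP BY 1;
```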

Step 5: Plan Materialization (Optional)


Technical

Configure pre-computed aggregations in the Metric View YAML with a CRON refresh schedule.

Plain English

For large datasets, pre-computing common query patterns speeds up response time without changing the user experience.

Effort

Several hours for configuration. One to two days for testing and tuning.

Step 6: Benchmark, Monitor, and Iterate


Technical

Run test suites, review user feedback, and update the Metric View and Genie Space as usage patterns emerge.

Plain English

The semantic layer requires ongoing maintenance. Without it, the system degrades and begins delivering wrong answers with the same confidence it delivers correct ones.

The benchmark suite is an institutional logic artifact: it encodes what “correct” means for your organization.

Effort

One day for initial setup. Several hours per week ongoing.
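A benchmark suite can start as a plain table of question/expected-answer pairs run on a cadence. A minimal sketch in Python, where `ask` stands in for whatever client call returns the system's answer (no real Databricks SDK is assumed here):

```python
# Minimal benchmark-harness sketch. `ask` is any callable mapping a question
# to the system's answer; in production it would wrap the Genie API client.

def run_benchmark(questions, ask):
    """questions: list of (question, expected) pairs; ask: question -> answer.
    Returns (pass_rate, failures) so regressions are visible per question."""
    failures = []
    for question, expected in questions:
        got = ask(question)
        if got != expected:
            failures.append((question, expected, got))
    pass_rate = 1 - len(failures) / len(questions)
    return pass_rate, failures

# Usage with a stubbed answer function standing in for the real system:
suite = [
    ("total revenue last month", 120_000),
    ("active customers last month", 342),
]
stub_answers = {"total revenue last month": 120_000,
                "active customers last month": 340}  # deliberately wrong
rate, fails = run_benchmark(suite, stub_answers.get)
```

The value is not the harness itself but the suite it runs: the expected answers encode what "correct" means for your organization.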

In Practice

The heaviest lift is Steps 2-4: building the Metric View YAML, enriching metadata, and writing example queries. This is where institutional logic gets encoded. Budget two to six weeks for initial setup. The platform work is the smaller portion; the human work of deciding what reality is called takes the most time.

SECTION 04

Snowflake: Setup in Practice

Snowflake’s stack follows a parallel architecture. The Semantic View is a native schema-level object that defines logical tables, dimensions, facts, metrics, and relationships. Cortex Analyst reads it and translates questions into SQL inside Snowflake’s governance boundary. Every definition, join, and verified query you encode determines the answers Cortex Analyst produces.

Implementation Steps

Step 1: Design the Data Model


Technical

Identify physical tables, define primary and foreign keys, prepare a star-schema-friendly data model.

Plain English

Decide what data the system should cover and organize it so the AI has a clean schema to work from.

Current documented guidance suggests a practical limit of roughly 50-100 columns total. If your domain exceeds this, split into multiple semantic views.

Effort

Several hours with an existing star schema. One to two weeks if denormalization is needed.

Watch Out

Wide tables force scope decisions early: decide which columns the AI should see before building the semantic view, not after accuracy problems surface.

Step 2: Create the Semantic View


Technical

Define logical tables, dimensions, facts, metrics, and relationships using SQL DDL or the Snowsight wizard.

Plain English

Encode the official business logic. Every definition is a production answer ingredient.

Every COMMENT becomes context for Cortex Analyst, and comment quality directly determines AI accuracy.

Effort

Several days for a small domain. Two to four weeks for complex domains.

Watch Out

The most time-intensive step. If a metric formula encodes a disputed definition, Cortex Analyst will serve that definition as organizational truth.
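For orientation, a hypothetical Semantic View definition might look like the following. The clause names follow the general shape of Snowflake's CREATE SEMANTIC VIEW DDL and all object names are invented, so verify the exact syntax against current Snowflake documentation before use.

```sql
-- Illustrative sketch only; hypothetical names, approximate clause syntax.
CREATE OR REPLACE SEMANTIC VIEW sales.analytics.revenue_sv
  TABLES (
    orders AS sales.public.orders
      PRIMARY KEY (order_id)
      COMMENT = 'One row per customer order; excludes internal test accounts'
  )
  DIMENSIONS (
    orders.order_month AS DATE_TRUNC('MONTH', orders.order_date)
      COMMENT = 'Calendar month the order was placed'
  )
  METRICS (
    -- The agreed canonical formula: once codified, this is the answer.
    orders.net_revenue AS SUM(orders.net_amount)
      COMMENT = 'Gross order amount minus refunds'
  )
  COMMENT = 'Semantic view for revenue questions served by Cortex Analyst';
```

Note how every COMMENT carries business meaning: as stated above, comment quality directly determines answer accuracy.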

Step 3: Write Verified Queries and Custom Instructions


Technical

Add natural-language questions paired with exact SQL answers. Add plain-text custom instructions for domain-specific rules.

Plain English

Verified queries are encoded institutional precedent: they define how your organization translates its most important business questions into SQL.

Effort

One to two days for 5-10 verified queries. One to two weeks for a comprehensive library.

Watch Out

Verified queries must use logical names. Politically convenient examples that sidestep the hardest questions leave the system weakest where it matters most.

Step 4: Enrich Metadata


Technical

Write business-quality comments, synonyms, and sample values for every column and metric.

Plain English

The difference between a generic comment and a business-quality comment is the difference between an AI that interprets questions correctly and one that guesses.

Effort

Several hours for small domains. One to two weeks for complex domains.

Step 5: Configure RBAC and Privileges


Technical

Grant end-user roles USAGE on the semantic view and SELECT on all underlying tables.

Effort

Several hours for straightforward environments. One to two days for complex role hierarchies.

Watch Out

Privilege misconfigurations are the most common source of “it works for me but not for them” issues.
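Following the grant requirements stated above (USAGE on the semantic view, SELECT on all underlying tables), a privilege sketch might look like this. Role and object names are hypothetical, and the exact privilege names should be confirmed against Snowflake's access-control reference.

```sql
-- Privilege sketch for a hypothetical analyst role; verify privilege
-- names and object paths against current Snowflake documentation.
GRANT USAGE ON DATABASE sales                      TO ROLE analyst_role;
GRANT USAGE ON SCHEMA sales.public                 TO ROLE analyst_role;
GRANT USAGE ON SCHEMA sales.analytics              TO ROLE analyst_role;
GRANT USAGE ON SEMANTIC VIEW sales.analytics.revenue_sv TO ROLE analyst_role;
GRANT SELECT ON TABLE sales.public.orders          TO ROLE analyst_role;  -- repeat per underlying table
```

Testing these grants per role, not just as an administrator, is what catches the "works for me but not for them" failures described above.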

Step 6: Benchmark, Optimize, and Iterate


Technical

Run test suites, review optimization suggestions, and update the semantic view as patterns emerge.

Plain English

Without maintenance, the system returns wrong answers indistinguishable from correct ones.

Effort

Two to three days for initial testing. Several hours per week ongoing.

In Practice

The heaviest lift is Steps 2-4: building the semantic view DDL, writing verified queries, and enriching metadata. Snowflake’s Semantic View is metadata-only and does not copy data. The platform plumbing is the smaller portion. The larger portion is deciding what the correct answer should be.

SECTION 05

Side-by-Side Comparison

Neither platform creates business truth; both turn whatever definitions the organization provides into production answer logic.

Same: Core architecture (semantic layer object + NL interface)
Same: Fundamental inputs (metrics, metadata, joins, examples, ownership)
Differs: Syntax, materialization, governance model, edit workflow
Capability | Databricks | Snowflake
Semantic object | Metric View (YAML in CREATE VIEW) | Semantic View (SQL DDL)
NL interface | AI/BI Genie (Genie Space) | Cortex Analyst
Synonyms | Native YAML field | Encoded in comments
Materialization | CRON schedule; aggregate-aware routing | Not available; metadata-only
Custom instructions | Text instructions in Genie Space | ALTER SEMANTIC VIEW SET CUSTOM_INSTRUCTIONS
Example queries | Example SQL; Trusted badge | Verified Query Repository; high-confidence match
Incremental edits | Genie Space iterative; Metric View is REPLACE | ALTER for additions; REPLACE for structural
Governance | Unity Catalog RBAC, lineage, masking | Snowflake RBAC, masking, row access policies
Core dependency | Someone encodes YAML + example SQL | Someone encodes DDL + verified queries

These operational differences matter, but they are usually less determinative of answer quality than semantic readiness, benchmark quality, and business-definition clarity.

SECTION 06

Common Failure Modes

Each one results in wrong answers delivered to users who have no reliable way to detect them.

Scoping too broadly


Teams include too many tables or try to cover multiple domains in one semantic object. The AI faces too many interpretations and the probability of confidently wrong answers increases.

Treating setup as a one-time project


Teams that build the semantic layer and walk away see accuracy degrade within months. Without active maintenance, the NL interface delivers wrong answers indistinguishable from correct ones.

Codifying unresolved metric politics


Semantic layers do not resolve business disagreement. They freeze one answer into infrastructure. If two teams define “revenue” differently and the conflict is not resolved, the system silently picks whichever version was codified and serves it as organizational truth.

Weak benchmark selection


Teams validate against easy, uncontested questions and declare success. The real test is whether the system handles the high-value, edge-case-ridden, business-critical questions analysts actually answer. If your benchmark suite does not include the hard questions, you have not tested the system.

Skipping metadata enrichment


Generic comments force the AI to guess at field meanings. At scale, guessing means wrong answers delivered to users who have no way to know the AI was uncertain.

Compensating with instructions instead of examples


Both platforms benefit far more from well-written example queries than from text instructions. Instructions produce diminishing returns.

No designated owner


Without an owner, maintenance becomes nobody’s job. Accuracy erodes. Because the AI does not signal uncertainty, no one notices until business decisions have been made on bad answers.

In Practice

Most early failures trace back to one of three root causes: scope was too broad, ownership was not assigned, or contested definitions were codified without resolution. Fix those three and the rest becomes manageable.

SECTION 07

Total Estimated Effort

The killer hidden cost is usually not the platform. It is the human work of deciding what reality is called.

The work is usually more bounded than teams fear, but heavier than they expect.

Assumptions for these ranges

Narrow domain (3-5 tables). 10-20 measures. Well-defined data model. Benchmark questions already known. Minimal business-definition conflict. Dedicated semantic owner at least half-time during setup.

Phase-Level Rollup

Phase | Total Effort | Notes
MVP / first domain | 80-160 hours | Narrow scope, curated benchmark set, limited metric conflict.
Expansion / second domain | 40-100 hours | Reuse reduces platform setup, but new semantics still require full effort.
Ongoing monthly upkeep | 8-20+ hrs/month | Scales with usage volume, data change rate, and governance expectations.

Effort by Work Category

Category | % of Total | What This Covers
Platform setup | 15-20% | Workspace config, RBAC, CI/CD, source table prep. Feels like traditional engineering.
Semantic modeling | 30-40% | Dimensions, measures, joins, metadata, synonyms, comments. Raw schema into trustworthy model.
Benchmark / validation | 15-20% | Example/verified queries, benchmark suites, accuracy testing. Where encoded precedent is created.
Business alignment | 20-35% | Resolving metric definitions, validating logic, agreeing on terminology. Cannot be automated.

Key Risk

Business alignment is regularly underestimated. It is the most variable, hardest to plan, and most consequential category. If this work is skipped, every downstream answer inherits the ambiguity.

SECTION 08

Effort by Role

Each role exists to prevent a specific category of failure. This is a control system, not a staffing list.

Role | Hours | Purpose in the System | Failure Prevented
Platform Engineer | 15-25 | Secure, stable, governable, repeatable environment. | Ungoverned access, unrepeatable deployments, privilege misconfigurations.
Analytics Engineer | 25-50 | Trustworthy analytical model: correct joins, grain, reusable measures. | Wrong joins, silent duplicates, technically valid but analytically wrong formulas.
BI / Semantic Owner | 25-55 | Teach the system how business talks and what “correct” means in practice. | Misinterpreted language, shallow coverage, undetected accuracy degradation.
Business SME | 15-30 | Resolve meaning where data structure alone cannot decide correctness. | Disputed definitions codified as truth, metric politics frozen into infrastructure.

Hours assume favorable conditions. If metric conflict is significant, BI/Semantic Owner and Business SME hours expand most.
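As a sanity check, the per-role ranges above sum exactly to the 80-160 hour MVP rollup from the phase table in Section 07. A few lines of Python make the arithmetic explicit:

```python
# Per-role hour ranges from the table above; the low and high ends
# should reproduce the MVP rollup of 80-160 hours.
roles = {
    "Platform Engineer":   (15, 25),
    "Analytics Engineer":  (25, 50),
    "BI / Semantic Owner": (25, 55),
    "Business SME":        (15, 30),
}
low = sum(lo for lo, hi in roles.values())    # 80
high = sum(hi for lo, hi in roles.values())   # 160
```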

SECTION 09

How to Start: Minimum Viable Implementation

Focus on a single domain, small table set, well-understood metrics. Validate the workflow and establish maintenance before expanding.

Timeline Assumption

A narrow MVP can be possible in a matter of weeks under favorable conditions: narrow domain, limited metric conflict, known benchmarks, dedicated semantic owner, stable data model. Speed to MVP should not be confused with a domain that is broadly governed, validated, or safe to trust at scale.

Week 1: Foundation

1. Pick one domain
Choose a single analytical domain (sales pipeline, marketing, product usage). Select 3-5 clean tables.

2. Identify your semantic owner
Assign someone who understands both SQL and business context. Without this person, no timeline is realistic.

3. Document 10 benchmark questions
Include at least 3-4 that are genuinely hard: ambiguous, edge-case-ridden, or politically contested. If you only benchmark the easy questions, you have not tested the system.
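A benchmark registry can start as a plain list with a hard-question quota. A sketch, abbreviated to five invented questions where a real suite would hold the full ten:

```python
# Hypothetical benchmark registry. The enforced quota of hard questions
# is the point: easy questions alone do not test the system.
benchmark = [
    {"q": "What was net revenue last quarter?",          "hard": True},   # contested definition
    {"q": "Revenue by region, trailing 12 months?",      "hard": True},   # fiscal vs. calendar ambiguity
    {"q": "How many active customers churned in March?", "hard": True},   # "active" and "churn" both contested
    {"q": "Orders per month, last 6 months?",            "hard": False},
    {"q": "Top 10 products by units sold this year?",    "hard": False},
]
hard_count = sum(1 for item in benchmark if item["hard"])
assert hard_count >= 3, "suite must include the hard, contested questions"
```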

Weeks 2-3: Build

4. Build the semantic object
Create the Metric View or Semantic View. Every formula you define here becomes a production answer ingredient.

5. Enrich every field
Business-quality comments and synonyms on every dimension, measure, and column. This is where you teach the AI how the business talks.

6. Load example/verified queries
Add your 10 benchmark questions. These are encoded institutional precedent, not throwaway test data.

Week 4: Validate & Expand

7. Run your benchmark suite
Test all 10 questions against known-good answers. Pay special attention to the hard questions.

8. Invite 2-3 business users to test
Real users, real questions. Add examples based on what the system handles poorly.

9. Set a maintenance cadence
Weekly review: process feedback, update definitions, expand examples. The system is technically deployable, but that does not mean it is safe to trust broadly.

In Practice

The goal of the MVP is not comprehensive coverage. It is to prove accuracy on a narrow domain, establish the maintenance workflow, and build confidence before expanding. Speed to MVP is not the same as semantic safety.

SECTION 10

When Each Platform Is the Easier Fit

Databricks may be better when:

  • You are already running Databricks with Unity Catalog and a medallion architecture.
  • Your data volumes are large enough that materialization matters for response time.
  • You want the Genie Space’s iterative configuration model.
  • Your team is comfortable with YAML-based configuration.

Snowflake may be better when:

  • You are already running Snowflake with established RBAC and governance.
  • SQL DDL syntax feels more natural to your team.
  • The PRIVATE modifier for intermediate metrics fits your needs.
  • Row access policies need to flow through the semantic layer.
  • You want Cortex Analyst’s optimization feature.

Platform Choice

In most cases, the platform choice should follow your existing data stack. The harder question is not which platform to choose, but whether your organization has the definitions, metadata, and ownership model to make either one work. The system can be technically deployable long before it is safe to trust broadly.

SECTION 11

What Neither Platform Does for You

Both platforms assume you have already defined your metrics, agreed on business logic, and documented how analysts translate questions into SQL. Neither extracts institutional knowledge automatically. Neither resolves ambiguity. Neither flags that a codified definition was contested.

The kinds of buried institutional logic that must be surfaced before implementation:

  • Preferred join paths: which tables to join and in what order, learned through years of trial and error
  • Exception handling: nulls, edge cases, partial records, known data quality issues
  • Exclusion logic: records, statuses, or source tables filtered out by default
  • Trusted vs. distrusted tables: which tables analysts actually use vs. avoid, and why
  • Time-period conventions: fiscal vs. calendar year, trailing periods, default lookback windows
  • Default filters: standard WHERE clauses applied without thinking (excluding test accounts, internal transactions)
  • Business-safe interpretation patterns: handling ambiguous questions in ways the business considers acceptable
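Much of this logic lives as unspoken WHERE clauses. The following hypothetical query illustrates the default filters an analyst applies by habit; every table and column name is invented, but each line is exactly the kind of convention that must be written down before either platform can apply it.

```sql
-- Hypothetical default filters: applied by habit, rarely documented.
SELECT *
FROM sales.public.orders
WHERE is_test_account = FALSE       -- exclusion logic: test accounts out by default
  AND status <> 'CANCELLED'         -- exclusion logic: cancelled orders never count
  AND source_system <> 'legacy_v1'  -- distrusted source avoided by convention
  AND order_date >= DATEADD('YEAR', -2, CURRENT_DATE);  -- default lookback window
```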

SECTION 12

Three Kinds of Readiness

The system can be technically deployable long before it is safe to trust broadly.

Technical
Platform configured, semantic object exists, RBAC in place. Necessary but not sufficient.
Semantic
Metadata rich and accurate. Examples cover hard cases. Synonyms map to how business actually talks.
Institutional
Definitions agreed. Contested terms resolved. Organization accepts accountability for operationalized answers.
The Hardest Readiness

Most organizations achieve technical readiness first. Semantic readiness follows with effort. Institutional readiness is the hardest: it requires the organization to decide what its own answers should be, and to accept accountability for encoding those answers into a system that will distribute them at scale.

SECTION 13

Readiness Checklist

Data Foundation (Technical)


  • Source tables identified and accessible in the target platform
  • Star schema or denormalized views available (or plan to create them)
  • Primary and foreign keys defined and validated
  • Column data types appropriate (no implicit conversions needed)

Business Logic (Semantic + Institutional)


  • Core metrics defined with agreed-upon aggregation formulas
  • Metric conflicts identified and resolved (or explicitly scoped with documented rationale)
  • 10+ benchmark questions written, including 3-4 hard, contested, or edge-case questions
  • The people who define or approve the metrics have signed off on what gets codified
  • Preferred join paths, exclusion logic, and default filters documented

Ownership (Institutional)


  • Semantic owner assigned (SQL + business context hybrid role)
  • Weekly maintenance cadence committed
  • Version control and CI/CD pipeline planned
  • Feedback review process defined
  • Organization understands that codified definitions will be served as truth at scale

Governance (All Three)


  • RBAC roles identified for semantic layer consumers
  • Masking / row access policies defined at the physical table level
  • Go-live testing plan includes per-role privilege validation
  • Plan exists for detecting and correcting wrong answers post-deployment

That is where most implementations slow down

Not in building the platform, but in assembling the institutional logic the platform needs to answer safely.

Momenta Analytics focuses on the step both platforms assume is already done: extracting the business logic and metric definitions embedded in existing SQL query histories.

Implementing AI in your analytics team? Get in touch.