Print this page (Cmd+P) and bring it to the lab. Every dbt Wizard prompt from every scenario, copy-paste ready, in the order you'll run them. Scenarios 3 and 5 use placeholders - the path-substitution tables are at the start of those sections.
1. First Week at The Builder Depot - 7 steps
2. Find missing inventory - 6 steps
3. Add a new source - 6 steps, 3 paths
4. VIP/big-spender segmentation - 7 steps
5. Column rename, blast radius - 5 steps + terminal, 3 paths
A seven-step workflow that collapses the typical two-week "where is everything?" onboarding into a single guided session, ending with a working model the user wrote themselves, previewed but not materialized.
Summarize what this dbt project does. What are the main subject areas and how is the project organized?
List the staging, intermediate, and mart models. Group them by domain.
Show me the lineage, grain, and key columns for the orders mart model.
Show me a 10-row sample of the orders mart and the distinct values in the order_status column.
What tests and contracts are defined on the orders model? Are any currently failing?
Create a new mart model called orders_by_week that aggregates orders to the week grain with order count, gross revenue, and distinct customers.
Compile and preview orders_by_week. Don't materialize it.
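A model answering the step 6 prompt might look like the sketch below. The `{{ ref('orders') }}` target matches the mart named in the earlier prompts, but `order_date`, `order_total`, and `customer_id` are assumed column names - swap in whatever the lineage and sample prompts actually reveal.

```sql
-- orders_by_week.sql (sketch): weekly rollup of the orders mart.
-- Column names are assumptions, not lab answers.
select
    date_trunc('week', order_date)  as order_week,
    count(*)                        as order_count,
    sum(order_total)                as gross_revenue,
    count(distinct customer_id)     as distinct_customers
from {{ ref('orders') }}
group by 1
```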
A six-step workflow that turns a stakeholder question - "where did the missing inventory go?" - into a materialized dbt model naming the stores with inventory counts above or below the expected shipment-plan quantity.
Find the models in this project related to inventory, stores, items, and shipments.
For those models, show the grain, key columns, and how they join together.
Check [specific item] shipments and inventory. Expected quantity is [N] per store. Show the per-store expected quantity, actual inventory, variance quantity, and variance direction, ordered by absolute variance desc, warehouse_id.
Create a dbt model named inventory_shipment_variance that lists every store where actual inventory for that item differs from the expected per-store quantity. Include store name, city or region, item name, actual inventory count, expected count, variance quantity, and a variance direction showing over-count or under-count.
Compile the model and preview the first 20 rows using deterministic ordering. Order inventory variances by abs(variance_quantity) desc, warehouse_id, product_id.
Before materializing, confirm the active dbt target, the dev schema, and permission to create the model. Then materialize inventory_shipment_variance into my dev schema as a table. For this timed lab, skip extended verification after the successful compile and deterministic preview. Materialize only after the expected rows appear.
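The model the Wizard builds in step 4 typically follows the shape below: aggregate actual inventory, join the shipment plan, and compute the variance. Every model and column name here is an assumption - use the names that the grain-and-joins prompt (step 2) surfaced.

```sql
-- inventory_shipment_variance.sql (sketch): per-store variance vs. the
-- shipment plan. All model and column names below are assumptions.
with actuals as (
    select store_id, item_id, sum(quantity_on_hand) as actual_count
    from {{ ref('stg_inventory') }}
    group by 1, 2
)
select
    s.store_name,
    s.city,
    i.item_name,
    a.actual_count,
    p.expected_quantity                   as expected_count,
    a.actual_count - p.expected_quantity  as variance_quantity,
    case
        when a.actual_count > p.expected_quantity then 'over-count'
        when a.actual_count < p.expected_quantity then 'under-count'
    end                                   as variance_direction
from actuals a
join {{ ref('stg_shipment_plan') }} p using (store_id, item_id)
join {{ ref('stg_stores') }} s using (store_id)
join {{ ref('stg_items') }} i using (item_id)
where a.actual_count <> p.expected_quantity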
A six-step workflow for wiring a new Fivetran-synced source into an existing intermediate model without breaking the downstream consumers that already depend on it.
| Placeholder | Path A - Customer 360 | Path B - Operations | Path C - Merchandising |
|---|---|---|---|
| [TARGET_MODEL] | int_customer_order_summary | int_orders_enriched | int_product_sales_summary |
| [NEW_SOURCE] | retail.RET_TICKETS | retail.RET_TICKETS | retail.RET_PRODUCT_REVIEWS |
| [ENTITY] | customer | order | product |
| [NEW_COLUMNS] | open_tickets_count, last_ticket_status, last_ticket_opened_at | ticket_count, has_open_ticket_flag, last_ticket_status | avg_rating, review_count, low_rating_count |
Find [TARGET_MODEL] in this project. Show me what it currently produces, its grain, and which models depend on it downstream.
Find every source in this project related to [ENTITY] that [TARGET_MODEL] does NOT currently reference. I want to know what data is sitting in our warehouse that we're not using yet.
Describe the schema of [NEW_SOURCE]. Show me the columns, their types, the grain, and which column joins back to [ENTITY].
Run a quick check: count rows in [NEW_SOURCE], count distinct join keys, and count how many of those keys match an [ENTITY] already in [TARGET_MODEL]. Tell me whether the grain is one-to-one or one-to-many.
Update [TARGET_MODEL] to add [NEW_COLUMNS] from [NEW_SOURCE]. Use a LEFT JOIN so [ENTITY] rows without a match still appear, and aggregate [NEW_SOURCE] to one-row-per-[ENTITY] before joining if its grain is many-to-one. Preserve every column the model currently emits - only add new columns at the end.
Compile [TARGET_MODEL] and every downstream model that depends on it. Then preview 20 rows of [TARGET_MODEL] ordered deterministically. Do not materialize anything.
Materialize [TARGET_MODEL] into my dev schema. Skip the verification pass - the preview and downstream compile already confirmed the output.
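Applied to Path A, the edit requested in step 5 usually follows the aggregate-then-join pattern sketched below: roll the new source up to one row per entity, then LEFT JOIN it onto the model's existing output. The ticket column names, the `base` CTE, and the join key are assumptions.

```sql
-- Inside int_customer_order_summary.sql (sketch, Path A): tickets are rolled
-- up to one row per customer before the LEFT JOIN, so customers without
-- tickets still appear and the grain is unchanged.
with ticket_rollup as (
    select
        customer_id,
        count(case when ticket_status = 'open' then 1 end) as open_tickets_count,
        max(opened_at)                                     as last_ticket_opened_at
    from {{ source('retail', 'ret_tickets') }}
    group by 1
)
select
    base.*,                   -- every column the model already emits
    t.open_tickets_count,     -- new columns appended at the end
    t.last_ticket_opened_at
from base                     -- the model's existing final CTE
left join ticket_rollup t using (customer_id)
```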
Paths A and B both use retail.RET_TICKETS - that's intentional. The lesson is that "new source" means new to THIS model, not new to the project.
A seven-step workflow that turns "which customers should Marketing target?" into a reusable activity layer plus a segment model, materialized into the user's dev schema.
Find the models related to customers, stores, orders, order lines, products, and categories.
Show the grain and joins for those models.
Check recent order dates and category values needed for a 180-day segmentation model.
Create a 180-day customer activity model by store.
Create a segment model for VIPs, big spenders, and category-loyal customers, built on top of the activity model.
Compile and preview the segment model. Exclude customers with no segment.
Materialize the segment model into my dev schema. Skip the verification pass - the preview already confirmed the output.
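The segment model from step 5 is typically a CASE expression over the activity model, filtered so unsegmented customers drop out (step 6). The model name, thresholds, and activity columns below are all assumptions for illustration, not lab answers.

```sql
-- customer_segments.sql (sketch): segments layered on the 180-day activity
-- model. Thresholds and column names are assumptions.
with scored as (
    select
        customer_id,
        case
            when order_count_180d   >= 10   then 'VIP'
            when total_spend_180d   >= 5000 then 'big spender'
            when top_category_share >= 0.8  then 'category-loyal'
        end as segment
    from {{ ref('customer_activity_180d') }}
)
select *
from scored
where segment is not null   -- exclude customers with no segment
```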
A five-step workflow that turns a red dbt run ("Column 'X' does not exist in source") into a fixed, re-running pipeline. Reproduce the failure, map the blast radius, apply an alias-preserving fix, re-run to green.
| Placeholder | Path A - Products | Path B - Orders | Path C - Customers |
|---|---|---|---|
| [BROKEN_MODEL] | stg_products | stg_orders | stg_customers |
| [OLD_COLUMN] | brand | status | customer_type |
| [NEW_COLUMN] | brand_name | order_status | segment |
| Source table | retail.RET_PRODUCTS | retail.RET_ORDERS | retail.RET_CUSTOMERS |
| Blast radius | ~8 downstream files (4 intermediates, 4 marts) | ~7 downstream files + YAML tests | 5 downstream files + YAML tests |
```shell
dbt run --select [BROKEN_MODEL]+
```
My dbt run just failed. Read the most recent run results and tell me which model failed, what the error was, and which upstream source or column the error references.
Describe the current schema of the upstream source table that [BROKEN_MODEL] reads from. List every column that exists today.
Show me every model, source definition, and test in this project that references the column [OLD_COLUMN]. I need a complete blast-radius list before I change anything.
Update [BROKEN_MODEL] and every other file you just listed to use [NEW_COLUMN] instead of [OLD_COLUMN]. Keep the downstream column alias the same so consumers of these models don't break - only the source-side reference should change.
Compile [BROKEN_MODEL] and every downstream model you just edited, then preview the first 10 rows of [BROKEN_MODEL] ordered deterministically. Do not materialize anything yet.
```shell
dbt run --select [BROKEN_MODEL]+
```
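For Path A, the alias-preserving fix from step 4 looks like the sketch below: only the source-side reference changes, while the alias downstream consumers select stays the same. The surrounding columns are assumptions.

```sql
-- stg_products.sql (sketch, Path A): the source table renamed brand to
-- brand_name, so the staging model reads the new column but keeps the old
-- alias - downstream refs to "brand" keep working unchanged.
select
    product_id,           -- assumed key column
    brand_name as brand   -- read the new source column, keep the old alias
from {{ source('retail', 'ret_products') }}
```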
Terms that appear across the prompts and dbt Wizard responses.