A five-step workflow that turns a red dbt run ("Column 'X' does not exist in source") into a fixed, re-running pipeline. No grep-and-pray across the project, no manual DESC TABLE against the warehouse, no guessing which downstream models also break.
Path A - RET_PRODUCTS.brand renamed to brand_name. Eight downstream files to fix, alias-preserved.
Path B - RET_ORDERS.status renamed to order_status. Widest blast radius - eleven files including the YAML column tests in _staging__models.yml.
Path C - RET_CUSTOMERS.customer_type renamed to segment. Compact blast radius - five files plus the YAML tests.
A Fivetran-synced retail source column was renamed upstream overnight. Your dbt run is broken.
You need to find what changed, find every place the old name was referenced, fix them, and re-run, all before the morning standup. This is the everyday analytics-engineering problem dbt Wizard exists to solve: upstream-source drift caught and fixed in minutes, not in a Slack thread that lasts the rest of the day.
Pick one path before Step 1. The prompt flow is identical across paths - only the placeholders and blast radius change.
retail.RET_PRODUCTS.brand was renamed to brand_name.
Blast radius: stg_products plus roughly 8 downstream files - 4 intermediates (int_inventory_status, int_order_items_enriched, int_product_sales_summary, int_products_enriched) and 4 marts (dim_products, fct_inventory_transactions, fct_order_items, agg_product_performance).
retail.RET_ORDERS.status was renamed to order_status.
Blast radius (wide): stg_orders, the YAML column tests in _staging__models.yml, and roughly 7 downstream files including int_orders_enriched, int_daily_revenue, int_customer_order_summary, and fct_orders. The staging model already aliases status as order_status, so the fix is mechanical but the blast radius makes the demo land.
retail.RET_CUSTOMERS.customer_type was renamed to segment.
Blast radius (compact): stg_customers, the YAML column tests in _staging__models.yml, and 5 downstream files (int_customer_cohorts, dim_customers, customer_lifetime_value, fct_tickets, agg_customer_cohorts).
The prompts below use placeholders [OLD_COLUMN], [NEW_COLUMN], and [BROKEN_MODEL]. Substitute the values for the path you picked - the reference card sits underneath.
The lab environment was set up with the rename already applied to the source table, so the first thing you do is run dbt and watch it fail. This grounds the rest of the workflow in a real error message, not a hypothetical.
dbt run --select [BROKEN_MODEL]+
You will see a compile or runtime error referencing [OLD_COLUMN]. Capture the exact error text - Step 1 uses it.
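For Path A, the failure looks roughly like this - an illustrative Snowflake error wrapped by dbt, not verbatim lab output, and the surrounding column list is hypothetical:

```sql
-- Compiled SQL for stg_products still selects the old name:
select
    product_id,   -- hypothetical neighboring column
    brand         -- <- no longer exists in RET_PRODUCTS
from retail.RET_PRODUCTS
-- Snowflake rejects this with something like:
--   Database Error in model stg_products
--   000904 (42000): SQL compilation error:
--   invalid identifier 'BRAND'
```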
Start from the failure, not from a hunch. No need to scroll through stack traces in the terminal. dbt Wizard surfaces the model name, the failing column, and the source it traces back to.
My dbt run just failed. Read the most recent run results and tell me which model failed, what the error was, and which upstream source or column the error references.
Then ask for the current schema.
Describe the current schema of the upstream source table that [BROKEN_MODEL] reads from. List every column that exists today.
dbt Wizard pulls the live column list from Snowflake - the source of truth - and lays it next to what the model is asking for. This converts "something changed upstream" into "this specific column was renamed."
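If you want to verify the live schema by hand - the manual step dbt Wizard replaces - a standard Snowflake information-schema query works. The database and table names here follow Path A; treat exact casing as an assumption about the lab environment:

```sql
-- List today's columns for the source table behind stg_products (Path A).
select column_name, data_type
from retail.information_schema.columns
where table_name = 'RET_PRODUCTS'
order by ordinal_position;
-- brand will be absent; brand_name will appear in its place.
```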
Show me every model, source definition, and test in this project that references the column [OLD_COLUMN]. I need a complete blast-radius list before I change anything.
Expected outcome: the schema comparison confirms [OLD_COLUMN] was renamed to [NEW_COLUMN]. Read the new name off the live schema - guessing it from context is the #1 way this fix goes sideways.
This is the step that prevents the "I fixed it but it's still broken" loop. dbt Wizard returns every file that references the old name: staging SQL, intermediate SQL, mart SQL, the column tests defined in _staging__models.yml, and any other YAML descriptions.
Update [BROKEN_MODEL] and every other file you just listed to use [NEW_COLUMN] instead of [OLD_COLUMN]. Keep the downstream column alias the same so consumers of these models don't break - only the source-side reference should change.
Check that _staging__models.yml appears in the result. Column-test definitions in that file are where this scenario most commonly catches an incomplete fix.
The critical instruction is the alias preservation: change the source-side reference to [NEW_COLUMN] but keep the public column name (brand, order_status, customer_type) so every downstream model and dashboard keeps working without further changes. A staging model selecting brand becomes select brand_name as brand, not select brand_name.
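For Path A, the alias-preserving edit to the staging model is a one-line change. A minimal sketch - the neighboring column is hypothetical, and only the source-side reference moves:

```sql
-- models/staging/stg_products.sql (Path A), after the fix.
-- Before the fix, the second line read:  brand,
select
    product_id,            -- hypothetical; other columns unchanged
    brand_name as brand    -- new source name, original public name
from {{ source('retail', 'RET_PRODUCTS') }}
```

Downstream models keep selecting brand and never notice the rename.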
Compile [BROKEN_MODEL] and every downstream model you just edited, then preview the first 10 rows of [BROKEN_MODEL] ordered deterministically. Do not materialize anything yet.
The compile is the smoke test. If any file still references [OLD_COLUMN], the compile fails here, not in the warehouse. The preview confirms the renamed column is flowing through correctly under its original public name.
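The deterministic preview is equivalent to something like the query below (Path A; the ordering column is an assumption). The point is the order by - without it, "first 10 rows" is not reproducible between runs:

```sql
-- Preview the staging model under its public column names.
select product_id, brand        -- still brand, not brand_name
from {{ ref('stg_products') }}  -- compiles to the dev-schema relation
order by product_id
limit 10;
```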
Expected result: the preview shows the column under its original name [OLD_COLUMN] (the alias survived) with sane-looking values, and no compile errors anywhere in the lineage.
The "did we really fix it?" moment. A green run here closes the loop on the original failure from the pre-step.
dbt run --select [BROKEN_MODEL]+
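After the run goes green, an optional sanity query confirms the public name survived end to end. This sketch assumes Path A and a mart schema named analytics - both placeholders for your environment:

```sql
-- brand should still exist in the mart, with values flowing through.
select brand, count(*) as product_count
from analytics.dim_products
group by brand
order by product_count desc
limit 5;
```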
Path A - [BROKEN_MODEL] = stg_products, [OLD_COLUMN] = brand, [NEW_COLUMN] = brand_name.
Path B - [BROKEN_MODEL] = stg_orders (the failure also surfaces downstream in fct_orders), [OLD_COLUMN] = status, [NEW_COLUMN] = order_status.
Path C - [BROKEN_MODEL] = stg_customers, [OLD_COLUMN] = customer_type, [NEW_COLUMN] = segment.
Whichever path you picked, the broken staging model (stg_products, stg_orders, or stg_customers) now compiles, runs, and emits the renamed source column under its original public name. Every downstream model in the project still resolves without further changes. The fix landed in minutes, not a Slack thread that lasts the rest of the day.
The canonical scenario-4 skill definition with all three paths, blast-radius detail, and instructor notes.
One-page lab handout with every prompt in this scenario, formatted for print.
You've finished the four scenarios plus onboarding. Return to the overview to pick another path or share with a teammate.