Describe the bug
dbt leaves a partitioned __dbt_tmp table behind even when --full-refresh removes partitioning from the model, leading to an error on the next incremental refresh.
Steps To Reproduce
1. Create the model with partitioning and run it (a minimal model config is sketched after these steps).
2. Remove the partitioning and re-run the model with a full refresh:
   dbt run --full-refresh --select my_model
   1 of 1 OK created incremental model
3. Try to run the model incrementally:
   dbt run --select my_model
   Database Error in model ...
   Cannot replace a table with a different partitioning spec. Instead, DROP the table, and then recreate it. New partitioning spec is ... and existing spec is ....
4. Delete the table my_model__dbt_tmp from BigQuery by hand.
5. Run the incremental refresh again, which now succeeds:
   dbt run --select my_model
   1 of 1 OK created incremental model
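For reference, a minimal setup that reproduces this looks roughly like the sketch below. The model, source, and column names are made up for illustration; the point is an insert_overwrite incremental model whose partition_by config is later deleted before the full refresh.

```sql
-- models/my_model.sql (hypothetical names throughout)
{{
  config(
    materialized = 'incremental',
    incremental_strategy = 'insert_overwrite',
    -- Keep this partition_by block for the initial runs; the
    -- insert_overwrite strategy stages data in my_model__dbt_tmp.
    -- To trigger the bug, delete partition_by and do a --full-refresh.
    partition_by = {
      'field': 'event_date',
      'data_type': 'date'
    }
  )
}}

select
    event_date,
    count(*) as events
from {{ source('my_source', 'events') }}
group by event_date
```

After partition_by is removed, the full refresh recreates my_model unpartitioned, but the previously created my_model__dbt_tmp table keeps its old partitioning spec, which is what the next incremental run then trips over.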
Expected behavior
dbt should drop the my_model__dbt_tmp table on a full refresh, even though the full-refresh path does not use that table.
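Until that happens, the manual fix in step 4 amounts to dropping the stale staging table yourself; the project and dataset names below are placeholders.

```sql
-- Manual workaround (placeholder project/dataset): remove the stale,
-- still-partitioned staging table so the next incremental run can
-- recreate it with the current partitioning spec.
DROP TABLE IF EXISTS `my-project.my_dataset.my_model__dbt_tmp`;
```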
Screenshots and log output
Please contact me if you need the logs, because they are generally full of proprietary information about our sources and transformations.
System information
I'm using dbt Cloud.
github-actions bot changed the title to "[CT-465] dbt leaves partitioned __dbt_tmp table even when --full-refresh removes partitioning from the model, causing errors" on Apr 6, 2022
@martinburch Thank you for bringing up this interesting case. It falls into a slight edge case around partitioning.
Luckily, after some discussion and experimentation, it looks like we may have found a few potential solutions.
Adding a drop_table_if_exists() call, either right after where we define the tmp_relation here or inside the script that runs during insert_overwrite. This has the advantage of being a simple addition, and it works because we don't currently use real BigQuery temporary tables.
We could also go a step further and add a conditional check to see whether the tmp relation (a) already exists and (b) is replaceable, using adapter.is_replaceable. This has the added benefit of checking whether the partition config has changed, and it would remain usable if we ever did switch to real temporary tables, though for the __dbt_tmp tables we use today it may not be strictly necessary. A rough sketch of the first option follows below.
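For illustration only, a minimal sketch of that first option, assuming the tmp_relation naming already used in the incremental materialization and the standard make_temp_relation, load_relation, and adapter.drop_relation context members; this is just the shape of a pre-emptive drop, not the actual patch.

```sql
{#- Sketch only: drop any leftover __dbt_tmp relation before the
    materialization recreates it, so a stale partitioning spec from a
    previous run cannot conflict with the new one. -#}
{%- set tmp_relation = make_temp_relation(this) -%}
{%- set existing_tmp = load_relation(tmp_relation) -%}

{%- if existing_tmp is not none -%}
  {#- __dbt_tmp is a regular BigQuery table today, not a true temporary
      table, so it can survive a --full-refresh with its old spec. -#}
  {%- do adapter.drop_relation(existing_tmp) -%}
{%- endif -%}
```

The second option would guard this drop with adapter.is_replaceable, so the relation is only dropped when its partitioning or clustering configuration no longer matches the model.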
I really hope this helps you and please feel free to ask any other questions you might have.
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue, or it will be closed in 7 days.