
How we shaved 90 minutes off our longest running model

dbt Developer Hub

When running a job that has over 1,700 models, how do you know what a “good” runtime is? While there are many possible answers depending on dataset size, complexity of modeling, and historical run times, the crux of the matter is normally “did you hit your SLAs?” The model fct_dbt_invocations takes, on average, 1.5…


August 2022 dbt Update: v1.3 beta, Tech Partner Program, and Coalesce!

dbt Developer Hub

Semantic layer, Python model support, the new dbt Cloud UI and IDE… there’s a lot our product team is excited to share with you at Coalesce in a few weeks. But how these things fit together, because of where dbt Labs is headed, is what I’m most excited to discuss. The v1.3 beta, including support for dbt models written in Python, is ready to explore!