
Data News — 2 years anniversary

Christophe Blefari

One day, I decided to save the links on a blog created for the occasion; a few days later, 3 people subscribed. During my time at Kapten, we built a data stack with Airflow, BigQuery and Metabase + Tableau. I was coming from the Hadoop world, and BigQuery was a breath of fresh air.


Data News — Week 23.11

Christophe Blefari

Speeding up “Reverse ETL” — Ziqi works at Microsoft and details in this article what they had to consider to improve their Lakehouse exports to downstream databases. This blog shows how you can automate it with CI and definitions in exposures. At the moment only parts 1 and 2 are written, but it looks promising.
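For context, "reverse ETL" in the sense above boils down to pushing a curated warehouse table back into an operational database. Below is a minimal sketch of that pattern, not taken from the article: it assumes a SQLAlchemy-readable warehouse (via the sqlalchemy-bigquery dialect), a downstream Postgres database, and a hypothetical user_metrics table with a unique user_id key.

```python
# Minimal reverse-ETL sketch: read a modelled table from the warehouse and
# upsert it into a downstream operational database.
# Connection strings, the user_metrics table, and its user_id key are hypothetical.
import pandas as pd
from sqlalchemy import create_engine, text

warehouse = create_engine("bigquery://my-project/analytics")   # source (needs sqlalchemy-bigquery)
app_db = create_engine("postgresql://user:password@host/app")  # destination

# 1. Pull the curated table that downstream tools need.
df = pd.read_sql("SELECT user_id, lifetime_value, segment FROM user_metrics", warehouse)

# 2. Land it in a staging table, then merge, so consumers never see a half-loaded export.
#    Assumes the target user_metrics table has a unique constraint on user_id.
df.to_sql("user_metrics_staging", app_db, if_exists="replace", index=False)
with app_db.begin() as conn:
    conn.execute(text("""
        INSERT INTO user_metrics (user_id, lifetime_value, segment)
        SELECT user_id, lifetime_value, segment FROM user_metrics_staging
        ON CONFLICT (user_id) DO UPDATE
        SET lifetime_value = EXCLUDED.lifetime_value,
            segment = EXCLUDED.segment
    """))
```

The staging-then-merge step is the part worth keeping even in toy versions: reruns stay idempotent and the export looks atomic from the application's side.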



Level Up Your Data Platform With Active Metadata

Data Engineering Podcast

Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP.


The Future of Data Warehousing

Monte Carlo

In this blog post, we’ll look at six innovations that are shaping the future of data warehousing, as well as challenges and considerations that organizations should keep in mind. Beyond zero-copy data sharing, Amazon (AWS) has a grander vision for a zero-ETL future altogether.
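Zero-copy data sharing is the most concrete of those innovations, so here is a hedged sketch of what it looks like on Snowflake, driven from Python. The account, database, and share names are all made up, and the SQL is the standard producer/consumer share flow rather than anything specific to this post.

```python
# Illustrative sketch of zero-copy data sharing (Snowflake flavour).
# No data is copied or exported: the consumer account queries the
# producer's table in place. All names below are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="producer_account",   # hypothetical credentials
    user="data_admin",
    password="...",
)
cur = conn.cursor()

# Producer side: expose one table through a share instead of building an export pipeline.
for stmt in [
    "CREATE SHARE sales_share",
    "GRANT USAGE ON DATABASE sales_db TO SHARE sales_share",
    "GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share",
    "GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share",
    "ALTER SHARE sales_share ADD ACCOUNTS = consumer_account",
]:
    cur.execute(stmt)

# Consumer side (run in the consumer account): mount the share as a database
# and query it like any local table, with no ETL in between.
#   CREATE DATABASE shared_sales FROM SHARE producer_account.sales_share;
#   SELECT count(*) FROM shared_sales.public.orders;
```

That "no pipeline to build" property is essentially what AWS's zero-ETL vision generalizes.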


Maintain Your Data Engineers' Sanity By Embracing Automation

Data Engineering Podcast



The Future of Data Engineering as a Data Engineer

Monte Carlo

In short, Maxime argues that to effectively scale data science and analytics in the future, data teams would need a specialized engineer to manage ETL, build pipelines, and scale data infrastructure. But a few characteristics of data make working with ETL pipelines very different from working on a codebase. Enter the data engineer.


Building And Managing Data Teams And Data Platforms In Large Organizations With Ashish Mrig

Data Engineering Podcast

Finally, if you have existing workflows in AbInitio, Informatica, or other ETL formats that you want to move to the cloud, you can import them automatically into Prophecy, making them run productively on Spark. You can observe your pipelines with built-in metadata search and column-level lineage.
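For readers who have never seen what such legacy mappings become once they run on Spark, here is a generic PySpark version of a simple extract-transform-load step. This is not Prophecy-generated code, just an assumed example with hypothetical paths and column names.

```python
# Generic PySpark equivalent of a simple extract-transform-load step, the kind
# of workload a legacy ETL mapping typically becomes on Spark.
# Not Prophecy output; paths, tables, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

orders = spark.read.parquet("s3://raw/orders/")            # extract
customers = spark.read.parquet("s3://raw/customers/")

daily_revenue = (                                          # transform
    orders.join(customers, "customer_id")
          .groupBy("order_date", "country")
          .agg(F.sum("amount").alias("revenue"))
)

(daily_revenue.write                                       # load
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://curated/daily_revenue/"))
```

Column-level lineage over a job like this means tracking that revenue in the curated output derives from amount in the raw orders data, which is the kind of metadata the tooling above is meant to surface.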
