Mainframe Optimization: 5 Best Practices to Implement Now

Precisely

Typically, a mainframe-to-cloud migration includes refactoring code into a modern object-oriented language such as Java or C#. It frequently also means moving operational data from native mainframe databases to a modern relational database.

Reflections On Designing A Data Platform From Scratch

Data Engineering Podcast

If you’re a Data Engineering Podcast listener, you get credits worth $3,000 on an annual subscription. TimescaleDB, from your friends at Timescale, is the leading open-source relational database with support for time-series data. Time-series data is relentless and requires a database like TimescaleDB, with speed and petabyte scale.
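As a sketch of what that time-series support looks like in practice, TimescaleDB extends PostgreSQL with hypertables that are automatically partitioned on a time column. The database name, table, and columns below are placeholders, not from the article:

```shell
# Hypothetical sketch: turn a regular PostgreSQL table into a TimescaleDB
# hypertable partitioned on its time column. Assumes the timescaledb
# extension is installed and a database named "metrics" exists.
psql -d metrics -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"
psql -d metrics -c "CREATE TABLE conditions (
  time        TIMESTAMPTZ      NOT NULL,
  device      TEXT,
  temperature DOUBLE PRECISION
);"
# create_hypertable() converts the plain table into a hypertable,
# chunked by the 'time' column for fast time-range queries at scale.
psql -d metrics -c "SELECT create_hypertable('conditions', 'time');"
```

After this, inserts and queries use ordinary SQL; the chunking is transparent to the application.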


5 Layers of Data Lakehouse Architecture Explained

Monte Carlo

In this article, we’ll peel back the 5 layers that make up data lakehouse architecture: data ingestion, data storage, metadata, API, and data consumption. We’ll also look at the expanded opportunities a data lakehouse opens up for generative AI, and at how to maintain data quality throughout the pipeline with data observability.

Toward a Data Mesh (part 2) : Architecture & Technologies

François Nguyen

To illustrate that, let’s take Cloud SQL from the Google Cloud Platform, which is a “Fully managed relational database service for MySQL, PostgreSQL, and SQL Server.” It looks like this when you want to create an instance: you can choose parameters like the region, the version, or the number of CPUs.
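The same instance creation can be done from the command line instead of the console. A minimal sketch with `gcloud`, assuming the Cloud SQL Admin API is enabled; the instance name, region, and sizing values are illustrative placeholders:

```shell
# Hypothetical sketch: create a Cloud SQL for PostgreSQL instance,
# setting the same parameters the console form exposes
# (engine version, region, CPUs, memory).
gcloud sql instances create my-postgres-instance \
  --database-version=POSTGRES_15 \
  --region=europe-west1 \
  --cpu=2 \
  --memory=8GB
```

Scripting the creation this way is what makes the "self-serve" part of a data mesh platform repeatable across domains.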

What Are the Best Data Modeling Methodologies & Processes for My Data Lake?

phData: Data Engineering

Metadata Management: Data modeling methodologies help in managing metadata within the data lake. Metadata describes the characteristics, attributes, and context of the data. By incorporating metadata into the data model, users can easily discover, understand, and interpret the data stored in the lake.

Iceberg Tables: Catalog Support Now Available

Snowflake

Iceberg supports many catalog implementations: Hive, AWS Glue, Hadoop, Nessie, Dell ECS, any relational database via JDBC, REST, and now Snowflake. After making an initial connection to Snowflake via the Iceberg Catalog SDK, Spark can read Iceberg metadata and Parquet files directly from the customer-managed storage account.
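A sketch of what that initial connection might look like from Spark's side: the catalog is registered through Spark configuration, pointing `catalog-impl` at Iceberg's Snowflake catalog class. The catalog name, account URL, and package versions below are assumptions for illustration, not taken from the article:

```shell
# Hypothetical sketch: launch spark-shell with an Iceberg catalog backed
# by Snowflake's Iceberg Catalog SDK. Spark then reads Iceberg metadata
# and Parquet files directly from the customer-managed storage account.
spark-shell \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0,net.snowflake:snowflake-jdbc:3.14.2 \
  --conf spark.sql.catalog.snowflake_catalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.snowflake_catalog.catalog-impl=org.apache.iceberg.snowflake.SnowflakeCatalog \
  --conf spark.sql.catalog.snowflake_catalog.uri=jdbc:snowflake://<account>.snowflakecomputing.com
```

With the catalog registered, tables are addressed as `snowflake_catalog.<database>.<schema>.<table>` in Spark SQL.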