New Databricks Delta Features Simplify Data Pipelines

Databricks Delta is available on the following platform tiers:

Microsoft Azure
  Azure Databricks Standard Data Engineering   ✓
  Azure Databricks Standard Data Analytics     ✓
  Azure Databricks Premium Data Engineering    ✓
  Azure Databricks Premium Data Analytics      ✓

AWS
  Databricks Basic              ✘
  Databricks Data Engineering   ✓
  Databricks Data Analytics     ✓

Easy to Adopt: Check Out Delta Today

Porting existing Spark code to Delta is as simple as changing "CREATE TABLE ... USING parquet" to "CREATE TABLE ... USING delta", or changing dataframe.write.format("parquet").save("/data/events") to dataframe.write.format("delta").save("/data/events").
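As a minimal PySpark sketch of the change (the SparkSession setup, source data, and paths are illustrative, not from the original post):

```python
from pyspark.sql import SparkSession

# Assumes a cluster where Databricks Delta is available;
# the app name, source path, and output path are illustrative.
spark = SparkSession.builder.appName("delta-port").getOrCreate()
dataframe = spark.read.json("/data/raw/events")

# Before: write the DataFrame as plain Parquet
# dataframe.write.format("parquet").save("/data/events")

# After: write it as a Delta table; only the format string changes
dataframe.write.format("delta").save("/data/events")

# Reads change the same way
events = spark.read.format("delta").load("/data/events")
```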

You can explore Delta today using:

- Databricks Delta Quickstart – for an introduction to Databricks Delta (Azure | AWS).
- Optimizing Performance and Cost – for a discussion of features such as compaction, Z-Ordering, and data skipping (Azure | AWS).

Both of these contain notebooks in Python, Scala and SQL that you can use to try Delta.
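For instance, compaction and Z-Ordering are exposed as Delta SQL commands on Databricks. A minimal sketch, reusing the spark session from the snippet above (the table name and column are illustrative):

```python
# Compact many small files into fewer, larger ones (bin-packing)
spark.sql("OPTIMIZE events")

# Co-locate related rows on disk so data skipping can prune more files
spark.sql("OPTIMIZE events ZORDER BY (eventType)")
```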

If you are not already using Databricks, you can try Databricks Delta for free by signing up for the Databricks trial (Azure | AWS).

Making Data Engineering Easier

Data engineering is critical to successful analytics, and customers can use Delta in various ways to improve their data pipelines. We have summarized some of these use cases in the following blog posts:

- Change data capture with Databricks Delta
- Building a real-time attribution pipeline with Databricks Delta
- Processing Petabytes of data in seconds with Databricks Delta
- Simplifying streaming stock data analysis with Databricks Delta
- Build a mobile gaming events data pipeline with Databricks Delta

You can learn more about Delta from the Databricks Delta documentation (Azure | AWS).
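As one example of these patterns, change data capture on a Delta table typically reduces to an upsert with Delta's merge support. A hedged sketch using the DeltaTable Python API (the paths, join key, and change-record source are illustrative, not taken from the linked post):

```python
from delta.tables import DeltaTable

# Upsert a batch of change records into an existing Delta table.
target = DeltaTable.forPath(spark, "/data/events")
changes = spark.read.json("/data/raw/event_changes")

(target.alias("t")
    .merge(changes.alias("s"), "t.eventId = s.eventId")
    .whenMatchedUpdateAll()      # apply updates to rows that already exist
    .whenNotMatchedInsertAll()   # insert change records for new rows
    .execute())
```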
