Databricks for Data Engineering

Type:

  • Webcast

Topic(s):

  • Databricks
  • Data Engineering
  • Data Transformation

Whether you’re looking to transform and clean large volumes of data or collaborate with colleagues to build advanced analytics jobs that can be scaled and run automatically, Databricks offers a Unified Analytics Platform that promises to make your life easier.

Built by the same team that created Apache Spark, and backed by strong partnerships with both Microsoft Azure and AWS, Databricks is designed to take the pain out of managing a cloud-scale analytics platform, letting you focus on valuable analysis.

Across these two webcasts, we’ll look at two key use cases for Databricks:


Part 2: Databricks for Data Engineering on September 19

Part 2 will introduce the vital role Databricks can play in your organization’s cloud data architecture as the primary tool for data transformation.

We’ll showcase Databricks’ data transformation and data movement capabilities, show how the platform fits alongside other cloud computing services, and highlight its security, flexibility and collaboration features. We’ll also look at Delta Lake, and how it provides improved storage for both large-scale datasets and real-time streaming data.
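
To give a flavour of what this looks like in practice, here is a minimal sketch of writing and reading a Delta Lake table with PySpark. It assumes a Databricks notebook where a `spark` session is already available; the table path and sample data are purely illustrative and are not taken from the webcast itself.

```python
# Minimal sketch: writing and reading a Delta Lake table with PySpark.
# Assumes a Databricks notebook where `spark` is already defined;
# the path and sample rows below are illustrative only.
from pyspark.sql import functions as F

# Batch write: store raw events as a Delta table (ACID, scalable storage).
raw = spark.createDataFrame(
    [(1, "2019-09-19 10:00:00", 42.0), (2, "2019-09-19 10:01:00", 17.5)],
    ["event_id", "event_time", "value"],
)
(raw
 .withColumn("event_time", F.to_timestamp("event_time"))
 .write.format("delta")
 .mode("overwrite")
 .save("/tmp/demo/events_delta"))

# The same table can also be appended to by a streaming job, so batch and
# real-time data land in one place. Reading it back is a plain Spark read:
events = spark.read.format("delta").load("/tmp/demo/events_delta")
events.groupBy(F.to_date("event_time").alias("day")).agg(F.avg("value")).show()
```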

Register for part two below


Part 1: Databricks for Data Science on August 22

Using demos based on real customer use cases, we’ll introduce some of Databricks’ key features for data science, such as the ability to scale the analysis automatically based on the workload, and the option to switch between SQL, R and Python depending on the task at hand.
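
As a small illustration of that language switching, the sketch below mixes Python and SQL over the same data in a single notebook. It again assumes a Databricks notebook with `spark` available; the data and view name are placeholders, not demo material from the webcast.

```python
# Minimal sketch: mixing Python and SQL on the same data in a Databricks
# notebook. Assumes `spark` is already available; the data is illustrative.
sales = spark.createDataFrame(
    [("EMEA", 120.0), ("AMER", 95.0), ("EMEA", 80.0)],
    ["region", "amount"],
)
sales.createOrReplaceTempView("sales")

# The registered view can now be queried from SQL (or from R via SparkR)
# in another notebook cell; here we call it from Python.
spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()
```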

We’ll look at how Databricks’ managed MLflow supports the analytics project lifecycle, and consider how you can use it in combination with other tools to automate analytics and present outputs to the key decision-makers in your organization.
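
For a taste of what MLflow tracking involves, here is a minimal sketch of logging a single run with MLflow’s Python API. It assumes the `mlflow` and `scikit-learn` packages are installed; the model, parameter and metric are placeholders chosen for illustration, not examples from the webcast.

```python
# Minimal sketch: tracking an experiment run with MLflow.
# Assumes `mlflow` and `scikit-learn` are installed; the model, parameter
# and metric below are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

with mlflow.start_run():
    alpha = 0.5
    model = Ridge(alpha=alpha).fit(X, y)
    mse = mean_squared_error(y, model.predict(X))

    # Parameters, metrics and the fitted model are recorded against this run,
    # so results stay reproducible across the project lifecycle.
    mlflow.log_param("alpha", alpha)
    mlflow.log_metric("mse", mse)
    mlflow.sklearn.log_model(model, "model")
```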

Register for part one here