Learn how to use Apache Spark on clusters running in the Azure Databricks platform to run large-scale data engineering workloads in the cloud.
Course Outline
Lesson: Implement a data engineering solution with Azure Databricks
- Perform incremental processing with Spark Structured Streaming
- Introduction
- Set up real-time data sources for incremental processing
- Optimize Delta Lake for incremental processing in Azure Databricks
- Handle late data and out-of-order events in incremental processing
  - Monitor and tune performance for incremental processing in Azure Databricks
  - Exercise – Real-time ingestion and processing with Delta Live Tables in Azure Databricks
- Implement streaming architecture patterns with Delta Live Tables
- Introduction
  - Event-driven architectures with Delta Live Tables
- Ingest data with structured streaming
- Maintain data consistency and reliability with structured streaming
  - Scale streaming workloads with Delta Live Tables
  - Exercise – Build an end-to-end streaming pipeline with Delta Live Tables
- Optimize performance with Spark and Delta Live Tables
- Introduction
- Optimize performance with Spark and Delta Live Tables
- Perform cost-based optimization and query tuning
- Use change data capture (CDC)
- Use enhanced autoscaling
- Implement observability and data quality metrics
  - Exercise – Optimize data pipelines for better performance in Azure Databricks
- Implement CI/CD workflows in Azure Databricks
- Introduction
- Implement version control and Git integration
- Perform unit testing and integration testing
- Manage and configure your environment
- Implement rollback and roll-forward strategies
- Exercise – Implement CI/CD workflows
- Automate workloads with Azure Databricks Jobs
- Introduction
- Implement job scheduling and automation
- Optimize workflows with parameters
- Handle dependency management
- Implement error handling and retry mechanisms
- Explore best practices and guidelines
- Exercise – Automate data ingestion and processing
- Manage data privacy and governance with Azure Databricks
- Introduction
- Implement data encryption techniques in Azure Databricks
- Manage access controls in Azure Databricks
- Implement data masking and anonymization in Azure Databricks
- Use compliance frameworks and secure data sharing in Azure Databricks
- Use data lineage and metadata management
- Implement governance automation in Azure Databricks
- Exercise – Practice the implementation of Unity Catalog
- Use SQL Warehouses in Azure Databricks
- Introduction
- Get started with SQL Warehouses
- Create databases and tables
- Create queries and dashboards
- Exercise – Use a SQL Warehouse in Azure Databricks
- Run Azure Databricks Notebooks with Azure Data Factory
- Introduction
- Understand Azure Databricks notebooks and pipelines
- Create a linked service for Azure Databricks
- Use a Notebook activity in a pipeline
- Use parameters in a notebook
- Exercise – Run an Azure Databricks Notebook with Azure Data Factory