Deep Learning with BigDL
Overview
Intel’s open‑source BigDL library brings state‑of‑the‑art deep learning to Spark and Hadoop environments at scale. This course introduces deep‑learning concepts and walks through implementing neural‑network models with BigDL, including how to leverage existing TensorFlow and Caffe models.
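To give a flavour of the hands‑on work, the sketch below shows how a pretrained model might be pulled into BigDL. It is a minimal illustration only, assuming the BigDL 0.x Python API; the file paths and tensor names are placeholders, not course assets.

```python
from pyspark import SparkContext
from bigdl.util.common import create_spark_conf, init_engine
from bigdl.nn.layer import Model

# Standard BigDL bootstrap: a Spark context with BigDL's conf, then the engine
sc = SparkContext(appName="bigdl-import-demo", conf=create_spark_conf())
init_engine()

# Load a pretrained Caffe model (prototxt and caffemodel paths are placeholders)
caffe_model = Model.load_caffe_model("deploy.prototxt", "weights.caffemodel")

# Load a frozen TensorFlow graph, naming its input and output nodes (placeholders)
tf_model = Model.load_tensorflow("model.pb", ["input"], ["output"])
```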
Objective
Enable students to harness BigDL on Spark/Hadoop to build, scale, and visualise deep‑learning workflows.
What You Will Learn
- Deep Learning foundations & activation/loss/optimiser concepts
- BigDL library architecture, RDD vs. Pipeline APIs
- Visualising training with TensorBoard
- Implementing perceptrons, CNNs, RNNs, and LSTMs in BigDL (see the training sketch after this list)
- Importing TensorFlow/Caffe models & transfer learning
- Scaling DL with distributed data and Spark clusters
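As a taste of the lab work behind these topics, the following sketch defines and trains a small multi‑layer perceptron on an RDD of samples. It assumes the BigDL 0.x Python API; the toy dataset, layer sizes, and hyper‑parameters are illustrative assumptions rather than course material.

```python
import numpy as np
from pyspark import SparkContext
from bigdl.util.common import create_spark_conf, init_engine, Sample
from bigdl.nn.layer import Sequential, Linear, ReLU, LogSoftMax
from bigdl.nn.criterion import ClassNLLCriterion
from bigdl.optim.optimizer import Optimizer, SGD, MaxEpoch

sc = SparkContext(appName="bigdl-mlp-demo", conf=create_spark_conf())
init_engine()

# Toy two-feature dataset wrapped as an RDD of BigDL Samples (labels are 1-based)
data = [(np.array([0.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 2.0),
        (np.array([1.0, 0.0]), 2.0), (np.array([1.0, 1.0]), 1.0)]
train_rdd = sc.parallelize(data).map(lambda p: Sample.from_ndarray(p[0], p[1]))

# A small multi-layer perceptron: 2 inputs -> 4 hidden units -> 2 classes
model = Sequential()
model.add(Linear(2, 4)).add(ReLU())
model.add(Linear(4, 2)).add(LogSoftMax())

# Distributed training driven by Spark; hyper-parameters are placeholders
optimizer = Optimizer(model=model,
                      training_rdd=train_rdd,
                      criterion=ClassNLLCriterion(),
                      optim_method=SGD(learningrate=0.1),
                      end_trigger=MaxEpoch(20),
                      batch_size=4)
trained_model = optimizer.optimize()
```

Because training is driven by Spark, the same model definition scales from a single notebook kernel to a cluster by changing the Spark configuration rather than the code.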
Course Details
Audience: Developers / Data Analysts / Data Scientists
Duration: 3 days
Format: Lectures and hands‑on labs (50% lecture, 50% lab)
Basic Python & Jupyter notebooks; basic Linux; optional ML familiarity
Setup: Cloud cluster with SSH & browser access • Labs in Jupyter notebooks
Detailed Outline
- Activation & loss functions
- Training & validation
- BigDL features & versions
- Spark/Hadoop integration
- Lab: setup & run BigDL
- Execution model & layers
- Lab: BigDL layers
- Perceptron intro
- Activation & softmax
- Backprop & gradient descent
- Lab
- Solving XOR
- Distributed training
- ReLU & loss functions
- TensorBoard visualisation (see the sketch after this outline)
- Lab
- High‑level BigDL
- Pipeline API model
- Lab
- CNN intro & image classification
- Lab
- Model import/export
- Transfer learning
- ImageNet
- Lab
- RNN & LSTM intro
- Labs
- BigDL advantages
- Next steps
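For the TensorBoard item above, BigDL writes training summaries in a format TensorBoard reads directly. A minimal sketch, assuming the BigDL 0.x Python API, an `optimizer` built as in the earlier sketch, and a placeholder log directory:

```python
from bigdl.optim.optimizer import TrainSummary

# Attach a training summary so loss and throughput are logged in a format
# TensorBoard can read; log_dir and app_name are placeholders
train_summary = TrainSummary(log_dir="/tmp/bigdl_summaries", app_name="mlp-demo")
optimizer.set_train_summary(train_summary)

trained_model = optimizer.optimize()

# A ValidationSummary can be attached the same way via optimizer.set_val_summary().
# View the curves with:  tensorboard --logdir /tmp/bigdl_summaries
```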
Ready to Get Started?
Contact us to learn more about this course and schedule your training.