Let us help you find the training program you are looking for.

If you can't find what you are looking for, contact us and we'll help you find it. We have over 800 training programs to choose from.

  • Course Skill Level:

    Foundational

  • Course Duration:

    3 days
  • Course Delivery Format:

    Live, instructor-led.

  • Course Category:

    AI / Machine Learning

  • Course Code:

    AS2MLCL21E09

Who should attend & recommended skills

Beginners with basic Python experience

  • This course is designed for beginners who want to simplify machine learning model implementations with Spark.
  • Skill level: Foundation-level Apache Spark 2.x Machine Learning Cookbook skills for intermediate-skilled team members.
  • This is not a basic class.
  • Python: Basic (1-2 years’ experience).

About this course

Machine learning aims to extract knowledge from data, relying on fundamental concepts in computer science, statistics, probability, and optimization. Learning these algorithms enables a wide range of applications, from everyday tasks such as product recommendations and spam filtering to cutting-edge applications such as self-driving cars and personalized medicine. You will gain hands-on experience applying these principles using Apache Spark, a resilient cluster-computing system well suited to large-scale machine learning tasks. The course begins with a quick overview of setting up the necessary IDEs to run the code examples covered in the various modules, and highlights some key issues developers face when working with machine learning algorithms on the Spark platform. We then work through the various Spark APIs, implementing ML algorithms to build classification systems, recommendation engines, text analytics, clustering, and learning systems. Toward the final modules, we focus on building high-end applications and explain various unsupervised methodologies, along with the challenges of implementing ML systems on big data.

Skills acquired & topics covered

Working in a hands-on learning environment led by our expert instructor, participants will learn about and explore:
  • Solving the day-to-day problems of data science with Spark
  • Working through exciting and intuitive numerical recipes
  • Optimizing your work by acquiring, cleaning, analyzing, predicting, and visualizing your data
  • Getting to know how Scala and Spark go hand in hand when developing ML systems
  • Building a recommendation engine that scales with Spark
  • Finding out how to build unsupervised clustering systems to classify data in Spark
  • Building machine learning systems with the Decision Tree and Ensemble models in Spark
  • Dealing with the curse of high-dimensionality in big data using Spark
  • Implementing text analytics for search engines in Spark
  • Implementing streaming machine learning systems using Spark

Course breakdown / modules

  • Introduction
  • Downloading and installing the JDK
  • Downloading and installing IntelliJ
  • Downloading and installing Spark
  • Configuring IntelliJ to work with Spark and run Spark ML sample codes
  • Running a sample ML code from Spark
  • Identifying data sources for practical machine learning
  • Running your first program using Apache Spark 2.0 with the IntelliJ IDE (see the sketch after this list)
  • How to add graphics to your Spark program
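
A minimal sketch of the kind of first program this module builds, assuming Spark 2.x with the spark-sql dependency on the classpath and a local master; the object name and toy computation are illustrative, not taken from the course materials:

```scala
import org.apache.spark.sql.SparkSession

object FirstSparkApp {
  def main(args: Array[String]): Unit = {
    // Run locally inside IntelliJ, using all available cores
    val spark = SparkSession.builder()
      .appName("FirstSparkApp")
      .master("local[*]")
      .getOrCreate()

    // A trivial distributed computation to confirm the setup works end to end
    val nums = spark.sparkContext.parallelize(1 to 100)
    println(s"Sum of 1..100 = ${nums.sum()}")

    spark.stop()
  }
}
```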

  • Introduction
  • Package imports and initial setup for vectors and matrices
  • Creating DenseVector and setup with Spark 2.0 (see the sketch after this list)
  • Creating SparseVector and setup with Spark
  • Creating dense matrix and setup with Spark 2.0
  • Using sparse local matrices with Spark 2.0
  • Performing vector arithmetic using Spark 2.0
  • Performing matrix arithmetic using Spark 2.0
  • Exploring RowMatrix in Spark 2.0
  • Exploring Distributed IndexedRowMatrix in Spark 2.0
  • Exploring distributed CoordinateMatrix in Spark 2.0
  • Exploring distributed BlockMatrix in Spark 2.0
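
A brief sketch of the vector and matrix constructors this module covers, using the spark.ml linear algebra package (these are local data structures, so no SparkSession is needed); the values are illustrative:

```scala
import org.apache.spark.ml.linalg.{Matrices, Vectors}

// DenseVector: every entry stored explicitly
val dense = Vectors.dense(1.0, 0.0, 3.0)

// SparseVector: size, indices of the non-zero entries, and their values
val sparse = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))

// 3x2 dense matrix, values supplied in column-major order
val dm = Matrices.dense(3, 2, Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))

// 3x2 sparse matrix in CSC format: column pointers, row indices, values
val sm = Matrices.sparse(3, 2, Array(0, 1, 2), Array(0, 2), Array(9.0, 8.0))

println(s"dense = $dense, sparse = $sparse")
println(dm)
```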

  • Introduction
  • Creating RDDs with Spark 2.0 using internal data sources
  • Creating RDDs with Spark 2.0 using external data sources
  • Transforming RDDs with Spark 2.0 using the filter() API (see the sketch after this list)
  • Transforming RDDs with the super useful flatMap() API
  • Transforming RDDs with set operation APIs
  • RDD transformation/aggregation with groupBy() and reduceByKey()
  • Transforming RDDs with the zip() API
  • Join transformation with paired key-value RDDs
  • Reduce and grouping transformation with paired key-value RDDs
  • Creating DataFrames from Scala data structures
  • Operating on DataFrames programmatically without SQL
  • Loading DataFrames and setup from an external source
  • Using DataFrames with standard SQL language – SparkSQL
  • Working with the Dataset API using a Scala Sequence
  • Creating and using Datasets from RDDs and back again
  • Working with JSON using the Dataset API and SQL together
  • Functional programming with the Dataset API using domain objects
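
A compact sketch of the RDD, DataFrame, and Dataset operations this module works through, assuming Spark 2.x run locally; the data is illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Defined at top level in a real project so the Dataset encoder can be derived
case class Person(name: String, age: Int)

val spark = SparkSession.builder().appName("RddDfDemo").master("local[*]").getOrCreate()
import spark.implicits._
val sc = spark.sparkContext

// RDD from an internal (in-memory) data source
val nums = sc.parallelize(1 to 10)

// filter() keeps only the elements matching a predicate
val evens = nums.filter(_ % 2 == 0)

// flatMap() expands each element into zero or more outputs
val words = sc.parallelize(Seq("spark ml", "rdd api")).flatMap(_.split(" "))

// Per-key aggregation with reduceByKey()
val counts = words.map((_, 1)).reduceByKey(_ + _)
counts.collect().foreach(println)

// DataFrame created from a Scala data structure, queried without SQL
val df = Seq(("alice", 34), ("bob", 45)).toDF("name", "age")
df.filter($"age" > 40).show()

// The same data as a strongly typed Dataset
val ds = df.as[Person]
println(ds.filter(_.age > 40).count())
```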

  • Introduction
  • Spark's basic statistical API to help you build your own algorithms
  • ML pipelines for real-life machine learning applications
  • Normalizing data with Spark
  • Splitting data for training and testing (see the sketch after this list)
  • Common operations with the new Dataset API
  • Creating and using RDD versus DataFrame versus Dataset from a text file in Spark 2.0
  • LabeledPoint data structure for Spark ML
  • Getting access to Spark cluster in Spark 2.0
  • Getting access to Spark cluster pre-Spark 2.0
  • Getting access to SparkContext vis-a-vis SparkSession object in Spark 2.0
  • New model export and PMML markup in Spark 2.0
  • Regression model evaluation using Spark 2.0
  • Binary classification model evaluation using Spark 2.0
  • Multiclass classification model evaluation using Spark 2.0
  • Multilabel classification model evaluation using Spark 2.0
  • Using the Scala Breeze library to do graphics in Spark 2.0
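
A small sketch of two recipes from this module, splitting data and evaluating a regression model with RMSE, assuming Spark 2.x run locally; the (label, prediction) pairs are made up for illustration:

```scala
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("EvalDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical (label, prediction) pairs standing in for a fitted model's output
val scored = Seq((1.0, 1.2), (2.0, 1.8), (3.0, 3.3), (4.0, 4.1))
  .toDF("label", "prediction")

// Splitting data for training and testing
val Array(train, test) = scored.randomSplit(Array(0.8, 0.2), seed = 42L)
println(s"train rows = ${train.count()}, test rows = ${test.count()}")

// Regression model evaluation: root-mean-square error
val rmse = new RegressionEvaluator()
  .setMetricName("rmse")
  .evaluate(scored)
println(s"RMSE = $rmse")
```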

  • Introduction
  • Fitting a linear regression line to data the old-fashioned way
  • Generalized linear regression in Spark 2.0
  • Linear regression API with Lasso and L-BFGS in Spark 2.0
  • Linear regression API with Lasso and ‘auto’ optimization selection in Spark 2.0 (see the sketch after this list)
  • Linear regression API with ridge regression and 'auto' optimization selection in Spark 2.0
  • Isotonic regression in Apache Spark 2.0
  • Multilayer perceptron classifier in Apache Spark 2.0
  • One-vs-Rest classifier (One-vs-All) in Apache Spark 2.0
  • Survival regression – parametric AFT model in Apache Spark 2.0
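
A minimal sketch of the Lasso recipe named above, assuming Spark 2.x run locally; the four-point dataset is made up (label ≈ 2 × feature). Setting elasticNetParam to 1.0 selects pure L1 regularization, while 0.0 would give ridge regression:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("LassoDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Tiny synthetic dataset where label is roughly 2 * feature
val data = Seq(
  (2.0, Vectors.dense(1.0)),
  (4.1, Vectors.dense(2.0)),
  (5.9, Vectors.dense(3.0)),
  (8.0, Vectors.dense(4.0))
).toDF("label", "features")

// elasticNetParam = 1.0 -> Lasso (L1); solver selection left to 'auto'
val lasso = new LinearRegression()
  .setRegParam(0.01)
  .setElasticNetParam(1.0)
  .setSolver("auto")

val model = lasso.fit(data)
println(s"coefficients = ${model.coefficients}, intercept = ${model.intercept}")
```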

  • Introduction
  • Linear regression with SGD optimization in Spark 2.0
  • Logistic regression with SGD optimization in Spark 2.0
  • Ridge regression with SGD optimization in Spark 2.0
  • Lasso regression with SGD optimization in Spark 2.0
  • Logistic regression with L-BFGS optimization in Spark 2.0 (see the sketch after this list)
  • Support Vector Machine (SVM) with Spark 2.0
  • Naive Bayes machine learning with Spark 2.0 MLlib
  • Exploring ML pipelines and DataFrames using logistic regression in Spark 2.0
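
A short sketch of logistic regression trained with the L-BFGS optimizer from the RDD-based MLlib API, assuming Spark 2.x run locally; the four training points are made up:

```scala
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("LbfgsDemo").master("local[*]").getOrCreate()
val sc = spark.sparkContext

// Toy, linearly separable training set in the RDD-based MLlib format
val training = sc.parallelize(Seq(
  LabeledPoint(0.0, Vectors.dense(0.0, 1.0)),
  LabeledPoint(0.0, Vectors.dense(0.2, 0.9)),
  LabeledPoint(1.0, Vectors.dense(1.0, 0.1)),
  LabeledPoint(1.0, Vectors.dense(0.9, 0.0))
))

// Logistic regression trained with the L-BFGS optimizer
val model = new LogisticRegressionWithLBFGS()
  .setNumClasses(2)
  .run(training)

println(model.predict(Vectors.dense(0.95, 0.05)))  // expected: 1.0
```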

  • Introduction
  • Setting up the required data for a scalable recommendation engine in Spark 2.0
  • Exploring the movies data details for the recommendation system in Spark 2.0
  • Exploring the ratings data details for the recommendation system in Spark 2.0
  • Building a scalable recommendation engine using collaborative filtering in Spark 2.0 (see the sketch after this list)
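
A minimal sketch of collaborative filtering with ALS, the algorithm behind this module's recommendation engine, assuming Spark 2.x run locally; the (userId, movieId, rating) triples are made-up stand-ins for the ratings data the module prepares:

```scala
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("AlsDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical (user, movie, rating) triples standing in for the ratings file
val ratings = Seq(
  (0, 10, 4.0f), (0, 11, 1.0f),
  (1, 10, 5.0f), (1, 12, 2.0f),
  (2, 11, 3.0f), (2, 12, 4.0f)
).toDF("userId", "movieId", "rating")

// Collaborative filtering via alternating least squares
val als = new ALS()
  .setUserCol("userId")
  .setItemCol("movieId")
  .setRatingCol("rating")
  .setRank(5)
  .setMaxIter(5)
  .setRegParam(0.1)

val model = als.fit(ratings)
model.transform(ratings).show()  // predicted ratings alongside the originals
```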

  • Introduction
  • Building a KMeans classifying system in Spark 2.0 (see the sketch after this list)
  • Bisecting KMeans, the new kid on the block in Spark 2.0
  • Using Gaussian Mixture and Expectation Maximization (EM) in Spark to classify data
  • Classifying the vertices of a graph using Power Iteration Clustering (PIC) in Spark 2.0
  • Latent Dirichlet Allocation (LDA) to classify documents and text into topics
  • Streaming KMeans to classify data in near real-time
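
A minimal sketch of the KMeans recipe named above, assuming Spark 2.x run locally; the four 2-D points form two obvious clusters and are purely illustrative:

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("KMeansDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Two well-separated groups of 2-D points
val points = Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(0.2, 0.1),
  Vectors.dense(9.0, 9.1), Vectors.dense(9.2, 8.9)
).map(Tuple1.apply).toDF("features")

val kmeans = new KMeans().setK(2).setSeed(1L)
val model = kmeans.fit(points)

// Expect centers near (0.1, 0.05) and (9.1, 9.0)
model.clusterCenters.foreach(println)
```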

  • Introduction
  • Optimizing a quadratic cost function and finding the minima using just math to gain insight
  • Coding a quadratic cost function optimization using Gradient Descent (GD) from scratch (see the sketch after this list)
  • Coding Gradient Descent optimization to solve Linear Regression from scratch
  • Normal equations as an alternative for solving Linear Regression in Spark 2.0
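
A from-scratch sketch of the gradient descent recipe named above, in plain Scala with no Spark dependency; the cost function f(x) = (x − 3)² and the learning rate are illustrative:

```scala
// Minimize the quadratic cost f(x) = (x - 3)^2 from scratch.
// Analytic insight first: f'(x) = 2(x - 3) = 0  =>  minimum at x = 3.
val learningRate = 0.1
var x = 0.0
for (step <- 1 to 100) {
  val gradient = 2.0 * (x - 3.0)  // derivative of the cost at the current x
  x -= learningRate * gradient    // take one step downhill
}
println(f"Gradient descent converged to x = $x%.4f (analytic answer: 3.0)")
```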

  • Introduction
  • Getting and preparing real-world medical data for exploring Decision Trees and Ensemble models in Spark 2.0
  • Building a classification system with Decision Trees in Spark 2.0
  • Solving Regression problems with Decision Trees in Spark 2.0
  • Building a classification system with Random Forest Trees in Spark 2.0 (see the sketch after this list)
  • Solving regression problems with Random Forest Trees in Spark 2.0
  • Building a classification system with Gradient Boosted Trees (GBT) in Spark 2.0
  • Solving regression problems with Gradient Boosted Trees (GBT) in Spark 2.0
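
A minimal sketch of the Random Forest classification recipe named above, assuming Spark 2.x run locally; the toy labeled points stand in for the medical data this module prepares (a real pipeline would typically index labels with StringIndexer first):

```scala
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("ForestDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Toy labeled data standing in for the module's real-world medical dataset
val data = Seq(
  (0.0, Vectors.dense(0.0, 1.0)),
  (0.0, Vectors.dense(0.1, 0.8)),
  (1.0, Vectors.dense(1.0, 0.2)),
  (1.0, Vectors.dense(0.9, 0.1))
).toDF("label", "features")

// An ensemble of shallow decision trees
val rf = new RandomForestClassifier()
  .setNumTrees(10)
  .setMaxDepth(3)

val model = rf.fit(data)
println(model.toDebugString)  // prints every tree in the ensemble
```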

  • Introduction
  • Two methods of ingesting and preparing a CSV file for processing in Spark
  • Singular Value Decomposition (SVD) to reduce high-dimensionality in Spark (see the sketch after this list)
  • Principal Component Analysis (PCA) to pick the most effective latent factor for machine learning in Spark
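
A brief sketch of both recipes, SVD and PCA on a distributed RowMatrix from the RDD-based MLlib API, assuming Spark 2.x run locally; the 3×3 matrix is illustrative:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("SvdPcaDemo").master("local[*]").getOrCreate()
val sc = spark.sparkContext

// A small row matrix standing in for a high-dimensional dataset
val rows = sc.parallelize(Seq(
  Vectors.dense(1.0, 2.0, 3.0),
  Vectors.dense(4.0, 5.0, 6.0),
  Vectors.dense(7.0, 8.0, 10.0)
))
val mat = new RowMatrix(rows)

// Keep only the top 2 singular values/vectors
val svd = mat.computeSVD(2, computeU = true)
println(s"singular values: ${svd.s}")

// Project the rows onto the top 2 principal components
val pc = mat.computePrincipalComponents(2)
val projected = mat.multiply(pc)
println(s"reduced from ${mat.numCols()} to ${projected.numCols()} columns")
```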

  • Introduction
  • Doing term frequency with Spark – everything that counts
  • Displaying similar words with Spark using Word2Vec (see the sketch after this list)
  • Downloading a complete dump of Wikipedia for a real-life Spark ML project
  • Using Latent Semantic Analysis for text analytics with Spark 2.0
  • Topic modeling with Latent Dirichlet allocation in Spark 2.0
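
A minimal sketch of the Word2Vec recipe named above, assuming Spark 2.x run locally; the three-sentence corpus is made up, so the learned similarities are only illustrative:

```scala
import org.apache.spark.ml.feature.Word2Vec
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("Word2VecDemo").master("local[*]").getOrCreate()
import spark.implicits._

// A tiny corpus; each document is a pre-tokenized sequence of words
val docs = Seq(
  "spark makes machine learning scalable".split(" "),
  "spark runs machine learning pipelines".split(" "),
  "word embeddings capture word meaning".split(" ")
).map(Tuple1.apply).toDF("text")

// minCount = 1 so even once-seen words get a vector in this toy corpus
val word2Vec = new Word2Vec()
  .setInputCol("text")
  .setOutputCol("vector")
  .setVectorSize(10)
  .setMinCount(1)

val model = word2Vec.fit(docs)

// Words whose learned vectors lie closest to "spark"
model.findSynonyms("spark", 3).show()
```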

  • Introduction
  • Structured streaming for near real-time machine learning
  • Streaming DataFrames for real-time machine learning
  • Streaming Datasets for real-time machine learning
  • Streaming data and debugging with queueStream
  • Downloading and understanding the famous Iris data for unsupervised classification
  • Streaming KMeans for a real-time on-line classifier (see the sketch after this list)
  • Downloading wine quality data for streaming regression
  • Streaming linear regression for a real-time regression
  • Downloading Pima Diabetes data for supervised classification
  • Streaming logistic regression for an on-line classifier
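
A closing sketch that ties several of these recipes together, streaming KMeans fed by queueStream for debugging, assuming Spark 2.x run locally; the micro-batch of points is made up:

```scala
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scala.collection.mutable

val spark = SparkSession.builder().appName("StreamingKMeansDemo").master("local[*]").getOrCreate()
val ssc = new StreamingContext(spark.sparkContext, Seconds(1))

// queueStream lets us feed hand-made RDDs as micro-batches, ideal for debugging
val queue = mutable.Queue[RDD[Vector]]()
val trainingStream = ssc.queueStream(queue)

// On-line KMeans: the model updates as each micro-batch arrives
val model = new StreamingKMeans()
  .setK(2)
  .setDecayFactor(1.0)
  .setRandomCenters(2, 0.0)
model.trainOn(trainingStream)

// One hand-made micro-batch containing two obvious clusters
queue += spark.sparkContext.parallelize(Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
  Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 8.9)
))

ssc.start()
ssc.awaitTerminationOrTimeout(5000)  // let a few batches run, then stop
ssc.stop(stopSparkContext = true, stopGracefully = false)

model.latestModel().clusterCenters.foreach(println)
```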