

  • Course Skill Level:

  • Course Duration:

    3 days

  • Course Delivery Format:

    Live, instructor-led.

  • Course Category:

    AI / Machine Learning

  • Course Code:


Who should attend & recommended skills:

Those with Python experience and basic IT and Linux skills who want to develop, train, and optimize ML models


  • This course is geared toward experienced Python developers, analysts, and others with Python skills who want a guide to developing, training, and optimizing their machine learning models.
  • Skill level: foundation-level machine learning skills for intermediate-level team members. This is not a basic class.
  • IT skills: Basic to Intermediate (1-5 years’ experience)
  • Python programming: Basic to Intermediate (1-5 years’ experience)
  • Linux: Basic (1-2 years’ experience), including familiarity with command-line options such as ls, cd, cp, and su
  • Machine Learning: Not required
  • Advanced Math skills: Not required
  • Attendees without a Python programming background may follow the labs as demonstrations or team up with others to complete them.

About this course

Machine learning makes it possible to uncover hidden insights in your datasets by mastering a range of tools and techniques. This course guides you through doing just that in a compact manner. After a quick overview of what machine learning is all about, it jumps right into the core algorithms and demonstrates how they can be applied to real-world scenarios. From model evaluation to performance optimization, the course introduces you to best practices in machine learning. You will also look at more advanced topics, such as training neural networks and working with different kinds of data, including text, time-series, and sequential data. Advanced methods such as causal inference and deep Gaussian processes are covered as well. By the end of this course, you will be able to train fast, accurate machine learning models and use the material as a handy point of reference.

Skills acquired & topics covered

Working in a hands-on learning environment, led by our machine learning expert instructor, students will learn about and explore:
  • Your guide to learning efficient machine learning processes from scratch
  • Expert techniques and hacks for a variety of machine learning concepts
  • Writing effective code in R, Python, Scala, and Spark to solve all your machine learning problems
  • Getting a quick rundown of model selection, statistical modeling, and cross-validation
  • Choosing the best machine learning algorithm to solve your problem
  • Exploring kernel learning, neural networks, and time-series analysis
  • Training deep learning models and optimizing them for maximum performance
  • Briefly covering Bayesian techniques and sentiment analysis in your NLP solution
  • Implementing probabilistic graphical models and causal inferences
  • Measuring and optimizing the performance of your machine learning models

Course breakdown / modules

  • Statistical models
  • Learning curve
  • Curve fitting
  • Statistical modeling: the two cultures (Leo Breiman)
  • Training, development, and test data
  • Bias-variance trade-off
  • Regularization
  • Cross-validation and model selection
  • Model selection using cross-validation
  • 0.632 rule in bootstrapping
  • Model evaluation
  • Receiver operating characteristic curve
  • H-measure
  • Dimensionality reduction
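As a minimal sketch of the cross-validation and model-selection ideas in this module (from-scratch code, not course material; the helper name `k_fold_indices` is ours):

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Return k (train, test) index pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = k_fold_indices(10, k=5)
```

Each sample lands in exactly one test fold, so every data point is scored by a model that never saw it during training.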

  • Introduction to vectors
  • Linear separability
  • Hyperplanes
  • SVM
  • Kernel trick
  • Kernel types
  • SVM example and parameter optimization through grid search
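The kernel trick in this module can be illustrated by computing a Gaussian (RBF) kernel matrix directly, a hedged sketch assuming NumPy (the function name `rbf_kernel` is ours):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """K[i, j] = exp(-gamma * ||x_i - y_j||^2), the Gaussian (RBF) kernel."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X, gamma=0.5)
```

An SVM trained on `K` behaves as if the points were mapped into an infinite-dimensional feature space, without ever computing that mapping explicitly.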

  • What is ensemble learning?
  • Bagging
  • Decision tree
  • Random forest algorithm
  • Boosting
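The bagging idea behind random forests can be sketched in a few lines: resample the data with replacement, train one model per resample, and let the models vote (an illustrative from-scratch example; `bootstrap_sample` and `bagged_predict` are our names):

```python
import random

def bootstrap_sample(data, seed=0):
    """Sample len(data) items with replacement -- the bagging resample."""
    rng = random.Random(seed)
    return [rng.choice(data) for _ in data]

def bagged_predict(models, x):
    """Majority vote across base models (each a callable label predictor)."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)

# Three decision stumps with different thresholds stand in for the ensemble.
stumps = [lambda x, t=t: int(x > t) for t in (0.2, 0.5, 0.8)]
```

Averaging many high-variance learners this way is what makes a random forest more stable than any single decision tree.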

  • Neural networks
  • Network initialization
  • Overfitting
  • Prevention of overfitting in NNs
  • Vanishing gradient
  • Recurrent neural networks
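The vanishing-gradient item above can be made concrete with the sigmoid's derivative (a small illustrative calculation, not course code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# sigmoid'(z) peaks at 0.25 (at z = 0), so a chain of n sigmoid layers scales
# gradients by at most 0.25**n -- the vanishing-gradient problem.
max_grad_10_layers = 0.25 ** 10
```

This is why deep networks favor activations like ReLU, whose derivative does not shrink toward zero across many layers.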

  • Introduction to time series analysis
  • White noise
  • Random walk
  • Autoregression
  • Autocorrelation
  • Stationarity
  • AR model
  • Moving average model
  • Autoregressive integrated moving average
  • Optimization of parameters
  • Anomaly detection
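The AR-model material in this module boils down to regressing a series on its own lag; a minimal sketch, assuming NumPy and simulated data (`phi_true` and `phi_hat` are our names):

```python
import numpy as np

rng = np.random.default_rng(0)
phi_true = 0.7
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()  # simulate an AR(1) process

# Least-squares estimate of the AR(1) coefficient: regress x_t on x_{t-1}.
phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
```

With enough observations the estimate lands close to the true coefficient; full ARIMA fitting generalizes this to differenced series and moving-average terms.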

  • Text corpus
  • TF-IDF
  • Sentiment analysis
  • Topic modeling
  • Bayes' theorem
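TF-IDF, listed above, can be computed from scratch in a few lines (a hedged sketch with toy documents; the function name `tf_idf` is ours):

```python
import math

docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]

def tf_idf(term, doc, docs):
    """Term frequency times inverse document frequency for one term and doc."""
    tf = doc.count(term) / len(doc)
    df = sum(term in d for d in docs)          # document frequency
    idf = math.log(len(docs) / df)
    return tf * idf

w_the = tf_idf("the", docs[0], docs)  # appears in every document
w_cat = tf_idf("cat", docs[0], docs)
```

Words that occur in every document get weight zero, which is exactly the behavior that makes TF-IDF useful for sentiment analysis and topic modeling features.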

  • Association rules
  • Apriori algorithm
  • Frequent pattern growth

  • Key concepts
  • Bayes rule
  • Bayes network
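Bayes' rule from this module, worked through with hypothetical diagnostic-test numbers (the probabilities below are chosen purely for illustration):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_h = 0.01                # prior: P(disease)
p_e_given_h = 0.99        # sensitivity: P(positive | disease)
p_e_given_not_h = 0.05    # false-positive rate: P(positive | healthy)

# Law of total probability gives the evidence term P(positive).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e   # P(disease | positive)
```

Despite a 99%-sensitive test, the posterior is only about 17%, because the low prior dominates; this counterintuitive result is the classic motivation for Bayesian reasoning.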

  • Deep neural networks
  • Backward propagation
  • Forward propagation equation
  • Backward propagation equation
  • Parameters and hyperparameters
  • Bias initialization
  • Generative adversarial networks
  • Hinton's capsule network
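The forward- and backward-propagation equations in this module can be checked numerically on the smallest possible network, one linear neuron with squared loss (an illustrative sketch; `forward` and `backward` are our names):

```python
# y_hat = w * x, L = 0.5 * (y_hat - y)^2
def forward(w, x, y):
    """Forward propagation: compute the loss."""
    return 0.5 * (w * x - y) ** 2

def backward(w, x, y):
    """Backward propagation: analytic gradient dL/dw = (y_hat - y) * x."""
    return (w * x - y) * x

w, x, y = 2.0, 3.0, 5.0
eps = 1e-6
# Central-difference check that the backward pass matches the forward pass.
numeric = (forward(w + eps, x, y) - forward(w - eps, x, y)) / (2 * eps)
analytic = backward(w, x, y)
```

This gradient-check pattern scales to full networks and is the standard way to validate a hand-written backward pass.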

  • Granger causality
  • F-test
  • Graphical causal models
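Granger causality with its F-test, as listed above, amounts to comparing a restricted regression (own lags only) against an unrestricted one that adds the other series' lags. A from-scratch sketch on simulated data, assuming NumPy (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()  # x Granger-causes y

# Restricted model: y_t on y_{t-1}; unrestricted model adds the lag of x.
Y = y[1:]
Xr = np.column_stack([np.ones(n - 1), y[:-1]])
Xu = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])

def rss(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return float(((Y - X @ beta) ** 2).sum())

rss_r, rss_u = rss(Xr, Y), rss(Xu, Y)
# F-statistic for the single extra regressor (the lag of x).
F = (rss_r - rss_u) / (rss_u / (len(Y) - Xu.shape[1]))
```

A large F means the extra lag of `x` significantly reduces the residual error, the Granger-causality signature.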

  • Introduction
  • Kernel PCA
  • Independent component analysis
  • Compressed sensing
  • Self-organizing maps
  • Bayesian multiple imputation
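The dimensionality-reduction module opens with PCA, which this minimal NumPy sketch illustrates via the SVD of centered data (synthetic data, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data stretched along the first axis.
X = rng.normal(size=(200, 2)) * np.array([5.0, 1.0])

Xc = X - X.mean(axis=0)                       # center the data
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()               # variance explained per component
```

Kernel PCA, covered next in the module, applies the same eigendecomposition to a centered kernel matrix instead of the raw covariance, recovering nonlinear structure.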