
  • Course Skill Level: Foundational
  • Course Duration: 3 days
  • Course Delivery Format: Live, instructor-led
  • Course Category: AI / Machine Learning
  • Course Code: EMLCOOL21E09

Who should attend & recommended skills

  • This course is geared for developers who want to implement machine learning algorithms to build ensemble models using Keras, H2O, Scikit-Learn, Pandas, and more.
  • Skill level: foundational Ensemble Machine Learning Cookbook skills for intermediate-level team members. This is not a basic class.
  • Python: basic (1-2 years' experience)
  • Machine Learning: basic (1-2 years' experience)

About this course

Ensemble modeling is an approach used to improve the performance of machine learning models. It combines two or more machine learning algorithms, similar or dissimilar, to deliver predictive power superior to any single constituent model. This course will help you implement popular machine learning algorithms covering the main paradigms of ensemble machine learning: boosting, bagging, and stacking.

The Ensemble Machine Learning Cookbook starts by acquainting you with the basics of ensemble techniques and exploratory data analysis. You will then implement tasks based on statistical and machine learning algorithms to understand ensembles of multiple heterogeneous algorithms, without missing out on key supporting topics such as resampling methods. As you progress, you will deepen your understanding of bagging, boosting, stacking, and the Random Forest algorithm through real-world examples, and see how these ensemble methods use multiple models to improve results compared with a single model. In the concluding lessons, you will delve into advanced ensemble models using neural networks, natural language processing, and more, and implement applications such as fraud detection, text categorization, and sentiment analysis. By the end of this course, you will be able to harness ensemble techniques and the working mechanisms of machine learning algorithms to build intelligent models using individual recipes.
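As a minimal illustration of the idea above (this sketch is ours, not part of the course materials): combining a few dissimilar scikit-learn classifiers with hard voting, and comparing the ensemble to one of its constituent models on a built-in dataset.

```python
# Sketch: a hard-voting ensemble of dissimilar models vs. a single model.
# Dataset and model choices are illustrative, not the course's own.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# A single decision tree as the baseline.
single = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# Three dissimilar learners; each casts one vote, majority wins.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=5000)),
        ("dt", DecisionTreeClassifier(random_state=42)),
        ("nb", GaussianNB()),
    ],
    voting="hard",
).fit(X_train, y_train)

print(f"single tree: {single.score(X_test, y_test):.3f}")
print(f"ensemble:    {ensemble.score(X_test, y_test):.3f}")
```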

Skills acquired & topics covered

Working in a hands-on learning environment, led by our Ensemble Machine Learning Cookbook expert instructor, students will learn about and explore:

  • Popular machine learning algorithms using a recipe-based approach
  • Implementing boosting, bagging, and stacking ensemble methods to improve machine learning models
  • Applying ensembles to real-world problems and complex challenges in Kaggle competitions
  • Using machine learning algorithms for regression and classification problems
  • Ensemble techniques such as averaging, weighted averaging, and max-voting
  • Getting to grips with advanced ensemble methods such as bootstrapping, bagging, and stacking
  • Using Random Forest for tasks such as classification and regression
  • Implementing ensembles of homogeneous and heterogeneous machine learning algorithms
  • Learning and implementing various boosting techniques, such as AdaBoost, Gradient Boosting Machine, and XGBoost

Course breakdown / modules

  • Introduction
  • Data manipulation with Python
  • Analyzing, visualizing, and treating missing values
  • Exploratory data analysis

  • Introduction to ensemble machine learning
  • Max-voting
  • Averaging
  • Weighted averaging
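The three combination rules in this module can be sketched with plain NumPy. The per-model probabilities below are invented for illustration; only the combination logic matters.

```python
# Sketch of max-voting, averaging, and weighted averaging over the
# class-1 probabilities of three models on four samples (numbers invented).
import numpy as np

probs = np.array([
    [0.9, 0.4, 0.6, 0.2],   # model A
    [0.8, 0.6, 0.4, 0.1],   # model B
    [0.3, 0.7, 0.9, 0.4],   # model C
])

# Max-voting: each model casts a hard 0/1 vote; majority wins.
votes = (probs >= 0.5).astype(int)
max_vote = (votes.sum(axis=0) >= 2).astype(int)

# Averaging: mean probability across models, then threshold.
avg = probs.mean(axis=0)

# Weighted averaging: trust some models more (weights sum to 1).
w = np.array([0.5, 0.3, 0.2])
weighted = w @ probs

print(max_vote)           # prints [1 1 1 0]
print(avg.round(2))       # prints [0.67 0.57 0.63 0.23]
print(weighted.round(2))  # prints [0.75 0.52 0.6  0.21]
```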

  • Introduction to sampling
  • k-fold and leave-one-out cross-validation
  • Bootstrapping
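The two resampling schemes in this module can be sketched as follows; dataset and model choices are illustrative stand-ins, not the course's own recipes.

```python
# Sketch: k-fold cross-validation vs. bootstrap resampling.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# k-fold: every sample is held out exactly once across the k folds.
cv = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(DecisionTreeClassifier(random_state=42), X, y, cv=cv)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Bootstrapping: draw n rows WITH replacement; roughly 63% of distinct
# rows land in each resample, the rest form an "out-of-bag" set.
rng = np.random.default_rng(0)
idx = rng.integers(0, len(X), size=len(X))        # in-bag indices
oob = np.setdiff1d(np.arange(len(X)), idx)        # out-of-bag indices
print(f"unique in-bag rows: {len(np.unique(idx))}, out-of-bag rows: {len(oob)}")
```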

  • Multiple linear regression
  • Logistic regression
  • Naive Bayes
  • Decision trees
  • Support vector machines

  • Introduction
  • Bootstrap aggregation
  • Ensemble meta-estimators
  • Bagging regressors
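Bootstrap aggregation with scikit-learn's bagging meta-estimator can be sketched like this (dataset and hyperparameters are illustrative; the classifier shown has a `BaggingRegressor` counterpart for the regression case).

```python
# Sketch: bagging 50 trees, each fit on its own bootstrap resample.
from sklearn.datasets import load_wine
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# The default base learner is a decision tree; oob_score evaluates each
# tree on the rows its bootstrap sample never contained.
bag = BaggingClassifier(
    n_estimators=50,
    oob_score=True,
    random_state=42,
).fit(X_tr, y_tr)

print(f"OOB accuracy:  {bag.oob_score_:.3f}")
print(f"test accuracy: {bag.score(X_te, y_te):.3f}")
```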

  • Introduction to random forests
  • Implementing a random forest for predicting credit card defaults using scikit-learn
  • Implementing a random forest for predicting credit card defaults using H2O
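A sketch of the scikit-learn half of this module; a synthetic binary dataset stands in for the credit-card default data used in class.

```python
# Sketch: random forest on a synthetic binary classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Each tree sees a bootstrap sample AND a random subset of features at
# every split -- the two sources of randomness that decorrelate the trees.
rf = RandomForestClassifier(
    n_estimators=200, max_features="sqrt", random_state=42
).fit(X_tr, y_tr)

print(f"test accuracy: {rf.score(X_te, y_te):.3f}")
# Feature importances indicate which inputs drive the predictions.
print(f"most important feature index: {rf.feature_importances_.argmax()}")
```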

  • Introduction to boosting
  • Implementing AdaBoost for disease risk prediction using scikit-learn
  • Implementing a gradient boosting machine for disease risk prediction using scikit-learn
  • Implementing the extreme gradient boosting method for glass identification using XGBoost with scikit-learn
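The first two boosting flavours in this module can be sketched with scikit-learn alone (XGBoost itself is a separate package with a scikit-learn-compatible API); the dataset here is an illustrative stand-in for the disease-risk data used in class.

```python
# Sketch: AdaBoost vs. gradient boosting on a binary task.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# AdaBoost: reweights misclassified samples between rounds so later
# weak learners focus on the hard cases.
ada = AdaBoostClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)

# Gradient boosting: each new tree fits the residual errors of the
# ensemble built so far.
gbm = GradientBoostingClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)

print(f"AdaBoost: {ada.score(X_te, y_te):.3f}")
print(f"GBM:      {gbm.score(X_te, y_te):.3f}")
```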

  • Technical requirements
  • Understanding stacked generalization
  • Implementing stacked generalization by combining predictions
  • Implementing stacked generalization for campaign outcome prediction using H2O
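Stacked generalization can be sketched with scikit-learn's stacking meta-estimator (the campaign example in class uses H2O; this is a generic stand-in on a built-in dataset).

```python
# Sketch: level-0 learners feed out-of-fold predictions to a level-1
# meta-learner, which learns how to combine them.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("svc", SVC(probability=True, random_state=42)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions keep the meta-learner honest
).fit(X_tr, y_tr)

print(f"stacked accuracy: {stack.score(X_te, y_te):.3f}")
```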

  • Introduction
  • An ensemble of homogeneous models for energy prediction
  • An ensemble of homogeneous models for handwritten digit classification

  • Introduction
  • Predicting credit card defaulters using heterogeneous ensemble classifiers

  • Introduction
  • Spam filtering using an ensemble of heterogeneous algorithms
  • Sentiment analysis of movie reviews using an ensemble model

  • Introduction
  • An ensemble of homogeneous models to classify fashion products