  • Course Skill Level: Foundational

  • Course Duration: 2 days

  • Course Delivery Format: Live, instructor-led

  • Course Category: Big Data & Data Science

  • Course Code: FEENMEL21E09

Who should attend & recommended skills:

Those with basic development and Python experience

  • This course is geared for experienced developers, analysts, and other attendees with Python skills who want to sharpen the predictive power of their machine learning algorithms.
  • Skill level: Foundation-level feature engineering skills for intermediate-skilled team members. This is not a basic class.
  • Development: Basic (1-2 years’ experience)
  • Python: Basic (1-2 years’ experience)

About this course

Feature engineering is one of the most important steps in creating powerful machine learning systems. This course takes you through the entire feature-engineering journey to make your machine learning more systematic and effective. You will start by understanding your data: the success of your ML models often depends on how you leverage different feature types, such as continuous and categorical features. You will learn when to include a feature, when to omit it, and why, through error analysis and assessing the acceptability of your models. You will learn to convert a problem statement into useful new features, and to deliver features driven by business needs as well as mathematical insights. You will also learn how to use machine learning itself to automatically learn new features from your data. By the end of the course, you will be proficient in feature selection, feature learning, and feature optimization.

Skills acquired & topics covered

Working in a hands-on learning environment led by our expert feature engineering instructor, students will learn about and explore:

  • Designing, discovering, and creating dynamic, efficient features for machine learning applications
  • Understanding your data in depth and deriving meaningful insights from it
  • Grasping powerful feature-engineering techniques and building machine learning systems
  • Identifying and leveraging different feature types
  • Cleaning features in data to improve predictive power
  • Understanding why and how to perform feature selection and model error analysis
  • Leveraging domain knowledge to construct new features
  • Delivering features based on mathematical insights
  • Using machine-learning algorithms to construct features
  • Mastering feature engineering and optimization
  • Harnessing feature engineering for real-world applications through a structured case study

Course breakdown / modules

  • Motivating example – AI-powered communications
  • Why feature engineering matters
  • What is feature engineering?
  • Evaluation of machine learning algorithms and feature engineering procedures
  • Feature understanding – what’s in my dataset?
  • Feature improvement – cleaning datasets
  • Feature selection – say no to bad attributes
  • Feature construction – can we build it?
  • Feature transformation – enter math-man
  • Feature learning – using AI to better our AI
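
A minimal sketch of this evaluation idea, assuming scikit-learn and a built-in toy dataset (the course's own examples may differ): score a model with cross-validation before and after a single feature-engineering step.

    # A minimal sketch (assumed tooling: scikit-learn) of evaluating a model
    # before and after a simple feature-engineering step via cross-validation.
    from sklearn.datasets import load_wine
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_wine(return_X_y=True)

    # Baseline: raw features fed straight into the model.
    baseline = LogisticRegression(max_iter=5000)
    print("raw features:   ", cross_val_score(baseline, X, y, cv=5).mean())

    # Same model after a feature-engineering step (scaling).
    engineered = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
    print("scaled features:", cross_val_score(engineered, X, y, cv=5).mean())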

  • The structure, or lack thereof, of data
  • An example of unstructured data – server logs
  • Quantitative versus qualitative data
  • The four levels of data
  • Recap of the levels of data
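
A small sketch of the quantitative-versus-qualitative distinction covered in this module, assuming pandas and an invented toy table:

    # A minimal sketch (assumed tooling: pandas) separating quantitative (numeric)
    # columns from qualitative (categorical/text) columns in a toy dataset.
    import pandas as pd

    df = pd.DataFrame({
        "temperature_c": [21.5, 19.0, 23.1],      # quantitative (interval/ratio level)
        "satisfaction": ["low", "high", "high"],  # qualitative (ordinal level)
        "city": ["Paris", "Tokyo", "Paris"],      # qualitative (nominal level)
    })

    quantitative = df.select_dtypes(include="number").columns.tolist()
    qualitative = df.select_dtypes(exclude="number").columns.tolist()
    print("quantitative:", quantitative)
    print("qualitative: ", qualitative)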

  • Identifying missing values in data
  • Dealing with missing values in a dataset
  • Standardization and normalization
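
A minimal sketch of the imputation and scaling steps listed above, assuming pandas and scikit-learn with invented data:

    # A minimal sketch (assumed tooling: pandas + scikit-learn) of identifying
    # missing values, imputing them, and standardizing/normalizing the result.
    import numpy as np
    import pandas as pd
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import MinMaxScaler, StandardScaler

    # Hypothetical numeric data with gaps.
    df = pd.DataFrame({"age": [25, np.nan, 40, 31],
                       "income": [40_000, 52_000, np.nan, 61_000]})

    print(df.isnull().sum())                                      # missing values per column

    imputed = SimpleImputer(strategy="median").fit_transform(df)  # fill gaps with the median
    standardized = StandardScaler().fit_transform(imputed)        # zero mean, unit variance
    normalized = MinMaxScaler().fit_transform(imputed)            # rescale to [0, 1]

    print(standardized.round(2))
    print(normalized.round(2))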

  • Examining our dataset
  • Imputing categorical features
  • Encoding categorical variables
  • Extending numerical features
  • Text-specific feature construction
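
A minimal sketch of these construction steps, assuming pandas and scikit-learn; the columns and the text corpus are invented for illustration:

    # A minimal sketch (assumed tooling: pandas + scikit-learn) of imputing and
    # encoding a categorical column, extending numeric features, and building
    # simple text features.
    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.preprocessing import PolynomialFeatures

    df = pd.DataFrame({"city": ["Paris", None, "Tokyo", "Paris"],
                       "rooms": [2, 3, 1, 4]})

    # Impute the categorical column with its most frequent value, then one-hot encode it.
    df["city"] = df["city"].fillna(df["city"].mode()[0])
    encoded = pd.get_dummies(df, columns=["city"])

    # Extend numeric features with polynomial terms (rooms, rooms^2).
    extended = PolynomialFeatures(degree=2, include_bias=False).fit_transform(df[["rooms"]])

    # Text-specific construction: bag-of-words counts for a toy corpus.
    corpus = ["clean quiet flat", "quiet flat near park"]
    counts = CountVectorizer().fit_transform(corpus)

    print(encoded.head())
    print(extended)
    print(counts.toarray())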

  • Achieving better performance in feature engineering
  • Creating a baseline machine learning pipeline
  • The types of feature selection
  • Choosing the right feature selection method
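
A minimal sketch of a baseline pipeline with one univariate selection step, assuming scikit-learn; SelectKBest stands in for the several selection methods the module compares:

    # A minimal sketch (assumed tooling: scikit-learn) of a baseline pipeline
    # with a univariate feature-selection step in front of the model.
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)

    pipeline = make_pipeline(
        StandardScaler(),
        SelectKBest(score_func=f_classif, k=10),   # keep the 10 strongest features
        LogisticRegression(max_iter=5000),
    )
    print(cross_val_score(pipeline, X, y, cv=5).mean())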

  • Dimension reduction – feature transformations versus feature selection versus feature construction
  • Principal Component Analysis
  • Scikit-learn’s PCA
  • How centering and scaling data affects PCA
  • A deeper look into the principal components
  • Linear Discriminant Analysis
  • LDA versus PCA – iris dataset
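
A minimal sketch contrasting PCA and LDA on the iris dataset, assuming scikit-learn, with centering and scaling applied before PCA:

    # A minimal sketch (assumed tooling: scikit-learn) comparing PCA (unsupervised)
    # with LDA (supervised) as 2-D feature transformations of the iris data.
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.preprocessing import StandardScaler

    X, y = load_iris(return_X_y=True)

    # Center and scale first; scaling changes which directions carry the most variance.
    X_scaled = StandardScaler().fit_transform(X)

    pca = PCA(n_components=2).fit(X_scaled)
    X_pca = pca.transform(X_scaled)
    print("PCA explained variance ratio:", pca.explained_variance_ratio_)

    # LDA uses the class labels to find directions that separate the classes.
    lda = LinearDiscriminantAnalysis(n_components=2).fit(X_scaled, y)
    X_lda = lda.transform(X_scaled)
    print("PCA shape:", X_pca.shape, "LDA shape:", X_lda.shape)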

  • Parametric assumptions of data
  • Restricted Boltzmann Machines
  • The BernoulliRBM
  • Extracting RBM components from MNIST
  • Using RBMs in a machine learning pipeline
  • Learning text features – word vectorizations
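
A minimal sketch of an RBM-based feature-learning pipeline, assuming scikit-learn; the built-in digits dataset stands in for MNIST to keep the example self-contained:

    # A minimal sketch (assumed tooling: scikit-learn) of learning features with a
    # BernoulliRBM and feeding them to a classifier inside one pipeline.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MinMaxScaler

    X, y = load_digits(return_X_y=True)

    pipeline = make_pipeline(
        MinMaxScaler(),   # RBM expects inputs in [0, 1]
        BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10, random_state=0),
        LogisticRegression(max_iter=5000),
    )
    print(cross_val_score(pipeline, X, y, cv=3).mean())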

  • Case study 1 – facial recognition
  • Case study 2 – predicting topics of hotel reviews data