
  • Course Skill Level: Foundational

  • Course Duration: 2 days

  • Course Delivery Format: Live, instructor-led

  • Course Category: AI / Machine Learning

  • Course Code: LSCMPYL21E09

Who should attend & recommended skills

Python-experienced developers seeking to learn to build ML models.

  • This course is designed for developers who want to learn to build powerful machine learning models quickly and deploy large-scale predictive applications.
  • Skill level: foundation-level Large Scale Machine Learning with Python for intermediate-skilled team members. This is not a basic class.
  • Python: Basic (1–2 years' experience).

About this course

Large Python machine learning projects involve new problems associated with specialized machine learning architectures and designs that many data scientists have yet to tackle. But finding algorithms, and designing and building platforms, that deal with large sets of data is a growing need. Data scientists have to manage and maintain increasingly complex data projects, and with the rise of big data comes an increasing demand for computational and algorithmic efficiency. Large Scale Machine Learning with Python uncovers a new wave of machine learning algorithms that meet scalability demands together with high predictive accuracy. Dive into scalable machine learning and the three forms of scalability. Speed up algorithms that can be run on a desktop computer with tips on parallelization and memory allocation. Get to grips with new algorithms that are specifically designed for large projects and can handle bigger files, and learn about machine learning in big data environments. We will also cover the most effective machine learning techniques on a MapReduce framework in Hadoop and Spark in Python.
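The parallelization tip mentioned above can be sketched with joblib, a standard Python tool for spreading independent work across CPU cores (an illustrative example; the workload function here is hypothetical, not from the course materials):

```python
from joblib import Parallel, delayed

def fit_fold(seed):
    # Stand-in for an expensive, independent task such as fitting one
    # cross-validation fold or one ensemble member (hypothetical workload).
    total = sum(i * i for i in range(10_000))
    return seed + total % 7

# Run the independent tasks across all available CPU cores;
# n_jobs=-1 means "use every core".
results = Parallel(n_jobs=-1)(delayed(fit_fold)(s) for s in range(8))
```

Because the tasks share no state, the parallel results match a sequential loop exactly; only the wall-clock time changes.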
Unity Machine Learning Agents allows researchers and developers to create games and simulations in the Unity Editor, which serves as an environment where intelligent agents can be trained with machine learning methods through a simple-to-use Python API. This course takes you from the basics of reinforcement learning and Q-learning to building Deep Recurrent Q-Network agents that cooperate or compete in a multi-agent ecosystem. You will start with the basics of reinforcement learning and how to apply it to problems. Then you will learn how to build self-learning advanced neural networks with Python and Keras/TensorFlow. From there you move on to more advanced training scenarios where you will learn further innovative ways to train your network with A3C, imitation, and curriculum learning models. By the end of the course, you will have learned how to build more complex environments by building a cooperative and competitive multi-agent ecosystem.
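The Q-learning foundation described above can be illustrated with a tiny tabular example (a generic sketch, not the course's Unity code; the corridor environment, rewards, and hyperparameters here are invented for illustration):

```python
import random

# Minimal tabular Q-learning on a 1-D corridor: states 0..4, goal at state 4.
# Hyperparameters (alpha, gamma, epsilon) are illustrative, not from the course.
n_states, actions = 5, [-1, +1]          # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(2000):                    # episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy per non-terminal state after training.
policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
```

The same update rule, with the table replaced by a deep network, is the core of the Deep Q-Network agents the course builds.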

Skills acquired & topics covered

Working in a hands-on learning environment, led by our Machine Learning with Python expert instructor, students will learn about and explore:

  • Designing, engineering, and deploying scalable machine learning solutions with the power of Python
  • Taking command of Hadoop and Spark with Python for effective machine learning on a MapReduce framework
  • Building state-of-the-art models and developing personalized recommendations to perform machine learning at scale
  • Applying the most scalable machine learning algorithms
  • Working with modern state-of-the-art large-scale machine learning techniques
  • Increasing predictive accuracy with deep learning and scalable data-handling techniques
  • Improving your work by combining the MapReduce framework with Spark
  • Building powerful ensembles at scale
  • Using data streams to train linear and non-linear predictive models from extremely large datasets using a single machine
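The last point above, training a linear model from a data stream on a single machine, can be sketched with scikit-learn's `partial_fit` (an assumption about tooling; the simulated chunks stand in for a real streaming reader):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(random_state=0)

# Simulate a dataset far too big for memory by generating it in chunks;
# in practice each chunk would come from pandas.read_csv(..., chunksize=...)
# or a similar streaming reader.
for _ in range(200):
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
    model.partial_fit(X, y)          # one incremental pass over this chunk
```

Only one chunk is ever in memory, yet the model converges toward the underlying coefficients just as a batch fit would.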

Course breakdown / modules

  • Explaining scalability in detail
  • Python for large scale machine learning
  • Python packages

  • Out-of-core learning
  • Streaming data from sources
  • Stochastic learning
  • Feature management with data streams
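Feature management with data streams, covered in this module, hinges on stateless feature extraction; a common scikit-learn pattern (illustrative, with made-up toy documents) pairs `HashingVectorizer` with an incremental classifier:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# HashingVectorizer is stateless: it maps text to a fixed-width sparse
# vector without building a vocabulary, so it works on endless streams.
vectorizer = HashingVectorizer(n_features=2**18)
clf = SGDClassifier(random_state=0)

# Toy mini-batches of (documents, labels); 1 = spam, 0 = not spam.
stream = [
    (["cheap pills buy now", "limited offer click"], [1, 1]),
    (["meeting notes attached", "see agenda below"], [0, 0]),
    (["win money fast", "project status update"], [1, 0]),
]

for texts, labels in stream:                 # one mini-batch at a time
    X = vectorizer.transform(texts)
    clf.partial_fit(X, labels, classes=[0, 1])

pred = clf.predict(vectorizer.transform(["buy cheap pills"]))
```

Because the hashed feature space is fixed up front, new vocabulary arriving later in the stream never forces a refit.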

  • Datasets to experiment with on your own
  • Support Vector Machines
  • Feature selection by regularization
  • Including non-linearity in SGD
  • Hyperparameter tuning
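One standard way to include non-linearity in SGD, as this module's title suggests, is kernel approximation: random Fourier features turn a linear online learner into a non-linear one. A sketch with scikit-learn's `RBFSampler` (parameters chosen for illustration):

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# XOR-style data: not linearly separable in the original feature space.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# RBFSampler approximates an RBF kernel with random Fourier features,
# giving a linear SGD classifier a non-linear decision boundary
# while keeping its streaming-friendly cost.
model = make_pipeline(
    RBFSampler(gamma=1.0, n_components=300, random_state=0),
    SGDClassifier(max_iter=1000, random_state=0),
)
model.fit(X, y)
```

A plain linear SGD model cannot do better than chance on this data; with the approximated kernel it separates the quadrants well.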

  • The neural network architecture
  • Neural networks and regularization
  • Neural networks and hyperparameter optimization
  • Neural networks and decision boundaries
  • Deep learning at scale with H2O
  • Deep learning and unsupervised pretraining
  • Deep learning with theanets
  • Autoencoders and unsupervised learning
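The regularization and hyperparameter-optimization topics in this module can be previewed with scikit-learn's `MLPClassifier` as a generic stand-in (the course itself uses H2O and theanets; the network size and search grid here are illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

# alpha is the L2 regularization strength; searching over it is the
# simplest form of neural-network hyperparameter optimization.
search = GridSearchCV(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    param_grid={"alpha": [1e-4, 1e-2, 1.0]},
    cv=3,
)
search.fit(X, y)
```

Cross-validated search like this is what keeps the regularization choice honest: the best `alpha` is the one that generalizes, not the one that fits the training folds hardest.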

  • TensorFlow installation
  • Machine learning on TensorFlow with SkFlow
  • Keras and TensorFlow installation
  • Convolutional Neural Networks in TensorFlow through Keras
  • CNN with an incremental approach
  • GPU Computing
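Before reaching for Keras, it helps to see the single operation a CNN is built from; a plain NumPy sketch of a valid-mode 2-D convolution (illustrative only, the course works at the Keras/TensorFlow level):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as in
    most deep-learning libraries): slide the kernel over the image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny edge detector applied to an image with one vertical edge.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
kernel = np.array([[1.0, -1.0]])      # responds where intensity changes
edges = conv2d(image, kernel)
```

A convolutional layer learns many such kernels at once; GPU computing, the last topic in this module, exists largely to make this sliding-window arithmetic fast.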

  • Bootstrap aggregation
  • Random forest and extremely randomized forest
  • Fast parameter optimization with randomized search
  • CART and boosting
  • XGBoost
  • Out-of-core CART with H2O
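The "fast parameter optimization with randomized search" topic in this module can be sketched with scikit-learn's `RandomizedSearchCV` over a random forest (dataset and search ranges invented for illustration):

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# RandomizedSearchCV samples a fixed number of configurations instead of
# exhaustively trying every combination, which is what makes tuning
# tractable when grids grow large.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(20, 100),
        "max_depth": randint(2, 10),
    },
    n_iter=8,
    cv=3,
    random_state=0,
)
search.fit(X, y)
```

Eight sampled configurations cost a fraction of the full grid, yet in practice land close to the grid's best score.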

  • Unsupervised methods
  • Feature decomposition PCA
  • PCA with H2O
  • Clustering K-means
  • K-means with H2O
  • LDA
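The PCA and K-means topics in this module combine naturally: reduce dimensionality first, then cluster. A scikit-learn sketch on synthetic blobs (the data is invented; the course additionally shows H2O versions of both steps):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Three well-separated Gaussian blobs in 10 dimensions, 50 points each.
rng = np.random.default_rng(0)
centers = rng.normal(scale=10, size=(3, 10))
X = np.vstack([c + rng.normal(size=(50, 10)) for c in centers])

# PCA compresses the data before clustering: a common pattern when the
# raw feature space is too wide to cluster efficiently.
X2 = PCA(n_components=2, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

# How many distinct cluster labels each true blob received (1 = pure).
purity = [len(set(labels[i * 50:(i + 1) * 50])) for i in range(3)]
```

When the blobs are well separated, the two leading principal components retain the between-cluster structure and K-means recovers the groups exactly.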

  • From a standalone machine to a bunch of nodes
  • Setting up the VM
  • The Hadoop ecosystem
  • Spark
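The map-shuffle-reduce contract that Hadoop and Spark implement can be sketched in plain Python with the classic word-count example (a conceptual model only; the course works on a real cluster VM):

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit (key, value) pairs — here (word, 1) per word.
    return [(word, 1) for word in line.split()]

def reducer(key, values):
    # Reduce phase: combine all values that share a key.
    return key, sum(values)

lines = ["big data big models", "big data"]

# Shuffle phase: group the mapped pairs by key, as Hadoop does
# between the map and reduce stages.
groups = defaultdict(list)
for key, value in chain.from_iterable(mapper(line) for line in lines):
    groups[key].append(value)

counts = dict(reducer(k, v) for k, v in groups.items())
```

Because the mapper sees one line at a time and the reducer sees one key at a time, both phases parallelize across nodes without coordination, which is the whole point of the framework.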

  • Setting up the VM for this chapter
  • Sharing variables across cluster nodes
  • Data preprocessing in Spark
  • Machine learning with Spark
  • Summary