Let us help you find the training program you are looking for.

If you can't find what you are looking for, contact us and we'll help you find it. We have over 800 training programs to choose from.


  • Course Skill Level:

  • Course Duration:

    3 days

  • Course Delivery Format:

    Live, instructor-led.

  • Course Category:

    Big Data & Data Science

  • Course Code:


Who should attend & recommended skills:

Those with basic IT & Linux skills.

  • This course is geared for attendees with machine learning on Google Cloud Platform experience who want to design and use machine learning models for music generation with Magenta and make those models interact with existing music creation tools. It also suits those who wish to become well-versed in Magenta and develop the skills needed to use ML models to generate music in their own style.
  • Skill level: foundation-level Magenta skills for intermediate-skilled team members. This is not a basic class.
  • IT Skills and Machine Learning knowledge: Basic to Intermediate (1-5 years’ experience)
  • Linux: Basic (1-2 years’ experience), including familiarity with commands such as ls, cd, cp, and su

About this course

The importance of machine learning (ML) in art is growing at a rapid pace due to recent advancements in the field, and Magenta is at the forefront of this innovation. With this course, you’ll follow a hands-on approach to using ML models for music generation, learning how to integrate them into an existing music production workflow. Complete with practical examples and explanations of the theoretical background required to understand the underlying technologies, this course is the perfect starting point to begin exploring music generation.

The course will help you learn how to use the models in Magenta for generating percussion sequences, monophonic and polyphonic melodies in MIDI, and instrument sounds in raw audio. Through practical examples and in-depth explanations, you’ll understand ML models such as RNNs, VAEs, and GANs. Using this knowledge, you’ll create and train your own models for advanced music generation use cases, along with preparing new datasets. Finally, you’ll get to grips with integrating Magenta with other technologies, such as digital audio workstations (DAWs), and using Magenta.js to distribute music generation apps in the browser.

Skills acquired & topics covered

  • How machine learning, deep learning, and reinforcement learning are used in music generation
  • Generating new content by manipulating the source data using Magenta utilities, and training machine learning models with it
  • Exploring various Magenta projects such as Magenta Studio, MusicVAE, and NSynth
  • Using RNN models in Magenta to generate MIDI percussion, and monophonic and polyphonic sequences
  • Using WaveNet and GAN models to generate instrument notes in the form of raw audio
  • Employing Variational Autoencoder models like MusicVAE and GrooVAE to sample, interpolate, and humanize existing sequences
  • Preparing and creating your own dataset for specific styles and instruments
  • Training your network on your own datasets and fixing problems that arise when training networks
  • Applying MIDI to synchronize Magenta with existing music production tools such as DAWs

Course breakdown / modules

  • Technical requirements
  • Overview of generative art
  • New techniques with machine learning
  • Google’s Magenta and TensorFlow in music generation
  • Installing Magenta and Magenta for GPU
  • Installing the music software and synthesizers
  • Installing the code editing software
  • Generating a basic MIDI file
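In the course itself, basic MIDI files are generated through Magenta's tooling; as a library-free illustration of what such a "basic MIDI file" actually contains, the sketch below hand-assembles a one-note, single-track (format 0) MIDI file using only the Python standard library. The byte layout follows the Standard MIDI File format (header chunk, track chunk, variable-length delta times); pitch 60 is middle C.

```python
import struct

def vlq(n):
    """Encode n as a MIDI variable-length quantity."""
    out = bytearray([n & 0x7F])
    n >>= 7
    while n:
        out.insert(0, 0x80 | (n & 0x7F))
        n >>= 7
    return bytes(out)

def basic_midi(pitch=60, ticks_per_beat=480):
    """A one-note, format-0 MIDI file as raw bytes."""
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    events = (
        vlq(0) + bytes([0x90, pitch, 0x40])                 # note on (middle C)
        + vlq(ticks_per_beat) + bytes([0x80, pitch, 0x40])  # note off, one beat later
        + vlq(0) + bytes([0xFF, 0x2F, 0x00])                # end-of-track meta event
    )
    track = b"MTrk" + struct.pack(">I", len(events)) + events
    return header + track

with open("basic.mid", "wb") as f:
    f.write(basic_midi())
```

The resulting basic.mid opens in any DAW or MIDI player; Magenta's own note_seq library produces the same kind of file from a NoteSequence object without manual byte handling.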

  • Technical requirements
  • The significance of RNNs in music generation
  • Using the Drums RNN on the command line
  • Using the Drums RNN in Python

  • Technical requirements
  • LSTM for long-term dependencies
  • Generating melodies with the Melody RNN
  • Generating polyphony with the Polyphony RNN and Performance RNN
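Melody RNN-style models consume a monophonic melody as a sequence of integer events, one per quantized time step. The following pure-Python sketch illustrates that encoding idea; the sentinel constants mirror the spirit of Magenta's melody representation but are illustrative, not its exact API.

```python
# One integer event per quantized step (illustrative constants):
NO_EVENT = -2   # sustain the previous state (held note or silence)
NOTE_OFF = -1   # release the sounding note
# 0-127: start a new note at that MIDI pitch

def to_events(notes):
    """notes: list of (midi_pitch, duration_in_steps) pairs."""
    events = []
    for pitch, steps in notes:
        events.append(pitch)                     # note-on event
        events.extend([NO_EVENT] * (steps - 1))  # sustain for remaining steps
    events.append(NOTE_OFF)                      # release the last note
    return events

# C-major arpeggio (C4, E4, G4), each note held for two steps
melody = to_events([(60, 2), (64, 2), (67, 2)])
```

Representing a melody as a flat event sequence like this is what lets an RNN treat music generation as next-token prediction, exactly as with text.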

  • Technical requirements
  • Continuous latent space in VAEs
  • Score transformation with MusicVAE and GrooVAE
  • Understanding TensorFlow code
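MusicVAE's continuous latent space is what makes score transformation possible: two sequences are encoded to latent vectors, points between them are computed, and each point is decoded back to a musically plausible sequence. The encoder and decoder are the model's job; the interpolation step itself is simple arithmetic, sketched here in plain Python with toy 3-D vectors standing in for real latent codes.

```python
def lerp(z1, z2, t):
    """Pointwise linear interpolation between two latent vectors."""
    return [a + t * (b - a) for a, b in zip(z1, z2)]

def interpolate(z1, z2, steps):
    """steps evenly spaced latent vectors from z1 to z2, endpoints included."""
    return [lerp(z1, z2, i / (steps - 1)) for i in range(steps)]

# Toy 3-D latent codes standing in for two encoded melodies
path = interpolate([0.0, 0.0, 0.0], [1.0, 2.0, -1.0], 5)
```

Decoding each vector in `path` with a trained MusicVAE would yield a smooth musical morph from the first melody to the second.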

  • Technical requirements
  • Learning about WaveNet and temporal structures for music
  • Neural audio synthesis with NSynth
  • Using GANSynth as a generative instrument

  • Technical requirements
  • Looking at existing datasets
  • Building a dance music dataset
  • Building a jazz dataset
  • Preparing the data using pipelines
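Magenta prepares training data with composable pipelines, where each stage transforms a stream of examples before handing it to the next. As a conceptual sketch only (the field names and stages below are invented for illustration, not Magenta's pipeline API), here is the same composition idea applied to filtering tracks for a style-specific dataset:

```python
# Toy version of the pipeline idea: each stage maps a list of examples to a
# list of examples, and stages are composed in order.
def compose(*stages):
    def run(examples):
        for stage in stages:
            examples = stage(examples)
        return examples
    return run

# Hypothetical stages for building a style-specific dataset
def keep_genre(genre):
    return lambda xs: [x for x in xs if x["genre"] == genre]

def keep_4_4(xs):
    return [x for x in xs if x["time_signature"] == (4, 4)]

tracks = [
    {"name": "a", "genre": "jazz", "time_signature": (4, 4)},
    {"name": "b", "genre": "dance", "time_signature": (4, 4)},
    {"name": "c", "genre": "jazz", "time_signature": (3, 4)},
]
jazz_dataset = compose(keep_genre("jazz"), keep_4_4)(tracks)
```

In the real pipelines covered in this module, the examples are NoteSequence protocol buffers and the stages handle quantization, splitting, and encoding for a specific model.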

  • Technical requirements
  • Choosing the model and configuration
  • Training and tuning a model
  • Using Google Cloud Platform
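Tuning a model largely means sweeping hyperparameters such as the learning rate and keeping the setting that trains best. The toy sketch below (a stand-in for real model training, minimizing a one-dimensional quadratic by gradient descent) shows why the sweep matters: too small a rate converges slowly, too large a rate diverges.

```python
def train(lr, steps=100, w=0.0):
    """Gradient descent on the toy loss (w - 3)^2; optimum is w = 3."""
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad
        if abs(grad) > 1e6:   # crude divergence check
            break
    return w

# Tuning: sweep learning rates and keep the one that ends closest to optimum
best_lr = min([0.001, 0.01, 0.1, 1.5], key=lambda lr: abs(train(lr) - 3))
```

The same sweep-and-compare loop applies when training Magenta models, just with real losses, checkpoints, and (as the module covers) GPU machines on Google Cloud Platform.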

  • Technical requirements
  • Introducing Magenta.js and TensorFlow.js
  • Creating a Magenta.js web application
  • Making Magenta.js interact with other apps

  • Technical requirements
  • Sending MIDI to a DAW or synthesizer
  • Looping the generated MIDI
  • Using Magenta as a standalone application with Magenta Studio
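Actually sending MIDI to a DAW or synthesizer requires a real-time MIDI library and an open port, which this module covers; the timing arithmetic behind looping a generated clip, however, can be sketched standalone. The helper below (a hypothetical function, not part of Magenta) converts beat positions into absolute note-on times in seconds across repeated loops at a given tempo.

```python
def loop_schedule(note_starts_beats, clip_beats, loops, bpm):
    """Absolute start times (seconds) for each note across repeated loops."""
    sec_per_beat = 60.0 / bpm
    times = []
    for i in range(loops):
        offset = i * clip_beats  # each loop shifts by the clip length
        times.extend((offset + b) * sec_per_beat for b in note_starts_beats)
    return times

# A one-bar clip (4 beats) with notes on beats 0 and 2, looped twice at 120 BPM
times = loop_schedule([0, 2], 4, 2, 120)
```

Feeding such a schedule to a MIDI output keeps the generated clip locked to the DAW's tempo grid while it repeats.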