Deep neural networks are at the heart of spectacular breakthroughs in the processing of music audio signals. On 12 February 2020, Geoffroy Peeters, professor at Télécom Paris, was invited to the Collège de France, as part of the Mathematics and Digital Science/Data Science Chair held by Stéphane Mallat, to give a seminar entitled "Deep neural networks in music audio signals".

The seminar will present, in three stages, the characteristics of music audio signals and the need to adapt deep neural networks to modeling these signals.

Geoffroy Peeters will first review certain aspects of audio signal processing and then show how these fit into a classic machine-learning approach, in which hand-crafted features are built as inputs to classification algorithms. He will then address the way in which deep neural networks (especially convolutional ones) make feature learning possible. Lastly, Geoffroy Peeters will cover the various learning paradigms used in the music and audio field: classification, encoder-decoder (source separation, latent space constraints), metric learning (triplet loss) and semi-supervised learning.
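To give a concrete flavour of the metric-learning paradigm mentioned above, here is a minimal, hypothetical sketch (not taken from the seminar) of a triplet loss applied to embeddings of spectrogram-like inputs. The PyTorch network, input shapes and margin value are illustrative assumptions, not the speaker's actual setup.

```python
# Minimal metric-learning sketch: a tiny convolutional encoder maps
# mel-spectrogram-like tensors to embeddings, and a triplet loss pulls
# anchor/positive pairs together while pushing negatives apart.
# All shapes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class SmallAudioEmbedder(nn.Module):
    """Tiny CNN mapping a (1, n_mels, n_frames) spectrogram to a unit-norm embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (batch, 32, 1, 1)
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return nn.functional.normalize(self.fc(h), dim=1)

# Dummy batches standing in for mel-spectrograms:
# anchor and positive come from the same class, negative from a different one.
batch, n_mels, n_frames = 8, 64, 128
anchor   = torch.randn(batch, 1, n_mels, n_frames)
positive = torch.randn(batch, 1, n_mels, n_frames)
negative = torch.randn(batch, 1, n_mels, n_frames)

model = SmallAudioEmbedder()
criterion = nn.TripletMarginLoss(margin=0.2)  # margin is an arbitrary example value
loss = criterion(model(anchor), model(positive), model(negative))
loss.backward()
print(f"triplet loss: {loss.item():.4f}")
```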

Click this link to watch the video