
Generating Music in the Raw Audio Domain and Probabilistic Symmetry

Hosted by Martin Goodson

Details

Please note that photo ID will be required. Attendees should make sure their Meetup profile name includes their full name to guarantee entry.

Agenda:

  • 18:30: Doors open, pizza, beer, networking
  • 19:00: First talk
  • 19:45: Break & networking
  • 20:00: Second talk
  • 20:45: Close

Sponsors
Man AHL: At Man AHL, we mix machine learning, computer science and engineering with terabytes of data to invest billions of dollars every day.
Evolution AI: Build a state-of-the-art NLP pipeline in seconds.

  • Generating music in the raw audio domain (Sander Dieleman)

Abstract: Realistic music generation is a challenging task. When machine learning is used to build generative models of music, high-level representations such as scores, piano rolls or MIDI sequences are typically used; these abstract away the idiosyncrasies of a particular performance. But those nuances are very important for our perception of musicality and realism, so we embark on modelling music in the raw audio domain. I will discuss some of the advantages and disadvantages of this approach, and the challenges it entails.
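To make the raw-audio framing concrete: below is a minimal sketch of an autoregressive model in the spirit of WaveNet, where the waveform is treated as a sequence of quantised samples and stacked dilated causal convolutions predict each sample from the preceding ones. This is an illustrative toy rather than the speaker's actual model; the 256-level quantisation, layer sizes and PyTorch framing are all assumptions.

    # Illustrative toy in the spirit of WaveNet; sizes are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConv1d(nn.Conv1d):
        """1D convolution that sees only past samples (left padding only)."""
        def __init__(self, channels, kernel_size, dilation):
            super().__init__(channels, channels, kernel_size, dilation=dilation)
            self.left_pad = (kernel_size - 1) * dilation

        def forward(self, x):
            return super().forward(F.pad(x, (self.left_pad, 0)))

    class TinyWaveNet(nn.Module):
        def __init__(self, n_classes=256, channels=64, n_layers=8):
            super().__init__()
            self.embed = nn.Embedding(n_classes, channels)
            # Doubling dilations grow the receptive field exponentially with depth.
            self.layers = nn.ModuleList(
                CausalConv1d(channels, kernel_size=2, dilation=2 ** i)
                for i in range(n_layers))
            self.out = nn.Conv1d(channels, n_classes, kernel_size=1)

        def forward(self, x):                    # x: (batch, time) integer samples
            h = self.embed(x).transpose(1, 2)    # -> (batch, channels, time)
            for layer in self.layers:
                h = h + torch.relu(layer(h))     # residual connection
            return self.out(h)                   # logits for the next sample

    # Train by predicting sample t from samples < t (teacher forcing).
    audio = torch.randint(0, 256, (1, 1024))     # stand-in for a quantised waveform
    logits = TinyWaveNet()(audio[:, :-1])
    loss = F.cross_entropy(logits, audio[:, 1:])

The point of the sketch is the trade-off the abstract raises: modelling every raw sample captures performance nuances that symbolic representations discard, but at the cost of very long sequences and large receptive fields.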

Bio: Sander Dieleman is a Research Scientist at DeepMind in London, UK, where he has worked on the development of AlphaGo and WaveNet. He was previously a PhD student at Ghent University, where he conducted research on feature learning and deep learning techniques for learning hierarchical representations of musical audio signals. During his PhD, he also developed the deep learning library Lasagne, won a solo gold medal in Kaggle's "Galaxy Zoo" competition, and won a team gold medal in the first National Data Science Bowl. In the summer of 2014, he interned at Spotify in New York, where he worked on implementing audio-based music recommendation using deep learning on an industrial scale.

  • Probabilistic symmetry and invariant neural networks (Benjamin Bloem-Reddy)

Abstract: In an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings, much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures. We treat the neural network input and output as random variables, and consider group invariance from the perspective of probabilistic symmetry, a field with a long history dating back at least to Laplace. Drawing on tools from probability and statistics, we establish a link between functional and probabilistic symmetry, and obtain functional representations of probability distributions that encode symmetry under the action of a compact group. Those representations characterize the structure of neural networks that can be used to represent such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We develop the details of the general program for exchangeable sequences and arrays, recovering a number of recent examples as special cases.

This is joint work with Yee Whye Teh: https://arxiv.org/abs/1901.06082
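As a concrete instance of the kind of architecture the talk's program characterises, here is a minimal sketch of a permutation-invariant network for exchangeable sequences, in the sum-pooling style of Deep Sets (one of the recent examples the paper recovers as a special case). The layer sizes and PyTorch framing are illustrative assumptions, not details from the paper.

    # Minimal sketch: f(x_1..x_n) = rho(sum_i phi(x_i)), invariant to input order.
    import torch
    import torch.nn as nn

    class InvariantNet(nn.Module):
        def __init__(self, in_dim=2, hidden=32, out_dim=1):
            super().__init__()
            self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
            self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, out_dim))

        def forward(self, x):                          # x: (batch, n, in_dim)
            return self.rho(self.phi(x).sum(dim=1))    # sum-pool over the n axis

    net = InvariantNet()
    x = torch.randn(4, 10, 2)
    perm = torch.randperm(10)
    # Reordering the exchangeable sequence leaves the output unchanged
    # (up to floating-point rounding in the summation).
    assert torch.allclose(net(x), net(x[:, perm]), atol=1e-5)

Sum-pooling is what enforces the invariance: because addition is commutative, any permutation of the rows of x yields the same pooled representation, which is the functional counterpart of the probabilistic exchangeability discussed in the abstract.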

Bio: Benjamin Bloem-Reddy is a postdoctoral researcher in the Computational Statistics and Machine Learning group, led by Yee Whye Teh, at the University of Oxford. Ben obtained his PhD in Statistics from Columbia University. His research has focused on probabilistic and statistical methodology for analysis of discrete data like graphs, partitions, and permutations. Natural applications of these ideas arise in, for example, modeling networks or text, and in matrix factorization. Recently, he has also worked on problems involving probabilistic symmetry in neural networks and in probabilistic programming. In summer 2019, he will join the faculty in the Department of Statistics at the University of British Columbia.

London Machine Learning Meetup