Corpora for MIR Research & Computational Music Theory

This is a past event

58 people attended

CRCLR

Rollbergstrasse 26, 12053 Neukölln · Berlin

How to find us

The entrance of the venue is up the street at the crossing between Rollbergstraße and Am Sudhaus.

Details

The fifth Berlin MIR meetup is supported by Aitokaiku: http://www.aitokaiku.com/

Our speakers will be:

Xavier Serra: Creating and maintaining corpora for MIR research

One of the biggest bottlenecks for the advancement of research in MIR is the lack of very large, openly available corpora of music data with which to train and test machine learning models. In this presentation, I want to talk about this issue and about the initiatives my group is involved in to tackle it. I will cover freesound.org and the recent effort to create the FreesoundDataset (https://datasets.freesound.org). I will also describe our collaboration with MusicBrainz to create AcousticBrainz.org, a framework for crowdsourcing acoustic information about music tracks. Finally, I will talk about our research in CompMusic to develop Dunya (http://dunya.compmusic.upf.edu), which comprises corpora of several music repertoires plus software tools for musicological research.
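As a rough illustration of the kind of crowdsourced data AcousticBrainz exposes (this sketch is not from the talk), the snippet below fetches the low-level acoustic descriptors for one recording over HTTP; the /api/v1/<mbid>/low-level endpoint path and the placeholder MusicBrainz ID are assumptions made for the example.

import requests

# Minimal sketch: query AcousticBrainz for crowdsourced acoustic descriptors.
# The endpoint path and the recording MBID below are illustrative assumptions.
MBID = "00000000-0000-0000-0000-000000000000"  # hypothetical recording MBID
url = "https://acousticbrainz.org/api/v1/{}/low-level".format(MBID)

response = requests.get(url, timeout=10)
response.raise_for_status()
features = response.json()

# The low-level document groups descriptors by category, e.g. rhythm and tonal.
print(features["rhythm"]["bpm"])                                     # estimated tempo
print(features["tonal"]["key_key"], features["tonal"]["key_scale"])  # estimated key and mode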

Xavier Serra is Associate Professor in the Department of Information and Communication Technologies and Director of the Music Technology Group at the Universitat Pompeu Fabra in Barcelona. After a multidisciplinary academic education, he obtained a PhD in Computer Music from Stanford University in 1989 with a dissertation on the spectral processing of musical sounds that is considered a key reference in the field. His research interests cover the computational analysis, description, and synthesis of sound and music signals, balancing basic and applied research and drawing on both scientific/technological and humanistic/artistic disciplines. Dr. Serra is very active in the fields of Audio Signal Processing, Sound and Music Computing, Music Information Retrieval, and Computational Musicology at the local and international levels, serving on the editorial boards of a number of journals and conferences and giving lectures on current and future challenges in these fields. He was awarded an Advanced Grant from the European Research Council to carry out the project CompMusic, aimed at promoting multicultural approaches in music information research. More info: https://www.upf.edu/web/xavier-serra

Ryan Groves: Computational Music Theory: Working with Symbolic Data

This talk will give an overview of methods that extract information from symbolic representations of music (such as digital scores). The tasks discussed will include pattern recognition (e.g., melodic phrase detection), harmonic sequence analysis, and melodic reduction. A small sketch of this kind of workflow follows below.
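To make the symbolic setting concrete (the talk does not prescribe a toolkit), here is a minimal Python sketch using the music21 library: it loads a score, lists the melodic intervals of the top voice as a crude pattern feature, and collapses the texture into labelled chords as a first step toward harmonic analysis. The library and the chosen chorale are illustrative assumptions, not part of the talk.

from music21 import corpus, interval

# Minimal sketch of working with symbolic scores; music21 and the chosen
# chorale are illustrative choices only.
score = corpus.parse('bach/bwv66.6')  # a Bach chorale bundled with music21

# Melodic view: successive intervals in the top voice.
soprano = list(score.parts[0].flatten().notes)
melodic_intervals = [interval.Interval(a, b).name
                     for a, b in zip(soprano[:-1], soprano[1:])]
print(melodic_intervals[:10])

# Harmonic view: collapse all voices into chords and label them.
chords = score.chordify().flatten().getElementsByClass('Chord')
print([c.pitchedCommonName for c in chords[:5]])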

Ryan received his B.S. in Computer Science from UCLA and went on to complete a Master's in Music Technology at McGill University. As the former Director of R&D for Zya, he developed Ditty, a musical messenger app that automatically sings your texts; Ditty was named Best Music App of 2015 at the Appy Awards. In 2016, his research in computational music theory won the Best Paper award at ISMIR, the most prominent music technology conference. With his new venture, Melodrive, he and his co-founding team of two PhDs in Music and AI are looking to build the world's best artificially intelligent composer and to change the way music is experienced in video games and virtual environments.