Music information retrieval (MIR) is an interdisciplinary field bridging the domains of statistics, signal processing, machine learning, musicology, biology, and more. MIR algorithms allow a computer to make sense of audio data, bridging the semantic gap between low-level audio signals and high-level musical information such as tempo, key, pitch, instrumentation, chord progression, genre, and song structure. In this talk, we will survey common research problems in MIR, including music fingerprinting, transcription, classification, and recommendation, along with recently proposed solutions from the research literature. The talk will contain both a high-level overview and concrete examples of implementing MIR algorithms in Python using the IPython notebook. We will discuss practical elements of prototyping an MIR system, including data visualization, open-source tools, and evaluation.
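To give a flavor of the kind of Python prototyping the talk covers, here is a minimal toy sketch of one low-level MIR task, pitch estimation. It is not taken from the talk itself: it synthesizes a pure tone with NumPy and recovers its fundamental frequency from the peak of the magnitude spectrum. Real MIR systems use far more robust methods and dedicated audio libraries.

```python
import numpy as np

# Toy pitch estimation: synthesize one second of a 440 Hz (A4) sine tone,
# then locate the peak of its magnitude spectrum.
sr = 22050                            # sample rate in Hz (assumed for this sketch)
t = np.arange(sr) / sr                # one second of time samples
signal = np.sin(2 * np.pi * 440 * t)  # pure A4 tone

# With N = sr samples, the FFT bins are spaced exactly 1 Hz apart,
# so the argmax of the spectrum lands on the fundamental directly.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
estimated_pitch = freqs[np.argmax(spectrum)]
print(round(estimated_pitch))  # → 440
```

For real recordings, a single spectral peak is easily fooled by strong harmonics, which is one reason MIR research favors methods such as autocorrelation or probabilistic pitch trackers.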
Generously sponsored by O'Reilly Media. Special discount to Strata Conference + Hadoop World in NYC this October for SF Data Science Members! Visit http://oreil.ly/UGSHW14 for details.
About Steve Tjoa
Steve Tjoa (http://stevetjoa.com) is a researcher and engineer in the areas of signal processing and machine learning for music information retrieval (MIR). He currently works on the MIR team at Humtap in San Francisco. Before that, he worked on content-based audio recognition and recommendation as an NSF-sponsored postdoctoral fellow at iZotope and Imagine Research (acquired by iZotope). He has also worked as a consultant in the areas of audio/image signal processing, machine learning, and information retrieval.
Since 2011, he has co-instructed the annual summer workshop on MIR at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University.