It is incredible how much information can be contained in a time-series signal. One of the most impressive and beautiful examples is the harmonic and rhythmic information stored in a .wav or .mp3 file of a piece of music. Our ears and brains can readily decipher which instruments, chords, and notes are present in a piece, as well as its beat, tempo, and other rhythmic attributes. Creating software that mimics this ability, however, is a difficult challenge.
Jarod Hart [http://people.ku.edu/~jvhart/index.html] will discuss his time-frequency analysis of music project, which aims to formulate algorithms that characterize the harmonic and rhythmic content of music based only on the time-series information in .wav or .mp3 files. In this presentation he will detail some partial results of this music analysis project.
He will discuss techniques for computing a time-frequency decomposition of piano music that provides rich harmonic information and visualizations of the signal. Additionally, he will detail an algorithm that decomposes an input piano music signal into a "piano-key" domain, describing the intensity of the harmonics associated with each key on the piano at each time in the signal. The main data science and mathematical techniques behind these algorithms are the short-time Fourier transform, bag-of-features dictionary constructions, and sparse representation via the LASSO optimization problem.
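To give a flavor of the pipeline described above, here is a minimal sketch of a piano-key decomposition: take the short-time Fourier transform of a signal, build a dictionary with one harmonic-template "atom" per piano key, and solve a LASSO problem per time frame to get sparse key activations. This is an illustration, not the speaker's actual algorithm; the harmonic weighting (1/h), the dictionary construction, and all parameter choices here are assumptions for the sake of a runnable example.

```python
import numpy as np
from scipy.signal import stft
from sklearn.linear_model import Lasso

FS = 8000       # sample rate in Hz (assumed for this sketch)
N_KEYS = 88     # standard piano
N_HARM = 5      # harmonics per key in the template (assumed)

def key_freq(k):
    """Fundamental frequency of piano key k (1..88); A4 = key 49 = 440 Hz."""
    return 440.0 * 2.0 ** ((k - 49) / 12.0)

def magnitude_spectrogram(x, nperseg=1024):
    """Short-time Fourier transform magnitude: the time-frequency decomposition."""
    freqs, times, Z = stft(x, fs=FS, nperseg=nperseg)
    return freqs, times, np.abs(Z)

def build_dictionary(freqs):
    """One column per piano key, with 1/h-weighted harmonics (assumed model)."""
    D = np.zeros((len(freqs), N_KEYS))
    for k in range(1, N_KEYS + 1):
        for h in range(1, N_HARM + 1):
            fh = h * key_freq(k)
            if fh >= freqs[-1]:
                break
            D[np.argmin(np.abs(freqs - fh)), k - 1] += 1.0 / h
    norms = np.linalg.norm(D, axis=0)
    return D / np.maximum(norms, 1e-12)  # unit-norm atoms

def key_activations(S, D, alpha=1e-4):
    """Sparse nonnegative key intensities per frame via the LASSO."""
    model = Lasso(alpha=alpha, positive=True, max_iter=5000)
    A = np.zeros((N_KEYS, S.shape[1]))
    for j in range(S.shape[1]):
        model.fit(D, S[:, j])
        A[:, j] = model.coef_
    return A

# Synthesize one second of A4 (key 49) with decaying harmonics, then decompose.
t = np.arange(FS) / FS
x = sum((1.0 / h) * np.sin(2 * np.pi * h * 440.0 * t)
        for h in range(1, N_HARM + 1))
freqs, times, S = magnitude_spectrogram(x)
D = build_dictionary(freqs)
A = key_activations(S, D)
print("strongest key:", int(np.argmax(A.mean(axis=1))) + 1)  # should peak at key 49
```

The LASSO's l1 penalty (with a nonnegativity constraint) is what keeps the activations sparse, matching the fact that only a few keys sound at any moment; the dictionary plays the role of the bag-of-features construction mentioned above.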
Finally, he will describe some future plans for this project, including formulating an algorithm that recovers sheet music from .wav or .mp3 piano music files, extending the algorithms to handle multiple instruments and vocal signals simultaneously, and speeding up computation by exploiting the sparse and highly structured behavior of music signals in the frequency domain.