
Gert Lanckriet @ eHarmony

12:00 Arrival and lunch served
12:30 Gert talks
13:30 Discussion

Title: Music Recommendation from Millions of Songs.


A revolution in music production, distribution, and consumption has made millions of songs available, through the Internet, to virtually anyone on the planet. To let users retrieve the desired content from this nearly infinite pool of possibilities, algorithms for automatic music indexing and recommendation are a must.

In this talk, I will discuss two aspects of automated content-based music analysis for music search and recommendation: i) automated music tagging for semantic retrieval, and ii) a query-by-example paradigm for content-based music recommendation, wherein a user queries the system by providing a song, and the system responds with a list of relevant or similar song recommendations (e.g., playlist generation for online radio).
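At its core, the query-by-example paradigm amounts to nearest-neighbor retrieval: represent each song as a content feature vector and return the catalog items most similar to the query. The sketch below illustrates that idea with cosine similarity over toy feature vectors; the features and catalog are illustrative placeholders, and a real system would use rich audio descriptors and a learned similarity rather than raw cosine distance.

```python
import numpy as np

def query_by_example(query_vec, catalog, k=3):
    """Return indices of the k catalog songs most similar to the query.

    Similarity here is plain cosine similarity over content feature
    vectors; a production system would use richer audio features and
    a learned similarity measure.
    """
    # Normalize rows so dot products equal cosine similarities.
    q = query_vec / np.linalg.norm(query_vec)
    C = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    sims = C @ q
    return np.argsort(-sims)[:k]

# Toy catalog: 5 songs, 4-dimensional content features (illustrative).
rng = np.random.default_rng(0)
catalog = rng.random((5, 4))
query = catalog[2] + 0.01 * rng.random(4)  # query nearly identical to song 2
print(query_by_example(query, catalog, k=3))
```

The ranked index list returned here is exactly the "list of relevant or similar song recommendations" the paradigm calls for, e.g., the seed order for a generated playlist.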

Query-by-example applications ultimately depend on the notion of similarity between items to produce high-quality results. Current state-of-the-art systems employ collaborative filter methods to represent musical items, effectively comparing items in terms of their constituent users. While collaborative filter techniques perform well when historical data is available for each item, their reliance on historical data impedes performance on novel or unpopular items. To combat this problem, we rely on content-based similarity, which naturally extends to novel items, but is typically outperformed by collaborative filter methods. In this talk, I will present a method for optimizing content-based similarity by learning from a sample of collaborative filter data. Finally, I will discuss how such algorithms may be adapted to improve recommendations if a variety of information besides musical content is available as well (e.g., music video clips, web documents and/or art work describing musical artists).
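One simplified way to realize this idea, not the specific algorithm of the talk, is to embed songs into a collaborative-filter "taste space" (here, a truncated SVD of a play-count matrix) and fit a linear map from content features into that space by ridge regression. A novel song, which the collaborative filter has never seen, can then be embedded and compared from its content alone. All data and dimensions below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data (illustrative): 50 songs with 8-dim content features and a
# song-by-user play-count matrix standing in for collaborative filter data.
n, d, k = 50, 8, 4
X = rng.random((n, d))                  # content features per song
plays = rng.poisson(2.0, size=(n, 20))  # play counts for 20 users

# Low-rank CF embedding via truncated SVD of the play-count matrix.
U, s, _ = np.linalg.svd(plays, full_matrices=False)
cf_embed = U[:, :k] * s[:k]             # CF "taste space" coordinates

# Learn a linear map from content features into the CF embedding space
# (ridge regression). This is the "learning from a sample of
# collaborative filter data" step, in its simplest linear form.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ cf_embed)

def content_similarity(x_a, x_b, W=W):
    """Similarity of two songs computed purely from content features,
    after projecting both into the CF-trained space."""
    a, b = x_a @ W, x_b @ W
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A novel song unseen by the collaborative filter can still be scored.
novel = rng.random(d)
print(content_similarity(novel, X[0]))
```

Because similarity is computed from content features passed through a CF-trained projection, the method keeps the cold-start robustness of content-based similarity while inheriting structure from the collaborative filter data it was fit to.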


Gert Lanckriet received a Master's degree in Electrical Engineering from the Katholieke Universiteit Leuven, Leuven, Belgium, in 2000, and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Science from the University of California, Berkeley, in 2001 and 2005, respectively. In 2005, he joined the Department of Electrical and Computer Engineering at the University of California, San Diego, where he heads the Computer Audition Lab (CALab). He was awarded the SIAM Optimization Prize in 2008 and is the recipient of a Hellman Fellowship, an IBM Faculty Award, an NSF CAREER Award and an Alfred P. Sloan Foundation Research Fellowship. In 2011, MIT Technology Review named him one of the 35 top young technology innovators in the world (TR35). His lab received a Yahoo! Key Scientific Challenges Award and a Qualcomm Innovation Fellowship. His research focuses on the interplay of convex optimization, machine learning and applied statistics, with applications in computer audition and music information retrieval.

Bring your parking tickets with you for validation.
