Brain’s music perception and our music-to-image dreaming project: deepsing


Details
Join us as we explore the exciting interplay between music, neuroscience, cognition, and deep learning at the last meetup of this year, on Friday 20/12 at 19:00, at OK!Thess.
------
Agenda:
- 19:15: Predicting music preferences through mind-reading: connecting the dots between neuroscience, machine learning and innovation, Dr. Dimitrios Adamos
- 20:15: Seeing music using deepsing: Creating machine-generated visual stories of songs, Nikolaos Passalis and Stavros Doropoulos
- 21:00: Networking and socializing
------
'Predicting music preferences through mind-reading: connecting the dots between neuroscience, machine learning and innovation'
Dr. Dimitrios Adamos
Abstract: What exactly happens in our brain when we enjoy a song? Can we use mobile neuroimaging to predict our favourite music? Is it feasible to build computational models of our musical taste at the population level?
This talk will overview research efforts to decode the listener’s brain dynamics and identify signatures of aesthetic evaluation mined from wearable EEG recordings. A technology demonstrator built for the media campaign of Norway’s largest mobile network operator, featuring famous Norwegian artists, will be presented. These efforts build upon recent empirical evidence that music-induced pleasure is associated with increased functional connectivity and richer network organization in the human brain. Hence, graph-based representations of the brain as a complex networked system will be shown to enable robust “mind-reading” of the listener. In addition, the challenges of using modern machine learning tools to train deep learning models on such EEG signals will be discussed. Finally, I will present my current work, in collaboration with London’s Science Museum, leading the first large-scale recruitment of volunteers to collect human brainwaves during music listening at the population level.
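For the curious, here is a minimal sketch of what a graph-based EEG representation along these lines might look like. This is our own illustration, not the speaker’s pipeline: the array shape, the correlation-based connectivity estimate, the threshold value, and the helper names connectivity_graph and graph_features are all assumptions made for the example.

```python
# A minimal sketch, assuming a (channels x samples) EEG array; not the
# speaker's actual pipeline. It estimates functional connectivity between
# channels and summarizes the resulting network with simple graph metrics.
import numpy as np
import networkx as nx

def connectivity_graph(eeg: np.ndarray, threshold: float = 0.5) -> nx.Graph:
    """Build a channel-level graph from pairwise signal correlations."""
    corr = np.corrcoef(eeg)                        # channels x channels
    adj = (np.abs(corr) >= threshold).astype(int)  # keep strong links only
    np.fill_diagonal(adj, 0)                       # drop self-loops
    return nx.from_numpy_array(adj)

def graph_features(g: nx.Graph) -> np.ndarray:
    """Network-organization descriptors usable as a feature vector."""
    degrees = [d for _, d in g.degree()]
    return np.array([
        nx.density(g),              # overall connectedness
        nx.average_clustering(g),   # local network richness
        float(np.mean(degrees)),    # mean channel degree
    ])

# Random data standing in for a 14-channel consumer-grade EEG recording.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((14, 1024))
print(graph_features(connectivity_graph(eeg)))
# Such feature vectors could then feed a classifier of aesthetic response.
```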
Bio: Dimitrios Adamos holds an MEng in Electrical & Computer Engineering, an MSc in Medical Informatics from the School of Medicine, and a PhD in Neuroinformatics from the School of Biology of Aristotle University of Thessaloniki (AUTh). He is currently a Visiting Researcher in the Department of Computing at Imperial College London, leading the MyBrainTunes research intervention, which aims to deepen our understanding of how people’s brain states during music listening can be decoded at the population level. He is also a Senior Research Fellow in the School of Music Studies of AUTh and a member of the Neuroinformatics Group. His research expertise spans neural signal processing and graph analytics. He now focuses on leveraging consumer-grade EEG devices to record brain activity in near-real-life settings and study the effects of music on listeners’ brain dynamics.
------
'Seeing music using deepsing: Creating machine-generated visual stories of songs'
Nikolaos Passalis (Postdoctoral Researcher, AUTh) and
Stavros Doropoulos (CIO, DataScouting)
Abstract: Can machines feel? Is music perception a solely human ability? Are machines creative? Can they express their “feelings”? These are some of the questions that naturally arise from the artificial intelligence revolution we are currently going through. In this talk, we will discuss these issues and present a novel method, deepsing, which brings us closer to machines that can feel and express themselves. deepsing was born to materialize our idea of translating audio into images, inspired by the Holophonor from Futurama. In this way, deepsing is able to autonomously generate visual stories that convey the emotions expressed in songs. We will briefly present the technology and the methodological advances needed to realize deepsing, and generate visual stories for several well-known songs using only neural networks!
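As a rough sketch of the general recipe behind such a system (our assumptions, not deepsing’s published code): segment the audio, map each segment to an emotion estimate, and turn each estimate into a visual concept that a conditional image generator could render. All names, the toy energy heuristic, and the scene labels below are hypothetical stand-ins.

```python
# A skeleton of a music-to-image pipeline, under our own assumptions; the
# toy energy heuristic stands in for a trained neural emotion regressor,
# and the scene labels stand in for conditioning an image generator.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    start_s: float   # segment start time (seconds)
    valence: float   # pleasantness estimate in [-1, 1]
    arousal: float   # intensity estimate in [0, 1]

def analyze_song(segments: List[List[float]], seg_len_s: float = 5.0) -> List[Frame]:
    """Stand-in for a neural model predicting valence/arousal per segment."""
    frames = []
    for i, seg in enumerate(segments):
        energy = min(1.0, sum(x * x for x in seg) / max(len(seg), 1))
        frames.append(Frame(i * seg_len_s, valence=2 * energy - 1, arousal=energy))
    return frames

def pick_scene(frame: Frame) -> str:
    """Stand-in for mapping an emotion estimate to a renderable concept."""
    if frame.valence > 0 and frame.arousal > 0.5:
        return "sunlit festival crowd"
    if frame.valence > 0:
        return "calm seaside at dawn"
    return "rainy empty street"

# Two fake 5-second segments of audio samples; in a real system each scene
# label (or emotion vector) would condition an image generator, and the
# generated frames, sequenced in time, would form the song's visual story.
song = [[0.1] * 100, [0.9] * 100]
print([pick_scene(f) for f in analyze_song(song)])
```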
