**WAITLIST ONLY: please register HERE (https://www.eventbrite.co.uk/e/creative-ai-artists-and-creatives-in-the-age-of-machine-intelligence-tickets-27413032117), as the venue needs full names for security purposes.**
Over the past few months, there has been increasing interest in applying the latest developments in artificial intelligence to creative projects in art, music, film, theatre and beyond. From Google's DeepDream and algorithms replicating artistic style to the world's first computer-generated musical playing in London's West End, more and more creative AI projects are moving beyond the world of research and academia into the public eye.
This event is the first of a series designed to bring together artists, coders, designers, technologists and industry professionals to discuss the applications of artificial intelligence in the creative industries.
For our first event, we will have the following speakers:
Luba Elliott (https://twitter.com/elluba), Art & AI researcher: 'Artists and Creatives in the Age of Machine Intelligence'
This introductory talk will detail the latest developments in Creative AI, giving an overview of recent trends and projects in the field. Luba Elliott is a creative technology researcher, focussing on art and machine intelligence. She has previously profiled the AI landscape for Mosaic Ventures and built a community around technological and business model innovation in the art industry.
Terence Broad (https://twitter.com/Terrybroad), Artist & Research Student at Goldsmiths: 'Autoencoding Blade Runner: Reconstructing films with artificial neural networks'.
Autoencoding Blade Runner is a research project for Terence's dissertation on the MSci Creative Computing programme at Goldsmiths. He trained a type of artificial neural network called an autoencoder to reconstruct the individual frames of the film Blade Runner. Once trained, the network reconstructed every frame of the film, and the outputs were resequenced into a video. You can read more about the project in his Medium post (https://medium.com/@Terrybroad/autoencoding-blade-runner-88941213abbe#.ndgb0hpxz) or in more technical detail in his dissertation (https://www.academia.edu/25585807/Autoencoding_Video_Frames). The project was featured in Vox (http://www.vox.com/2016/6/1/11787262/blade-runner-neural-network-encoding), Boing Boing (http://boingboing.net/2016/06/02/deep-learning-ai-autoencodes.html), Wired Italy (http://www.wired.it/tv/vede-network-neurale-quando-guarda-film/) and prosthetic knowledge (http://prostheticknowledge.tumblr.com/post/144865082641/blade-runner-autoencoded-experiment-by), and received radio coverage on the CBC show Spark (http://www.cbc.ca/radio/spark/323-inmate-video-visitation-and-more-1.3610791/an-artificial-intelligence-remade-blade-runner-1.3610799).
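For readers new to autoencoders, here is a minimal sketch of the idea behind the project: a network compresses each input "frame" through a small bottleneck and learns to reconstruct the original from that code alone. This is not Terence's actual model (his work used a much larger network trained on real film frames); the tiny stand-in data and layer sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 tiny 8x8 grayscale "frames", flattened to 64-dim vectors.
frames = rng.random((200, 64))

# A single 16-unit hidden layer acts as the bottleneck (the latent code).
W_enc = rng.normal(0, 0.1, (64, 16))
W_dec = rng.normal(0, 0.1, (16, 64))

def forward(x):
    code = np.tanh(x @ W_enc)   # encode: frame -> compact latent code
    recon = code @ W_dec        # decode: latent code -> reconstructed frame
    return code, recon

_, recon0 = forward(frames)
mse0 = np.mean((recon0 - frames) ** 2)   # reconstruction error before training

lr = 0.05
for epoch in range(500):
    code, recon = forward(frames)
    err = recon - frames
    # Backpropagate mean-squared-error gradients through both layers.
    grad_dec = code.T @ err / len(frames)
    grad_code = (err @ W_dec.T) * (1 - code ** 2)   # tanh derivative
    grad_enc = frames.T @ grad_code / len(frames)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# "Reconstructing the film" then amounts to running every frame through the
# trained network, in order, and keeping the outputs.
_, reconstructed = forward(frames)
mse = np.mean((reconstructed - frames) ** 2)
```

In the actual project each reconstructed frame is an image, so the decoder outputs are written back out as video frames; the charm of the result comes from what the bottleneck discards.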
Bob L. Sturm (https://twitter.com/boblsturm), Lecturer in Digital Media at the School of Electronic Engineering and Computer Science, Queen Mary University of London: 'Composing music with deep recurrent networks'.
His research seeks to interrogate the "creative" and/or "intelligent" machine: though it may appear creative and/or intelligent, he wants to know what the machine has _actually_ learned to do. The answer is often counter-intuitive. He will discuss various approaches to answering this question, and how such "learned machines" can provide useful assistance and inspiration to some human creators.
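As background for the talk, the generation step of a recurrent network can be sketched in a few lines: the network keeps a hidden state, and at each step it emits a probability distribution over the next musical symbol, samples one, and feeds it back in. The weights below are random (so the "tune" is noise) and the seven-note vocabulary is a toy assumption, not Sturm's actual system, which trains on large corpora of transcribed tunes; only the shape of the sampling loop is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = list("CDEFGAB")    # toy vocabulary: one octave of note names
V, H = len(vocab), 32      # vocabulary size, hidden-state size

# In a real system these weights come from training on a corpus of music.
W_xh = rng.normal(0, 0.3, (V, H))   # input -> hidden
W_hh = rng.normal(0, 0.3, (H, H))   # hidden -> hidden (the recurrence)
W_hy = rng.normal(0, 0.3, (H, V))   # hidden -> output logits

def sample_tune(length=16, seed_note="C"):
    h = np.zeros(H)
    note = seed_note
    tune = [note]
    for _ in range(length - 1):
        x = np.eye(V)[vocab.index(note)]      # one-hot encode the last note
        h = np.tanh(x @ W_xh + h @ W_hh)      # update the hidden state
        p = np.exp(h @ W_hy)
        p /= p.sum()                          # softmax over possible next notes
        note = rng.choice(vocab, p=p)         # sample the next note
        tune.append(note)
    return " ".join(tune)

melody = sample_tune()
```

Because the hidden state carries information forward, a trained network of this shape can pick up longer-range regularities (phrase structure, repetition) that a simple next-note lookup table cannot.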