It's been a long summer, but we're back! Today we will first of all dive into the various ways in which Deep Learning can be used to model data of a sequential nature, i.e. time series data, language data, etc., and secondly go over the developments in using Deep Learning for creative endeavours.
• 18:00 - 18:15 Grab a coffee/beer and get ready to rumble...
• 18:15 - 18:30 Welcome
• 18:30 - 19:00 Roelof Pieters: Deep Learning & Creativity (Language and Vision)
• 19:00 - 19:15 Break
• 19:15 - 20:00 Anders Huss, Watty: Introduction to sequential modelling with RNNs in Theano and Blocks
• 20:00 - 21:00 Mingling
- Anders Huss (Watty): Introduction to sequential modelling with RNNs in Theano and Blocks
Over the past couple of years recurrent neural networks have outperformed the state of the art in a number of sequential learning problems. For a long time, deep neural nets in general - and recurrent neural networks in particular - have had a reputation of being notoriously hard to train. Thanks to ongoing progress in research and a growing number of open source neural network libraries, the witchcraft aura of deep neural nets is no longer justified. In this talk Anders Huss from Watty will give an introduction to defining and training recurrent neural networks in the library Blocks (http://blocks.readthedocs.org/en/latest/index.html), built on top of Theano (http://deeplearning.net/software/theano/). Focus will be on implementation and "how to get started" rather than theoretical aspects of recurrent network structures.
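As a taste of what a recurrent network actually computes, here is a minimal forward pass in plain NumPy - an illustrative sketch only, not the Blocks API (the weight names `Wxh`, `Whh`, `Why` and the dimensions are made up for the example):

```python
import numpy as np

def rnn_forward(x_seq, Wxh, Whh, Why, h0):
    """Run a simple (Elman-style) RNN over a sequence of input vectors."""
    h = h0
    outputs = []
    for x in x_seq:
        # the new hidden state mixes the current input with the previous state
        h = np.tanh(Wxh @ x + Whh @ h)
        outputs.append(Why @ h)  # per-step output (e.g. logits)
    return outputs, h

# tiny example: 3 time steps, input dim 4, hidden dim 5, output dim 2
rng = np.random.default_rng(0)
x_seq = [rng.normal(size=4) for _ in range(3)]
Wxh = rng.normal(size=(5, 4)) * 0.1
Whh = rng.normal(size=(5, 5)) * 0.1
Why = rng.normal(size=(2, 5)) * 0.1
outputs, h_final = rnn_forward(x_seq, Wxh, Whh, Why, np.zeros(5))
```

Libraries like Blocks wrap exactly this kind of loop (plus backpropagation through time) behind reusable building blocks, which is what makes getting started so much easier than it used to be.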
Watty (http://watty.io/) is a Machine Learning company at the front line of what can be done with digitalised energy data. By uncovering energy waste and turning energy data into actionable information we are determined to make a long-term impact for a sustainable society. Deep learning approaches are a natural part of Watty's algorithm development and we are happy to contribute to and be part of the burgeoning Stockholm Machine Learning community.
- Roelof Pieters: Deep Learning & Creativity (Language and Vision)
The last couple of months have seen incredible developments in techniques for what one could call creative AI. From using RNNs to generate text (http://karpathy.github.io/2015/05/21/rnn-effectiveness/), music (https://soundcloud.com/graphific/sets/rnn-against-the-machine) or poetry (http://sballas8.github.io/2015/08/11/Poet-RNN.html), to turning ConvNets upside down to make them visualize their internal states ("DeepDream (http://www.csc.kth.se/~roelof/deepdream/)"), to fusing (http://gitxiv.com/posts/jG46ukGod8R7Rdtud/a-neural-algorithm-of-artistic-style) the higher-level stylistic representation of one image with the lower-level content representation of another, generative approaches of using AI for creativity are booming.
In this talk Roelof will give a short overview of using RNNs for character-level language models, as well as ConvNets for visualizations or "art".
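For readers wondering what "character-level" means in practice: the model sees text one character at a time, each encoded as a one-hot vector, and generates text by sampling from a softmax over the vocabulary at every step. A small sketch of the encode/sample machinery (illustrative only, not from the talk; the toy vocabulary and random logits stand in for a trained model's output):

```python
import numpy as np

vocab = sorted(set("hello world"))            # toy character vocabulary
char_to_ix = {c: i for i, c in enumerate(vocab)}

def one_hot(c):
    """Encode a character as a one-hot vector over the vocabulary."""
    v = np.zeros(len(vocab))
    v[char_to_ix[c]] = 1.0
    return v

def sample_next(logits, rng):
    """Softmax the model's output logits and sample the next character."""
    p = np.exp(logits - logits.max())         # subtract max for stability
    p /= p.sum()
    return vocab[rng.choice(len(vocab), p=p)]

rng = np.random.default_rng(0)
encoded = [one_hot(c) for c in "hello"]       # input fed to the RNN, one step per char
next_char = sample_next(rng.normal(size=len(vocab)), rng)
```

Feeding each sampled character back in as the next input is what lets a trained character-level RNN "write" arbitrarily long text.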
(A more general - less technical - talk on this topic can be seen at the MLParis recording (https://www.youtube.com/watch?v=hoWOCWprR1U&feature=youtu.be&t=17m45s))
Keep in mind that the talks will be quite technical in orientation and as we have a limited number of places, we prefer the audience to have basic knowledge of machine learning :)
The meetup will of course be live streamed, just like before.
See you all there!