What we're about
This is a group for anyone interested in research on artificial neural networks, deep learning, and related methods. The format varies between discussion groups, guest speakers, and tutorials.

Proposed topics include: training deep belief networks with unsupervised pre-training and supervised fine-tuning, recurrent neural networks, different optimization methods, scalable architectures for large networks across clusters, practical GPU programming (with Python!), NNs for NLP, using NNs in robotics, visualizations for monitoring NN learning, and other topics in these areas.

Recent developments in this area show a lot of promise, but very few people within the machine learning community have much practical experience with these methods. Since there are so few of us, we should get together and share notes!