AI - Ben Dunn and Graham Taylor!


Details
Do not miss this meetup with talks by Ben and Graham!
Ben Dunn: Driving model selection with persistent homology
The optimal choice of model for a given set of data depends greatly on the often unknown properties of the underlying features. A particularly relevant property is the feature topology. For example, seasonal effects in stock trading are easily accounted for once they are recognised as being cyclical, i.e. having a circular topology. Here we leverage tools developed in persistent homology to uncover the topological properties of a priori unknown features. To demonstrate the utility of the method, we consider data recorded from neurons and systematically explore and visualise the underlying neural code.
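As a rough illustration of the idea (not the speaker's code), the sketch below uses the open-source `ripser` package to compute persistent homology of a noisy point cloud sampled from a circle; a single long-lived bar in the H1 diagram is the signature of circular topology, of the kind the abstract attributes to cyclical features:

```python
# Minimal sketch, assuming the `ripser` and `numpy` packages are installed.
import numpy as np
from ripser import ripser

# Sample noisy points from a circle -- a feature with circular topology.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += rng.normal(scale=0.05, size=points.shape)

# Compute persistence diagrams up to homology dimension 1 (loops).
diagrams = ripser(points, maxdim=1)["dgms"]

# One bar in H1 with a much longer lifetime than the rest signals a
# single dominant loop, i.e. circular topology in the data.
h1 = diagrams[1]
lifetimes = h1[:, 1] - h1[:, 0]
print("longest H1 lifetime:", lifetimes.max())
```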
Graham Taylor: Hardware Accelerators for Deep Learning
Deep learning is a branch of machine learning based on learning feature hierarchies from high-dimensional, complex data sets. It has transformed industry, powering major players such as Google, Facebook, IBM and Microsoft, as well as hundreds of startups, enabling new products and services in areas such as computer vision, speech analysis, and natural language processing. It has ridden the wave of cheap and widely available computation (namely general-purpose GPUs) and large human-annotated datasets. In this talk, I will highlight some of our group’s recent efforts in using hardware accelerators to speed up deep learning algorithms. First, I will motivate the need for hardware accelerators from a model search perspective. Then I will describe multi-GPU implementations of convolutional neural networks. Finally, I will describe an implementation of convnets on field-programmable gate arrays (FPGAs), a type of low-power, reconfigurable hardware device. FPGAs can achieve throughput comparable to GPUs at roughly an order of magnitude lower power.
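For flavour only, here is a minimal modern sketch of the multi-GPU data-parallel pattern the abstract refers to, written with PyTorch (an assumption; the talk's actual implementations are not specified here). Each GPU holds a replica of the convnet and processes a slice of every batch:

```python
# Minimal sketch, assuming PyTorch and one or more CUDA devices.
import torch
import torch.nn as nn

# A small stand-in convnet; the group's real models are not described here.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

if torch.cuda.device_count() > 1:
    # Replicate the model across GPUs; each replica sees a slice of the batch
    # and gradients are summed before the parameter update.
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```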
------
Bonus - lectures coming Monday:
I would like to advertise the upcoming lectures by Graham Taylor and his post-doc Jon Schneider; titles and abstracts can be found below.
The talks will be back-to-back (with a short break in between) in the MTFS Auditorium on the first floor, from 14.00 - 15.30 on Monday, April 4.
Members of the computational, engineering and big-data community are warmly invited.
Graham Taylor, PhD
Learning Multi-scale Temporal Dynamics with Recurrent Neural Networks
The last three years have seen an explosion of activity studying recurrent neural networks (RNNs), a generalization of feedforward neural networks that can map sequences to sequences. Training RNNs using backpropagation through time can be difficult, and until recently was thought to be hopeless due to vanishing and exploding gradients. Recent advances in optimization methods and architectures have led to impressive results in modeling speech, handwriting and language, and applications to other areas are emerging. In this talk, I will review some recent progress on RNNs and discuss our work on extending and improving the Clockwork RNN (Koutník et al.), a simple yet powerful model that partitions its hidden units to model specific temporal scales. Our “Dense clockworks” are a shift-invariant form of the architecture which we show to be more efficient and effective than their predecessor. I will also describe a recent collaboration with Google in which we apply Dense clockworks to authenticating mobile phone users based on the movement of the device, as captured by the accelerometer and gyroscope.
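To make the "partitions its hidden units to model specific temporal scales" idea concrete, here is a minimal NumPy sketch of the Clockwork RNN update rule from Koutník et al. (not the speaker's implementation, and the variable names are my own): module g only fires at time steps divisible by its period T_g, so slow modules integrate long-range context while fast ones track recent input.

```python
# Minimal sketch of a Clockwork RNN step, assuming NumPy.
import numpy as np

def clockwork_step(t, x, h, W_h, W_x, periods, block):
    """One Clockwork RNN time step.

    t: current step; x: input vector; h: hidden state;
    periods: module periods, e.g. [1, 2, 4, 8]; block: units per module.
    Note: the original paper also restricts W_h so each module receives
    recurrent input only from modules with equal or longer periods; that
    block-triangular masking is omitted here for brevity.
    """
    pre = W_h @ h + W_x @ x
    h_new = h.copy()
    for g, T in enumerate(periods):
        if t % T == 0:  # only modules whose period divides t update
            sl = slice(g * block, (g + 1) * block)
            h_new[sl] = np.tanh(pre[sl])
    return h_new  # unfired modules keep their previous state

# Usage: run a short random sequence through the cell.
periods, block = [1, 2, 4, 8], 8
n = block * len(periods)
rng = np.random.default_rng(0)
W_h = rng.normal(size=(n, n)) * 0.1
W_x = rng.normal(size=(n, 3)) * 0.1
h = np.zeros(n)
for t, x in enumerate(rng.normal(size=(10, 3))):
    h = clockwork_step(t, x, h, W_h, W_x, periods, block)
```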
Jon Schneider, PhD
Unsupervised Modeling of Shared and Unique Social Behaviors in Drosophila melanogaster
We explore the utility of an unsupervised method to model common, as well as trial-specific, behaviors in Drosophila melanogaster. The method uses a Hidden Markov Model (HMM) with a beta-process prior to model the time series, allowing features to be shared across the dataset (Fox et al. 2009). This gives the flexibility of unsupervised modeling of common behavioral actions (such as walking forward) while still allowing category-specific actions (setting-specific social interactions) to arise, without the need to label the dataset. The beta-process HMM is combined with the ‘infinite HMM’ (Beal et al. 2001), which allows a suitably large library of behaviors to be discovered from the data without setting the number of behaviors (states) a priori. We used the Caltech Fly-vs-Fly dataset (Eyjolfsdottir et al. 2014), which features three social paradigms, to assess the modelling power of the method. With small training datasets, the beta-process iHMM improved generalization (via augmented data for common actions) and test-data classification (over simple iHMMs each trained on a specific treatment). Beyond improving generalization, this framework also allowed the isolation and investigation of potentially interesting treatment-specific behaviors.
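For orientation, the sketch below shows only the basic building block of the abstract's model: fitting a plain Gaussian HMM to a time series and reading off a discrete "behavior" state per frame, using the `hmmlearn` package (an assumption; the beta-process prior and infinite state space described above require specialized samplers such as those of Fox et al. and are not part of this library, and the synthetic data here stands in for real per-frame fly features):

```python
# Minimal sketch, assuming `hmmlearn` and `numpy` are installed.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # synthetic stand-in for per-frame features

# A fixed number of states; the iHMM instead lets this grow with the data.
model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
model.fit(X)

states = model.predict(X)       # one discrete "behavior" label per frame
print(np.bincount(states))      # how often each state is used
```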
