Do not miss this meetup with talks by Ben and Graham!
Ben Dunn: Driving model selection with persistent homology
The optimal choice of model for a given set of data depends greatly on the often unknown properties of the underlying features. A particularly relevant property is the feature topology. For example, seasonal effects in stock trading are easily accounted for once they are recognised as being cyclical, i.e. as having a circular topology. Here we leverage tools developed in persistent homology to uncover the topological properties of a priori unknown features. To demonstrate the utility of the method, we consider data recorded from neurons and systematically explore and visualise the underlying code.
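To give a flavour of the idea, here is a minimal sketch (not from the talk) of how persistent homology can reveal a circular feature in noisy data, assuming the `ripser` and `numpy` Python packages:

```python
# Minimal sketch: detect a circular feature in noisy data with persistent homology.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)

# Sample noisy points from a circle -- a stand-in for a cyclical feature
# such as a seasonal effect or a periodic neural variable.
theta = rng.uniform(0, 2 * np.pi, 300)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += rng.normal(scale=0.1, size=points.shape)

# Compute persistence diagrams up to dimension 1 (H0: components, H1: loops).
dgms = ripser(points, maxdim=1)["dgms"]

# A single long-lived H1 interval indicates one dominant loop, i.e. the
# underlying feature has circular topology.
h1 = dgms[1]
lifetimes = h1[:, 1] - h1[:, 0]
print("longest H1 lifetime:", lifetimes.max())
print("prominent loops:", int(np.sum(lifetimes > 0.5 * lifetimes.max())))
```

One long-lived loop in the H1 diagram suggests the data lie on a circle, which would in turn suggest a model with a cyclical latent variable.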
Graham Taylor: Hardware Accelerators for Deep Learning
Deep learning is a branch of machine learning based on learning feature hierarchies from high-dimensional, complex data sets. It has transformed industry, powering major players such as Google, Facebook, IBM and Microsoft, as well as hundreds of startups, and enabling new products and services in areas such as computer vision, speech analysis, and natural language processing. It has ridden the wave of cheap, widely available computation (namely general-purpose GPUs) and large human-annotated datasets. In this talk, I will highlight some of our group’s recent efforts in using hardware accelerators to speed up deep learning algorithms. First, I will motivate the need for hardware accelerators from a model search perspective. Then I will describe multi-GPU implementations of convolutional neural networks. Finally, I will describe an implementation of convnets on field-programmable gate arrays (FPGAs), a type of low-power, reconfigurable hardware device. FPGAs can achieve throughput comparable to GPUs at roughly an order of magnitude lower power.
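For a concrete sense of what multi-GPU convnet training can look like, here is a minimal sketch (not the implementation described in the talk), assuming PyTorch and its `nn.DataParallel` wrapper:

```python
# Minimal sketch: one data-parallel training step for a small convnet,
# split across however many GPUs are available (falls back to CPU).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
    # Each GPU processes a slice of the batch; gradients are gathered on GPU 0.
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for real image data.
images = torch.randn(32, 3, 64, 64, device=device)
labels = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```

The sketch shows plain data parallelism; the talk covers more specialised multi-GPU and FPGA implementations.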