London Meetup: Deep Dive into TensorFlow #15

Details
Please remember to register here: https://bit.ly/2KFXnji
Agenda:
6:00 - Doors open. Networking. Wine, beer & snacks.
6:45 - Opening remarks.
7:00 - Deep Learning and challenges of scale
7:20 - Write debuggable TensorFlow code and find bugs in the herd
7:40 - Q&A break
8:00 - Wrap-up
_________________________________________________________
DETAILED AGENDA:
Speaker: Adam Grzywaczewski
Title: Deep Learning and challenges of scale
Abstract: The majority of interesting problems tackled by industry are fairly complex. While it is relatively easy to build an early POC of a system, it takes a huge amount of effort to build a solution that meets all of your functional as well as non-functional requirements. For example, it is fairly straightforward to build a POC self-driving vehicle that will drive across a small number of streets under human supervision. On the other hand, building a self-driving car that is robust and safe is an engineering feat requiring petabytes of data for training and validation. In this talk we will tackle some of the key challenges of building complex deep-learning-based systems, with a primary focus on the scalability of the training process.
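For a flavour of what single-machine scaling looks like in TensorFlow, the sketch below uses tf.distribute.MirroredStrategy to replicate a toy Keras model across the available GPUs and scale the global batch size with the number of replicas; the MNIST model and hyperparameters are illustrative placeholders, not material from the talk.

    # Minimal sketch: single-machine multi-GPU training with MirroredStrategy.
    # Model, dataset and hyperparameters are placeholder assumptions.
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()  # replicates the model on each visible GPU
    print("Number of replicas:", strategy.num_replicas_in_sync)

    # Scale the global batch size with the number of replicas.
    per_replica_batch = 64
    global_batch = per_replica_batch * strategy.num_replicas_in_sync

    # Toy in-memory dataset standing in for a real training pipeline.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0
    dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
               .shuffle(10_000)
               .batch(global_batch))

    with strategy.scope():
        # Variables created inside the scope are mirrored across replicas.
        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10),
        ])
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"],
        )

    model.fit(dataset, epochs=1)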
Bio: Adam Grzywaczewski is a deep learning solution architect at NVIDIA, where his primary responsibility is to support a wide range of customers in delivering their deep learning solutions. Adam is an applied research scientist specialising in machine learning, with a background in deep learning and system architecture. Previously, he was responsible for building up the UK government’s machine-learning capabilities while at Capgemini, and he worked in the Jaguar Land Rover Research Centre, where he was responsible for a variety of internal and external projects and contributed to the self-learning car portfolio.
_______________________________________________________
Speaker: Yi Wei
Title: Write debuggable TensorFlow code and find bugs in the herd
Abstract: TensorFlow is powerful but difficult to use. What usually happens is that after you have wired up a TensorFlow graph, it is difficult to verify its correctness, and when a problem occurs, it is difficult to debug. While working on various deep reinforcement learning algorithms at Prowler, I developed a set of techniques to mitigate the pain of writing, understanding and testing TensorFlow code. These techniques enable us to produce machine learning modules much faster, find bugs with minimal effort and, most importantly, deliver learners whose correctness is validated against their definitions, taken mainly from the corresponding research papers.
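For a flavour of the kind of validation the talk refers to, the sketch below checks a TensorFlow computation against a plain-NumPy reference written directly from its definition; the discounted-return example is an illustrative placeholder, not Prowler's code.

    # Minimal sketch: validate a TensorFlow implementation against a NumPy
    # reference derived from the mathematical definition.
    import numpy as np
    import tensorflow as tf


    def discounted_returns_tf(rewards, gamma):
        """Compute discounted returns G_t = sum_k gamma^k * r_{t+k} with tf.scan."""
        rewards = tf.convert_to_tensor(rewards, dtype=tf.float32)
        # Scan backwards over the rewards, accumulating gamma-discounted sums,
        # then flip the result back into chronological order.
        return tf.scan(
            lambda acc, r: r + gamma * acc,
            tf.reverse(rewards, axis=[0]),
            initializer=tf.constant(0.0),
        )[::-1]


    def discounted_returns_np(rewards, gamma):
        """Reference implementation written directly from the definition."""
        returns = np.zeros(len(rewards), dtype=np.float32)
        acc = 0.0
        for t in reversed(range(len(rewards))):
            acc = rewards[t] + gamma * acc
            returns[t] = acc
        return returns


    class DiscountedReturnsTest(tf.test.TestCase):
        def test_matches_reference(self):
            rewards = np.random.RandomState(0).randn(16).astype(np.float32)
            np.testing.assert_allclose(
                self.evaluate(discounted_returns_tf(rewards, gamma=0.99)),
                discounted_returns_np(rewards, gamma=0.99),
                rtol=1e-5,
            )


    if __name__ == "__main__":
        tf.test.main()

Running the file directly executes the check via tf.test.main(), so the TensorFlow code stays testable like any other Python module.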
Bio: Yi Wei is a senior machine learning engineer at Prowler. He focuses on deep reinforcement learning algorithms for automated trading. Prior to Prowler, he co-founded CTX, a fintech company that provides algo-trading infrastructure. He also worked at Microsoft Research Cambridge for three years, where he developed the CodeSnippets technology, which synthesizes code from users’ natural language queries and from publicly available code repositories. The Bing search engine productized this technology in its tech search section and reported a 4% improvement in one of its core metrics, session success rate, a hard-to-achieve gain in the world of commercial search engines. He won the Microsoft Research Technology Transfer Award for the CodeSnippets project. Yi Wei received his PhD from ETH Zurich in 2012 on the topic of automated testing and bug fixing.
