Kernel Embeddings & Meta Learning + Learning & Reasoning in AI


Respond by 12:00 PM on 6/19.




Please note that photo ID will be required. Attendees should ensure their Meetup profile name includes their full name to guarantee entry.

- 18:30: Doors open, pizza, beer, networking
- 19:00: First talk
- 19:45: Break & networking
- 20:00: Second talk
- 20:45: Close

Man AHL: At Man AHL, we mix machine learning, computer science and engineering with terabytes of data to invest billions of dollars every day.

Evolution AI: Machines that Read - get answers from your text data.

* Kernel Embeddings, Meta Learning & Distributional Transfer (Dino Sejdinovic)

Abstract: Embeddings of probability distributions into reproducing kernel Hilbert spaces are a useful framework for fully nonparametric hypothesis testing and for learning on distributional inputs. I will describe recent applications of this framework in the context of meta learning. In particular, we consider hyperparameter learning using Bayesian optimisation, where one typically requires initial exploration even when similar prior tasks have been solved. We propose to transfer information across tasks using learned kernel-neural representations of the training datasets used in those tasks. This results in a joint Gaussian process model over hyperparameters and data representations, and the resulting method converges faster than existing baselines, in some cases requiring only a few evaluations of the target objective.
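For readers new to the topic, a minimal sketch of the underlying idea (illustrative only, not the speaker's code): the empirical kernel mean embedding of a sample is the average of its kernel feature maps, and the distance between two embeddings (the maximum mean discrepancy, MMD) gives a fully nonparametric comparison of distributions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Squared MMD between the empirical kernel mean embeddings of X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 2))  # sample from P
Y = rng.normal(0, 1, (200, 2))  # another sample from P
Z = rng.normal(3, 1, (200, 2))  # sample from a shifted distribution

print(mmd2(X, Y))  # small: same distribution
print(mmd2(X, Z))  # larger: different distributions
```

A larger MMD indicates more dissimilar samples; the talk builds on this machinery to represent whole training datasets as inputs to a Gaussian process model.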

Bio: Dino Sejdinovic is an Associate Professor at the Department of Statistics, University of Oxford, a Fellow of Mansfield College, Oxford, and a Turing Fellow of the Alan Turing Institute. He previously held postdoctoral positions at the Gatsby Computational Neuroscience Unit, University College London ([masked]) and at the Institute for Statistical Science, University of Bristol ([masked]), and worked as a data science consultant in the financial services industry. He received a PhD in Electrical and Electronic Engineering from the University of Bristol (2009) and a Diplom in Mathematics and Theoretical Computer Science from the University of Sarajevo (2006).

* Learning & Reasoning in Artificial Intelligence (Thomas Lukasiewicz)

Abstract: The talk will give an overview of research at the intersection of knowledge representation and reasoning with machine and deep learning, ranging from commonsense reasoning in deep-learning-based natural language processing and deep learning with explanations, to deep-learning-based approaches to logical reasoning and structured data extraction from unstructured sources (such as natural language text and videos). In particular, the talk will present a novel approach to ontology reasoning that is based on deep learning rather than logic-based formal reasoning. More specifically, a new model for statistical relational learning is introduced on top of deep recursive neural networks. Compared with one of the best logic-based ontology reasoners on large standard benchmark datasets, the implemented system shows high reasoning quality while being up to two orders of magnitude faster.
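As background on statistical relational learning (purely illustrative — the talk's model is built on deep recursive neural networks, which differs from this toy), one common family of models embeds entities and relations as vectors and scores a candidate fact (subject, relation, object) with a simple bilinear product, DistMult-style:

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 8
entities = {"Cat": 0, "Mammal": 1, "Animal": 2}   # hypothetical ontology entities
relations = {"subClassOf": 0}                     # hypothetical relation

# Randomly initialised embeddings; in practice these are trained on known facts
# so that true triples score higher than corrupted ones.
E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

def score(s, r, o):
    """DistMult-style triple score: sum_i e_s[i] * w_r[i] * e_o[i]."""
    return float(np.sum(E[entities[s]] * R[relations[r]] * E[entities[o]]))

print(score("Cat", "subClassOf", "Mammal"))
```

After training, such a scorer can rank unseen triples and thereby approximate entailment, trading logical guarantees for speed — the motivation behind learned ontology reasoners.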

Bio: Thomas Lukasiewicz is a Professor of Computer Science in the Department of Computer Science at the University of Oxford and a Turing Fellow at the Alan Turing Institute in London. Prior to this, he held a prestigious Heisenberg Fellowship of the German Research Foundation (DFG), affiliated with the University of Oxford, TU Vienna, Austria, and Sapienza University of Rome, Italy. His research interests are in artificial intelligence (AI), machine learning, and information systems, especially knowledge representation, uncertainty in AI, and deep learning. He received the IJCAI-01 Distinguished Paper Award, the AIJ Prominent Paper Award 2013, and the RuleML 2015 Best Paper Award. He is an area editor for the journal ACM TOCL, an associate editor for the journals JAIR and AIJ, and an editor for the journals Semantic Web and Heliyon.