Grammatical Inference and Neural Theorem Provers


Details
Agenda:
- 18:30: doors open, pizza, beer, networking
- 19:00: First talk
- 20:00: Break & networking
- 20:15: Second talk
- 21:30: Close
End-to-end Differentiable Proving - Tim Rocktäschel
I will present our work on Neural Theorem Provers (NTPs): deep neural networks for end-to-end differentiable proving that work with dense vector representations of symbols. NTPs are recursively constructed by following the backward chaining algorithm as used in Prolog. Specifically, in NTPs, unification is replaced by a differentiable computation on vector representations of symbols using a radial basis function kernel. NTPs follow a long tradition of neural-symbolic approaches to automated knowledge base completion. They can be trained to infer facts from a given incomplete knowledge base using gradient-based optimization. By doing so, they learn to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove facts, (iii) induce logical rules, and (iv) use either provided or induced logical rules for complex multi-hop reasoning. On a benchmark knowledge base, we demonstrate that NTPs outperform existing neural link prediction methods while at the same time providing interpretable logical rules.
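As a rough illustration of the differentiable unification step described in the abstract (a minimal sketch, not Rocktäschel's implementation; the embeddings, the standard Gaussian RBF form, and the bandwidth mu are all illustrative assumptions), the following shows how two symbol embeddings can be scored for soft unification:

```python
import numpy as np

def rbf_unify(u, v, mu=1.0):
    """Soft unification score in (0, 1]: exp(-||u - v||^2 / (2 * mu^2)).
    Identical embeddings unify with score 1; distant ones approach 0."""
    return np.exp(-np.sum((u - v) ** 2) / (2 * mu ** 2))

# Toy 3-d embeddings for relation symbols (values are made up).
grandfather = np.array([0.9, 0.1, 0.0])
grandpa     = np.array([0.8, 0.2, 0.1])
parent      = np.array([0.0, 0.9, 0.4])

print(rbf_unify(grandfather, grandpa))  # high score: near-synonyms
print(rbf_unify(grandfather, parent))   # low score: dissimilar relations

# A proof's overall score can be taken as the minimum of the soft
# unification scores along its path, so the whole proof stays
# differentiable and gradients flow back into the symbol embeddings.
proof_score = min(rbf_unify(grandfather, grandpa),
                  rbf_unify(grandpa, grandpa))
print(proof_score)
```

Because every step is a smooth function of the embeddings, training on known facts nudges similar symbols (here, grandfather and grandpa) closer together, which is how NTPs learn to exploit such similarities in proofs.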
Bio: Tim Rocktäschel is a postdoctoral researcher in the Whiteson Research Lab at the University of Oxford's Department of Computer Science. Before that, he was a Ph.D. student in the Machine Reading group at University College London. He is a recipient of a Google Ph.D. Fellowship in Natural Language Processing and a Microsoft Research Ph.D. Scholarship. Tim's research focuses on machine learning models that learn reusable abstractions and that generalize from few training examples by incorporating various forms of prior knowledge. His work is at the intersection of deep learning, reinforcement learning, program induction, logic, and natural language processing.
Grammatical inference: learning grammars and automata - Colin de la Higuera
Grammatical inference corresponds to learning a finite state machine or a grammar from structured data (strings, trees, and now even graphs). Applications are numerous, from computational linguistics to bioinformatics, model checking, and pattern recognition. The algorithms and techniques are quite different from those of statistical machine learning: they are often less robust, but better suited to cases close to identification, where we know that there is a pattern of the intended family to be found.
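To make the setting concrete (a toy sketch, not one of the speaker's algorithms): a classic first step in many grammatical inference methods is to build a prefix tree acceptor, a trie-shaped DFA accepting exactly a given set of positive example strings, which state-merging algorithms such as RPNI then generalize. The function and variable names below are illustrative:

```python
def prefix_tree_acceptor(positive_strings):
    """Build a prefix tree acceptor (PTA): a trie-shaped DFA whose
    accepting states correspond exactly to the positive examples."""
    transitions = {}   # (state, symbol) -> state
    accepting = set()
    next_state = 1     # state 0 is the start state
    for s in positive_strings:
        state = 0
        for symbol in s:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, s):
    """Run the DFA on string s; reject on a missing transition."""
    state = 0
    for symbol in s:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

trans, acc = prefix_tree_acceptor(["ab", "abb", "b"])
print(accepts(trans, acc, "abb"))  # True: a positive example
print(accepts(trans, acc, "a"))    # False: only a prefix of one
```

The PTA accepts nothing beyond the examples; the interesting (and hard) part of grammatical inference is deciding which of its states to merge so that the machine generalizes to the intended language.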
Bio: Colin de la Higuera received his PhD from Bordeaux University, France, in 1989. He has been an associate professor at the University of Montpellier and a professor at Saint-Etienne University, and is now a professor at Nantes University.
He has been involved in a number of research themes, including algorithmics, formal language theory, and pattern recognition. His chief interest lies in grammatical inference, a field in which he has authored more than 50 peer-reviewed research papers and a monograph, "Grammatical Inference: Learning Automata and Grammars", and in which he chaired the International Community in Grammatical Inference (2002-2007).
He was the founding president of the SIF, the French Informatics Society, and launched the Class'Code project, a large project building and running a MOOC-based blended-learning programme whose goal is to train 300,000 education professionals in France in computer science education.
He is currently a trustee of the Knowledge for All foundation, working towards the use of technology for the open dissemination of knowledge and education. He holds the UNESCO chair in technologies for the training of teachers through open educational resources at the University of Nantes.
