We are excited to invite you to the UCL NLP Inaugural Meeting, celebrating the formation of the UCL Natural Language Processing group (http://nlp.cs.ucl.ac.uk/). We would like to welcome you to an afternoon of invited keynote talks, presentations by members of the UCL NLP group, a poster session, and an evening social. Below you can find the schedule, as well as the titles and abstracts of the invited talks.
1:30—1:40 Tim Rocktäschel & Sebastian Riedel: Opening Remarks
1:40—1:45 Marek Rei & Helen Yannakoudakis: London NLP News
1:45—2:15 Sebastian Riedel: The Deconstruction of Automated Knowledge Base Construction
2:15—2:45 Keynote Andreas Vlachos: Automated Fact Checking
2:45—3:30 Coffee Break
3:30—4:00 Keynote Thomas Demeester: Small? Yes. But clever!
4:00—4:15 Pontus Stenetorp: Constructing Datasets for Multi-hop Reading Comprehension
4:15—4:30 Max Bartolo: Asking Harder Questions
4:30—4:45 Patrick Lewis: Unsupervised QA by Cloze Translation
4:45—5:00 Yuxiang Wu: Gaussian Match Kernel of Embeddings for Information Retrieval
5:00—5:15 Pasquale Minervini: Explainable, Data-Efficient, Relational Representation Learning
5:15—5:30 Coffee Break
5:30—7:00 Poster Session & Snacks
7:00—9:00 Drinks at The Euston Tap
Keynote Andreas Vlachos
Title: Automated Fact Checking
Abstract: Fact checking is the task of verifying a claim against sources such as knowledge bases and text collections. While this task has long been important to journalism, it has recently become of interest to the general public as one of the weapons against misinformation. In this talk, I will first discuss the task and what we should expect from automated methods for it. Following this, I will present our approach to fact checking simple numerical statements, which we were able to learn without explicitly labelled data. Then I will describe how we automated part of the manual process of the debunking website emergent.info, which later evolved into the Fake News Challenge with 50 participants. Finally, I will present the Fact Extraction and Verification shared task, which took place in 2018, and its upcoming second edition.
Keynote Thomas Demeester
Title: Small? Yes. But clever!
Abstract: Models consisting entirely of deep neural networks, trained on large datasets, hold the state of the art in many tasks these days. Such models can become very large in terms of the number of parameters. Yet, smaller models may be attractive, for example for deployment on hand-held devices, or simply because of the limited computational resources available to small businesses.
I will use some of our recent work as examples of different strategies to avoid the need for very large general-purpose neural network models.
The first approach involves separating the model's pattern recognition capabilities from its required reasoning functionality, and offloading the latter to a specialised classical AI tool. Secondly, I will demonstrate how highly expressive dense sequence models can be made sparse even before training. The final example will show how neural network models can be made more efficient by adapting them to the physics underlying the data generation process.