
Learning Graphs from Data: A Signal Processing Perspective & Language Processing

Hosted By
Martin G. and Dirk G.

Details

Please note that Photo ID will be required.

Agenda:

  • 18:30: Doors open, pizza, beer, networking
  • 19:00: First talk
  • 20:00: Break & networking
  • 20:15: Second talk
  • 21:30: Close

*Learning graphs from data: A signal processing perspective (Xiaowen Dong)

Abstract: The construction of a meaningful graph topology plays a crucial role in the success of many graph-based representations and algorithms for handling structured data. When a good choice of the graph is not readily available, however, it is often desirable to infer the graph topology from the observed data. In this talk, I will first survey classical solutions to the problem of graph learning from a machine learning viewpoint. I will then discuss a series of recent works from the fast-growing field of graph signal processing (GSP) and show how signal processing tools and concepts can be utilized to provide novel solutions to this important problem. Finally, I will end with some of the open questions and challenges that are central to the design of future signal processing and machine learning algorithms for graph learning.
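To make the problem concrete, here is a minimal sketch in NumPy of the classical pipeline the talk starts from: inferring a graph from node observations with a Gaussian-kernel k-nearest-neighbour baseline, then scoring it with the GSP smoothness measure tr(XᵀLX). The function names, the choice of kernel, and the parameter values are illustrative assumptions, not the speaker's method.

```python
import numpy as np

def gaussian_knn_graph(X, k=3, sigma=1.0):
    """Classical baseline for graph inference (illustrative sketch).

    X: (n_nodes, n_signals) array -- row i holds node i's observations.
    Returns a symmetric, non-negative weighted adjacency matrix W.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances between node observation vectors.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    # Keep only each node's k strongest connections, then symmetrise.
    for i in range(n):
        weak = np.argsort(W[i])[:-k]
        W[i, weak] = 0.0
    return np.maximum(W, W.T)

def smoothness(X, W):
    """GSP smoothness tr(X^T L X): small when signals vary little
    across strongly connected nodes (L is the combinatorial Laplacian)."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(X.T @ L @ X)
```

Many of the GSP approaches surveyed in the talk can be read as replacing the fixed kernel-plus-thresholding recipe above with an optimisation that searches for the graph making the observed signals smooth.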

Bio: Xiaowen Dong is a Departmental Lecturer (roughly Assistant Professor) in the Department of Engineering Science and a Faculty Member of the Oxford-Man Institute, University of Oxford. He is primarily interested in developing novel techniques that lie at the intersection of machine learning, signal processing, and game theory in the context of networks, and applying them to study questions across social and economic sciences, with a particular focus on understanding human behaviour, decision making and societal changes.

*Some Theoretical Underpinnings for Language Processing (Jeremy Reffin - TAG Laboratory, University of Sussex)

Abstract: The research field of distributional semantics predates the era of Deep Learning in natural language processing - but its ideas provide some intuition as to how and why the simple structures of neural networks are able to develop and demonstrate aspects of language competence. I will outline what those ideas are and illustrate how they tie back to theoretically coherent models of language developed by Wittgenstein and Ferdinand de Saussure around 100 years ago. Taking these old ideas seriously gives coherent theoretical underpinnings to current work and also offers interesting implications for how to take language processing forwards, which I will discuss. Looking ahead, I think it provides an optimistic view of the prospects for developing more general language competence using quite simple underlying architectures.
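The core distributional idea — that a word's meaning is characterised by the contexts it appears in, echoing Wittgenstein's "meaning is use" — can be sketched in a few lines of plain Python: build context-count vectors from a corpus and compare them with cosine similarity. The toy corpus and function names below are illustrative assumptions, not material from the talk.

```python
from collections import Counter
import math

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by the counts of words appearing
    within `window` positions of it across the corpus."""
    vecs = {}
    for sent in sentences:
        toks = sent.lower().split()
        for i, w in enumerate(toks):
            ctx = toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "stocks rose on the news",
]
v = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts, so their vectors end up more alike
# than those of "cat" and "stocks".
```

Neural language models can be seen as learning dense, compressed versions of such context vectors, which is one intuition for why simple architectures develop aspects of language competence.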

Bio: Following Jeremy's undergraduate studies in Natural Sciences at the University of Cambridge, he completed a DPhil in Biomedical Engineering at the University of Sussex. Jeremy subsequently enjoyed a 20-year business career as a consultant, a venture capitalist, and a private equity partner before returning to the academic world in 2009. Since 2010, he has co-founded two AI research laboratories at the University of Sussex, the Centre for Analysis of Social Media at the think-tank Demos, and an R&D-focused consulting firm, CASM Consulting LLP.

London Machine Learning Meetup