  • Towards Interpretable & Responsible AI & Searching for the Principles of Reasoning
    Please note that photo ID will be required. Please ensure your Meetup profile name includes your full name to guarantee entry.

    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 19:45: Break & networking
    - 20:00: Second talk
    - 20:45: Close

    * Towards Interpretable and Responsible AI in Structured Worlds (Vaishak Belle)

    Abstract: The field of statistical relational learning aims at unifying logic and probability to reason and learn from relational data. Logic provides a means to codify high-level dependencies between individuals, enabling descriptive clarity in the knowledge representation system, while probability theory provides the means to quantify our uncertainty about this knowledge. In this talk, we report on recent progress in the field while touching on the themes of interpretability and responsibility in AI. If time permits, we will also discuss very recent work on automating responsible decision making by explicitly capturing the blame that should be accorded to a system for a decision it has taken.

    Bio: Vaishak Belle is a Chancellor’s Fellow at the School of Informatics, University of Edinburgh, an Alan Turing Institute Faculty Fellow, and a member of the RSE (Royal Society of Edinburgh) Young Academy of Scotland. Vaishak’s research is in artificial intelligence and is motivated by the need to augment learning and perception with high-level structured, commonsensical knowledge, to enable AI systems to learn faster and to build more accurate models of the world. He is interested in computational frameworks that can explain their decisions and that are modular, reusable, and robust to variations in problem description. He has co-authored over 40 scientific articles on AI, and along with his co-authors he has won the Microsoft best paper award at UAI and the Machine Learning journal award at ECML-PKDD. In 2014, he received a silver medal from the Kurt Goedel Society.

    * Searching for the Principles of Reasoning and Intelligence (Shakir Mohamed)

    Abstract: We are collectively committed to a common task: a search for the general principles that make machines-that-learn possible. This leads to the question: what are the universal principles, if there are any, of reasoning and intelligence in machines? For me, these are the principles of probability and of probabilistic inference. My search begins with four statistical operations that expose the dual tasks of learning and of testing. We can instantiate many different types of inferential questions, and I share some of the pathways I've followed in attempting to find general-purpose approaches to them. One such area is variational inference, and I'll briefly discuss the roles of amortised inference, stochastic optimisation, and universal density estimation. For the most part, I'll explore recent work on testing as an inferential principle for implicit probabilistic models, and discuss work on estimation-by-comparison, density ratio estimation, and the method of moments. Different types of models require different types of inference, which makes any universal inference scheme elusive. But these are ongoing efforts, and as usual, there remain many questions and much more to do. My search for the principles of reasoning and intelligence continues.

    Bio: Shakir Mohamed is a staff research scientist at DeepMind. Shakir's research is in statistical machine learning and artificial intelligence. His work focusses on the interface between probabilistic reasoning, deep learning and reinforcement learning, and on how the computational solutions that emerge at that intersection can be used to develop general-purpose learning systems. Shakir organises his efforts around three pillars: searching for the principles of reasoning and intelligence, global challenges, and transformation and diversity.
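    As a concrete illustration of the estimation-by-comparison idea mentioned in the second abstract: a probabilistic classifier trained to distinguish samples of two distributions implicitly estimates their density ratio. A minimal sketch, assuming toy Gaussian data and equal sample sizes (our own example, not material from the talk):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    xp = rng.normal(0.0, 1.0, size=(1000, 1))  # samples from p
    xq = rng.normal(1.0, 1.5, size=(1000, 1))  # samples from q

    # Label p-samples 1 and q-samples 0, then fit a classifier.
    X = np.vstack([xp, xq])
    y = np.concatenate([np.ones(1000), np.zeros(1000)])
    clf = LogisticRegression().fit(X, y)

    # With equal sample sizes, r(x) = p(x)/q(x) ~ P(y=1|x) / P(y=0|x).
    probs = clf.predict_proba(np.array([[0.0], [2.0]]))
    ratio = probs[:, 1] / probs[:, 0]
    ```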

    1 Angel Ln · London

    2 comments
  • Learning to Navigate & Learning and Leveraging Disentangled Representations for RL
    Please note that photo ID will be required. Please ensure your Meetup profile name includes your full name to guarantee entry.

    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 19:45: Break & networking
    - 20:00: Second talk
    - 20:45: Close

    * Learning to Navigate (Piotr Mirowski)

    Abstract: Navigation is an important cognitive task that enables humans and animals to traverse the complex world over long distances, with or without maps. Such long-range navigation can simultaneously support self-localisation (“I am here”) and a representation of the goal (“I am going there”). For this reason, studying navigation is fundamental to the study and development of artificial intelligence, and trying to replicate navigation in artificial agents can also help neuroscientists understand its biological underpinnings. This talk will cover our own journey to understand navigation by building deep reinforcement learning agents, starting from learning to control a simple agent that can explore and memorise large 3D mazes, to building agents that can learn to read and write to memory in order to generalise goal acquisition skills to previously unseen environments. I will show how these artificial agents relate to navigation in the real world, both through the study of the emergence of grid cell representations in neural networks -- akin to those found in the mammalian entorhinal cortex -- and by demonstrating that these agents can navigate in Street View-based real-world photographic environments.

    Bio: Piotr Mirowski is a Senior Research Scientist in the Deep Learning department at DeepMind, focusing on navigation-related research and on scaling up agents to real-world environments. Piotr studied computer science in France (ENSEEIHT, Toulouse) and obtained his PhD in computer science in 2011 at New York University, with a thesis on “Time Series Modeling with Hidden Variables and Gradient-based Algorithms” supervised by Prof. Yann LeCun (Outstanding Dissertation Award, 2011). Prior to joining DeepMind, Piotr worked at Schlumberger Research, the NYU Comprehensive Epilepsy Center, Bell Labs and Microsoft Bing, on problems including geostatistics, geological image processing, epileptic seizure prediction from EEG, WiFi-based geolocalisation, robotics, NLP and search query auto-completion. In his spare time, Piotr performs theatre and improv, with or without robots on the stage, and investigates the use of AI for artistic human- and machine-based co-creation.

    * Learning and Leveraging Disentangled Representations for RL (Loic Matthey)

    Abstract: Deep reinforcement learning has shown great success in tackling increasingly complex tasks, but it still lacks the kind of general and modular reasoning that humans and animals readily deploy when solving new tasks. A key challenge in overcoming this limitation is learning better state representations for our RL algorithms, to make them more general, useful, interpretable and able to reason about the statistics of the world. I will cover advances in unsupervised representation learning that our team has published over the years, including Beta-VAE, SCAN and more recent work. I will then show how one can leverage such representations for RL, and talk about the challenges that arise while doing so.

    Bio: Loic Matthey is a Senior Research Scientist at DeepMind, working in the Neuroscience team. He obtained his PhD in Computational Neuroscience and Machine Learning from the Gatsby Unit at UCL, under the supervision of Peter Dayan, working on probabilistic models of visual working memory. Previously, he obtained an MSc in Computer Science and Biocomputing from EPFL in Switzerland. His current research focuses on unsupervised representation learning, on using the learned representations for reinforcement learning, and on assessing different ways to move towards more general-purpose agents capable of conceptual reasoning.
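    Since Beta-VAE comes up in Loic's abstract, here is the core of its objective for readers who have not seen it: a standard VAE evidence lower bound whose KL term is scaled by a factor beta > 1, which encourages disentangled latents. A minimal PyTorch sketch of the published loss, assuming Bernoulli pixels in [0, 1] (an illustration, not the team's code):

    ```python
    import torch
    import torch.nn.functional as F

    def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
        # Reconstruction term: Bernoulli log-likelihood (binary cross-entropy).
        recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
        # Closed-form KL between the diagonal Gaussian q(z|x) = N(mu, sigma^2)
        # and the prior N(0, I).
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # beta = 1 recovers the ordinary VAE; beta > 1 up-weights the KL term,
        # pressuring the encoder towards disentangled latent factors.
        return recon + beta * kl
    ```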

    1 Angel Ln · London

    7 comments
  • Recurrent Neural Models for Machine Perception & Training and Verifying Deep Neural Networks
    Please note that photo ID will be required.

    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 20:00: Break & networking
    - 20:15: Second talk
    - 21:30: Close

    * Recurrent Neural Models for Machine Perception (Ronnie Clark)

    Abstract: Visual perception models are needed to enable mobile robots to actively explore a scene and understand their surroundings. Lifelong machine perception has thus long been a goal of computer vision for robotics. However, traditional perception models have fallen short of this goal, requiring extensive hand engineering and being plagued by issues of robustness and adaptability. In this talk we will look at ways in which new machine learning methods (specifically deep neural models) are used for robot perception with the goal of eliminating the restrictions incurred by traditional approaches. We will look at the role recurrence plays in perception, how recurrent networks can be used to model, recognise and interpret a robot's environment, and how they can aid in creating dense, semantically annotated reconstructions of the world. We will specifically focus on applying these models in real-world environments with low-cost cameras, fast motion and changing lighting conditions.

    Bio: Ronnie is a postdoctoral fellow at Imperial College London, where he holds a Dyson Fellowship. He obtained his PhD from the Department of Computer Science at the University of Oxford and an MSc in Information Engineering from the University of the Witwatersrand. Ronnie is interested in the general topic of visual machine perception, which is needed to enable mobile devices to model, explore and understand their surroundings. His current research focuses on ways in which deep neural models can be used alongside traditional methods and existing domain knowledge, and on how these methods can be used to create consistent, dense, semantically annotated reconstructions of the world.

    * What optimisation can do for deep learning: training and verifying deep neural networks (Rudy Bunel & Leonard Berrada)

    Abstract: In the past decade, deep neural networks have allowed undeniable and impressive progress in supervised learning. However, important challenges remain ahead. Training new models is still a tedious process, particularly because the commonly used SGD requires a manual schedule of the learning rate. And in order to use deep models in safety-critical applications, it is crucial to formally verify their properties. In this talk, we'll present two lines of work that address these issues. First, we present an optimisation algorithm, DFW, that trains neural networks by relying on support vector machines. DFW trains modern deep networks several times faster than a manually tuned SGD and achieves similar or superior generalisation performance, all while requiring only a single hyper-parameter. Second, we present a framework for generalising existing algorithms for formal verification, as well as a new method that can be extracted from it. This novel approach achieves order-of-magnitude speedups compared to previous work.

    Bios: Rudy Bunel is a PhD student at the University of Oxford. He started his PhD by publishing on optimisation for computer vision and program synthesis problems but is now working on formal verification of neural networks, although he is generally interested in everything related to machine learning and optimisation. His supervisors are Pawan Kumar (Oxford), Philip Torr (Oxford) and Pushmeet Kohli (DeepMind). Leonard Berrada is a PhD student at the University of Oxford, where he is supervised by Pawan Kumar and Andrew Zisserman. He is part of the Centre for Doctoral Training in Autonomous Intelligent Machines & Systems. He holds an MEng in Operations Research & Industrial Engineering from the University of California, Berkeley, an MSc in Engineering Science from Ecole Centrale Paris, and BScs in Engineering Science (Ecole Centrale Paris) and Fundamental Physics (Universite Paris-Sud).
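    As background for the verification half of the evening: one simple baseline that frameworks for neural network verification generalise is interval bound propagation, which pushes an axis-aligned input box through the network layer by layer. A toy sketch under our own assumptions (a generic baseline, not the speakers' method):

    ```python
    import numpy as np

    def interval_affine(W, b, lo, hi):
        # Exact bounds of x -> W @ x + b over the box [lo, hi].
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def interval_relu(lo, hi):
        # ReLU is monotone, so it maps bounds to bounds.
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # Bound a 2-layer network's logits over an L-infinity ball around x.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
    x, eps = rng.normal(size=4), 0.1
    lo, hi = interval_affine(W1, b1, x - eps, x + eps)
    lo, hi = interval_relu(lo, hi)
    lo, hi = interval_affine(W2, b2, lo, hi)
    # If the true class's lower logit bound exceeds every other class's
    # upper bound, the prediction is verified over the whole ball.
    ```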

    1 Angel Ln · London

    5 comments
  • Learning Graphs from Data: A Signal Processing Perspective & Language Processing
    Please note that photo ID will be required.

    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 20:00: Break & networking
    - 20:15: Second talk
    - 21:30: Close

    * Learning graphs from data: A signal processing perspective (Xiaowen Dong)

    Abstract: The construction of a meaningful graph topology plays a crucial role in the success of many graph-based representations and algorithms for handling structured data. When a good choice of graph is not readily available, however, it is often desirable to infer the graph topology from the observed data. In this talk, I will first survey classical solutions to the problem of graph learning from a machine learning viewpoint. I will then discuss a series of recent works from the fast-growing field of graph signal processing (GSP) and show how signal processing tools and concepts can be utilized to provide novel solutions to this important problem. Finally, I will end with some of the open questions and challenges that are central to the design of future signal processing and machine learning algorithms for graph learning.

    Bio: Xiaowen Dong is a Departmental Lecturer (roughly equivalent to an Assistant Professor) in the Department of Engineering Science and a Faculty Member of the Oxford-Man Institute, University of Oxford. He is primarily interested in developing novel techniques at the intersection of machine learning, signal processing, and game theory in the context of networks, and in applying them to questions across the social and economic sciences, with a particular focus on understanding human behaviour, decision making and societal change.

    * Some Theoretical Underpinnings for Language Processing (Jeremy Reffin, TAG Laboratory, University of Sussex)

    Abstract: The research field of distributional semantics predates the era of deep learning in natural language processing, but its ideas provide some intuition as to how and why the simple structures of neural networks are able to develop and demonstrate aspects of language competence. I will outline what those ideas are and illustrate how they tie back to theoretically coherent models of language developed by Wittgenstein and Ferdinand de Saussure around 100 years ago. Taking these old ideas seriously gives coherent theoretical underpinnings to current work and also offers interesting implications for how to take language processing forwards, which I will discuss. Looking ahead, I think it provides an optimistic view of the prospects for developing more general language competence using quite simple underlying architectures.

    Bio: Following undergraduate studies in Natural Sciences at the University of Cambridge, Jeremy completed a DPhil in Biomedical Engineering at the University of Sussex. He subsequently enjoyed a 20-year business career as a consultant, a venture capitalist, and a private equity partner before returning to the academic world in 2009. Since 2010, he has co-founded two AI research laboratories at the University of Sussex, the Centre for Analysis of Social Media at the think-tank Demos, and an R&D-focused consulting firm, CASM Consulting LLP.
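    To make the "classical solutions" Xiaowen surveys concrete: a standard machine learning baseline infers a graph as the sparsity pattern of the inverse covariance of the observed signals, i.e. the graphical lasso. A minimal sketch on synthetic chain-structured data (our own illustration, not the GSP methods from the talk):

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    # Synthetic signals on 5 nodes with chain dependencies 0-1-2-3-4.
    rng = np.random.default_rng(0)
    n, d = 500, 5
    X = np.zeros((n, d))
    X[:, 0] = rng.normal(size=n)
    for j in range(1, d):
        X[:, j] = 0.8 * X[:, j - 1] + rng.normal(size=n)

    model = GraphicalLassoCV().fit(X)
    precision = model.precision_  # estimated sparse inverse covariance
    # Off-diagonal nonzeros indicate conditional dependencies: candidate edges.
    adjacency = (np.abs(precision) > 0.05) & ~np.eye(d, dtype=bool)
    ```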

    1 Angel Ln · London

    12 comments
  • Understanding & Generalising the Convolution and Scalable Bayesian Inference
    VERY IMPORTANT INFORMATION: The venue has changed to 1 Angel Ln, London EC4R 3AB. We have a great new venue (very close to the old one). Please note that photo ID will be required.

    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 20:00: Break & networking
    - 20:15: Second talk
    - 21:30: Close

    Sponsors:
    - Man AHL: At Man AHL, we mix machine learning, computer science and engineering with terabytes of data to invest billions of dollars every day.
    - Evolution AI: Build a state-of-the-art NLP pipeline in seconds.

    * Understanding and Generalising the Convolution (Daniel Worrall)

    Abstract: Classifying cat vs. dog images should not be affected by translation, rotation or scaling of the animal, but off-the-shelf convolutional neural networks (CNNs) can only deal with translation. To solve the problem of rotation and scaling invariance, we first need to ask: why can CNNs cope with translation in the first place? The answer is that they use convolutions. Just what is it about convolution that makes CNNs so useful for dealing with translation? And how can we leverage these insights to build tomorrow’s state-of-the-art models for other transformations? It turns out that convolution and its generalisations form the unique class of operations that must be included in any model when the task is invariant or symmetric under transformations of the data. In this talk, I will introduce some light theory and insights into these questions, some of the models my lab and I have developed, and potential avenues for future research.

    Bio: Daniel Worrall is a postdoctoral researcher working with Prof. Dr. Max Welling at the University of Amsterdam, in the Philips Laboratory. He is interested in equivariant neural networks, approximate Bayesian inference, uncertainty quantification, and medical imaging. He read Information Engineering at the University of Cambridge (BA, MEng) and Computer Vision at University College London (PhD), where he was briefly involved with Amnesty International’s Decoders Unit working on AI for Good.

    * Scalable Bayesian Inference with Hamiltonian Monte Carlo (Michael Betancourt)

    Abstract: Despite the promise of big data, inferences are often limited not by sample size but by systematic effects. Only by carefully modeling these effects can we take full advantage of the data -- big data must be complemented with big models and the algorithms that can fit them. One such algorithm is Hamiltonian Monte Carlo, which exploits the inherent geometry of the posterior distribution to admit full Bayesian inference that scales to the complex models of practical interest. In this talk, I will discuss the conceptual foundations of Hamiltonian Monte Carlo, elucidating the geometric nature of its scalable performance and stressing the properties critical to a robust implementation.

    Bio: Michael Betancourt is the principal research scientist at Symplectomorphic, LLC, where he develops theoretical and methodological tools to support practical Bayesian inference. He is also a core developer of Stan, where he implements and tests these tools. In addition to hosting tutorials and workshops on Bayesian inference with Stan, he collaborates on analyses in epidemiology, pharmacology, and physics, amongst other fields. Before moving into statistics, Michael earned a B.S. from the California Institute of Technology and a Ph.D. from the Massachusetts Institute of Technology, both in physics.
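    For readers new to Hamiltonian Monte Carlo: the sampler augments the state with a momentum variable, simulates Hamiltonian dynamics with a leapfrog integrator, and corrects discretisation error with a Metropolis test. A bare-bones sketch with arbitrary step size and trajectory length (robust implementations such as Stan's adapt both):

    ```python
    import numpy as np

    def hmc_step(x, logp, grad, eps=0.1, n_leapfrog=20, rng=np.random.default_rng()):
        p = rng.normal(size=x.shape)               # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new = p_new + 0.5 * eps * grad(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + eps * p_new
            p_new = p_new + eps * grad(x_new)
        x_new = x_new + eps * p_new
        p_new = p_new + 0.5 * eps * grad(x_new)
        # Metropolis accept/reject restores exactness.
        h_old = -logp(x) + 0.5 * p @ p
        h_new = -logp(x_new) + 0.5 * p_new @ p_new
        return x_new if rng.random() < np.exp(h_old - h_new) else x

    # Example: sample a standard 2-D Gaussian.
    logp = lambda x: -0.5 * x @ x
    grad = lambda x: -x                            # gradient of the log density
    x = np.zeros(2)
    samples = [x := hmc_step(x, logp, grad) for _ in range(1000)]
    ```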

    1 Angel Ln · London

    18 comments
  • Deep Image Prior and Generating Humans
    VERY IMPORTANT INFORMATION: The venue has changed to 1 Angel Ln, London EC4R 3AB. We have a great new venue (very close to the old one). Please note that photo ID will be required.

    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 20:00: Break & networking
    - 20:15: Second talk
    - 21:30: Close

    * Deep Image Prior (Dmitry Ulyanov)

    Abstract: Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this work, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. To do so, we show that a randomly initialised neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash/no-flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks, and learning-free methods based on handcrafted image priors such as self-similarity.

    Bio: Dmitry Ulyanov completed his degree in Machine Learning at Moscow State University and is now a PhD student at the Skolkovo Institute of Science and Technology (Skoltech). His supervisors are Victor Lempitsky and Andrea Vedaldi, and his work mostly focuses on image synthesis and generative models. Dmitry also serves as a teaching assistant for the Deep Learning class at Skoltech and at Yandex's School of Data Analysis. He has worked at Yandex and interned at Google. Dmitry is a prize winner in more than 10 data science contests and runs a Coursera class on competitive data science.

    * Generative Adversarial Networks (GANs) for Synthesizing Realistic Visual Humans (Stefanos P. Zafeiriou)

    Abstract: GANs have recently been proposed as a very promising direction for developing algorithms that learn to generate data. The learning paradigm is very different from the conventional generative methodologies used in machine learning and statistics over the past 50 years, since it applies ideas from game theory. In this talk, I will present recent work on GAN architectures and, in particular, show how GANs can be used to create realistic synthetic visual human samples in 2D/3D. I will comment on the application of such GANs in security, VR & AR, and computer games.

    Bio: Stefanos P. Zafeiriou is currently a Reader in Machine Learning and Computer Vision with the Department of Computing, Imperial College London, UK, a Distinguished Research Fellow with the University of Oulu under the Finnish Distinguished Professor Programme, and CTO of the Imperial College startup FaceSoft. He was a recipient of a prestigious Junior Research Fellowship from Imperial College London in 2011 to start his own independent research group, and received the President's Medal for Excellence in Research Supervision in 2016. He has received various awards during his doctoral and post-doctoral studies. He currently serves as an Associate Editor of the IEEE Transactions on Affective Computing and of the Computer Vision and Image Understanding journal. In the past he held editorship positions at IEEE Transactions on Cybernetics and the Image and Vision Computing journal.
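    The deep image prior recipe itself is short enough to sketch: fix a random input code, randomly initialise a generator, and fit its weights to the single corrupted image, stopping early so the network reproduces structure before it reproduces noise. A toy PyTorch version (the tiny network and placeholder image are our simplifications; the paper uses an encoder-decoder with skip connections):

    ```python
    import torch
    import torch.nn as nn

    # Stand-in generator; any convolutional architecture imposes the prior.
    net = nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
    )
    z = torch.randn(1, 32, 64, 64)        # fixed random input code
    noisy = torch.rand(1, 3, 64, 64)      # placeholder corrupted observation
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    for step in range(500):               # early stopping is the regulariser
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()
        loss.backward()
        opt.step()

    restored = net(z).detach()            # network output ~ restored image
    ```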

    1 Angel Ln · London

    7 comments
  • Graphcore and Symbolic Representation Learning
    VERY IMPORTANT INFORMATION: The venue has changed to 1 Angel Ln, London EC4R 3AB. We have a great new venue (very close to the old one). Please note that photo ID will be required.

    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 20:00: Break & networking
    - 20:15: Second talk
    - 21:30: Close

    * More Complex Models and More Powerful Machines (Simon Knowles, Graphcore)

    Abstract: Machine intelligence (MI) is a new class of computing workload of sufficient importance to rethink computing from top to bottom. This is the beginning of the real age of massively parallel computing, combined with a new dominant data class -- high-dimensional probability distributions. This talk will discuss how physics constrains silicon processor evolution as performance demand explodes, and what is needed in terms of abstraction to make programming tractable.

    Bio: Simon is co-founder, CTO and EVP Engineering of Graphcore, a British start-up developing processors from the ground up for machine intelligence. Graphcore is Simon's third processor start-up, preceded by Element14 (acquired by Broadcom in 2000) and Icera (acquired by Nvidia in 2011). Before that, Simon was responsible for processor design at STMicroelectronics, via the acquisition of Inmos. Many of Graphcore's engineers share this heritage of developing new types of processors and software tools for newly emergent workloads.

    * Symbolic Representation Learning (Marta Garnelo Abellanas, Imperial College)

    Abstract: A remarkable property of deep learning algorithms is their ability to learn useful task-specific representations directly from data, without the need for hand-crafted feature engineering. As they have grown in popularity over the past decade, deep neural networks (NNs) have been successfully applied to a wide range of machine learning tasks, achieving state-of-the-art results across many research areas. However, as the complexity of research problems increases, some of the limitations of NNs become increasingly clear: NNs suffer from interpretability issues and from poor generalisation that leads to very data-hungry algorithms, and they are hard to combine with older, well-established AI algorithms. Some of the research tackling these drawbacks takes inspiration from symbolic AI; it focusses, for example, on obtaining interpretable representations from NNs or on thinking about objects and relations when building network architectures. This talk reviews symbolic approaches and properties that are worth keeping in mind for current representation learning, and surveys current research that merges deep and symbolic methods.

    Bio: Marta is a research scientist at DeepMind working on generative models and reinforcement learning. She is currently also in denial about having entered the last stage of her PhD at Imperial College London under the supervision of Prof Murray Shanahan. Her main research focus over the past years has been addressing drawbacks of current deep learning models, such as their data inefficiency.
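    One concrete instance of "thinking about objects and relations when building network architectures", as the second abstract puts it, is the Relation Network of Santoro et al. (2017), which applies a shared MLP to every pair of object embeddings and sums the results. A minimal sketch (our own illustration, not code from the talk):

    ```python
    import torch
    import torch.nn as nn

    class RelationModule(nn.Module):
        # Scores all ordered pairs of objects with a shared MLP g and sums
        # them, giving a permutation-invariant relational representation.
        def __init__(self, obj_dim, hidden=64, out_dim=32):
            super().__init__()
            self.g = nn.Sequential(
                nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )

        def forward(self, objs):          # objs: (batch, n_objects, obj_dim)
            b, n, d = objs.shape
            left = objs.unsqueeze(2).expand(b, n, n, d)
            right = objs.unsqueeze(1).expand(b, n, n, d)
            pairs = torch.cat([left, right], dim=-1).reshape(b, n * n, 2 * d)
            return self.g(pairs).sum(dim=1)

    out = RelationModule(obj_dim=16)(torch.randn(4, 6, 16))  # -> (4, 32)
    ```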

    1 Angel Ln · London

    8 comments
  • Neural Program Interpreters and Data-Efficient Reinforcement Learning
    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 20:00: Break & networking
    - 20:15: Second talk
    - 21:30: Close

    * Reading and Reasoning with Neural Program Interpreters (Sebastian Riedel)

    Abstract: We are getting better at teaching end-to-end neural models how to answer questions about content in natural language text. However, progress has mostly been restricted to extracting answers that are directly stated in the text. In this talk, I will present our work towards teaching machines not only to read, but also to reason with what was read, and to do so in an interpretable and controlled fashion. Our main hypothesis is that this can be achieved by developing neural abstract machines that follow the blueprint of program interpreters for real-world programming languages. We test this idea using two languages: an imperative one (Forth) and a declarative one (Prolog/Datalog). In both cases, we implement differentiable interpreters that can be used for learning reasoning patterns. Crucially, because they are based on interpretable host languages, the interpreters also allow users to easily inject prior knowledge and inspect the learnt patterns. Moreover, on tasks such as math word problems and relational reasoning, our approach compares favourably to state-of-the-art methods.

    Bio: Sebastian Riedel is a Reader in Natural Language Processing and Machine Learning at University College London (UCL), where he leads the Machine Reading lab. He is also head of research at Bloomsbury AI and an Allen Distinguished Investigator. He works at the intersection of natural language processing and machine learning, focusing on teaching machines how to read and reason. He was educated in Hamburg-Harburg (Dipl.-Ing.) and Edinburgh (MSc, PhD), and worked at the University of Massachusetts Amherst and the University of Tokyo before joining UCL.

    * Data-Efficient Reinforcement Learning (Haitham Bou-Ammar)

    Abstract: Though successful in numerous applications, current reinforcement learning techniques suffer from high computational and sample complexities. This limits their application in real-world scenarios, where environmental interactions are expensive. At PROWLER.io, we are developing next-generation reinforcement learning algorithms that are efficient, scalable, and robust. To do so, we draw upon a variety of methodologies from different fields, including probabilistic modelling, game theory, and optimisation. In this talk, I demonstrate how reinforcement learning can be made much more data-efficient. As an example, I present a result for learning to control Montezuma's Revenge on the order of thousands – not millions – of interactions with the environment.

    Bio: Dr Haitham Bou-Ammar leads the reinforcement learning group at PROWLER.io, located in Cambridge, United Kingdom. Prior to joining PROWLER.io, Haitham was a professor at the Department of Computer Science at the American University of Beirut (AUB), Lebanon. Before AUB, Dr Bou-Ammar was a postdoctoral researcher at the Department of Operations Research and Financial Engineering at Princeton University, and prior to Princeton he was a postdoctoral research associate at the Department of Computer and Information Science at the University of Pennsylvania and a member of the General Robotics, Automation, Sensing, and Perception (GRASP) lab. Dr Bou-Ammar's primary research interests lie in statistical machine learning and artificial intelligence, focusing on reinforcement learning, lifelong learning, multitask learning, and knowledge transfer. He is also interested in learning from massive amounts of data over extended time horizons (i.e., big-data problems). His interests extend to analysing behavioural emergence in large-scale networks, and his research spans different areas of control theory, where he contributes to designing algorithms at the intersection of control theory and machine learning.
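    To see why learned models buy data efficiency, the general theme of the second talk: once a dynamics model is fitted from a handful of transitions, candidate actions can be evaluated against the model instead of the real environment. A toy sketch with a Gaussian process dynamics model on an invented 1-D control problem (everything here is our own assumption, not PROWLER.io's algorithms):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)

    def env_step(s, a):                  # hidden toy dynamics
        return 0.9 * s + 0.1 * a + 0.01 * rng.normal()

    def reward(s):
        return -s ** 2                   # drive the state towards zero

    # Collect only 30 real transitions, then fit the dynamics model.
    S, S_next, s = [], [], 1.0
    for _ in range(30):
        a = rng.uniform(-1, 1)
        s2 = env_step(s, a)
        S.append([s, a]); S_next.append(s2)
        s = s2
    model = GaussianProcessRegressor().fit(np.array(S), np.array(S_next))

    def act(s, candidates=np.linspace(-1, 1, 21)):
        # One-step lookahead using the model, costing zero real interactions.
        queries = np.column_stack([np.full_like(candidates, s), candidates])
        return candidates[np.argmax(reward(model.predict(queries)))]
    ```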

    AHL Riverbank House, 2 Swan Lane, EC4R 3AD · London

    12 comments
  • Robot Supervision and Real-Time 3D Scene Perception
    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 20:00: Break & networking
    - 20:15: Second talk
    - 21:30: Close

    * Saving Time: Increasing Human Efficiency for Robot Supervision (Markus Wulfmeier)

    Abstract: Automation and applications of robots in various fields bear the promise of reducing expenses as well as time requirements for production, logistics, transportation, and other sectors. The first step towards automation consisted of writing down our own rules and intuitions about how machines should solve tasks: programming. Machine learning (a.k.a. programming 2.0, or differentiable programming) enables us to generate rules that are too complex to formulate manually, by training highly flexible models on large datasets. Our efforts have thus shifted from rule design to the collection, cleaning, and annotation of data. To overcome the increasing time demands of larger and larger datasets, we rely on methods from fields such as transfer learning, domain adaptation, learning from demonstration and reinforcement learning. In this talk, I will summarise some of our recent work from the Oxford Robotics Institute (University of Oxford) and the Berkeley AI Research lab (UC Berkeley), aiming to conceptualise the current challenges, as well as the potential for increasing the efficiency of humans in order to increase the efficiency of robotic automation.

    Bio: Markus is a postdoctoral research scientist at the Oxford Robotics Institute and a member of Oxford University’s New College. Most recently, he was a visiting scholar with the UC Berkeley Artificial Intelligence Research lab. The principal focus of his research is the development of approaches for increasing the efficiency of processes for providing supervision to guide autonomous systems, with particular emphasis on transfer learning and learning from demonstration; this work was awarded Best Student Paper at IROS 2016. Furthermore, in early 2016 he led ORI's path-planning software development for the presentation of a self-driving prototype at the Shell Eco-marathon (SEM). This work paved the way for the introduction of a new autonomous challenge category at the SEM, scheduled for 2018. In the field of robotics since 2010, he has been part of research efforts on space exploration robots, GPU-based simulations and robotic platforms for first responders, as well as mobile autonomy, at various research institutions including MIT, ETHZ and the University of Oxford.

    * Real-Time 3D Scene Perception Using Vision (Andrew Davison)

    Abstract: Research in robotics and computer vision is leading us towards the generic real-time 3D scene understanding that will enable the next generation of smart robots and mobile devices. SLAM (Simultaneous Localisation and Mapping) is the problem of jointly estimating a robot's motion and the shape of the environment it moves through, and cameras of various types are now the main outward-looking sensors used to achieve this. While early visual SLAM systems concentrated on real-time localisation as their main output, the latest ones are capable of dense and detailed 3D reconstruction and, increasingly, semantic labelling and object awareness. Andy will describe and connect the research that he and others have conducted in this field over recent years, with examples from some of the key breakthrough systems.

    Bio: Since 1994, Andy has worked almost continuously on SLAM using vision, with a particular emphasis on methods that work in real time with commodity cameras. His background includes many world firsts, including, in 2003, the first real-time single-camera SLAM system (MonoSLAM), which is widely acknowledged to be one of the key prototypes for recent commercial projects and products in low-cost mobile robotics (e.g. Dyson) and mobile phone/tablet/wearable 3D localisation and sensing (e.g. Google Project Tango). In 2016 he was behind the first 3D event-camera-based SLAM system.
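    The estimation core that visual SLAM builds on can be seen in a one-dimensional Kalman filter fusing odometry with a landmark range measurement; full SLAM additionally estimates the landmark positions jointly with the pose. A deliberately minimal sketch with invented numbers (our own example, unrelated to MonoSLAM's actual implementation):

    ```python
    # 1-D robot pose x with variance P; known landmark position (an
    # assumption -- in full SLAM the landmark would be estimated too).
    landmark = 5.0
    Q, R = 0.1, 0.5            # motion and measurement noise variances
    x, P = 0.0, 1.0

    def predict(x, P, u):      # odometry reports a displacement u
        return x + u, P + Q

    def update(x, P, z):       # z: measured distance to the landmark
        innovation = z - (landmark - x)
        H = -1.0               # derivative of (landmark - x) w.r.t. x
        S = H * P * H + R
        K = P * H / S
        return x + K * innovation, (1 - K * H) * P

    x, P = predict(x, P, u=1.0)
    x, P = update(x, P, z=3.8)  # variance P shrinks after the measurement
    ```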

    AHL Riverbank House, 2 Swan Lane, EC4R 3AD · London

    7 comments
  • Beyond Recognition and AI Frontiers
    Agenda:
    - 18:30: Doors open, pizza, beer, networking
    - 19:00: First talk
    - 20:00: Break & networking
    - 20:15: Second talk
    - 21:30: Close

    * Is Recognition Enough to Learn How to See? (Alex Kendall)

    Abstract: Computer vision has provided a challenging setting for machine learning research, and problems like ImageNet recognition have driven advances in deep learning over the last few years. However, in this talk I am going to argue that learning to see requires much more than the recognition that ImageNet classification models provide. For example, mobile robots such as autonomous vehicles need to know the geometry of the scene around them and its motion, and to predict the trajectories of other agents. I will explain how to formulate deep learning models that understand scene geometry and semantics, using examples from my research. In addition, I will discuss recent advances in Bayesian deep learning, which allow us to quantify our models' uncertainty and to learn to see in data-efficient ways.

    Bio: Alex Kendall co-founded Wayve Technologies, where he is CTO. He also holds a Research Fellowship at Trinity College, University of Cambridge. He graduated with a Bachelor of Engineering from the University of Auckland, New Zealand, and in 2014 was awarded a Woolf Fisher Scholarship to study towards a PhD at the University of Cambridge. Alex’s research investigates applications of deep learning to robot perception and control. His technology has been used to power smart-city infrastructure with Vivacity, control self-driving cars with the Toyota Research Institute, and enable next-generation drone flight with Skydio.

    * (Some) Artificial Intelligence Frontiers (Oriol Vinyals)

    Abstract: In this talk I'll describe three selected challenges that are actively researched in our community, yet remain elusive in supervised, unsupervised, and reinforcement learning respectively. In supervised learning, learning new concepts quickly is still far from solved, although research directions such as meta-learning have recently brought us some exciting advances. Despite impressive samples from unsupervised learning, meaningful metrics are still needed in order to assess the progress and utility of generative models. And lastly, RL has proved successful in, for example, beating the world champion of Go, but finding or building environments that are realistic and meaningful remains challenging.

    Bio: Oriol Vinyals is a Senior Staff Research Scientist at Google DeepMind, working in deep learning. Prior to joining DeepMind, Oriol was part of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley, and is a recipient of the 2016 MIT TR35 innovator award. His research has been featured multiple times in the New York Times, the BBC, and elsewhere, and his articles have been cited over 15,000 times. At DeepMind he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, deep learning and reinforcement learning.
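    One widely used Bayesian deep learning tool of the kind the first talk alludes to for quantifying uncertainty is Monte Carlo dropout: keep dropout active at test time and read the spread of repeated stochastic forward passes as an estimate of model uncertainty. A minimal generic sketch (an assumed example, not the speaker's code):

    ```python
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.5),
                        nn.Linear(64, 1))
    x = torch.randn(16, 8)

    net.train()                # keeps Dropout stochastic at test time
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(50)])

    mean = samples.mean(0)     # predictive mean
    std = samples.std(0)       # spread ~ epistemic uncertainty
    ```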

    AHL Riverbank House, 2 Swan Lane, EC4R 3AD · London

    13 comments