
79 people went



*** IF YOU ARE NO LONGER COMING, PLEASE CHANGE YOUR RSVP ***

After a summer recess, we're back with the 3rd edition of the deep tech botsBerlin meetup. Here we discuss the challenges of building bots and assistants, and get into the details of the code and the mathematics. A big thank you to Google for hosting us. NB: you must RSVP to attend; for security reasons, only registered attendees will be allowed in.


Doors open at 7pm. We will have two talks with time for questions. As always, the majority of the time is dedicated to discussion.


What do neural networks learn about language?
Neural networks have redefined the state-of-the-art in many areas of natural language processing. Much of this success is attributed to their ability to learn representations of their input, and this has invited assumptions that these representations encode important semantic, syntactic, and morphological properties of language. However, these assumptions have not been tested empirically. I’ll discuss our attempts to do so, focusing on two questions: what do character-level models learn about morphology? And what do LSTMs learn about negation?

This is work with Clara Vania, Federico Fancellu, Yova Kementchedjhieva, Andreas Grivas, and Bonnie Webber.

Adam Lopez is a Reader (Associate Professor) in the School of Informatics at the University of Edinburgh. His research group develops computational models of natural language learning, understanding, and generation in people and machines, focusing on the basic scientific, mathematical, and engineering problems underlying these models.

* Speaker 2 TBC