
What we’re about
Love talking about papers? So do we!
Do you have a paper within the realm of computing that excites you — recent or classic — and want to share it with others? Or would you enjoy hearing accessible, enthusiastic explanations of important research?
Whether you have implemented the ideas, used them in a project, or simply want to learn and discuss, this is a welcoming, inclusive space for presenters and listeners alike: we celebrate diverse perspectives and encourage practical demos as well as honest accounts of what did not work. Everyone is invited: students, researchers, engineers and curious minds.
Logistics — we meet monthly in Zürich, usually on a Thursday, 18:15–20:00; RSVP on Meetup.
Subjects — papers live within the broad realms of computing and computer science, kept intentionally open-ended.
Audience — ideal for anyone who wants accessible explanations of complex computer-science papers, where the maths is typically simplified.
Culture — inclusive, respectful and welcoming to diverse perspectives.
Presentation format — talks are typically 45–60 minutes, followed by discussion, Q&A and networking.
We curate this repository for papers presented at PWL Zürich. You can contribute by opening pull requests with papers, code and/or links to our repository here. We also keep a list of papers that we would like to talk about.
We follow the Papers We Love Code of Conduct.
More details can be found on the event page.
Upcoming events
Automatic Differentiation in Machine Learning and AI
ETH Zurich, CAB G 56, Universitätstrasse 6, 8006, Zurich, CH

After a long hiatus, we are proud to revive Papers We Love, Zürich. Papers We Love is a community series where people present and discuss influential computing research in an informal, welcoming setting.

In our first session back, Abhiroop will discuss an elegant programming-language and scientific-computing technique closely related to the training of modern machine learning models.

What if we could take a program and, without changing a single line, make it compute its own derivative? That is the magic of Automatic Differentiation, a generalisation of backpropagation, which underpins much of modern machine learning.

Why care? Automatic Differentiation gives you fast, exact gradients for optimisation: it is the practical engine behind training neural networks, differentiable programming, scientific simulation optimisation and many recent ML advances.

Abhiroop will present the paper "Automatic Differentiation in Machine Learning: A Survey". Automatic Differentiation (AD for short) is a well-established technique for computing derivatives of numeric functions expressed as programs. Backpropagation is the same idea, specialised to neural networks, and is used to compute gradients of the loss when training the vast majority of ML models.

The talk will be 45–60 minutes, followed by discussion, Q&A and (maybe) snacks.

The talk will highlight the elegance of AD while remaining accessible to a broad audience, tracing its roots from early programming languages and scientific computing work to modern machine learning. We look forward to a lively discussion on its applications across domains!
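To give a flavour of the idea ahead of the talk, here is a minimal Python sketch of the forward-mode flavour of AD using dual numbers. The Dual class and derivative helper are illustrative names chosen for this example, not code from the survey paper or the talk.

    # Minimal sketch of forward-mode automatic differentiation with dual numbers.
    # Illustrative example only; names (Dual, derivative) are not from the paper.

    class Dual:
        """A value paired with its derivative (a 'dual number')."""
        def __init__(self, value, deriv=0.0):
            self.value = value
            self.deriv = deriv

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Sum rule: (u + v)' = u' + v'
            return Dual(self.value + other.value, self.deriv + other.deriv)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Product rule: (u * v)' = u'v + uv'
            return Dual(self.value * other.value,
                        self.deriv * other.value + self.value * other.deriv)

        __rmul__ = __mul__


    def derivative(f, x):
        """Evaluate f and its exact derivative at x in a single forward pass."""
        result = f(Dual(x, 1.0))
        return result.value, result.deriv


    if __name__ == "__main__":
        # f(x) = 3x^2 + 2x, so f'(x) = 6x + 2 and f'(4) = 26 -- exact, not a finite difference.
        f = lambda x: 3 * x * x + 2 * x
        print(derivative(f, 4.0))  # (56.0, 26.0)

The unmodified expression 3 * x * x + 2 * x computes its own derivative simply by being evaluated on Dual inputs; reverse-mode AD (backpropagation) applies the same chain-rule bookkeeping in the opposite direction, which is what makes it efficient for functions with many inputs such as neural-network losses.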