
What we’re about

Love talking about papers? So do we!

Do you have a paper within the realm of computing that excites you — recent or classic — and want to share it with others? Or would you enjoy hearing accessible, enthusiastic explanations of important research?

Whether you have implemented the ideas, used them in a project, or simply want to learn and discuss, this is a welcoming, inclusive space for presenters and listeners alike: we celebrate diverse perspectives and encourage practical demos as well as honest accounts of what didn't work. Everyone — students, researchers, engineers and curious minds — is invited.

Logistics — we meet monthly in Zürich, usually on a Thursday, 18:15–20:00; RSVP on Meetup.
Subjects — papers live within the broad realms of computing and computer science, kept intentionally open-ended.
Audience — ideal for anyone who wants accessible explanations of complex computer-science papers, where the maths is typically simplified.
Culture — inclusive, respectful and welcoming to diverse perspectives.
Presentation format — talks are typically 45–60 minutes, followed by discussion, Q&A and networking.

We curate this repository for papers presented at PWL Zürich. You can contribute by opening pull requests for papers, code, and/or links to our repository here. We also keep a list of papers that we would like to talk about.
We follow the Papers We Love Code of Conduct.
More details can be found on the event page.

Automatic Differentiation in ML and AI: A Programmer's Perspective


ETH Zurich, CAB G 56, Universitätstrasse 6, 8006, Zurich, CH

After a long hiatus, we are proud to revive Papers We Love Zürich — a community series where people present and discuss influential computing research in an informal, welcoming setting.

In our first session back, Abhiroop will present “Automatic Differentiation in Machine Learning: A Survey”. Automatic Differentiation (AD) — the generalisation of backpropagation — computes fast, exact derivatives of numeric functions expressed as programs. Backpropagation is the same idea specialised to neural networks; it is how the vast majority of ML models compute gradients of the loss during training.
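
To make this concrete, here is a minimal, illustrative sketch (a toy example of our own, not code from the survey) of forward-mode AD using dual numbers: each value carries its derivative alongside it, so running an ordinary program yields an exact derivative rather than a finite-difference estimate.

    # Illustrative sketch only: a toy forward-mode AD built on dual numbers.
    from dataclasses import dataclass

    @dataclass
    class Dual:
        val: float   # value of the expression
        der: float   # derivative with respect to the chosen input

        def __add__(self, other):
            return Dual(self.val + other.val, self.der + other.der)

        def __mul__(self, other):
            # product rule: (uv)' = u'v + uv'
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

    def f(x):
        return x * x + x      # f(x) = x^2 + x, written as an ordinary program

    print(f(Dual(3.0, 1.0)))  # Dual(val=12.0, der=7.0): f(3) = 12, f'(3) = 7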

Why care?
For ML enthusiasts: Automatic Differentiation is the practical engine behind most modern ML frameworks. It underpins libraries such as PyTorch (torch.autograd), TensorFlow (tf.GradientTape) and JAX (grad), making gradient-based training and advanced optimisation feasible; a short sketch of the JAX flavour follows below.

For programmers: a code-friendly explanation of how the training phase of machine learning actually computes gradients.

The talk will be 45–60 minutes, followed by discussion, Q&A and snacks.

The talk will highlight the elegance of AD while remaining accessible to a broad audience; little to no prior background in calculus, machine learning or programming languages is required. We look forward to a lively discussion on its applications across domains!
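
As a taste of the library support mentioned above, here is a minimal sketch (with a toy quadratic loss of our own choosing) of how jax.grad turns an ordinary numeric Python function into a function that returns its gradient:

    # Minimal sketch, assuming a toy loss; the values are illustrative only.
    import jax
    import jax.numpy as jnp

    def loss(w):
        # toy scalar loss: sum((2w - 1)^2)
        return jnp.sum((2.0 * w - 1.0) ** 2)

    grad_loss = jax.grad(loss)      # reverse-mode AD, the same idea as backprop

    w = jnp.array([0.0, 1.0, 2.0])
    print(loss(w))                  # 11.0
    print(grad_loss(w))             # [-4.  4. 12.], i.e. 4*(2w - 1) elementwise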

31 attendees
