
Details

After a long hiatus, we are proud to revive Papers We Love, Zurich — a community series where people present and discuss influential computing research in an informal, welcoming setting.

In our first session back, Abhiroop will present “Automatic Differentiation in Machine Learning: A Survey”. Automatic Differentiation (AD) — the generalisation of backpropagation — computes fast, exact derivatives of numeric functions expressed as programs. Backpropagation is the same idea specialised to neural networks: it is how the vast majority of ML models compute the gradient of the loss during training.
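
As a small taste of the subject (not material from the paper itself), here is a minimal sketch of forward-mode AD using dual numbers; all names in it are made up for illustration. Reverse mode, the variant that corresponds to backpropagation, works through the same chain of operations in the opposite direction.

```python
# Minimal forward-mode AD sketch: a dual number carries a value together
# with its derivative, and arithmetic propagates both at once.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        # Product rule: (u * v)' = u' * v + u * v'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def f(x):
    return x * x * x + 2 * x        # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2


result = f(Dual(3.0, 1.0))          # seed the derivative dx/dx = 1
print(result.value, result.deriv)   # 33.0 29.0 -- exact, not a finite difference
```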

Why care?
For ML enthusiasts: Automatic Differentiation is the practical engine behind most modern ML frameworks. It underpins libraries such as PyTorch (torch.autograd), TensorFlow (tf.GradientTape) and JAX (jax.grad), making gradient-based training and advanced optimisation feasible.

For programmers: a code-friendly explanation of how the training phase of machine learning actually computes gradients (a tiny framework sketch follows below).
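
For a flavour of what those libraries offer, here is a tiny sketch using jax.grad (assuming JAX is installed; the toy loss function is invented for this example):

```python
# Tiny illustration: jax.grad builds the exact derivative of a Python function.
import jax

def loss(w):
    return (w * 3.0 - 1.0) ** 2   # a toy scalar "loss"

dloss = jax.grad(loss)            # AD produces a new function computing d(loss)/dw
print(dloss(2.0))                 # 2 * (2*3 - 1) * 3 = 30.0
```
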
The talk will be 45–60 minutes, followed by discussion, Q&A and snacks.

The talk will highlight the elegance of AD while remaining accessible to a broad audience; LITTLE TO NO prior background in calculus, machine learning or programming languages is required. We look forward to a lively discussion on its applications across domains!

Events in Zurich, CH
Artificial Intelligence
Programming Languages
Python
Scientific Computing
Applied Math
