Details

After a long hiatus, we are proud to revive Papers We Love, Zürich. Papers We Love is a community series where people present and discuss influential computing research in an informal, welcoming setting.

In our first session back, Abhiroop will discuss an elegant programming language and scientific computing technique closely related to the training of modern machine learning models.

What if we could take a program and, without changing a single line, make it compute its own derivative? That’s the magic of Automatic Differentiation, a generalisation of the backpropagation algorithm that underpins much of modern machine learning.

Why care? Automatic Differentiation gives you fast, exact gradients for optimisation: it is the practical engine behind training neural networks, differentiable programming, optimisation of scientific simulations, and many recent ML advances.

Abhiroop will present the paper “Automatic Differentiation in Machine Learning: A Survey”. Automatic Differentiation, or AD for short, is a well-established technique for computing derivatives of numeric functions expressed as programs. Backpropagation is the same idea specialised to neural networks: it computes gradients of the loss during training of the vast majority of ML models.
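
To give a flavour of the idea ahead of the talk, here is a minimal, hand-rolled sketch of forward-mode AD using dual numbers in Python. This is an illustration only, not the paper’s notation or a production library: each value carries its derivative alongside it, and the chain rule propagates automatically through ordinary arithmetic.

```python
# Minimal forward-mode AD with dual numbers (illustrative sketch only).
from dataclasses import dataclass
import math


@dataclass
class Dual:
    """A value paired with its derivative with respect to one chosen input."""
    val: float  # primal value
    der: float  # derivative (tangent)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # Product rule: (u * v)' = u' * v + u * v'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


def sin(x: Dual) -> Dual:
    # Chain rule: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)


def f(x):
    # An ordinary numeric program: f(x) = x * sin(x) + x
    return x * sin(x) + x


# Seed the input with derivative 1.0 to obtain df/dx alongside f(x).
x = Dual(2.0, 1.0)
y = f(x)
print(y.val)  # f(2.0)
print(y.der)  # f'(2.0) = sin(2) + 2*cos(2) + 1, exact up to floating-point precision
```

Unlike numerical differentiation, there is no step size and no truncation error; unlike symbolic differentiation, there is no expression swell, since the derivative is computed alongside the program’s normal execution.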

The talk will be 45–60 minutes, followed by discussion, Q&A and (maybe) snacks.

The talk will highlight the elegance of AD while remaining accessible to a broad audience, tracing its roots from early programming languages and scientific computing work to modern machine learning. We look forward to a lively discussion on its applications across domains!

Artificial Intelligence
Programming Languages
Python
Scientific Computing
Applied Math