A case study in the power of abstraction.
Differentiation of a function `f: ℝ -> ℝ` is inherently a numerical process, and as such is subject to all of the limitations of numerical algorithms, e.g. potentially slow convergence and round-off error. Engineers and mathematicians have known since the seventeenth century that abstracting the notion of a function `f: ℝ -> ℝ` via algebraic expressions exposes an "algebra of differentiation": rules that replace the costly and inaccurate process of numerical differentiation with simple symbol replacement that can be done by hand. The difficulty of that replacement is bounded by the complexity of the initial algebraic expression for `f`, and its result is exact.
So is the computer useless? Automatic Differentiation leverages the algebraic abstraction to overcome the limitations of numerical algorithms. We encode, in a mere 42 lines of Haskell, the algebra of differentiation, resulting in a library that produces exact numerical results (up to machine precision) in CPU time bounded by the complexity of `f`. We discuss extending the library to include symbolic results as well (and I'll implement it if I have time before the talk).
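To give a flavor of the idea, here is a minimal sketch of forward-mode automatic differentiation via dual numbers; the names `D` and `diff` are my own and need not match the library discussed in the talk:

```haskell
-- Forward-mode automatic differentiation via dual numbers.
-- A value `D x x'` pairs a point with the derivative at that point.
data D = D Double Double deriving Show

-- The "algebra of differentiation" lives in the Num instance:
-- each arithmetic rule carries its derivative rule along with it.
instance Num D where
  D x x' + D y y' = D (x + y) (x' + y')
  D x x' * D y y' = D (x * y) (x' * y + x * y')  -- product rule
  negate (D x x') = D (negate x) (negate x')
  fromInteger n   = D (fromInteger n) 0          -- constants have derivative 0
  abs (D x x')    = D (abs x) (x' * signum x)
  signum (D x _)  = D (signum x) 0

-- Differentiate a numeric function at a point by seeding the
-- derivative slot with 1 and reading off the result.
diff :: (D -> D) -> Double -> Double
diff f x = let D _ x' = f (D x 1) in x'

main :: IO ()
main = print (diff (\x -> x * x + 3 * x + 1) 2)  -- f'(x) = 2x + 3, so prints 7.0
```

Because `diff` works for any function written against `Num`, ordinary polynomial code can be differentiated without change, and the answer is exact up to machine precision.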
As a pièce de résistance, we lift the abstraction of differentiation as an algebraic system (rather than a numerical process) to the Haskell type system, definitively putting the "Algebraic" in "Algebraic Data Types". We learn how to find the derivatives of Haskell data types, and we discuss reasons why this unholy mess might actually be a useful concept.
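As a hint of where that leads: the derivative of the list type is (up to isomorphism) a pair of lists, i.e. a one-hole context, better known as a list zipper. A sketch, with hypothetical names of my choosing:

```haskell
-- The derivative of List a with respect to a is a one-hole context:
-- the elements before the hole, the focused element, and the elements after.
data ListZipper a = ListZipper [a] a [a] deriving (Eq, Show)

-- Move the hole one element to the right, if possible.
right :: ListZipper a -> Maybe (ListZipper a)
right (ListZipper ls x (r:rs)) = Just (ListZipper (x:ls) r rs)
right _                        = Nothing

-- Fill the hole back in to recover the original list.
plug :: ListZipper a -> [a]
plug (ListZipper ls x rs) = reverse ls ++ (x : rs)
```

Zippers like this give constant-time local editing of a structure, which is one practical payoff of taking "derivative of a data type" seriously.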
Inspired by and incorporating material from Justin Domke's, Benjamin Kovach's, and Chris Taylor's blogs.