- Paper 57: Shor's Algorithm
We'll read and discuss the following paper:

Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
Peter W. Shor
https://arxiv.org/pdf/quant-ph/9508027.pdf

Abstract: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.

** Please note the change of venue. **
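The quantum speedup lives entirely in period finding; the surrounding reduction from factoring to order finding is classical number theory. A minimal Python sketch of that classical scaffolding, with the order computed by brute force where Shor's quantum Fourier sampling would go:

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n, found by brute force.
    (This is the step Shor's quantum period finding replaces.)"""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical pre/post-processing of Shor's algorithm for one base a."""
    g = gcd(a, n)
    if g > 1:
        return g  # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None  # unlucky base, retry with another a
    # r even and a^(r/2) != -1 mod n, so gcd(a^(r/2) - 1, n) is a factor
    return gcd(pow(a, r // 2) - 1, n)

print(shor_classical(15, 7))  # order of 7 mod 15 is 4; gcd(7**2 - 1, 15) = 3
```

The randomized part of the algorithm is the choice of base `a`; a bad base simply triggers a retry.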
- Paper 56: Idioms are Oblivious, Arrows are Meticulous, Monads are Promiscuous
Rongmin Lu will take us through the following paper:

Idioms are Oblivious, Arrows are Meticulous, Monads are Promiscuous
Sam Lindley, Philip Wadler, Jeremy Yallop

Abstract: We revisit the connection between three notions of computation: Moggi's monads, Hughes's arrows and McBride and Paterson's idioms (also called applicative functors). We show that idioms are equivalent to arrows that satisfy the type isomorphism A ↝ B ≃ 1 → (A ↝ B) and that monads are equivalent to arrows that satisfy the type isomorphism A ↝ B ≃ A → (1 ↝ B). Further, idioms embed into arrows and arrows embed into monads.
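One way to see the static-versus-dynamic distinction behind these isomorphisms, sketched in Python with a bare-bones Maybe: applicative `ap` combines two computations fixed up front, while monadic `bind` lets the second step inspect the first value. The tagged-tuple encoding is illustrative, not from the paper.

```python
# Maybe, encoded as ("Just", x) or ("Nothing",)
def pure(x):
    return ("Just", x)

def ap(mf, mx):
    """Applicative: combine two *independent* computations."""
    if mf[0] == "Nothing" or mx[0] == "Nothing":
        return ("Nothing",)
    return ("Just", mf[1](mx[1]))

def bind(mx, f):
    """Monad: the next computation f may inspect the value of mx."""
    return ("Nothing",) if mx[0] == "Nothing" else f(mx[1])

# Applicative: both sides are declared before anything runs
add = lambda a: lambda b: a + b
r1 = ap(ap(pure(add), pure(2)), pure(3))

# Monad: the second step is *chosen from* the first value,
# which ap alone cannot express
safe_recip = lambda x: ("Nothing",) if x == 0 else pure(1 / x)
r2 = bind(pure(4), safe_recip)
r3 = bind(pure(0), safe_recip)
print(r1, r2, r3)  # ('Just', 5) ('Just', 0.25) ('Nothing',)
```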
- Paper 55: Binders unbound
Roger Qiu will take us through the following paper:

Binders Unbound
Stephanie Weirich, Brent A. Yorgey, Tim Sheard
https://www.seas.upenn.edu/~sweirich/papers/icfp11.pdf

Abstract: Implementors of compilers, program refactorers, theorem provers, proof checkers, and other systems that manipulate syntax know that dealing with name binding is difficult to do well. Operations such as α-equivalence and capture-avoiding substitution seem simple, yet subtle bugs often go undetected. Furthermore, their implementations are tedious, requiring "boilerplate" code that must be updated whenever the object language definition changes. Many researchers have therefore sought to specify binding syntax declaratively, so that tools can correctly handle the details behind the scenes. This idea has been the inspiration for many new systems (such as Beluga, Delphin, FreshML, FreshOCaml, Cαml, FreshLib, and Ott) but there is still room for improvement in expressivity, simplicity and convenience. In this paper, we present a new domain-specific language, UNBOUND, for specifying binding structure. Our language is particularly expressive: it supports multiple atom types, pattern binders, type annotations, recursive binders, and nested binding (necessary for telescopes, a feature found in dependently-typed languages). However, our specification language is also simple, consisting of just five basic combinators. We provide a formal semantics for this language derived from a locally nameless representation and prove that it satisfies a number of desirable properties. We also present an implementation of our binding specification language as a GHC Haskell library implementing an embedded domain specific language (EDSL). By using Haskell type constructors to represent binding combinators, we implement the EDSL succinctly using datatype-generic programming. Our implementation supports a number of features necessary for practical programming, including flexibility in the treatment of user-defined types, best-effort name preservation (for error messages), and integration with Haskell's monad transformer library.
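The "subtle bugs" the abstract mentions are easy to reproduce: a naive substitution will capture variables under a binder. A small Python sketch of capture-avoiding substitution on raw lambda terms, i.e. the kind of boilerplate Unbound is designed to generate for you (the term encoding is mine, not the paper's):

```python
import itertools

# Lambda terms: ("var", x) | ("app", t, u) | ("lam", x, body)
fresh = (f"_x{i}" for i in itertools.count())

def free_vars(t):
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "app":
        return free_vars(t[1]) | free_vars(t[2])
    x, body = t[1], t[2]
    return free_vars(body) - {x}

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s]."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:
        return t  # x is shadowed, nothing to do
    if y in free_vars(s):  # naive substitution would capture: rename first
        z = next(fresh)
        body = subst(body, y, ("var", z))
        y = z
    return ("lam", y, subst(body, x, s))

# (\y. x)[x := y] must NOT become \y. y
t = subst(("lam", "y", ("var", "x")), "x", ("var", "y"))
print(t)  # the binder is renamed, e.g. ("lam", "_x0", ("var", "y"))
```

Even this toy version needs a fresh-name supply and a renaming pass; multiplied across every operation in a compiler, that is the boilerplate being automated.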
- Paper 54: Generative Adversarial Nets
Ishaan Varshney will take us through the following paper:

Generative Adversarial Nets
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf

Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
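The paper derives the optimal discriminator for a fixed generator in closed form, D*(x) = p_data(x) / (p_data(x) + p_g(x)), which is where the "D equal to 1/2 everywhere" claim comes from: at the game's unique solution p_g = p_data. A toy Python check on discrete densities:

```python
def optimal_discriminator(p_data, p_g):
    """For a fixed generator, the discriminator maximizing the GAN
    objective is D*(x) = p_data(x) / (p_data(x) + p_g(x))."""
    return lambda x: p_data(x) / (p_data(x) + p_g(x))

# Two toy densities on the points {0, 1, 2}
p_data = {0: 0.5, 1: 0.3, 2: 0.2}.get
p_g    = {0: 0.2, 1: 0.3, 2: 0.5}.get

D = optimal_discriminator(p_data, p_g)
print(D(1))  # 0.5: where the generator matches the data, D is at chance
print(D(0))  # ~0.714: real data is over-represented here, so D leans "real"

# When the generator exactly matches the data, D* is 1/2 everywhere
D_eq = optimal_discriminator(p_data, p_data)
print([D_eq(x) for x in (0, 1, 2)])  # [0.5, 0.5, 0.5]
```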
- Paper 53: Generalising Monads to Arrows
Paweł Sączawa will take us through the following paper:

Generalising Monads to Arrows
John Hughes
Preprint: http://www.cse.chalmers.se/~rjmh/Papers/arrows.pdf
Published version: https://www.sciencedirect.com/science/article/pii/S0167642399000234

Abstract: Monads have become very popular for structuring functional programs since Wadler introduced their use in 1990. In particular, libraries of combinators are often based on a monadic type. Such libraries share (in part) a common interface, from which numerous benefits flow, such as the possibility to write generic code which works together with any library. But, several interesting and useful libraries are fundamentally incompatible with the monadic interface. In this paper I propose a generalisation of monads, which I call arrows, with significantly wider applicability. The paper shows how many of the techniques of monadic programming generalise to the new setting, and gives examples to show that the greater generality is useful. In particular, three non-monadic libraries for efficient parsing, building graphical user interfaces, and programming active web pages fit naturally into the new framework.
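For the simplest arrow instance, ordinary functions, Hughes's core combinators collapse to a few lines. A Python transliteration of `arr`, `(>>>)` and `first` for the function arrow (names spelled out, since Python has no operator sections; real arrow instances such as parsers also carry static information alongside the function):

```python
def arr(f):
    """Lift a pure function into the arrow."""
    return f

def compose(f, g):
    """The (>>>) combinator: run f, then g."""
    return lambda x: g(f(x))

def first(f):
    """Run f on the first component of a pair; pass the second through.
    This is what lets arrows thread extra state past a computation."""
    return lambda pair: (f(pair[0]), pair[1])

# Wire two steps in sequence while threading a side value past the first
pipeline = compose(first(arr(lambda x: x + 1)),
                   arr(lambda p: p[0] * p[1]))
print(pipeline((3, 10)))  # (3 + 1) * 10 = 40
```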
- Paper 52: Capturing the Future by Replaying the Past
Anund will be taking us through the following paper:

Capturing the Future by Replaying the Past
James Koppel, Gabriel Scherer, Armando Solar-Lezama
https://arxiv.org/abs/1710.10385

Abstract: Delimited continuations are the mother of all monads! So goes the slogan inspired by Filinski's 1994 paper, which showed that delimited continuations can implement any monadic effect, letting the programmer use an effect as easily as if it was built into the language. It's a shame that not many languages have delimited continuations. Luckily, exceptions and state are also the mother of all monads! In this Pearl, we show how to implement delimited continuations in terms of exceptions and state, a construction we call thermometer continuations. While traditional implementations of delimited continuations require some way of "capturing" an intermediate state of the computation, the insight of thermometer continuations is to reach this intermediate state by replaying the entire computation from the start, guiding it using a recording so that the same thing happens until the captured point. Along the way, we explain delimited continuations and monadic reflection, show how the Filinski construction lets thermometer continuations express any monadic effect, share an elegant special-case for nondeterminism, and discuss why our construction is not prevented by theoretical results that exceptions and state cannot macro-express continuations.
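The nondeterminism special case mentioned in the abstract can be sketched compactly: `choose` records each decision, and the whole body is replayed from the start once per unexplored branch. A Python toy version (the paper's general construction also uses exceptions; this nondeterminism sketch needs only state and replay):

```python
def run_nondet(body):
    """Enumerate every result of body(choose) by replaying it from
    the start, once per execution path (replay-based nondeterminism)."""
    results = []
    paths = [[]]  # recorded choice prefixes still to explore
    while paths:
        prefix = paths.pop()
        trace = []  # choices made on this run

        def choose(options):
            # While replaying, follow the recording; past the recorded
            # prefix, take branch 0 and schedule the siblings.
            i = prefix[len(trace)] if len(trace) < len(prefix) else 0
            trace.append(i)
            if len(trace) > len(prefix):
                for j in range(1, len(options)):
                    paths.append(trace[:-1] + [j])
            return options[i]

        results.append(body(choose))
    return results

# All pairs drawn from two three-way choices: nine full replays
pairs = run_nondet(lambda choose: (choose([1, 2, 3]), choose([1, 2, 3])))
print(sorted(pairs))  # [(1, 1), (1, 2), ..., (3, 3)]
```

The cost of avoiding real continuation capture is visible here: the body runs in full once per path, which is exactly the trade-off the paper's title advertises.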
- Paper 51: Selective functors
Mark Hopkins will take us through the following paper:

Selective Applicative Functors
Andrey Mokhov, Georgy Lukyanov, Simon Marlow, Jeremie Dimino
https://www.staff.ncl.ac.uk/andrey.mokhov/selective-functors.pdf

Abstract: Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically. This paper introduces an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two real-life case studies.
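The new combinator has the shape select :: f (Either a b) -> f (a -> b) -> f b. A Python toy model, in which each effect carries a static name and a dynamic thunk, shows the key property: both effects are visible to static analysis up front, but the handler only runs when the scrutinee comes back Left. The `Eff` encoding is mine, not the paper's.

```python
class Eff:
    """An effect description: a static name plus a dynamic thunk."""
    def __init__(self, name, thunk):
        self.names = [name]   # static: always visible without running anything
        self.thunk = thunk    # dynamic: may or may not be forced

def select(scrutinee, handler):
    """select :: f (Either a b) -> f (a -> b) -> f b"""
    out = Eff("", None)
    out.names = scrutinee.names + handler.names  # both declared statically

    def run():
        tag, v = scrutinee.thunk()
        if tag == "left":            # only now do we run the handler
            return handler.thunk()(v)
        return v                     # "right": handler's effect is skipped
    out.thunk = run
    return out

executed = []
cond = Eff("read-flag", lambda: ("right", 42))
fallback = Eff("query-db",
               lambda: executed.append("query-db") or (lambda v: -v))

r = select(cond, fallback)
print(r.names)    # ['read-flag', 'query-db']: both declared up front
print(r.thunk())  # 42
print(executed)   # []: 'query-db' never actually ran
```

This split is what makes selective functors useful for build systems and speculative execution: you can enumerate all possible effects before running any of them.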
- Paper 50: Demystifying Differentiable Programming
Rongmin Lu will be taking us through the following paper:

Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator
Fei Wang, Xilun Wu, Gregory Essertel, James Decker, Tiark Rompf
https://arxiv.org/abs/1803.10228

Abstract: Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests in crucial ways on gradient-descent optimization and the ability to learn parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to a highly efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch).

There is an implementation of this in Scala, called Lantern, available from Wang's GitHub: https://github.com/feiwang3311/Lantern

A short summary is in a 5-page announcement paper by Wang and Rompf at ICLR 2018, the 6th International Conference on Learning Representations: https://openreview.net/forum?id=SJxJtYkPG (click the "PDF" icon at the top to download the paper).
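For orientation, plain operator-overloading reverse-mode AD, the baseline the paper starts from before adding delimited continuations and staging, fits in a few lines of Python. This sketch records an explicit parent graph and walks it backwards, whereas the paper's point is that shift/reset lets you avoid such an auxiliary structure:

```python
class Num:
    """A number that records how it was computed, for backpropagation."""
    def __init__(self, val):
        self.val, self.grad, self._parents = val, 0.0, []

    def __add__(self, other):
        out = Num(self.val + other.val)
        out._parents = [(self, 1.0), (other, 1.0)]  # d(a+b)/da = d(a+b)/db = 1
        return out

    def __mul__(self, other):
        out = Num(self.val * other.val)
        out._parents = [(self, other.val), (other, self.val)]  # product rule
        return out

def backprop(out):
    """Propagate d(out)/d(node) back through the recorded graph."""
    order, seen = [], set()
    def visit(n):  # reverse topological order, so grads are complete when used
        if id(n) not in seen:
            seen.add(id(n))
            for p, _ in n._parents:
                visit(p)
            order.append(n)
    visit(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node._parents:
            parent.grad += local * node.grad

x, y = Num(3.0), Num(4.0)
z = x * y + x * x          # z = xy + x^2
backprop(z)
print(z.val, x.grad, y.grad)  # 21.0, dz/dx = y + 2x = 10.0, dz/dy = x = 3.0
```

The continuation-based formulation in the paper computes the same gradients, but threads the backward pass through the control flow of the forward pass itself.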
- Paper 49: Revisiting the Futamura Projections: A Visual Tutorial
Mark Hopkins will take us through the following paper:

Revisiting the Futamura Projections: A Visual Tutorial
Brandon M. Williams and Saverio Perugini
https://arxiv.org/abs/1611.09906

Abstract: Partial evaluation, while powerful, is not widely studied or used by the pragmatic programmer. To address this, we revisit the Futamura Projections from a visual perspective by introducing a diagramming scheme that helps the reader navigate the complexity and abstract nature of the Futamura Projections while emphasizing their recurring patterns. We anticipate that this approach will improve the accessibility of the Futamura Projections to a general computing audience.

This very readable blog post by Dan Piponi, "The Three Projections of Doctor Futamura", may be a useful accompaniment: http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html

Anyone who's hungry for more content may enjoy "Is There a Fourth Futamura Projection?" by Robert Glück: https://dl.acm.org/citation.cfm?id=1480954 (requires ACM Digital Library access).
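The first Futamura projection says that specializing an interpreter with respect to a fixed program yields a compiled version of that program. A Python sketch with a toy accumulator language, using closures as a stand-in for a real partial evaluator:

```python
def interpret(program, x):
    """A tiny interpreter: each op transforms an accumulator."""
    for op, arg in program:
        if op == "add":
            x = x + arg
        elif op == "mul":
            x = x * arg
    return x

def specialize(program):
    """mix(interpreter, program): fold the program into a closure so the
    dispatch on `op` happens once, at specialization time, not per run.
    This closure is the 'target program' of the first projection."""
    steps = []
    for op, arg in program:
        if op == "add":
            steps.append(lambda x, a=arg: x + a)
        elif op == "mul":
            steps.append(lambda x, a=arg: x * a)
    def compiled(x):
        for step in steps:
            x = step(x)
        return x
    return compiled

prog = [("add", 3), ("mul", 2)]  # x -> (x + 3) * 2
target = specialize(prog)        # the "compiled" program
print(interpret(prog, 5), target(5))  # both print 16
```

The second and third projections then specialize the specializer itself, yielding a compiler and a compiler generator; the diagrams in the paper make those two steps much easier to follow than the prose.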