Lambda Gets Fancy: Polymorphism and Recursive Types

  • Dec 16, 2013 · 7:00 PM
  • Mixrank

Tikhon Jelvis continues his exposition of the lambda calculus and its variants, looking into more sophisticated type systems than the simply typed lambda calculus we saw last time.

Note the move from Thursday night to Monday night. We're hoping Monday won't conflict with as many interesting meetups as Thursday seems to, but it's hard to guarantee a completely event-free evening in the Bay Area.

  • Sergei W.

    So after applying the CH correspondence, we get a bunch of propositions that are _all_ true. We can surely prove implications, A->(B->A) and so on, but all such implications are trivially true since _every_ proposition is true. We can "prove" any implication, (...) -> (...) between any propositions, since we can always just write the corresponding lambda-expression. So it seems we have a construction that says: "lambda-calculus expressions correspond to implications between some identically true propositions." What exactly is the CH correspondence useful for?
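
    In Haskell notation, such a proof term is immediate; for example, A -> (B -> A) is proved by the K combinator (a minimal sketch, names mine):

    -- Under Curry-Howard, this term is a proof of A -> (B -> A):
    k :: a -> b -> a
    k x _ = x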

    December 21, 2013

    • Tikhon J.

      As for category theory, I think it makes sense as a topic for a few reasons. It's certainly relevant to programming languages and PL theory; it's much easier to introduce before talking about polymorphism and it's a very natural extension of Curry-Howard (cf Curry-Howard-Lambek). Moreover, I think it gives us some interesting insights when we go to discuss things like subtyping and variance.

      December 24, 2013

    • Sergei W.

I have updated my notes on lambda calculus... Feel free to comment. Also I'm looking forward to understanding recursive types... https://docs.google.co...

      January 12, 2014

  • Sergei W.

Since I missed the last meetup, Curry-Howard remains mysterious to me. Here is my question. According to CH, we have a proposition A for every type t, and the proposition says that the type t is inhabited. The simply typed lambda calculus allows us to compute only with inhabited types, and computations correspond to proofs that some propositions are true. Now, so far so good, but I don't see how CH can be used in practice. What is our benefit? _Every_ type in STLC is inhabited because we can always write a lambda-expression for it. Take unit -> (unit -> unit), or whatever else: we can always write a lambda-expression of that type.
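
    A minimal sketch of that last example in Haskell notation, where the unit type is written ():

    -- The example type above is indeed inhabited:
    ex :: () -> (() -> ())
    ex _ = \_ -> ()

    -- But with abstract base types, inhabitation is no longer trivial:
    -- there is no total term of type a -> b for unrelated a and b, so
    -- under Curry-Howard that proposition has no proof.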

    December 21, 2013

  • Jack F.

Good review of the Curry-Howard correspondence. Maybe we will get to polymorphism next time?

    December 19, 2013

    • Adrian K.

      Actually, Tikhon, if you can do as good a job at demystifying category theory for me as you did with the Curry-Howard correspondence, I'll be both impressed and eternally grateful.

      December 20, 2013

    • Jack F.

      Ha, ha! Actually I think if you stick to 2 particles and approach it from an information theoretic perspective, QM is much easier to comprehend.

      December 20, 2013

  • Jack F.

    Tikhon, thanks for a great talk. Looking forward to the next one (and hope I can attend).

    December 19, 2013

  • Tikhon J.

    I put the slides for Curry-Howard up online at http://jelv.is/talks/curry-howard.html (or .pdf).

    I also added some slides about what we covered at the end: CPS and double negation translation, including some of the comments other people made. I also added a couple of slides showing how CPS and double negation are related which might be interesting.
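
    One way to see the connection at the level of Haskell types (a sketch with my own names, not the slides' notation):

    import Data.Void (Void)

    type Not a = a -> Void        -- negation under Curry-Howard
    type CPS r a = (a -> r) -> r  -- the usual continuation type

    -- Not (Not a) unfolds to (a -> Void) -> Void, i.e. CPS Void a:
    -- double negation is CPS with answer type Void.

    -- The CPS "return" is also the proof of A -> not (not A):
    intro :: a -> CPS r a
    intro x k = k x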

    December 17, 2013

  • Sergei W.

    Can I see the talk slides? It's too bad I had to miss this time.

    December 17, 2013

  • priya

Seem to think there was a cell # to call on late arrivals? At the door

    December 16, 2013

  • Byron H.

    Meetings over the Winter Holidays, or between the end of June and Labor Day, often have less attendance than hoped for. Still, you have good attendance, and great topics! I'm trying to follow online. Happy Holidays!

    December 16, 2013

  • Byron H.

    Looks like I'll miss again. sigh

    December 16, 2013

  • Sergei W.

    @Juan

    Initially I learned lambda-calculus from these nice lectures by John Harrison, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.7072&rep=rep1&type=pdf

He explains very briefly how the predecessor operation works, on page 31. In any case, this is a very nontrivial trick that, to me, gives some insight into why computations are in general undecidable: you just write some lambdas and, by looking at them, there is no way you can figure out what exactly they do.

    December 16, 2013

  • Richard M.

    It looks interesting, but I have a conflicting appointment. ...RM

    December 15, 2013

  • Juan P.

A bit of an insight into the operation of predecessor. First, it helps to write H with a parameter:

    H(z) = (λh.h z)

Then P restated using H:

    P = λg.H(g f)

    And now a derivation for what P does when applied to X and H:

    P X = (λg.H(g f)) X = H(X f) = H(x)
P H(x) = (λg.H(g f)) H(x) = H(H(x) f) = H((λh.h x) f) = H(f x)

So here's the fundamental trick for why predecessor works: applying P to X n times results in f applied to x one time fewer:

    P^n X = H(f^(n-1) x)
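
    The trick can be checked mechanically. A sketch in Haskell (RankNTypes is needed for the polymorphic numerals; predC and toInt are names I made up):

    {-# LANGUAGE RankNTypes #-}

    type Church = forall a. (a -> a) -> a -> a

    zero, three :: Church
    zero  _ x = x
    three f x = f (f (f x))

    -- P, X, I from above, inlined: pred n = λf.λx.n P X I
    predC :: Church -> Church
    predC n = \f x -> n (\g h -> h (g f)) (\_ -> x) (\u -> u)

    -- Convert back to Int to observe results: toInt (predC three) == 2
    toInt :: Church -> Int
    toInt n = n (+ 1) 0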

    December 15, 2013

  • Juan P.

    Derivations:

    predecessor(0) = (λn.λf.λx.n P X I) 0
    = λf.λx.(0 P X) I
    = λf.λx.X I
    = λf.λx.x
    = 0

    predecessor(1) = (λn.λf.λx.n P X I) 1
    = λf.λx.1 P X I
    = λf.λx.(P X) I
    = λf.λx.((λg.λh.h (g f)) X) I
    = λf.λx.(λh.h (X f)) I
    = λf.λx.(λh.h x) I
    = λf.λx.x
    = 0

    predecessor(2) = (λn.λf.λx.n P X I) 2
    = λf.λx.2 P X I
    = λf.λx.(P(P X)) I
    = λf.λx.(P ((λg.λh.h (g f)) X) ) I
    = λf.λx.(P (λh.h (X f)) ) I
    = λf.λx.(P (λh.h x) ) I

Abbreviating H = (λh.h x):

    = λf.λx.( P H ) I
    = λf.λx.( (λg.λh.h (g f)) H ) I
    = λf.λx.( (λh.h (H f)) ) I
    = λf.λx.( (λh.h ((λh.h x) f)) ) I
    = λf.λx.( (λh.h (f x)) ) I
    = λf.λx.(f x)
    = 1

I'm still puzzling over how to explain in simple terms why this works.

    December 15, 2013

  • Juan P.

I thought I’d share some derivations I did while trying to understand the operation of predecessor (untyped lambda calculus). I'm starting with these definitions from Wikipedia:

0 = λf.λx.x
1 = λf.λx.(f x)
2 = λf.λx.(f (f x))
predecessor = λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)

    Simplified a bit (my notation):

    I = λu.u — Identity function
    X = λu.x — Constant function
    P = λg.λh.h (g f) — the main logic

predecessor = λn.λf.λx.n P X I

    December 15, 2013

  • Juan P.

A fun, interesting historical talk about the lambda calculus: http://www.youtube.com/watch?v=SphBW9ILVPU

    December 14, 2013

  • Sergei W.

    Sorry, won't be able to attend - going to Munich (Germany) until end of year. Looking forward to the slides though!

    December 12, 2013

  • Tikhon J.

After thinking about it a bit, I believe there are actually a bunch of interesting things to talk about before getting to polymorphism: more on Curry-Howard, more about algebraic data types, recursive types, types for things like exceptions and continuations, and a bit of category theory. We could also look at how to implement a simple type checker (not that different from the simple interpreter from before!) and maybe even type inference. When you put the three pieces (interpreter, type checker, and type inference) together, you get a surprisingly simple core for a functional programming language. This will also help in the future when we want to implement polymorphism and then dependent types (à la "Simply Easy").

    This will also give everyone a chance to better internalize types and inference rules, which is very valuable on its own.
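
    As a taste of how small that core can be, here is a sketch of an STLC type checker in Haskell (illustrative names, not code from the meetup):

    data Type = TUnit | Type :-> Type
      deriving (Eq, Show)

    infixr 9 :->

    data Term
      = Var String
      | Lam String Type Term   -- \x : t . e
      | App Term Term
      | Unit
      deriving Show

    type Ctx = [(String, Type)]

    -- Each case is the corresponding typing rule, read bottom-up.
    typeOf :: Ctx -> Term -> Maybe Type
    typeOf ctx (Var x)     = lookup x ctx
    typeOf _   Unit        = Just TUnit
    typeOf ctx (Lam x t e) = (t :->) <$> typeOf ((x, t) : ctx) e
    typeOf ctx (App f e)   = do
      t1 :-> t2 <- typeOf ctx f        -- f must have a function type
      t1' <- typeOf ctx e
      if t1 == t1' then Just t2 else Nothing

    -- typeOf [] (Lam "x" TUnit (Var "x")) == Just (TUnit :-> TUnit)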

    1 · December 10, 2013

    • Christopher B.

      I like this idea.

      December 10, 2013

    • Adrian K.

      All excellent topics. And after all, it's not the destination, it's the journey.

      December 10, 2013

  • Christopher B.

    For those who like actual hardcopy books, Adam Chlipala's Certified Programming with Dependent Types is now available for preorder on Amazon:

    http://www.amazon.com/gp/product/0262026651/ref=oh_details_o00_s00_i00?ie=UTF8&psc=1

    December 9, 2013

    • Christopher B.

      Most Coq users are focused on theorem proving. The unique approach to dependently typed programming that Adam has pioneered still involves nontrivial proof, but as he writes in the introduction, "The crucial angle in this book is that there are 'design patterns' for reliably avoiding the really grungy parts of theorem proving, and consistent use of these patterns can get you over the hump to the point where it is worth your while to use Coq to prove your theorems and certify your programs, even if formal verification is not your main concern in a project. We will follow this theme by pursuing two main methods for replacing manual proofs with more understandable artifacts: dependently typed functions and custom Ltac decision procedures." (Ltac is a functional language for proof automation built into Coq. The "you" in the quote refers mainly to PL researchers.)

      December 9, 2013

    • Adrian K.

      Sounds as if there's plenty of fun in store!

      December 9, 2013

  • Christopher B.

More background for possible future session(s):

    -- Andrej Bauer has an OCaml implementation of dependent types, with a tutorial in three parts:
    http://math.andrej.com/2012/11/08/how-to-implement-dependent-type-theory-i/
    http://math.andrej.com/2012/11/11/how-to-implement-dependent-type-theory-ii/
    http://math.andrej.com/2012/11/29/how-to-implement-dependent-type-theory-iii/

    -- Andres Loh, Conor McBride, and Wouter Swierstra have a tutorial implementation of a dependently typed lambda calculus in Haskell:
    http://www.andres-loeh.de/LambdaPi/

    December 9, 2013

24 went
