Past Meetup

Gershom Bazerman on "Homological Computations for Term Rewriting Systems" & Mini


170 people went

Two Sigma

101 Ave. of the Americas, 23rd Fl. J · New York

How to find us

Cross Streets: Watts and Grand. Note: Please make sure you’re signed up for the meetup, including your first and last name. Without this info you won’t be allowed into the building by security.


Details

We're happy to be hosting Gershom Bazerman (http://gbaz.github.io), software developer at S&P/Capital IQ, author of various Haskell libraries, and organizer/founder of the NY Haskell Users Group (https://www.meetup.com/NY-Haskell/), who'll be presenting on Homological Computations for Term Rewriting Systems (http://math.univ-lyon1.fr/~malbos/Art/hcTRS.pdf) by Philippe Malbos and Samuel Mimram.

In addition to Gershom's talk, Stephen Tu (https://people.eecs.berkeley.edu/~stephentu/) will be opening the event with a lightning talk on Least Squares Policy Iteration (https://users.cs.duke.edu/~parr/jmlr03.pdf) by Michail G. Lagoudakis and Ronald Parr.

Talks

• Gershom Bazerman on Homological Computations for Term Rewriting Systems:

In 1987, C. Squier wrote "Word problems and a homological finiteness condition for monoids (http://www.sciencedirect.com/science/article/pii/0022404987901290)," which proved a fascinating result that spawned an entire field, but which is little known outside of it. The great mathematical popularizer and category theorist John Baez (https://en.wikipedia.org/wiki/John_C._Baez) sketched the ideas in 1995 (http://math.ucr.edu/home/baez/week70.html). We consider "word problems," which ask the equality of two terms modulo a set of equivalences, restrict ourselves to simple objects called "monoids" that many functional programmers are fond of, and ask about the decidability of equality over them. This is the same as looking at strings and asking when they are equal if you consider mappings that equate some contiguous sequences with other contiguous sequences. (Such questions arise ubiquitously in interesting computational problems -- consider for example the equivalence of sequences of patches, or of edit actions across a distributed system.) The way computer scientists would think to answer this is to see if you can rewrite both sides of the equation into a single canonical form that you can compare for equality. Indeed, that's what Don Knuth and Peter Bendix did, and the result is the Knuth-Bendix algorithm, used in theorem provers and many other applications.
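To make the rewriting idea concrete, here is a minimal sketch (a toy illustration, not taken from Squier's paper or the talk) of applying oriented rules to words until no rule matches. The single rule "ba" -> "ab" presents the free commutative monoid on two generators, and two words are equal in that monoid exactly when they reach the same normal form:

```python
# Toy string rewriting: repeatedly apply oriented rules until a normal form
# is reached. Assumes the rule system is terminating and confluent, so the
# normal form decides the word problem.

def rewrite_once(word, rules):
    """Apply the first rule that matches anywhere in the word, or return None."""
    for lhs, rhs in rules:
        i = word.find(lhs)
        if i != -1:
            return word[:i] + rhs + word[i + len(lhs):]
    return None

def normal_form(word, rules):
    """Rewrite to a fixed point; terminates only if the system is terminating."""
    step = rewrite_once(word, rules)
    while step is not None:
        word = step
        step = rewrite_once(word, rules)
    return word

# Presentation of the free commutative monoid on {a, b}: orient "ba" -> "ab".
rules = [("ba", "ab")]
print(normal_form("babba", rules))                                   # "aabbb"
print(normal_form("abbab", rules) == normal_form("babba", rules))    # True: equal in the monoid
```

Knuth-Bendix completion is about producing such a terminating, confluent rule set from an arbitrary presentation; Squier's result is about when no finite such set can exist at all.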

But just how universal is the Knuth-Bendix algorithm? Well, Squier showed that there are finitely presented monoids with decidable word problems that admit no finite canonical rewriting system of the sort Knuth-Bendix gives us. And furthermore, he showed that this result derives from considering our systems using the tools of modern algebraic topology! In particular, he showed how to calculate a homology of a monoid presentation.

Ever since then, people have been seeking to generalize Squier's result in new and exciting ways. One of the niftiest and newest is this talk's paper (http://math.univ-lyon1.fr/~malbos/Art/hcTRS.pdf), presented last year at FSCD (http://fscd2016.dcc.fc.up.pt/programme/acceptedPapers/), which I love, but do not claim to fully understand. Instead of a monoid, we consider an arbitrary "algebraic theory" (say, a syntax tree of a programming language with some equalities between certain forms of trees). And we now ask not about the word problem, but just about the minimum number of equalities necessary to present such a theory. The answer, which can be computed with an algorithm, comes from an even more general notion of homology. The purpose of this talk is to make the above understandable to a lay audience, to sketch some idea of how to think about things that arise in computer science topologically, and to provide an invitation to basic notions of homology (https://en.wikipedia.org/wiki/Homology_(mathematics)).

• Stephen Tu on Least Squares Policy Iteration:

Policy iteration is a classic dynamic programming algorithm for solving a Markov Decision Process (MDP). In policy iteration, the algorithm alternates between two steps: 1) a policy evaluation step which, given the current policy, computes the state-action value function (commonly known as the Q-function) for the policy, and 2) a policy improvement step, which uses the Q-function to greedily improve the current policy. When the number of states and actions of the MDP is finite and small, policy iteration performs well and comes with nice theoretical guarantees. However, when the state and action spaces are large (possibly continuous), policy iteration becomes intractable, and approximate methods for solving MDPs must be used.
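As a concrete reference point, here is a minimal sketch of exact tabular policy iteration on a tiny made-up MDP (the transition matrices and rewards below are random placeholders for illustration, not from the paper):

```python
# Tabular policy iteration on a toy finite MDP.
# P[a, s, s'] is the probability of moving from s to s' under action a;
# R[s, a] is the reward for taking action a in state s.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # shape (a, s, s')
R = rng.standard_normal((n_states, n_actions))

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
    P_pi = P[policy, np.arange(n_states)]        # row s is P[policy[s], s, :]
    r_pi = R[np.arange(n_states), policy]
    v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    # Policy improvement: act greedily with respect to the Q-function.
    Q = R + gamma * np.einsum("ast,t->sa", P, v)
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print("optimal policy:", policy)
```

The exact evaluation step is the part that stops being feasible once the state space is large or continuous, which is what motivates the approximation below.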

Least Squares Policy Iteration (LSPI) is one method for approximately solving an MDP. The key idea here is to approximate the Q-function as a linear function in a lifted, higher-dimensional space, analogous to the idea of feature maps in supervised learning. Plugging this approximation into the Bellman equation gives a tractable linear system of equations to solve for the policy evaluation step. Furthermore, the policy improvement step remains the same as before.
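Concretely, the evaluation step reduces to a small least-squares style linear solve over the feature weights. The sketch below shows one standard form of that solve from sampled transitions; the feature map phi, the sample format, and the function name are hypothetical stand-ins rather than the paper's code:

```python
# Sketch of the linear solve in LSPI's policy evaluation step:
# fit weights w so that Q(s, a) ~= phi(s, a) . w, using samples (s, a, r, s').
import numpy as np

def evaluate_policy(samples, phi, policy, gamma, k):
    """Return weights w of the approximate Q-function for the given policy."""
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))   # next action chosen by the current policy
        A += np.outer(f, f - gamma * f_next)   # accumulate the linear system
        b += r * f
    return np.linalg.solve(A, b)               # in practice a ridge term or lstsq helps

# Policy improvement then stays greedy: policy(s) = argmax over a of phi(s, a) . w.
```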

In this talk, I will describe LSPI and some of its subtleties. One subtlety arises from the fact that the Bellman operator does not necessarily map our approximating function class back into itself, and hence an extra projection step is typically used: we minimize the Bellman residual after projecting back onto the function space. Furthermore, in order to build intuition for LSPI, I will also talk about what the LSPI algorithm does in the context of a well-studied continuous optimal control problem known as the Linear Quadratic Regulator (LQR).
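For reference, LQR is one of the few continuous control problems with an exact closed-form answer: iterating the discrete-time Riccati equation gives the quadratic value function and the linear optimal policy that an approximate method like LSPI is trying to recover. A minimal sketch (with hypothetical double-integrator dynamics and costs, not from the talk):

```python
# Exact LQR baseline: iterate the discrete-time Riccati equation to get the
# value function V(x) = x' P x and the optimal linear policy u = -K x.
import numpy as np

def lqr_riccati(A, B, Q, R, iters=500):
    P = np.copy(Q)
    for _ in range(iters):
        BtPA = B.T @ P @ A
        K = np.linalg.solve(R + B.T @ P @ B, BtPA)   # greedy gain for current P
        P = Q + A.T @ P @ A - BtPA.T @ K             # Riccati update
    return P, K

# Double-integrator dynamics with simple quadratic costs.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
P, K = lqr_riccati(A, B, Q, R)
print("optimal gain K:", K)
```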

Bios

Gershom Bazerman is a software developer in New York. He is an organizer of the NY Haskell Users Group (https://www.meetup.com/NY-Haskell/), the NY Homotopy (or Topos) Theory Reading Group (https://groups.google.com/forum/#!forum/hott-nyc), and the Compose conference (http://www.composeconference.org), among other things. He has written some useful and widely used Haskell libraries, occasionally contributed to others as well as to the PureScript compiler, and has helped maintain Haskell infrastructure for some time. He likes reading complicated things and trying to understand and explain them.

Stephen Tu (https://people.eecs.berkeley.edu/~stephentu/) is a PhD student in the EECS department at UC Berkeley, studying problems in the intersection of optimization, control theory, and statistics. This summer, he is interning at the Google Brain team in NYC, which has sparked his interest in reinforcement learning.

Details

Doors open at 6:30 pm; the presentations will begin right around 7:00 pm; and, yes, there will be refreshments of all kinds and pizza.

You'll have to check in with security with your name/ID. Definitely sign up if you're going to attend; unfortunately, people whose names aren't entered into the security system in advance won't be allowed in.

After Gershom's presentation, we will open up the floor to discussion and questions.

We hope that you'll read some of the papers and references before the meetup, but don't stress if you can't. If you have any questions, thoughts, or related information, please visit #pwlnyc (https://paperswelove.slack.com/messages/pwlnyc/) on Slack (http://papersweloveslack.herokuapp.com/), our GitHub repository (https://github.com/papers-we-love/papers-we-love), or add to the discussion on this event's thread.

Additionally, if you have any papers you want to add to the repository above (papers that you love!), please send us a pull request (https://github.com/papers-we-love/papers-we-love/pulls). Also, if you have any ideas/questions about this meetup or the Papers-We-Love org, just open up an issue.

------------------------------------------------------------------------------------------

Two Sigma (https://www.twosigma.com/) - Platinum Sponsor of the New York chapter

------------------------------------------------------------------------------------------