
What we’re about
Hands-on, project-oriented data science, with a heavy focus on machine learning and artificial intelligence. We're here to get neck-deep into projects and actually do awesome things!
Join our new Discord at https://discord.gg/xtFVsSZuPG, where you can:
- discuss more AI/ML papers
- suggest/plan events
- share and discuss GitHub projects
- find and post jobs on our jobs channel
- buy/sell used local GPU/server equipment
- scroll our social media aggregators for the latest AI research news across Bsky, X, Reddit, YouTube, podcasts, and more
The meetup consists of:
- recurring study groups (if you want to start one, just ask Ben to make you a meetup co-organizer)
- intermediate/advanced working groups (starting in 2019)
- occasional talks and gatherings (aiming for at least quarterly starting in 2019)
Upcoming events (4+)
Reinforcement Learning: Chapter 4 Dynamic Programming
Dynamic programming is a collection of techniques used to solve the Bellman equations for value functions in reinforcement learning. Last chapter, we introduced the value functions and their associated recursive equations. This chapter, we apply the techniques of dynamic programming to calculate solutions to these equations for arbitrary environments. Once we have these solutions, we can easily derive policies which perform optimally in any reinforcement learning environment for which we have complete information. These solution methods are versions of generalized policy iteration which combine the calculation of a value function with a step of policy improvement. The policy improvement theorem is the key idea that justifies the process used to derive optimal policies from value functions, and we prove that theorem in this chapter as well.
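As a rough, self-contained illustration of what generalized policy iteration looks like in code (not taken from the book, the slides, or the meeting notes), here is a minimal Python sketch that alternates iterative policy evaluation with greedy policy improvement on a tiny made-up two-state MDP; the states, actions, transitions, and rewards below are invented purely for the example.

# Minimal sketch of policy iteration on a small tabular MDP.
# The MDP (states, actions, transitions, rewards) is a made-up
# two-state example, not anything from Sutton and Barto.

GAMMA = 0.9
STATES = ["s0", "s1"]
ACTIONS = ["stay", "move"]

# P[(s, a)] = list of (probability, next_state, reward) tuples
P = {
    ("s0", "stay"): [(1.0, "s0", 0.0)],
    ("s0", "move"): [(1.0, "s1", 1.0)],
    ("s1", "stay"): [(1.0, "s1", 2.0)],
    ("s1", "move"): [(1.0, "s0", 0.0)],
}

def q(state, action, V):
    """One-step lookahead: expected return of taking `action` in `state`."""
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[(state, action)])

def evaluate(policy, theta=1e-8):
    """Iterative policy evaluation: sweep until the Bellman updates settle."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            v_new = q(s, policy[s], V)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V

def improve(V):
    """Greedy policy improvement with respect to V."""
    return {s: max(ACTIONS, key=lambda a: q(s, a, V)) for s in STATES}

# Generalized policy iteration: alternate evaluation and improvement
# until the policy stops changing; the policy improvement theorem
# guarantees each new policy is no worse than the previous one.
policy = {s: "stay" for s in STATES}
while True:
    V = evaluate(policy)
    new_policy = improve(V)
    if new_policy == policy:
        break
    policy = new_policy

print(policy, V)

In this sketch evaluation is run to convergence before each improvement step; truncating it after fewer sweeps still falls under the generalized policy iteration idea described above.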
As usual you can find below links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.
Useful Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Recordings of Previous Meetings
Short RL Tutorials
My exercise solutions and chapter notes
Kickoff Slides which contain other links
Video lectures from a similar course

Reinforcement Learning: Topic TBA
Typically covers chapter content from Sutton and Barto's RL book

Reinforcement Learning: Topic TBA
Typically covers chapter content from Sutton and Barto's RL book

Reinforcement Learning: Topic TBA
Typically covers chapter content from Sutton and Barto's RL book