Details

This meeting will continue to cover Chapter 6 in Multi-Agent Reinforcement Learning: Foundations and Modern Approaches, building upon the concept of value iteration for games. We will extend the approach to sampling, in which we estimate the joint-action values at each state for an agent, thus reducing each state to a nonrepeated matrix game that we can solve with equilibrium techniques such as minimax for two-player zero-sum games.
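As a rough illustration of the second step (solving the per-state matrix game), the snippet below approximates the minimax solution of a zero-sum game. Exact solutions are usually computed with a linear program; as a dependency-light sketch we use fictitious play instead, which is known to converge to the minimax value in zero-sum games. The function name and iteration count are illustrative, not from the textbook.

```python
import numpy as np

def minimax_via_fictitious_play(payoff, iters=20000):
    """Approximate the minimax (mixed-strategy) solution of a zero-sum
    matrix game via fictitious play; `payoff` is the row player's payoffs.

    This is a sketch: a linear program would give the exact solution.
    """
    n, m = payoff.shape
    row_counts = np.zeros(n)
    col_counts = np.zeros(m)
    row_counts[0] = 1.0  # arbitrary initial play for each player
    col_counts[0] = 1.0
    for _ in range(iters):
        # Each player best-responds to the opponent's empirical mixture.
        row_br = np.argmax(payoff @ col_counts)   # row player maximizes
        col_br = np.argmin(row_counts @ payoff)   # column player minimizes
        row_counts[row_br] += 1.0
        col_counts[col_br] += 1.0
    row_policy = row_counts / row_counts.sum()
    col_policy = col_counts / col_counts.sum()
    # Approximate game value under the empirical policies.
    value = row_policy @ payoff @ col_policy
    return value, row_policy, col_policy
```

For matching pennies, `np.array([[1., -1.], [-1., 1.]])`, the empirical policies approach the uniform mixture and the value approaches 0.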

An alternative approach to solving the game is simply to calculate a best-response policy on a per-agent basis, using the estimated joint-action values and a model of the other agents' current policies. We will study these approaches with the soccer game example introduced last time and compare their performance with other solutions such as value iteration.
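The best-response idea above can be sketched in a few lines: marginalize the estimated joint-action values over an empirical model of the other agent's policy, then act greedily. The function name, the add-one smoothing prior, and the two-agent shape are illustrative assumptions, not the book's exact formulation.

```python
import numpy as np

def best_response_action(joint_q, opponent_counts):
    """Greedy best response for one agent at a single state.

    joint_q[a_i, a_j]    : estimated value of joint action (a_i, a_j)
                           at this state (illustrative two-agent case).
    opponent_counts[a_j] : how often the other agent played a_j here.
    """
    # Empirical opponent policy, with add-one smoothing as a uniform prior.
    probs = (opponent_counts + 1.0) / (opponent_counts + 1.0).sum()
    # Expected value of each of our actions under the opponent model.
    action_values = joint_q @ probs
    return int(np.argmax(action_values)), action_values
```

For example, with `joint_q = [[1, 0], [0, 2]]` and an opponent observed to play its second action 10 times, the model weights the second column heavily and the best response is action 1.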

As usual, you can find below links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.

Meetup Links:
Recordings of Previous RL Meetings
Recordings of Previous MARL Meetings
Short RL Tutorials
My exercise solutions and chapter notes for Sutton-Barto
My MARL repository
Kickoff Slides which contain other links
MARL Kickoff Slides

MARL Links:
Multi-Agent Reinforcement Learning: Foundations and Modern Approaches
MARL Summer Course Videos
MARL Slides

Sutton and Barto Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Video lectures from a similar course

Topics:
AI Algorithms
Artificial Intelligence
Machine Learning
Education & Technology
Game Theory

