

This meeting will cover the first two sections of Chapter 6 in Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. The algorithms in this chapter incorporate equilibrium solution concepts from game theory into reinforcement learning. We will cover the value iteration algorithm for stochastic games, which is an exact solution technique. For cases where we do not have access to the full transition function, we will introduce joint-action learning with game theory, a Q-learning-style estimation algorithm. Both algorithms can be applied to different game models, but the most straightforward examples are two-player zero-sum games, where the minimax solution concept applies. We will discuss this case with a simple soccer-game example and then touch on how other solution concepts might apply to general-sum games with more agents.
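To make the exact-solution case concrete, here is a minimal sketch of value iteration for a two-player zero-sum stochastic game. It is not code from the book: the function names, the array layout (`R[s, a1, a2]` for player 1's reward, `T[s, a1, a2, s2]` for transitions), and the use of a linear program to get the minimax value of each stage game are my own illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_value(A):
    """Minimax value of the zero-sum matrix game A for the row (maximizing)
    player, via the standard LP: maximize v subject to x @ A[:, j] >= v for
    every column j, where x is a probability distribution over rows."""
    m, n = A.shape
    # Decision variables: [x_1, ..., x_m, v]; linprog minimizes, so use -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # One inequality per column j: v - sum_i x_i * A[i, j] <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Row probabilities sum to 1 (v does not appear in this equality).
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.ones(1)
    bounds = [(0, 1)] * m + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

def minimax_value_iteration(R, T, gamma, iters=100):
    """Value iteration for a two-player zero-sum stochastic game.

    R[s, a1, a2]     -- reward to player 1 (player 2 receives -R)
    T[s, a1, a2, s2] -- probability of moving to state s2
    Each sweep replaces V(s) with the minimax value of the stage game
    Q(s, a1, a2) = R[s, a1, a2] + gamma * sum_s2 T[s, a1, a2, s2] * V(s2).
    """
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        for s in range(R.shape[0]):
            Q = R[s] + gamma * T[s] @ V   # (a1, a2) stage-game matrix
            V[s], _ = minimax_value(Q)
    return V
```

As a sanity check, a single self-looping state with matching-pennies payoffs has minimax value 0, so `minimax_value_iteration` should converge to a value near zero there for any discount factor below 1.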

As usual, you can find below links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.

Meetup Links:
Recordings of Previous RL Meetings
Recordings of Previous MARL Meetings
Short RL Tutorials
My exercise solutions and chapter notes for Sutton-Barto
My MARL repository
Kickoff Slides which contain other links
MARL Kickoff Slides

MARL Links:
Multi-Agent Reinforcement Learning: Foundations and Modern Approaches
MARL Summer Course Videos
MARL Slides

Sutton and Barto Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Video lectures from a similar course
