
Details

This meeting will begin Chapter 9 of Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. The chapter covers the modern techniques used in MARL, but we will focus on the first part, which extends the independent learning we saw for tabular problems to the function-approximation case. These algorithms train multiple agents simultaneously, with each agent treating the problem as its own single-agent reinforcement learning task. The difficulty lies in dealing with the changing behavior of the other agents, which makes the problem non-stationary from each agent's perspective. We will try to cover through Section 9.3 and use the level-based foraging task to evaluate the multi-agent versions of DQN, REINFORCE, and actor-critic.
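To give a flavor of the independent-learning idea before the meeting, here is a minimal sketch (not code from the book) of two independent Q-learners in a tiny one-shot coordination game. Each agent updates its own Q-values as if it were alone; from either agent's point of view, the reward signal drifts as the other agent's policy changes, which is exactly the non-stationarity issue. The game, class names, and hyperparameters below are illustrative assumptions, not anything from the textbook:

```python
import random

class IndependentQAgent:
    """Runs its own Q-learning, treating the other agent as part of the environment."""
    def __init__(self, n_actions, alpha=0.1, epsilon=0.1):
        self.q = [0.0] * n_actions
        self.alpha = alpha
        self.epsilon = epsilon
        self.n_actions = n_actions

    def act(self):
        # Epsilon-greedy action selection over this agent's own Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Stateless (single-state) Q-learning update. The reward distribution
        # for a fixed action changes as the other agent learns, so the problem
        # is non-stationary from this agent's perspective.
        self.q[action] += self.alpha * (reward - self.q[action])

def payoff(a0, a1):
    # Cooperative matrix game: both agents get +1 when their actions match.
    return 1.0 if a0 == a1 else 0.0

random.seed(0)
agents = [IndependentQAgent(2), IndependentQAgent(2)]
for _ in range(2000):
    a0, a1 = agents[0].act(), agents[1].act()
    r = payoff(a0, a1)
    agents[0].update(a0, r)
    agents[1].update(a1, r)
```

The chapter's deep variants replace the Q-table (here, a two-entry list) with a neural network per agent, but the training loop keeps this same shape: each agent learns from its own experience while the others learn alongside it.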

As usual, below you can find links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.

Meetup Links:
Recordings of Previous RL Meetings
Recordings of Previous MARL Meetings
Short RL Tutorials
My exercise solutions and chapter notes for Sutton-Barto
My MARL repository
Kickoff Slides which contain other links
MARL Kickoff Slides

MARL Links:
Multi-Agent Reinforcement Learning: Foundations and Modern Approaches
MARL Summer Course Videos
MARL Slides

Sutton and Barto Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Video lectures from a similar course
