What we're about

Held as an informal online weekly session for the time being.

Hi all,

I hope you are all well and healthy.

Most of us have been affected in some way or another by the ongoing pandemic, and clearly, the need of the hour is to stay home, take care of ourselves and our loved ones, and be kind to each other. While many are fighting the virus on the frontlines, others are helping the fight in their own ways, be it through technology or in their communities.

Amidst all this, life also goes on from home, at least for those of us who are very privileged. We should do our bit to help the fight as best we can, and carry on through our likely unusual days with good cheer. On the latter point, let's have some cheerful RL sessions online every week for an hour or so, and discuss whatever RL-related papers you’ve been reading or ideas you’ve been exploring.

Thanks, and chat soon.


What is this meetup about?

We know that scaling reinforcement learning (RL) by stitching it together with deep nets works. This is exciting for anyone keen on understanding and building autonomous agents that solve hard real-world problems.

But we have only seen early results, brought about mostly by excellent engineering innovations, barring a few fundamental revisits to RL, e.g. the distributional perspective on RL. The latter has, thankfully, shown even greater promise.

We need more revisits like these. We need to take a step back and examine the building blocks of RL (deep nets, training regimens, and the nature of dynamical systems) to see how and when these complement each other. This may lead to novel learning schemes and substrates.

In so doing, we will find ways to make RL work orders of magnitude better than it does today. We want RL to be data efficient, to generalise, and to go wild.

This meetup is a grad-school-style reading group for those interested in this exercise. One volunteer leads the discussion of a recent piece of work, while the others chip in to take it apart.

The discussion is at a deep technical level. We like rich intuitive explanations, math, and code. We also like applications on occasion, but are more keen on generality and on basic methods and findings. As a thought exercise, and when relevant, we entertain applications of the discussed work. Beginners are welcome, especially students and hobbyists who want to play a meaningful role in understanding and advancing the state of the art.

Do join our Slack workspace, where we keep the discussion going and plan our future meetings:

https://join.slack.com/t/rlrg/shared_invite... (https://join.slack.com/t/rlrg/shared_invite/enQtNTM5Njc3NjIwNDY4LTNjZjA5MTZkYTY1MTFmMzliZDFlNmY0MGU1NzIwYWIzNWI5YjJhY2ZiOWUyMTMwZWU1NWNlOWVlZDg5YzRjNjY)

Past events (24)

RL as a sequence modeling problem?

Building agents for the Google Research football RL challenge

Online RL weekly — Kicking off 2021

Online RL weekly — What’s on at NeurIPS 2020?
