
AI Safety Thursdays: When Good Rewards Go Bad - Reward Overoptimization in RLHF

Hosted By
Juliana E. and 2 others

Details

Reinforcement learning from human feedback (RLHF) has become a popular way to align AI behavior with human preferences. But what happens when the system gets too good at optimizing the reward signal?

Evgenii Opryshko will guide us through how overoptimization can lead to unintended behaviors, why it happens, and what we can do about it. We'll look at examples, discuss open challenges, and consider what this means for aligning advanced AI systems.
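If the topic is new to you, here is a minimal toy sketch (not part of the event materials, and purely illustrative) of what "overoptimization" means: a learned proxy reward only approximates the true objective, so pushing optimization pressure harder can keep raising the proxy score while the true reward eventually collapses. All functions and constants below are made-up assumptions chosen to make the effect visible.

```python
def true_reward(x: float) -> float:
    # The objective we actually care about: peaks at x = 1, then declines.
    return x - 0.5 * x * x

def proxy_reward(x: float) -> float:
    # An imperfect stand-in for true_reward: it keeps rewarding larger x
    # well past the point where the true reward turns negative.
    return x - 0.05 * x * x

def optimize_against_proxy(steps: int, lr: float = 0.1) -> None:
    x = 0.0
    for step in range(steps):
        # Hill-climb on the proxy via a finite-difference gradient estimate.
        grad = (proxy_reward(x + 1e-4) - proxy_reward(x - 1e-4)) / 2e-4
        x += lr * grad
        if step % 10 == 0:
            print(f"step {step:3d}: proxy={proxy_reward(x):7.3f} "
                  f"true={true_reward(x):7.3f}")

optimize_against_proxy(steps=100)
```

Running this, the proxy score climbs steadily while the true reward peaks early and then goes sharply negative, which is the failure mode the talk examines in the RLHF setting.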
Event Schedule
6:00 PM to 6:45 PM - Networking and refreshments
6:45 PM to 8:00 PM - Main Presentation
8:00 PM to 9:00 PM - Breakout Discussions

Toronto AI Safety
30 Adelaide East, 12th Floor (Industrious Office Common Area) · Toronto, ON