Valentina Pyatkin | TÜLU 3: Pushing Frontiers in Open LM Post-Training

Network event
92 attendees from 4 groups hosting
Hosted by Martin G.
Details

Please be aware that this talk will not be recorded.
Title: TÜLU 3: Pushing Frontiers in Open Language Model Post-Training
Speaker: Valentina Pyatkin (postdoctoral researcher at the Allen Institute for AI and the University of Washington)
Paper: https://arxiv.org/pdf/2411.15124
Abstract: Language model post-training is applied to refine behaviors and unlock new skills across a wide range of recent language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle and the portion with the least transparency. To bridge this gap, we introduce TÜLU 3, a family of fully open, state-of-the-art post-trained models, alongside its data, code, and training recipes, serving as a comprehensive guide for modern post-training techniques. TÜLU 3, which builds on Llama 3.1 base models, achieves results surpassing the instruct versions of Llama 3.1, Qwen 2.5, Mistral, and even closed models such as GPT-4o-mini and Claude 3.5 Haiku. The training algorithms for our models include supervised finetuning (SFT), Direct Preference Optimization (DPO), and a novel method we call Reinforcement Learning with Verifiable Rewards (RLVR). With TÜLU 3, we build a multi-task evaluation scheme for post-training with development and unseen evaluations, standard benchmark implementations, and substantial decontamination of existing open datasets on said benchmarks. We conclude with analysis and discussion of training methods that did not reliably improve performance.
The TÜLU 3 release includes model weights, a demo, and the complete recipe — datasets for diverse core skills, a robust toolkit for data curation and evaluation, the training code and infrastructure, and, most importantly, a detailed report for reproducing and further adapting the TÜLU 3 approach to more domains.
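To give a flavour of the RLVR idea mentioned in the abstract: instead of scoring completions with a learned reward model, prompts with known, checkable answers (e.g. math problems) receive a binary reward from a verifier. The sketch below is purely illustrative and is not the TÜLU 3 implementation; the "Answer:" extraction convention is an assumption for the example.

```python
import re

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Binary verifiable reward: 1.0 if the completion's final answer
    matches the known ground truth, 0.0 otherwise.

    Illustrative sketch only -- assumes the model is prompted to end
    its output with a line of the form "Answer: <value>".
    """
    match = re.search(r"Answer:\s*(.+)", completion)
    if match is None:
        # No parseable answer: the verifier gives no credit.
        return 0.0
    answer = match.group(1).strip()
    return 1.0 if answer == ground_truth.strip() else 0.0

# In an RL loop, this score would stand in for a reward-model score
# on prompts that have verifiable targets.
print(verifiable_reward("Let me work this out. Answer: 42", "42"))  # 1.0
print(verifiable_reward("I think the result is 41.", "42"))         # 0.0
```

Because the reward is exact rather than estimated, it cannot be "gamed" the way a learned reward model can, which is part of the appeal of restricting RL to verifiable tasks.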
Speaker Bio: Valentina Pyatkin is a postdoctoral researcher at the Allen Institute for AI and the University of Washington. She is additionally supported by an Eric and Wendy Schmidt Postdoctoral Award. She completed her PhD in Computer Science at the NLP lab of Bar-Ilan University, and her work has been awarded an ACL Outstanding Paper Award and the ACL Best Theme Paper Award. Earlier, she did research internships at Google and the Allen Institute for AI, and obtained an MSc from the University of Edinburgh and a BA from the University of Zurich. Valentina's research focuses on post-training and the adaptation of language models, for example, to make them better semantic and pragmatic reasoners.
Agenda:

  • 18:25: Virtual doors open
  • 18:30: Talk
  • 19:10: Q&A session
  • 19:30: Close

Sponsor: Evolution AI - Generative AI-powered data extraction from financial documents.

Apache Spark+AI London