
Online event: Scaling Down to Scale Up: A Guide to PEFT

Hosted By
Muhtasham O.

Details

How do you RLHF #LLaMA if you don't have hundreds of GPUs? Do it in a parameter-efficient way. Vladislav will present his parameter-efficient fine-tuning (#PEFT) survey: [http://arxiv.org/abs/2303.15647](http://arxiv.org/abs/2303.15647)

This paper presents a systematic overview and comparison of parameter-efficient fine-tuning methods covering over 40 papers published between February 2019 and February 2023. These methods aim to resolve the infeasibility and impracticality of fine-tuning large language models by only training a small set of parameters. We provide a taxonomy that covers a broad range of methods and present a detailed method comparison with a specific focus on real-life efficiency and fine-tuning multibillion-scale language models.
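
To give a flavor of what "training a small set of parameters" looks like in practice, here is a minimal sketch using the Hugging Face `peft` library to wrap a causal language model with LoRA adapters, one of the methods covered in the survey. The base model name and hyperparameters below are illustrative assumptions, not settings from the talk or the paper.

```python
# Minimal LoRA sketch with the Hugging Face `peft` library.
# Base model and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# LoRA injects small trainable low-rank matrices into selected layers;
# the original model weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                  # rank of the low-rank update
    lora_alpha=16,        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT
)

model = get_peft_model(base_model, lora_config)
# Reports trainable vs. total parameters; with LoRA the trainable
# share is typically a small fraction of a percent.
model.print_trainable_parameters()
```

The wrapped `model` can then be passed to a standard training loop or `Trainer`; only the adapter weights receive gradient updates, which is what makes fine-tuning feasible on a single GPU.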

This event is brought to you in collaboration with the Munich🥨NLP community. Join their Discord to discuss the latest developments and to exchange ideas on research and innovation around NLP.

PyData Munich
Online event
This event has passed