
About us

If you love Data, Artificial Intelligence, Machine Learning, Big Data, or IoT, join us to learn about the cutting edge of AI!

Whether your background is in computer science, mathematics, statistics, management, marketing, or elsewhere, you are welcome. If you think this activity is only for scientists, come and change your mind: we will show you that AI is accessible. Welcome to the new world.

Upcoming events

  • ROBUST FINE-TUNING FROM NON-ROBUST PRETRAINED MODELS: MITIGATING SUBOPTIMAL TRANSFER WITH EPSILON-SCHEDULING

    Online

    We will have the pleasure of welcoming Jonas NGNAWE, a PhD student in computer science at Mila – Quebec AI Institute and Université Laval, and a visiting researcher at the Stanford Trustworthy AI Research (STAIR) Lab.

    He is a graduate of the École Polytechnique de Yaoundé and holds a Master's in mathematical sciences from AIMS – African Institute for Mathematical Sciences, as well as a Master's in Machine Learning from the AMMI – African Master's in Machine Intelligence program.

    He was also an AI Resident at Google.

    He will present his paper, titled:
    "ROBUST FINE-TUNING FROM NON-ROBUST PRETRAINED MODELS: MITIGATING SUBOPTIMAL TRANSFER WITH EPSILON-SCHEDULING", whose abstract follows:

    "Fine-tuning pretrained models is a standard and effective workflow in modern machine learning. However, robust fine-tuning (RFT), which aims to simultaneously achieve adaptation to a downstream task and robustness to adversarial examples, remains challenging. Despite the abundance of non-robust pretrained models in open-source repositories, their potential for RFT is less understood. We address this knowledge gap by systematically examining RFT from such non-robust models. Our experiments reveal that fine-tuning non-robust models with a robust objective, even under small perturbations, can lead to poor performance, a phenomenon that we dub suboptimal transfer. In challenging scenarios (eg, difficult tasks, high perturbation), the resulting performance can be so low that it may be considered a transfer failure. We find that fine-tuning using a robust objective impedes task adaptation at the beginning of training and eventually prevents optimal transfer. However, we propose a novel heuristic, Epsilon-Scheduling, a schedule over perturbation strength used during training that promotes optimal transfer. Additionally, we introduce expected robustness, a metric that captures performance across a range of perturbations, providing a more comprehensive evaluation of the accuracy-robustness trade-off of diverse models at test-time. Extensive experiments on wide range of configurations (six pretrained models and five datasets) show that Epsilon-Scheduling successfully prevents suboptimal transfer and consistently improves expected robustness."

    25 attendees
