

Today’s world needs orders-of-magnitude more efficient ML to address environmental and energy crises, optimize resource consumption, and improve sustainability. With the end of Moore’s Law and Dennard scaling, we can no longer expect more and faster transistors for similar cost and power budgets. This is particularly problematic given the growing data volumes collected by ubiquitous sensors and systems, the ever-larger models we train, and the fact that most ML models have to run on edge devices to minimize latency, preserve privacy, and save energy. The algorithmic efficiency of deep learning is therefore essential for achieving the desired speed-ups, alongside efficient hardware implementations and compiler optimizations for standard math operations. Current research highlights sparsity, model and data augmentation, the search for efficient network architectures, algorithmic training speed-ups, and new nature-inspired local computation methods as promising directions for building efficient ML systems.

In this workshop, we would like to discuss and celebrate recent advances in efficient ML and sketch the way forward.

Among the speakers:

  • Jonathan Frankle - Harvard / MosaicML
  • Olga Saukh - TU Graz / CSH Vienna
  • Mostafa Dehghani - Google Brain
  • Dan Alistarh - IST Austria
  • You?
  If you would like to use this opportunity to present your research on Efficient ML, please submit a paper: Call for Papers

To attend, please register using the link: 2nd workshop on Efficient ML
