LLVM Social #10: Auto-tuning Compiler Transformations with Machine Learning

LLVM Social Berlin
Public group

Mozilla Berlin Community Space

Schlesische Straße 27, Building 3, 3rd floor · Berlin

How to find us

On Schlesische Str., find the big gate between "Standesamt" and "Der Berg Ruft", go to the very end of the courtyard, and enter the building through the door on your left. You will find the MozSpace on the third floor.



For our last meetup this year, we are proud to welcome Dr. Biagio Cosenza, Senior Researcher at Technische Universität Berlin, presenting his state-of-the-art research on machine learning applications in autotuning!

Thanks to our friends at Mozilla, we can host this meetup in the brand-new Mozilla Community Space in Kreuzberg.

We are looking forward to seeing you all again for this special evening. The abstract for the presentation follows.

Auto-tuning Compiler Transformations with Machine Learning

To deliver higher performance, today's computer architectures have grown in complexity. Hardware design has taken an irreversible step toward parallel architectures, which places the burden of porting and tuning code on application programmers. It is desirable to write programs that execute efficiently on highly parallel computing systems, but peak performance is notoriously hard to reach, and the high cost of wasting these precious resources motivates application programmers to devote significant time to tuning their codes.

Automatic program tuning (autotuning) is an emerging approach that relies on automated search or machine learning to offload the traditionally time-consuming manual tuning of applications. While it can be applied in very different scenarios, it has become particularly important for parallel architectures.
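To make the search-based idea concrete, here is a minimal sketch of an autotuner in Python. The configuration space (tile size and unroll factor) and the analytic cost function are hypothetical stand-ins for real compiler parameters and real on-hardware timing measurements:

```python
import itertools

def simulated_runtime(tile, unroll):
    # Toy cost model standing in for an actual measurement run:
    # penalize tiles that overflow a hypothetical 64-element cache,
    # and reward unrolling up to a factor of 4 (invented numbers).
    cache_penalty = 2.0 if tile > 64 else 1.0
    unroll_benefit = 1.0 / min(unroll, 4)
    return cache_penalty * (100.0 / tile) + 10.0 * unroll_benefit

def autotune(search_space):
    """Exhaustively evaluate every configuration, keep the fastest."""
    best_cfg, best_time = None, float("inf")
    for cfg in search_space:
        t = simulated_runtime(*cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

space = itertools.product([16, 32, 64, 128], [1, 2, 4, 8])
cfg, t = autotune(space)
print(cfg)  # best (tile, unroll) under the toy model
```

Real autotuners replace the exhaustive loop with smarter search (genetic algorithms, simulated annealing, or model-guided sampling) because realistic configuration spaces are far too large to enumerate.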

This talk will show how machine learning can be a powerful tool for designing portable and efficient autotuners. Applying machine learning in this context is challenging and requires addressing very specific problems (encoding, modeling, and training data availability). I will show practical examples of such autotuners for vectorization, loop unrolling, heterogeneous task partitioning, and stencil computations, which have been applied to a variety of compiler infrastructures such as LLVM, GCC, Insieme, and Patus. In particular, I will show how modeling can be greatly enhanced by structural learning methods that adapt to the structure of the problem.
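The machine-learning flavor can be sketched just as simply: train a model on programs whose best configuration is already known, then predict a configuration for an unseen program from its static features. The features, training data, and 1-nearest-neighbor model below are toy assumptions for illustration, not the methods presented in the talk:

```python
import math

# Toy training set: (loop trip count, loop body size in IR instructions)
# mapped to the unroll factor that performed best; all values invented.
training = [
    ((1000, 4), 8),   # long, tiny loop: unroll aggressively
    ((1000, 64), 1),  # large body: unrolling hurts the instruction cache
    ((8, 4), 1),      # short loop: unrolling not worth it
    ((64, 16), 4),    # middle ground
]

def predict_unroll(features):
    """Predict an unroll factor via 1-nearest-neighbor lookup
    (Euclidean distance over the raw feature vector)."""
    _, best = min(training, key=lambda ex: math.dist(ex[0], features))
    return best

print(predict_unroll((900, 8)))  # nearest training point is (1000, 4)
```

A production model would use richer program encodings, normalized features, and a learner such as a decision tree or neural network, but the pipeline shape (features in, tuning decision out) is the same.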