This time we will have two talks, from Odd Erik Gundersen and Lester Solbakken. Both subjects are highly relevant, so please RSVP early, as we expect high attendance! Refreshments will be served, as usual - see you there!
18:15 - 18:45 Reproducible empirical AI and Machine Learning results
Reproducing empirical AI and machine learning results is surprisingly hard, for many reasons. The problems stem from intrinsic properties of computer systems, properties of the data, and properties of the machine learning and AI algorithms and their environment, as well as from how we document our experiments. These issues are well recognized by the AI and machine learning communities, and lately many papers have been published and workshops held to discuss the problems and possible solutions. Everyone who conducts experiments - whether in academia or industry - should give some thought to these problems and how they relate to our work.
Odd Erik Gundersen is an adjunct associate professor at NTNU, where he teaches courses on and conducts research in AI. During the daytime he investigates how AI and machine learning can be utilized in the domain of renewable energy at TrønderEnergi.
19:00 - 19:30 Scaling up ONNX and TensorFlow model evaluation
With the advances in deep learning and the corresponding increase in machine learning frameworks in recent years, a new class of software has emerged: model servers. These promise, among other things, performance and scalability. There is, however, a large class of applications where such model servers are inadequate. For instance, search and recommendation applications must efficiently evaluate models over potentially many thousands of data points as part of handling a single query. In such cases the amount of data transferred to the model servers can quickly saturate the network, decreasing total system throughput and degrading quality of service.
In this talk we will go through our solution to this problem, which is to evaluate the models where the data is stored rather than moving the data to where the model is hosted. We base our solution on Vespa, an open-source platform developed at Yahoo for building scalable real-time data processing applications, in which we have implemented import of ONNX and TensorFlow models. We will show that even without taking advantage of specialized hardware, the total system throughput can scale much better.
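As a rough illustration of the "move the model to the data" approach (a sketch only; the schema name, field, and model path below are hypothetical and not taken from the talk), a Vespa search definition can import an exported TensorFlow model and evaluate it per document during ranking, so the document data never leaves the content nodes:

```
# Hypothetical search definition (items.sd) - names and paths are assumptions
search items {
    document items {
        # Per-document feature vector stored alongside the data
        field features type tensor(x[128]) {
            indexing: attribute
        }
    }
    rank-profile tf_model inherits default {
        first-phase {
            # Evaluate an imported TensorFlow SavedModel for each candidate
            # document; "my_model/saved" is an assumed path under the
            # application package, not a real model from the talk
            expression: sum(tensorflow("my_model/saved"))
        }
    }
}
```

The key design point is that ranking expressions run on the nodes holding the documents, so only the query and the final scores cross the network, rather than thousands of feature vectors per query.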
Lester Solbakken is a Principal Software Engineer at Oath (previously Yahoo).