Sparse inverse is all you need


Details
Abstract:
Imagine searching for the perfect winter jacket without knowing exactly what you want. Collaborative filtering, which deduces preferences from past user-item interactions, can make this tedious process almost effortless. At scale, however, the technique faces two fundamentally opposing challenges: the scarcity of user-item interactions and computational cost. Recommending niche items to users with few interactions is only possible when longer chains of user-item interactions are considered. As a trade-off, the cost of computing and storing these chains grows quadratically with the item catalog. Traditional implementations of such strategies are therefore resource-intensive and infeasible for large industrial workloads.
In this talk, we’ll explain and demonstrate how exploiting sparsity yields an approach that is an order of magnitude more efficient in both training and inference. Remarkably, where other methods require significant computing resources, our model SANSA can be trained quickly even on a standard laptop. To explore the capabilities of SANSA, we invite you to check out our open-source implementation at https://github.com/glami/sansa.
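The quadratic blow-up mentioned in the abstract can be seen in a few lines. The sketch below is purely illustrative and is not the SANSA implementation: it builds a random sparse user-item matrix, forms the regularized item-item Gram matrix (which stays sparse), and then shows that its exact inverse — the object EASE-style models need — is dense, so its storage grows quadratically with the catalog size. All names and parameters here (matrix sizes, density, regularization strength) are assumptions chosen for the demo.

```python
import numpy as np
import scipy.sparse as sp

n_users, n_items = 1000, 500

# Sparse binary user-item interaction matrix (~1% of entries observed).
X = sp.random(n_users, n_items, density=0.01, format="csr", random_state=0)
X.data[:] = 1.0

# Regularized item-item Gram matrix: still sparse.
G = (X.T @ X + 10.0 * sp.identity(n_items)).tocsc()
gram_density = G.nnz / (n_items * n_items)

# Its exact inverse is (numerically) fully dense, so memory and compute
# grow quadratically with the number of items in the catalog.
B = np.linalg.inv(G.toarray())
inv_density = np.count_nonzero(B) / B.size

print(f"Gram matrix density:  {gram_density:.3f}")
print(f"Exact inverse density: {inv_density:.3f}")
```

Keeping only a sparse approximation of this inverse, rather than materializing the dense one, is the kind of saving the talk is about.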
Program:
17:30 Welcome chat
18:00 Talk
18:50 Discussion
19:10 Networking (Impact Hub)
About MLMUs:
Machine Learning Meetups (MLMU) is an independent platform for people interested in Machine Learning, Information Retrieval, Natural Language Processing, Computer Vision, Pattern Recognition, Data Journalism, Artificial Intelligence, Agent Systems and all the related topics. MLMU is a regular community meeting, usually consisting of a talk, a discussion and subsequent networking. Besides Prague, MLMU has also spread to Brno, Bratislava and Košice.