Will Rice on Distilling the Knowledge in a Neural Network

Hosted By
Neil M. and Noel W.

Details
Link: https://arxiv.org/abs/1503.02531
Description: Previous efforts have shown that the knowledge in an ensemble of models, which would be infeasible to run in a deployed setting, can be compressed into a single smaller model; the authors propose a different compression technique, demonstrate its effectiveness, and more.
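The paper's central technique, knowledge distillation, trains a small "student" model to match the temperature-softened output distribution of a large "teacher." A minimal sketch of the softened softmax and the distillation loss from the paper follows; the function names and example logits here are illustrative, not taken from the talk:

```python
import math

def softmax(logits, temperature=1.0):
    # Dividing logits by a temperature T > 1 softens the distribution,
    # exposing the "dark knowledge" in the teacher's small probabilities.
    z = [l / temperature for l in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's, scaled by T^2 so gradient magnitudes stay comparable
    # as T varies (as noted in the paper).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -(T ** 2) * sum(pi * math.log(qi + 1e-12) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]
student = [3.5, 0.8, 0.1]
print(distillation_loss(student, teacher))
```

In practice this term is combined with the ordinary cross-entropy on the true labels; the temperature is a tuning knob, with T = 1 recovering the standard softmax.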
Bio: Will Rice is an ML Engineer at Spokestack where he tries to make computers talk good.

Papers We Love Chattanooga
Online event