Will Rice on Distilling the Knowledge in a Neural Network
Details
Link: https://arxiv.org/abs/1503.02531
Description: Previous efforts have shown that the knowledge in an ensemble of models, which would be infeasible to run in a deployed setting, can be compressed into a single smaller model; the authors propose a different compression technique, which they call distillation, and demonstrate it on several tasks.
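The core of the paper's technique is training the small model on the ensemble's temperature-softened output distribution rather than on hard labels. A minimal sketch of that soft-target loss, in plain NumPy (function names and the example logits are illustrative, not from the paper):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T gives a softer distribution
    # that exposes the teacher's relative confidence across classes.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened targets and the
    # student's softened predictions; the T^2 factor keeps gradient
    # magnitudes comparable as T changes, as noted in the paper.
    p = softmax(teacher_logits, temperature)  # soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return temperature ** 2 * -np.sum(p * np.log(q + 1e-12))

teacher = np.array([5.0, 1.0, -2.0])  # hypothetical teacher logits
student = np.array([4.0, 1.5, -1.0])  # hypothetical student logits
loss = distillation_loss(teacher, student)
```

In practice this soft-target term is combined with a standard cross-entropy loss on the true labels; the sketch shows only the distillation half.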
Bio: Will Rice is an ML Engineer at Spokestack where he tries to make computers talk good.
