DataTalks #30: Model Distillation šŸ§ āš—ļø

Our 30th DataTalks meetup will be held online in cooperation with Windward, and will focus on model distillation.

š—­š—¼š—¼š—ŗ š—¹š—¶š—»š—ø: https://us02web.zoom.us/j/87493432159?pwd=bjlLSk1GaGZ2VWFBQVdLUkk3VzYvZz09

š—”š—“š—²š—»š—±š—®:
šŸ”¶ 16:30 - 17:15 – Model Distillation for Object Detection – Shani Gamrian
šŸ”“ 17:20 - 18:05 – Distilling Maritime Insights with Deep Learning – Gilad Landau, Windward

---------------------

š— š—¼š—±š—²š—¹ š——š—¶š˜€š˜š—¶š—¹š—¹š—®š˜š—¶š—¼š—» š—³š—¼š—æ š—¢š—Æš—·š—²š—°š˜ š——š—²š˜š—²š—°š˜š—¶š—¼š—» – š—¦š—µš—®š—»š—¶ š—šš—®š—ŗš—æš—¶š—®š—»

Object Detection networks are widely used in applications and products today and can achieve very high performance in a range of real-life scenarios. However, when these networks have to run on hardware with limited resources, smaller models that can operate in real time are required.

Model Distillation refers to the idea of model compression by teaching a smaller network how to behave using a bigger, pre-trained network. There are two types of knowledge representations that can be transferred from teacher to student: the first is knowledge from the direct outputs (also known as Knowledge Distillation), and the second is knowledge transferred from intermediate layers. In this talk, we will discuss the ideas and approaches behind both types and the differences between them. We will also cover recent distillation works and solutions designed specifically for object detection networks such as SSD and FPN that show significant improvements in results.
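
As a rough illustration of the two signals, here is a minimal PyTorch-style sketch; the temperature, weighting, and 1x1 adapter below are illustrative choices, not the speaker's exact recipe:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Output-based Knowledge Distillation: soften both distributions with a
    # temperature, match them with KL, and mix in the usual supervised loss.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

def feature_distillation_loss(student_feat, teacher_feat, adapter):
    # Intermediate-layer distillation: match hidden feature maps, using a
    # small (hypothetical) 1x1 conv `adapter` to align channel counts.
    return F.mse_loss(adapter(student_feat), teacher_feat)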

š—£š—®š—½š—²š—æ š—¹š—¶š—»š—ø: https://arxiv.org/abs/1906.03609

š—•š—¶š—¼: Shani is an Applied Machine Learning and Computer Vision researcher at Brodmann17.

---------------------

š——š—¶š˜€š˜š—¶š—¹š—¹š—¶š—»š—“ š— š—®š—æš—¶š˜š—¶š—ŗš—² š—œš—»š˜€š—¶š—“š—µš˜š˜€ š˜„š—¶š˜š—µ š——š—²š—²š—½ š—Ÿš—²š—®š—æš—»š—¶š—»š—“ – š—šš—¶š—¹š—®š—± š—Ÿš—®š—»š—±š—®š˜‚, š—Ŗš—¶š—»š—±š˜„š—®š—æš—±

I will present Windward's process of developing and deploying a deep learning pipeline in the maritime domain. The lecture will focus on the real-world challenges of training a deep learning model with a small amount of labeled data, and on how distillation and active learning techniques help address them.
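
As a rough sketch of the active-learning side (a hypothetical uncertainty-sampling loop in PyTorch, not Windward's actual pipeline; model, unlabeled_batches, and budget are placeholder names):

import torch
import torch.nn.functional as F

def select_for_labeling(model, unlabeled_batches, budget=100):
    # Rank unlabeled samples by predictive entropy and return the ids of the
    # `budget` most uncertain ones to send for human annotation.
    model.eval()
    scores = []
    with torch.no_grad():
        for ids, x in unlabeled_batches:              # (sample ids, inputs)
            probs = F.softmax(model(x), dim=-1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
            scores.extend(zip(ids.tolist(), entropy.tolist()))
    scores.sort(key=lambda s: s[1], reverse=True)     # most uncertain first
    return [i for i, _ in scores[:budget]]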

š—£š—®š—½š—²š—æ š—¹š—¶š—»š—ø:
https://arxiv.org/abs/1503.02531
https://arxiv.org/abs/1711.00941
https://arxiv.org/abs/1609.03499

š—•š—¶š—¼: Gilad is a Technologist and a Senior Data Scientist Windward. He is enthusiastic about creating real business value with Deep Learning.

---------------------

š—­š—¼š—¼š—ŗ š—¹š—¶š—»š—ø: https://us02web.zoom.us/j/87493432159?pwd=bjlLSk1GaGZ2VWFBQVdLUkk3VzYvZz09

Members are also interested in