It is all about the data


Details
Agenda:
17:45-18:15: Mingling
18:15-18:30: Ground Truth Data - Who needs it and why? by Gil Shapira, HI4AI
18:30-19:00: Data-driven development in Ultrasound environment, by Elad Goshen, DiA
19:00-19:10: Break
19:10-19:40: A column-based deep learning method in OCT scans, by Adi Szeskin, Intel (HUJI)
19:40-19:55: SUPER-IVIM-DC, by Noam Korngut, Technion
19:55-20:10: Automatic bone loss detection and quantification in shoulder CT scans and data collection challenges, by Avichai Haimi, HUJI
Ground Truth Data - Who needs it and why?
Ground truth data is critical to building ML/DL models. Come for an introduction to the topic and practical tips to improve your impact.
Gil Shapira, Director of AI/CV Data Quality
Data-driven development in Ultrasound environment
Data-driven development is a software development approach that emphasizes the use of data to inform and guide the development process. It involves collecting, analyzing, and using data to inform decision making and to shape the direction of development. This approach can help developers create more effective and efficient software solutions, and enables them to better understand the needs and behavior of their users. In this lecture, we will discuss how a data-driven development approach can be applied in practice: supporting facilities and best practices.
Elad has over 18 years of experience and leadership in algorithm, software, hardware, and system engineering of multi-disciplinary systems. Previously, he held several roles including R&D Director and Algorithms and Software Groups Manager at Orbotech and Elta. He holds a B.Sc. and an M.Sc. in Physics from Ben-Gurion University. Elad is the Vice President of R&D at DiA Imaging Analysis Ltd., a leading provider of AI-powered ultrasound analysis solutions that make ultrasound imaging capture and analysis smarter and accessible to clinicians of all levels of experience.
A column-based deep learning method for the detection and quantification of atrophy associated with AMD in OCT scans
This paper describes a novel automatic method for the identification and quantification of atrophy associated with AMD in OCT scans and its visualization in the corresponding infrared (IR) image. The method is based on the classification of light scattering patterns in vertical pixel-wide columns (A-scans) of the OCT slices (B-scans) in which atrophy appears, using a custom column-based convolutional neural network (CNN). Experimental results yield a mean F1 score of 0.78 (std 0.06) and an AUC of 0.937, both close to the observer variability.
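For readers unfamiliar with the A-scan/B-scan terminology, the idea of classifying per-column scattering patterns can be sketched as follows. This is only an illustration of the column-extraction step, not the authors' implementation; the array sizes and the amount of lateral context are hypothetical.

```python
import numpy as np

# Hypothetical OCT B-scan: a 2-D grayscale slice (depth x width).
# Each vertical, pixel-wide column is an A-scan; a column-based approach
# classifies each A-scan (here padded with a little lateral context)
# and then aggregates the per-column labels into atrophy segments.
b_scan = np.random.rand(496, 512)  # illustrative depth=496, width=512

def extract_columns(b_scan, context=2):
    """Yield (center_index, patch) pairs where each patch is the A-scan at
    center_index plus `context` neighboring columns on each side."""
    depth, width = b_scan.shape
    padded = np.pad(b_scan, ((0, 0), (context, context)), mode="edge")
    for x in range(width):
        yield x, padded[:, x : x + 2 * context + 1]

patches = [patch for _, patch in extract_columns(b_scan)]
assert len(patches) == 512            # one patch per A-scan
assert patches[0].shape == (496, 5)   # depth x (2*context + 1)
```

Each patch would then be fed to the column classifier; edge columns are handled here by replicating the border A-scans.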
Adi recently completed his PhD in the Hebrew University’s Computer Science department, supervised by Prof. Leo Joskowicz. He is a member of the core AI & data science research team of Intel’s Advanced Analytics group.
SUPER-IVIM-DC: Intra-voxel Incoherent Motion Based Fetal Lung Maturity Assessment from Limited DWI Data Using Supervised Learning Coupled with Data-Consistency
Intra-voxel incoherent motion (IVIM) analysis of fetal lung Diffusion-Weighted MRI (DWI) data shows potential in providing quantitative imaging biomarkers. However, long acquisition times preclude clinical feasibility. We introduce SUPER-IVIM-DC, a deep neural network (DNN) approach that couples a supervised loss with a data-consistency term to enable IVIM analysis of DWI data acquired with a limited number of b-values. We demonstrate the added value of SUPER-IVIM-DC over both classical and recent DNN approaches to IVIM analysis through numerical simulations, a healthy-volunteer study, and IVIM analysis of fetal lung maturation from fetal DWI data.
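As background, the IVIM analysis the abstract refers to fits the standard bi-exponential signal decay over b-values. A minimal sketch of that model (parameter names follow the usual convention; the b-values and parameter values below are illustrative, not from the paper):

```python
import numpy as np

def ivim_signal(b, s0, f, d_star, d):
    """Bi-exponential IVIM model:
        S(b) = S0 * (f * exp(-b * D*) + (1 - f) * exp(-b * D))
    where f is the perfusion fraction, D* the pseudo-diffusion
    coefficient, and D the tissue diffusion coefficient."""
    return s0 * (f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d))

# Illustrative acquisition: classical fitting needs many b-values, which
# drives long scan times; SUPER-IVIM-DC targets a limited set like this.
b_values = np.array([0.0, 50.0, 200.0, 800.0])  # s/mm^2
signal = ivim_signal(b_values, s0=1.0, f=0.1, d_star=0.05, d=0.001)
assert np.isclose(signal[0], 1.0)       # at b=0 the signal equals S0
assert np.all(np.diff(signal) < 0)      # signal decays as b increases
```

Estimating (f, D*, D) from only a few such samples is the ill-posed inverse problem that motivates coupling a supervised loss with a data-consistency term.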
Noam Korngut graduated with a B.Sc. in Electrical Engineering and is pursuing a master's degree at TCML (the Technion Computational MRI Lab).
Automatic glenoid bone loss detection and quantification in shoulder CT scans and data collection challenges
Glenoid bone loss is common following shoulder dislocation. Its detection and quantification in a CT scan are usually required to determine whether surgery is needed, and if so, what type of surgery. However, estimating glenoid bone loss is time-consuming, requires expertise, and is subject to observer variability. In this work, we present a novel, fully automatic method for glenoid bone loss quantification in CT scans.
To evaluate the method, we retrospectively collected 50 CT scans from 42 patients, both with and without glenoid bone loss, and dealt with the challenges associated with collecting medical imaging data.
Avichai Haimi is an M.Sc. researcher at the Hebrew University of Jerusalem, mentored by Prof. Leo Joskowicz. With a focus on computer vision and medical image processing, his research involves developing an automatic pipeline to help orthopedic doctors measure Glenoid Bone Loss (GBL). He graduated from the Hebrew University of Jerusalem, Israel, in 2016 and 2020 with a B.Sc. degree in Electrical and Computer Engineering.