Details

Nakul Verma will be our main speaker (webpage: http://cseweb.ucsd.edu/~naverma/).

Title: Learning with data having low intrinsic dimension

Abstract:
In many machine learning applications, data is represented in very high dimensions. This rich representation comes at a cost: the accuracy of classification and prediction algorithms scales poorly with data dimension. This is commonly referred to as the 'curse of dimensionality'. In this talk we will study typical real-world data more closely. We will see that even though the data is represented in high dimensions, it generally has only a few relevant degrees of freedom. We shall call these few degrees of freedom the 'intrinsic' dimension of the data. We will systematically characterize what we mean by intrinsic dimension and survey various machine learning techniques developed recently to exploit such intrinsic structure. We will explore how to find compact representations of such data and how modifications to existing learning algorithms can help 'escape' the curse of dimensionality.
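The notion of intrinsic dimension can be illustrated with a small sketch (not from the talk): synthetic data generated from 2 underlying degrees of freedom but embedded in a 50-dimensional ambient space, with a simple PCA-style spectral count recovering the low intrinsic dimension. All names and parameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative assumption: data with 2 intrinsic degrees of freedom,
# linearly embedded in a 50-dimensional ambient space with small noise.
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))        # 2 underlying degrees of freedom
embedding = rng.standard_normal((2, 50))      # random linear map into 50-D
data = latent @ embedding + 0.01 * rng.standard_normal((500, 50))

# One simple (linear) estimate of intrinsic dimension: the number of
# principal components needed to explain 95% of the variance.
centered = data - data.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
variance_ratio = singular_values**2 / np.sum(singular_values**2)
intrinsic_dim = int(np.searchsorted(np.cumsum(variance_ratio), 0.95) + 1)
print(intrinsic_dim)  # small (close to 2), despite the 50-D representation
```

This linear estimate is only a sketch; the talk surveys more general notions of intrinsic dimension that also cover nonlinear (manifold) structure, where PCA alone would not suffice.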

Speaker Bio:
Nakul Verma is a Ph.D. candidate in the Computer Science Department at UC San Diego. His primary area of research involves high-dimensional data analysis and its effects on learning algorithms.