Papers club: Detecting wrong classifications from hyper-confident Neural Networks
Details
Neural networks are now commonly used in many tasks, with classification being one of their main applications. Neural networks and other ML models can achieve excellent performance in classifying unseen samples, whether images, audio files, or the status of a process inferred from a time series, and accuracy scores well above 80-90% are not uncommon.
Misclassified samples are a natural occurrence in any modeling framework, but neural networks tend to be hyper-confident in their decisions, even when wrong. This can be critical when the cost associated with a wrong classification is large, e.g., in autonomous vehicles. How can these hyper-confident errors be detected? What would be a sensible strategy? We will present the problem and discuss possible approaches.
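To make the problem concrete, here is a minimal sketch of one common baseline strategy: thresholding the maximum softmax probability, so that low-confidence predictions are flagged for review. The threshold value, function names, and example logits below are illustrative assumptions, not part of any specific paper discussed in the session; note that a hyper-confident network can defeat exactly this kind of check by assigning high probability to a wrong class.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_low_confidence(logits, threshold=0.9):
    """Return (predictions, confidences, flags), where flags marks
    samples whose maximum softmax probability falls below `threshold`
    as candidates for rejection or human review. Illustrative only."""
    probs = softmax(np.asarray(logits, dtype=float))
    confidence = probs.max(axis=-1)
    predictions = probs.argmax(axis=-1)
    return predictions, confidence, confidence < threshold

# Three hypothetical samples: two with peaked logits, one ambiguous.
logits = [[5.0, 0.1, 0.2],
          [1.0, 0.9, 0.8],
          [0.2, 4.0, 0.1]]
preds, conf, flagged = flag_low_confidence(logits, threshold=0.9)
```

Here only the middle sample is flagged, since its logits are nearly uniform; the catch motivating the session is that wrong predictions often come with logits as peaked as the first sample's.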
