Session #8: ignorance and confidence

Details
Mark Twain said: "To succeed in life, you need two things: ignorance and confidence."
In our last session we had a brief discussion around the question: Why don't classifiers tell us how confident they are in their output? (*)
I think this is a fascinating question. Let's discuss it together.
We're gonna scratch the surface of this topic with the following papers:
- On optimum recognition error and reject tradeoff, C. Chow -- http://bit.ly/1hUdynD
This is a classic paper from the '70s (the linked copy was typed on a typewriter, which I find kinda cool). Chow lays down the foundations of learning with abstention (http://en.wikipedia.org/wiki/Abstinence). In a nutshell: if you let the classifier abstain from classifying, you can guarantee a point-wise(!) bound on its error. There's a small sketch of this idea right after the list.
- Selective Prediction of Financial Trends with Hidden Markov Models, El-Yaniv & Pidan -- http://bit.ly/1gc6AYC
What happens when you apply selective prediction to short-term trend prediction in a financial context? El-Yaniv & Pidan's work is applied selective learning at its best. As a bonus, we'll get a chance to talk about HMMs and fintech.
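
To make the abstention idea concrete, here's a minimal sketch (my own illustration, not code from either paper) of a confidence-threshold reject rule in the spirit of Chow: the classifier predicts only when its top class probability clears a threshold tau, and abstains otherwise. If those probabilities happen to be the true posteriors, each accepted prediction errs with probability at most 1 - tau.

```python
import numpy as np

def chow_predict(posteriors, tau=0.9):
    """Confidence-threshold reject rule (a sketch, not the paper's exact formulation).

    posteriors : (n_samples, n_classes) array of class probabilities.
    tau        : confidence threshold in (0, 1].

    Returns predicted class indices, with -1 meaning "abstain".
    If the posteriors are the true ones, each accepted prediction
    has conditional error probability at most 1 - tau.
    """
    confidence = posteriors.max(axis=1)
    predictions = posteriors.argmax(axis=1)
    predictions[confidence < tau] = -1  # abstain on low-confidence points
    return predictions

# toy usage: three points, two classes
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.10, 0.90]])
print(chow_predict(probs, tau=0.9))  # -> [ 0 -1  1]  (the middle point is rejected)
```

The interesting part, which Chow works out, is the tradeoff: the more you abstain, the lower the error on the points you do classify.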
Feel free to share your thoughts (post them here or send me a message). Drop me a message if you're willing to lead a session thread.
---
(*) It came up from the question: "Can an NLP syntactic parser tell how confident it is in its output?"

