
Information Theoretic Metrics for Multi-Class Predictor Evaluation

Hosted By
Max K. and Paul D.

Details

Talk abstract:

The most common metrics used to evaluate a classifier are accuracy, precision, recall and F1 score.

These metrics are widely used in machine learning, information retrieval, and text analysis (e.g., text categorization).

Each of these metrics is imperfect in some way: it captures only one aspect of predictor performance and can be fooled by a skewed data set, as the sketch below illustrates.
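
For a concrete illustration (a minimal sketch, not part of the talk materials): on a heavily imbalanced two-class dataset, a degenerate predictor that always outputs the majority class reaches 95% accuracy while its precision, recall, and F1 on the minority class are all zero.

```python
# Illustrative sketch: a 95%-negative dataset and a predictor that
# always answers "neg" (the majority class).
y_true = ["neg"] * 95 + ["pos"] * 5
y_pred = ["neg"] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Precision/recall/F1 with respect to the minority ("pos") class.
tp = sum(t == p == "pos" for t, p in zip(y_true, y_pred))
fp = sum(t == "neg" and p == "pos" for t, p in zip(y_true, y_pred))
fn = sum(t == "pos" and p == "neg" for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy, precision, recall, f1)  # 0.95 0.0 0.0 0.0
```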

None of them can be used to compare predictors across different datasets.

In this paper we present an information-theoretic performance metric which does not suffer from the aforementioned flaws and can be used in both classification (binary and multi-class) and categorization (each example can be placed in several categories) settings.

The code to compute the metric is available under the Apache open-source license:

https://github.com/Magnetic/proficiency-metric
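
As a rough sketch of the general idea (an assumption on my part, not necessarily the exact definition used in that repository): one common information-theoretic score is the mutual information between the true and predicted labels, normalized by the entropy of the true labels, so that 1.0 means the prediction fully determines the truth and 0.0 means it carries no information about it.

```python
from collections import Counter
from math import log2

def normalized_mutual_information(y_true, y_pred):
    """I(true; predicted) / H(true), from empirical label counts.

    Illustrative only; the repository's "proficiency metric" may be
    defined differently.
    """
    n = len(y_true)
    joint = Counter(zip(y_true, y_pred))
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)

    # Mutual information: sum over (t, p) of p(t,p) * log2(p(t,p) / (p(t) p(p))).
    mi = 0.0
    for (t, p), c in joint.items():
        mi += (c / n) * log2(c * n / (true_counts[t] * pred_counts[p]))

    # Entropy of the true label distribution.
    h_true = -sum((c / n) * log2(c / n) for c in true_counts.values())
    return mi / h_true if h_true else 0.0

# The degenerate majority-class predictor from the earlier sketch scores 0.0,
# regardless of how skewed the dataset is:
print(normalized_mutual_information(["neg"] * 95 + ["pos"] * 5, ["neg"] * 100))
```

Because both the mutual information and the entropy are computed from the label distributions themselves, a score of this form is comparable across datasets in a way that raw accuracy is not.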

Speaker info:

Sam Steingold has been doing data science since before it got that swanky name. He is the lead data scientist at Magnetic Media Online (they're hiring: http://www.magnetic.com/careers/) and holds a PhD in Math from UCLA. He has contributed to various open source projects (e.g., GNU Emacs, CLISP, Vowpal Wabbit).

NYC Machine Learning
Pivotal Labs
625 Avenue of the Americas, 2nd Floor · New York, NY