Cross-Posting: One Metric to Fool Yourself - A Cautionary Tale
Details
When fitting a model, whether statistical or machine learning, we often want to evaluate its performance. We have a wealth of methods for all kinds of scenarios, from classification and regression to survival analysis. While these performance metrics work as intended, we can often get more out of our models by carefully combining them to capture what we really care about: optimal performance and minimal bias.
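As a quick illustration of relying on a set of metrics rather than a single number, here is a minimal sketch in R using the yardstick package. The package choice, data, and column names are assumptions for illustration only, not material from the talk itself.

```r
# A minimal sketch, assuming yardstick is available; the predictions
# below are made up purely for illustration.
library(yardstick)

# Hypothetical hold-out predictions from a regression model
preds <- data.frame(
  truth    = c(3.2, 4.8, 2.5, 5.1, 3.9),
  estimate = c(3.0, 5.0, 2.9, 4.7, 4.1)
)

# Bundle several metrics into one evaluation function so that no
# single number is trusted in isolation
multi_metric <- metric_set(rmse, mae, rsq)

# Returns one row per metric: RMSE, MAE, and R-squared
multi_metric(preds, truth = truth, estimate = estimate)
```

Looking at several metrics side by side like this is one simple guard against fooling yourself with a single flattering number.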
Register for this event at the R Meetup - Real Data Science USA
https://www.meetup.com/real-data-science-usa-r-meetup/
