Data scientists often face a trade-off between model accuracy and model interpretability. To keep models interpretable, tree-based models sometimes have to be replaced with much simpler ones, such as logistic regression. There is therefore a growing need to better explain more complex models. SHAP is now one of the best tools for this task: both XGBoost and LightGBM have incorporated SHAP into their libraries, and it is also available in the most recent H2O.ai package.
In this post, I will cover the math behind SHAP, compare SHAP with other feature importance algorithms, and walk through a few code examples.