XAI - Musings with Explainable AI (Beyond LIME and SHAP)
SHAP and LIME became the defaults because they're model-agnostic, easy to drop in, and had strong documentation early on. But for neural networks specifically, there is now a rich ecosystem of model-specific and technique-specific methods (gradients, relevance propagation, concepts, counterfactuals).
# DiCE (Diverse Counterfactual Explanations)
This library is very interesting: it does what-if analysis.
Given a model and a specific prediction, DiCE searches for small, feasible changes to the input (counterfactuals) that would change the prediction—and does so in a diverse way so you see multiple realistic paths to the desired outcome. You can constrain which features are allowed to change, their ranges, costs, and how many features should move.
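The real library (`dice_ml`) wraps a trained model and searches for counterfactuals with methods like `generate_counterfactuals`, but to keep things self-contained here is a toy, brute-force sketch of the same idea: a hypothetical loan-approval rule stands in for the black-box model, and a grid search looks for the smallest feasible feature changes that flip the prediction, keeping one counterfactual per distinct set of changed features so the results are diverse. All names, numbers, and ranges below are made up for illustration.

```python
from itertools import combinations, product

def predict(x):
    # Toy "loan approval" rule standing in for a black-box classifier.
    score = 0.04 * x["income"] + 0.5 * x["credit_score"] - 0.8 * x["debt"]
    return 1 if score >= 50 else 0  # 1 = approved, 0 = denied

def counterfactuals(x, features_to_vary, permitted_range, steps, max_changes=2):
    """Find inputs that flip predict(x), varying at most max_changes features,
    each within its permitted range. Returns one counterfactual per distinct
    subset of changed features (the "diverse" part of DiCE, crudely)."""
    target = 1 - predict(x)  # the desired (flipped) outcome
    found = {}
    for k in range(1, max_changes + 1):
        for feats in combinations(features_to_vary, k):
            # Build a small grid over each varied feature's allowed range.
            grids = []
            for f in feats:
                lo, hi = permitted_range[f]
                n = int((hi - lo) / steps[f])
                grids.append([lo + i * steps[f] for i in range(n + 1)])
            for values in product(*grids):
                cand = dict(x)
                cand.update(zip(feats, values))
                # Require every varied feature to actually move.
                if any(cand[f] == x[f] for f in feats):
                    continue
                if predict(cand) == target:
                    found[feats] = {f: cand[f] for f in feats}
                    break  # keep the first (smallest-change) hit per subset
    return found

query = {"income": 500, "credit_score": 55, "debt": 10}  # denied applicant
cfs = counterfactuals(
    query,
    features_to_vary=["income", "credit_score"],  # debt is held fixed
    permitted_range={"income": (500, 900), "credit_score": (55, 85)},
    steps={"income": 100, "credit_score": 5},
)
for feats, changes in cfs.items():
    print(feats, "->", changes)
```

This prints multiple realistic paths to approval (raise income alone, raise credit score alone, or a smaller combination of both), which is exactly the shape of answer DiCE gives back, just computed by an optimized search over real models rather than an exhaustive grid.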
# What will be shown at the meetup?
- A brief history of XAI
- A worked example and its output
- Q&A
- Discussion of the next topic
