
Details

Popular explainability methods like SHAP, LIME, and Integrated Gradients often yield conflicting results for the same model. How do you decide which one to trust? This hands-on workshop moves beyond "visual intuition" to introduce quantitative evaluation of explainable AI (XAI), providing objective tools to measure explanation quality.
Working through three interactive Google Colab notebooks, participants will:

  • Identify Disagreement: Generate multiple explanations on real datasets to see exactly where and why they diverge.
  • Apply Metric Frameworks: Measure explanations across three critical dimensions: Faithfulness (accuracy to the model), Robustness (stability), and Complexity (interpretability).
  • Build Evaluation Pipelines: Create comparison tables and learn a decision framework for choosing explainers based on use cases like regulatory audits or stakeholder communication.
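The disagreement the first exercise surfaces can be reproduced without any XAI library. Below is a minimal pure-Python sketch (the `occlusion` and `gradient` explainers are illustrative stand-ins, not the workshop's actual notebook code) showing two attribution methods producing different feature rankings on the same toy model:

```python
# Toy model: f(x) = x0*x1 + 3*x2. Two simple explainers rank features differently.

def f(x):
    return x[0] * x[1] + 3 * x[2]

def occlusion(x):
    # Importance = output drop when the feature is zeroed out.
    base = f(x)
    return [base - f(x[:i] + [0.0] + x[i + 1:]) for i in range(len(x))]

def gradient(x):
    # Analytic partial derivatives of f (a stand-in for gradient-based XAI).
    return [x[1], x[0], 3.0]

x = [2.0, 0.5, 1.0]
occ = occlusion(x)    # [1.0, 1.0, 3.0]
grad = gradient(x)    # [0.5, 2.0, 3.0]

# Both agree on the top feature, but the full rankings diverge:
rank = lambda scores: sorted(range(len(scores)), key=lambda i: -scores[i])
print(rank(occ))   # [2, 0, 1]
print(rank(grad))  # [2, 1, 0]
```

Even on this three-feature example, the two methods order features 0 and 1 oppositely; on real datasets with hundreds of features, such divergences are the norm rather than the exception.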
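Two of the metric dimensions above can likewise be sketched in plain Python. In this simplified illustration (not the workshop's metric implementations), faithfulness is estimated as the correlation between an explainer's attribution scores and the model's actual output drop when each feature is removed, and robustness as the worst-case attribution change under a small input perturbation:

```python
import math

def f(x):
    # Toy model reused for illustration: f(x) = x0*x1 + 3*x2.
    return x[0] * x[1] + 3 * x[2]

def pearson(a, b):
    # Plain Pearson correlation between two equal-length score lists.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def faithfulness(attr, x):
    # Correlate claimed importance with the true output drop on removal.
    drops = [f(x) - f(x[:i] + [0.0] + x[i + 1:]) for i in range(len(x))]
    return pearson(attr, drops)

def robustness(explainer, x, eps=1e-3):
    # Largest attribution change when one input is nudged by eps
    # (lower means more stable explanations).
    base = explainer(x)
    worst = 0.0
    for i in range(len(x)):
        pert = explainer(x[:i] + [x[i] + eps] + x[i + 1:])
        worst = max(worst, max(abs(a - b) for a, b in zip(base, pert)))
    return worst

x = [2.0, 0.5, 1.0]
grad = lambda x: [x[1], x[0], 3.0]  # gradient explainer for f
print(faithfulness(grad(x), x))     # positive but below 1: imperfectly faithful
print(robustness(grad, x))          # small value: stable under perturbation
```

A comparison table like the one built in the third notebook is then just these numbers computed per explainer per dimension, with the decision framework picking, say, the most faithful explainer for a regulatory audit and the least complex one for stakeholder communication.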
