Simplified Way to Learn XAI
What you will learn
Importance of XAI in the modern world
Differentiation between glass-box, white-box, and black-box ML models
Categorization of XAI techniques based on their scope, model agnosticity, data types, and explanation methods
Trade-off between accuracy and interpretability
Application of Microsoft's InterpretML package to generate explanations of ML models (see the sketch after this list)
Need for counterfactual and contrastive explanations
Working principles and mathematical modeling of XAI techniques like LIME, SHAP, DiCE, LRP, and counterfactual and contrastive explanations
Application of XAI techniques like LIME, SHAP, DiCE, and LRP to generate explanations for black-box models on tabular, textual, and image datasets
Google's What-If Tool to analyze data points and generate counterfactuals
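To give a flavour of the hands-on sessions, here is a minimal sketch of how Microsoft's InterpretML package can train a glass-box model and surface both global and local explanations. The dataset and parameter choices are illustrative assumptions, not the course's own example.

```python
# Illustrative sketch: a glass-box Explainable Boosting Machine with InterpretML.
# Dataset and settings are assumptions, not the course's own example.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()            # interpretable (glass-box) model
ebm.fit(X_train, y_train)

show(ebm.explain_global())                       # global feature effects
show(ebm.explain_local(X_test[:5], y_test[:5]))  # explanations for individual predictions
```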
Description
XAI with Python
This course provides detailed insights into the latest developments in Explainable Artificial Intelligence (XAI). Our reliance on artificial intelligence models is growing day by day, and it is becoming equally important to explain how and why an AI system makes a particular decision. Recent legislation has also made it urgent to explain and defend the decisions made by AI systems. This course discusses tools and techniques, using Python, to visualize, explain, and build trustworthy AI systems.
This course covers the working principles and mathematical modeling of LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) for generating local and global explanations. It discusses the need for counterfactual and contrastive explanations, as well as the working principles and mathematical modeling of techniques such as Diverse Counterfactual Explanations (DiCE) for generating actionable counterfactuals.
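As a rough illustration of what these techniques look like in code, the sketch below generates a local LIME explanation, SHAP attributions, and DiCE counterfactuals for a black-box tabular classifier. The dataset, model, and parameter choices are assumptions for illustration only, not the course's own example.

```python
# Illustrative sketch: local/global explanations with LIME and SHAP, and
# actionable counterfactuals with DiCE, for a black-box tabular classifier.
import shap
import dice_ml
import pandas as pd
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # black-box model

# LIME: fit an interpretable local surrogate around a single prediction
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=X.columns.tolist(),
    class_names=["malignant", "benign"], mode="classification")
print(lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=5).as_list())

# SHAP: Shapley-value attributions; aggregate them for a global view
shap_values = shap.TreeExplainer(model).shap_values(X_test)
# Depending on the SHAP version, multi-class output is a list or a 3-D array
sv_benign = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(sv_benign, X_test)

# DiCE: diverse counterfactuals ("what minimal change flips the prediction?")
data = dice_ml.Data(dataframe=pd.concat([X_train, y_train], axis=1),
                    continuous_features=X.columns.tolist(), outcome_name="target")
dice_exp = dice_ml.Dice(data, dice_ml.Model(model=model, backend="sklearn"),
                        method="random")
cfs = dice_exp.generate_counterfactuals(X_test[:1], total_CFs=3,
                                        desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```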
The concepts of AI fairness and visual explanation generation are covered through Google's What-If Tool (WIT). The course also covers the LRP (Layer-wise Relevance Propagation) technique for generating explanations for neural networks.
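For intuition about how LRP redistributes a prediction back onto the input features, here is a toy NumPy sketch of the LRP epsilon rule for a small ReLU network. The weights are made up purely for illustration; this is not the course's implementation.

```python
# Toy sketch of the LRP epsilon rule for a small fully connected ReLU network.
# Weights and input are made-up values, chosen only to show the mechanics.
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    # Forward pass, storing the activations of every layer
    activations = [x]
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, W @ x + b)       # ReLU layer
        activations.append(x)

    # Backward pass: start from the output activation as the relevance
    R = activations[-1]
    for W, b, a in zip(reversed(weights), reversed(biases), reversed(activations[:-1])):
        z = W @ a + b + eps                  # stabilized pre-activations
        s = R / z                            # relevance per unit of pre-activation
        R = a * (W.T @ s)                    # redistribute relevance to the layer below
    return R                                 # input relevances, same shape as the input

# Hypothetical weights for a 3-2-1 network, just to show the call
weights = [np.array([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]), np.array([[1.0, 0.7]])]
biases = [np.zeros(2), np.zeros(1)]
print(lrp_epsilon(weights, biases, np.array([1.0, 2.0, 0.5])))
```

With zero biases, the relevances returned for the inputs sum to the network's output score, which is the conservation property LRP is built around.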
The course works through various case studies to emphasize the importance of explainable techniques in critical application domains.
All the techniques are explained through hands-on sessions so that learners can clearly understand the code and apply it comfortably to their own AI models. The datasets and code used to implement the various XAI techniques are provided to learners for practice.
Content