
AI Quality Workshop: How to Test and Debug ML Models
Supercharge your ability to drive ML performance with ML testing, drift detection, debugging, and AI bias minimization.

What you will learn

Rapidly evaluate machine learning models for performance

Identify and address model drift

Debug production ML models

Identify and address possible ML bias issues

Description

Want to skill up your ability to test and debug machine learning models? Ready to be a powerful contributor to the AI era, the next great wave in software and technology?

Get taught by leading instructors who have previously taught at Carnegie Mellon University and Stanford University, and who have trained thousands of students from around the globe, including teams at hot startups and major global corporations:

  • You will learn the analytics that you need to drive model performance
  • You will understand how to create an automated test harness for easier, more effective ML testing
  • You will learn why AI explainability is the key to understanding the inner mechanics of your model and to rapid debugging
  • Understand what Shapley Values are, why they are so important, and how to make the most of them
  • You will be able to identify the types of drift that can derail model performance
  • You will learn how to debug model performance challenges
  • You will learn how to evaluate model fairness, identify when bias is occurring, and address it
  • You will get access to some of the most powerful ML testing and debugging software tools available, for FREE
    (after signing up for the course, terms and conditions apply)

Testimonials from the live, virtual version of the course:
  • “This is what you would pay thousands of dollars for at a university.” – Mike
  • “Excellent course!!! Super thanks to Professor Datta, Josh, Arri, and Rick!! :D” – Trevia
  • “Thank you so very much. I learned a ton. Great job!” – K. M.
  • “Fantastic series. Great explanations and great product. Thank you.” – Santosh
  • “Thank you everyone to make this course available… wonderful sessions!” – Chris
Language: English

Content

Welcome! Let’s get set up

Welcome – what you’ll get from this course
How to set up your free TruEra access at app.truera.net/signup
How to use Google Colab for TruEra

ML Testing

Introduction to ML Testing
Running and Interpreting Tests
Creating New Tests
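As a taste of what "Creating New Tests" involves, here is a minimal, tool-agnostic sketch of an automated ML test: a model quality check expressed as an assertion, so it can run in any test harness. This is an illustration only, not the TruEra workflow used in the course; the threshold and synthetic data are assumptions.

```python
# A minimal ML test: train on synthetic data and assert a quality bar.
# Illustrative sketch only - the course uses TruEra's tooling for this.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def test_minimum_accuracy(threshold=0.7):
    # Synthetic classification data stands in for a real dataset.
    X, y = make_classification(n_samples=1000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    # The test fails loudly if the model regresses below the bar.
    assert acc >= threshold, f"accuracy {acc:.3f} below {threshold}"
    return acc
```

Wrapping metrics in assertions like this lets a CI system catch model regressions automatically, which is the core idea behind an automated ML test harness.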

ML Explainability

Introduction to ML Explainability
Overview of Feature Importance Methods
Shapley Values – Query Definition
Shapley Values – Comparing Model Outputs
Shapley Values – Dealing with Feature Interactions
Shapley Values – Summarization
Overview – Gradient Based Explanations for Computer Vision
Design – Gradient-Based Explanations for Computer Vision
Evaluation – Gradient-Based Explanations for Computer Vision
Hands-On Learning – Explainability
Quiz – Explainability
Demonstration – Global and Local Explainability Analysis
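To preview what Shapley values are, the sketch below computes them exactly for a tiny model by enumerating feature coalitions. Real tools approximate this efficiently; the toy linear model and baseline here are assumptions for illustration.

```python
# Exact Shapley values for a toy model, by brute-force enumeration of
# feature coalitions. Illustrative only - practical explainers approximate
# this for real models.
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Shapley value per feature: the average marginal contribution of
    switching feature i from its baseline value to its actual value,
    weighted over all coalitions of the remaining features."""
    n = len(x)
    values = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                point = list(baseline)
                for j in subset:
                    point[j] = x[j]
                without_i = predict(point)
                point[i] = x[i]
                with_i = predict(point)
                values[i] += weight * (with_i - without_i)
    return values

# For a linear model, each feature's Shapley value is its weight times
# its deviation from the baseline.
predict = lambda v: 2 * v[0] + 3 * v[1]
print(shapley_values(predict, x=[1, 1], baseline=[0, 0]))  # [2.0, 3.0]
```

Note the efficiency property: the values sum to the difference between the model's output at `x` and at the baseline, which is what makes Shapley values useful for attributing a prediction to its features.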

Drift

Introduction to Drift
Sources of Drift: Why Does Drift Happen?
Drift Quiz #1
Identifying Drift: Metrics
Identifying Drift: Challenges
How to Mitigate Drift
Hands-on Learning: Drift
Drift Quiz #2
Demonstration – Going from the Model Summary to Drift Analytics
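One widely used drift metric of the kind covered in "Identifying Drift: Metrics" is the Population Stability Index (PSI), sketched below. The thresholds mentioned in the comment are a common rule of thumb, not taken from the course.

```python
# Population Stability Index (PSI): compares a feature's training-time
# distribution against its production distribution. Common rule of thumb
# (an assumption, not from the course): < 0.1 stable, 0.1-0.25 moderate
# drift, > 0.25 significant drift.
import numpy as np

def psi(expected, actual, bins=10):
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)
print(psi(train, rng.normal(0, 1, 10_000)))    # near 0: no drift
print(psi(train, rng.normal(0.5, 1, 10_000)))  # clearly larger: shifted mean
```

Monitoring a score like this per feature over time is one simple way to detect the sources of drift discussed in this section.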

ML Performance Debugging

Introduction to ML Performance Debugging
ML Peformance Debugging Methodology
ML Performance Metrics – Classification
ML Performance Metrics – Regression
Narrowing Down the Scope of ML Performance Issues
Hands-On Learning: Performance Debugging
Quiz: Performance Debugging
Demonstration – Performance Debugging
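The "narrowing down the scope" step can be previewed with a small sketch: compute a classification metric per data slice to locate where the model underperforms. The segment labels and toy predictions here are made up for illustration.

```python
# Slice-based performance debugging: compare precision/recall across data
# segments to localize a performance problem. Segments and labels are
# illustrative, not from the course materials.
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
segment = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for name in np.unique(segment):
    mask = segment == name
    p = precision_score(y_true[mask], y_pred[mask])
    r = recall_score(y_true[mask], y_pred[mask])
    print(f"segment {name}: precision={p:.2f} recall={r:.2f}")
```

Here segment "a" has perfect precision but misses positives, while segment "b" has false-positive problems, so each slice calls for a different fix; this is the kind of diagnosis the debugging methodology in this section systematizes.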

Bias and Fairness in Machine Learning

Introduction to Bias and Fairness in ML
Worldviews of Fairness in Machine Learning
How to Pick a Fairness Metric
How Does Your ML Model Become Unfair?
Demonstration: Fairness and Bias in ML
Hands-On Learning: Bias and Fairness in ML
Quiz: Fairness and Bias in ML
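As a taste of "How to Pick a Fairness Metric," the sketch below computes one simple option, the demographic parity difference: the gap in positive-prediction rates between groups. The group labels and predictions are illustrative assumptions.

```python
# Demographic parity difference: the gap between the highest and lowest
# positive-prediction rate across groups. One of several fairness metrics;
# the data here is illustrative.
import numpy as np

def demographic_parity_difference(y_pred, group):
    rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}
    return max(rates.values()) - min(rates.values()), rates

y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])
gap, rates = demographic_parity_difference(y_pred, group)
print(rates, gap)  # {'f': 0.25, 'm': 0.75} 0.5
```

A large gap does not by itself prove unfairness, which is why the course's discussion of fairness "worldviews" and metric selection matters before acting on a number like this.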