ML.NET Survey: Model Explainability

Jessie Houghton

Model explainability lets you debug and audit your machine learning models. By understanding how and why your model reacts in certain situations, you can ensure reliability and robustness while avoiding bias.

Tell us about how you want to interpret your models and assess bias in ML.NET by taking this ~10 minute survey.

At the end, you can optionally leave your contact information if you’d like to talk with the ML.NET team about your Model Explainability and Fairness feedback.


1 comment


  • Tom Giles

    I did the survey, but after also just watching the standup recording, I had another thought. I have found it helpful to watch the “shape” of the predictions that are asked of the model and compare it to the shape of the data that was used to train the model. For example, in multi-class classification, is the distribution of the predicted classes similar to the distribution of the classes in the training set? If, over time (in production), that’s not the case, then perhaps the training set doesn’t fairly represent the target population, whatever that may be. I don’t think any specific functionality needs to be added to ML.Net for this – just passing on a simple way of thinking about judging the fairness of a model. Basically, if a model’s predictions in the real world follow the same shape as the training data, then the model is fair. If not, and it’s accurate (in the stats), then it’s likely that the training data is not representative – i.e., is not fair. Enjoyed the standup, by the way. 🙂
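The distribution comparison described in the comment above can be sketched in a few lines. This is a minimal illustration in Python rather than C#/ML.NET, and the class labels and example data are hypothetical; it measures how far the predicted-class distribution has drifted from the training-label distribution using total variation distance.

```python
from collections import Counter

def class_distribution(labels):
    """Return the relative frequency of each class label."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items()}

def distribution_shift(train_labels, predicted_labels):
    """Total variation distance between the training-label distribution
    and the predicted-label distribution.
    0.0 = identical shapes, 1.0 = completely disjoint."""
    p = class_distribution(train_labels)
    q = class_distribution(predicted_labels)
    classes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in classes)

# Hypothetical example: a training set that is 70% class "A",
# versus production predictions that come out 40% class "A".
train = ["A"] * 70 + ["B"] * 30
preds = ["A"] * 40 + ["B"] * 60
shift = distribution_shift(train, preds)  # ~0.3, so the shapes have drifted
```

In practice you would track this value over time in production; a persistently large shift suggests either the training data was not representative of the target population, or the population itself has changed.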
