Model Interpretability
Posted on October 5, 2021 | Big Data, Machine Learning & AI
Why should you trust predictions made by a machine?
Machine learning (ML) models are frequently labelled ‘black boxes’: the inner workings that convert the input data into the output prediction are effectively opaque. Depending on the model, experts can sometimes interpret which variables, or features, are the most important, but this approach is not always possible.
What if you want to find out which features influence your neural network’s predictions the most? Or learn about its behaviour around a specific operating point?
One of the tools we use in these cases is the SHapley Additive exPlanations (SHAP) library (http://ow.ly/3LNI50GilLJ). In contrast to some other methods, SHAP rests on solid theoretical foundations rather than mere heuristics. The SHAP package assigns an importance value to each feature, describing its contribution to a given prediction.
The method works with any model and can be applied to any number of data points, providing insight into both the model’s local and global behaviour.
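To make this concrete, here is a minimal sketch of how SHAP values might be computed for a model. The synthetic data, the RandomForestRegressor, and the feature names are illustrative assumptions, not our actual workflow; the same pattern applies to other model types.

```python
# Minimal SHAP sketch: local and global feature importance (assumed toy setup).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data standing in for real process measurements (assumption).
rng = np.random.default_rng(0)
X = pd.DataFrame(
    rng.normal(size=(500, 4)),
    columns=["temperature", "pressure", "flow_rate", "catalyst"],
)
y = 2.0 * X["temperature"] - 0.5 * X["pressure"] ** 2 + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# shap.Explainer picks an appropriate algorithm for the given model.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Local behaviour: contribution of each feature to a single prediction.
print(shap_values[0].values)

# Global behaviour: mean absolute SHAP value per feature across the dataset.
print(np.abs(shap_values.values).mean(axis=0))
```

The per-row values explain one prediction at a specific operating point, while averaging their magnitudes over the dataset gives a global ranking of feature importance.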
We have used ML model interpretation analysis for optimization, sensitivity studies, and process scale-up.
Look out for our next post on how SHAP works.