Shapley Values

Posted on October 7, 2021

Alex Rigby

How to understand a model
To better interpret machine learning (ML) models, we can use Shapley values.

The Shapley value was first developed in cooperative game theory, for games where multiple players work together to maximize a shared reward or score.

Consider a cooperative game such as a trivia quiz, where players form teams and try to answer questions. To identify the strongest players, we must evaluate them in the context of their teams. For example, perhaps Player A makes many useful contributions when on a team with Players B and C, but fewer when Player D joins, possibly because the two have overlapping knowledge. To accurately evaluate Player A's performance, we average Player A's marginal contribution over all possible team combinations. This is exactly the idea behind Shapley values.
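We can make this concrete with a small sketch. Below, `score` is a hypothetical scoring function for the trivia example (the players and point values are invented for illustration): Player A overlaps with Player D, so A adds less once D is on the team. Each player's Shapley value is their marginal contribution averaged over every order in which the team could be assembled.

```python
from itertools import permutations

# Hypothetical characteristic function: the points a coalition of trivia
# players scores together. Player A's knowledge overlaps with Player D's,
# so A adds less when D is already on the team.
def score(team):
    team = set(team)
    points = 0
    if "A" in team:
        points += 4 if "D" not in team else 2
    if "B" in team:
        points += 2
    if "C" in team:
        points += 2
    if "D" in team:
        points += 3
    return points

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        team = []
        for p in order:
            before = value(team)
            team.append(p)
            totals[p] += value(team) - before
    return {p: t / len(orders) for p, t in totals.items()}

print(shapley_values(["A", "B", "C", "D"], score))
# → {'A': 3.0, 'B': 2.0, 'C': 2.0, 'D': 2.0}
```

Note that the Shapley values sum to the full team's score (9 points here), so they fairly divide the total reward among the players.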

Analogously, we can determine Feature A's contribution to an ML model's prediction from its Shapley value. Here, we measure how the prediction changes when Feature A is included, averaged over all possible combinations of the other features. Having found each feature's contribution, we can better understand, interpret, and explain our ML model.
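The same averaging can be applied to a model, with features playing the role of players. In this sketch (the model, instance, and baseline are all invented for illustration), an "absent" feature is replaced by a baseline value, and each feature's Shapley value is its weighted average marginal effect on the prediction over all subsets of the other features:

```python
from itertools import combinations
from math import factorial

# Toy model (an assumption for illustration): a prediction built from
# three features, with an interaction between features 1 and 2.
def model(z):
    return 3 * z[0] + 2 * z[1] * z[2]

baseline = [0.0, 0.0, 0.0]   # reference input: values used when a feature is "absent"
x = [1.0, 1.0, 1.0]          # the instance whose prediction we want to explain

def coalition_value(subset):
    """Model output when only features in `subset` take their real values."""
    z = [x[i] if i in subset else baseline[i] for i in range(len(x))]
    return model(z)

def feature_shapley(i, n):
    """Exact Shapley value of feature i via the classic weighted sum."""
    others = [j for j in range(n) if j != i]
    total = 0.0
    for size in range(n):
        for subset in combinations(others, size):
            s = set(subset)
            # Shapley weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (coalition_value(s | {i}) - coalition_value(s))
    return total

phis = [feature_shapley(i, 3) for i in range(3)]
print(phis)  # → [3.0, 1.0, 1.0]
```

The contributions sum to the difference between the prediction for `x` and the baseline prediction, and the interacting features 1 and 2 split their joint effect equally. This brute-force enumeration scales exponentially with the number of features, which is why practical tools rely on approximations.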

View original post on LinkedIn