Technical

Shapley Additive Explanations (SHAP), derived from Shapley values in game theory, is a popular and mathematically well-grounded example of a feature importance measure. According to SHAP, the importance of feature $x_j$ for the output of model $f$, $\phi_j(f)$, is a weighted sum of the feature's contribution to the model's output over all possible feature combinations:

$$\phi_j(f) = \sum_{S \subseteq \{x_1, \dots, x_p\} \setminus \{x_j\}} \frac{|S|!\,(p - |S| - 1)!}{p!} \Big( f(S \cup \{x_j\}) - f(S) \Big)$$

Where $x_j$ is feature $j$, $S$ is a subset of features, and $p$ is the number of features in the model.
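For example, instantiating the formula with $p = 2$ features, the importance of $x_1$ is the average of its marginal contribution when it is added alone and when it is added alongside $x_2$, since both coalition weights reduce to $\tfrac{0!\,1!}{2!} = \tfrac{1!\,0!}{2!} = \tfrac{1}{2}$:

$$\phi_1(f) = \tfrac{1}{2}\Big( f(\{x_1\}) - f(\varnothing) \Big) + \tfrac{1}{2}\Big( f(\{x_1, x_2\}) - f(\{x_2\}) \Big)$$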

In practice, $f(S)$ is estimated by substituting in values for the remaining features, $\{x_1, \dots, x_p\} \setminus S$, from a randomly selected observation in a background dataset $D$. Suppose we compute the difference between the model's output for an observation $x$ and its output for a background observation $z$, $f(x) - f(z)$. Each SHAP value $\phi_j$ is the amount of this difference due to feature $x_j$.
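As a rough illustration of this estimation scheme (a minimal sketch, not the `shap` or `gshap` packages' implementation), the code below estimates a single SHAP value by Monte Carlo sampling: each draw picks a background observation and a random feature ordering, substitutes the observation's values for the features that precede feature $j$ in the ordering, and averages feature $j$'s marginal contribution. Because the weight $\frac{|S|!\,(p-|S|-1)!}{p!}$ is exactly the probability that coalition $S$ precedes feature $j$ in a uniform random ordering, this average converges to $\phi_j$. All names here (`sampling_shap`, `f`, `background`) are illustrative.

```python
import numpy as np

def sampling_shap(f, x, background, feature, n_samples=1000, rng=None):
    """Monte Carlo estimate of one SHAP value via background substitution.

    f          : callable mapping a 2D feature array to model outputs
    x          : 1D array, the observation being explained
    background : 2D array of background observations (rows)
    feature    : index j of the feature whose SHAP value we want
    """
    rng = np.random.default_rng(rng)
    p = x.shape[0]
    total = 0.0
    for _ in range(n_samples):
        # Draw a background observation z and a random feature ordering.
        z = background[rng.integers(len(background))]
        order = rng.permutation(p)
        pos = np.where(order == feature)[0][0]

        # Features that precede feature j in the ordering take their values
        # from x, the rest from z; this plays the role of the coalition S.
        with_j, without_j = z.copy(), z.copy()
        with_j[order[: pos + 1]] = x[order[: pos + 1]]
        without_j[order[:pos]] = x[order[:pos]]

        # Marginal contribution of feature j for this coalition.
        total += f(with_j[None, :])[0] - f(without_j[None, :])[0]
    return total / n_samples
```

Summed over all features, these estimates decompose (up to sampling error) the difference between $f(x)$ and the model's average output over the background data.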

G-SHAP allows us to compute the feature importance of any function $g$ of a model's output. Define a G-SHAP value as:

$$\phi_j^g(f) = \sum_{S \subseteq \{x_1, \dots, x_p\} \setminus \{x_j\}} \frac{|S|!\,(p - |S| - 1)!}{p!} \Big( g(f, S \cup \{x_j\};\, \Omega) - g(f, S;\, \Omega) \Big)$$

Where $\Omega$ is a set of additional arguments.

G-SHAP values have a similar interpretation to SHAP values. Suppose we compute the difference between a function of the model's output for a sample $X$ and for a background dataset $D$, $g(f, X;\, \Omega) - g(f, D;\, \Omega)$. Each G-SHAP value $\phi_j^g$ is the amount of this difference due to feature $x_j$.
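The sketch below applies the same sampling scheme to a G-SHAP value. It is a hypothetical illustration, not the `gshap` package's own explainer: it only shows how replacing $f$ with a general function $g$ of the model's output changes the computation. Here `g` might be a classification rate, a group disparity, or a distance between output distributions; `model`, `X`, and the other names are assumptions of this example.

```python
import numpy as np

def sampling_gshap(model, g, X, background, feature, n_samples=200, rng=None):
    """Monte Carlo estimate of one G-SHAP value.

    Same sampling scheme as the SHAP sketch above, except the quantity being
    attributed is g applied to the model's output over the whole sample X,
    rather than the model's output for a single observation.

    model      : callable mapping a 2D feature array to model outputs
    g          : callable mapping an array of model outputs to a scalar
    X          : 2D array, the sample being explained
    background : 2D array of background observations
    feature    : index j of the feature whose G-SHAP value we want
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    total = 0.0
    for _ in range(n_samples):
        # Pair each sample row with a random background row and draw a
        # random feature ordering shared across the sample.
        Z = background[rng.integers(len(background), size=n)]
        order = rng.permutation(p)
        pos = np.where(order == feature)[0][0]

        with_j, without_j = Z.copy(), Z.copy()
        with_j[:, order[: pos + 1]] = X[:, order[: pos + 1]]
        without_j[:, order[:pos]] = X[:, order[:pos]]

        # Marginal contribution of feature j to g for this coalition.
        total += g(model(with_j)) - g(model(without_j))
    return total / n_samples
```

For instance, with `g = lambda outputs: outputs.mean()`, the estimated values roughly decompose the difference between the model's average output on the sample and on the background data; a `g` that measures a disparity between groups would instead attribute that disparity to individual features.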