# Understanding Tree SHAP for Simple Models

The SHAP value for a feature is the average change in the model's output obtained by conditioning on that feature, where the average is taken over all orderings in which features can be introduced one at a time. While this is easy to state, it is challenging to compute. This notebook therefore works through a few simple examples where we can see how it plays out for very small trees; for arbitrarily large trees it is very hard to guess these values intuitively by looking at the tree.
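To make this definition concrete, here is a brute-force sketch that enumerates every feature ordering and averages the change in a marginal (interventional) expectation over a background dataset. This is one common choice of conditioning, and it is only feasible for a handful of features since it costs on the order of M! model-expectation evaluations. The names `brute_force_shap`, `f`, and `background` are illustrative, not part of the shap API; the toy function mirrors the single-split model built below.

```python
import itertools
import math

import numpy as np


def brute_force_shap(f, x, background):
    """Exact Shapley values by enumerating all M! feature orderings.

    The value function v(S) is the mean of f over the background data with
    the features in S replaced by their values from x (a marginal, i.e.
    interventional, expectation).
    """
    M = len(x)
    phi = np.zeros(M)

    def v(S):
        data = background.copy()
        data[:, S] = x[S]
        return f(data).mean()

    for order in itertools.permutations(range(M)):
        S = []
        for i in order:
            before = v(S)
            S = S + [i]
            phi[i] += v(S) - before  # change from introducing feature i
    return phi / math.factorial(M)


# the single-split model below reduces to f(x) = x[0]; with the training
# data as background this reproduces the TreeExplainer output
f = lambda data: data[:, 0]
background = np.zeros((100, 4))
background[:50, 0] = 1
print(brute_force_shap(f, np.ones(4), background))  # [0.5 0.  0.  0. ]
```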

```python
import sklearn
import shap
import numpy as np
import graphviz
```


## Single split example

```python
# build data
N = 100
M = 4
X = np.zeros((N, M))
y = np.zeros(N)
X[:N//2, 0] = 1
y[:N//2] = 1

# fit model
single_split_model = sklearn.tree.DecisionTreeRegressor(max_depth=1)
single_split_model.fit(X, y)

# draw model
dot_data = sklearn.tree.export_graphviz(single_split_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
```

### Explain the model

Note that the bias term is the expected output of the model over the training dataset (0.5). The SHAP value for features not used by the model is always 0, while for $x_0$ it is exactly the difference between the model's output and the expected value.
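This illustrates the local accuracy property of SHAP values: the bias (base value) plus the SHAP values always sums to the model's output for that input. A quick check, hand-coding the learned single-split tree as `f(x) = x[0]` rather than calling the fitted model:

```python
import numpy as np

f = lambda x: x[0]   # the learned single-split tree, hand-coded
base_value = 0.5     # E[f] over the training data above

for x, phi in [(np.ones(4), np.array([0.5, 0.0, 0.0, 0.0])),
               (np.zeros(4), np.array([-0.5, 0.0, 0.0, 0.0]))]:
    # base value + attributions recovers the prediction
    assert np.isclose(base_value + phi.sum(), f(x))
```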

```python
xs = [np.ones(M), np.zeros(M)]
for x in xs:
    print()
    print("          x =", x)
    print("shap_values =", shap.TreeExplainer(single_split_model).shap_values(x))
```

```
          x = [1. 1. 1. 1.]
shap_values = [0.5 0.  0.  0. ]

          x = [0. 0. 0. 0.]
shap_values = [-0.5  0.   0.   0. ]
```


## Two feature AND example

```python
# build data
N = 100
M = 4
X = np.zeros((N, M))
y = np.zeros(N)
X[:N//4, 1] = 1
X[:N//2, 0] = 1
X[N//2:3*N//4, 1] = 1
y[:N//4] = 1

# fit model
and_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
and_model.fit(X, y)

# draw model
dot_data = sklearn.tree.export_graphviz(and_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
```

### Explain the model

Note that the bias term is the expected output of the model over the training dataset (0.25). The SHAP value for features not used by the model is always 0, while for $x_0$ and $x_1$ it is the difference between the model's output and the expected value, split equally between them (since they contribute equally to the AND function).
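Where does 0.375 come from? Because $x_2$ and $x_3$ never affect the output, the Shapley average collapses to just the two orderings of $x_0$ and $x_1$. Here is a sketch using a marginal value function over the training data, hand-coding the learned tree as an AND; `v` is an illustrative helper, not part of the shap API:

```python
import numpy as np

# rebuild the AND training data from above
N, M = 100, 4
X = np.zeros((N, M))
X[:N//4, 1] = 1
X[:N//2, 0] = 1
X[N//2:3*N//4, 1] = 1
f = lambda data: data[:, 0] * data[:, 1]  # hand-coded AND tree


def v(S, x):
    """Mean model output with features in S fixed to x's values."""
    data = X.copy()
    data[:, S] = x[S]
    return f(data).mean()


x = np.ones(M)
# average over the two orderings: x0 introduced first, then x0 introduced last
phi0 = 0.5 * ((v([0], x) - v([], x)) + (v([0, 1], x) - v([1], x)))
phi1 = 0.5 * ((v([1], x) - v([], x)) + (v([0, 1], x) - v([0], x)))
print(phi0, phi1)  # 0.375 0.375
```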

```python
xs = [np.ones(M), np.zeros(M)]
for x in xs:
    print()
    print("          x =", x)
    print("shap_values =", shap.TreeExplainer(and_model).shap_values(x))
```

```
          x = [1. 1. 1. 1.]
shap_values = [0.375 0.375 0.    0.   ]

          x = [0. 0. 0. 0.]
shap_values = [-0.125 -0.125  0.     0.   ]
```


## Two feature OR example

```python
# build data
N = 100
M = 4
X = np.zeros((N, M))
y = np.zeros(N)
X[:N//2, 0] = 1
X[:N//4, 1] = 1
X[N//2:3*N//4, 1] = 1
y[:N//2] = 1
y[N//2:3*N//4] = 1

# fit model
or_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
or_model.fit(X, y)

# draw model
dot_data = sklearn.tree.export_graphviz(or_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
```

### Explain the model

Note that the bias term is the expected output of the model over the training dataset (0.75). The SHAP value for features not used by the model is always 0, while for $x_0$ and $x_1$ it is the difference between the model's output and the expected value, split equally between them (since they contribute equally to the OR function).
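The same two-ordering reduction explains the $-0.375$ values when both features are off: turning off either feature alone only drops the expected output from 0.75 to 0.5, and turning off both drops it to 0, so each feature carries half of the total drop. A sketch under the same assumptions as before (hand-coded OR tree, marginal value function `v` over the training data):

```python
import numpy as np

# rebuild the OR training data from above
N, M = 100, 4
X = np.zeros((N, M))
X[:N//2, 0] = 1
X[:N//4, 1] = 1
X[N//2:3*N//4, 1] = 1
f = lambda data: np.maximum(data[:, 0], data[:, 1])  # hand-coded OR tree


def v(S, x):
    """Mean model output with features in S fixed to x's values."""
    data = X.copy()
    data[:, S] = x[S]
    return f(data).mean()


x = np.zeros(M)  # both features turned off
phi0 = 0.5 * ((v([0], x) - v([], x)) + (v([0, 1], x) - v([1], x)))
print(phi0)  # -0.375
```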

```python
xs = [np.ones(M), np.zeros(M)]
for x in xs:
    print()
    print("          x =", x)
    print("shap_values =", shap.TreeExplainer(or_model).shap_values(x))
```

```
          x = [1. 1. 1. 1.]
shap_values = [0.125 0.125 0.    0.   ]

          x = [0. 0. 0. 0.]
shap_values = [-0.375 -0.375  0.     0.   ]
```


## Two feature XOR example

```python
# build data
N = 100
M = 4
X = np.zeros((N, M))
y = np.zeros(N)
X[:N//2, 0] = 1
X[:N//4, 1] = 1
X[N//2:3*N//4, 1] = 1
y[N//4:N//2] = 1
y[N//2:3*N//4] = 1

# fit model
xor_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
xor_model.fit(X, y)

# draw model
dot_data = sklearn.tree.export_graphviz(xor_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
```

### Explain the model

Note that the bias term is the expected output of the model over the training dataset (0.5). The SHAP value for features not used by the model is always 0, while for $x_0$ and $x_1$ it is the difference between the model's output and the expected value, split equally between them (since they contribute equally to the XOR function).
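It may look odd that both inputs get the same attribution of $-0.25$ whether the features are all on or all off. The reason: for XOR, revealing either feature alone leaves the expected output at 0.5, and only revealing both pins the output down to 0, so the drop is shared equally in both cases. A sketch under the same assumptions as before (hand-coded XOR tree, marginal value function `v` over the training data):

```python
import numpy as np

# rebuild the XOR training data from above
N, M = 100, 4
X = np.zeros((N, M))
X[:N//2, 0] = 1
X[:N//4, 1] = 1
X[N//2:3*N//4, 1] = 1
f = lambda data: (data[:, 0] != data[:, 1]).astype(float)  # hand-coded XOR tree


def v(S, x):
    """Mean model output with features in S fixed to x's values."""
    data = X.copy()
    data[:, S] = x[S]
    return f(data).mean()


for x in [np.ones(M), np.zeros(M)]:
    # knowing x0 alone leaves the expectation at 0.5; only knowing both
    # features drops the output to 0, so each feature gets -0.25
    phi0 = 0.5 * ((v([0], x) - v([], x)) + (v([0, 1], x) - v([1], x)))
    print(x, phi0)  # -0.25 for both inputs
```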

```python
xs = [np.ones(M), np.zeros(M)]
for x in xs:
    print()
    print("          x =", x)
    print("shap_values =", shap.TreeExplainer(xor_model).shap_values(x))
```

```
          x = [1. 1. 1. 1.]
shap_values = [-0.25 -0.25  0.    0.  ]

          x = [0. 0. 0. 0.]
shap_values = [-0.25 -0.25  0.    0.  ]
```


## Two feature AND + feature boost example

```python
# build data
N = 100
M = 4
X = np.zeros((N, M))
y = np.zeros(N)
X[:N//2, 0] = 1
X[:N//4, 1] = 1
X[N//2:3*N//4, 1] = 1
y[:N//4] = 1
y[:N//2] += 1

# fit model
and_fb_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
and_fb_model.fit(X, y)

# draw model
dot_data = sklearn.tree.export_graphviz(and_fb_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
```

### Explain the model

Note that the bias term is the expected output of the model over the training dataset (0.75). The SHAP value for features not used by the model is always 0, while for $x_0$ and $x_1$ it is the difference between the model's output and the expected value, split equally between them (since they contribute equally to the AND function), plus an extra 0.5 impact for $x_0$ since it has a main effect of $1.0$ all by itself (+0.5 when it is on and -0.5 when it is off).
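This tree computes $\text{AND}(x_0, x_1) + x_0$, and Shapley values are linear in the model: the attributions of a sum of models are the sum of the attributions. So the values here are just the AND example's attributions plus the single-split example's. A quick arithmetic check against the printed outputs (an illustration of the linearity property, not a general proof):

```python
import numpy as np

# attributions copied from the earlier examples, for x = all ones
phi_and = np.array([0.375, 0.375, 0.0, 0.0])  # AND example
phi_x0 = np.array([0.5, 0.0, 0.0, 0.0])       # single-split example

phi_sum = phi_and + phi_x0                    # linearity of Shapley values
print(phi_sum)  # [0.875 0.375 0.    0.   ]
```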

```python
xs = [np.ones(M), np.zeros(M)]
for x in xs:
    print()
    print("          x =", x)
    print("shap_values =", shap.TreeExplainer(and_fb_model).shap_values(x))
```

```
          x = [1. 1. 1. 1.]
shap_values = [0.875 0.375 0.    0.   ]

          x = [0. 0. 0. 0.]
shap_values = [-0.625 -0.125  0.     0.   ]
```