shap.TreeExplainer

class shap.TreeExplainer(model, data=None, model_output='raw', feature_perturbation='auto', feature_names=None, approximate=<object object>, link=None, linearize_link=None)

Uses Tree SHAP algorithms to explain the output of ensemble tree models.

Tree SHAP is a fast and exact method to estimate SHAP values for tree models and ensembles of trees, under several different possible assumptions about feature dependence. It depends on fast C++ implementations either inside an external model package or in the local compiled C extension.

Examples

See Tree explainer examples
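
A minimal sketch (the synthetic data, RandomForestRegressor model, and sizes below are illustrative, assuming scikit-learn is installed):

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.RandomState(0)
    X = rng.normal(size=(200, 5))
    y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # With no background dataset, the explainer falls back to the
    # tree_path_dependent approach (see feature_perturbation below).
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:10])  # array of shape (10, 5)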

__init__(model, data=None, model_output='raw', feature_perturbation='auto', feature_names=None, approximate=<object object>, link=None, linearize_link=None)

Build a new Tree explainer for the passed model.

Parameters:
model : model object

The tree based machine learning model that we want to explain. XGBoost, LightGBM, CatBoost, Pyspark and most tree-based scikit-learn models are supported.

data : numpy.array or pandas.DataFrame

The background dataset to use for integrating out features.

This argument is optional when feature_perturbation="tree_path_dependent", since in that case we can use the number of training samples that went down each tree path as our background dataset (this is recorded in the model object).

feature_perturbation : “auto” (default), “interventional” or “tree_path_dependent”

Since SHAP values rely on conditional expectations, we need to decide how to handle correlated (or otherwise dependent) input features.

  • if "interventional", a background dataset data is required. The dependencies between features are handled according to the rules dictated by causal inference [1]. The runtime scales linearly with the size of the background dataset you use: anywhere from 100 to 1000 random background samples are good sizes to use.

  • if "tree_path_dependent", no background dataset is required and the approach is to just follow the trees and use the number of training examples that went down each leaf to represent the background distribution.

  • if "auto", the “interventional” approach will be used when a background is provided, otherwise the “tree_path_dependent” approach will be used.

New in version 0.47: The “auto” option was added.

Changed in version 0.47: The default behaviour changed from “interventional” to “auto”. In the future, passing feature_perturbation="interventional" without providing a background dataset will raise an error.
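
A sketch of the two explicit modes, reusing the illustrative model and X from the example above:

    import shap

    # Interventional: a background dataset is required, and runtime scales
    # linearly with its size (100 rows here is illustrative).
    explainer_int = shap.TreeExplainer(
        model, data=X[:100], feature_perturbation="interventional"
    )

    # Tree-path-dependent: no background dataset; the training sample
    # counts recorded along each tree path serve as the background.
    explainer_tpd = shap.TreeExplainer(
        model, feature_perturbation="tree_path_dependent"
    )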

model_output : “raw”, “probability”, “log_loss”, or model method name

What output of the model should be explained.

  • If “raw”, then we explain the raw output of the trees, which varies by model. For regression models, “raw” is the standard output. For binary classification in XGBoost, this is the log odds ratio.

  • If “probability”, then we explain the output of the model transformed into probability space (note that this means the SHAP values now sum to the probability output of the model).

  • If “log_loss”, then we explain the natural logarithm of the model loss function, so that the SHAP values sum up to the log loss of the model for each sample. This is helpful for breaking down model performance by feature.

  • If model_output is the name of a supported prediction method on the model object, then we explain the output of that model method name. For example, model_output="predict_proba" explains the result of calling model.predict_proba.

Currently the “probability” and “log_loss” options are only supported when feature_perturbation="interventional".
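
As a hedged sketch of explaining probabilities (the RandomForestClassifier and synthetic data are illustrative; note that an interventional background dataset is required):

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.RandomState(0)
    Xc = rng.normal(size=(200, 5))
    yc = (Xc[:, 0] + Xc[:, 1] > 0).astype(int)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xc, yc)

    explainer_p = shap.TreeExplainer(
        clf,
        data=Xc[:100],
        feature_perturbation="interventional",
        model_output="probability",
    )
    sv = explainer_p.shap_values(Xc[:10])
    # For each output, the attributions sum to the predicted probability
    # minus the corresponding expected value.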

approximate : bool

Deprecated since version 0.47.0 and scheduled for removal in version 0.49.0. Please use the approximate argument in the shap_values() or __call__() methods instead.
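
A sketch of the replacement pattern, for any TreeExplainer instance explainer and sample matrix X:

    # Deprecated: shap.TreeExplainer(model, approximate=True)
    # Instead, pass the flag at explanation time:
    sv = explainer.shap_values(X, approximate=True)
    explanation = explainer(X, approximate=True)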

References

[1]

Janzing, Dominik, Lenon Minorics, and Patrick Blöbaum. “Feature relevance quantification in explainable AI: A causal problem.” International Conference on artificial intelligence and statistics. PMLR, 2020.

Methods

__init__(model[, data, model_output, ...])

Build a new Tree explainer for the passed model.

assert_additivity(phi, model_output)

explain_row(*row_args, max_evals, ...)

Explains a single row and returns the tuple (row_values, row_expected_values, row_mask_shapes, main_effects).

load(in_file[, model_loader, masker_loader, ...])

Load an Explainer from the given file stream.

save(out_file[, model_saver, masker_saver])

Write the explainer to the given file stream.

shap_interaction_values(X[, y, tree_limit])

Estimate the SHAP interaction values for a set of samples.

shap_values(X[, y, tree_limit, approximate, ...])

Estimate the SHAP values for a set of samples.

supports_model_with_masker(model, masker)

Determines if this explainer can handle the given model.

explain_row(*row_args, max_evals, main_effects, error_bounds, outputs, silent, **kwargs)

Explains a single row and returns the tuple (row_values, row_expected_values, row_mask_shapes, main_effects).

This is an abstract method meant to be implemented by each subclass.

Returns:
tuple

A tuple of (row_values, row_expected_values, row_mask_shapes), where:

  • row_values is an array of the attribution values for each sample,

  • row_expected_values is an array (or single value) representing the expected value of the model for each sample (this is the same for all samples unless there are fixed inputs present, like labels when explaining the loss),

  • row_mask_shapes is a list of all the input shapes (since row_values is always flattened).

classmethod load(in_file, model_loader=<bound method Model.load of <class 'shap.models._model.Model'>>, masker_loader=<bound method Serializable.load of <class 'shap.maskers._masker.Masker'>>, instantiate=True)

Load an Explainer from the given file stream.

Parameters:
in_file : The file stream to load objects from.

save(out_file, model_saver='.save', masker_saver='.save')

Write the explainer to the given file stream.
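
A sketch of a save/load round trip through binary file streams (the file name is illustrative):

    import shap

    with open("tree_explainer.bin", "wb") as f:
        explainer.save(f)

    with open("tree_explainer.bin", "rb") as f:
        explainer2 = shap.TreeExplainer.load(f)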

shap_interaction_values(X, y=None, tree_limit=None)

Estimate the SHAP interaction values for a set of samples.

Parameters:
X : numpy.array, pandas.DataFrame or catboost.Pool (for catboost)

A matrix of samples (# samples x # features) on which to explain the model’s output.

y : numpy.array

An array of label values for each sample. Used when explaining loss functions (not yet supported).

tree_limit : None (default) or int

Limit the number of trees used by the model. By default, the limit of the original model is used (None). -1 means no limit.

Returns:
np.array

Returns a matrix. The shape depends on the number of model outputs:

  • one output: matrix of shape (#num_samples, #features, #features).

  • multiple outputs: matrix of shape (#num_samples, #features, #features, #num_outputs).

For each sample, the (#features, #features) matrix sums to the difference between the model output for that sample and the expected value of the model output (which is stored in the expected_value attribute of the explainer). Each row of this matrix sums to the SHAP value of that feature for that sample. The diagonal entries of the matrix represent the “main effect” of that feature on the prediction, and the symmetric off-diagonal entries represent the interaction effects between all pairs of features for that sample. For models with vector outputs, the per-output matrices are stacked along the last axis.

Changed in version 0.45.0: Return type for models with multiple outputs changed from list to np.ndarray.
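
A sketch of the shape and additivity properties described above, reusing the illustrative single-output regression explainer (tree_path_dependent) from the first example:

    import numpy as np

    inter = explainer.shap_interaction_values(X[:10])
    print(inter.shape)  # (10, 5, 5) for a single-output model

    # Each row of a per-sample matrix sums to that feature's SHAP value,
    # so the whole matrix sums to the output minus the expected value.
    sv = explainer.shap_values(X[:10])
    np.testing.assert_allclose(inter.sum(axis=2), sv, atol=1e-5)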

shap_values(X: Any, y: ndarray | Series | None = None, tree_limit: int | None = None, approximate: bool = False, check_additivity: bool = True, from_call: bool = False)

Estimate the SHAP values for a set of samples.

Parameters:
X : Any

Can be a dataframe-like object, e.g. numpy.array, pandas.DataFrame or catboost.Pool (for catboost). A matrix of samples (# samples x # features) on which to explain the model’s output.

y : numpy.array

An array of label values for each sample. Used when explaining loss functions.

tree_limit : None (default) or int

Limit the number of trees used by the model. By default, the limit of the original model is used (None). -1 means no limit.

approximate : bool

Run fast, but only roughly approximate the Tree SHAP values. This runs a method previously proposed by Saabas which only considers a single feature ordering. Take care since this does not have the consistency guarantees of Shapley values and places too much weight on lower splits in the tree.

check_additivity : bool

Run a validation check that the sum of the SHAP values equals the output of the model. This check takes only a small amount of time, and will catch potential unforeseen errors. Note that this check only runs right now when explaining the margin of the model.

Returns:
np.array

Estimated SHAP values, usually of shape (# samples x # features).

Each row sums to the difference between the model output for that sample and the expected value of the model output (which is stored as the expected_value attribute of the explainer).

The shape of the returned array depends on the number of model outputs:

  • one output: array of shape (#num_samples, *X.shape[1:]).

  • multiple outputs: array of shape (#num_samples, *X.shape[1:], #num_outputs).

Changed in version 0.45.0: Return type for models with multiple outputs changed from list to np.ndarray.
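
A sketch checking the additivity property stated above, reusing the illustrative regression model and X (expected_value is a scalar for a single-output model):

    import numpy as np

    sv = explainer.shap_values(X[:10], check_additivity=True)
    pred = model.predict(X[:10])
    np.testing.assert_allclose(
        sv.sum(axis=1) + explainer.expected_value, pred, atol=1e-6
    )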

static supports_model_with_masker(model, masker)

Determines if this explainer can handle the given model.

This is an abstract static method meant to be implemented by each subclass.