shap.ExactExplainer

class shap.ExactExplainer(model, masker, link=CPUDispatcher(<function identity>), linearize_link=True, feature_names=None)

Computes SHAP values via an optimized exact enumeration.

This works well with standard Shapley value maskers for models with fewer than ~15 features that vary from the background per sample. It also works well for Owen values from hclustering structured maskers when there are fewer than ~100 features that vary from the background per sample. This explainer minimizes the number of function evaluations needed by ordering the masking sets so as to minimize sequential differences, using Gray codes for standard Shapley values and a greedy sorting method for hclustering structured maskers.
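The idea of exact enumeration can be illustrated with a minimal brute-force sketch (this is not the library's internal implementation): for each feature, average the model over the background for every coalition with and without that feature, and weight the differences by the standard Shapley kernel. A Gray-code helper shows the mask ordering trick, where consecutive masks differ in exactly one bit.

```python
import itertools
import math

import numpy as np

def exact_shapley_values(f, x, background):
    """Illustrative brute-force Shapley values: for each feature, average
    weighted marginal contributions over all coalitions of the others."""
    n = len(x)
    phi = np.zeros(n)

    def value(mask):
        # Features where mask is True keep their value from x; hidden
        # features are drawn from the background rows, then the model
        # output is averaged (the standard SHAP masking scheme).
        data = np.array(background, dtype=float)
        data[:, mask] = np.asarray(x)[mask]
        return f(data).mean()

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                mask = np.zeros(n, dtype=bool)
                mask[list(subset)] = True
                without_i = value(mask)
                mask[i] = True
                with_i = value(mask)
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                phi[i] += weight * (with_i - without_i)
    return phi

def gray_code_masks(n):
    """All 2**n mask indices in reflected-binary Gray-code order, so each
    consecutive pair differs by a single bit (one feature toggled)."""
    return [i ^ (i >> 1) for i in range(2 ** n)]
```

For a linear model the result can be checked in closed form: each feature's attribution is its coefficient times the difference between the sample value and the background mean.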

__init__(model, masker, link=CPUDispatcher(<function identity>), linearize_link=True, feature_names=None)

Build an explainers.Exact object for the given model using the given masker object.

Parameters:
model : function

A callable python object that executes the model given a set of input data samples.

masker : function or numpy.array or pandas.DataFrame

A callable python object used to “mask” out hidden features, of the form masker(mask, *fargs). It takes a binary mask and an input sample and returns a matrix of masked samples. These masked samples are evaluated using the model function and the outputs are then averaged. As a shortcut for the standard masking used by SHAP, you can pass a background data matrix instead of a function and that matrix will be used for masking. To use a clustering game structure you can pass a shap.maskers.TabularPartitions(data) object.
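A masker of this form can be sketched as a plain function over a fixed background matrix (a minimal illustration of the masking contract, not the library's internal implementation; the names `background` and `tabular_masker` are made up here):

```python
import numpy as np

# Hypothetical background data for illustration.
background = np.array([[0.0, 0.0],
                       [1.0, 1.0],
                       [2.0, 2.0]])

def tabular_masker(mask, x):
    """Return one masked copy of `x` per background row: features where
    mask is True keep their value from `x`, the rest are replaced by the
    background. The model is then evaluated on these rows and averaged."""
    mask = np.asarray(mask, dtype=bool)
    out = background.copy()
    out[:, mask] = np.asarray(x, dtype=float)[mask]
    return out
```

Passing a raw data matrix to ExactExplainer sets up masking of exactly this standard form for you.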

link : function

The link function used to map between the output units of the model and the SHAP value units. By default it is shap.links.identity, but shap.links.logit can be useful so that expectations are computed in probability units while explanations remain in the (more naturally additive) log-odds units. For more details on how link functions work see any overview of link functions for generalized linear models.
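The two links mentioned above can be sketched as plain functions (a simplified stand-in for shap.links, not its actual implementation): the logit link maps probabilities to log-odds, so expectations stay in probability units while attributions add in the log-odds scale.

```python
import math

def identity(x):
    """The default link: SHAP values live in the model's output units."""
    return x

def logit(p):
    """Map a probability in (0, 1) to log-odds; with this link, SHAP
    values are additive in log-odds space."""
    return math.log(p / (1.0 - p))

def inverse_logit(z):
    """Map log-odds back to a probability (the logistic sigmoid)."""
    return 1.0 / (1.0 + math.exp(-z))
```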

linearize_link : bool

If we use a non-linear link function to take expectations, then models that are additive with respect to that link function for a single background sample will no longer be additive when using a background masker with many samples. This means, for example, that a linear logistic regression model would show interaction effects arising from the non-linear averaging of expectations. To retain the additivity of the model while still respecting the link function, we linearize the link function by default.

Methods

__init__(model, masker[, link, ...])

Build an explainers.Exact object for the given model using the given masker object.

explain_row(*row_args, max_evals, ...)

Explains a single row and returns the tuple (row_values, row_expected_values, row_mask_shapes).

load(in_file[, model_loader, masker_loader, ...])

Load an Explainer from the given file stream.

save(out_file[, model_saver, masker_saver])

Write the explainer to the given file stream.

supports_model_with_masker(model, masker)

Determines if this explainer can handle the given model.

explain_row(*row_args, max_evals, main_effects, error_bounds, batch_size, outputs, interactions, silent)

Explains a single row and returns the tuple (row_values, row_expected_values, row_mask_shapes).

classmethod load(in_file, model_loader=<bound method Model.load of <class 'shap.models._model.Model'>>, masker_loader=<bound method Serializable.load of <class 'shap.maskers._masker.Masker'>>, instantiate=True)

Load an Explainer from the given file stream.

Parameters:
in_file : The file stream to load objects from.

save(out_file, model_saver='.save', masker_saver='.save')

Write the explainer to the given file stream.

static supports_model_with_masker(model, masker)

Determines if this explainer can handle the given model.

This is an abstract static method meant to be implemented by each subclass.