API Reference

This page contains the API reference for public objects and functions in SHAP. There are also example notebooks available that demonstrate how to use the API of each object/function.


shap.Explanation(values[, base_values, ...])

A slicable set of parallel arrays representing a SHAP explanation.
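The "parallel arrays sliced together" behavior can be illustrated with a minimal sketch. The class name `ExplanationSketch` and its three fields are a hypothetical stand-in for illustration, not the real `shap.Explanation`, which carries more attributes:

```python
import numpy as np

class ExplanationSketch:
    """Minimal illustration of parallel arrays that slice together."""

    def __init__(self, values, base_values, data):
        self.values = np.asarray(values)            # attribution per feature, per sample
        self.base_values = np.asarray(base_values)  # expected model output per sample
        self.data = np.asarray(data)                # original feature values per sample

    def __getitem__(self, idx):
        # Slicing returns a new object with every array sliced in parallel,
        # so values, base_values, and data always stay aligned.
        return ExplanationSketch(self.values[idx], self.base_values[idx], self.data[idx])

exp = ExplanationSketch(
    values=[[0.1, -0.2], [0.3, 0.0]],
    base_values=[0.5, 0.5],
    data=[[1.0, 2.0], [3.0, 4.0]],
)
first = exp[0]  # all three arrays are indexed together
```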


shap.Explainer(model[, masker, link, ...])

Uses Shapley values to explain any machine learning model or Python function.

shap.explainers.Tree(model[, data, ...])

Uses Tree SHAP algorithms to explain the output of ensemble tree models.

shap.explainers.GPUTree(model[, data, ...])

Experimental GPU accelerated version of TreeExplainer.

shap.explainers.Linear(model, masker[, ...])

Computes SHAP values for a linear model, optionally accounting for inter-feature correlations.
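When features are treated as independent, the SHAP value of feature i for a linear model has the closed form w_i (x_i − E[x_i]). A small numpy sketch of that identity, assuming independence and a mean taken over a background dataset; this is not the explainer's actual code path:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # background dataset
w = np.array([2.0, -1.0, 0.5])  # linear model coefficients
b = 0.1                         # intercept

x = np.array([1.0, 1.0, 1.0])   # instance to explain
mu = X.mean(axis=0)             # E[x] under the background distribution

# SHAP values of an independent-feature linear model: w_i * (x_i - E[x_i])
phi = w * (x - mu)

# Efficiency property: attributions sum to f(x) - E[f(X)]
f = lambda X_: X_ @ w + b
assert np.isclose(phi.sum(), f(x) - f(X).mean())
```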

shap.explainers.Permutation(model, masker[, ...])

This method approximates the Shapley values by iterating through permutations of the inputs.
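The underlying idea can be sketched exactly for a toy model: average each feature's marginal contribution over every permutation of the inputs, holding "missing" features at a background value. The helper `permutation_shap` and the mean-imputation baseline below are simplifying assumptions for illustration; the real explainer samples permutations and masks via a masker rather than enumerating them:

```python
import itertools
import numpy as np

def permutation_shap(f, x, background):
    """Exact Shapley values by averaging marginal contributions over
    all feature permutations (tractable only for a few features)."""
    n = x.shape[0]
    mu = background.mean(axis=0)  # "missing" features take their background mean
    phi = np.zeros(n)
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        z = mu.copy()             # start from the all-masked baseline
        prev = f(z)
        for j in perm:            # reveal features one at a time
            z[j] = x[j]
            cur = f(z)
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    return phi / len(perms)

f = lambda z: z[0] * 2.0 + z[1] * z[2]  # toy model with an interaction
background = np.zeros((1, 3))
x = np.array([1.0, 2.0, 3.0])
phi = permutation_shap(f, x, background)
# The interaction term x1 * x2 = 6 is split evenly between features 1 and 2.
```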

shap.explainers.Partition(model, masker, *)

shap.explainers.Sampling(model, data, **kwargs)

This is an extension of the Shapley sampling values explanation method.

shap.explainers.Additive(model, masker[, ...])

Computes SHAP values for generalized additive models.
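For a generalized additive model f(x) = Σ f_i(x_i), each feature's SHAP value is just its own component centered by that component's expectation. A minimal numpy sketch of that identity (the toy components `f0`/`f1` are assumptions, not anything from the library):

```python
import numpy as np

# A toy generalized additive model: f(x) = f0(x0) + f1(x1)
f0 = lambda v: v ** 2
f1 = lambda v: 3.0 * v

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))  # background dataset
x = np.array([2.0, -1.0])       # instance to explain

# For an additive model, each attribution is that feature's
# component, centered by its expectation over the background.
phi = np.array([
    f0(x[0]) - f0(X[:, 0]).mean(),
    f1(x[1]) - f1(X[:, 1]).mean(),
])

f = lambda z: f0(z[..., 0]) + f1(z[..., 1])
assert np.isclose(phi.sum(), f(x) - f(X).mean())
```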


Simply returns the model coefficients as the feature attributions.

shap.explainers.other.Random(model, masker)

Simply returns random (normally distributed) feature attributions.

shap.explainers.other.LimeTabular(model, data)

Simply wraps lime.lime_tabular.LimeTabularExplainer in the common SHAP interface.

shap.explainers.other.Maple(model, data)

Simply wraps MAPLE in the common SHAP interface.

shap.explainers.other.TreeMaple(model, data)

Simply wraps tree MAPLE in the common SHAP interface.


Simply returns the global gain/gini feature importances for tree models.


shap.plots.bar(shap_values[, max_display, ...])

Create a bar plot of a set of SHAP values.

shap.plots.waterfall(shap_values[, ...])

Plots an explanation of a single prediction as a waterfall plot.

shap.plots.scatter(shap_values[, color, ...])

Create a SHAP dependence scatter plot, colored by an interaction feature.

shap.plots.heatmap(shap_values[, ...])

Create a heatmap plot of a set of SHAP values.

shap.plots.force(base_value[, shap_values, ...])

Visualize the given SHAP values with an additive force layout.

shap.plots.text(shap_values[, ...])

Plots an explanation of a string of text using coloring and interactive labels.

shap.plots.image(shap_values[, ...])

Plots SHAP values for image inputs.

shap.plots.partial_dependence(ind, model, data)

A basic partial dependence plot function.
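A partial dependence curve averages the model's output over the data while one feature is forced to each value on a grid. A minimal sketch of that computation; the helper name, `grid` parameter, and toy model below are assumptions for illustration, not the actual plotting function's signature:

```python
import numpy as np

def partial_dependence(ind, model, data, grid=20):
    """Average model output as feature `ind` sweeps a grid,
    with all other features held at their observed values."""
    xs = np.linspace(data[:, ind].min(), data[:, ind].max(), grid)
    pd_vals = []
    for v in xs:
        X = data.copy()
        X[:, ind] = v  # force the feature to the grid value
        pd_vals.append(model(X).mean())
    return xs, np.array(pd_vals)

model = lambda X: 2.0 * X[:, 0] + X[:, 1] ** 2
rng = np.random.default_rng(2)
data = rng.uniform(-1, 1, size=(200, 2))

# The model is linear in feature 0, so its curve has slope 2.
xs, pd0 = partial_dependence(0, model, data)
```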



This is the superclass of all maskers.

shap.maskers.Independent(data[, max_samples])

This masks out tabular features by integrating over the given background dataset.
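"Integrating over the background dataset" means: keep unmasked features from the instance, draw masked features from each background row, and average the model outputs. A minimal numpy sketch of that step (the helper `masked_expectation` is a hypothetical name, not part of the masker's API):

```python
import numpy as np

def masked_expectation(model, x, mask, background):
    """Evaluate the model with masked features integrated over a
    background dataset: unmasked features come from x, masked ones
    from each background row, and the outputs are averaged."""
    X = np.where(mask, x, background)  # broadcast x over background rows
    return model(X).mean()

model = lambda X: X[:, 0] + 10.0 * X[:, 1]
background = np.array([[0.0, 0.0], [2.0, 4.0]])
x = np.array([1.0, 1.0])

# Feature 0 is kept from x; feature 1 is averaged over the background,
# so the result is mean(1 + 10*0, 1 + 10*4) = 21.0.
out = masked_expectation(model, x, np.array([True, False]), background)
```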

shap.maskers.Partition(data[, max_samples, ...])

This masks out tabular features by integrating over the given background dataset.

shap.maskers.Text([tokenizer, mask_token, ...])

This masks out tokens according to the given tokenizer.

shap.maskers.Image(mask_value[, shape])

This masks out image regions with blurring or inpainting.



This is the superclass of all models.


shap.utils.hclust(X[, y, linkage, metric, ...])

shap.utils.sample(X[, nsamples, random_state])


shap.utils.MaskedModel(model, masker, link, ...)

This is a utility class that combines a model, a masker object, and a current input.



Return the Adult census data in a nice package.


Return the Boston housing data in a nice package.


Predict the total number of non-violent crimes per 100K population.


Correlated Groups 60


Return the diabetes data in a nice package.

shap.datasets.imagenet50([display, resolution])

This is a set of 50 images representative of ImageNet images.


Return the classic IMDB sentiment analysis training data in a nice package.


A simulated dataset with tight correlations among distinct groups of features.


Return the classic iris data in a nice package.


A nicely packaged version of NHANES I data with survival times as labels.