Machine Translation Explanations

This notebook demonstrates model explanations for a text-to-text scenario, using pretrained transformer models for machine translation. In this demo, we showcase explanations for two models: English to Spanish (https://huggingface.co/Helsinki-NLP/opus-mt-en-es) and English to French (https://huggingface.co/Helsinki-NLP/opus-mt-en-fr).

[1]:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

import shap

English to Spanish model

[2]:
# load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-es").cuda()

# define the input sentences we want to translate
data = [
    "Transformers have rapidly become the model of choice for NLP problems, replacing older recurrent neural network models"
]
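
The call to .cuda() above assumes a CUDA-capable GPU is available. A minimal device-agnostic variant (a sketch, not part of the original notebook) picks the device at runtime and falls back to the CPU:

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# use a GPU if one is available, otherwise stay on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-es").to(device)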

Explain the model’s predictions

[3]:
# we build an explainer by passing the model we want to explain and
# the tokenizer we want to use to break up the input strings
explainer = shap.Explainer(model, tokenizer)

# explainers are callable, just like models
shap_values = explainer(data, fixed_context=1)
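
The returned object is a shap Explanation that can be inspected directly. A minimal sketch, assuming the cells above have been run (the attribute names follow the shap Explanation API; verify them against your installed shap version):

# shap_values.values holds the per-token attributions, shap_values.data the
# tokenized input strings, and shap_values.output_names the generated output tokens
print(shap_values.shape)
print(shap_values.data[0])
print(shap_values.output_names)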

Visualize SHAP explanations

[4]:
shap.plots.text(shap_values)


[SHAP text plot: the translated output "Los transformadores se han convertido rápidamente en el modelo de elección para problemas NLP, reemplazando modelos de red neuronal recurrentes más antiguos", with an interactive view of each input token's SHAP value toward each output token. For the first output token "Los", the largest positive contributions come from "ers" (5.114), "▁Transform" (1.965), and "▁have" (1.903).]
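
The plot renders inline in a notebook. If you want to keep it as a standalone file, shap.plots.text can return the raw HTML instead of displaying it (the display keyword exists in recent shap versions; check yours), for example:

# write the explanation plot to an HTML file ("mt_en_es_explanation.html" is a
# hypothetical file name) instead of rendering it inline
html = shap.plots.text(shap_values, display=False)
with open("mt_en_es_explanation.html", "w", encoding="utf-8") as f:
    f.write(html)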

English to French model

[5]:
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-fr").cuda()
[6]:
explainer = shap.Explainer(model, tokenizer)
shap_values = explainer(data)
Partition explainer: 2it [00:12,  6.35s/it]
[7]:
shap.plots.text(shap_values)


[SHAP text plot: the translated output "Les transformateurs sont rapidement devenus le modèle de choix pour les problèmes de NLP, remplaçant les anciens modèles de réseaux neuronaux récurrents", with the same per-token view. For the first output token "Les", the largest positive contributions come from "s" (2.359), "▁Trans" (1.472), and "▁have" (1.248).]
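
The explainer accepts a list of strings, so the same pipeline covers a batch of sentences. A small sketch (the example sentences below are placeholders, not part of the original notebook):

# explain several translations at once; each sentence gets its own row in the plot
more_data = [
    "Attention lets the model focus on the most relevant source tokens",
    "Machine translation quality has improved rapidly in recent years",
]
shap_values_batch = explainer(more_data)
shap.plots.text(shap_values_batch)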

Have an idea for more helpful examples? Pull requests that add to this documentation notebook are encouraged!