
Generating Visual Explanations

Lisa Anne Hendricks1 Zeynep Akata2 Marcus Rohrbach1,3

Jeff Donahue1 Bernt Schiele2 Trevor Darrell1

1UC Berkeley EECS, CA, United States
2Max Planck Institute for Informatics, Saarbrücken, Germany

3ICSI, Berkeley, CA, United States

Abstract. Clearly explaining a rationale for a classification decision
to an end-user can be as important as the decision itself. Existing ap-
proaches for deep visual recognition are generally opaque and do not
output any justification text; contemporary vision-language models can
describe image content but fail to take into account class-discriminative
image aspects which justify visual predictions. We propose a new model
that focuses on the discriminating properties of the visible object, jointly
predicts a class label, and explains why the predicted label is appropri-
ate for the image. We propose a novel loss function based on sampling
and reinforcement learning that learns to generate sentences that real-
ize a global sentence property, such as class specificity. Our results on
a fine-grained bird species classification dataset show that our model is
able to generate explanations which are not only consistent with an im-
age but also more discriminative than descriptions produced by existing
captioning methods.

1 Introduction

Explaining why the output of a visual system is compatible with visual evidence
is a key component for understanding and interacting with AI systems [1]. Deep
classification methods have had tremendous success in visual recognition [2,3,4],
but their predictions can be unsatisfactory if the model cannot provide a consis-
tent justification of why it made a certain prediction. In contrast, systems which
can justify why a prediction is consistent with visual elements to a user are more
likely to be trusted [5].

We consider explanations as determining why a certain decision is consistent
with visual evidence, and differentiate between introspection explanation systems
which explain how a model determines its final output (e.g., “This is a Western
Grebe because filter 2 has a high activation…”) and justification explanation
systems which produce sentences detailing how visual evidence is compatible
with a system output (e.g., “This is a Western Grebe because it has red eyes…”).
We concentrate on justification explanation systems because such systems may
be more useful to non-experts who do not have detailed knowledge of modern
computer vision systems [1].

We argue that visual explanations must satisfy two criteria: they must both
be class discriminative and accurately describe a specific image instance. As

arXiv:1603.08507v1 [cs.CV] 28 Mar 2016


(Figure 1 layout: examples arranged along Image Relevance and Class Relevance axes; image descriptions are image relevant, class definitions are class relevant, and visual explanations are both.)

Western Grebe
Description: This is a large bird with a white neck and a black back in the water.
Class Definition: The Western Grebe is a waterbird with a yellow pointy beak, white neck and belly, and black back.
Explanation: This is a Western Grebe because this bird has a long white neck, pointy yellow beak and red eye.

Laysan Albatross
Description: This is a large flying bird with black wings and a white belly.
Class Definition: The Laysan Albatross is a large seabird with a hooked yellow beak, black back and white belly.
Visual Explanation: This is a Laysan Albatross because this bird has a large wingspan, hooked yellow beak, and white belly.

Laysan Albatross
Description: This is a large bird with a white neck and a black back in the water.
Class Definition: The Laysan Albatross is a large seabird with a hooked yellow beak, black back and white belly.
Visual Explanation: This is a Laysan Albatross because this bird has a hooked yellow beak white neck and black back.

Fig. 1. Our proposed model generates visual explanations. Visual explanations are both
image relevant and class relevant. In contrast, image descriptions are image relevant,
but not necessarily class relevant, and class definitions are class relevant but not nec-
essarily image relevant. In the visual explanations above, class discriminative visual
features that are also present in the image are discussed.

shown in Figure 1, explanations are distinct from descriptions, which provide
a sentence based only on visual information, and definitions, which provide a
sentence based only on class information. Unlike descriptions and definitions,
visual explanations detail why a certain category is appropriate for a given image
while only mentioning image relevant features. As an example, let us consider
an image classification system that predicts a certain image belongs to the class
“western grebe” (Figure 1, top). A standard captioning system might provide a
description such as “This is a large bird with a white neck and black back in the
water.” However, as this description does not mention discriminative features, it
could also be applied to a “laysan albatross” (Figure 1, bottom). In contrast, we
propose to provide explanations, such as “This is a western grebe because this
bird has a long white neck, pointy yellow beak, and a red eye.” The explanation
includes the “red eye” property, which is crucial for distinguishing between
“western grebe” and “laysan albatross”. As such, our system explains why the
predicted category is the most appropriate for the image.

We outline our approach in Figure 2. We condition language generation on
both an image and a predicted class label which allows us to generate class-
specific sentences. Unlike other caption models, which condition on visual fea-
tures from a network pre-trained on ImageNet [6], our model also includes a
fine-grained recognition pipeline to produce strong image features [3]. Like many
contemporary description models [7,8,9,10,11], our model learns to generate a
sequence of words using an LSTM [12]. However, we design a novel loss function
which encourages generated sentences to include class discriminative informa-
tion. One challenge in designing a loss to optimize for class specificity is that
class specificity is a global sentence property: whereas the sentence “This is
an all black bird with a bright red eye” is class specific to a “Bronzed Cowbird”,
individual words and phrases in the sentence, such as “black” or “red eye”, are less class
discriminative on their own. Our proposed generation loss enforces that gener-
ated sequences fulfill a certain global property, such as category specificity. Our
final output is a sampled sentence, so we backpropagate the discriminative loss


(Figure 2 layout: a deep fine-grained classifier, consisting of VGG, a compact bilinear feature, and a predicted label, is concatenated with the input of a recurrent explanation generator built from two stacked LSTMs per time step, which emits the explanation word by word, e.g., "This is a cardinal because ... it has a bright red ...".)

Fig. 2. Generation of explanatory text with our joint classification and language model.
Our model extracts visual features using a fine-grained classifier before language gen-
eration. Additionally, unlike description models we also condition sentence generation
on the predicted class label.

through the sentence sampling mechanism via a technique from the reinforce-
ment learning literature. While typical sentence generation losses optimize the
alignment between generated and ground truth sentences, our discriminative loss
specifically optimizes for class-specificity.

To the best of our knowledge, ours is the first method to produce deep visual
explanations using natural language justifications. We describe below a novel
joint vision and language explanation model which combines classification and
sentence generation and incorporates a loss function operating over sampled
sentences. We show that this formulation is able to focus generated text to be
more discriminative and that our model produces better explanations than a
description-only baseline. Our results also confirm that generated sentence qual-
ity improves with respect to traditional sentence generation metrics by including
a discriminative class label loss during training. This result holds even when class
conditioning is ablated at test time.

2 Related Work

Explanation. Automatic reasoning and explanation has a long and rich history
within the artificial intelligence community [1,13,14,15,16,17,18,19]. Explanation
systems span a variety of applications including explaining medical diagnosis [13],
simulator actions [14,15,16,19], and robot movements [17]. Many of these systems
are rule-based [13] or solely reliant on filling in a predetermined template [16].
Methods such as [13] require expert-level explanations and decision processes.
In contrast, our visual explanation method is learned directly from data by
optimizing explanations to fulfill our two proposed visual explanation criteria.
Our model is not provided with expert explanations or decision processes, but
rather learns from visual features and text descriptions. In contrast to systems
like [13,14,15,16,17,18] which aim to explain the underlying mechanism behind
a decision, authors in [1] concentrate on why a prediction is justifiable to a user.
Such systems are advantageous because they do not rely on user familiarity with
the design of an intelligent system in order to provide useful information.


A variety of computer vision methods have focused on discovering visual
features which can help “explain” an image classification decision [20,21,22]. Im-
portantly, these models do not attempt to link discovered discriminative features
to natural language expressions. We believe methods to discover discriminative
visual features are complementary to our proposed system, as such features could
be used as additional inputs to our model and aid in producing better explanations.

Visual Description. Early image description methods rely on first detecting
visual concepts in a scene (e.g., subject, verb, and object) before generating
a sentence with either a simple language model or sentence template [23,24].
Recent deep models [7,8,9,10,11,25,26] have far outperformed such systems and
are capable of producing fluent, accurate descriptions of images. Many of these
systems learn to map from images to sentences directly, with no guidance on
intermediate features (e.g., prevalent objects in the scene). Likewise, our model
attempts to learn a visual explanation given only an image and predicted la-
bel with no intermediate guidance, such as object attributes or part locations.
Though most description models condition sentence generation only on image
features, [27] propose conditioning generation on auxiliary information, such as
the words used to describe a similar image in the train set. However, [27] does not
explore conditioning generation on category labels for fine-grained descriptions.

The most common loss function used to train LSTM based sentence genera-
tion models [7,8,9,10,26] is a cross-entropy loss between the probability distribu-
tion of predicted and ground truth words. Frequently, however, the cross-entropy
loss does not directly optimize for properties that are desired at test time. [28]
proposes an alternative training scheme for generating unambiguous region de-
scriptions which maximizes the probability of a specific region description while
minimizing the probability of other region descriptions. In this work, we propose
a novel loss function for sentence generation which allows us to specify a global
constraint on generated sentences.

Fine-grained Classification. Object classification, and fine-grained classification
in particular, is an attractive setting for demonstrating explanation systems because
describing image content alone is not sufficient for an explanation. Explanation models
must focus on aspects that are both class-specific and depicted in the image.

Most fine-grained zero-shot and few-shot image classification systems use
attributes [29] as auxiliary information that can support visual information.
Attributes can be thought of as a means to discretize a high dimensional fea-
ture space into a series of simple and readily interpretable decision statements
that can act as an explanation. However, attributes have several disadvantages.
They require costly annotation by fine-grained object experts. For each
additional class, the list of attributes needs to be revised to ensure discrimina-
tiveness, so attributes do not generalize well. Finally, though a list of image at-
tributes could help explain a fine-grained classification, attributes do not provide
the natural language explanation a user expects. We therefore use the natural
language descriptions collected in [30], which achieved superior performance on
zero-shot learning compared to attributes.


(Figure 3 layout: the deep fine-grained classifier produces a compact bilinear feature and a predicted category for the image, here "Cardinal"; both are concatenated with the word inputs of the stacked LSTM generator, which outputs p(wt|w0:t−1, I, C) at each step. The relevance loss is a cross-entropy loss between these word distributions and the target sentence "a bright red bird with an orange beak."; the discriminative loss passes a sampled sentence, "a red bird with black cheeks.", through a sentence classifier whose output for the image category defines the reward function.)

Fig. 3. Training our explanation model. Our explanation model differs from other
caption models because it (1) includes the object category as an additional input and
(2) incorporates a reinforcement learning based discriminative loss

Reinforcement Learning in Computer Vision. Vision models which incor-
porate algorithms from reinforcement learning, specifically how to backpropagate
through a sampling mechanism, have recently been applied to visual question
answering [31] and activity detection [32]. Additionally, [10] use a sampling
mechanism to attend to specific image regions for caption generation, but use
the standard cross-entropy loss during training.

3 Visual Explanation Model

Our visual explanation model (Figure 3) aims to produce an explanation which
(1) describes visual content present in a specific image instance and (2) con-
tains appropriate information to explain why an image instance belongs to a
specific category. We ensure generated descriptions meet these two requirements
for explanation by including both a relevance loss (Figure 3, bottom right) and
discriminative loss (Figure 3, top right). Our main technical contribution is the
inclusion of a loss which acts on sampled word sequences during training. Our
proposed loss enables us to enforce global constraints on generated sentences, and
by applying the loss to sampled sentences, we ensure that the final output of
our system fulfills our criteria for an explanation. In the following sections we
consider a sentence to be a word sequence comprising either a complete sentence
or a sentence fragment.

3.1 Relevance Loss

Image relevance can be achieved by training a visual description model.
Our model is based on LRCN [8], which consists of a convolutional neural net-
work, which extracts powerful high level visual features, and two stacked recur-
rent networks (specifically LSTMs), which learn how to generate a description
conditioned on visual features. During inference, the first LSTM receives the
previously generated word wt−1 as input (at time t = 0 the model receives
a “start-of-sentence” token), and produces an output lt. The second LSTM receives
the output of the first LSTM lt as well as an image feature f and produces
a probability distribution p(wt) over the next word. At each time step, the word
wt is generated by sampling from the distribution p(wt). Generation continues
until an “end-of-sentence” token is generated.
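
For concreteness, the decoding loop above can be sketched as follows. This is a minimal PyTorch illustration rather than the authors' Caffe implementation; the class name StackedDecoder, the dimensions, and the start_id/end_id tokens are assumptions made for exposition.

import torch
import torch.nn as nn


class StackedDecoder(nn.Module):
    """Two stacked LSTMs as in LRCN: the first consumes the previous word, the
    second consumes the first LSTM's output concatenated with the conditioning
    feature f (the image feature, optionally concatenated with a class vector)."""

    def __init__(self, vocab_size, embed_dim=1000, hidden_dim=1000, feat_dim=8192):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm1 = nn.LSTMCell(embed_dim, hidden_dim)
        self.lstm2 = nn.LSTMCell(hidden_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def step(self, word_ids, feat, state1=None, state2=None):
        h1, c1 = self.lstm1(self.embed(word_ids), state1)
        h2, c2 = self.lstm2(torch.cat([h1, feat], dim=1), state2)
        return self.out(h2), (h1, c1), (h2, c2)        # logits over the next word


def generate(decoder, feat, start_id, end_id, max_len=30):
    """Sample each word wt from p(wt) until the end-of-sentence token appears."""
    word = torch.full((feat.size(0),), start_id, dtype=torch.long)
    state1 = state2 = None
    sentence = []
    for _ in range(max_len):
        logits, state1, state2 = decoder.step(word, feat, state1, state2)
        word = torch.distributions.Categorical(logits=logits).sample()
        if (word == end_id).all():           # simplified batch-wide stopping rule
            break
        sentence.append(word)
    return torch.stack(sentence, dim=1) if sentence else word.unsqueeze(1)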

We propose two modifications to the LRCN framework to increase the im-
age relevance of generated sequences (Figure 3, top left). First, our explanation
model uses category predictions as an additional input to the second LSTM in
the sentence generation model. Intuitively, category information can help inform
the caption generation model which words and attributes are more likely to occur
in a description. For example, if the caption generation model conditioned only
on images mistakes a red eye for a red eyebrow, category level information could
indicate the red eye is more likely for a given class. We experimented with a few
methods to represent class labels, and found the following vector representation
worked best: we first train a language model, e.g., an LSTM, to generate word
sequences conditioned on images, then compute the average hidden state of the
LSTM across all sequences for all classes in the train set.
category specific features [3] to generate relevant explanations.
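
A minimal sketch of one way to build such a label vector, under the (assumed) per-class reading of the averaging described above; language_model, which should return the trained LSTM's final hidden state for a tokenized sentence, and the (class_id, tokens) pairs are hypothetical names, not the authors' exact procedure.

from collections import defaultdict

import torch


def class_label_vectors(language_model, train_sentences):
    """Average the language model's final hidden state over the training
    sentences of each class to obtain one label vector per class (assumption)."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    with torch.no_grad():
        for class_id, tokens in train_sentences:
            h = language_model(tokens)       # final hidden state for this sentence
            sums[class_id] = h if sums[class_id] is None else sums[class_id] + h
            counts[class_id] += 1
    return {c: sums[c] / counts[c] for c in sums}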

Each training instance consists of an image, category label, and a ground
truth sentence. During training, the model receives the ground truth word wt
for each time step t ∈ T . We define the relevance loss as:

LR = (1/N) ∑_{n=0}^{N−1} ∑_{t=0}^{T−1} log p(wt+1|w0:t, I, C)

where wt is a ground truth word, I is the image, C is the category, and N is
the batch size. By training the model to predict each word in a ground truth
sentence, the model is trained to produce sentences which correspond to image
content. However, this loss does not explicitly encourage generated sentences to
discuss discerning visual properties. In order to generate sentences which are
both image relevant and category specific, we include a discriminative loss to
focus sentence generation on discriminative visual properties of an image.
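
The relevance loss can be sketched as a teacher-forced cross-entropy (the negative of the log-likelihood written above, so that it is minimized). The snippet reuses the hypothetical StackedDecoder.step from the earlier sketch and assumes feat already contains the concatenated image feature and class vector; padding and masking are omitted.

import torch.nn.functional as F


def relevance_loss(decoder, feat, captions):
    """captions: (N, T) tensor of ground truth word ids including the start-
    and end-of-sentence tokens. Returns the summed cross-entropy
    -sum_t log p(w_{t+1} | w_{0:t}, I, C), averaged over the batch of size N."""
    N, T = captions.shape
    state1 = state2 = None
    loss = 0.0
    for t in range(T - 1):
        logits, state1, state2 = decoder.step(captions[:, t], feat, state1, state2)
        loss = loss + F.cross_entropy(logits, captions[:, t + 1], reduction="sum")
    return loss / N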

3.2 Discriminative Loss

Our discriminative loss is based on a reinforcement learning paradigm for learn-
ing with layers which require intermediate activations of a network to be sam-
pled. In our formulation, we first sample a sentence and then input the sampled
sentence into a discriminative loss function. By sampling the sentence before
computing the loss, we ensure that sentences sampled from our model are more
likely to be class discriminative. We first overview how to backpropagate through
the sampling mechanism, then discuss how we calculate the discriminative loss.

The overall function we minimize in the explanation network weights W
is LR − λEw̃∼p(w) [RD(w̃)], a linear combination of the relevance loss LR and
the expectation of the negative discriminator reward −RD(w̃) over descriptions
w̃ ∼ p(w|I, C), where p(w|I, C) is the model’s estimated conditional distribution
over descriptions w given the image I and category C. Since this expectation
over descriptions is intractable, we estimate it at training time using Monte
Carlo sampling of descriptions from the categorical distribution given by the
model’s softmax output at each timestep. As a discrete distribution, the sampling
operation for the categorical distribution is non-smooth in the distribution’s
parameters {pi}, so the gradient ∇WRD(w̃) of the reward RD for a given sample
w̃ with respect to the weights W is undefined.

Following REINFORCE [33], we make use of the following equivalence prop-
erty of the expected reward gradient:

∇WEw̃∼p(w) [RD(w̃)] = Ew̃∼p(w) [RD(w̃)∇W log p(w̃)]

In the reformulation on the right-hand side, the gradient ∇W log p(w̃) is well-
defined: log p(w̃) is the log-likelihood of the sampled description w̃, just as LR
was the log-likelihood of the ground truth description. In this case, however, the
sampled gradient term is weighted by the reward RD(w̃), pushing the weights
to increase the likelihood assigned to the most highly rewarded (and hence most
discriminative) descriptions.

Therefore, the final gradient we compute to update the weights W , given a
description w̃ sampled from the model’s softmax distribution, is:

∇WLR − λRD(w̃)∇W log p(w̃).
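
In an automatic differentiation framework this update can be realized with a reward-weighted surrogate: accumulate log p(w̃) while sampling, detach the reward, and let the optimizer differentiate the product. The sketch below assumes the hypothetical decoder.step from the earlier snippets and a sentence_classifier that maps a word-id sequence to class logits (the reward RD(w̃) and the classifier itself are described in the next paragraphs); early stopping at the end-of-sentence token is omitted.

import torch


def sample_with_logprob(decoder, feat, start_id, max_len=30):
    """Sample w̃ ~ p(w|I, C) and accumulate log p(w̃) = sum_t log p(w̃_t | ...)."""
    word = torch.full((feat.size(0),), start_id, dtype=torch.long)
    state1 = state2 = None
    log_prob = torch.zeros(feat.size(0))
    words = []
    for _ in range(max_len):
        logits, state1, state2 = decoder.step(word, feat, state1, state2)
        dist = torch.distributions.Categorical(logits=logits)
        word = dist.sample()
        log_prob = log_prob + dist.log_prob(word)
        words.append(word)
    return torch.stack(words, dim=1), log_prob


def discriminative_term(decoder, sentence_classifier, feat, class_ids, start_id, lam=1.0):
    """Surrogate for -λ E[R_D(w̃)] whose gradient is -λ R_D(w̃) ∇_W log p(w̃)."""
    sampled, log_prob = sample_with_logprob(decoder, feat, start_id)
    with torch.no_grad():                            # the sentence classifier stays frozen
        probs = sentence_classifier(sampled).softmax(dim=-1)
        reward = probs[torch.arange(len(class_ids)), class_ids]   # R_D(w̃) = p(C | w̃)
    return -(lam * reward * log_prob).mean()

Adding the relevance loss to this term gives the overall objective; because the reward is detached, gradients flow only through log p(w̃), matching the REINFORCE expression above.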

RD(w̃) should be high when sampled sentences are discriminative. We define
our reward simply as RD(w̃) = p(C|w̃), or the probability of the ground truth
category C given only the generated sentence w̃. By placing the discriminative
loss after the sampled sentence, the sentence acts as an information bottleneck.
For the model to produce an output with a large reward, the generated sentence
must include enough information to classify the original image properly. For the
sentence classifier, we train a single layer LSTM-based classification network
to classify ground truth sentences. Our sentence classifier correctly predicts the
class of unseen validation set sentences 22% of the time. This number is possibly
low because descriptions in the dataset do not necessarily contain discriminative
properties (e.g., “This is a white bird with grey wings.” is a valid description
but can apply to multiple bird species). Nonetheless, we find that this classifier
provides enough information to train our explanation model. We do not update
the sentence classifier weights when training our explanation model.
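
For completeness, a minimal sketch of such a single-layer LSTM sentence classifier (again in PyTorch, with illustrative dimensions); it is trained on ground truth sentences with a standard cross-entropy loss and then frozen while the explanation model is trained.

import torch.nn as nn


class SentenceClassifier(nn.Module):
    """Single-layer LSTM over word ids; the softmax of its output approximates
    p(C | w̃), which serves as the reward R_D(w̃)."""

    def __init__(self, vocab_size, num_classes=200, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, word_ids):                     # word_ids: (N, T)
        _, (h_n, _) = self.lstm(self.embed(word_ids))
        return self.fc(h_n[-1])                      # class logits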

4 Experimental Setup

Dataset. In this work, we employ the Caltech UCSD Birds 200-2011 (CUB)
dataset [34] which contains 200 classes of North American bird species and 11,788
images in total. A recent extension to this dataset [30] collected 5 sentences for
each of the images. These sentences do not only describe the content of the
image, e.g., “This is a bird”, but also give a detailed description of the bird,
e.g., “that has a cone-shaped beak, red feathers and has a black face patch”.


Unlike other image-sentence datasets, every image in the CUB dataset belongs
to a class, and therefore sentences as well as images are associated with a single
label. This property makes this dataset unique for the visual explanation task,
where our aim is to generate sentences that are both discriminative and class-
specific. We stress that sentences collected in [30] were not collected for the task
of visual explanation. Consequently, they do not explain why an image belongs
to a certain class, but rather include descriptive details about each bird class.

Implementation. For image features, we extract 8,192 dimensional features
from the penultimate layer of the compact bilinear fine-grained classification
model [3] which has been pre-trained on the CUB dataset and achieves an accu-
racy of 84%. We use one-hot vectors to represent input words at each time step
and learn a 1,000-dimensional embedding before inputting each word into a
1,000-dimensional LSTM. We train our models using Caffe [35], and determine
model hyperparameters using the standard CUB validation set before evaluating
on the test set. All reported results are on the standard CUB test set.

Baseline and Ablation Models. In order to investigate our explanation
model, we propose two baseline models: a description model and a definition
model. Our description baseline is trained to generate sentences conditioned
only on images and is equivalent to LRCN [8] except we use features from a
fine-grained classifier. Our definition model is trained to generate sentences us-
ing only the image label as input. Consequently, this model outputs the same
sentence for different image instances of the same class. By comparing these
baselines to our explanation model, we demonstrate that our explanation model
is both more image and class relevant, and thus generates superior explanations.

Our explanation model differs from a description model in two key ways.
First, in addition to an image, generated sentences are conditioned on class pre-
dictions. Second, our explanations are trained with a discriminative loss which
enforces that generated sentences contain class specific information. To under-
stand the importance of these two contributions, we compare our explanation
model to an explanation-label model which is not trained with the discriminative
loss, and to an explanation-discriminative model which is not conditioned on the
predicted class. By comparing our explanation model to the explanation-label
model and explanation-discriminative model, we demonstrate that both class in-
formation and the discriminative loss are important in generating descriptions.

Metrics. To evaluate our explanation model, we use both automatic metrics
and a human evaluation. Our automatic metrics rely on the common sentence
evaluation metrics, METEOR [36] and CIDEr [37]. METEOR is computed by
matching words in generated and reference sentences, but unlike other common
metrics such as BLEU [38], uses WordNet [39] to also match synonyms. CIDEr
measures the similarity of a generated sentence to reference sentences by counting
common n-grams which are TF-IDF weighted. Consequently, the metric rewards
sentences for correctly including n-grams which are uncommon in the dataset.

A generated sentence is image relevant if it mentions concepts which are
mentioned in ground truth reference sentences for the image. Thus, to mea-
sure image relevance we simply report METEOR and CIDEr scores, with more
relevant sentences producing higher METEOR and CIDEr scores.

Measuring class relevance is considerably more difficult. We could use the
LSTM sentence classifier used to train our discriminative loss, but this is an un-
fair metric because some models were trained to directly increase the accuracy as
measured by the LSTM classifier. Instead, we measure class relevance by consid-
ering how similar generated sentences for a class are to ground truth sentences for
that class. Sentences which describe a certain bird class, e.g., “cardinal”, should
contain similar words and phrases to ground truth “cardinal” sentences, but not
ground truth “black bird” sentences. We compute CIDEr scores for images from
each bird class, but instead of using ground truth image descriptions as reference
sentences, we use all reference sentences which correspond to a particular class.
We call this metric the class similarity metric.

More class relevant sentences should result in higher CIDEr scores, but it
is possible that if a model produces better overall sentences it will have a higher
CIDEr score without generating more class relevant descriptions. To further
demonstrate that our sentences are class relevant, we also compute a class rank
metric. To compute this metric, we compute the CIDEr score for each generated
sentence and use ground truth reference sentences from each of the 200 classes
in the CUB dataset as references. Consequently, each image is associated with
a CIDEr score which measures the similarity of the generated sentences to each
of the 200 classes in the CUB dataset. CIDEr scores computed for generated
sentences about cardinals should be higher when compared to cardinal reference
sentences than when compared to reference sentences from other classes.
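
A sketch of how the class similarity and class rank metrics can be computed per image; cider_score(candidate, references), standing in for an off-the-shelf CIDEr implementation, and the class_references dictionary mapping each of the 200 class names to its ground truth sentences are assumed helpers.

def class_relevance_metrics(generated_sentence, true_class, class_references, cider_score):
    """Class similarity: CIDEr of the generated sentence against the true
    class's reference sentences. Class rank: position of the true class when
    all classes are sorted by that score (1 = best)."""
    scores = {c: cider_score(generated_sentence, refs)
              for c, refs in class_references.items()}
    similarity = scores[true_class]
    ranking = sorted(scores, key=scores.get, reverse=True)
    rank = ranking.index(true_class) + 1
    return similarity, rank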

We choose to emphasize the CIDEr score when measuring class relevance
because it includes the TF-IDF weighting over n-grams. Consequently, if a bird
includes a unique feature, such as “red eyes”, generated sentences which men-
tion this attribute should be rewarded more than sentences which just mention
attributes common across all bird classes.

The ultimate goal of an explanation system is to provide useful information
to a human. We therefore also consulted experienced bird watchers to rate our
explanations against our two baseline and ablation models. We provided a ran-
dom sample of images in our test set with sentences generated from each of our
five models and asked the bird watchers to rank which sentence explained the
classification best. Consulting experienced bird watchers is important because
some sentences may list correct, but non-discriminative, attributes. For exam-
ple, a sentence “This is a Geococcyx because this bird has brown feathers and a
brown crown.” may be a correct description, but if it does not mention unique
attributes of a bird class, it is a poor explanation. Though it is difficult to expect
an average person to infer or know this information, experienced bird watchers
are aware of which features are important in bird classification.


Table 1. Comparison of our explanation model to our definition and description base-
line, as well as the explanation-label and explanation-discriminative (explanation-dis. in
the table) ablation models. We demonstrate that our generated explanations are image
relevant by computing METEOR and CIDEr scores (higher is better). We demonstrate
class relevance using a class similarity metric (higher is better) and class rank metric
(lower is better) (see Section 4 for details). Finally, we ask experienced bird watchers
to rank our explanations. On all metrics, our explanation model performs best.

                   Image Relevance     Class Relevance            Best Explanation
                   METEOR    CIDEr     Similarity   Rank (1-200)  Bird Expert Rank (1-5)
Definition         27.9      43.8      42.60        15.82         2.92
Description        27.7      42.0      35.3         24.43         3.11
Explanation-Label  28.1      44.7      40.86        17.69         2.97
Explanation-Dis.   28.8      51.9      43.61        19.80         3.22
Explanation        29.2      56.7      52.25        13.12         2.78

5 Results

We demonstrate that our model produces visual explanations by showing that
our generated explanations fulfill the two aspects of our proposed definition of
visual explanation and are image relevant and class relevant. Furthermore, we
demonstrate that by training our model to generate class specific descriptions, we
generate higher quality sentences based on common sentence generation metrics.

5.1 Quantitative Results

Image Relevance. Table 1, columns 2 & 3, record METEOR and CIDEr scores
for our generated sentences. Importantly, our explanation model has higher ME-
TEOR and CIDEr scores than our baselines. The explanation model also out-
performs the explanation-label and explanation-discriminative model suggesting
that both label conditioning and the discriminative loss are key to producing
better sentences. Furthermore, METEOR and CIDEr are substantially higher
when including a discriminative loss during training (compare rows 2 and 4 and
rows 3 and 5) demonstrating that including this additional loss leads to better
generated sentences. Surprisingly, the definition model produces more image rel-
evant sentences than the description model. Information in the label vector and
image appear complementary, as the explanation-label model, which conditions
generation both on the image and label vector, produces better sentences.
Class Relevance. Table 1, columns 4 & 5, record the class similarity and class
rank metrics (see Section 4 for details). Our explanation model produces a higher
class similarity score than other models by a substantial margin. The class rank
for our explanation model is also lower than for any other model suggesting
that sentences generated by our explanation model more closely resemble the
correct class than other classes in the dataset. We emphasize that our goal is to
produce reasonable explanations for classifications, not rank categories based on
our explanations. We expect the rank of sentences produced by our explanation


Fig. 4. Visual explanations generated by our system. Our explanation model produces
image relevant sentences that also discuss class discriminative attributes.

model to be lower, but not necessarily rank one. Our ranking metric is quite
difficult; sentences must include enough information to differentiate between
very similar bird classes without looking at an image, and our results clearly
show that our explanation model performs best at this difficult task. Accuracy
scores produced by our LSTM sentence classifier follow the same general trend,
with our explanation model producing the highest accuracy (59.13%) and the
description model producing the lowest accuracy (22.32%).
Explanation. Table 1, column 6 details the evaluation of two experienced bird
watchers. The bird experts evaluated 91 randomly selected images and answered
which sentence provided the best explanation for the bird class. Our explanation
model has the best mean rank (lower is better), followed by the definition
model. This trend resembles the trend seen when evaluating class relevance.
Additionally, all models which are conditioned on a label (lines 1, 3, and 5) have
a lower rank, suggesting that label information is important for explanations.

5.2 Qualitative Results

Figure 4 shows sample explanations produced by first outputting a declaration
of the predicted class label (“This is a warbler…”) and then a justification con-
junction (e.g., “because”) followed by the explanatory sentence fragment
produced by the model described above in Section 3. Qualitatively, our expla-
nation model performs quite well. Note that our model accurately describes fine
detail such as “black cheek patch” for “Kentucky warbler” and “long neck” for
“pied billed grebe”. For the remainder of our qualitative results, we omit the
class declaration for easier comparison.
Comparison of Explanations, Baselines, and Ablations. Figure 5 com-
pares sentences generated by our definition and description baselines, explanation-
label and explanation-discriminative ablations and explanation model. Each
model produces reasonable sentences, however, we expect our explanation model
to produce sentences which discuss class relevant attributes. For many images,
the explanation model mentions attributes that not all other models mention. For
example, in Figure 5, row 1, the explanation model specifies that the “bronzed
cowbird” has “red eyes” which is a rarer bird attribute than attributes mentioned
correctly by the definition and description models (“black”, “pointy bill”). Simi-
larly, when explaining the “White Necked Raven” (Figure 5 row 3), the explana-
tion model identifies the “white nape”, which is a unique attribute of that bird.
Based on our image relevance metrics, we also expect our explanations to be
more image relevant. An obvious example of this is in Figure 5 row 7 where the
explanation model includes only attributes present in the image of the “hooded
merganser”, whereas all other models mention at least one incorrect attribute.


This is a Bronzed Cowbird because …
Definition: this bird is black with blue on its wings and has a long pointy beak.
Description: this bird is nearly all black with a short pointy bill.
Explanation-Label: this bird is nearly all black with bright orange eyes.
Explanation-Dis.: this is a black bird with a red eye and a white beak.
Explanation: this is a black bird with a red eye and a pointy black beak.

This is a Black Billed Cuckoo because …
Definition: this bird has a yellow belly and a grey head.
Description: this bird has a yellow belly and breast with a gray crown and green wing.
Explanation-Label: this bird has a yellow belly and a grey head with a grey throat.
Explanation-Dis.: this is a yellow bird with a grey head and a small beak.
Explanation: this is a yellow bird with a grey head and a pointy beak.
This is a White Necked Raven because …
Definition: this bird is black in color with a black beak and black eye rings.
Description: this bird is black with a white spot and has a long pointy beak.
Explanation-Label: this bird is black in color with a black beak and black eye rings.
Explanation-Dis.: this is a black bird with a white nape and a black beak.
Explanation: this is a black bird with a white nape and a large black beak.
This is a Northern Flicker because …
Definition: this bird has a speckled belly and breast with a long pointy bill.
Description: this bird has a long pointed bill grey throat and spotted black and white mottled crown.
Explanation-Label: this bird has a speckled belly and breast with a long pointy bill.
Explanation-Dis.: this is a grey bird with black spots and a red spotted crown.
Explanation: this is a black and white spotted bird with a red nape and a long pointed black beak.
This is a American Goldfinch because …
Definition: this bird has a yellow crown a short and sharp bill and a black wing with a white breast.
Description: this bird has a black crown a yellow bill and a yellow belly.
Explanation-Label: this bird has a black crown a short orange bill and a bright yellow breast and belly.
Explanation-Dis.: this is a yellow bird with a black wing and a black crown.
Explanation: this is a yellow bird with a black and white wing and an orange beak.
This is a Yellow Breasted Chat because …
Definition: this bird has a yellow belly and breast with a white eyebrow and gray crown.
Description: this bird has a yellow breast and throat with a white belly and abdomen.
Explanation-Label: this bird has a yellow belly and breast with a white eyebrow and gray crown.
Explanation-Dis.: this is a bird with a yellow belly and a grey back and head.
Explanation: this is a bird with a yellow breast and a grey head and back.
This is a Hooded Merganser because …
Definition: this bird has a black crown a white eye and a large black bill.
Description: this bird has a brown crown a white breast and a large wingspan.
Explanation-Label: this bird has a black and white head with a large long yellow bill and brown tarsus and feet.
Explanation-Dis.: this is a brown bird with a white breast and a white head.
Explanation: this bird has a black and white head with a large black beak.

Fig. 5. Example sentences generated by our baseline models, ablation models, and
proposed explanation model. Correct attributes are highlighted in green, mostly correct
attributes are highlighted in yellow, and incorrect attributes are highlighted in red. The
explanation model consistently discusses image relevant and class relevant features.

Comparing Definitions and Explanations. Figure 6 directly compares ex-
planations to definitions for three bird categories. Explanations in the left column
include an attribute about an image instance of a bird class which is not present
in the image instance of the same bird class in the right column. Because the
definition remains constant for all image instances of a bird class, the definition
can produce sentences which are not image relevant. For example, in the second
row, the definition model indicates that the bird has a “red spot on its head”.
Though this is true for the image on the left and for many “Downy Woodpecker”
images, it is not true for the image on the right. In contrast, the explanation
model produces image relevant sentences for both images.

Training with the Discriminative Loss. To illustrate how the discrimina-
tive loss impacts sentence generation we directly compare the description model
to the explanation-discriminative model in Figure 7. Neither of these models


This is a Marsh Wren because…
Definition (both images): this bird is brown and white in color with a skinny brown beak and brown eye rings.
Explanation (first image): this is a small brown bird with a long tail and a white eyebrow.
Explanation (second image): this is a small bird with a long bill and brown and black wings.

This is a Downy Woodpecker because…
Definition (both images): this bird has a white breast black wings and a red spot on its head.
Explanation (first image): this is a white bird with a black wing and a black and white striped head.
Explanation (second image): this is a black and white bird with a red spot on its crown.

This is a Shiny Cowbird because…
Definition (both images): this bird is black with a long tail and has a very short beak.
Explanation (first image): this is a black bird with a small black beak.
Explanation (second image): this is a black bird with a long tail feather and a pointy black beak.

Fig. 6. We compare generated explanations and definitions. All explanations on the
left include an attribute which is not present on the image on the right. In contrast to
definitions, our explanation model can adjust its output based on visual evidence.

receives class information at test time, though the explanation-discriminative
model is explicitly trained to produce class specific sentences. Both models can
generate visually correct sentences. However, sentences generated by the model trained with
our discriminative loss contain class specific properties more often than the
ones generated by the image description model, even though neither model has access
to the class label at test time. For instance, for the class “black-capped vireo”
both models discuss properties which are visually correct, but the explanation-
discriminative model mentions “black head” which is one of the most prominent
distinguishing properties of this vireo type. Similarly, for the “white pelican” im-
age, the explanation-discriminative model mentions the properties “long neck”
and “orange beak”, which are fine-grained and discriminative.
Class Conditioning. To qualitatively observe the relative importance of im-
age features and label features in our explanation model, we condition expla-
nations for a “baltimore oriole”, “cliff swallow”, and “painted bunting” on the
correct class and incorrect classes (Figure 8). When conditioning on the “painted
bunting”, the explanations for “cliff swallow” and “baltimore oriole” both in-
clude colors which are not present suggesting that the “painted bunting” label
encourages generated captions to include certain color words. However, for the
“baltimore oriole” image, the colors mentioned when conditioning on “painted
bunting” (red and yellow) are similar to the true color of the oriole (yellow-
orange) suggesting that visual evidence informs sentence generation.

6 Conclusion

Explanation is an important capability for deployment of intelligent systems.
Visual explanation is a rich research direction, especially as the field of com-
puter vision continues to employ and improve deep models which are not easily
interpretable. Our work is an important step towards explaining deep visual


This is a Crested Auklet because…
Description: this bird is black and white in color with a orange beak and black eye rings.
Explanation-Dis.: this is a black bird with a white eye and an orange beak.

This is a Green Jay because…
Description: this bird has a bright blue crown and a bright yellow throat and breast.
Explanation-Dis.: this is a yellow bird with a blue head and a black throat.

This is a Geococcyx because…
Description: this bird has a long black bill a white throat and a brown crown.
Explanation-Dis.: this is a black and white spotted bird with a long tail feather and a pointed beak.

This is a Cape Glossy Starling because…
Description: this bird is blue and black in color with a stubby beak and black eye rings.
Explanation-Dis.: this is a blue bird with a red eye and a blue crown.

This is a Black-Capped Vireo because…
Description: this bird has a white belly and breast black and white wings with a white wingbar.
Explanation-Dis.: this is a bird with a white belly yellow wing and a black head.

This is a White Pelican because…
Description: this bird is white and black in color with a long curved beak and white eye rings.
Explanation-Dis.: this is a large white bird with a long neck and a large orange beak.

Fig. 7. Comparison of sentences generated using description and explanation-
discriminative models. Though both are capable of accurately describing visual
attributes, the explanation-discriminative model captures more “class-specific” at-
tributes.

This is a Baltimore Oriole because this is a small orange bird with a black head and a small orange beak.
This is a Cliff Swallow because this is a black bird with a red throat and a white belly.
This is a Painted Bunting because this is a colorful bird with a red belly green head and a yellow throat.

This is a Baltimore Oriole because this is a small bird with a black head and a small beak.
This is a Cliff Swallow because this bird has a black crown a brown wing and a white breast.
This is a Painted Bunting because this is a small bird with a red belly and a blue head.

This is a Baltimore Oriole because this is a small bird with a black head and orange body with black wings and tail.
This is a Cliff Swallow because this bird has a black crown a black throat and a white belly.
This is a Painted Bunting because this is a colorful bird with a red belly green head and a yellow throat.

Fig. 8. We observe how explanations change when conditioning on different classes.
Some bird categories, like “painted bunting”, carry strong class information that heavily
influences the explanation.

models. We anticipate that future models will look “deeper” into networks to
produce explanations and perhaps begin to explain the internal mechanism of
deep models.

To build our explanation model, we proposed a novel reinforcement learning
based loss which allows us to influence the kinds of sentences generated with
a sentence level loss function. Though we focus on a discriminative loss in this
work, we believe the general principle of including a loss which operates on a
sampled sentence and optimizes for a global sentence property is potentially
beneficial in other applications. For example, [40,41] propose introducing new
vocabulary words into a captioning system. Though both models aim to optimize
a global sentence property (whether or not a caption mentions a certain concept),
neither optimizes for this property directly.

In summary, we have presented a novel framework which provides explana-
tions of a visual classifier. Our quantitative and qualitative evaluations demon-
strate the potential of our proposed model and effectiveness of our novel loss
function. Our explanation model goes beyond the capabilities of current cap-
tioning systems and effectively incorporates classification information to pro-
duce convincing explanations, a potentially key advance for adoption of many
sophisticated AI systems.

Acknowledgements. This work was supported by DARPA, AFRL, DoD MURI
award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berke-
ley Vision and Learning Center. Marcus Rohrbach was supported by a fellowship
within the FITweltweit-Program of the German Academic Exchange Service
(DAAD). Lisa Anne Hendricks is supported by an NDSEG fellowship. We thank
our experienced bird watchers, Celeste Riepe and Samantha Masaki, for helping
us evaluate our model.

References

1. Biran, O., McKeown, K.: Justification narratives for individual classifications. In:
Proceedings of the AutoML workshop at ICML 2014. (2014)

2. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep con-
volutional neural networks. In: Advances in neural information processing systems.
(2012) 1097–1105

3. Gao, Y., Beijbom, O., Zhang, N., Darrell, T.: Compact bilinear pooling. In:
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
(CVPR). (2016)

4. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell,
T.: Decaf: A deep convolutional activation feature for generic visual recognition.
Proceedings of the International Conference on Machine Learning (ICML) (2013)

5. Teach, R.L., Shortliffe, E.H.: An analysis of physician attitudes regarding
computer-based clinical consultation systems. In: Use and impact of computers
in clinical medicine. Springer (1981) 68–85

6. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale
hierarchical image database. In: Computer Vision and Pattern Recognition, 2009.
CVPR 2009. IEEE Conference on, IEEE (2009) 248–255

7. Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: A neural image
caption generator. In: CVPR. (2015)

8. Donahue, J., Hendricks, L.A., Guadarrama, S., Rohrbach, M., Venugopalan, S.,
Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual
recognition and description. In: Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (CVPR). (2015)

9. Karpathy, A., Li, F.: Deep visual-semantic alignments for generating image de-
scriptions. In: CVPR. (2015)

10. Xu, K., Ba, J., Kiros, R., Courville, A., Salakhutdinov, R., Zemel, R., Bengio,
Y.: Show, attend and tell: Neural image caption generation with visual attention.
Proceedings of the International Conference on Machine Learning (ICML) (2015)

11. Kiros, R., Salakhutdinov, R., Zemel, R.: Multimodal neural language models. In:
Proceedings of the 31st International Conference on Machine Learning (ICML-14).
(2014) 595–603

12. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8)
(November 1997) 1735–1780


13. Shortliffe, E.H., Buchanan, B.G.: A model of inexact reasoning in medicine. Math-
ematical biosciences 23(3) (1975) 351–379

14. Lane, H.C., Core, M.G., Van Lent, M., Solomon, S., Gomboc, D.: Explainable
artificial intelligence for training and tutoring. Technical report, DTIC Document
(2005)

15. Core, M.G., Lane, H.C., Van Lent, M., Gomboc, D., Solomon, S., Rosenberg, M.:
Building explainable artificial intelligence systems. In: Proceedings of the national
conference on artificial intelligence. Volume 21., Menlo Park, CA; Cambridge, MA;
London; AAAI Press; MIT Press; 1999 (2006) 1766

16. Van Lent, M., Fisher, W., Mancuso, M.: An explainable artificial intelligence sys-
tem for small-unit tactical behavior. In: Proceedings of the National
Conference on Artificial Intelligence, Menlo Park, CA; Cam-
bridge, MA; London; AAAI Press; MIT Press; 1999 (2004) 900–907

17. Lomas, M., Chevalier, R., Cross II, E.V., Garrett, R.C., Hoare, J., Kopack, M.:
Explaining robot actions. In: Proceedings of the seventh annual ACM/IEEE in-
ternational conference on Human-Robot Interaction, ACM (2012) 187–188

18. Lacave, C., Díez, F.J.: A review of explanation methods for Bayesian networks.
The Knowledge Engineering Review 17(02) (2002) 107–127

19. Johnson, W.L.: Agents that learn to explain themselves. In: AAAI. (1994) 1257–
1263

20. Berg, T., Belhumeur, P.: How do you tell a blackbird from a crow? In: Proceedings
of the IEEE International Conference on Computer Vision. (2013) 9–16

21. Jiang, Z., Wang, Y., Davis, L., Andrews, W., Rozgic, V.: Learning discrimina-
tive features via label consistent neural network. arXiv preprint arXiv:1602.01168
(2016)

22. Doersch, C., Singh, S., Gupta, A., Sivic, J., Efros, A.: What makes paris look like
paris? ACM Transactions on Graphics 31(4) (2012)

23. Kulkarni, G., Premraj, V., Dhar, S., Li, S., Choi, Y., Berg, A., Berg, T.: Baby talk:
understanding and generating simple image descriptions. In: CVPR. (2011)

24. Guadarrama, S., Krishnamoorthy, N., Malkarnenkar, G., Venugopalan, S., Mooney,
R., Darrell, T., Saenko, K.: Youtube2text: Recognizing and describing arbitrary
activities using semantic hierarchies and zero-shot recognition. In: Proceedings of
the IEEE International Conference on Computer Vision. (2013) 2712–2719

25. Fang, H., Gupta, S., Iandola, F., Srivastava, R.K., Deng, L., Dollár, P., Gao, J.,
He, X., Mitchell, M., Platt, J.C., et al.: From captions to visual concepts and
back. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition. (2015) 1473–1482

26. Mao, J., Xu, W., Yang, Y., Wang, J., Yuille, A.L.: Explain images with multimodal
recurrent neural networks. NIPS Deep Learning Workshop (2014)

27. Jia, X., Gavves, E., Fernando, B., Tuytelaars, T.: Guiding long-short term memory
for image caption generation. Proceedings of the IEEE International Conference
on Computer Vision (ICCV) (2015)

28. Mao, J., Huang, J., Toshev, A., Camburu, O., Yuille, A., Murphy, K.: Generation
and comprehension of unambiguous object descriptions. In: Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2016)

29. Lampert, C., Nickisch, H., Harmeling, S.: Attribute-based classification for zero-
shot visual object categorization. In: TPAMI. (2013)

30. Reed, S., Akata, Z., Lee, H., Schiele, B.: Learning deep representations of fine-
grained visual descriptions. In: Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (CVPR). (2016)


31. Andreas, J., Rohrbach, M., Darrell, T., Klein, D.: Learning to compose neural
networks for question answering. In: Proceedings of the Conference of the North
American Chapter of the Association for Computational Linguistics: Human Lan-
guage Technologies (NAACL). (2016)

32. Yeung, S., Russakovsky, O., Jin, N., Andriluka, M., Mori, G., Fei-Fei, L.: Ev-
ery moment counts: Dense detailed labeling of actions in complex videos. arXiv
preprint arXiv:1507.05738 (2015)

33. Williams, R.J.: Simple statistical gradient-following algorithms for connectionist
reinforcement learning. Machine Learning (1992)

34. Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The Caltech-UCSD
Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute
of Technology (2011)

35. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadar-
rama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding.
In: Proceedings of the ACM International Conference on Multimedia, ACM (2014)
675–678

36. Banerjee, S., Lavie, A.: Meteor: An automatic metric for mt evaluation with im-
proved correlation with human judgments. In: Proceedings of the acl workshop on
intrinsic and extrinsic evaluation measures for machine translation and/or summa-
rization. Volume 29. (2005) 65–72

37. Vedantam, R., Lawrence Zitnick, C., Parikh, D.: Cider: Consensus-based image
description evaluation. In: Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition. (2015) 4566–4575

38. Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: Bleu: a method for automatic
evaluation of machine translation. In: Proceedings of the 40th annual meeting on
association for computational linguistics, Association for Computational Linguis-
tics (2002) 311–318

39. Miller, G.A., Beckwith, R., Fellbaum, C., Gross, D., Miller, K.J.: Introduction to
WordNet: An on-line lexical database. International journal of lexicography 3(4)
(1990) 235–244

40. Hendricks, L.A., Venugopalan, S., Rohrbach, M., Mooney, R., Saenko, K., Darrell,
T.: Deep compositional captioning: Describing novel object categories without
paired training data. In: Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition (CVPR). (2016)

41. Mao, J., Wei, X., Yang, Y., Wang, J., Huang, Z., Yuille, A.L.: Learning like a
child: Fast novel visual concept learning from sentence descriptions of images. In:
Proceedings of the IEEE International Conference on Computer Vision. (2015)
2533–2541
