The choice of the Taylor base point x0 is a free parameter in this setup. As stated above, in the case of classification we are interested in the contribution of each pixel relative to the state of maximal prediction uncertainty, which is represented by the set of root points f(x0) = 0, since f(x) > 0 denotes the presence and f(x) < 0 the absence of the learned structure. The base point x0 should therefore be chosen to be a root of the predictor f, in which case the above equation simplifies to
f(x) ≈ ∑d (∂f/∂x(d))(x0) · (x(d) − x0(d))  (4.14)
Beyond the Taylor series itself, the pixel‐wise decomposition depends nonlinearly on the prediction point x, since a nearby root point x0 must be found. The whole pixel‐wise decomposition is therefore not a linear but a locally linear algorithm, as the root point x0 depends on the prediction point x.
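As a minimal numerical sketch of this idea (all values below are illustrative, and the predictor is deliberately a simple linear model so that the nearest root point has a closed form), the Taylor decomposition at a root x0 yields pixel relevances that sum to f(x):

```python
import numpy as np

# Hypothetical linear predictor f(x) = w.x + b (illustrative values).
w = np.array([2.0, -1.0, 0.5])
b = 0.5
f = lambda x: w @ x + b

x = np.array([1.0, 0.5, 1.0])

# Nearest root point of f along the gradient direction, so that f(x0) = 0.
x0 = x - f(x) * w / (w @ w)

# First-order Taylor expansion around x0 (exact here, since f is linear):
# the relevance of input dimension d is (df/dx_d)(x0) * (x_d - x0_d).
relevance = w * (x - x0)

print(relevance.sum(), f(x))  # the relevances sum to f(x)
```

For a nonlinear predictor the root point would have to be found numerically, but the decomposition step itself is unchanged.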
4.2.2 Pixel‐wise Decomposition for Multilayer NN
Pixel‐wise decomposition for multilayer networks: In the previous chapter, we discussed neural networks built as a set of interconnected neurons organized in a layered structure. Combined with each other, they define a mathematical function that maps the first‐layer neurons (input) to the last‐layer neurons (output). In this section, we denote each neuron by xi, where i is an index for the neuron. By convention, we associate different indices with each layer of the network. We denote by ∑i the summation over all neurons of a given layer, and by ∑j the summation over all neurons of another layer. We denote by x(d) the neurons corresponding to the pixel activations (i.e., the neurons with respect to which we would like to obtain a decomposition of the classification decision). A common mapping from one layer to the next consists of a linear projection followed by a nonlinear function: zij = xi wij, zj = ∑i zij + bj, xj = g(zj), where wij is a weight connecting neuron xi to neuron xj, bj is a bias term, and g is a nonlinear activation function. Multilayer networks stack several of these layers, each of them composed of a large number of neurons. Common nonlinear functions are the hyperbolic tangent g(t) = tanh(t) and the rectification function g(t) = max(0, t).
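The layer mapping above can be sketched directly in code; the shapes and random values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x_i = rng.standard_normal(4)      # activations x_i of the lower layer
W = rng.standard_normal((4, 3))   # w_ij connects neuron x_i to neuron x_j
b = rng.standard_normal(3)        # bias terms b_j

z_ij = x_i[:, None] * W           # per-connection pre-activations z_ij = x_i * w_ij
z_j = z_ij.sum(axis=0) + b        # z_j = sum_i z_ij + b_j
x_j = np.maximum(0.0, z_j)        # x_j = g(z_j), here with g(t) = max(0, t)
```

Keeping the per-connection quantities zij explicit (rather than computing only the matrix product x_i @ W) is what later allows relevance to be redistributed connection by connection.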
Taylor‐type decomposition: Denoting by f : ℝ^M → ℝ^N the vector‐valued multivariate function implementing the mapping between the input and the output of the network, a first possible explanation of the classification decision x ↦ f(x) can be obtained by a Taylor expansion at a near root point x0 of the decision function f:
f(x) ≈ ∑d (∂f/∂x(d))(x0) · (x(d) − x0(d))  (4.15)
The derivative ∂f(x)/∂x(d) required for pixel‐wise decomposition can be computed efficiently by reusing the network topology via the backpropagation algorithm discussed in the previous chapter. Having backpropagated the derivatives up to a certain layer j, we can compute the derivatives of the previous layer i using the chain rule:
∂f/∂xi = ∑j (∂f/∂xj) · (∂xj/∂xi) = ∑j (∂f/∂xj) · g′(zj) · wij  (4.16)
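This chain rule can be sketched for a single tanh layer as follows; the function name backprop_layer and all values are illustrative, and the result is checked against a numerical derivative:

```python
import numpy as np

def backprop_layer(grad_j, W, z_j):
    """Chain rule for g(t) = tanh(t):
    df/dx_i = sum_j (df/dx_j) * g'(z_j) * w_ij,  with g'(t) = 1 - tanh(t)^2."""
    return W @ (grad_j * (1.0 - np.tanh(z_j) ** 2))

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))
x_i = rng.standard_normal(4)

# Toy decision function: sum of the tanh outputs, so df/dx_j = 1 for every j.
f = lambda x: np.tanh(x @ W).sum()

z_j = x_i @ W
grad_i = backprop_layer(np.ones(3), W, z_j)
```

Stacking one such call per layer, from the output back to the pixels, yields the derivatives ∂f(x)/∂x(d) needed for the decomposition.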
Figure 4.2 Relevance propagation.
Layer‐wise relevance backpropagation: As an alternative to Taylor‐type decomposition, it is possible to compute relevances at each layer in a backward pass, that is, to express the relevance Rj of an upper‐layer neuron as a sum of messages Ri←j sent to the neurons i of the layer below, for which the conservation property

Rj = ∑i Ri←j  (4.17)
must hold. In the case of a linear network f(x) = ∑i zij with relevance Rj = f(x), such a decomposition is immediately given by Ri←j = zij. However, in the general case, the neuron activation xj is a nonlinear function of zj. Nevertheless, for the hyperbolic tangent and the rectification function – two simple monotonically increasing functions satisfying g(0) = 0 – the pre‐activations zij still provide a sensible way to measure the relative contribution of each neuron xi to Rj. A first possible choice of relevance decomposition is based on the ratio of local and global pre‐activations and is given by

Ri←j = (zij / zj) · Rj
These relevances Ri←j are easily shown to satisfy the conservation property approximately; in particular,

∑i Ri←j = ((zj − bj) / zj) · Rj,

so that conservation is exact when the bias term bj is zero and approximate when it is small.
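The decomposition above can be sketched for a single layer as follows (shapes, random values, and the choice of upper‐layer relevances are illustrative); the final check confirms that the messages Ri←j redistribute Rj exactly up to the bias term:

```python
import numpy as np

rng = np.random.default_rng(2)
x_i = rng.standard_normal(4)
W = rng.standard_normal((4, 3))
b = 0.01 * rng.standard_normal(3)   # small biases, so conservation is near-exact

z_ij = x_i[:, None] * W             # z_ij = x_i * w_ij
z_j = z_ij.sum(axis=0) + b          # z_j = sum_i z_ij + b_j
R_j = np.maximum(0.0, z_j)          # illustrative upper-layer relevances

R_ij = (z_ij / z_j) * R_j           # messages R_{i<-j} = (z_ij / z_j) * R_j
R_i = R_ij.sum(axis=1)              # relevance of lower-layer neuron i

# Conservation up to the bias term:
# sum_i R_{i<-j} = ((z_j - b_j) / z_j) * R_j
assert np.allclose(R_ij.sum(axis=0), (z_j - b) / z_j * R_j)
```

Applying this redistribution layer by layer, starting from the output relevance Rj = f(x), propagates the classification decision back onto the pixel neurons x(d).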