The following two theorems show that such an intuitive approach has a formal justification. The first theorem proves that the function ϕw is differentiable.

      Theorem 5.1

      (Differentiability) [1]: Let Fw and Gw be the global transition and the global output functions of a GNN, respectively. If Fw(x, l) and Gw(x, lN) are continuously differentiable w.r.t. x and w, then ϕw is continuously differentiable w.r.t. w (for the proof, see [1]).

      (Backpropagation): Let Fw and Gw be the transition and the output functions of a GNN, respectively, and assume that Fw(x, l) and Gw(x, lN) are continuously differentiable w.r.t. x and w. Let z(t) be defined by

      z(t) = z(t + 1) · (∂Fw/∂x)(x, l) + (∂ew/∂o) · (∂Gw/∂x)(x, lN)

      Then, the sequence z(T), z(T − 1), … converges to a vector z = lim t→−∞ z(t), and the convergence is exponential and independent of the initial state z(T). In addition, we have

      ∂ew/∂w = (∂ew/∂o) · (∂Gw/∂w)(x, lN) + z · (∂Fw/∂w)(x, l)

      where x is the stable state of the GNN, ew is the error function to be minimized, and o = Gw(x, lN) is the network output (for the proof, see [1]).
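      To make the theorem concrete, the following NumPy sketch checks the backward recursion numerically on a toy model. The tanh transition with a single trainable bias vector c, the linear output Vx, the quadratic error, and all dimensions are illustrative assumptions rather than the construction used in the book; only the recursion z(t) = z(t + 1) · ∂Fw/∂x + ∂ew/∂o · ∂Gw/∂x and the final gradient formula follow the theorem.

```python
# Numerical check of the backpropagation theorem on a toy contraction.
# Assumptions (not from the book): F_w(x) = tanh(W x + c) with only the bias c
# trainable, output G_w(x) = V x, error e_w = 0.5 * ||G_w(x) - y||^2.
import numpy as np

rng = np.random.default_rng(0)
s, m = 5, 3                                 # state and output dimensions
W = rng.standard_normal((s, s))
W *= 0.5 / np.linalg.norm(W, 2)             # spectral norm 0.5 -> F_w is a contraction
c = rng.standard_normal(s)                  # the parameter "w" we differentiate
V = rng.standard_normal((m, s))             # fixed output matrix
y = rng.standard_normal(m)                  # target

def forward(c, iters=200):
    """FORWARD: iterate x(t + 1) = F_w(x(t)) to the stable state."""
    x = np.zeros(s)
    for _ in range(iters):
        x = np.tanh(W @ x + c)
    return x

def error(c):
    x = forward(c)
    return 0.5 * np.sum((V @ x - y) ** 2)

x = forward(c)
D = np.diag(1.0 - np.tanh(W @ x + c) ** 2)  # dF_w/dc at the stable state
J = D @ W                                   # dF_w/dx at the stable state
g = (V @ x - y) @ V                         # de_w/do . dG_w/dx  (row vector)

# Backward recursion of the theorem: z(t) = z(t + 1) . dF_w/dx + de_w/do . dG_w/dx.
z = np.zeros(s)
for _ in range(200):                        # converges exponentially since ||J|| < 1
    z = z @ J + g

grad_theorem = z @ D                        # de_w/dw = de_w/do . dG_w/dw (= 0 here) + z . dF_w/dw

# Central finite differences for comparison.
eps, grad_fd = 1e-6, np.zeros(s)
for i in range(s):
    e_i = np.zeros(s); e_i[i] = eps
    grad_fd[i] = (error(c + e_i) - error(c - e_i)) / (2 * eps)

print(np.max(np.abs(grad_theorem - grad_fd)))   # agreement up to finite-difference error
```

      Because the transition is a contraction, its Jacobian at the stable state has norm below one, which is why the backward recursion converges exponentially, as the theorem states.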


      Transition and output function implementations: The implementation of the local output function gw does not need to fulfill any particular constraint; in GNNs, gw is a multilayered FNN. On the other hand, the local transition function fw plays a crucial role in the proposed model, since its implementation determines the number and the existence of the solutions of Eq. (5.74). The assumption underlying the GNN model is that fw is designed so that the global transition function Fw is a contraction map with respect to the state x. In the following, we describe two neural network models that fulfill this purpose using different strategies. These models are based on the nonpositional form described by Eq. (5.76); it can easily be observed that two corresponding models based on the positional form exist as well.

      1. Linear (nonpositional) GNN. Eq. (5.76) can naturally be implemented by

      xn = ∑u ∈ ne[n] (An,u xu + bn),    (5.82)

      where the vector bn ∈ Rs and the matrix An,u ∈ Rs × s are defined by the output of two FNNs, whose parameters correspond to the parameters of the GNN. More precisely, let us call the transition network the FNN that generates An,u and the forcing network the FNN that generates bn. Let φw and ρw be the functions implemented by the transition and the forcing network, respectively, the former producing an s²‐dimensional vector and the latter an s‐dimensional vector. Then, we define

      An,u = (μ/(s ∣ne[u]∣)) Θ,   bn = ρw(ln),    (5.83)

      where μ ∈ (0, 1) and Θ = resize(φw(ln, l(n, u), lu)) hold, and resize(·) denotes the operator that allocates the elements of an s²‐dimensional vector into an s × s matrix. Thus, An,u is obtained by arranging the outputs of the transition network into the square matrix Θ and multiplying it by the factor μ/(s ∣ne[u]∣). On the other hand, bn is just a vector that contains the outputs of the forcing network. In the following, we denote the 1‐norm of a matrix M = {mi,j} by ‖M‖1 = maxj ∑i ∣mi,j∣ and assume that ‖φw(ln, l(n, u), lu)‖1 ≤ s holds; this can be straightforwardly verified if the output neurons of the transition network use an appropriately bounded activation function, for example, a hyperbolic tangent.
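      A minimal NumPy sketch of this construction is given below. The two-layer tanh transition network, the linear forcing network, the label dimensions, and the toy three-node graph are illustrative assumptions; only the resize operation, the μ/(s∣ne[u]∣) scaling of Eq. (5.83), and the neighborhood sum of Eq. (5.82) follow the text.

```python
# Sketch of the linear (nonpositional) transition of Eqs. (5.82)-(5.83).
import numpy as np

rng = np.random.default_rng(1)
s, d_node, d_edge = 4, 3, 2                   # state, node-label and edge-label dims
mu = 0.9                                      # mu in (0, 1)

# Transition network phi_w: (l_n, l_(n,u), l_u) -> R^{s*s}; the tanh output keeps
# every entry of Theta in [-1, 1], so the matrix 1-norm of Theta is at most s,
# as the text assumes for phi_w.
W1, c1 = rng.standard_normal((16, 2 * d_node + d_edge)), rng.standard_normal(16)
W2, c2 = rng.standard_normal((s * s, 16)), rng.standard_normal(s * s)
def phi_w(l_n, l_nu, l_u):
    h = np.tanh(W1 @ np.concatenate([l_n, l_nu, l_u]) + c1)
    return np.tanh(W2 @ h + c2)

# Forcing network rho_w: l_n -> R^s (no constraint required).
W3, c3 = rng.standard_normal((s, d_node)), rng.standard_normal(s)
def rho_w(l_n):
    return W3 @ l_n + c3

def A_nu(l_n, l_nu, l_u, deg_u):
    """A_{n,u} = mu / (s * |ne[u]|) * resize(phi_w(l_n, l_(n,u), l_u))  -- Eq. (5.83)."""
    theta = phi_w(l_n, l_nu, l_u).reshape(s, s)          # resize(.)
    return (mu / (s * deg_u)) * theta

# Toy 3-node graph: node labels, edge labels, neighborhoods, and current states.
labels = {n: rng.standard_normal(d_node) for n in range(3)}
edges = {(0, 1): rng.standard_normal(d_edge), (0, 2): rng.standard_normal(d_edge)}
ne = {0: [1, 2], 1: [0], 2: [0]}
x = {n: rng.standard_normal(s) for n in range(3)}

# State update for node 0: the neighborhood sum of Eq. (5.82).
n = 0
b_n = rho_w(labels[n])
x_new = sum(A_nu(labels[n], edges[(n, u)], labels[u], len(ne[u])) @ x[u] + b_n
            for u in ne[n])

# 1-norm bound: |theta_ij| <= 1 implies ||A_{n,u}||_1 <= mu / |ne[u]| < 1.
A01 = A_nu(labels[0], edges[(0, 1)], labels[1], len(ne[1]))
print(x_new, np.linalg.norm(A01, 1) <= mu / len(ne[1]) + 1e-12)
```

      Because the hyperbolic tangent bounds every entry of Θ by one, ‖An,u‖1 ≤ μ/∣ne[u]∣ < 1, which is exactly the property that makes the resulting global transition a contraction map, as required above.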

MAIN
   initialize w;
   x = FORWARD(w);
   repeat
      ∂ew/∂w = BACKWARD(x, w);
      w = w − λ · ∂ew/∂w;   (λ: learning rate)
      x = FORWARD(w);
   until (a stopping criterion);
   return w;
end

FORWARD(w)
   initialize x(0), t = 0;
   repeat
      x(t + 1) = Fw(x(t), l);
      t = t + 1;
   until ‖x(t) − x(t − 1)‖ ≤ εf;
   return x(t);
end

BACKWARD(x, w)
   o = Gw(x, lN);
   A = (∂Fw/∂x)(x, l);   b = (∂ew/∂o) · (∂Gw/∂x)(x, lN);
   initialize z(0), t = 0;
   repeat
      z(t − 1) = z(t) · A + b;
      t = t − 1;
   until ‖z(t − 1) − z(t)‖ ≤ εb;
   return (∂ew/∂o) · (∂Gw/∂w)(x, lN) + z(t − 1) · (∂Fw/∂w)(x, l);
end

      The function FORWARD computes the states, whereas BACKWARD calculates the gradient. The procedure MAIN minimizes the error by calling FORWARD and BACKWARD iteratively.
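      The following runnable rendering of MAIN, FORWARD, and BACKWARD uses the same kind of toy model as the sketch after the backpropagation theorem. The learning rate λ, the tolerances εf and εb, the fixed iteration budget used as the stopping criterion, and the specific transition and output functions are illustrative assumptions, not choices prescribed by the book.

```python
# MAIN / FORWARD / BACKWARD for a toy GNN-style model: tanh transition with a
# trainable bias c, linear output V x, quadratic error.
import numpy as np

rng = np.random.default_rng(3)
s, m = 5, 3
W = rng.standard_normal((s, s)); W *= 0.5 / np.linalg.norm(W, 2)   # contraction
V = rng.standard_normal((m, s))
y = rng.standard_normal(m)
lam, eps_f, eps_b = 0.01, 1e-10, 1e-10       # learning rate and tolerances (illustrative)

def FORWARD(c):
    """Iterate x(t + 1) = F_w(x(t), l) until the state stops moving."""
    x = np.zeros(s)
    while True:
        x_new = np.tanh(W @ x + c)
        if np.linalg.norm(x_new - x) <= eps_f:
            return x_new
        x = x_new

def BACKWARD(x, c):
    """Backward recursion of the theorem, then the gradient of the error w.r.t. c."""
    D = np.diag(1.0 - np.tanh(W @ x + c) ** 2)   # dF_w/dc at the stable state
    J = D @ W                                    # dF_w/dx at the stable state
    g = (V @ x - y) @ V                          # de_w/do . dG_w/dx
    z = np.zeros(s)
    while True:
        z_new = z @ J + g
        if np.linalg.norm(z_new - z) <= eps_b:
            return z_new @ D                     # de_w/dw (dG_w/dw = 0 for this model)
        z = z_new

# MAIN: alternate FORWARD and BACKWARD and take gradient steps on the parameter c.
c = rng.standard_normal(s)
for _ in range(2000):                            # stopping criterion: fixed budget
    x = FORWARD(c)
    c = c - lam * BACKWARD(x, c)
print(0.5 * np.sum((V @ FORWARD(c) - y) ** 2))   # training error after the run
```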