      The graph attention network (GAT) [35] incorporates the attention mechanism into the propagation step. It computes the hidden states of each node by attending to its neighbors, following a self‐attention strategy. The work defines a single graph attentional layer and constructs arbitrary GATs by stacking this layer. The layer computes the coefficients in the attention mechanism of the node pair (i, j) by

      (5.24) \alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(a^{T}\left[Wh_{i} \,\Vert\, Wh_{j}\right]\right)\right)}{\sum_{k \in N_{i}} \exp\left(\mathrm{LeakyReLU}\left(a^{T}\left[Wh_{i} \,\Vert\, Wh_{k}\right]\right)\right)}

      where αij is the attention coefficient of node j to node i, and Ni denotes the neighborhood of node i in the graph. The input set of node features to the layer is h = {h1, h2, …, hN}, hi ∈ ℝ^F, where N is the number of nodes and F is the number of features of each node; the layer produces a new set of node features (of potentially different cardinality F′), h′ = {h′1, h′2, …, h′N}, h′i ∈ ℝ^{F′}, as its output. W ∈ ℝ^{F′×F} is the weight matrix of a shared linear transformation that is applied to every node, and a ∈ ℝ^{2F′} is the weight vector of a single‐layer FNN. The coefficient is normalized by a softmax function, and the LeakyReLU nonlinearity (with negative input slope α = 0.2) is applied. After applying a nonlinearity, the final output features of each node can be obtained as

      (5.25) h'_{i} = \sigma\left(\sum_{j \in N_{i}} \alpha_{ij} W h_{j}\right)

      The layer utilizes multihead attention similarly to [33] to stabilize the learning process. It applies K independent attention mechanisms to compute the hidden states and then concatenates their features (or computes the average), resulting in the following two output representations:

      (5.26) h'_{i} = \big\Vert_{k=1}^{K}\, \sigma\left(\sum_{j \in N_{i}} \alpha_{ij}^{k} W^{k} h_{j}\right)

      (5.27) h'_{i} = \sigma\left(\frac{1}{K} \sum_{k=1}^{K} \sum_{j \in N_{i}} \alpha_{ij}^{k} W^{k} h_{j}\right)

      where αij^k is the normalized attention coefficient computed by the k‐th attention mechanism. The attention architecture in [35] has several properties: (i) the computation of the node‐neighbor pairs is parallelizable, thus making the operation efficient; (ii) it can be applied to graph nodes with different degrees by specifying arbitrary weights to the neighbors; and (iii) it can be easily applied to inductive learning problems.
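
      To make the computation concrete, the following is a minimal NumPy sketch of a single graph attentional layer implementing (5.24)–(5.25), with a simple multi‐head wrapper in the spirit of (5.26)–(5.27). The weight shapes, the choice of ELU as the output nonlinearity, and all variable names are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    """LeakyReLU with negative-input slope 0.2, as used in Eq. (5.24)."""
    return np.where(x > 0, x, slope * x)

def gat_layer(h, adj, W, a):
    """Single-head graph attentional layer, Eqs. (5.24)-(5.25).

    h   : (N, F)   input node features
    adj : (N, N)   adjacency matrix; adj[i, j] = 1 if j is in the neighborhood N_i
                   (self-loops assumed included, so every row is non-empty)
    W   : (F, Fp)  shared linear transformation
    a   : (2*Fp,)  weight vector of the single-layer FNN
    """
    z = h @ W                                # W h_i for every node, shape (N, Fp)
    Fp = z.shape[1]
    # a^T [W h_i || W h_j] splits into a source term and a destination term
    e = leaky_relu((z @ a[:Fp])[:, None] + (z @ a[Fp:])[None, :])   # (N, N)
    # softmax over each neighborhood N_i (Eq. 5.24)
    e = np.where(adj > 0, e, -np.inf)
    e = e - e.max(axis=1, keepdims=True)     # numerical stability
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    # Eq. (5.25): h'_i = sigma( sum_{j in N_i} alpha_ij W h_j ); ELU assumed as sigma
    out = alpha @ z
    return np.where(out > 0, out, np.exp(out) - 1)

def multi_head_gat(h, adj, Ws, avs, concat=True):
    """K independent heads, concatenated as in Eq. (5.26). (Averaging as in Eq. (5.27)
    would strictly apply the nonlinearity after the average; this sketch simplifies.)"""
    heads = [gat_layer(h, adj, W, a) for W, a in zip(Ws, avs)]
    return np.concatenate(heads, axis=1) if concat else np.mean(heads, axis=0)

# Toy usage with hypothetical sizes: N = 4 nodes, F = 3 input and Fp = 2 output features
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))
adj = np.eye(4) + np.array([[0, 1, 0, 0],
                            [1, 0, 1, 0],
                            [0, 1, 0, 1],
                            [0, 0, 1, 0]])
Ws = [rng.normal(size=(3, 2)) for _ in range(2)]
avs = [rng.normal(size=(4,)) for _ in range(2)]
print(multi_head_gat(h, adj, Ws, avs).shape)   # (4, 4): K = 2 heads, Fp = 2 each
```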

      Apart from different variants of GNNs, several general frameworks have been proposed that aim to integrate different models into a single framework.

      Message passing neural networks (MPNNs) [36]: This framework abstracts the commonalities between several of the most popular models for graph‐structured data, such as spectral and non‐spectral approaches in graph convolution, gated GNNs, interaction networks, molecular graph convolutions, and deep tensor neural networks. The model contains two phases, a message passing phase and a readout phase. The message passing phase (namely, the propagation step) runs for T time steps and is defined in terms of the message function Mt and the vertex update function Ut. Using messages m_v^{t+1}, the update of the hidden states h_v^t is

      (5.28) m_{v}^{t+1} = \sum_{w \in N(v)} M_{t}\left(h_{v}^{t}, h_{w}^{t}, e_{vw}\right), \qquad h_{v}^{t+1} = U_{t}\left(h_{v}^{t}, m_{v}^{t+1}\right)

      where evw represents features of the edge from node v to w. The readout phase computes a feature vector for the whole graph using the readout function R according to

      (5.29) \hat{y} = R\left(\left\{ h_{v}^{T} \mid v \in G \right\}\right)

      The message, update, and readout functions can be chosen to recover specific models; for example, the settings for gated GNNs are

      (5.30) M_{t}\left(h_{v}^{t}, h_{w}^{t}, e_{vw}\right) = A_{e_{vw}} h_{w}^{t}, \qquad U_{t} = \mathrm{GRU}\left(h_{v}^{t}, m_{v}^{t+1}\right), \qquad R = \sum_{v \in V} \sigma\left(i\left(h_{v}^{T}, h_{v}^{0}\right)\right) \odot \left(j\left(h_{v}^{T}\right)\right)

      where A_{e_vw} is the adjacency matrix, one for each edge label e, GRU is the gated recurrent unit introduced in [25], and i and j are neural networks in the function R.
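
      The two phases can be sketched generically as below; the concrete choices of M, U, and R (an edge‐weighted message, a tanh update, and a sum readout) are hypothetical placeholders used only to make the example runnable, not the settings of any particular model.

```python
import numpy as np

def mpnn_forward(h0, edges, edge_feats, M, U, R, T):
    """Generic MPNN: message passing phase (Eq. 5.28) and readout phase (Eq. 5.29).

    h0         : dict node -> initial hidden state h_v^0 (1-D array)
    edges      : list of directed pairs (w, v), meaning w is a neighbor of v
    edge_feats : dict (w, v) -> edge feature e_vw
    M, U, R    : message, update, and readout functions (model-specific choices)
    T          : number of message passing time steps
    """
    h = dict(h0)
    for _ in range(T):
        # m_v^{t+1} = sum_{w in N(v)} M_t(h_v^t, h_w^t, e_vw)
        m = {v: np.zeros_like(hv) for v, hv in h.items()}
        for (w, v) in edges:
            m[v] = m[v] + M(h[v], h[w], edge_feats[(w, v)])
        # h_v^{t+1} = U_t(h_v^t, m_v^{t+1})
        h = {v: U(h[v], m[v]) for v in h}
    # readout: y_hat = R({h_v^T | v in G})
    return R(list(h.values()))

# Hypothetical instantiation: edge-weighted message, tanh update, sum readout
M = lambda hv, hw, evw: evw * hw
U = lambda hv, mv: np.tanh(hv + mv)
R = lambda hs: np.sum(hs, axis=0)

h0 = {0: np.ones(2), 1: np.zeros(2), 2: np.full(2, 0.5)}
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
edge_feats = {e: 1.0 for e in edges}
print(mpnn_forward(h0, edges, edge_feats, M, U, R, T=2))
```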

      Non‐local neural networks (NLNNs): This framework unifies various self‐attention‐style methods through the non‐local operation, which computes the response at a position as a weighted sum of the features at all positions:

      (5.31) h'_{i} = \frac{1}{C(h)} \sum_{\forall j} f\left(h_{i}, h_{j}\right) g\left(h_{j}\right)

      where i is the index of an output position, and j is the index that enumerates all possible positions. f(hi, hj) computes a scalar between i and j representing the relation between them, g(hj) denotes a transformation of the input hj, and the factor 1/C(h) is used to normalize the result.

      There are several instantiations with different settings of f and g. For simplicity, the linear transformation can be used as the function g, that is, g(hj) = Wg hj, where Wg is a learned weight matrix. The Gaussian function is a natural choice for the function f, giving f(hi, hj) = e^{hi^T hj}, where hi^T hj is dot‐product similarity and C(h) = ∑_{∀j} f(hi, hj). It is straightforward to extend the Gaussian function by computing similarity in the embedding space, giving f(hi, hj) = e^{θ(hi)^T φ(hj)} with θ(hi) = Wθ hi, φ(hj) = Wφ hj, and C(h) = ∑_{∀j} f(hi, hj). The function f can also be implemented as a dot‐product similarity f(hi, hj) = θ(hi)^T φ(hj); here, the factor C(h) = N, where N is the number of positions in h. Concatenation can also be used, defined as f(hi, hj) = ReLU(wf^T [θ(hi) ∥ φ(hj)]), where wf is a weight vector projecting the concatenated vector to a scalar, and again C(h) = N.
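
      As an illustration, a minimal sketch of the non‐local operation (5.31) with the embedded Gaussian choice of f and a linear g is given below; the weight matrices W_theta, W_phi, W_g and their sizes are assumptions made for the example.

```python
import numpy as np

def nonlocal_embedded_gaussian(h, W_theta, W_phi, W_g):
    """Non-local operation (Eq. 5.31) with the embedded Gaussian choice of f.

    h : (N, d) features at the N positions.
    f(h_i, h_j) = exp(theta(h_i)^T phi(h_j)),  g(h_j) = W_g h_j,
    and C(h) = sum_j f(h_i, h_j), so the normalized weights form a softmax over j.
    """
    theta = h @ W_theta                      # theta(h_i) = W_theta h_i, shape (N, dk)
    phi = h @ W_phi                          # phi(h_j)   = W_phi  h_j, shape (N, dk)
    g = h @ W_g                              # g(h_j)     = W_g    h_j, shape (N, dv)
    scores = theta @ phi.T                   # pairwise theta(h_i)^T phi(h_j)
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    f = np.exp(scores)
    weights = f / f.sum(axis=1, keepdims=True)            # f(h_i, h_j) / C(h)
    return weights @ g                       # h'_i = (1/C(h)) sum_j f(h_i, h_j) g(h_j)

# Toy usage with hypothetical sizes: N = 5 positions, d = 4, dk = 3, dv = 4
rng = np.random.default_rng(1)
h = rng.normal(size=(5, 4))
out = nonlocal_embedded_gaussian(h,
                                 rng.normal(size=(4, 3)),   # W_theta
                                 rng.normal(size=(4, 3)),   # W_phi
                                 rng.normal(size=(4, 4)))   # W_g
print(out.shape)   # (5, 4)
```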

      5.1.3 Graph Networks

      The Graph Network (GN) framework [37] generalizes and extends various GNN, MPNN, and NLNN approaches. A graph is defined as a 3‐tuple G = (u, H, E) (H is used instead of V for notational consistency). Here u is a global attribute, H = {hi} is the set of nodes, where each hi is a node's attribute, and E is the set of edges, each carrying an edge attribute together with the indices of its receiver and sender nodes.
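
      As a minimal illustration, the GN 3‐tuple can be held in a plain data structure; representing each edge by its attribute together with receiver and sender node indices follows the GN convention, and the class and field names below are illustrative.

```python
from typing import List, NamedTuple, Tuple
import numpy as np

class GNGraph(NamedTuple):
    """A graph as the 3-tuple G = (u, H, E) of the GN framework (illustrative sketch)."""
    u: np.ndarray                          # global attribute u
    H: List[np.ndarray]                    # node attributes h_i (H is used instead of V)
    E: List[Tuple[np.ndarray, int, int]]   # edges as (edge attribute, receiver, sender)

# Toy instance: one global attribute, three nodes, two directed edges
g = GNGraph(u=np.zeros(2),
            H=[np.ones(3), np.zeros(3), np.full(3, 0.5)],
            E=[(np.array([1.0]), 1, 0), (np.array([0.5]), 2, 1)])
print(len(g.H), len(g.E))   # 3 2
```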