are defined recursively by $T_i(x) = 2xT_{i-1}(x) - T_{i-2}(x)$, with $T_0(x) = 1$ and $T_1(x) = x$. As a result, the convolution of a graph signal $x$ with the defined filter $g_\theta$ is

      (5.39)  $x *_G g_\theta = U\left(\sum_{i=0}^{K}\theta_i T_i(\tilde{\Lambda})\right)U^T x$

      where $\tilde{\Lambda} = 2\Lambda/\lambda_{max} - I_N$. As $T_i(\tilde{L}) = U T_i(\tilde{\Lambda})U^T$, which can be proved by induction on $i$, ChebNet takes the form

      (5.40)  $x *_G g_\theta = \sum_{i=0}^{K}\theta_i T_i(\tilde{L})x$

      where $\tilde{L} = 2L/\lambda_{max} - I_N$.
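      To make the recursion concrete, here is a minimal NumPy sketch of the ChebNet filtering step in (5.40); the toy graph, the filter order, and the coefficients $\theta_i$ are illustrative assumptions, not values from the text.

```python
import numpy as np

def cheb_filter(L, x, theta):
    """ChebNet filtering per (5.40): sum_i theta[i] * T_i(L_tilde) @ x.

    L     : (N, N) graph Laplacian
    x     : (N,)   graph signal
    theta : (K+1,) Chebyshev coefficients
    """
    N = L.shape[0]
    lam_max = np.linalg.eigvalsh(L).max()     # largest eigenvalue of L
    L_tilde = 2.0 * L / lam_max - np.eye(N)   # rescale spectrum into [-1, 1]

    T_prev, T_curr = x, L_tilde @ x           # T_0(L~)x = x, T_1(L~)x = L~ x
    out = theta[0] * T_prev
    if len(theta) > 1:
        out += theta[1] * T_curr
    for i in range(2, len(theta)):
        # Chebyshev recursion applied to vectors:
        # T_i(L~)x = 2 L~ T_{i-1}(L~)x - T_{i-2}(L~)x
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev
        out += theta[i] * T_curr
    return out

# Toy 4-node path graph (assumed example data)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A                     # combinatorial Laplacian
x = np.array([1.0, 0.0, 0.0, 0.0])            # impulse signal on node 0
print(cheb_filter(L, x, theta=np.array([0.5, 0.3, 0.2])))
```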

      As an improvement over the spectral CNN, ChebNet defines filters that are localized in space, meaning they can extract local features independently of the graph size. ChebNet maps the spectrum linearly into [−1, 1]. CayleyNet [47] further applies Cayley polynomials, which are parametric rational complex functions, to capture narrow frequency bands. The spectral graph convolution of CayleyNet is defined as

      (5.41)  $x *_G g_\theta = c_0 x + 2\,\mathrm{Re}\left\{\sum_{j=1}^{r} c_j (hL - iI)^j (hL + iI)^{-j} x\right\}$

      where Re(·) returns the real part of a complex number, $c_0$ is a real coefficient, the $c_j$ are complex coefficients, $i$ is the imaginary unit, and $h$ is a parameter that controls the spectrum of the Cayley filter.
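      The Cayley filter can be evaluated without an eigendecomposition by repeatedly applying $(hL - iI)(hL + iI)^{-1}$ to $x$ through linear solves. The NumPy sketch below does this densely; the coefficients $c_j$, the zoom parameter $h$, and the toy graph are assumptions made for illustration.

```python
import numpy as np

def cayley_filter(L, x, c, h):
    """Evaluate (5.41): c[0]*x + 2*Re{ sum_j c[j] (hL - iI)^j (hL + iI)^{-j} x }.

    L : (N, N) graph Laplacian
    x : (N,)   real graph signal
    c : list   c[0] real, c[1:] complex Cayley coefficients
    h : float  spectral zoom parameter
    """
    N = L.shape[0]
    numer = h * L - 1j * np.eye(N)            # (hL - iI)
    denom = h * L + 1j * np.eye(N)            # (hL + iI)

    out = c[0] * x.astype(complex)            # real term c_0 x
    y = x.astype(complex)
    for j in range(1, len(c)):
        # One Cayley step: y <- (hL - iI)(hL + iI)^{-1} y,
        # so after j steps y = (hL - iI)^j (hL + iI)^{-j} x.
        y = np.linalg.solve(denom, numer @ y)
        out += 2.0 * c[j] * y
    return out.real                           # real part, as in (5.41)

# Toy usage (assumed data): 3-node path graph, r = 2 complex coefficients
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
x = np.array([1.0, -1.0, 0.5])
print(cayley_filter(L, x, c=[0.4, 0.2 + 0.1j, 0.05 - 0.02j], h=1.0))
```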

      GCN [14] introduces a first‐order approximation of ChebNet. Assuming $K = 1$ and $\lambda_{max} = 2$, (5.40) simplifies to

      (5.42)  $x *_G g_\theta = \theta_0 x - \theta_1 D^{-1/2} A D^{-1/2} x$

      To restrict the number of parameters and avoid overfitting, GCN further assumes $\theta = \theta_0 = -\theta_1$, leading to the following definition of a graph convolution:

      (5.45)  $x *_G g_\theta = \theta\left(I_N + D^{-1/2} A D^{-1/2}\right)x$
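      A minimal sketch of the convolution in (5.45) follows, extended to multi-channel node features with an assumed weight matrix $\Theta$ (the single parameter $\theta$ in (5.45) is the one-channel special case); the toy graph and features are illustrative.

```python
import numpy as np

def gcn_conv(A, X, Theta):
    """Graph convolution of (5.45), multi-channel form:
    H = (I + D^{-1/2} A D^{-1/2}) X Theta.

    A     : (N, N) adjacency matrix (assumes no isolated nodes)
    X     : (N, F) node features
    Theta : (F, F') weight matrix (assumed, for the multi-channel case)
    """
    N = A.shape[0]
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}
    return (np.eye(N) + A_norm) @ X @ Theta

# Example (assumed data): 3-node triangle, 2-dim features, identity weights
A = np.ones((3, 3)) - np.eye(3)
X = np.arange(6, dtype=float).reshape(3, 2)
print(gcn_conv(A, X, Theta=np.eye(2)))
```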

      Several recent works made incremental improvements over GCN [14] by exploring alternative symmetric matrices.

      Adaptive Graph Convolutional Network (AGCN) [16] learns hidden structural relations unspecified by the graph adjacency matrix. It constructs a so‐called residual graph adjacency matrix through a learnable distance function that takes two nodes’ features as inputs.
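      As a rough sketch of this idea only: one common parameterization of a learnable distance is the generalized Mahalanobis form $d(x_i, x_j) = \sqrt{(x_i - x_j)^T M (x_i - x_j)}$ with $M = W_d W_d^T$; the Gaussian kernel and row normalization below are assumptions of this sketch, not details given in the text.

```python
import numpy as np

def residual_adjacency(X, Wd, sigma=1.0):
    """Sketch of a learnable residual adjacency matrix in the spirit of AGCN.

    X  : (N, F) node features
    Wd : (F, F) learnable transform defining M = Wd @ Wd.T
    """
    M = Wd @ Wd.T                                    # positive semidefinite metric
    diff = X[:, None, :] - X[None, :, :]             # (N, N, F) pairwise differences
    D2 = np.einsum('ijf,fg,ijg->ij', diff, M, diff)  # squared Mahalanobis distances
    G = np.exp(-D2 / (2.0 * sigma ** 2))             # Gaussian kernel -> dense graph
    return G / G.sum(axis=1, keepdims=True)          # row-normalize

# Assumed toy data: 4 nodes with 3-dim features, random metric
rng = np.random.default_rng(0)
print(residual_adjacency(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))
```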

      DGCN [17] introduces a dual graph convolutional architecture with two graph convolutional layers in parallel. While these two layers share parameters, one uses the normalized adjacency matrix $\bar{A}$ while the other uses the positive pointwise mutual information (PPMI) matrix, which captures nodes’ co‐occurrence information through random walks sampled from the graph. The PPMI matrix is defined as

      (5.46)  $\mathrm{PPMI}_{v_1, v_2} = \max\left(\log\left(\frac{\mathrm{count}(v_1, v_2)\cdot |D|}{\mathrm{count}(v_1)\,\mathrm{count}(v_2)}\right),\, 0\right)$

      where $v_1, v_2 \in V$, $|D| = \sum_{v_1, v_2}\mathrm{count}(v_1, v_2)$, and the count(·) function returns the frequency with which nodes $v_1$ and $v_2$ co‐occur (or occur) in sampled random walks. By ensembling outputs from the dual graph convolutional layers, DGCN encodes both local and global structural information without the need to stack multiple graph convolutional layers.
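      A self-contained sketch of estimating the PPMI matrix of (5.46) from sampled random walks follows; the walk count, walk length, and co-occurrence window are assumed hyperparameters.

```python
import numpy as np
from collections import Counter

def ppmi_matrix(A, num_walks=80, walk_len=10, window=2, seed=0):
    """Estimate the PPMI matrix of (5.46) from random walks on a graph.

    A : (N, N) adjacency matrix. num_walks, walk_len, and window are
    illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    pair_count, node_count = Counter(), Counter()

    for _ in range(num_walks):
        walk = [rng.integers(N)]                 # uniform random start node
        for _ in range(walk_len - 1):
            nbrs = np.flatnonzero(A[walk[-1]])
            if nbrs.size == 0:
                break
            walk.append(rng.choice(nbrs))
        # count occurrences and windowed co-occurrences along the walk
        for i, u in enumerate(walk):
            node_count[u] += 1
            for w in walk[i + 1 : i + 1 + window]:
                pair_count[(u, w)] += 1
                pair_count[(w, u)] += 1

    D = sum(pair_count.values())                 # |D| = sum of all pair counts
    P = np.zeros((N, N))
    for (u, w), cnt in pair_count.items():
        P[u, w] = max(np.log(cnt * D / (node_count[u] * node_count[w])), 0.0)
    return P

# Assumed toy data: 4-node cycle graph
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(ppmi_matrix(A))
```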

      Neural Network for Graphs (NN4G) [48]: Proposed in parallel with GNN*, this is the first work toward spatial‐based ConvGNNs. Distinctly different from RecGNNs, NN4G learns graph mutual dependency through a compositional neural architecture with independent parameters at each layer. The neighborhood of a node can be extended through incremental construction of the architecture. NN4G performs graph convolutions by summing up a node’s neighborhood information directly. It also applies residual connections and skip connections to memorize information over each layer. As a result, NN4G derives its next‐layer node states by

      (5.47)  $h_v^{(k)} = f\left(W^{(k)T} x_v + \sum_{i=1}^{k-1}\sum_{u \in N(v)} \Theta^{(k)T} h_u^{(i)}\right)$

      where $f(\cdot)$ is an activation function and $h_v^{(0)} = 0$. The equation can also be written in matrix form:

      (5.48)  $H^{(k)} = f\left(X W^{(k)} + \sum_{i=1}^{k-1} A H^{(i)} \Theta^{(k)}\right)$

      which resembles the form of GCN [14]. One difference is that NN4G uses the unnormalized adjacency matrix, which can cause hidden node states to have extremely different scales.
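      A compact sketch of the layer stacking in (5.48), with an assumed ReLU activation for $f(\cdot)$ and assumed toy weights; note how each layer’s skip connections aggregate the states of every earlier layer.

```python
import numpy as np

def nn4g_forward(A, X, Ws, Thetas):
    """Stack of NN4G layers per (5.48): H^(k) = f(X W^(k) + sum_i A H^(i) Theta^(k)).

    A      : (N, N) unnormalized adjacency matrix
    X      : (N, F) node features
    Ws     : list of (F, H) input-weight matrices, one per layer
    Thetas : list of (H, H) neighborhood-weight matrices, one per layer
    """
    relu = lambda Z: np.maximum(Z, 0.0)            # assumed activation f
    Hs = []                                        # states of all previous layers
    for W, Theta in zip(Ws, Thetas):
        # skip connections: aggregate neighborhoods of every earlier layer
        skip = sum(A @ H @ Theta for H in Hs)
        Hs.append(relu(X @ W + skip))
    return Hs[-1]

# Assumed toy data: 4-node cycle, 2-dim features, two layers of width 3
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
X = np.random.default_rng(0).normal(size=(4, 2))
Ws = [np.full((2, 3), 0.1), np.full((2, 3), 0.1)]
Thetas = [np.full((3, 3), 0.1), np.full((3, 3), 0.1)]
print(nn4g_forward(A, X, Ws, Thetas))
```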

      Contextual Graph Markov Model (CGMM) [20] proposes a probabilistic model inspired by NN4G. While maintaining spatial locality, CGMM has the benefit of probabilistic interpretability.