$\omega_{y,j}$ are the weights associated with the delayed outputs. The previous networks exhibit a locally recurrent structure, but when connected into a larger network, they have a feedforward architecture and are referred to as locally recurrent–globally feedforward (LRGF) architectures. A general LRGF architecture is shown in Figure 3.14. It allows dynamic synapses to be included within both the input (represented by H1, … , HM) and the output feedback (represented by HFB), following some of the aforementioned schemes. Some typical examples of these networks are shown in Figures 3.15–3.18.
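To make the LRGF structure concrete, the sketch below shows one locally recurrent neuron whose synapses are small IIR filters, so all recurrence stays inside the synapses while neuron-to-neuron connections remain feedforward. This is an illustrative assumption, not code from the book; the names (`iir_synapse`, `lrgf_neuron`) and the filter orders and coefficients are hypothetical.

```python
import numpy as np

def iir_synapse(x_new, b, a, x_hist, y_hist):
    """One dynamic (IIR) synapse: feedback is local to the synapse itself.

    x_new          -- current input sample to this synapse
    b, a           -- feedforward / feedback coefficients (hypothetical)
    x_hist, y_hist -- delayed synapse inputs and outputs, most recent first
    """
    x_hist[:] = np.r_[x_new, x_hist[:-1]]   # shift in the new input sample
    y = b @ x_hist + a @ y_hist             # IIR filter output
    y_hist[:] = np.r_[y, y_hist[:-1]]       # store for the local feedback path
    return y

def lrgf_neuron(inputs, synapses, phi=np.tanh):
    """Locally recurrent neuron: IIR-filtered synapses summed into a static nonlinearity."""
    v = sum(iir_synapse(x, s["b"], s["a"], s["x"], s["y"])
            for x, s in zip(inputs, synapses))
    return phi(v)

# Hypothetical usage: three synapses, each a small IIR filter.
synapses = [{"b": np.array([0.5, 0.3]), "a": np.array([0.2]),
             "x": np.zeros(2), "y": np.zeros(1)} for _ in range(3)]
out = lrgf_neuron(np.array([0.1, -0.2, 0.4]), synapses)
```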

      (3.65)
$$\begin{aligned}
y_n(k) &= \Phi\big(v_n(k)\big), \qquad n = 1, 2, \ldots, N \\
v_n(k) &= \sum_{l=1}^{p+N+1} \omega_{n,l}(k)\, u_l(k) \\
u_n^{T}(k) &= \big[\,s(k-1), \ldots, s(k-p),\; 1,\; y_1(k-1),\, y_2(k-1), \ldots, y_N(k-1)\,\big]
\end{aligned}$$
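As a reading aid for (3.65), the following minimal NumPy sketch (assumed names and a tanh activation; not the author's code) performs one forward step of the fully connected RNN: it builds the common input vector u(k) from the p delayed input samples, the bias, and the N fed-back outputs, then applies the weight matrix and the nonlinearity.

```python
import numpy as np

def fcrnn_step(s_delayed, y_prev, W, phi=np.tanh):
    """One forward step of the fully connected RNN of Eq. (3.65).

    s_delayed -- [s(k-1), ..., s(k-p)], the p delayed external inputs
    y_prev    -- [y_1(k-1), ..., y_N(k-1)], the fed-back outputs
    W         -- weight matrix of shape (N, p + 1 + N); row n holds w_{n,l}
    Returns the N outputs y(k) and the shared input vector u(k).
    """
    u = np.concatenate([s_delayed, [1.0], y_prev])  # u(k) as defined in (3.65)
    v = W @ u                                       # v_n(k) = sum_l w_{n,l}(k) u_l(k)
    return phi(v), u
```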

      Figure 3.14 General locally recurrent–globally feedforward (LRGF) architecture.

      Figure 3.15 An example of Elman recurrent neural network (RNN).

      Figure 3.16 An example of Jordan recurrent neural network (RNN).

      Training: Here, we discuss training of the single fully connected RNN shown in Figure 3.17. For nonlinear time series prediction, only one output neuron of the RNN is used. Training of the RNN is based on minimizing the instantaneous squared error at the output of the first neuron of the RNN, which can be expressed as

      (3.66) $\min\big(e^2(k)/2\big) = \min\big(\left[s(k) - y_1(k)\right]^2/2\big)$

      where e(k) denotes the error at the output y1 of the RNN, and s(k) is the training signal. Hence, the correction for the l-th weight of neuron n at time instant k is

      (3.67) $\Delta\omega_{n,l}(k) = -\dfrac{\eta}{2}\,\dfrac{\partial e^2(k)}{\partial \omega_{n,l}(k)} = -\eta\, e(k)\,\dfrac{\partial e(k)}{\partial \omega_{n,l}(k)}$
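For clarity, since e(k) = s(k) − y_1(k) and the training signal s(k) does not depend on the weights, the error gradient in (3.67) reduces to the output gradient that (3.68) then expands:

$$\frac{\partial e(k)}{\partial \omega_{n,l}(k)} = -\frac{\partial y_1(k)}{\partial \omega_{n,l}(k)} \quad\Longrightarrow\quad \Delta\omega_{n,l}(k) = \eta\, e(k)\,\frac{\partial y_1(k)}{\partial \omega_{n,l}(k)}$$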

      Figure 3.17 A fully connected recurrent neural network.

      (3.68)
$$\begin{aligned}
\frac{\partial y_1(k)}{\partial \omega_{n,l}(k)} &= \Phi'\big(v_1(k)\big)\,\frac{\partial v_1(k)}{\partial \omega_{n,l}(k)} \\
&= \Phi'\big(v_1(k)\big)\left(\sum_{\alpha=1}^{N} \frac{\partial y_\alpha(k-1)}{\partial \omega_{n,l}(k)}\,\omega_{1,\alpha+p+1}(k) + \delta_{n1}\, u_l(k)\right)
\end{aligned}$$

      where $\delta_{ij}$ is the Kronecker delta ($\delta_{ij} = 1$ if $i = j$ and 0 otherwise). When the learning rate $\eta$ is sufficiently small, we have $\partial y_\alpha(k-1)/\partial \omega_{n,l}(k) \approx \partial y_\alpha(k-1)/\partial \omega_{n,l}(k-1)$. By introducing the notation $\theta_{n,l}^{j} = \partial y_j(k)/\partial \omega_{n,l}(k)$, $1 \le j, n \le N$, $1 \le l \le p+1+N$, we have, recursively for every time step $k$ and all appropriate $j$, $n$, and $l$,

      (3.69) $\theta_{n,l}^{j}(k+1) = \Phi'\big(v_j(k)\big)\left(\sum_{m=1}^{N} \omega_{j,m+p+1}(k)\,\theta_{n,l}^{m}(k) + \delta_{nj}\, u_l(k)\right)$
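A compact way to see how (3.65)-(3.69) fit together is the real-time recurrent learning (RTRL) style sketch below. It is a minimal illustration under assumed choices (tanh activation, random initialization; the names `rtrl_train`, `theta`, and `eta` are mine, not the book's): at each step it forms u(k), runs the forward pass of (3.65), computes the prediction error of (3.66), propagates the sensitivities θ with (3.69), and applies the weight correction of (3.67).

```python
import numpy as np

def rtrl_train(signal, N=3, p=4, eta=0.01, seed=0):
    """Sketch of RTRL training of the fully connected RNN for one-step prediction.

    signal -- 1-D NumPy array s(k), used both as input and as training target.
    Returns the trained weights W and the sequence of prediction errors e(k).
    """
    rng = np.random.default_rng(seed)
    phi = np.tanh
    dphi = lambda v: 1.0 - np.tanh(v) ** 2
    L = p + 1 + N                          # length of u(k): p inputs, bias, N feedbacks
    W = 0.1 * rng.standard_normal((N, L))  # weights w_{n,l}
    theta = np.zeros((N, N, L))            # theta[j, n, l] = dy_j / dw_{n,l}
    y = np.zeros(N)
    errors = []

    for k in range(p, len(signal)):
        u = np.concatenate([signal[k - p:k][::-1], [1.0], y])  # [s(k-1)..s(k-p), 1, y(k-1)]
        v = W @ u
        y_new = phi(v)                       # forward pass, Eq. (3.65)
        e = signal[k] - y_new[0]             # e(k) = s(k) - y_1(k), Eq. (3.66)
        errors.append(e)

        # Sensitivity recursion, Eq. (3.69), evaluated with the pre-update weights w(k).
        theta_next = np.empty_like(theta)
        for j in range(N):
            for n in range(N):
                rec = W[j, p + 1:] @ theta[:, n, :]   # sum_m w_{j,m+p+1}(k) theta^m_{n,l}(k)
                if n == j:
                    rec = rec + u                     # + delta_{nj} u_l(k)
                theta_next[j, n] = dphi(v[j]) * rec

        W += eta * e * theta[0]              # weight correction, Eq. (3.67)
        theta, y = theta_next, y_new

    return W, np.array(errors)
```

For example, `rtrl_train(np.sin(0.05 * np.arange(2000)))` trains a one-step predictor on a sinusoid; the returned error sequence can be inspected to check that learning converges.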

      Figure 3.18 Nonlinear IIR filter structures: (a) a recurrent nonlinear neural filter; (b) a recurrent linear/nonlinear neural filter structure.