[Table: for each of the six input matrices (row index), the indices of the 16 output matrices of C3 to which it is connected.]

      The row names are the indices of the input matrices, and the second column lists the indices of the output matrices connected to the corresponding input matrix. There are 60 connections in total, which means 60 different kernel matrices. A short sketch counting these connections is given below.
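      The count of 60 can be made concrete with a short sketch. The scheme below is the standard C3 connection pattern from LeCun et al.'s original LeNet-5, listed per output matrix rather than per input matrix; the 0-based indexing and the per-output listing are illustrative choices and may differ from the table above.

```python
# Standard LeNet-5 C3 connection scheme (LeCun et al., 1998), listed per output
# matrix: which of the 6 input matrices (indices 0-5) feed into each output matrix.
connections = [
    {0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {0, 4, 5}, {0, 1, 5},              # 3 inputs each
    {0, 1, 2, 3}, {1, 2, 3, 4}, {2, 3, 4, 5}, {0, 3, 4, 5}, {0, 1, 4, 5}, {0, 1, 2, 5},  # 4 inputs each
    {0, 1, 3, 4}, {1, 2, 4, 5}, {0, 2, 3, 5},                                      # 4 non-adjacent inputs
    {0, 1, 2, 3, 4, 5},                                                            # all 6 inputs
]
assert len(connections) == 16                    # 16 output matrices in C3
print(sum(len(c) for c in connections))          # 60 connections, i.e., 60 kernel matrices
```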

      The fourth layer (S4) is a max pooling layer that produces 16 feature matrices of size 5 × 5. The kernel size of this layer is 2 × 2, and the stride is 2, so each 10 × 10 input matrix is reduced to 5 × 5. The fifth layer (C5) is the last convolutional layer in LeNet-5. The 16 input matrices are fully connected to 120 output matrices. Since both the input matrices and the kernel matrices are of size 5 × 5, each output matrix is of size 1 × 1, so the output is in fact a 120-dimensional vector. Each entry of this vector is computed by applying 16 different kernel matrices to the 16 input matrices and then combining the results with a bias.

      The sixth and seventh layers are fully connected layers, which were introduced in the previous section. In the sixth layer (S6), 120 input neurons are fully connected to 84 output neurons. In the last layer, the 84 neurons are fully connected to 10 output neurons, and the 10-dimensional output vector contains the predicted score for each class. For the classification task, the cross-entropy loss between the model output and the label is usually used to train the model.
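      To make the layer dimensions above concrete, the following is a minimal sketch of the LeNet-5 architecture described in this section, assuming PyTorch. The tanh activations, the 32 × 32 grayscale input, and the use of a dense convolution for C3 (rather than the sparse connection table) are simplifying assumptions for illustration.

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),          # C1: 32x32 input -> 6 maps of 28x28
            nn.Tanh(),
            nn.MaxPool2d(kernel_size=2, stride=2),   # S2: 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),         # C3: 14x14 -> 16 maps of 10x10 (dense here)
            nn.Tanh(),
            nn.MaxPool2d(kernel_size=2, stride=2),   # S4: 10x10 -> 5x5
            nn.Conv2d(16, 120, kernel_size=5),       # C5: 5x5 conv on 5x5 maps -> 120 maps of 1x1
            nn.Tanh(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                            # 120 x 1 x 1 -> 120-dimensional vector
            nn.Linear(120, 84),                      # S6: 120 -> 84 neurons
            nn.Tanh(),
            nn.Linear(84, num_classes),              # output layer: 84 -> 10 class scores
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Training uses the cross-entropy loss between the 10-dimensional scores and the label.
model = LeNet5()
x = torch.randn(8, 1, 32, 32)                        # a batch of 8 grayscale 32x32 images
scores = model(x)                                    # shape (8, 10)
loss = nn.CrossEntropyLoss()(scores, torch.randint(0, 10, (8,)))
```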

      There are many other CNN architectures, such as AlexNet [10], VGG [11], and ResNet [12]. These networks have demonstrated state-of-the-art performance on many machine learning tasks, such as image classification, object detection, and speech processing.

      5.1 Introduction

      5.2 Objective Function

      The autoencoder was first introduced in Rumelhart et al. [16] as a model whose main goal is to learn a compressed representation of the input in an unsupervised way. We are essentially creating a network that attempts to reconstruct its inputs by learning the identity function. To do so, an autoencoder can be divided into two parts, an encoder $\boldsymbol{E}\colon \mathbb{R}^n \rightarrow \mathbb{R}^p$ and a decoder $\boldsymbol{D}\colon \mathbb{R}^p \rightarrow \mathbb{R}^n$, that minimize the following loss function with respect to the input $\boldsymbol{x}$:

$$\lVert \boldsymbol{x} - \boldsymbol{D}(\boldsymbol{E}(\boldsymbol{x})) \rVert^2$$

      The encoder ($\boldsymbol{E}$) and decoder ($\boldsymbol{D}$) can be any mappings with the required input and output dimensions, but for image analysis, they are usually CNNs. The norm of the distance can be different, and regularization can be incorporated. Therefore, a more general form of the loss function is

      (3) $$L(\boldsymbol{x}, \hat{\boldsymbol{x}}) + \text{regularizer}$$
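
      As an illustration of the objective in Equation (3), the following is a minimal sketch assuming PyTorch. The fully connected encoder and decoder, the dimensions $n = 784$ and $p = 32$, and the L2 weight penalty used as the regularizer are assumptions for illustration; for image data the encoder and decoder would typically be CNNs, as noted above.

```python
import torch
import torch.nn as nn

n, p = 784, 32                        # input dimension n and code dimension p (assumed values)

encoder = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, p))   # E: R^n -> R^p
decoder = nn.Sequential(nn.Linear(p, 128), nn.ReLU(), nn.Linear(128, n))   # D: R^p -> R^n

x = torch.randn(64, n)                # a batch of inputs
x_hat = decoder(encoder(x))           # reconstruction D(E(x))

# Reconstruction loss L(x, x_hat): squared L2 norm ||x - D(E(x))||^2, averaged over the batch.
recon = ((x - x_hat) ** 2).sum(dim=1).mean()

# One possible regularizer: an L2 penalty on the encoder and decoder parameters.
l2 = sum((w ** 2).sum() for w in list(encoder.parameters()) + list(decoder.parameters()))

loss = recon + 1e-4 * l2              # objective of the form L(x, x_hat) + regularizer
loss.backward()
```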

      [Figure omitted. Source: Krizhevsky [14].]