The row names are indices of input matrices, and the second column shows the indices of the output matrices connected to each input matrix.
The third layer (C3) is the second convolutional layer in LeNet‐5. It consists of 60 kernel matrices of size 5 × 5.
The fourth layer (S4) is a Max Pooling layer that produces 16 feature matrices of size 5 × 5.
The sixth and seventh layers are fully connected layers, which were introduced in the previous section. In the sixth layer (S6), 120 input neurons are fully connected to 84 output neurons. In the last layer, the 84 neurons are fully connected to 10 output neurons, and the 10‐dimensional output vector contains the predicted score for each class. For the classification task, the cross‐entropy loss between the model output and the label is usually used to train the model.
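As a sanity check on the layer sizes above, the spatial dimension of each LeNet‐5 feature map can be computed with the standard convolution/pooling output formula. The sketch below is plain Python; the 32 × 32 input, 5 × 5 kernels, and 2 × 2 pooling windows follow the LeNet‐5 description, while the function name is our own:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution (or pooling) layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 32                                  # LeNet-5 input: 32 x 32 image
size = conv_out(size, kernel=5)            # C1: 5 x 5 conv  -> 28 x 28
size = conv_out(size, kernel=2, stride=2)  # S2: 2 x 2 pool  -> 14 x 14
size = conv_out(size, kernel=5)            # C3: 5 x 5 conv  -> 10 x 10
size = conv_out(size, kernel=2, stride=2)  # S4: 2 x 2 pool  ->  5 x 5
flat = 16 * size * size                    # 16 maps of 5 x 5 -> 400 values
print(size, flat)                          # -> 5 400
```

The 400 flattened values feed the 120 neurons of the next layer, which then connect to the 84 and 10 neurons of the final two layers.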
There are many other CNN architectures, such as AlexNet [10], VGG [11], and ResNet [12]. These networks have demonstrated state‐of‐the‐art performance on many machine learning tasks, such as image classification, object detection, and speech processing.
5 Autoencoders
5.1 Introduction
An autoencoder is a special type of DNN in which the target of each input is the input itself [13]. The architecture of an autoencoder is shown in Figure 5, where the encoder and decoder together form the autoencoder. In the example, the autoencoder takes a horse image as input and produces an image similar to it as output. When the embedding dimension is greater than or equal to the input dimension, there is a risk of overfitting: the model may simply learn an identity function. One common remedy is to make the embedding dimension smaller than the input dimension. Many studies have shown that the intrinsic dimension of high‐dimensional data, such as image data, is far lower than the ambient dimension, so such data can be summarized by low‐dimensional representations. An autoencoder learns such a low‐dimensional embedding by training the network to produce output similar to its input. The learned representation can then be used in various downstream tasks, such as regression, clustering, and classification. However, even if the embedding dimension is as small as 1, overfitting is still possible if the model has enough parameters to encode each sample as an index. Therefore, regularization [15] is required to train an autoencoder that both reconstructs the input well and learns a meaningful embedding.
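The claim that high‐dimensional data often has low intrinsic dimension can be illustrated numerically. The NumPy sketch below (the dimensions and noise level are arbitrary choices for illustration) generates 10‐dimensional points that actually lie near a 2‐dimensional subspace and shows that a rank‐2 linear summary, which is what a linear autoencoder recovers, reconstructs them almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(42)
# 200 points that live on a 2-D subspace of a 10-D space, plus small noise
Z = rng.normal(size=(200, 2))            # intrinsic 2-D coordinates
A = rng.normal(size=(2, 10))             # embedding into 10-D space
X = Z @ A + 0.01 * rng.normal(size=(200, 10))

# Best rank-2 linear summary via SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = U[:, :2] * s[:2] @ Vt[:2]        # rank-2 reconstruction
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Here `rel_err` is on the order of the noise level: two numbers per point are enough to summarize ten‐dimensional measurements.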
5.2 Objective Function
The autoencoder was first introduced by Rumelhart et al. [16] as a model whose main goal is to learn a compressed representation of the input in an unsupervised way. In essence, the network attempts to reconstruct its input by approximating the identity function. To do so, an autoencoder is divided into two parts,
the encoder f, which maps an input x to a low‐dimensional embedding h = f(x), and the decoder g, which maps the embedding back to a reconstruction x̂ = g(h). The two parts are trained jointly to minimize the reconstruction error
\min_{f,\,g} \; \frac{1}{n} \sum_{i=1}^{n} \left\lVert x_i - g(f(x_i)) \right\rVert^2 \qquad (3)
Figure 5 Architecture of an autoencoder.
Source: Krizhevsky [14]
where n is the number of training samples, x_i denotes the ith input, f is the encoder, and g is the decoder.
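Objective (3) can be made concrete with a small linear autoencoder trained by gradient descent. The NumPy sketch below uses our own choices of dimensions and learning rate; practical autoencoders use nonlinear layers and an automatic‐differentiation framework, but the objective being minimized is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 8, 3                     # samples, input dim, embedding dim
X = rng.normal(size=(n, d))

W_enc = 0.1 * rng.normal(size=(d, k))   # linear encoder: f(x) = x W_enc
W_dec = 0.1 * rng.normal(size=(k, d))   # linear decoder: g(h) = h W_dec

def loss(W_enc, W_dec):
    X_hat = X @ W_enc @ W_dec           # reconstruction g(f(x_i))
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))  # objective (3)

losses = [loss(W_enc, W_dec)]
lr = 0.05
for _ in range(300):                    # plain gradient descent
    H = X @ W_enc                       # embeddings h_i
    R = H @ W_dec - X                   # residuals g(f(x_i)) - x_i
    W_dec -= lr * (2.0 / n) * H.T @ R
    W_enc -= lr * (2.0 / n) * X.T @ (R @ W_dec.T)
    losses.append(loss(W_enc, W_dec))
```

Because k < d, the model cannot learn the identity function; minimizing (3) forces it to find the 3‐dimensional summary of the data that reconstructs best, and the recorded losses decrease over the course of training.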