– the possibility of compressing big data sets via a data tensorization and the use of a tensor decomposition, in particular, a low multilinear rank approximation;
– a greater flexibility in representing and processing multimodal data by considering the modalities separately, instead of stacking the corresponding data into a vector or a matrix. This allows the multilinear structure of data to be preserved, meaning that interactions between modes can be taken into account;
– a greater number of modalities can be incorporated into tensor representations of data, meaning that more complementary information is available, which allows the performance of certain systems to be improved, e.g. wireless communication, recommendation, diagnostic, and monitoring systems, by making detection, interpretation, recognition, and classification operations easier and more efficient. This led to a generalization of certain matrix algorithms, like SVD (singular value decomposition) to MLSVD (multilinear SVD), also known as HOSVD (higher order SVD) (de Lathauwer et al. 2000a); similarly, certain signal processing algorithms were generalized, like PCA (principal component analysis) to MPCA (multilinear PCA) (Lu et al. 2008) or TRPCA (tensor robust PCA) (Lu et al. 2020) and ICA (independent component analysis) to MICA (multilinear ICA) (Vasilescu and Terzopoulos 2005) or tensor PICA (probabilistic ICA) (Beckmann and Smith 2005).
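To make the compression and MLSVD ideas above concrete, here is a minimal numpy sketch of a truncated MLSVD/HOSVD: each factor matrix is taken from the leading left singular vectors of a mode-n unfolding, and the core is obtained by contracting the tensor with the transposed factors. The function names and example sizes are our own choices for illustration, not a reference implementation.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_mlsvd(T, ranks):
    """Truncated MLSVD/HOSVD of T with multilinear rank `ranks`.

    Returns a small core G and one orthonormal factor U_n per mode,
    so that T is approximated by G contracted with the U_n."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    G = T
    for n in range(T.ndim):  # G = T x_1 U_1^T x_2 U_2^T ...
        G = np.moveaxis(np.tensordot(U[n].T, G, axes=(1, n)), 0, n)
    return G, U

def reconstruct(G, U):
    """Multilinear product of the core G with the factors U_n."""
    T = G
    for n in range(G.ndim):
        T = np.moveaxis(np.tensordot(U[n], T, axes=(1, n)), 0, n)
    return T
```

For a tensor whose multilinear rank really equals `ranks`, the reconstruction is exact, and storing the core plus the factors is much cheaper than storing the full array: for a 10 × 12 × 8 tensor of multilinear rank (2, 3, 2), 84 numbers replace 960.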
It is worth noting that, with a tensor model, the number of modalities considered in a problem can be increased either by increasing the order of the data tensor or by coupling tensor and/or matrix decompositions that share one or several modes. Such a coupling approach is called data fusion using a coupled tensor/matrix factorization. Two examples of this type of coupling are presented later in this introductory chapter. In the first, EEG signals are coupled with functional magnetic resonance imaging (fMRI) data to analyze the brain function; in the second, hyperspectral and multispectral images are merged for remote sensing.
The other approach, namely increasing the order of the data tensor, will be illustrated in Volume 3 of this series by giving a unified presentation of various models of wireless communication systems designed using tensors. In order to improve system performance, both in terms of transmission and reception, the idea is to employ multiple types of diversity simultaneously in various domains (space, time, frequency, code, etc.), each type of diversity being associated with a mode of the tensor of received signals. Coupled tensor models will also be presented in the context of cooperative communication systems with relays.
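A coupled tensor/matrix factorization of the kind discussed above can be sketched with alternating least squares: a CP model of a third-order tensor T and a low-rank model of a matrix M share their first-mode factor, which is updated jointly from both data sets. The update rules below are the standard CP-ALS normal equations; the function names, the shared-mode choice, and the random initialization are assumptions made for this toy sketch, not the algorithms used in the applications cited above.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product: row i*B.shape[0]+j equals A[i] * B[j]."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def coupled_cp(T, M, R, n_iter=500):
    """ALS for the coupled model T ~ [[A, B, C]] and M ~ A @ V.T,
    with the first-mode factor A shared by both decompositions."""
    rng = np.random.default_rng(0)
    B = rng.standard_normal((T.shape[1], R))
    C = rng.standard_normal((T.shape[2], R))
    V = rng.standard_normal((M.shape[1], R))
    for _ in range(n_iter):
        Z = khatri_rao(B, C)
        # A solves a joint least-squares problem over both data sets.
        A = (unfold(T, 0) @ Z + M @ V) @ np.linalg.pinv(Z.T @ Z + V.T @ V)
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        V = M.T @ A @ np.linalg.pinv(A.T @ A)
    return A, B, C, V
```

On noiseless synthetic data generated from a common factor A, this joint update recovers models that fit both the tensor and the matrix, which is the essence of data fusion by coupled factorization.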
I.2. For what uses?
In the big data era, digital information processing plays a key role in various fields of application. Each field has its own specificities and requires specialized, often multidisciplinary, skills to manage both the multimodality of the data and the processing techniques that need to be implemented. Thus, the “intelligent” information processing systems of the future will have to integrate representation tools, such as tensors and graphs, and signal and image processing methods with artificial intelligence techniques based on artificial neural networks and machine learning.
The needs of such systems are diverse and numerous – whether in terms of storage, visualization (3D representation, virtual reality, dissemination of works of art), transmission, imputation, prediction/forecasting, analysis, classification or fusion of multimodal and heterogeneous data. The reader is invited to refer to Lahat et al. (2015) and Papalexakis et al. (2016) for a presentation of various examples of data fusion and data mining based on tensor models.
Some of the key applications of tensor tools are as follows:
– decomposition or separation of heterogeneous datasets into components/factors or subspaces with the goal of exploiting the multimodal structure of the data and extracting useful information for users from uncertain or noisy data or measurements provided by different sources of information and/or types of sensor. Thus, features can be extracted in different domains (spatial, temporal, frequential) for classification and decision-making tasks;
– imputation of missing data within an incomplete database using a low-rank tensor model, where the missing data results from defective sensors or communication links, for example. This task is called tensor completion and is a higher order generalization of matrix completion (Candès and Recht 2009; Signoretto et al. 2011; Liu et al. 2013);
– recovery of useful information from compressed data by reconstructing a signal or an image that has a sparse representation in a predefined basis, using compressive sampling (CS; also known as compressed sensing) techniques (Candès and Wakin 2008; Candès and Plan 2010), applied to sparse, low-rank tensors (Sidiropoulos and Kyrillidis 2012);
– fusion of data using coupled tensor and matrix decompositions;
– design of cooperative multi-antenna communication systems, also called MIMO (multiple-input multiple-output) systems; this type of application, which led to the development of several new tensor models, will be considered in the next two volumes of this series;
– multilinear compressive learning that combines compressed sensing with machine learning;
– reduction of the dimensionality of multimodal, heterogeneous databases with very large dimensions (big data) by solving a low-rank tensor approximation problem;
– multiway filtering and tensor data denoising.
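As a toy illustration of the completion task listed above — not the algorithms of the cited references — one can alternate between projecting onto the set of low multilinear rank tensors (via truncated HOSVD) and re-imposing the observed entries. The function names, the fixed-rank assumption, and the iteration count are our own choices for this sketch.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multilinear_project(X, ranks):
    """Push X toward multilinear rank `ranks` by sequentially projecting
    each mode onto its leading singular subspace (truncated HOSVD)."""
    for n, r in enumerate(ranks):
        U = np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
        X = np.moveaxis(np.tensordot(U @ U.T, X, axes=(1, n)), 0, n)
    return X

def complete(T_obs, mask, ranks, n_iter=300):
    """Naive tensor completion: low-rank projection, then restore the
    observed entries, repeated n_iter times."""
    X = np.where(mask, T_obs, 0.0)
    for _ in range(n_iter):
        X = multilinear_project(X, ranks)
        X = np.where(mask, T_obs, X)  # observed entries stay fixed
    return X
```

On a noiseless tensor of low multilinear rank with a small fraction of missing entries, this simple scheme fills in the missing values far more accurately than zero-filling; it is only meant to convey the idea behind tensor completion.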
Tensors can also be used to tensorize neural networks with fully connected layers by expressing the weight matrix of a layer as a tensor train (TT) whose cores represent the parameters of the layer. This considerably reduces the parametric complexity and, therefore, the storage space. This compression property of the information contained in layered neural networks when using tensor decompositions provides a way to increase the number of hidden units (Novikov et al. 2015). Tensors, when used together with multilayer perceptron neural networks to solve classification problems, achieve lower error rates with fewer parameters and less computation time than neural networks alone (Chien and Bao 2017). Neural networks can also be used to learn the rank of a tensor (Zhou et al. 2019), or to compute its eigenvalues and singular values, and hence the rank-one approximation of a tensor (Che et al. 2017).
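The TT idea can be illustrated with the TT-SVD algorithm (Oseledets 2011): a sequence of truncated SVDs turns a reshaped weight array into small three-way cores. Note that Novikov et al. (2015) actually use a TT-matrix format that pairs input and output indices in each core; the plain-tensor sketch below, with names and sizes of our own choosing, only demonstrates the parameter saving.

```python
import numpy as np

def tt_svd(T, max_rank):
    """TT-SVD: decompose a d-way tensor into d cores of shape
    (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1."""
    cores, r_prev = [], 1
    C = T.reshape(T.shape[0], -1)
    for k in range(T.ndim - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, int(np.sum(s > 1e-12 * s[0])))  # truncation rank
        cores.append(U[:, :r].reshape(r_prev, T.shape[k], r))
        # Carry the remainder to the next mode.
        C = (s[:r, None] * Vt[:r]).reshape(r * T.shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, T.shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT-cores back into the full tensor."""
    X = cores[0]
    for G in cores[1:]:
        X = np.tensordot(X, G, axes=(-1, 0))
    return X.reshape(X.shape[1:-1])  # drop the boundary ranks r_0 = r_d = 1
```

For a 4 × 4 × 4 × 4 array of TT-rank (3, 3, 3) — e.g. a 16 × 16 weight matrix reshaped into four modes — the cores hold 96 numbers instead of 256, and the saving grows rapidly with the dimensions, which is what makes the tensorization of fully connected layers attractive.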
I.3. In what fields of application?
Tensors have applications in many domains. The fields of psychometrics and chemometrics in the 1970s and 1990s paved the way for signal and image processing applications, such as blind source separation, digital communications, and computer vision, in the 1990s and early 2000s. Today, there is a quantitative explosion of big data in medicine, astronomy, and meteorology; in fifth-generation (5G) wireless communications; in medical diagnostic aid; in web services delivered by recommendation systems (video on demand, online sales, restaurant and hotel reservations, etc.); and in information searching within multimedia databases (texts, images, audio and video recordings) and social networks. This explains why various scientific communities and the industrial world are showing a growing interest in tensors.
Among the many examples of applications of tensors for signal and image processing, we can mention:
– blind source separation and blind system identification. These problems play a fundamental role in signal processing. They involve separating the input signals (also called sources) and identifying a system from the knowledge of only the output signals and certain hypotheses about the input signals, such as statistical independence in the case of independent component analysis (Comon 1994), or the assumption of a finite alphabet in the context of digital communications. This type of processing is, in particular, used to jointly estimate communication channels and information symbols emitted by a transmitter. It can also be used for speech or music separation, or to process seismic signals;
– use of tensor decompositions to analyze biomedical signals (EEG, MEG, ECG, EOG)