12 Part VII: High‐Performance Computing
   29 Massive Parallelization
      1 Introduction
      2 Gaussian Process Regression and Surrogate Modeling
      3 Divide‐and‐Conquer GP Regression
      4 Empirical Results
      5 Conclusion
      Acknowledgments
      References
   30 Divide‐and‐Conquer Methods for Big Data Analysis
      1 Introduction
      2 Linear Regression Model
      3 Parametric Models
      4 Nonparametric and Semiparametric Models
      5 Online Sequential Updating
      6 Splitting the Number of Covariates
      7 Bayesian Divide‐and‐Conquer and Median‐Based Combining
      8 Real‐World Applications
      9 Discussion
      Acknowledgment
      References
   31 Bayesian Aggregation
      1 From Model Selection to Model Combination
      2 From Bayesian Model Averaging to Bayesian Stacking
      3 Asymptotic Theories of Stacking
      4 Stacking in Practice
      5 Discussion
      References
   32 Asynchronous Parallel Computing
      1 Introduction
      2 Asynchronous Parallel Coordinate Update
      3 Asynchronous Parallel Stochastic Approaches
      4 Doubly Stochastic Coordinate Optimization with Variance Reduction
      5 Concluding Remarks
      References
13 Index
List of Tables
1 Chapter 2
   Table 1 Summary of selected statistical software.
   Table 2 Summary of selected user environments/workflows.
2 Chapter 3
   Table 1 Connection between input and output matrices in the third layer of L...
3 Chapter 4
   Table 1 Streaming data versus static data [9, 10]
4 Chapter 5
   Table 1 Probabilities for each action figure
5 Chapter 8
   Table 1 Summary of ingredients of Algorithm 2 for the four adaptive MCMC me...
   Table 2 Summary of recommended algorithms for specific problems and their s...
6 Chapter 9
   Table 1 Summary of the notation.
   Table 2 Comparison of various AIS algorithms according to different feature...
   Table 3 Comparison of various AIS algorithms according to the computational...
7 Chapter 21
   Table 1 Summary of uncertainty visualization theory detailed in this chapte...
8 Chapter 29
   Table 2 Updated GPU/CPU results based on a more modern cascade of supercomp...
List of Illustrations
1 Chapter 1
   Figure 1 A nontraditional and critically important application in computatio...
2 Chapter 3
   Figure 1 An MLP with three layers.
   Figure 2 Convolution operation with stride size .
   Figure 3 Pooling operation with stride size .
   Figure 4 LeNet‐5 of LeCun et al. [8].