Seismic Reservoir Modeling. Dario Grana. John Wiley & Sons Limited. ISBN 9781119086192.
\[ \mathbf{d} = \mathbf{F}\,\mathbf{m} + \boldsymbol{\varepsilon}, \]

      The model m* that minimizes the L2‐norm is called the least‐squares solution because it minimizes the sum of the squares of the differences between the measured and predicted data, and it is given by the following equation, generally called the normal equation (Aster et al. 2018):

      (1.49) \[ \mathbf{m}^{*} = \left( \mathbf{F}^{T} \mathbf{F} \right)^{-1} \mathbf{F}^{T} \mathbf{d}. \]
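As a minimal numerical sketch (not from the book), the normal equation in Eq. (1.49) can be evaluated directly with NumPy; the forward operator F, the true model, and the noise level below are illustrative assumptions:

```python
import numpy as np

# Hypothetical linear forward problem d = F m + eps (illustrative values).
rng = np.random.default_rng(0)
F = rng.standard_normal((20, 3))       # 20 observations, 3 model parameters
m_true = np.array([1.0, -2.0, 0.5])
d = F @ m_true + 0.01 * rng.standard_normal(20)

# Least-squares solution via the normal equation (Eq. 1.49):
# m* = (F^T F)^{-1} F^T d
m_star = np.linalg.solve(F.T @ F, F.T @ d)

# In practice, np.linalg.lstsq solves the same problem with better
# numerical stability than forming F^T F explicitly.
m_lstsq, *_ = np.linalg.lstsq(F, d, rcond=None)
```

Both routes give the same estimate; for ill-conditioned operators, the `lstsq` route (based on the SVD) is the safer choice.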

      If we consider the data points to be imperfect measurements with random errors, the inverse problem associated with Eq. (1.47) can be seen, from a statistical point of view, as a maximum likelihood estimation problem. Given a model m, we assign to each observation di a PDF fi(di ∣ m) for i = 1, … , nd and we assume that the observations are independent. The joint probability density of the vector of independent observations d is then the product of the individual densities:

      \[ f(\mathbf{d} \mid \mathbf{m}) = \prod_{i=1}^{n_d} f_i(d_i \mid \mathbf{m}). \]
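A short sketch of this connection, under the common assumption (not stated explicitly above) of independent Gaussian errors with known standard deviation sigma: the joint log-likelihood is a sum of per-observation terms, and maximizing it is equivalent to minimizing the L2 misfit up to a constant.

```python
import numpy as np

# Illustrative linear problem with assumed Gaussian errors of std sigma.
rng = np.random.default_rng(1)
F = rng.standard_normal((15, 2))
m = np.array([0.3, -1.2])
sigma = 0.1
d = F @ m + sigma * rng.standard_normal(15)

pred = F @ m
# Per-observation Gaussian log-density log f_i(d_i | m); independence makes
# the joint density a product, hence the log-likelihood a sum.
log_fi = -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((d - pred) / sigma) ** 2
log_likelihood = np.sum(log_fi)

# Up to an additive constant, the log-likelihood equals the negative
# (scaled) L2 misfit, which is why ML estimation reduces to least squares.
l2_misfit = np.sum((d - pred) ** 2)
const = -0.5 * len(d) * np.log(2 * np.pi * sigma**2)
```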

      The L2‐norm is not the only misfit measure that can be used in inverse problems. For example, to reduce the influence of data points that are inconsistent with the chosen mathematical model (namely the outliers), the L1‐norm is generally preferable to the L2‐norm. However, from a mathematical point of view, the L2‐norm is preferable because of the analytical tractability of the associated Gaussian distribution.

      In science and engineering applications, many inverse problems are not linear; therefore, the analytical solution of the inverse problem might not be available. For non‐linear inverse problems, several mathematical algorithms are available, including gradient‐based deterministic methods, such as Gauss–Newton, Levenberg–Marquardt, and conjugate gradient; Markov chain Monte Carlo methods, such as Metropolis, Metropolis–Hastings, and Gibbs sampling; and stochastic optimization algorithms, such as simulated annealing, particle swarm optimization, and genetic algorithms. For detailed descriptions of these methods we refer the reader to Tarantola (2005), Sen and Stoffa (2013), and Aster et al. (2018).
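Of the methods listed, the Metropolis sampler is perhaps the simplest to sketch. The following is a minimal illustration (not the book's implementation) for a one-parameter non-linear problem d = f(m) + eps with f(m) = m², an assumed Gaussian likelihood, and an assumed flat prior on [0, 3]:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda m: m**2                     # non-linear forward model (assumed)
m_true, sigma = 1.5, 0.1
d = f(m_true) + sigma * rng.standard_normal(50)

def log_post(m):
    # Gaussian log-likelihood plus a flat prior on [0, 3] (an assumption).
    if not 0.0 <= m <= 3.0:
        return -np.inf
    return -0.5 * np.sum((d - f(m)) ** 2) / sigma**2

samples, m_cur = [], 0.5
for _ in range(5000):
    m_prop = m_cur + 0.1 * rng.standard_normal()   # symmetric random walk
    # Metropolis acceptance rule: accept with probability
    # min(1, posterior(m_prop) / posterior(m_cur)).
    if np.log(rng.uniform()) < log_post(m_prop) - log_post(m_cur):
        m_cur = m_prop
    samples.append(m_cur)

m_est = np.mean(samples[1000:])        # posterior mean, after burn-in
```

Because the proposal is symmetric, the proposal densities cancel in the acceptance ratio; an asymmetric proposal would require the full Metropolis–Hastings correction.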

      (1.52) \[ P(\mathbf{m} \mid \mathbf{d}) = \frac{P(\mathbf{d} \mid \mathbf{m})\, P(\mathbf{m})}{P(\mathbf{d})} = \frac{P(\mathbf{d} \mid \mathbf{m})\, P(\mathbf{m})}{\int P(\mathbf{d} \mid \mathbf{m})\, P(\mathbf{m})\, d\mathbf{m}}, \]

      where P(d ∣ m) is the likelihood function, P(m) is the prior distribution, and P(d) is the marginal distribution. The probability P(d) is a normalizing constant that guarantees that P(m ∣ d) is a valid PDF.
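For a scalar model, Eq. (1.52) can be evaluated numerically on a grid, with the integral in the denominator approximated by a sum. The prior, forward relation d = m + eps, and noise level below are illustrative assumptions:

```python
import numpy as np

m_grid = np.linspace(-5, 5, 1001)
dm = m_grid[1] - m_grid[0]

# Assumed standard Gaussian prior P(m), m ~ N(0, 1).
prior = np.exp(-0.5 * m_grid**2) / np.sqrt(2 * np.pi)

# Assumed likelihood P(d | m) for d = m + eps, eps ~ N(0, sigma^2).
d_obs, sigma = 1.2, 0.5
likelihood = np.exp(-0.5 * ((d_obs - m_grid) / sigma) ** 2)

# P(d): the normalizing constant, approximated by a Riemann sum.
evidence = np.sum(likelihood * prior) * dm
posterior = likelihood * prior / evidence   # integrates to 1 by construction
```

Here the posterior mean can be checked against the conjugate Gaussian result, (1/sigma² · d_obs)/(1 + 1/sigma²) ≈ 0.96 for these values.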

      In geophysical inverse problems, we often assume that the physical relation f in Eq. () is linear and that the prior distribution P(m) is Gaussian (Tarantola 2005). These two assumptions are not strictly required to solve the Bayesian inverse problem, but under these assumptions the inverse solution can be derived analytically. Indeed, in the Gaussian case, the solution to the Bayesian linear inverse problem is well known (Tarantola 2005). If we assume that: (i) the prior distribution of the model is Gaussian, i.e. \(\mathbf{m} \sim\)

