For example, the event E might represent the presence of hydrocarbon sand in a reservoir and P(E) the probability of finding hydrocarbon sand at a given location.

      Probability theory is based on three axioms, illustrated in the short numerical sketch after the following list:

      1 The probability P(E) of an event E is a real number in the interval [0, 1]:
      (1.1) 0 ≤ P(E) ≤ 1

      2 The probability P(S) of the sample space S is 1:
      (1.2) P(S) = 1

      3 If two events E1 and E2 are mutually exclusive, i.e. E1 ∩ E2 = ∅ (in other words, the occurrence of one event excludes the occurrence of the other), then the probability of the union E1 ∪ E2, i.e. the probability that either event occurs, is the sum of the probabilities of the two events:
      (1.3) P(E1 ∪ E2) = P(E1) + P(E2)
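
      As a minimal illustration of these axioms (not from the book: the lithology classes and probability values below are hypothetical), the following Python sketch defines a discrete probability distribution and checks the three properties numerically.

```python
# Hypothetical discrete sample space of lithology classes with assumed probabilities.
probs = {"sand": 0.40, "shaly sand": 0.25, "shale": 0.35}

# Axiom 1: every probability is a real number in [0, 1].
assert all(0.0 <= p <= 1.0 for p in probs.values())

# Axiom 2: the probability of the sample space S (the union of all outcomes) is 1.
assert abs(sum(probs.values()) - 1.0) < 1e-12

def prob(event):
    """Probability of an event, represented as a set of elementary outcomes."""
    return sum(probs[outcome] for outcome in event)

# Axiom 3: for mutually exclusive events (disjoint sets), P(E1 U E2) = P(E1) + P(E2).
E1, E2 = {"sand"}, {"shale"}
assert abs(prob(E1 | E2) - (prob(E1) + prob(E2))) < 1e-12
```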

      From these axioms, several properties can be derived. The probability of the complementary event E^c of the event E, i.e. the probability that the event E does not occur, is given by:

      (1.4) P(E^c) = 1 − P(E)

      since E ∪ E^c = S and E ∩ E^c = ∅. From the axioms, we can also derive that the probability of the union of two generic events, not necessarily mutually exclusive, is:

      (1.5) P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2)
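
      The derived properties in Eqs. (1.4) and (1.5) can be checked in the same way; the sketch below reuses the hypothetical lithology probabilities introduced above.

```python
# Hypothetical lithology probabilities (same assumed values as in the previous sketch).
probs = {"sand": 0.40, "shaly sand": 0.25, "shale": 0.35}
S = set(probs)  # sample space

def prob(event):
    """Probability of an event, represented as a set of elementary outcomes."""
    return sum(probs[outcome] for outcome in event)

# Eq. (1.4): the complementary event S \ E has probability 1 - P(E).
E = {"sand", "shaly sand"}
assert abs(prob(S - E) - (1.0 - prob(E))) < 1e-12

# Eq. (1.5): for generic (not necessarily mutually exclusive) events,
# P(E1 U E2) = P(E1) + P(E2) - P(E1 n E2).
E1, E2 = {"sand", "shaly sand"}, {"shaly sand", "shale"}
assert abs(prob(E1 | E2) - (prob(E1) + prob(E2) - prob(E1 & E2))) < 1e-12
```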

      A fundamental concept in probability and statistics is the definition of conditional probability, which describes the probability of an event based on the outcome of another event. In general, the probability of an event E can be defined more precisely if additional information related to the event is available. For example, seismic velocity depends on a number of factors, such as porosity, mineral volumes, and fluid saturations. If one of these factors is known, then the probability distribution of seismic velocity can be estimated more precisely. For instance, if the average porosity of the reservoir is 0.30, it is more likely that the seismic velocity will be relatively low, whereas if the average porosity of the reservoir is 0.05, then it is more likely that the seismic velocity will be relatively high. This idea can be formalized using the concept of conditional probability. Given two events A and B, the conditional probability P(A|B) of A given B is defined as:

      (1.6) P(A|B) = P(A, B) / P(B)

      where P(A, B) is the joint probability of the two events and P(B) > 0.
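
      A minimal sketch of Eq. (1.6), assuming a hypothetical joint probability table for two discrete events (a porosity class and a P-wave velocity class), is given below; the values are illustrative only.

```python
# Hypothetical joint probabilities P(A, B) for porosity class A and velocity class B.
joint = {
    ("low_porosity",  "high_velocity"): 0.30,
    ("low_porosity",  "low_velocity"):  0.10,
    ("high_porosity", "high_velocity"): 0.15,
    ("high_porosity", "low_velocity"):  0.45,
}

def marginal_velocity(v):
    """Marginal probability P(B) of a velocity class."""
    return sum(p for (_, vel), p in joint.items() if vel == v)

def conditional(phi, v):
    """Conditional probability P(A | B) = P(A, B) / P(B)."""
    return joint[(phi, v)] / marginal_velocity(v)

# High porosity is more likely once a low velocity has been observed.
print(conditional("high_porosity", "low_velocity"))  # 0.45 / 0.55 ~ 0.82
```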

      Two events A and B are said to be independent if the joint probability P(A, B) is the product of the probabilities of the two events, i.e. P(A, B) = P(A)P(B). Therefore, given two independent events A and B, the conditional probability P(A|B) reduces to:

      (1.7) P(A|B) = P(A)

      This means that the probability of A does not depend on the outcome of the event B. For a more detailed description of probability theory, we refer the reader to Papoulis and Pillai (2002).
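
      Independence can be checked directly from a joint probability table by testing whether P(A, B) = P(A)P(B); the hypothetical example below reuses the table of the previous sketch, and in this case the two events turn out not to be independent.

```python
# Hypothetical joint probabilities P(A, B) (same assumed table as above).
joint = {
    ("low_porosity",  "high_velocity"): 0.30,
    ("low_porosity",  "low_velocity"):  0.10,
    ("high_porosity", "high_velocity"): 0.15,
    ("high_porosity", "low_velocity"):  0.45,
}

# Marginal probabilities P(A) and P(B).
p_low_phi = sum(p for (phi, _), p in joint.items() if phi == "low_porosity")   # 0.40
p_high_v  = sum(p for (_, vel), p in joint.items() if vel == "high_velocity")  # 0.45

# If the events were independent, P(A, B) would equal P(A)P(B) and Eq. (1.7) would hold.
print(joint[("low_porosity", "high_velocity")], p_low_phi * p_high_v)  # 0.30 vs 0.18
```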

      Bayes' theorem states that the conditional probability P(A|B) is given by:

      (1.8) P(A|B) = P(B|A) P(A) / P(B)

      where P(A) is the probability of A, P(B|A) is the probability of the event B given the event A, and P(B) is the probability of B. The term P(A) is called the prior probability of A, since it measures the probability before considering the additional information associated with the event B. The term P(B|A) is the likelihood function, i.e. the probability of observing the event B for each possible outcome of the event A. The term P(B) represents the marginal probability of the event B and is a normalizing constant that ensures that P(A|B) satisfies the axioms. The resulting conditional probability P(A|B) is also called the posterior probability of the event A, since it is computed based on the outcome of the event B.
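
      As a numerical illustration, the sketch below applies Bayes' theorem to the hydrocarbon sand example; the prior and likelihood values are hypothetical, and the marginal P(B) is obtained from the law of total probability over the two outcomes of A.

```python
# A = "hydrocarbon sand present at the location", B = "high seismic amplitude observed".
p_sand = 0.20                  # assumed prior P(A)
p_amp_given_sand = 0.70        # assumed likelihood P(B | A)
p_amp_given_no_sand = 0.25     # assumed likelihood P(B | not A)

# Marginal probability P(B), from the law of total probability over A and not A.
p_amp = p_amp_given_sand * p_sand + p_amp_given_no_sand * (1.0 - p_sand)

# Posterior P(A | B): the observation raises the probability of sand from 0.20 to about 0.41.
p_sand_given_amp = p_amp_given_sand * p_sand / p_amp
print(p_sand_given_amp)        # 0.14 / 0.34 ~ 0.41
```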

      In the Bayesian setting, the prior probability represents the prior knowledge about the event of interest and the likelihood function is the probabilistic formulation of the relation between the event of interest and the observable event (i.e. the data). The intuitive idea of Bayes' theorem is to reduce the uncertainty in the prior probability by integrating additional information from the data.