– the interval between two points with possibility α corresponds to an α-cut: it represents the subset of values whose possibility is at least equal to α. There is no equivalent notion in probability theory.
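To make the α-cut notion concrete, the short sketch below computes the α-cut of a triangular possibility distribution numerically; the support [8, 12] and mode 10 are illustrative assumptions (chosen to match the example used further in this section), and the function names are ours rather than from the text.

```python
import numpy as np

def pi_tri(x, a=8.0, c=10.0, b=12.0):
    """Triangular possibility distribution on [a, b] with mode c (illustrative parameters)."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (c - a), (b - x) / (b - c)), 0.0, 1.0)

def alpha_cut(alpha, a=8.0, c=10.0, b=12.0):
    """Alpha-cut {x : pi(x) >= alpha} of the triangular distribution, returned as an interval."""
    return a + alpha * (c - a), b - alpha * (b - c)

lo, hi = alpha_cut(0.5)
print(lo, hi)            # 9.0 11.0: every x in [9, 11] has possibility at least 0.5
print(pi_tri([lo, hi]))  # [0.5 0.5]: the cut endpoints sit exactly at possibility level 0.5
```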
Figure 1.8. Comparison between a probability density function f(x) and a possibility distribution function Π(x)
In order to ensure consistency between variables modeled in a probabilistic manner and variables modeled in a possibilistic manner, the following condition is typically imposed: the possibility of an event must be greater than or equal to its probability. This is consistent with the usual meaning of the words probable and possible: when we say “it is probable that it will snow tomorrow”, this shows a stronger conviction than when we say “it is possible that it will snow tomorrow”. This condition of consistency implies that reasoning in terms of possibilities is more conservative than reasoning in terms of probabilities.
The differences between the probabilistic and possibilistic axioms, as well as the consistency requirement specified above, imply a very different interpretation of the distribution functions (CDF or CPoF) in these two frameworks. Let us examine how the two theories model the information "X lies in the interval [8,12]". To model this in a purely probabilistic framework (and not in a context of probability boxes), we need an additional assumption, such as the principle of indifference between the values of this interval, which leads to modeling by a uniform distribution between 8 and 12. The corresponding probability density is shown in Figure 1.9. The figure also represents a PoDF modeling the same information (X lies in the interval [8,12]). Note that multiple possibility distributions could have been plotted. In particular, a uniform distribution with a possibility value of 1 would have been suitable. Such a distribution would have been very conservative, being the most uninformative distribution that could have been constructed. On the other hand, the triangular distribution plotted in Figure 1.9 is the most informative distribution that can be constructed while still meeting the consistency condition, which requires that the possibility of any event involving X be greater than or equal to its probability.
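As a purely illustrative check of this consistency condition, the sketch below compares the uniform probability distribution on [8, 12] with a triangular possibility distribution; the support [8, 12] and the mode at 10 are assumptions made to mimic Figure 1.9, which is not reproduced here.

```python
import numpy as np

def pi_tri(x, a=8.0, c=10.0, b=12.0):
    """Triangular possibility distribution on [a, b] with mode c (assumed parameters)."""
    return float(np.clip(min((x - a) / (c - a), (b - x) / (b - c)), 0.0, 1.0))

def poss_interval(lo, hi, c=10.0):
    """Possibility of the interval event [lo, hi]: supremum of pi over that interval."""
    return 1.0 if lo <= c <= hi else max(pi_tri(lo), pi_tri(hi))

rng = np.random.default_rng(0)
for _ in range(10_000):
    lo, hi = np.sort(rng.uniform(8.0, 12.0, size=2))
    p_event = (hi - lo) / 4.0   # probability of [lo, hi] under the uniform distribution on [8, 12]
    assert poss_interval(lo, hi) >= p_event - 1e-12

print("P(A) <= Pi(A) held for every sampled interval event A.")
```

Interval events are used here only for simplicity of the sketch; the consistency condition must in fact hold for all events, including unions of intervals.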
Figure 1.9. Probability density function and possibility distribution function satisfying the consistency condition
To conclude this comparison, it should be noted that possibility and necessity can be considered as upper and lower bounds of true probability in the presence of epistemic uncertainty. This implies that the cumulative functions of possibility (CPoF) and necessity (CNeF) will bound the cumulative probability distribution function (CDF).
This is illustrated in Figure 1.10 for the previous example of the variable X within the range [8,12]. The CDF of the uniform probability distribution is effectively bounded by the CPoF and the CNeF. This bounding models the presence of epistemic uncertainty. Indeed, the statement "X lies in the interval [8,12]" is very vague and many probability distributions could have been suitable. We had to introduce the principle of indifference, which may be debatable, in order to obtain the uniform distribution and its associated CDF. In the absence of this assumption, any PDF whose CDF lies between the CPoF and CNeF curves could have been suitable. The CPoF and CNeF bounding is thus similar in spirit to that obtained in the context of probability boxes (p-boxes), but it is obtained within the framework of possibility theory, based on axioms that are quite different from those of probability theory.
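This bounding can be checked numerically. The sketch below (assuming the same triangular possibility distribution with support [8, 12] and mode 10) builds the three cumulative functions on a grid, using CPoF(x) = Π(X ≤ x), the running maximum of Π(x) from the left, and CNeF(x) = 1 − Π(X > x), one minus the running maximum from the right.

```python
import numpy as np

def pi_tri(x, a=8.0, c=10.0, b=12.0):
    """Triangular possibility distribution on [a, b] with mode c (assumed parameters)."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (c - a), (b - x) / (b - c)), 0.0, 1.0)

x = np.linspace(7.0, 13.0, 1201)
pi_vals = pi_tri(x)

# CPoF(x) = Pi(X <= x) = sup of pi over (-inf, x]: running maximum from the left.
cpof = np.maximum.accumulate(pi_vals)
# CNeF(x) = N(X <= x) = 1 - Pi(X > x): one minus the running maximum from the right.
cnef = 1.0 - np.maximum.accumulate(pi_vals[::-1])[::-1]
# CDF of the uniform probability distribution on [8, 12].
cdf = np.clip((x - 8.0) / 4.0, 0.0, 1.0)

# The uniform CDF is bracketed by the cumulative necessity and possibility functions.
assert np.all(cnef <= cdf + 1e-9) and np.all(cdf <= cpof + 1e-9)
print("CNeF <= CDF <= CPoF on the whole grid.")
```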
Figure 1.10. Cumulative distribution function (CDF), cumulative possibility function (CPoF) and cumulative necessity function (CNeF). For a color version of this figure, see www.iste.co.uk/gogu/uncertainties.zip
1.7.3. Rules for combining possibility distributions
Possibility theory addresses the problem of quantifying uncertainties based solely on expert opinion: experts assign likelihood levels to the different values of the quantity of interest via possibility distributions Π(x) (typically triangular or trapezoidal distributions). One of the fundamental questions that then arises is how to deal with divergent expert opinions. For this purpose, different rules for combining possibility distributions have been established.
Let Π1 and Π2 be two possibility distributions. These distributions can be aggregated according to the following rules:
– the conjunctive mode: this is the equivalent of the intersection of events. It corresponds to retaining only the consensus (the common area under the two distributions). This consensus is typically renormalized in order to satisfy the possibilistic axioms;
– the disjunctive mode: this is the equivalent of the union of events. It corresponds to the union of the two distributions;
– the intermediate mode (proposed by Dubois and Prade 1992): as its name indicates, this is an intermediate mode between the two previous ones. Denoting by h the degree of consensus, that is, the upper possibility bound of the intersection of the two distributions, the distribution of the intermediate mode is defined by:
Πint(x) = max( min(Π1(x), Π2(x)) / h , min( max(Π1(x), Π2(x)), 1 − h ) ) [1.19]

where h = sup_x min(Π1(x), Π2(x)) is the degree of consensus between the two distributions.
These three modes of combination usually make it possible to combine possibility distributions from different sources (experts, for example). If there is good agreement between the sources (for example, trapezoidal distributions with overlapping cores), then the conjunctive and disjunctive modes are both well suited. The choice between the two depends on whether one wishes to consider only the consensus or to also integrate the divergent views.
In the absence of good agreement between the sources (for example, trapezoidal distributions with non-overlapping cores), the conjunctive and disjunctive modes are less well suited. The disjunctive mode introduces a multimodal distribution (two peaks), which is generally not desirable for quantifying and propagating uncertainties. The conjunctive mode by itself is too restrictive, as it only considers the consensus, which can be very limited or even empty. The intermediate mode was proposed to resolve these two problems associated with the conjunctive and disjunctive modes. It should be noted that variations on the intermediate mode have also been proposed in order to give more confidence to one distribution (in other words, to one expert) than to another. The reader may refer to Dubois and Prade (1992) for more information.
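A minimal numerical sketch of the three combination modes is given below; the two trapezoidal distributions and their parameters are purely illustrative (their cores do not overlap, so the degree of consensus h is less than 1), and the intermediate mode implements equation [1.19].

```python
import numpy as np

def trapezoidal(x, a, b, c, d):
    """Trapezoidal possibility distribution with support [a, d] and core [b, c]."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

x = np.linspace(0.0, 20.0, 2001)
pi1 = trapezoidal(x, 2.0, 5.0, 8.0, 11.0)    # expert 1 (illustrative parameters)
pi2 = trapezoidal(x, 6.0, 9.0, 12.0, 15.0)   # expert 2 (illustrative parameters)

# Degree of consensus: upper possibility bound of the intersection of the two distributions.
h = np.max(np.minimum(pi1, pi2))

# Conjunctive mode: intersection (pointwise minimum), renormalized by h (assumes h > 0).
pi_conj = np.minimum(pi1, pi2) / h

# Disjunctive mode: union (pointwise maximum).
pi_disj = np.maximum(pi1, pi2)

# Intermediate (adaptive) mode of equation [1.19].
pi_int = np.maximum(pi_conj, np.minimum(pi_disj, 1.0 - h))

print(f"degree of consensus h = {h:.3f}")
print(f"maximum of the intermediate distribution = {pi_int.max():.3f}")
```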
1.8. Evidence theory
The theory of belief functions (or evidence theory) is another theory for modeling uncertainties of an epistemic nature. It was developed by Dempster (1967) and Shafer (1976) and is consequently sometimes known as the Dempster–Shafer theory. This approach is similar in spirit to probability box theory and possibility theory, in that it seeks to obtain a bounding of the CDF.
1.8.1.