Conversely, we can reconstruct the probability that the signal lies in the range from $f_1$ to $f_2$ by integrating over the probability density function:
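$$ \operatorname{Prob}\left(f_1 \le f \le f_2\right) = \int_{f_1}^{f_2} p(f)\,\mathrm{d}f $$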
Figure 1.20 depicts examples of the functions defined above. These distinct ways of averaging show that the different averaging methods must be described in more detail. Until now, averaging was performed over time intervals. This must not be confused with averaging over an ensemble: ensemble averaging means averaging over an ensemble of experiments, systems, or even random signals. It will be denoted by $\langle \cdot \rangle_E$. Ensemble averaging can coincide with time averaging, but this holds only for specific time signals or random processes.
Figure 1.20 Probability and probability density function of a continuous random process. Source: Alexander Peiffer.
In Figure 1.21 the differences between time and ensemble averaging are shown. On the left-hand side (ensemble) we perform a large set of experiments and take the value at the same time $t_1$; on the right-hand side (time) we perform one experiment but investigate subsequent time intervals.
Figure 1.21 Ensemble and time averaging of signals from random processes. Source: Alexander Peiffer.
Consider now the mean value of an ensemble of $N$ experiments. The mean value is defined by
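$$ \langle f(t_1) \rangle_E = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} f_n(t_1) $$
where $f_n(t_1)$ is the value recorded in the $n$-th experiment at time $t_1$.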
If we assume $N_k$ discrete results $f_k$ that occur with relative frequency $r_k = n_k/N$, we can also write
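$$ \langle f \rangle_E = \sum_{k=1}^{N_k} r_k\, f_k = \sum_{k=1}^{N_k} \frac{n_k}{N}\, f_k $$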
In continuous form this can be expressed as $r_k = p(f_k)\,\Delta f_k$.
For $f_k \to f$ we get the definition of the mean value based on ensemble averaging, expressed as the integral over the probability density:
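$$ \langle f \rangle_E = \int_{-\infty}^{\infty} f\, p(f)\,\mathrm{d}f $$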
Similar to the expression for the rms value of a time signal, we additionally define the expected mean square value
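$$ \langle f^2 \rangle_E = \int_{-\infty}^{\infty} f^2\, p(f)\,\mathrm{d}f $$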
and the variance
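$$ \sigma^2 = \big\langle \left(f - \langle f \rangle_E\right)^2 \big\rangle_E = \int_{-\infty}^{\infty} \left(f - \langle f \rangle_E\right)^2 p(f)\,\mathrm{d}f $$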
We come back to the difference between ensemble and time averaging as shown in Figure 1.21. A process is called ergodic when ensemble averaging can be replaced by time averaging, thus
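$$ \langle f \rangle_E = \langle f \rangle_T $$
where $\langle \cdot \rangle_T$ denotes the time average used so far (the subscript $T$ mirrors the ensemble notation $\langle \cdot \rangle_E$ and is assumed here).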
We are usually not able to perform an experiment for an ensemble of similar but distinct experimental set-ups, but we can easily record the signal over a long time and take several separate time windows out of it.
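As a minimal numerical sketch of this idea (not part of the original text; all names and parameter values are chosen for illustration), the following Python snippet compares the ensemble average at a fixed time $t_1$ with the time average of a single realization, for a process that is ergodic by construction:

```python
import numpy as np

rng = np.random.default_rng(42)

n_realizations = 2000   # size of the ensemble (number of "experiments")
n_samples = 2000        # time samples per realization

# Each row is one realization f_n(t) of a stationary Gaussian process
# with true mean 1.0 and standard deviation 0.5 (illustrative values).
f = rng.normal(loc=1.0, scale=0.5, size=(n_realizations, n_samples))

# Ensemble average <f(t1)>_E: average over all realizations at fixed t1.
t1 = 100
ensemble_avg = f[:, t1].mean()

# Time average <f>_T: average over time within a single realization.
time_avg = f[0, :].mean()

print(f"ensemble average at t1: {ensemble_avg:.4f}")
print(f"time average of one realization: {time_avg:.4f}")
# For an ergodic process both estimates converge to the same value (here 1.0).
```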
1.5.2 Correlation Coefficient
Even more important than the key figures of a single random process is the relationship between two different processes, the so-called correlation. It quantifies how strongly one random process is linearly linked to another. Imagine two random processes $f(t)$ and $g(t)$. Without loss of generality we assume their mean values to be zero:
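$$ \langle f \rangle_E = \langle g \rangle_E = 0 $$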
Let us consider that the process $g$ is linked to $f$ by a linear factor $K$:
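$$ g(t) \approx K f(t) $$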
The error or deviation of this assumption reads
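$$ e(t) = g(t) - K f(t) $$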
or in terms of the mean square value
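$$ \langle e^2 \rangle_E = \big\langle \left(g - K f\right)^2 \big\rangle_E $$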
This can be rewritten as
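$$ \langle e^2 \rangle_E = \langle g^2 \rangle_E - 2K \langle f g \rangle_E + K^2 \langle f^2 \rangle_E $$
Minimizing this quadratic expression in $K$ gives the optimal linear factor $K = \langle f g \rangle_E / \langle f^2 \rangle_E$. As a minimal numerical sketch (not from the original text; the signal model and values are illustrative), the following Python snippet estimates this optimal $K$ and the normalized correlation coefficient for two sampled zero-mean signals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Zero-mean random process f(t) and a process g(t) that is linearly
# linked to f(t) (true factor K = 2.0) plus independent noise.
f = rng.normal(size=n)
g = 2.0 * f + rng.normal(size=n)

# K minimizing <(g - K f)^2>_E, estimated from the samples:
K = np.mean(f * g) / np.mean(f**2)

# Normalized correlation coefficient rho = <f g>_E / (sigma_f * sigma_g),
# bounded by |rho| <= 1.
rho = np.mean(f * g) / (np.std(f) * np.std(g))

print(f"estimated optimal K: {K:.3f}  (true value 2.0)")
print(f"correlation coefficient: {rho:.3f}")
```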