Here is an illustration of the Ramsey-type argument. Let $s_1 s_2 \ldots s_n$ be a binary string. A monochromatic arithmetic progression of length $k$ is a subsequence $s_i s_{i+t} s_{i+2t} \ldots s_{i+(k-1)t}$, with $i \ge 1$, $i + (k-1)t \le n$ and some $t > 0$, whose characters are all equal (all 0 or all 1). The string 01100110 contains no monochromatic arithmetic progression of length 3, because neither the positions 1, 4, 5, 8 (the 0s) nor the positions 2, 3, 6, 7 (the 1s) contain an arithmetic progression of length 3; however, both strings 011001100 and 011001101 do: positions 1, 5, 9 (for 0) and positions 3, 6, 9 (for 1), respectively.
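The definition can be checked mechanically. Below is a small Python sketch (our illustration, not part of the original text) that tests a binary string for a monochromatic arithmetic progression of length $k$; it confirms the three examples above (positions are 1-based in the text, 0-based in the code).

```python
def has_mono_ap(s, k):
    """True iff the binary string s contains positions i, i+t, ..., i+(k-1)t
    (t > 0) whose characters are all equal."""
    n = len(s)
    for i in range(n):            # start position (0-based)
        for t in range(1, n):     # common difference
            if i + (k - 1) * t >= n:
                break             # progression would run past the end
            if all(s[i + j * t] == s[i] for j in range(k)):
                return True
    return False

assert not has_mono_ap("01100110", 3)   # no length-3 progression
assert has_mono_ap("011001100", 3)      # 0s at positions 1, 5, 9
assert has_mono_ap("011001101", 3)      # 1s at positions 3, 6, 9
```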
THEOREM 1.5.– Van der Waerden. Every infinite binary sequence contains arbitrarily long monochromatic arithmetic progressions.
This is one of the many results in Ramsey theory (Soifer 2011): it shows that any sequence contains arbitrarily long simple correlations. We note the power of this type of result: the stated property holds for every sequence (in contrast with typical results in probability theory, where a property holds for almost all sequences). Graham and Spencer, well-known experts in this field, subtitled their Scientific American presentation of Ramsey theory (Graham and Spencer 1990) with the following sentence:
Complete disorder is an impossibility. Every large set of numbers, points or objects necessarily contains a highly regular pattern.
Even if “true randomness” does not exist, can our intuition about randomness be cast in more rigorous terms? Randomness plays an essential role in probability theory, the mathematical calculus of random events. Kolmogorov’s axiomatic probability theory assigns probabilities to sets of outcomes and shows how to calculate with such probabilities; it assumes randomness, but does not distinguish between individually random and non-random elements. So, we are led to ask: is it possible to study classes of random sequences exhibiting precise “symptoms of randomness”? So far we have discussed two symptoms of randomness: statistical frequency and incomputability. More general symptoms are unpredictability (of which incomputability is a necessary but not sufficient form), incompressibility and typicality.
Algorithmic information theory (AIT) (Calude 2002; Downey and Hirschfeldt 2010), developed in the 1960s, provides a close analysis of randomness for sequences of numbers, whether produced by an abstract or a concrete (machine) computation, or by a series of physical measurements. In this way, it provides a unique tool for a comparative analysis of different forms of randomness. AIT also shows that there is no infinite sequence passing all tests of randomness: yet another proof that “true randomness” does not exist.
As we can guess from the discussion of the four sequences above, randomness can be refuted but cannot be mathematically proved: we can never be sure that a sequence is “random”; there are only forms and degrees of randomness (Calude 2002; Downey and Hirschfeldt 2010).
Finally, note that similarly to randomness in classical dynamics, which was made intelligible by Poincaré’s negative result, AIT is also rooted in a negative result: Gödel’s incompleteness theorem. As recalled above, a random sequence is a highly incomputable sequence. That is, algorithmic randomness is in a certain sense a refinement of Gödel’s undecidability, as it gives a fine hierarchy of incomputable sets that may be related, as we will hint below, to relevant forms of randomness in natural sciences.
1.6. Classical and quantum randomness revisited
1.6.1. Classical versus algorithmic randomness
As we recalled in section 1.2, classical dynamical systems propose a form of randomness as unpredictability relative to a specific mathematical model and to the properties of measurement. Along these lines, Laskar (1994) gave an evaluation of the unpredictability of the positions (and momenta) of planets in the Solar system, a system that has motivated all classical work since Newton and Laplace (its dynamics are unpredictable over relatively short astronomical times). How can one relate this form of deterministic unpredictability to algorithmic randomness, which is a form of pure mathematical incomputability? The former requires an interface between mathematical determination, by equations or evolution functions, and “physical reality” as accessed by measurement; the latter is a formal, asymptotic notion.
A mathematical relation may be established by considering Birkhoff ergodicity. This is a purely mathematical notion, as it does not (explicitly) involve physical measurement, yet it applies to the nonlinear systems where one may refer to Poincaré’s analysis or its variants, that is, to weaker forms of chaos based on positive Lyapunov exponents (see footnote 2) or on sensitivity to initial conditions and “mixing” (see the paragraph below). Birkhoff’s notion is derived from his famous theorem and roughly says that a trajectory, starting from a given point and relative to a given observable quantity, is random when the time average of the observable along the trajectory coincides asymptotically with its average over the whole space.
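In symbols (a standard formulation of the underlying property, not spelled out in the text): for a measure-preserving map $T$ on a probability space $(X, \mu)$ and an integrable observable $f$, a point $x$ is Birkhoff typical when

$$\lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} f(T^i x) \;=\; \int_X f \, d\mu,$$

that is, when the time average of $f$ along the trajectory of $x$ equals its space average.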
A survey of recent results relating deterministic randomness – in the form of Birkhoff randomness for dynamical systems – to algorithmic randomness is presented in Longo (2012). This work was mostly based on the relation between dynamical randomness and a weak form of Martin-Löf algorithmic randomness due to Schnorr (Galatolo et al. 2010; Gács et al. 2011). A subtle difference may be proved by observing that some “typically” Birkhoff random points are not Martin-Löf random, and some are even “pseudo-random” in the sense that they are actually computable. By restricting, in a physically sound way, the class of dynamical systems examined, a correspondence between points that satisfy the Birkhoff ergodic theorem and Martin-Löf randomness was obtained in Franklin and Towsner (2014). These results require an “effectivization” of the spaces and of the dynamical theory, a non-obvious task, proper to all meaningful dynamical systems, as the language in which we talk about them is “effective”: we formally write equations, solve them when possible, and compute values, all by suitable algorithms. All of these results are asymptotic, as they transfer the issue of the relation between theory and physical measurement to the limit behavior of a trajectory as determined by the theory: a deterministic system sensitive to initial (or border) conditions, a mathematical notion, produces random infinite paths. To obtain these results, it is enough to assume weak forms of deterministic chaos, such as the presence of dense trajectories, i.e. topological transitivity or mixing. The level of their sensitivity is reflected in the level of randomness of the trajectories; in the words of Franklin and Towsner (2014):
Algorithmic randomness gives a precise way of characterizing how sensitive the ergodic theorem is to small changes in the underlying function.
1.6.2. Quantum versus algorithmic randomness
We already summarized some of the connections between quantum and algorithmic unpredictability. The issue is increasingly in the limelight because there is a high demand for “random generators” in computing. Computers generate “random numbers” by algorithms, and computer manufacturers took a long time to realize that randomness produced by software is only pseudo-random: the generated sequences are perfectly computable, even though they show no apparent regularity.
This form of randomness mimics the human perception of randomness well, but its quality is rather low because computability destroys many symptoms of randomness, e.g. unpredictability. One of the reasons is that pseudo-random generators “silently fail over time, introducing biases that corrupt randomness” (Anthes 2011, p. 15).
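To illustrate why software-generated randomness is “perfectly computable”, here is a minimal Python sketch of a linear congruential generator, a classic pseudo-random generator (the constants are the widely used Numerical Recipes parameters; the example is ours, not from the text). The output looks irregular, yet the same seed reproduces exactly the same sequence.

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Yield n pseudo-random integers in [0, m), fully determined by the seed."""
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield x

# Re-running with the same seed reproduces the sequence bit for bit:
run1 = list(lcg(seed=42, n=5))
run2 = list(lcg(seed=42, n=5))
assert run1 == run2   # no unpredictability: the generator is an algorithm
print(run1)
```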
Although, today, no computer or software manufacturer claims that their products can generate truly random numbers, such mathematically unfounded claims have re-appeared for randomness produced by physical experiments. They appear in papers published in prestigious journals, like Deutsch’s famous paper (Deutsch 1985), which describes two quantum random generators