The total entropy of an isolated thermodynamic system tends to increase over time, approaching a maximum value; equivalently, the total entropy of an isolated system never decreases.
Third Law of Thermodynamics: On the Absolute Zero of Temperature
As a system asymptotically approaches absolute zero of temperature, all processes virtually cease, and the entropy of the system asymptotically approaches a minimum value.
We metaphorically view our Cultural Algorithm as composed of two systems: a Population of individual agents moving over a performance landscape, and a collection of knowledge sources in the Belief Space. Each knowledge source can be viewed statistically as a bounding box, or generator, of control. The second law states that over time an isolated system will always tend to increase its entropy. Thus, over time the population should spread out randomly over the surface, and the bounding box for each knowledge source should expand to the edges of the surface until it encompasses the entire surface. Yet this does not happen here. This can be understood in terms of a contradiction posed by Maxwell relative to the second law, the contradiction that underlies Maxwell's Demon.
In the 1860s, the physicist James Clerk Maxwell devised a thought experiment to refute the second law [5]. The basis for this refutation was that the human mind was different from purely physical systems, so the universe need not run down as predicted by the second law. In his thought experiment there were two glass boxes, each containing a collection of particles moving at different rates. A trap door connecting the boxes was controlled by a demon; see Figure 1.2. This demon was able to selectively open the door to "fast" particles from A and allow them to pass into B, thereby increasing the entropy of B and reducing that of A.
Figure 1.2 An example of Maxwell's demon in action. The demon selectively lets fast particles pass from one system to the other, reducing the entropy in one and increasing it in the other.
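The demon's selective door can be illustrated with a small simulation. The speed distribution, particle counts, and threshold below are arbitrary illustrative choices, not taken from [5]:

```python
import random

random.seed(0)

# Toy model of the two boxes: each particle is represented by its speed,
# drawn uniformly at random.  The numbers here are illustrative only.
box_a = [random.uniform(0.0, 2.0) for _ in range(1000)]
box_b = [random.uniform(0.0, 2.0) for _ in range(1000)]
THRESHOLD = 1.0  # the demon's notion of a "fast" particle

def mean(xs):
    return sum(xs) / len(xs)

before = (mean(box_a), mean(box_b))

# The demon opens the door only for fast particles moving from A to B.
fast = [v for v in box_a if v > THRESHOLD]
box_a = [v for v in box_a if v <= THRESHOLD]
box_b = box_b + fast

after = (mean(box_a), mean(box_b))
print(before, after)  # B's mean speed rises while A's falls
```

After the sorting step, box B is hotter (higher mean particle speed) and box A colder: a sorted state that, by the second law, should not arise spontaneously in an isolated system.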
This stimulated much debate among physicists. In 1929, Leo Szilard published a refutation, arguing that the Demon had to process information in order to make its decisions and that this processing consumed additional energy. He further postulated that the energy required to process the information always exceeded the energy stored through the Demon's sorting.
As a result, when Shannon developed his model of information theory, he required all information to be transmitted along a physical channel. This channel represented the transmission "cost" specified by Szilard. He was then able to equate a given amount of physical entropy with a corresponding amount of information, termed negentropy [6], since it reduces entropy just as the Demon does in Figure 1.2.
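Shannon's measure can be stated concretely. The sketch below computes the entropy of a discrete probability distribution, H = -Σ p·log₂(p); the two example distributions are illustrative choices, showing how sorting (skewing a distribution) lowers entropy:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution carries maximal uncertainty (entropy);
# sorting, as the Demon does, skews the distribution and lowers it.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.97, 0.01, 0.01, 0.01]
print(shannon_entropy(uniform))  # 2.0 bits
print(shannon_entropy(skewed))   # well under 2.0 bits
```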
We can use the metaphor of Maxwell's Demon as a way to interpret the basic problem‐solving process carried out by Cultural Algorithms when successful. We will call this process the Cultural Engine. Recall that the communication protocol for Cultural Algorithms consists of three phases: vote, inherit, and promote (VIP). The voting process is carried out by the acceptance function, the inherit process by the update function, and the promote process by the influence function. These functions provide the interface between the Population component and the Belief component. Together, like Maxwell's Demon, they first extract high‐entropy individuals from the Population Space to update the Belief Space, and then extract high‐entropy Knowledge Sources from the Belief Space to modify the Population Space, like a two‐stroke thermodynamic engine.
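As a rough sketch, the VIP cycle can be expressed in code. Only the accept/update/influence structure follows the protocol described above; the toy fitness function, selection size, update rule, and step size are placeholder assumptions for illustration:

```python
import random

random.seed(1)

def fitness(x):
    return -(x - 3.0) ** 2  # toy landscape with its optimum at x = 3

def accept(population, top_k=5):
    """Vote: select the best individuals to inform the Belief Space."""
    return sorted(population, key=fitness, reverse=True)[:top_k]

def update(belief, accepted):
    """Inherit: fold accepted individuals into the Belief Space
    (here, a single piece of situational knowledge: the best point seen)."""
    candidates = [belief.get("best", accepted[0])] + accepted
    belief["best"] = max(candidates, key=fitness)
    return belief

def influence(belief, population, step=0.5):
    """Promote: nudge each individual toward the believed best region."""
    best = belief["best"]
    return [x + step * (best - x) + random.gauss(0, 0.1) for x in population]

population = [random.uniform(-10, 10) for _ in range(20)]
belief = {}
for _ in range(30):                      # one VIP cycle per generation
    belief = update(belief, accept(population))
    population = influence(belief, population)

print(round(belief["best"], 2))  # converges near the optimum at x = 3
```

Each pass of the loop is one stroke pair of the engine: the population is compressed into a few high‐quality individuals on the way into the Belief Space, and the belief is expanded back out as guidance on the way into the Population Space.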
Thus, the evolutionary learning process can be viewed as directed by an engine powered by the knowledge learned through the problem‐solving process. While the engine is expressed here in terms of the Cultural Algorithm framework, it is postulated that any evolutionary model can be viewed as powered by a similar type of engine.
Outline of the Book: Cultural Learning in Dynamic Environments
Earlier it was mentioned that one thing that differentiates Cultural Algorithms from other frameworks is that they are naturally able to cope with changes in their environment. In engineering, dynamic environments are typically modeled in three basic ways. One approach is to take a general problem, such as bin packing, and make changes to that application problem over time. A second approach is to generate changes in a multidimensional fitness landscape over which the search problem is defined [7]. A third way is to use large‐scale problems whose solution takes place in multiple phases, such as the design of a cloud‐based workflow system. In the design of such a complex system, the knowledge used in dealing with one phase may differ from the knowledge needed in subsequent phases.
The focus of this book is on the design of Cultural Algorithm solutions for the development of complex social and engineering systems for use in dynamic environments. Chapter 2 introduces the Cultural Algorithm Toolkit (CAT). That system contains a Cultural Algorithm connected to a dynamic problem landscape generator. The generator, the ConesWorld, is an extension of the work of De Jong and Morrison [7]. It was selected because its dynamics were described in terms of entropy, which makes it a good fit with the Cultural Engine model discussed above. It is written in Java and available on the website associated with the book. Examples of its application to problems in the simulated landscape, along with some benchmark engineering design problems, are presented.
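To give a flavor of such a generator, the sketch below builds a DF1‐style "field of cones" landscape in the spirit of De Jong and Morrison's work: each cone has a center, a height, and a slope, and the fitness at a point is the height of the tallest cone above it. The parameter ranges and the shift rule used to make the environment dynamic are illustrative assumptions, not the actual ConesWorld implementation:

```python
import math
import random

random.seed(2)

def make_cones(n, dim=2):
    """Generate n random cones; the ranges here are illustrative only."""
    return [{"center": [random.uniform(-1, 1) for _ in range(dim)],
             "height": random.uniform(1, 10),
             "slope": random.uniform(8, 20)} for _ in range(n)]

def cone_fitness(point, cones):
    """Fitness = max over cones of (height - slope * distance to center)."""
    return max(c["height"] - c["slope"] * math.dist(point, c["center"])
               for c in cones)

def shift_cones(cones, severity=0.05):
    """A dynamic environment: jitter each cone's center between epochs."""
    for c in cones:
        c["center"] = [x + random.gauss(0, severity) for x in c["center"]]
    return cones

cones = make_cones(5)
f0 = cone_fitness([0.0, 0.0], cones)
cones = shift_cones(cones)
f1 = cone_fitness([0.0, 0.0], cones)
print(f0, f1)  # fitness at the origin before and after the shift
```

Varying the number of cones, their slopes, and the shift severity changes how rugged and how rapidly changing the landscape is, which is how such generators span the range from static to highly dynamic problems.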
A second feature of Cultural Algorithms mentioned earlier is their ability to provide a social context for an individual and to facilitate the movement of knowledge through a network. Chapters 3 and 4 investigate the use of several knowledge distribution mechanisms using that platform. In Chapter 3, Kinniard‐Heether et al. show when an auction mechanism can be a useful tool for expediting the solution of optimization problems generated in the ConesWorld at different entropy levels. In Chapter 4, Al‐Tirawi et al. investigate the extent to which giving the knowledge sources specific information about an individual's location in a social network can improve performance on dynamic problems with high entropy. The approach is called common value auctions: the common value relates to the shared knowledge that knowledge sources have about the locations of individuals in a network. The common value approach is then applied to ConesWorld landscape sequences ranging from low‐entropy to highly chaotic systems.
Auctions can be viewed as competitive games, but the strategies available to bidders are by definition limited. In Chapter 5, Faisal Waris et al. investigate the use of competitive and cooperative games in CA problem solving. First, examples are presented within the ConesWorld environment and compared with other knowledge distribution mechanisms. The latter half of the chapter investigates the use of a CA in the design of a real‐world application for autonomous vehicles. The real‐world system to be designed is an Artificial Intelligence pipeline for a pattern recognition component. Such pipelines consist of a series of components, each tuned initially by its manufacturer. However, when placed within a pipeline, the parameters for all of the participating stages need to be tuned together to optimize pipeline execution. Such pipelines can consist of 50 or more stages. In the chapter, it is shown that a CA using a competitive game framework provides a statistically more efficient solution than alternative approaches. In addition, maps of how knowledge sources are distributed within a successful network are provided. Unsuccessful networks are more conservative, with more homogeneous regions, and possess