This ant-inspired system has helped Air Liquide reduce its costs dramatically, primarily by making the right gases at the right plants. Exactly how much, company officials are reluctant to say, but one published estimate put the figure at $20 million a year.
“It’s huge,” Harper says. “It’s actually huge.”
Lessons from Checkers
During the 1950s, an electrical engineer at IBM named Arthur Samuel set out to teach a machine to play checkers. The machine was a prototype of the company's first electronic digital computer, known as the Defense Calculator, and it was so big it filled a room. By today's standards, it was a primitive device, but it could execute a hundred thousand instructions a second, and that was all that Samuel needed.
He chose checkers because the game is simple enough for a child to learn, yet complicated enough to challenge an experienced player. What makes checkers fun, after all, is that no two games are likely to be exactly the same. Starting with twelve pieces on each side and thirty-two squares on the game board to choose from (checkers is played only on the dark squares), the number of possible board configurations from start to finish is practically endless; by one modern count, the game has roughly five hundred billion billion reachable positions. You can play over and over and never repeat the same sequence of moves. This gives checkers what complexity experts call perpetual novelty.
For Samuel’s computer, that was a problem. If every move theoretically could lead to billions of possible configurations of the game board, how could it choose the best one to make? Compiling a comprehensive list of results for each move would simply take too long—just as it would for Marco Dorigo in the traveling salesman problem. So Samuel gave the machine a few basic features to look for. One was called pieces ahead, meaning the computer should count how many pieces it had left on the board and compare that with its opponent’s. Was it two pieces ahead? Three pieces ahead? If a particular move resulted in more pieces ahead, it was likely to be favored. Other features specified favorable regions of the board. Penetrating the opponent’s side was considered advantageous, for example. So was dominating the middle. And so on.
Samuel also taught the computer to learn from its mistakes. If a move based on certain features failed to produce a favorable outcome, then the computer gave less weight to those features the next time around. In addition, he showed the computer how to recognize “stage-setting” moves—those that didn’t help out in an obvious way right now, such as a move that sacrificed a piece, but set up a later move with a bigger payoff, such as a triple jump. The machine did this after the fact by increasing the weight of features that favored the stage-setting move. Finally, he told the computer to assume that its opponent knew everything that it knew so the opponent would inflict the greatest damage possible whenever it could. That forced the machine to factor in potentially negative consequences of moves as well as positive ones. If it got surprised by an opponent anyway, it adjusted the weights to avoid that mistake next time.
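In modern terms, Samuel's scheme amounts to scoring each candidate position as a weighted sum of features, then adjusting the weights in light of wins and losses. Here is a minimal sketch of that idea in Python; the feature names, numbers, and update rule are illustrative stand-ins, not Samuel's actual program:

```python
# A toy, Samuel-style evaluator. A board position is scored as a
# weighted sum of hand-picked features, and the weights are nudged
# after each game. Feature names and values here are illustrative.

FEATURES = ("pieces_ahead", "kings_ahead", "center_control")

def evaluate(position, weights):
    """Score a position (a dict of feature values) with the current weights."""
    return sum(weights[f] * position[f] for f in FEATURES)

def choose(candidates, weights):
    """Pick the candidate position with the highest score. Samuel's real
    program also searched ahead, assuming the opponent would always make
    the reply most damaging to the machine."""
    return max(candidates, key=lambda p: evaluate(p, weights))

def learn(weights, chosen_positions, won, rate=0.05):
    """After a game, credit or blame the features that were active in the
    positions the machine chose: strengthen them after a win, weaken
    them after a loss."""
    sign = 1.0 if won else -1.0
    for pos in chosen_positions:
        for f in FEATURES:
            weights[f] += sign * rate * pos[f]

# Two candidate positions reachable from some current board.
weights = {f: 1.0 for f in FEATURES}
a = {"pieces_ahead": 2, "kings_ahead": 0, "center_control": 1}
b = {"pieces_ahead": 1, "kings_ahead": 1, "center_control": 2}
best = choose([a, b], weights)    # b wins: 1 + 1 + 2 = 4 vs. 2 + 0 + 1 = 3
learn(weights, [best], won=True)  # reinforce the features that were active
```

The stage-setting trick fits the same frame: when a sacrifice later pays off, the features that favored the sacrificing move get extra weight after the fact.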
Samuel’s project was so successful that the computer was soon beating him on a regular basis. By the end of the 1960s, it was defeating checkers champions.
“All in all, his was a remarkable achievement,” writes John Holland, another pioneer of artificial intelligence, in his book Emergence: From Chaos to Order. “We are nowhere near exploiting these lessons that Samuel put before us almost a half century ago.”
To Holland, who shared a lab with Samuel at IBM, the true genius of the checkers program was the way it modified the weights of a handful of features to cope with the game’s daunting complexity. Because it was impractical at the time to “solve” the game of checkers mathematically by calculating the perfect sequence of moves, as you might do with a simpler game, such as tic-tac-toe, Samuel just wanted his computer to play the game better each time. “The emergence of good play is the objective of Samuel’s study,” Holland wrote.
What Holland meant by emergence was something quite specific. He was referring to the process by which the computer formed a master strategy as a result of individual moves, or, as he put it more generally, the phenomenon of much coming from little. Although everything the program did was “fully reducible to the rules (instructions) that define it,” he says, the behaviors generated by the game were “not easily anticipated from an inspection of those rules.”
We saw the same thing, of course, in Colony 550. Even though individual ants were following simple rules about foraging, their pattern of behavior as a group added up to a surprisingly flexible strategy for the colony as a whole. One colony might tend to be more aggressive in its style of foraging, sending out lots of foragers, while another might be more conservative, keeping them safe inside. Each colony didn’t impose its strategy on the foragers; the strategy emerged from their interactions with one another.
The same could be said about many complex systems, from beehives and flocks of birds to stock markets and the Internet. Whenever you have a multitude of individuals interacting with one another, there often comes a moment when disorder gives way to order and something new emerges: a pattern, a decision, a structure, or a change in direction. This whole chapter, in fact, has been about the kinds of strategies that emerge from self-organized behavior. And what these strategies all have in common is that they represent a way to cope with the unpredictable.
Consider life in an ant colony, where survival means competing not only against other colonies but also against an ever-changing environment. Will there be enough food today? Where will it be found? How will the weather affect the nest? The colony meets such challenges through self-organized behavior, and what emerges is a pattern of activity that allocates the colony’s resources to meet its immediate needs.
Air Liquide, for its part, had its own list of unknowns. Which customers would need deliveries today? What types of gas would they need? Which production facilities could make those gases at the least cost? What would the price of electricity be at those facilities? How could the company deliver those gases most economically? By emulating an ant colony’s distributed problem-solving approach, the company’s optimizer tool provided a day-to-day plan to cope with an endless string of variables.
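Underneath such a tool is the ant-foraging recipe that Dorigo turned into an algorithm: send out many simple agents, let each build a solution piece by piece, and let good solutions leave a stronger "scent" for the next round. The Python below is a deliberately simplified, generic sketch of that recipe applied to a small routing problem; the parameters and distances are invented, and it makes no claim to match Air Liquide's proprietary optimizer:

```python
import random

# A generic, stripped-down ant colony optimizer for a small
# traveling-salesman-style routing problem. It sketches the idea
# Dorigo pioneered; it is not Air Liquide's production system.

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_colony(dist, n_ants=20, n_iters=100, evap=0.5, alpha=1.0, beta=2.0):
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]  # virtual pheromone on each leg
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # Favor legs that are short and heavily marked.
                cities = list(unvisited)
                weights = [(pher[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
                           for j in cities]
                nxt = random.choices(cities, weights=weights)[0]
                tour.append(nxt)
                unvisited.discard(nxt)
            tours.append(tour)
        # Old trails evaporate; each ant reinforces its own tour,
        # shorter tours more strongly.
        for row in pher:
            for j in range(n):
                row[j] *= (1.0 - evap)
        for tour in tours:
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(len(tour)):
                i, j = tour[k], tour[(k + 1) % len(tour)]
                pher[i][j] += 1.0 / length
                pher[j][i] += 1.0 / length
    return best_tour, best_len

# Four stops and the distances between them (made-up numbers).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(ant_colony(dist))  # best tour found and its length (optimum here is 18)
```

The crucial design choice is the pairing of evaporation with reinforcement: evaporation forgets stale routes, which keeps the colony exploring, while reinforcement concentrates effort on the routes that keep proving short. No central planner ever sees the whole problem.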
Like many businesses today, Air Liquide was looking for a way to cope with the perpetual novelty of its environment. The company didn't expect a guarantee that it would win every competition it got into, just an opportunity to stay in the game until it could adapt to the latest changes. What it needed, in other words, was a strategy to gain a degree of control over the uncontrollable—which was what Samuel's checkers player also seemed to promise.
That was quite different, in an important way, from what Deborah Gordon’s ant colonies were trying to do. Instead of attempting to outsmart the desert environment, the ants, in a sense, were matching its complexity with their own. If Colony 550 were to play a game of checkers, each piece on the board would move by itself, acting on local information, with nobody waiting for instructions. The game would be a swirl of motion as pieces moved forward, jumped over one another, became kings, or got taken as prisoners in patterns of interactions that might be difficult to perceive at first glance. But if checkers were as important to ants as foraging, the colony, without doubt, would be a flexible and resilient competitor.
This tension between minimizing uncertainty, on the one hand, and experimenting to keep up with change, on the other, is something we’ll see time and again throughout this book. And what’s surprising about the behavior evolved by bees, birds, and fish, among other species, is the adroit way that groups of such animals manage to have it both ways—to manage complexity and to partake of it at the same time.
“If I was in charge of designing the software for a company like Air Liquide, I’d probably be stressed about doing a really great job,” Gordon says. “But the ants aren’t doing that.” Their system’s too loose and undisciplined. Information coming in is too spotty, and their responses are too unpredictable. “The amazing thing to me is how, every way you look