Some of our CCMDs have contracted out some of their wargaming requirements to make up for the lack of uniformed wargamers. This can present a challenge. Some contracting organizations have their own methods of conducting a wargame; if a command’s wargaming requirements do not quite match the contractor’s method, the contractor may wargame only the portion of the requirement that its methods can accommodate. Most wargaming requirements are unique, and a wargaming best practice is to design the wargame around the organization’s requirement rather than trimming the organization’s requirements to fit a predetermined wargaming method.
Wargames have multiple points of failure. Wargames fail when the wargaming team and the sponsor do not come to an agreement on the wargame’s objective and key issues. This often occurs when the wargaming sponsor is a senior official whose subordinates are reluctant to press the official to clarify and refine the initial wargaming tasking. The best‐designed wargame can be a failure if the wargaming team cannot secure the appropriate players. Wargames can also fail if not executed properly. Keeping the players immersed in the wargaming environment, ensuring the game stays on schedule, managing the game’s adjudication and data collection, and solving the inevitable glitches all require an experienced and adaptive wargaming team. Analytic wargames depend on accurate and detailed data collection, so a well‐designed wargame with the best players can still be a failure if the data collection effort is flawed. Finally, a wargame may be well designed and flawlessly executed, with clear and concise data collected, and yet the game’s analysts may still fail to conduct useful analysis.
In conclusion, far more wargames have been conducted in DoD since 2015 than before, thanks to the reinvigoration spawned by the Office of the Secretary of Defense’s stewardship. However, more does not necessarily mean better or more useful. Wargames designed by teams with no wargaming experience or education will most likely encounter two or more of the points of failure enumerated above. If wargaming is to again become a part of the US DoD culture, wargaming education and wargaming experience must be directed and driven by DoD leadership.
Simulations Today
Introduction
Closed‐loop simulations provide the means to assess the combat capabilities of a collection of entities (weapon systems and formations) given that the decision that those forces will engage in battle has already been made. These simulations are not wargames, as there are no dynamic human decisions that impact the flow of events of the operations simulated in the computer model. While it is true that there are algorithms in closed‐loop simulations that represent some decisions that humans make in combat, they are rudimentary, IF‐THEN decisions.
Simulation Types
For simplicity, we will use ground combat simulations as the basis for our discussion. Ground combat simulations are used by both the US Army and the US Marine Corps and are arguably the most complex of the combat simulations used throughout DoD, both in the sheer number of combat platforms and in the complexity of the operating environment.
Aggregate Simulations
These simulations typically array forces linearly in a series of sectors (often referred to as “pistons”) where, in each sector, algorithms assess whether an attack will occur and, if so, who will be the attacker. In each sector, the simulation calculates a combat power score (also known as a firepower score) for each force to assess the combat power ratio that exists between the forces.

The first simulations that used combat power scores calculated the combat power comparison assuming that each side had perfect information on all the forces, friendly and adversary, in that sector. In other words, not only did each side have perfect intelligence on its adversary’s force composition, but each side also had perfect communications because it knew the status of each and every friendly unit in that sector. Also note that the combat power assessment assumed that the opposing commanders each had an identical assessment of the combat power value of each of the systems of all the forces. That is, there was no modeling of a commander’s misperception that the adversary’s force is more or less formidable than the specified combat power values. Nor could surprise be modeled with this construct: each side was omniscient with respect to its adversary, so there was no way for a commander to maneuver an unseen force to a position of advantage and attack the enemy from an unexpected direction.

Quite simply, if a force had a 3:1 or better advantage in combat power over its adversary, then that force would attack. Attrition of each side’s combat power was then assessed based on the calculated combat power ratio, and each side’s combat power was decremented accordingly. Movement of both forces was then assessed based on the amount of combat power lost and the type of terrain in the sector. Movement may have been mitigated so that a unit’s movement in one sector did not expose the flank of a friendly unit in an adjacent sector.
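The mechanics described above can be sketched in a few lines of code. The following is a minimal, illustrative model of one time step in a single sector; the unit names, firepower scores, attack threshold, and proportional attrition rule are all assumptions chosen for clarity, not values or logic from any fielded simulation.

```python
def combat_power(units):
    """Sum the (assumed) firepower scores of all units in the sector."""
    return sum(u["score"] for u in units)

def step_sector(blue, red, attack_threshold=3.0, attrition_rate=0.05):
    """Assess attack, then attrit both sides based on the power ratio.

    As in the earliest firepower-score models, both sides are treated
    as omniscient: each sees the other's full composition and status.
    """
    blue_cp, red_cp = combat_power(blue), combat_power(red)
    ratio = blue_cp / red_cp if red_cp else float("inf")

    if ratio >= attack_threshold:
        attacker = "blue"
    elif ratio > 0 and (1 / ratio) >= attack_threshold:
        attacker = "red"
    else:
        attacker = None  # neither side has a 3:1 advantage; no attack

    # Illustrative attrition rule: each side's fractional loss scales
    # with the opposing side's share of total combat power.
    total = blue_cp + red_cp
    blue_loss = attrition_rate * (red_cp / total)
    red_loss = attrition_rate * (blue_cp / total)
    for u in blue:
        u["score"] *= (1 - blue_loss)
    for u in red:
        u["score"] *= (1 - red_loss)
    return attacker, combat_power(blue), combat_power(red)

blue = [{"name": "tank bn", "score": 30}, {"name": "inf bn", "score": 15}]
red = [{"name": "inf bn", "score": 12}]
attacker, blue_cp, red_cp = step_sector(blue, red)
# blue has 45 vs. 12, a 3.75:1 ratio, so blue attacks this step
```

A movement assessment, keyed to losses and terrain, would follow the attrition step in a fuller model; it is omitted here to keep the sketch focused on the attack decision and attrition logic.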
As simulations became more sophisticated, combat power scores were modified. The Marine Corps’ Tactical Warfare Simulation, Evaluation, and Analysis System (TWSEAS) and MAGTF (Marine Air Ground Task Force) Tactical Warfare Simulation (MTWS) took into account “perceived combat power,” which limited each side’s calculation to only its current knowledge of the opposing force. Some simulations calculated dynamic combat power values, updating values based on the remaining forces on the battlefield after each time step, so, for example, an air defense weapon might have no combat power once all opposing aircraft had been destroyed.
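Both refinements can be illustrated with small functions. This sketch is in the spirit of the perceived and dynamic combat power ideas described above; the unit fields, type names, and zeroing rule are illustrative assumptions, not the actual TWSEAS or MTWS algorithms.

```python
def perceived_power(opposing_units):
    """Count only the opposing units this side has actually detected,
    rather than assuming perfect intelligence on the whole force."""
    return sum(u["score"] for u in opposing_units if u["detected"])

def dynamic_score(unit, remaining_enemy):
    """Re-value a unit against the enemy force still on the battlefield.

    Example rule from the text: an air defense weapon contributes no
    combat power once all opposing aircraft have been destroyed.
    """
    if unit["type"] == "air_defense":
        if not any(e["type"] == "aircraft" for e in remaining_enemy):
            return 0.0
    return unit["score"]

red = [
    {"type": "tank", "score": 10, "detected": True},
    {"type": "tank", "score": 10, "detected": False},  # unseen by blue
]
sam = {"type": "air_defense", "score": 8}
# Blue perceives only 10 of red's 20 points of combat power.
# The SAM is worth 8 while red aircraft remain, 0 once they are gone.
```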
Entity Simulations
In an entity simulation where individual combatants (systems or personnel) are engaging, each entity is assigned to travel from a starting position, via a series of waypoints, to a destination that it will reach if it survives. If the entity detects an adversary’s entity, and current rules of engagement (ROE) permit, it will fire at that entity. An algorithm will then assess the probability that the entity hit the adversary’s entity (P(hit)) and, if it did hit, the amount of damage the hit inflicted (P(kill | hit)), where “kills” are typically categorized as “catastrophic,” “mobility,” “firepower,” or “mobility and firepower.”
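The shot-resolution chain above lends itself to a short Monte Carlo sketch. The probability values and the split among kill categories below are illustrative assumptions for a single shooter–target pairing, not values from any accredited model.

```python
import random

P_HIT = 0.6               # assumed P(hit) for this engagement
P_KILL_GIVEN_HIT = 0.5    # assumed P(kill | hit)
# Assumed conditional split among kill categories, given a kill:
KILL_CATEGORIES = ["catastrophic", "mobility", "firepower",
                   "mobility and firepower"]
KILL_WEIGHTS = [0.25, 0.30, 0.30, 0.15]

def engage(rng):
    """Resolve one shot: a miss, a hit with no kill, or a categorized kill."""
    if rng.random() >= P_HIT:
        return "miss"
    if rng.random() >= P_KILL_GIVEN_HIT:
        return "hit, no kill"
    return rng.choices(KILL_CATEGORIES, weights=KILL_WEIGHTS)[0]

rng = random.Random(7)  # fixed seed so the run is repeatable
outcomes = [engage(rng) for _ in range(10_000)]
kill_rate = sum(o in KILL_CATEGORIES for o in outcomes) / len(outcomes)
# kill_rate converges toward P(hit) * P(kill | hit) = 0.30
```

Running many replications like this is how an entity simulation builds up distributions of outcomes from the underlying single-shot probabilities.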
Simulations and Prediction
The late, great Air Force analyst Clayton Thomas described simulation‐based analysis as an IF‐THEN statement. Both the simulation and the data comprise the IF side. The simulation and the data are then used to produce the simulation’s output, the THEN side. If the simulation represented reality and if the data were precise, then the result would be an accurate prediction.30 In general, neither is true. In the following paragraphs, we will examine how well our simulations represent reality and how precise our data are.
Standard Assumptions
There are standard assumptions for most closed‐loop computer simulations. Human factors, such as leadership, morale, combat fatigue, and training status are typically not explicitly represented in these simulations. If not represented explicitly, then the implicit assumptions are that both sides have exactly the same characteristics