Predicting the outcome of wars with AI: between innovation and doubt

It’s called the Major Combat Operations Statistical Model (MCOSM) and was developed by the Naval War College and the Naval Postgraduate School in Monterey, California, two of the US Navy’s training and research institutes. When data about Russia’s push to quickly conquer Kyiv was fed into the model last February, the answer was fairly clear: on a scale of one to seven, it assigned a two for offensive power and a five for defense, an assessment later confirmed on the ground by the turn of events.

Attempts to predict the outcome of battles and wars with the help of science have existed since at least the 1920s, when the mathematician Lewis Fry Richardson applied statistics to the study of the causes of conflict. Today the scene is dominated by AI-based software and algorithms. The idea, just like Richardson’s a century ago, is to minimize the cost of armed conflict in human lives and resources, as well as to improve decisions about when and how to intervene.

Forecasting efforts

A recent in-depth report in The Economist describes the efforts of the US military. Today there are various simulators covering different situations and different branches of the armed forces, but the finer point is understanding what data to feed the algorithms in order to get accurate results. MCOSM, for example, was fed data from 96 conflicts between the end of the First World War and the present day. The results have been encouraging, as in the case of the siege of Kyiv, but we are still a long way from making real-time predictions while troops are actually engaged in a conflict. MCOSM, for example, is a so-called “deterministic” system, meaning that if the simulation is run multiple times starting from the same data, the same result is always obtained.

But there are also probabilistic prediction systems that take into account, for example, the probability that a rifle shot will hit its target or not, and whether it wounds or kills. The possible scenarios therefore become far more numerous and varied. If we add the constantly changing situation on the ground – weather data, the actual success of an operation, the arrival of supplies, and so on – it becomes clear why building a reliable simulator is so complex. The United States keeps trying nonetheless, as evidenced by the various development efforts by the US military and its suppliers reported by The Economist.
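To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. It is not based on MCOSM’s actual internals (its data and scoring rules are not public); the functions, parameters, and probabilities below are invented for illustration. A deterministic scoring function always returns the same answer for the same inputs, while a probabilistic engagement model samples outcomes such as miss, wound, or kill and therefore produces a spread of results over repeated runs.

```python
import random

def deterministic_score(attacker_strength, defender_strength):
    """Toy deterministic model: identical inputs always yield identical 1-7 scores."""
    ratio = attacker_strength / (attacker_strength + defender_strength)
    return round(1 + 6 * ratio), round(1 + 6 * (1 - ratio))

def stochastic_engagement(shots, p_hit=0.3, p_kill_given_hit=0.4, seed=None):
    """Toy probabilistic model: each shot may miss, wound, or kill."""
    rng = random.Random(seed)
    kills = wounds = 0
    for _ in range(shots):
        if rng.random() < p_hit:
            if rng.random() < p_kill_given_hit:
                kills += 1
            else:
                wounds += 1
    return kills, wounds

# Deterministic: repeated runs on the same data give the same result.
print(deterministic_score(40, 60))  # same pair of scores every time
print(deterministic_score(40, 60))

# Probabilistic: repeated runs give a distribution of outcomes (a Monte Carlo sample).
outcomes = [stochastic_engagement(shots=100) for _ in range(5)]
print(outcomes)  # five different (kills, wounds) tallies
```

Running the probabilistic model many times and aggregating the results is what turns individual random engagements into an estimate of likely outcomes, which is also why such systems are much harder to validate than deterministic ones.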

Prevention is better than intervention

There is another approach to the subject, made famous in the artificial intelligence world in recent years by an article published in Nature in 2018. The authors are three researchers at UK institutions: political scientist Kristian Gleditsch (University of Essex), Weisi Guo (University of Warwick) and Alan Wilson (Alan Turing Institute). Their idea, in the wake of what Richardson suggested, is to develop algorithms capable not of predicting the outcome of an armed conflict, but of indicating when one is about to break out. That way, the three argue, the cost of war could truly be minimized, because war could be avoided altogether. To this end, they propose the creation of a supranational consortium to develop such models, for the good of humanity.

Awareness of the costs of wars, argued Gleditsch, Guo, and Wilson in 2018, is rather limited, especially among Western populations for whom armed conflict feels remote. In part, that seems to have changed, at least momentarily, because of a conflict as serious as the Russo-Ukrainian war on European soil. But if we look at the data collected by the Uppsala Conflict Data Program, we see that the cost of conflict from 1946 to 2021 has increased rather than decreased. Following the suggestion to build software that tells you how to avoid wars therefore seems reasonable.

Doubts

Quite apart from the fact that the scenario envisioned by Gleditsch, Guo and Wilson is far from being realized, there are also doubts about its potential effectiveness. So argues an in-depth analysis published on Spectrum, the site of the Institute of Electrical and Electronics Engineers (IEEE), an international association of scientists dedicated to advancing the technological sciences and one of the most important technological think tanks in the world. The problems it highlights are two: what data to feed the models with, and what degree of decision-making autonomy the software should have. Let’s start with the latter, with a story from the Cold War years.

In the Bulletin of the Atomic Scientists, analyst Zachary Kallenborn sums up the concern about unsupervised AI controlling nuclear weapons: “If AI controlled nuclear weapons, we could all be dead.” The anecdote that best illustrates the situation is the so-called “Petrov incident” of 1983. When, on the night of September 26, warning systems indicated that the United States had launched a series of nuclear missiles against the Soviet Union, Soviet Lieutenant Colonel Stanislav Petrov decided not to alert the Kremlin, judging it a false positive. Had an AI been monitoring the situation automatically, a counterattack would have been launched according to protocol. Instead, the “human factor” prevented a global nuclear war.

Besides limiting the decision-making autonomy of AI, the second concern is the risk of introducing bias through the data these systems use to learn how to predict the outcome of a conflict. According to Natasha Bajema’s analysis in Spectrum, this is truer than ever in real-world situations under pressure: “Creating a truly unbiased dataset designed to predict specific outcomes remains a significant challenge, especially for life-and-death situations and in areas where data availability is scarce, such as nuclear conflicts.” If the data fed into the model contain even inadvertent biases, can we trust the simulation’s result? “Using AI-based tools to make better decisions is one thing, but using them to predict adversary actions in order to prevent them is a whole different game,” concludes Bajema.