The Center for International Governance Innovation (CIGI) released a report by Michael C. Horowitz titled “Artificial Intelligence, the Future of War and International Politics.” The report argues that AI is a general-purpose technology (GPT), not a weapon class: like electricity, the combustion engine, or computing power, it is diffuse, dual-use, commercially driven, and rapidly evolving. Any attempt to regulate AI the way nuclear weapons are regulated is therefore, the report argues, conceptually flawed and strategically doomed.
Private Sector Dominance and Military AI Innovation
The private sector dominates military-relevant AI innovation, with cutting-edge capabilities built by commercial firms, startups, and open-source communities. Governments are users and adapters, not originators, which reverses the Cold War model of state-led military innovation. According to the report, states must therefore manage their dependence on private actors whose incentives are not aligned with military stability.
The report notes that Lethal Autonomous Weapon Systems (LAWS) are only a small fraction of military AI use; the real transformation lies in surveillance, decision support, logistics, targeting workflows, and human-machine teaming. Over-focusing on LAWS, it warns, blinds policymakers to the systemic military reorganization that AI is driving.
Lessons from the Russia-Ukraine War and the Rise of Precise Mass
The report treats the Russia-Ukraine war as a live laboratory: one-way attack drones dominate casualties, millions of low-cost systems are produced annually, and AI plus autonomy defeat electronic jamming. This marks the arrival of “precise mass,” in which systems are cheap, accurate, scalable, and attritable. It is a structural shift: the age of small numbers of exquisite platforms is ending.
Today, autonomy is driven by electronic warfare rather than ethics: when jamming cuts human control links, autonomy becomes a practical battlefield necessity, not a doctrinal preference. Both Russia and Ukraine use algorithmic target recognition, onboard decision-making, and open-source AI tools. The implication is that autonomy will spread regardless of moral opposition because it solves real tactical problems.
AI is lowering the barrier to military power: precise mass requires only precision guidance (widely available), commercial manufacturing, and non-frontier AI. Even non-state actors and middle powers can now field advanced capabilities. As a result, military power is diffusing downward, destabilizing traditional hierarchies.
Operational Risks, Automation Bias, and Strategic Challenges
AI is increasingly embedded in target identification, strike-authorization pipelines, and command decision support. This creates a danger of automation bias, in which humans over-trust AI outputs and errors scale faster than human judgment can catch them. The key insight is that AI failures are not primarily technical but organizational and cognitive. Militaries are adopting AI faster than they can test it realistically, validate it under stress, or train operators in its limits. The result is a false sense of reliability and a stability risk: mistakes can propagate upward into strategic decisions.
AI increases the speed, complexity, and opacity of war. Faster decision cycles leave less time for reflection, verification, and de-escalation, and interactions between autonomous systems become unpredictable. The strategic fear is that wars may begin without deliberate political choice.
Delegating nuclear-use decisions to AI is unacceptable. Even AI support systems pose risks such as false positives, pattern hallucinations, and escalatory feedback loops. Human judgment must remain central to nuclear command and control.
AGI raises extreme but unverifiable risks, such as undermining submarine-based deterrence and enabling mass proliferation of WMD knowledge. These risks are plausible but cannot be verified, so AGI concerns should inform caution without driving speculative panic.
Traditional arms control will fail for AI because there is no physical bottleneck like fissile material. Export controls distort markets but do not stop diffusion, and regulation risks locking in obsolete rules. This makes prohibition-based governance ineffective for GPTs.
Confidence-Building Measures for Global AI Stability
Confidence-building measures are the only viable path, as they create shared interests among even rival states in areas such as avoiding accidents, preventing miscalculation, and maintaining escalation control. This is the document’s central policy thesis.
The report proposes three concrete, realist cooperation mechanisms:
- Autonomous Incidents Agreement: Modeled on Cold War naval agreements, applying peacetime rules to autonomous systems and reducing accidental encounters.
- Human Control over Nuclear Weapons: A binding commitment to human responsibility that reinforces accountability at the highest stakes.
- International Institutional Capacity-Building: A UN-based forum for best practices, policy templates, and technical assistance, inclusive of Global South states.
A peacetime focus is crucial because wartime restrictions are unenforceable. Peacetime norms build trust, shape expectations, and influence behavior before crises occur.
The final strategic judgment is that AI will reshape war whether states cooperate or not. The real choice, therefore, is between unmanaged diffusion into instability and cooperative risk reduction toward conditional stability.
Conclusion
In conclusion, military AI is a commercially driven, general-purpose technology that is already transforming warfare through precise mass and autonomy. It cannot be banned or centrally controlled. The only realistic path to international stability lies in confidence-building measures that reduce accidents, preserve human judgment (especially over nuclear weapons), and manage escalation before crises begin.