Several international media outlets, led by The Wall Street Journal and Axios, reported that US Central Command (CENTCOM) used Claude, the AI assistant developed by Anthropic, for intelligence-related tasks during a US strike on Iran. The development is notable because President Donald Trump had earlier announced plans to cut ties with Anthropic and its AI tools following a dispute over usage terms. The disagreement arose after the Pentagon sought broader “unrestricted lawful” use of the platform, while Anthropic declined to remove certain safety guardrails governing how the technology could be deployed.
The intelligence-related tasks reportedly included analyzing intelligence data, identifying targets, and simulating battle scenarios during a February 28, 2026 attack on Iran, an operation the US titled “Epic Fury.” In this operation, the US military used Claude in support roles, not for firing weapons or autonomously controlling systems. Reading between the lines, the US military may have used Claude in the following ways:

Intelligence Analysis
In the Epic Fury operation, CENTCOM may have used Claude to rapidly process large volumes of satellite imagery, signals intelligence, and field reports. The model may have helped summarize intercepted communications, flag unusual patterns or movements, and cross-reference data from multiple sources. That would have eased the workload of military officers, who previously had to analyze thousands of reports manually; Claude could produce structured summaries and highlight key risks in very little time. Time is a key asset in war.
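As a purely illustrative sketch (nothing here reflects the actual system; source names, thresholds, and data are invented), cross-referencing reports from multiple sources and flagging unusual activity can be as simple as grouping records by location and applying corroboration thresholds:

```python
from collections import defaultdict

def cross_reference(reports):
    """Group (source, location) records, tracking which sources mention each location."""
    by_location = defaultdict(lambda: {"count": 0, "sources": set()})
    for source, location in reports:
        entry = by_location[location]
        entry["count"] += 1
        entry["sources"].add(source)
    return by_location

def flag_unusual(by_location, min_sources=2, min_count=3):
    """Flag locations reported by multiple independent sources above a count threshold."""
    return sorted(
        loc for loc, e in by_location.items()
        if len(e["sources"]) >= min_sources and e["count"] >= min_count
    )

# Invented example records: (source type, reported location)
reports = [
    ("imagery", "site-A"), ("sigint", "site-A"), ("field", "site-A"),
    ("imagery", "site-B"),
    ("sigint", "site-C"), ("sigint", "site-C"),
]
flagged = flag_unusual(cross_reference(reports))
print(flagged)  # only site-A is corroborated by multiple sources and frequent enough
```

The point of the thresholds is corroboration: a location mentioned many times by a single source is treated differently from one confirmed independently.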
Target Identification Support
Claude may have supported correlating imagery with known military infrastructure, matching objects or facilities against intelligence databases, and suggesting possible target classifications. Even so, the final targeting decisions were still made by human officers.
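A toy sketch of such database matching (the facility signatures, feature names, and scoring scheme are all invented for illustration) shows how a model could rank suggestions while leaving the final call to a human analyst:

```python
# Invented database of known facility types and their observable features.
KNOWN_FACILITIES = {
    "radar-station": {"dish", "tower", "fence"},
    "fuel-depot":    {"tank", "pipeline", "fence"},
    "airstrip":      {"runway", "hangar", "tower"},
}

def suggest_classification(observed_features, min_overlap=2):
    """Rank known facility types by feature overlap; return suggestions only.
    A human officer makes the final targeting decision."""
    scored = []
    for name, signature in KNOWN_FACILITIES.items():
        overlap = len(signature & observed_features)
        if overlap >= min_overlap:
            scored.append((overlap, name))
    return [name for overlap, name in sorted(scored, reverse=True)]

suggestions = suggest_classification({"tower", "dish", "road"})
print(suggestions)  # ['radar-station']
```

Keeping the output as a ranked suggestion list, rather than a single answer, is one way decision-support tools preserve human judgment in the loop.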
Battle Simulations and Scenario Modeling
The US military may have used AI to run “what-if” simulations, estimate likely responses from adversaries, and model escalation scenarios. This may have included forecasting collateral damage risks and helping commanders compare options before executing strikes.
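A hedged illustration of such “what-if” modeling: a small Monte Carlo comparison of two hypothetical courses of action, where every probability is invented and the model is deliberately simplistic:

```python
import random

def simulate_option(success_prob, collateral_prob, trials=10_000, seed=0):
    """Estimate success and collateral rates for one course of action
    by sampling independent Bernoulli trials."""
    rng = random.Random(seed)
    successes = sum(rng.random() < success_prob for _ in range(trials))
    collateral = sum(rng.random() < collateral_prob for _ in range(trials))
    return successes / trials, collateral / trials

# Invented options: (assumed success probability, assumed collateral probability)
options = {
    "option-1": (0.85, 0.05),
    "option-2": (0.90, 0.15),
}
for name, (p_s, p_c) in options.items():
    s, c = simulate_option(p_s, p_c)
    print(f"{name}: success~{s:.2f}, collateral~{c:.2f}")
```

Even a toy simulation like this makes the trade-off explicit: the second option is slightly more likely to succeed but carries roughly three times the assumed collateral risk.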
Logistics and Operational Planning
Claude may have assisted the US military with coordinating strike timing, aircraft fueling, and route optimization, and it may also have helped with supply-chain forecasting and mission scheduling. These functions are non-lethal but critical components of military operations.
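Route optimization in particular is a well-understood problem. A minimal sketch, assuming an invented waypoint graph with made-up fuel costs, finds the cheapest route with Dijkstra's algorithm:

```python
import heapq

def shortest_route(graph, start, goal):
    """Return (total_cost, path) for the cheapest route, or (inf, []) if none exists."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Invented waypoint graph; edge weights stand in for fuel cost.
graph = {
    "base":   {"wp1": 4, "wp2": 2},
    "wp1":    {"target": 5},
    "wp2":    {"wp1": 1, "target": 8},
    "target": {},
}
cost, path = shortest_route(graph, "base", "target")
print(cost, path)  # 8 ['base', 'wp2', 'wp1', 'target']
```

Note that the direct-looking routes (base→wp1→target at cost 9, or base→wp2→target at cost 10) both lose to the three-hop path, which is exactly the kind of non-obvious result optimization tools surface.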
Information Synthesis for Commanders
Large Language Models (LLMs) are particularly well suited to turning raw intelligence into briefings and drafting situation reports, which can then feed into structured decision memos.
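A minimal sketch of that synthesis step, with a plain template standing in for the LLM and all field names and data invented:

```python
def draft_briefing(items):
    """Render raw intelligence items as a structured situation report,
    ordered by priority (lower number = more urgent)."""
    by_priority = sorted(items, key=lambda i: i["priority"])
    lines = ["SITUATION REPORT", "================"]
    for item in by_priority:
        lines.append(f"[P{item['priority']}] {item['topic']}: {item['summary']}")
    lines.append(f"Total items: {len(items)}")
    return "\n".join(lines)

# Invented example items.
items = [
    {"priority": 2, "topic": "logistics", "summary": "fuel convoy delayed"},
    {"priority": 1, "topic": "air defense", "summary": "radar site active"},
]
report = draft_briefing(items)
print(report)
```

In a real pipeline an LLM would draft the narrative text of each entry; the value shown here is purely structural, forcing disparate inputs into one prioritized, scannable format for a commander.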
Conclusion
Across the media reports, there is no indication that Claude directly controlled weapons, made autonomous kill decisions, or replaced human chain-of-command authority.
As of now, most modern militaries use AI in decision-support roles rather than as autonomous commanders.