Did the U.S. Military Just Ignore Trump’s Ban on Anthropic’s Claude AI During Strikes on Iran?

In a striking display of operational urgency over political decree, the U.S. military continued to rely on Anthropic’s Claude AI for critical intelligence and targeting decisions during massive joint U.S.-Israeli airstrikes on Iran on February 28 — just hours after President Trump ordered all federal agencies to stop using the company’s technology.

According to people familiar with the matter who spoke to The Wall Street Journal, commands worldwide, including U.S. Central Command (CENTCOM) in the Middle East, used Claude for intelligence assessments, target identification, and battlefield scenario simulations as part of “Operation Epic Fury.” CENTCOM has declined to comment on the specific systems involved in its ongoing operations against Iran.

The revelation highlights a classic case of national-security reality clashing with executive directive. Trump announced the ban on February 27 via Truth Social, labeling Anthropic “Leftwing nut jobs” and accusing the company of trying to “STRONG-ARM the Department of War.” The order came roughly one hour before a Pentagon deadline for Anthropic to remove restrictions that prevented Claude from being used in mass domestic surveillance or fully autonomous weapons systems. Defense Secretary Pete Hegseth quickly designated Anthropic a “supply chain risk to national security” — a label normally reserved for foreign adversaries like China or Russia.

Yet the Pentagon had already built Claude deep into its classified infrastructure. Anthropic was the first AI company to deploy frontier models on classified Pentagon networks, and Claude remains the only such model fully approved for those systems. Last summer the company secured a contract worth up to $200 million. Trump’s directive did include a six-month phase-out period for heavily integrated agencies like the Department of Defense, giving the military legal breathing room to keep using the system in the short term.


Why did the military keep using Claude? The simple answer is that replacing it is neither quick nor easy. Military officials and AI experts say fully swapping out the model across classified networks could take months, or longer. A senior defense official told the Journal that competing systems from OpenAI, Google, and Elon Musk's xAI are "just behind" in specialized government applications but are not yet fully cleared or optimized for the classified environments where Claude already operates seamlessly.

On the very day of the ban, OpenAI signed its own deal with the Defense Department. CEO Sam Altman publicly stated that his company had agreed to the same guardrails Anthropic had insisted upon — limits on mass surveillance and autonomous weapons. However, moving any new model into classified networks remains a complex, time-consuming process that cannot happen overnight.

Anthropic CEO Dario Amodei had refused to lift the company’s ethical restrictions, calling them “red lines” the firm would not cross even if it meant losing government contracts. The company maintains that those safeguards “had not affected a single government mission to date.”

The result is a temporary but revealing workaround: while politicians fight over ideology and supply-chain labels, the military quietly kept the most capable tool it had on the battlefield. Whether this episode marks the beginning of a longer power struggle between the White House and the Pentagon over AI policy — or simply the pragmatic reality of modern warfare — remains to be seen.

For now, one thing is clear: when lives and mission success are on the line, the Pentagon’s priority is operational effectiveness, even if that means using a system the commander-in-chief just tried to ban.

