OpenAI has revised its controversial agreement with the U.S. Department of Defense (DoD) following widespread criticism over potential risks of domestic surveillance. CEO Sam Altman announced the amendments on March 2, 2026, in a post on X sharing an internal company memo; the new terms explicitly prohibit the intentional use of OpenAI's AI systems to surveil U.S. persons.
The changes address concerns that the original deal’s broad language—allowing use for “all lawful purposes”—could enable mass monitoring of Americans, including through commercially purchased personal data.
The Revised Contract Language
The new clauses added to the agreement include:
- “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
- A second provision states that the DoD “understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
Altman emphasized the need for clarity: “There was so much focus on this, that we wanted to make this point especially clear.” The Pentagon also confirmed that OpenAI’s services would not be used by intelligence agencies like the NSA without a separate contract modification.
These updates build on safeguards OpenAI cited in its initial February 27 announcement, which the company said prohibited domestic mass surveillance and required that humans remain responsible for any use of lethal force, including by autonomous weapons.
What Sparked the Backlash?
The original deal was announced late on February 27—hours after President Donald Trump ordered federal agencies to cease using rival Anthropic’s AI tools. Anthropic had been effectively blacklisted by Defense Secretary Pete Hegseth after refusing to drop restrictions on mass surveillance and autonomous weapons, with the Pentagon insisting on “all lawful purposes” language.
OpenAI positioned its agreement as including similar “red lines,” but critics argued that relying on existing laws, many of which predate modern surveillance revelations such as Edward Snowden's disclosures, offered weak protection. Legal experts noted that frameworks like the Fourth Amendment and FISA have historically permitted expansive government programs when interpreted broadly.
Internal dissent was significant: Before the deal closed, 96 OpenAI employees signed an open letter urging leadership to reject Pentagon demands. Researcher Leo Gao publicly called it “window dressing.” A public campaign dubbed “QuitGPT” gained traction, with over 1.5 million people reportedly canceling subscriptions or joining protests.
The timing of the release, on a Friday night amid geopolitical tensions including U.S. strikes on Iran, drew accusations of opportunism. Altman later admitted the rollout was a mistake: “The issues are super complex, and demand clear communication… We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
Lingering Skepticism and Broader Context
While the amendments strengthen explicit bans, some analysts remain doubtful about enforcement. The contract still ties restrictions to “applicable laws” rather than independent, ironclad prohibitions. Critics point out that U.S. surveillance has often operated within legal gray areas, and publicly available data (not explicitly covered) could still enable bulk analysis.
The episode highlights tensions in the AI industry’s military pivot: OpenAI secured access to classified networks where Anthropic was shut out, raising questions about negotiation leverage, safety principles, and the balance between national security needs and civil liberties.
Altman’s revisions appear aimed at rebuilding trust with employees, users, and privacy advocates amid a rapidly evolving landscape where AI’s defense applications are increasingly central—and contested. Whether the changes fully satisfy critics or prove enforceable in practice remains an open question as scrutiny continues.