OpenAI has announced an agreement with the United States (US) Department of War (DOW) to deploy its AI systems in classified environments. The company disclosed the deal on February 28, stating that it will provide advanced AI models to support the department’s operations under a defined contractual framework.
OpenAI said it had previously declined to enter such an arrangement because it believed its safeguards were not ready for classified deployment. It now says it has developed the architecture and contractual protections needed to proceed without removing technical guardrails.
In explaining its decision, OpenAI pointed to what it described as growing threats from adversaries integrating AI into military systems. The company said the US military “absolutely needs strong AI models” to support its mission and added that it remains unwilling to remove key technical safeguards to enhance performance for national security work.
It also said it sought to “de-escalate things between the DoW and the US AI labs”, arguing that deeper collaboration between government and AI companies is necessary. Notably, the company also stated that it does not believe the government should designate rival firm Anthropic as a “supply chain risk”.
For context, the announcement follows last week’s breakdown in negotiations between Anthropic and the DOW, after the company refused to drop two restrictions: a ban on using Claude for surveillance of US citizens, and a ban on its use in lethal autonomous systems without human oversight.
This led to the US military designating the AI company as a “supply chain risk”, and to US President Donald Trump announcing that all federal agencies would “immediately cease” use of Anthropic technology, with a six-month phase-out period.
How OpenAI Says It Preserved Its Red Lines
Interestingly, OpenAI says it is preserving the same red lines that led to Anthropic being designated a “supply chain risk”. The company said, “We believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic’s original contract.” It said the three main red lines with the DOW are:
- No use of OpenAI technology for mass domestic surveillance
- No use of OpenAI technology to direct autonomous weapons systems
- No use of OpenAI technology for high-stakes automated decisions, such as social credit systems
Cloud Deployment To Protect Red Lines
According to the company, it has preserved these limits primarily through deployment architecture. The systems will operate through a cloud-only model, rather than being installed on edge devices or embedded directly into weapons platforms. OpenAI said this structure ensures it retains control over its safety stack and can independently monitor and update safeguards. Because the models will not run autonomously on military hardware, the company said, they cannot power fully autonomous lethal systems.
The contract text includes explicit limits. It states: “The AI System will not be used to independently direct autonomous weapons” where law or policy requires human control. On surveillance, it adds, “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information.”
The agreement also states that the DOW may use the system “for all lawful purposes”, tying permitted use to existing legal frameworks. OpenAI says this language, combined with its cloud deployment, ensures that the red lines remain enforceable even within classified environments.
In addition, OpenAI said it will place cleared, forward-deployed engineers in the loop during deployments. It said it retains full discretion over its safety stack and will not deploy models without guardrails. The company added that it could terminate the agreement if the government violates its terms, though it said it does not expect that to occur.
DOW Versus Anthropic To OpenAI Deal: A Timeline
- February 23: Elon Musk’s xAI agreed to allow its Grok model into classified DOW systems under any lawful use terms. This meant Claude was no longer the only system available to the military.
- February 24: Defence Secretary Pete Hegseth met Anthropic CEO Dario Amodei and set a Friday deadline (February 27) to remove contractual guardrails restricting Claude’s military use. He signalled that failure to comply could trigger legal and procurement consequences.
- February 26: Anthropic said it “cannot in good conscience accede” to demands that would lift restrictions on mass domestic surveillance and fully autonomous weapons. The company reiterated that those safeguards were non-negotiable, even in classified environments.
- February 27: Donald Trump announced on Truth Social that all federal agencies would immediately cease using Anthropic, with a six-month phase-out.
- February 28: Hours after the deadline lapsed, Sam Altman posted on X that OpenAI had reached an agreement with the DOW, stating the contract reflected prohibitions on domestic mass surveillance and autonomous weapons.
Why This Matters
OpenAI’s agreement with the DOW arrives at a moment when the boundaries between commercial AI safeguards and military operations are under visible strain. Notably, The Wall Street Journal reported that US Central Command still used Anthropic’s Claude AI during airstrikes on Iran hours after President Trump ordered federal agencies to halt use of the company’s technology — highlighting how deeply embedded such systems are in operational workflows.
That episode highlights a broader challenge: once defence agencies integrate an AI model into classified systems, procurement disputes do not immediately remove it from operational use. Against that backdrop, OpenAI’s insistence that its red lines will remain enforceable through cloud deployment, contractual language and technical oversight takes on added significance.
However, the question now shifts from contract design to implementation. Classified military environments prioritise urgency and secrecy, which limits external visibility into how safeguards function in practice. Consequently, the real test for OpenAI’s prohibitions on domestic mass surveillance and autonomous weapons use will not lie in the wording of its agreement but in how the DOW implements and upholds those commitments under real-world operational pressure.