U.S. DoD designates Anthropic national security risk; company sues to overturn procurement ban
Anthropic, the AI startup led by CEO Dario Amodei, has been designated a national security risk by the U.S. Department of Defense and faces removal from federal procurement networks. The company has responded by suing the U.S. government, seeking to overturn the designation and to obtain a temporary injunction blocking its enforcement. The suits were filed in the Northern District of California and the U.S. Court of Appeals for the District of Columbia Circuit.
The designation brands Anthropic a security risk and signals the expected removal of its Claude models from defense contracting channels after a grace period. The move appears to extend a broader push within the U.S. government to constrain access to Anthropic’s technology across federal agencies, and it crystallizes a dispute over who should control the military use of AI.

Anthropic’s complaint argues that the scope of the government’s demands could encompass sensitive applications such as mass surveillance and fully autonomous weapons systems, which the company says would violate its AI usage guidelines grounded in human rights and democratic values. It describes the designation as unlawful, politically motivated retaliation, asserting that no proven security threat justifies a blanket ban.
The DoD has pushed back, saying the core issue is secure interoperability with federal systems and verified data access. In its view, the military must be able to use technology for all lawful purposes, and private vendors should not unilaterally restrict uses in a way that interferes with military command-and-control capabilities.
OpenAI has taken a divergent approach. In 2024, it removed a clause prohibiting military and warfare uses from its terms of service and expanded collaboration with the DoD on classified networks and operation-specific AI systems. The contrast illustrates how government requirements for broad access and use can become decisive in determining which AI platforms gain a foothold in the public sector.

The dispute carries implications beyond the United States. For South Korea and other allies, the case underscores how U.S. standards for defense-market access and AI governance may shape interoperability, procurement norms, and security policy across allied operations. Because U.S.-led alliances rely on shared information systems and joint command-and-control networks, the terms on which vendors may restrict the use of their AI could influence alliance readiness and modernization programs.
For U.S. readers, the Anthropic case highlights a broader tension between corporate ethical commitments and government security controls over AI. The outcome could shape the competitive landscape among leading AI firms, the trajectory of federal procurement, and the resilience of global AI supply chains as governments seek to balance innovation with national security and human-rights considerations.