U.S. DoD designates Anthropic national security risk; company sues to overturn procurement ban

Anthropic, the AI company led by CEO Dario Amodei, has been designated a national security risk by the U.S. Department of Defense and faces removal from federal procurement networks. The company has responded by suing the U.S. government to overturn the designation and by seeking a temporary injunction blocking its effect. The suits were filed in the Northern District of California and the U.S. Court of Appeals for the District of Columbia.

The DoD’s action designates Anthropic as a security risk and signals the expected removal of its Claude model from defense contracting channels after a grace period. The move appears to extend a broader push within the U.S. government to constrain access to Anthropic’s technology across federal agencies, highlighting a dispute over who should control the military use of AI.


Anthropic’s complaint argues that the scope of the government’s demand could include sensitive applications such as mass surveillance and fully autonomous weapons systems, which the company says would violate its human-rights- and democracy-based AI guidelines. The company describes the designation as unlawful and politically motivated retaliation, asserting there is no proven security threat justifying a blanket ban.

The DoD has pushed back, saying the core issue is secure interoperability with federal systems and verified data access. In its view, the military must be able to use technology for all lawful purposes, and private vendors should not unilaterally restrict uses in a way that interferes with military command-and-control capabilities.

OpenAI’s approach offers a contrast. In 2024, OpenAI removed a clause from its terms of service that prohibited military and war-related use, and it expanded collaboration with the DoD on classified networks and operation-specific AI systems. The contrast illustrates how government requirements for broad access and unrestricted use can become decisive in determining which AI platforms gain a foothold in the public sector.


The dispute carries implications beyond the United States. For South Korea and other allies, the case underscores how U.S. standards for defense-market access and AI governance may shape interoperability, procurement norms, and security policy across allied operations. As the U.S.-led alliance relies on shared information systems and joint command-and-control networks, how vendors are allowed to use and restrict AI could influence alliance readiness and modernization programs.

For U.S. readers, the Anthropic case highlights a broader tension between corporate ethical commitments and government security controls in AI. The outcome could influence the competitive landscape among leading AI firms, the trajectory of federal procurement, and the resilience of global AI supply chains as governments seek to balance innovation with national security and human-rights considerations.
