Anthropic Placed on U.S. DoD Supply-Chain Risk List, Highlighting AI Governance Tensions

The U.S. Department of Defense has added Anthropic, a U.S. artificial intelligence company, to its supply chain risk list, a designation that has historically targeted tech firms from adversarial countries such as China. The move signals a high-stakes tension over how and when civilian AI technology should be used to support national security.

Anthropic’s dispute with the Defense Department centers on the conditions the company attached to how its Claude language model could be used. The department demanded access for “all lawful purposes,” which Anthropic refused, insisting on limits that bar mass surveillance and fully autonomous weapons. The clash has escalated into a legal fight, with Anthropic accusing the government of improper retaliation.

The disagreement underscores a broader governance question: who should decide how military-grade AI tools are deployed? Anthropic has argued for ethical boundaries—one of its “red lines”—while the Defense Department maintains that ultimate decision-making authority over military use should rest with military leadership, not private companies. This reflects deeper tensions between private-sector innovation and national-security policy.

Deputy Assistant Secretary of Defense for Arctic and Global Resilience Iris Ferguson conducts a press briefing on the Department of Defense’s 2024 Arctic Strategy at the Pentagon, Washington, D.C., July 22, 2024. (DoD photo by U.S. Navy Petty Officer 1st Class Alexander Kubitza)
Representative image for context; not directly related to the specific event in this article. License: Public domain. Source: Wikimedia Commons.

Last July, Anthropic agreed to provide Claude to the Defense Department under a $200 million contract that gave the model access to the department’s confidential networks. Anthropic contends that the department sought to reverse aspects of that arrangement earlier this year, while the Defense Department maintains that policy decisions on use belong to military officials. The dispute illustrates how civilian AI firms and government buyers negotiate control in urgent security contexts.

Experts say the clash points to gaps in the current governance framework for military AI. Briana Rosen of the Blavatnik School of Government at Oxford argued that the lack of formal rules governing autonomous weapons and AI-enabled targeting illustrates how contracts alone cannot align evolving technology with national-security needs. The contract-based approach, she says, cannot fully address a dynamic battlefield reality.

The dispute has rippled through the broader U.S. tech sector. After Anthropic’s friction with the Defense Department, OpenAI stepped into a separate contract relationship with the government and revised its terms to address public concerns such as mass-surveillance risks. The public and policy backlash has intensified scrutiny of private firms’ involvement in defense work, even as major tech players continue to seek government AI contracts.
U.S. tech giants remain tightly entwined with defense ambitions. Google recently supplied its Gemini language model to the Defense Department’s GenAI.mil platform. Google previously faced internal backlash over its participation in Project Maven, a prior military AI program, and both Google and other big firms—Microsoft, Amazon Web Services, Oracle, and others—are participants in the government’s Joint Warfighting Cloud Capability, a roughly $9 billion cloud initiative for military data and AI.

As defense and industry deepen their collaboration around AI, concerns about an “AI-military-industrial complex” gaining influence over U.S. security, markets, and policy grow louder. The experience with Anthropic highlights ongoing questions about governance, accountability, and how to balance rapid technological advancement with ethical and constitutional considerations in national security.

The episode also highlights the gap between policy and wartime practice. Reporting indicates that Claude played a central role in a large-scale U.S. military operation against Iran shortly after President Trump reportedly banned Anthropic’s technology from federal use, illustrating how political directives and on-the-ground military needs can diverge in real time as AI tools enter critical decision-making and targeting processes.
