DoD designates Anthropic as supply-chain risk, first U.S. firm on the list
The U.S. Department of Defense has designated Anthropic, the AI start-up behind the Claude model, as a supply-chain risk, marking the first time the department has placed a U.S. company on a list that has typically highlighted foreign or adversarial suppliers such as Huawei and ZTE. The move underscores concerns over how critical AI tools could affect national security and military readiness.
Anthropic refused a DoD request to permit Claude to be used for “all lawful purposes,” prompting a legal confrontation with the department. The company has since sued, calling the designation an “unprecedented illegal retaliatory action.” The dispute highlights broader questions about who controls the use of advanced AI in national security and how ethical restrictions intersect with defense needs.

Ironically, the timing intersects with political pressure Anthropic faced from the previous administration. The day after former President Donald Trump ordered a broad ban on Anthropic technology for federal use, Anthropic’s Claude reportedly played a central role in a major U.S. strike against Iran, contributing to target identification and decision support in what authorities described as a complex military operation.
Experts say the episode exposes a governance gap in military AI: internal policies and administrative rules govern autonomous weapons, but little binding law covers AI-generated targeting, decision-making, or civilian harm. Briana Rosen, a policy researcher at Oxford’s Blavatnik School of Government, has argued that current contracts and internal DoD rules cannot fully account for the practical realities of warfare in which AI is involved.
Anthropic’s founders have long prioritized AI safety and ethical constraints, and the firm has historically worked closely with the DoD. Yet the company has also pursued opportunities outside the United States and has weighed the political and ethical implications of surveillance and foreign uses of AI.

The clash with Anthropic foreshadows ongoing tensions between private AI firms and the federal government over control of military technology. OpenAI, also affected by Trump-era policy actions, subsequently pursued a DoD contract under different terms, and its chief executive has promised revisions intended to address privacy and mass-surveillance concerns. The broader industry mood includes rapid leadership turnover and market shifts as major AI players recalibrate their government-facing activities.
Beyond Anthropic, DoD–tech industry ties have grown stronger. Palantir’s Maven Smart System (MSS) integrates military data to support target identification and operational planning, while Google has supplied its Gemini large language model for the DoD’s GenAI.mil platform. Major cloud providers—Microsoft, Amazon Web Services, Google, and Oracle—are involved in the Pentagon’s Joint Warfighting Cloud Capability program, a $9-billion effort launched in 2022. As the U.S. edges toward greater AI-enabled military capability, questions about governance, accountability, and the safety of civilian populations abroad increasingly intersect with economics, markets, and global supply chains.