Anthropic challenges Trump-era DoD security designation, sues to preserve federal contracts
Anthropic, a U.S. AI startup known for its Claude large language model, faces potential removal from federal procurement after the Trump administration designated it a supply-chain security risk. The company has responded by suing the U.S. government, asking a California federal court and a Washington, D.C., appeals court to vacate the designation and halt its effects.
The dispute turns on how far U.S. national security policy can reach into private AI development. The Defense Department has designated Anthropic a security risk, citing concerns that could restrict the use of its systems in defense contracts. Anthropic counters that the designation conflicts with its AI guidelines, which are grounded in human rights and democratic values, and warns that it stigmatizes the company without evidence of an actual threat, especially where the uses at issue are as sensitive as broad surveillance or fully autonomous weapons.
The U.S. military's position, however, is that the real issue is secure interoperability and verified access to data within federal systems. The DoD maintains that it must be able to employ technology for all lawful purposes and that private vendors cannot unilaterally limit uses to the point of interfering with military command, control, and operations.

The dispute unfolds against a contrasting development in the AI industry. OpenAI, a leading rival, has moved in the opposite direction: after dropping a military-use ban from its terms in early 2024, it announced a strategic partnership to build AI for DoD classified networks and operational environments. The split underscores how differently private firms are weighing government demands for broad military applicability of AI against their own ethical guidelines.
U.S. policymakers are using supply-chain risk designations in a way that has rarely targeted domestic AI startups. The Trump administration's move to push Anthropic out of federal contracting signals a broader effort to dictate how AI may be used in national security contexts, and it could set a precedent for other vendors in the sector.

For U.S. readers, the case matters because it touches defense procurement, national security, and the governance of AI in sensitive environments. A government stance that compels vendors to accept broad usage terms could influence the pace and nature of private innovation, affect cloud and data-sharing standards, and shape how American tech companies compete for federal and allied contracts.
The episode also has implications beyond the United States, particularly for allied defense ecosystems. In the context of the U.S.–Korea alliance, where joint operations rely on interoperable information systems and shared military technology, the case raises questions about how global security standards for AI will be defined and enforced across partners, and how that will affect Korea’s own defense platforms and procurement strategies.
Ultimately, this confrontation highlights a core tension in the AI era: how to balance ethical and human-rights considerations with the government’s desire to control military uses of powerful technology. The outcome could influence who governs AI in national security, how private firms collaborate with the state, and the future shape of the global AI industry.