South Korea braces for tighter U.S. AI controls on defense work

Anthropic, a U.S.-based AI startup known for stressing ethical guidelines and human-rights protections in its tools, has found itself in a high-stakes clash with the U.S. government over national security and control of powerful AI. The Department of Defense designated Anthropic as a supply-chain security risk and moved to remove its Claude AI from federal contracting networks. President Donald Trump also pressed, through executive action, for broad withdrawals of Anthropic's technology from government use.

Anthropic has responded with legal action, filing to cancel the DoD designation and to suspend its effects. In its court filing, the company argued that the DoD’s proposed usage restrictions could permit large-scale surveillance and fully autonomous weapons. It described the government’s position as an unprecedented move that would threaten human rights and democracy, characterizing the designation as unlawful and politically motivated.

The DoD’s stance centers on the government’s right to deploy technology for legitimate defense needs, arguing that a private vendor cannot dictate or narrowly constrain military use within the armed forces. The department maintained that national security interests require broad access to AI capabilities, and that constraining legitimate military applications would itself constitute a security risk. The episode follows a broader push in U.S. policy circles to curb or shape access to advanced technologies.

In contrast, OpenAI has taken a different path. The company removed a military-use prohibition from its terms in early 2024 and has since moved to expand collaboration with U.S. government and defense-related networks. As Anthropic faced mounting pressure, OpenAI announced a strategic partnership focused on building AI systems for classified and defense environments, signaling a stark split in how leading AI firms approach government engagement.

For U.S. readers, the case highlights how national-security priorities are reshaping the competitive AI landscape. The DoD’s designation and the administration’s push to steer procurement and supplier access show that geopolitical considerations—rather than ethics alone—can influence which technologies and firms are able to operate in the public sector and in government-contracted markets. The outcome could affect supply chains, government contracting, and the pace of U.S. AI leadership globally.

The dispute also has implications beyond the United States, including for allied security and defense ties with South Korea. In an era of joint U.S.-led defense planning and interoperable systems, a government expectation of "full and lawful use" of AI by suppliers could set de facto standards for defense networks and civilian-military AI deployments overseas. Korean AI firms and government buyers may need to weigh similarly stringent security and governance requirements when pursuing U.S. defense and public-sector work.

Ultimately, this case underscores a central challenge of the AI era: what happens when national security and geopolitics collide with corporate ethics and innovation. The Anthropic litigation will be watched by policymakers, industry, and allies as they gauge how far governments will go to secure access to advanced AI — and what that means for global markets, supply chains, and security cooperation.
