AI Drives Modern Warfare Strategy as U.S. and Israeli Strikes on Iran Highlight Frontier Tech

U.S. and Israeli strikes on Iran have put a spotlight on how artificial intelligence is shaping modern warfare, with reports that frontier models from Anthropic and Palantir were used to plan and execute the operation. Observers say the integration of AI into targeting, surveillance, and mission analysis could mark a turning point in how wars are fought.

Foreign media described Palantir’s Gotham as helping identify Iran’s military facilities and leadership hideouts by merging radar and drone data, while Anthropic’s Claude was said to have run thousands of attack scenarios to minimize American and Israeli casualties. Some reports also alleged Iranian casualties and other consequences, though those claims remain unverified. The coverage underscores how real-time AI synthesis is becoming integral to strategic decision-making in conflict zones.

Anthropic has long debated the ethics of military use of AI. The company opposes unregulated mass surveillance of Americans and fully autonomous weapons without human oversight, a position that has clashed with U.S. Defense Department expectations about deployable tools. The DoD has argued that its programs require flexibility that can blur lines between civilian and military AI usage, raising questions about what “all lawful uses” means in practice.


The strikes also echo a broader internal dispute over AI procurement. In a separate operation in Venezuela in January, U.S. forces reportedly relied on Anthropic’s Claude during planning and execution, with official statements highlighting that the mission avoided American casualties but resulted in Venezuelan fatalities. Anthropic has disputed aspects of these reports, while Pentagon officials circulated internal documents to AI vendors pushing for broader access to DoD-approved tools.

In parallel, defense contractors and AI firms have responded in divergent ways. OpenAI reportedly accepted the Defense Department’s broad “all lawful uses” clause after previously resisting some restrictions, while others, such as Elon Musk’s xAI, took a more accommodating stance toward DoD requirements. The split among top AI players has fueled industry-wide debates about safety, governance, and the strategic vulnerability of supply chains in national security contexts.


Within the industry, a notable show of solidarity emerged as AI workers circulated a public letter vowing not to fracture along corporate lines. By early March, hundreds of employees across OpenAI, Google, and other firms had joined the effort, reflecting anxiety over how defense contracts influence research priorities and safety norms. The tension underscores how corporate culture—especially around AI safety and ethics—shapes policy decisions with national and global implications.

For the United States, the episode illustrates several high-stakes implications: how defense contracting can accelerate or constrain AI innovation, the risk of supply chains depending on a handful of frontier models, and the potential for political backlash or consumer pushback if AI tools tied to national security are perceived as unsafe or misused. Some analysts also point to market effects, including volatility in oil prices and in the value of defense-related tech stocks, as investors react to a rapid shift in how AI is integrated into warfare.

Looking ahead, experts argue that the episodes underscore the urgent need for clear legal and regulatory frameworks governing AI in national security. Without robust rules on safety, accountability, transparency, and civilian protection, the rapid adoption of AI in defense could outpace public oversight and international norms, with wide-reaching consequences for U.S. security, technology leadership, and global markets.
