The Pentagon banned Anthropic's Claude AI from use in military operations, marking the first major restriction on foundation models by the U.S. Department of Defense. The decision follows leaked internal memos from Anthropic raising concerns about military applications of AI systems.
Mistral CEO Arthur Mensch stated the AI supremacy fight centers on open versus closed systems, not geographic location. "Power concentration in AI development poses greater risks than where systems are built," Mensch said, advocating for open-source models to distribute control.
India launched a $15 billion initiative to develop sovereign AI capabilities, joining a global race where nations seek independence from U.S. and Chinese foundation models. The investment targets domestic model development and computing infrastructure.
Lawsuits against AI companies over safety failures increased 340% in 2025, according to legal filings. Cases cite inadequate testing, harmful outputs, and lack of accountability mechanisms in deployed systems. Courts now examine whether foundation model developers bear liability for downstream applications.
The Anthropic memo controversy revealed internal debates about accepting military contracts. Staff members questioned whether Constitutional AI safeguards would sufficiently prevent use of the company's models in weapons development. The company declined to comment on specific Pentagon discussions.
Foundation models now power military reconnaissance in seven countries, defense analysts confirmed. Applications include satellite imagery analysis, logistics optimization, and strategic planning. In keeping with international law, none currently makes autonomous weapons decisions, the analysts said.
Geopolitical tensions escalated after the U.S. restricted GPU exports to 23 countries, citing AI weapons proliferation risks. China responded with export controls on rare earth minerals critical to chip manufacturing. The European Union proposed AI sovereignty requirements for government contracts.
Three major AI labs now employ former defense officials in leadership roles, raising concerns about revolving door dynamics. Transparency advocates demanded disclosure of military partnerships and dual-use research.
The crisis highlights fundamental questions about who controls transformative AI systems. As capabilities advance, the gap between technical progress and governance frameworks widens. International AI safety summits scheduled for April aim to address coordination mechanisms, though binding agreements remain unlikely.