Anthropic AI Prohibits Military Applications Following Pentagon Directive
Anthropic has implemented a policy barring military applications of its AI models, reversing course after those models were initially deployed on classified US military cloud networks.

Anthropic, a prominent AI safety and research company, has issued a directive prohibiting military use of its artificial intelligence models. The policy change comes after the company had previously enabled its AI to operate on classified United States military cloud networks.
Dario Amodei, CEO of Anthropic, confirmed the decision to restrict military applications. The move is a significant pivot for Anthropic, departing from its prior engagement with defense-sector cloud infrastructure.
The initial deployment on sensitive government networks marked the company's first steps toward integrating advanced AI with national security operations. The new policy represents a deliberate retreat from that use of its technology.
This development carries weight for the broader Web3 and AI ecosystem. As AI systems grow more powerful and spread across sectors, the ethical questions surrounding their deployment, particularly in defense, become paramount. Anthropic's stance offers a precedent for how AI developers can weigh ethical constraints and usage restrictions when operating in sensitive industries.
Originally reported by CoinTelegraph.