Anthropic resists US War Department push for unrestricted AI access

Anthropic refuses Pentagon demand to remove AI safeguards despite $200m contract threat. (Photo by AP)

American AI company Anthropic has said it “cannot in good conscience” comply with a US military demand to strip safety guardrails from its Claude artificial intelligence model, despite facing the possible cancellation of a $200 million contract. 

The standoff between the AI company and the US Department of War escalated this week after War Secretary Pete Hegseth gave Anthropic until Friday to grant the military unrestricted access to its Claude model or face punitive action.

Anthropic chief executive Dario Amodei said the company would not remove key safeguards designed to prevent the system’s use in mass domestic surveillance or fully autonomous weapons capable of killing without human oversight.

He expressed hope that Hegseth would reconsider, adding that Anthropic remains willing to support US national security “with our two requested safeguards in place.”

At the center of the dispute is the Pentagon’s insistence that Claude be made available for any lawful military application, without built-in restrictions.

Anthropic argues that current AI systems are not sufficiently safe or reliable for autonomous lethal weapons or broad surveillance programs, describing such uses as outside the responsible limits of today’s technology.

The US War Department has awarded significant AI contracts in recent years to major technology firms, including Anthropic, Google and OpenAI. Anthropic had previously been the only AI provider approved for use in classified US military systems, a status that intensified scrutiny of its safety commitments.

Earlier this week, Elon Musk’s xAI also secured approval for classified use.

The expansion of autonomous systems, particularly drones capable of operating without continuous human control, has amplified concerns among researchers and policymakers about the ethical and legal implications of AI-driven warfare.

A decision by Hegseth to classify Anthropic as a supply chain risk — a designation typically reserved for foreign adversaries — could significantly damage the company.

Such a move would likely bar other US defense contractors from using Anthropic’s products, dealing a major financial and reputational blow to the company.

Meanwhile, the Pentagon is accelerating parallel talks with other AI giants. xAI’s Grok, Google’s Gemini, and OpenAI’s ChatGPT already operate in unclassified military systems.

Negotiations are underway to bring Gemini and ChatGPT into classified networks as well. Although the Times reported Google was “close” to a deal while OpenAI was “not close,” a Defense Department official disputed that portrayal, insisting discussions with both firms are ongoing and that both are expected to sign.

However, Trump administration officials are demanding the same “all lawful purposes” commitment from each company. One source admitted it is unclear whether OpenAI would accept that condition, but said “we're talking.”
