The story of weaponized AI is unfolding in real time. On Tuesday, February 25, Defense Secretary Pete Hegseth sat across from Anthropic CEO Dario Amodei and issued a deadline: comply with the Pentagon’s demand for unrestricted access to Claude, Anthropic’s flagship AI model, or lose a $200 million government contract and be placed on a federal blacklist. According to reporting by Al Jazeera and NBC News, Hegseth also threatened to invoke the Defense Production Act—a Korean War-era statute designed for steel mills and tank factories, not software—to force the company to strip its safety guardrails entirely. This is being framed as a matter of national security. It is, more accurately, a blueprint for state-controlled weaponized AI.
The details matter. Anthropic has two firm positions it has refused to surrender: its models should not be used for the mass domestic surveillance of American citizens, and they should not be integrated into autonomous weapons systems that fire without human intervention.
These are not extreme demands. They represent the minimum ethical threshold for a technology still in its infancy—one whose developers openly acknowledge they do not yet fully understand. And yet, as of this writing, Anthropic is the last significant holdout among major AI companies. xAI, OpenAI, and Google have already agreed to the Pentagon’s “any lawful use” terms. Elon Musk’s xAI even secured approval for classified military networks this week. Each agreement expands the infrastructure of weaponized AI without public debate.
The pressure worked on Anthropic in at least one regard. The company quietly dropped its formal commitment to pause AI training if capabilities outstripped safety procedures—a policy it previously described as a “race to the top” for the industry. Anthropic’s chief science officer, Jared Kaplan, explained the change to Time magazine by noting that competitors were “blazing ahead.” That logic is precisely how races to the bottom are rationalized: one incremental step at a time.
The “Safety” Paradox: When Guardrails Become Weaponized AI’s First Target
Pentagon officials have taken to describing Anthropic’s safety restrictions as “woke AI”—a term that, as NPR notes, AI experts regard as nebulous and ill-defined, applied broadly to any guardrail that inconveniences preferred applications. The rhetoric rebrands weaponized AI as a culture war issue. Reframe the constraint as an ideology, and dismantling it becomes a principled act rather than a power grab.
AI is not a mature technology. There is no established legal framework governing its use in surveillance, let alone in lethal operations. Anthropic’s Claude was, until this week, the only AI model deployed on classified U.S. military networks—a position now being dismantled as xAI’s Grok moves in under “any lawful use” terms. Claude reportedly operates through a partnership with Palantir Technologies, the data analytics firm co-founded by Peter Thiel, whose tools are also used by federal law enforcement agencies. Al Jazeera has reported that Claude was deployed during the January operation in Caracas that resulted in the abduction of Venezuelan President Nicolás Maduro, though the precise nature of that deployment remains publicly unverified.
When a government demands unrestricted access to a model capable of such operations, it is not asking for efficiency. It is eliminating accountability. The legal analysis on the Defense Production Act, as detailed by Lawfare, makes this precise point: the statute’s applicability to AI safety guardrails is genuinely contested, and Hegseth may not need to win a legal argument to achieve his goal. He simply needs the threat to be credible enough to change behavior, and it already has.
I think about what it means to normalize this logic. If a government can classify any dissent against state policy as a security threat—and there is ample recent precedent for this, including the labeling of climate activists and pipeline protesters as domestic terrorists—then an AI system with no guardrails becomes an extraordinarily efficient instrument of suppression. The targeting is automated. The scale is unlimited. The audit trail is controlled by the state.
The Prison Without Walls: Weaponized AI and the Inversion of Justice
The two principles Anthropic is defending correspond to concrete transformations in what governments can do to individuals. Mass surveillance integrated into AI systems does not merely watch people. It can pre-judge them. Predictive risk scores, assigned before any action has been taken, would invert the foundational logic of our legal system: the presumption of innocence.
A person accused under a conventional legal system can confront witnesses, examine evidence, and understand the basis of the state’s case. An algorithmic risk score housed in a proprietary government model would offer none of this. There is no mechanism for rehabilitation. There is no appeal that reaches the actual decision-making process. You cannot cross-examine a black box. Weaponized AI’s accuracy is beside the point if the process itself is unchallengeable.
The financial dimension is more concrete still. Once weaponized AI is deeply integrated into banking and payment infrastructure — a process already underway in various jurisdictions — the capacity to freeze a person’s digital existence becomes an administrative act rather than a judicial one. No arrest. No charge. No courtroom. Canada’s invocation of the Emergencies Act in 2022 to freeze the bank accounts of convoy protesters established that financial exclusion as a tool of civil suppression is not theoretical in liberal democracies.
What concerns me most is not the law-and-order-versus-freedom frame that dominates these debates. It is who decides what those categories mean, and who remains exempt from them. AI-assisted enforcement, applied at scale, will automate the existing inequalities of our legal system. The wealthy will have lawyers, loopholes, and political access. Everyone else will have an algorithm.
The Monopoly on Intelligence
If the Defense Production Act is successfully used to compel control over the most powerful AI models, the resulting power imbalance has no obvious remedy. A government that holds the most capable AI systems without contractual constraint on their deployment holds a technological monopoly with no civilian counterweight. Journalists, civil society organizations, opposition political movements, and private individuals would have no comparable tool—and no legal basis to demand one.
The question is not whether this administration would use such power responsibly. The question is whether any government should hold it without constraint. Power without accountability does not self-correct; it compounds. A weaponized AI monopoly would only accelerate that accumulation.
Anthropic’s resistance—partial, commercially pressured, and already compromised on the training policy question—is currently the most visible line being held against this logic. That a private company has become the primary institutional barrier between the U.S. government and an unrestricted AI surveillance capability is itself a failure of democratic governance. Congress has not legislated. Courts have not been tested. The public debate is still catching up to the technology.
What happens in the coming days will set a precedent. Weaponized AI, once normalized in military and law enforcement practice, could harden quickly into permanent administrative fact. The question being settled is not which company keeps its Pentagon contract. It is who controls the most consequential technology of our era, under what conditions, and with accountability to whom.
Those are not abstract questions. They are the architecture of the society weaponized AI is building right now.
———
Sources
Al Jazeera: “Anthropic vs the Pentagon: Why AI firm is taking on Trump administration” – https://www.aljazeera.com/news/2026/2/25/anthropic-vs-the-pentagon-why-ai-firm-is-taking-on-trump-administration
NBC News: “Anthropic offered Pentagon the ability to use AI systems for missile defense” – https://www.nbcnews.com/tech/security/anthropic-pentagon-us-military-can-use-ai-missile-defense-hegseth-rcna260534
NPR: “Hegseth threatens to blacklist Anthropic over ‘woke AI’ concerns” – https://www.npr.org/2026/02/24/nx-s1-5725327/pentagon-anthropic-hegseth-safety
Lawfare: “What the Defense Production Act Can and Can’t Do to Anthropic” – https://www.lawfaremedia.org/article/what-the-defense-production-act-can-and-can-t-do-to-anthropic
Axios: “Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards” – https://www.axios.com/2026/02/24/anthropic-pentagon-claude-hegseth-dario
CNN Business: “Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon” – https://edition.cnn.com/2026/02/25/tech/anthropic-safety-policy-change
CBS News: “Pentagon officials sent Anthropic best and final offer for military use of its AI amid dispute” – https://www.cbsnews.com/news/pentagon-anthropic-offer-ai-unrestricted-military-use-sources/
