
A federal appeals court has denied Anthropic’s bid to halt the Pentagon’s blacklisting of the company, handing the Trump administration a win on its national security priorities as the legal battle over military AI continues.
Story Highlights
- A federal appeals court in D.C. rejected Anthropic’s request for a stay, allowing the Pentagon’s supply chain risk designation to stand while litigation continues.
- The Pentagon blacklisted the U.S. AI firm Anthropic after it refused to remove safety guardrails from Claude AI for military use.
- Conflicting court rulings create uncertainty: a San Francisco judge blocked the designation, but the D.C. court has let it stand for now.
- Acting Attorney General Todd Blanche hailed the decision as a “resounding victory for military readiness.”
- Case pits AI ethics against national defense needs, raising questions about government overreach and First Amendment protections.
Pentagon’s Blacklisting Move
On March 3, 2026, Defense Secretary Pete Hegseth designated Anthropic a national security supply chain risk under Section 3252. The move followed Anthropic’s refusal to strip safety restrictions from its Claude AI model, a change the Pentagon had demanded for expanded military operations. President Trump labeled Anthropic a “RADICAL LEFT WOKE COMPANY” and ordered federal agencies to stop using its technology. Claude had proven valuable in operations such as the capture of Nicolás Maduro and Iran-related actions, but its restrictions limited full deployment. The designation marked the first use of the law against a U.S. firm with no foreign ties.
Court Battles Unfold
Anthropic filed suit on March 10, 2026, challenging the designation as a violation of its First and Fifth Amendment rights. On March 27, U.S. District Judge Rita Lin in San Francisco issued a preliminary injunction blocking the blacklisting, criticizing it as an “Orwellian notion” that punishes disagreement with the government. Legal experts, including five national security specialists, argued the Pentagon overstepped by applying a foreign-threat law to a domestic policy dispute. The government continued using Claude in some capacities during the transition period.
Appeals Court Sides with Defense
On April 8, 2026, a three-judge panel of the U.S. Court of Appeals for the D.C. Circuit denied Anthropic’s emergency stay request. Judges Henderson, Katsas, and Rao ruled that the equitable balance favored the government, putting the security of military AI in active conflicts ahead of the company’s financial harms. The decision conflicts with the California ruling, leaving defense contractors with mixed signals on Claude usage. Oral arguments are scheduled for May 19. Anthropic remains barred from new Pentagon contracts but can still work with other agencies.
AI Ethics vs. National Defense
The ruling underscores the tension between AI safety principles and national defense imperatives. Conservatives value strong military readiness, yet the case highlights concerns over federal overreach into private innovation. Both sides express frustration with government actions that prioritize power over constitutional limits, echoing broader distrust of elite institutions. OpenAI, a competitor, has secured a Pentagon deal, illustrating a market shift toward compliant firms. The outcome could redefine AI governance and corporate free speech protections.
Ongoing Implications for Stakeholders
Anthropic faces billions in potential revenue loss and ongoing operational uncertainty, while the Pentagon transitions to alternatives within six months. Defense contractors must certify that they do not use Claude in military work, complicating supply chains. Military personnel risk delays in AI-supported missions, and the public continues to debate surveillance risks and the reliability of autonomous weapons. The clash shows how federal designations can pressure ethical AI development, favoring contracts over safety, a dynamic that frustrates citizens on both political sides who see government serving elites rather than everyday Americans pursuing the dream through hard work.
Sources:
Axios: Anthropic Pentagon supply chain risk Claude
The Daily Record: Anthropic Pentagon blacklisting supply chain risk
Democracy Now: Federal court blocks Pentagon’s blacklisting of Anthropic over AI safety guardrails