Free the Panamanian Golden Frog!
Plus a lot of catching up
Good Morning:
Friday on #DogShirtTV, the estimable Mike Feinberg and I discussed the impacts of the ongoing FBI purge, and the estimable Albert Craig—disguised once more as Eve Gaumond—talked AI misinformation and how to fool large language models about how many hotdogs you eat:
Sunday evening, the estimable Andrew Steele led a MARA book club discussion of The Euthyphro and The Apology. It’s available in full for paid subscribers here:
And yesterday on #DogShirtTV, the estimable Holly Berkley Fletcher, the estimable Mike Feinberg, and I discussed CIA coups, COINTELPRO, the Crusades, and other historical context to which people lend too much weight:
The Situation
(with the estimable Alan Rozenshtein)
The Situation on Thursday catalogued what certain federal judges are saying about the conduct of the Trump administration and the lawyers who represent it.
Today, let’s talk about the Pentagon and Anthropic.
AI companies worth gazillions of dollars aren’t exactly the victims of administration lawlessness most likely to garner public sympathy. The AI that is coming for your job on its way to posing a catastrophic risk of ending the human race on Earth—and the tech bros and gals who are making billions of dollars by building such products—are figures of public fear, and sometimes loathing and resentment, more than of admiration and intuitive identification.
So there may be a tendency to regard the ongoing attempts by Secretary of Defense Pete Hegseth to bully Anthropic into removing all restrictions on its AI product Claude for Department of Defense use as a battle between powerful malign forces in which the anti-authoritarian member of the general public has no dog.
And to be sure, Anthropic is no powerless victim like those scooped up on the streets by masked ICE agents, held in detention, or deported summarily under the Alien Enemies Act. The battle between the AI company and the Pentagon is, indeed, a clash of the titans.
That said, the battle between Hegseth and Anthropic is not one from which the public should turn away in disgusted neutrality. The attack on Anthropic is no different from the attack on Harvard University, the attack on any number of law firms, or the attack on National Public Radio. It is, to put the matter simply, a retaliation against a private actor for asserting its rights—in this case rights under a contract the federal government signed that protects matters of conscience important to executives of the company—aimed at destroying an entity that has displeased the administration. It is designed both to target and punish Anthropic for not submitting and, by doing so, to intimidate the other frontier AI labs into a more accommodating posture.
Whether this effort succeeds on either front remains to be seen. What is clear already is that, like the attacks on other universities, law firms, and others, the attack on Anthropic is poisonous conduct in a society that purports to be governed by anything like a rule of law.
The law that governs this situation is not all that complicated. One of us summarized it in detail earlier today. Without repeating that analysis here, let’s just say that the government can’t simply label an American company a risk to Defense Department—or wider government—supply chains for a product that same government actively wants to deploy because the company that makes it won’t give the government its preferred terms of deployment. And the government really can’t impose a secondary boycott on that product, forbidding business with any company that itself does business with the blacklisted entity. The statutes simply don’t give the government the power to engage in these sorts of extortionate activities, and when Anthropic litigates the matter, expect it to prevail.
What the government can do, merely by dint of being the government, is scare the crap out of investors and enterprise clients of a company like Anthropic. Merely having the president and the secretary of defense—of course fashioning himself the “secretary of war”—announce that they are blacklisting Anthropic and anyone who does business with it creates doubt about the company and its viability. This is, after all, still a young company. And it’s a company going up against major players in a ferociously competitive environment: Google, OpenAI, Meta, and Elon Musk’s xAI. An actor as powerful as the United States federal government doesn’t need much of a legal leg to stand on to raise doubts about the company among investors and the enterprise clients who make up the core of Anthropic’s business.
And it doesn’t need to have a legal leg to stand on to send a loud message to Anthropic’s competitors.
That message, at least, is clearly being heard.
In response to the dispute with Anthropic, xAI was quick to agree that its product, Grok, would have no restrictions—as the government has demanded.
OpenAI’s response was more uncertain. The day after Anthropic’s designation, OpenAI announced its own classified deployment deal with the Pentagon and publicly claimed three red lines that mirror Anthropic’s: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions.
But the actual contract language OpenAI published tells a different story. The restrictions on autonomous weapons apply only where “law, regulation, or Department policy requires human control”—meaning that the operative safeguard is a Defense Department directive that Hegseth can rewrite at will. The surveillance restriction prohibits only “unconstrained monitoring” of “private information”—leaving plenty of room for slightly constrained surveillance of private information, or unconstrained surveillance of information the government deems public.
In other words, OpenAI’s “red lines” track whatever the government decides the law and its own policies already require. That is not a constraint on the Pentagon; it is a restatement of the status quo with better PR. As LASST, a legal advocacy group focused on AI, put it, the contract language “does not purport to prohibit the government from any uses beyond what is already prohibited by law.”
The sort of extortionate relationship the administration is cultivating here between itself and private institutions is toxic stuff in a democratic society. It’s toxic if the institution is a university. It’s toxic if the institution is a law firm. And it’s toxic if the institution is a business with clients and investors.
And just as it was important that law firms not cave and instead challenge the administration’s executive orders, just as it was important that Harvard University took the administration to court, and just as it was important that NPR went to court, it is important that Anthropic not capitulate.
And there is reason for optimism: The administration’s legal position in these fights is often far weaker than its bluster suggests. Indeed, only today, the Wall Street Journal reported that the Justice Department plans to withdraw its appeals defending the punitive executive orders against law firms—a reminder that this administration frequently backs down when its targets actually fight back in court. Anthropic’s legal position is no less powerful than that of the law firms.
We are not AI engineers, much less are we businesspeople who have ever been responsible for managing either the client-side or the investor-side of a business like Anthropic. That said, it is fair to observe that the law firms that fought the administration seem to be doing okay. None has been obviously denuded of its client base. And the courts have been effective in protecting the firms from extortionate predation by the administration.
There are, of course, many more law firms and universities than there are frontier AI labs. So there are more opportunities for some firms and schools to capitulate and still leave others to fight.
In the case of AI labs, there are only a small number of total players, and there is only one—Anthropic—that has centered its identity on standing for an ethical approach to inventing god-machines that might just mean the end of humanity. If Anthropic doesn’t fight, in other words, it’s completely unclear who will.
It thus seems of no small importance that the administration not get away with a frankly lawless assertion of power to force the company into designing the future the way Pete Hegseth prefers—important for the future of AI, important for the future of the relationship between the administration and big tech, and important for the notion that the law meaningfully constrains presidential actions.
You might not like Anthropic, but as Donald Rumsfeld might have put it, you go to war with the plaintiffs you have, not the plaintiffs you wish you had.
The Situation continues tomorrow.
Recently On Lawfare
Compiled by the estimable Marissa Wang
Can State Law Remedy Constitutional Violations by Federal Officers?
Harrison Stark argues that “converse 1983” statutes advanced by states to authorize tort damages suits against federal officials who violate constitutional rights have stronger legal footing than critics may assume, grounded in statutory text and jurisprudential history.
The Constitution’s federalist structure envisions an active role for states in redressing the constitutional violations of federal actors. As Alexander Hamilton explained in the Federalist Papers, governments may not reliably right the wrongs of their own officials, and the Constitution’s system of dual sovereignty exists so that “if [the people’s] rights are invaded by either [level of government], they can make use of the other as the instrument of redress.” State torts were once the primary way for individuals to recover for injuries caused by federal officials. Today’s converse 1983 proposals carry forward this rich tradition. Although there are open legal questions, states possess powerful arguments that these laws are neither barred by the Supremacy Clause nor federally preempted.
Is Claude Too Woke For War?
In the latest edition of the Seriously Risky Business cybersecurity newsletter, Tom Uren unpacks the ongoing feud between the Pentagon and Anthropic on military artificial intelligence (AI) use restrictions, potential complications in the U.S.’s strategy against Volt Typhoon, and more.
So far, Claude is the only model to have been approved for Defense Department classified work, although the Pentagon this week negotiated a deal for xAI’s Grok. Regarding Claude, one department official told Axios, “[W]e need them and we need them now,” because, “they are that good.” If Anthropic doesn’t cave, Hegseth has reportedly threatened to either force the company to remove its limits by invoking the Defense Production Act, or declare Anthropic a supply chain risk and freeze it out of the department’s supply chain. It’s like an abusive relationship: We must have you or no one can.
“Information Looking for People”
Paul M. Barrett reviews Emily Baker-White’s new book, “Every Screen on the Planet: The War Over TikTok,” which offers an account of TikTok’s rise and abuse of user data. Barrett explains that, while the book documents censorship, data access, and corporate deception, it downplays their significance and is unwarrantedly dismissive of the dangers of TikTok’s powerful algorithm.
“Every Screen on the Planet” is a useful but odd book. Thoroughly reported, it chronicles the hypocrisy and lies that seem endemic in the tech industry and which I’ve written about extensively for Lawfare and other outlets. But in spite of the sordid record she has assembled, Baker-White is perplexingly sympathetic to the interrelated social media companies she describes and to most of the entrepreneurs and executives who populate her pages. The author does not attempt to reconcile the tension between her troubling factual findings and her mild reaction to them.
Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System
Michael Endrias and Alan Z. Rozenshtein argue that the Department of Defense’s decision to designate Anthropic as a supply chain risk to bar its participation in federal contracts is legally unsustainable because it likely exceeds statutory authority and violates both due process and the First Amendment.
Anthropic has said it will sue, and it has strong legal arguments on multiple independent grounds. Every layer of the government’s position has serious problems, and any one of them could independently be fatal. Together, they make the government’s litigation position close to untenable.
The legal problems are so glaring, in fact, that a cynical possibility suggests itself: The administration knows this won’t survive judicial review and is doing it anyway, so that when they inevitably lose, they can still claim to have gone hard against Anthropic. This is designation as political theater: a show of force that was never meant to stick.
Congress Enters the Chip Wars
Joe Khawam unpacks the provisions of the bipartisan AI OVERWATCH Act, what it means for Congress’s role in artificial intelligence (AI) technology export policy, and how the act balances national security and commercial interests.
Few corners of American artificial intelligence (AI) policy have grown stranger over the past year than exports of advanced semiconductors. For years, a bipartisan consensus held that restricting China’s access to cutting-edge AI chips was essential to maintaining America’s technological edge in the geopolitical competition that may define this century. The Trump administration has upended that consensus, embracing a new theory that American interests are better served by selling chips to China—capturing billions in revenue for American chipmakers and a 25 percent tariff for the U.S. Treasury—than by ceding the market to domestic Chinese alternatives. Congress has now responded with a bipartisan effort to impose guardrails on the administration’s new policy.
Ethiopia’s Troubled Peace
In the latest edition of Lawfare’s Foreign Policy Essay series, Hilary Matfess explains how Ethiopia’s National Election Board’s efforts to recognize a new political party—while also barring the longstanding Tigray People’s Liberation Front from participating in elections—contribute to the mounting political insecurity in the Tigray region and may trigger renewed conflict.
The board’s decision thus amounts to a quiet coronation of Simret and leaves the TPLF without options for peaceful political contestation in the upcoming elections, currently scheduled for June 1. The NEBE’s decision contributes to mounting insecurity in Tigray and will almost certainly add fuel to the simmering conflict between the Tigray Defense Forces (aligned with the TPLF) and the Tigray Peace Forces (allegedly aligned with Simret). It is merely the latest in a series of developments threatening the incomplete peace in northern Ethiopia.
Podcasts
On Lawfare Daily, Lee Kovarsky joins Roger Parloff to discuss patronage pardons, or pardons that a president issues to reward and possibly even induce criminality by their political supporters.
On Scaling Laws, Alan Z. Rozenshtein and Kevin Frazier break down the newest ultimatum, due Feb. 27 at 5:01 pm, from the Defense Department to Anthropic to permit its unrestricted use of Claude AI.
Videos
At 4 pm ET on Feb. 27, I sat down with Scott R. Anderson, Anna Bower, Eric Columbus, Molly Roberts, Troy Edwards, and Parloff to unpack the legal challenges to the Trump administration’s cancellation of foreign aid, the Feb. 26 hearing in the Kilmar Abrego Garcia criminal case, a district judge’s finding that DHS’s third-country removals were unlawful, and more.
At 4:30 pm ET on Feb. 26, Bower joined me to unpack the Feb. 26 hearing in the criminal case against Kilmar Abrego Garcia.
On March 1 at 9 am ET, I sat down with Lawfare Senior Editor Scott R. Anderson and Lawfare Public Service Fellows Ariane Tabatabai and Tory Edwards to discuss the U.S. and Israeli strikes on Iran, Iran's response, and what may happen next.
At 3:30 pm ET on Mar. 2, Rozenshtein joined me to discuss the Defense Department’s designation of Anthropic as a supply chain risk, the implications of this decision, and what Anthropic may do to respond.
Today’s #BeastOfTheDay is the Panamanian golden frog, seen here being liberated from 17 years of captivity:
It’s been 17 years since the bright yellow Panamanian golden frog (Atelopus zeteki) hopped through its native habitat. But after nearly two decades of hard work, conservationists are finally reintroducing a new generation of the tiny, brilliantly colored amphibians back into the country’s tropical ecosystem.
It wasn’t that long ago that golden frogs were staring down almost certain extinction. The saga began in the late 1980s, when an invasive fungus called Batrachochytrium dendrobatidis (Bd) arrived in lower Central America…
While Bd isn’t a problem for humans, it’s devastating to many amphibians like the golden frog. After infecting a host’s skin, the fungus disrupts the body’s electrolytes through a disease called chytridiomycosis. Before long, a frog’s salt and water imbalances result in heart failure and death. The chytridiomycosis crisis finally reached Panama’s last concentrated population of golden frogs at El Valle de Anton in 2004. By 2009, the animals had completely disappeared from the region.
But the species wasn’t extinct just yet. Wildlife biologists at the Smithsonian-affiliated Panama Amphibian Rescue and Conservation Project (PARC) worked for years to continue breeding both golden frogs and related species in controlled facilities. Only recently were lab populations stable enough to move on to the next stage…
The process is a harsh one. Chytridiomycosis still exists in multiple regions around Panama, and remains a problem for the frogs. Researchers estimate about 70 of the 100 golden frogs died from the disease during the initial, 12-week soft release. Fortunately, many of the surviving frogs were eventually rewilded, and the new data allows conservationists to better understand how the disease works.
Congratulations to today’s Beast on its survival as a species! In honor of today’s Beast, don’t let the bastards grind you down.