Dog Shirt Daily

A Bumper Crop of Headline Questions

All from the New York Times

Benjamin Wittes and EJ Wittes
Mar 13, 2026

Good Evening:

Generator funded by Project Batteries being installed at Mamacita in Kyiv today.

These pictures were sent to me this morning by the estimable Anastasiia Lapatina. They show the installation of the generator you all purchased for Mamacita so it could bring its child education center and women’s spa back into business. For those for whom that is gibberish, here’s a refresher:


Tuesday on #DogShirtTV, the estimable Holly Berkley Fletcher and I discussed recent experiences with Anthropic’s Claude model:

Yesterday on #DogShirtTV, the estimable Mike Feinberg and I discussed philosophical and policy approaches to AI:


The Situation

The Situation on Friday compared the president’s Iran policy to BASE jumping.

Yesterday, the AI frontier lab Anthropic sued the Department of Defense and other federal agencies over the Trump administration’s designation of its products as a “supply chain risk.”

I have some friendly advice for the company: Your red lines need some refinement.

Let me be clear, I support the company’s lawsuit.

The government’s action against Anthropic is a gross abuse—no less so than the actions taken against law firms and universities. It is simple and overt retaliation for the company’s placing use restrictions on its Claude product.

As Anthropic describes those restrictions in its complaint: “Anthropic’s Usage Policy has always conveyed its view that Claude should not be used for two specific applications: (1) lethal autonomous warfare and (2) surveillance of Americans en masse.”

The crux of the dispute is that the Pentagon demanded that Claude be available for all lawful uses. And, as Anthropic summarizes, “[w]hen Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to ‘IMMEDIATELY CEASE all use of Anthropic’s technology’—even though the Department of War…had previously agreed to those same conditions.”

In my view, it should be a simple case. A company is entitled to draw lines about the uses for which it does and does not wish to sell its product. If the government doesn’t like those lines, it can use a different product. It can’t move to destroy the company because it doesn’t like those lines.

I also, for the record, like the fact that Anthropic is trying to draw lines—which is more than one can say for any of its competitors. The AI world is full of impossibly difficult questions, difficult on legal, moral, ethical, practical, and philosophical grounds, and the AI industry is filled with companies that are all too willing to sidestep those questions entirely in their pursuit of innovation and progress at any cost. It’s not a bad thing that Anthropic is imposing use restrictions on its products.

That said, I’m not honestly sure that the company is drawing the right lines here, or even that it’s drawing lines whose meaning can be easily discerned.

The term “lethal autonomous warfare” seems clear enough on the surface. When Anthropic objects to Claude being “used for” this work, it seems to be objecting to Claude’s use in killer robots.

Scratch beneath the surface even a little, however, and things get more complicated.

The trouble begins with the fact that, as the UN Office for Disarmament Affairs puts it bluntly: “At present, no commonly agreed definition of Lethal Autonomous Weapon Systems (LAWS) exists.” What Anthropic means by “lethal autonomous warfare” is defined with only modest precision—at least in the company’s public statements. In its complaint yesterday, the most it says is this: “By its terms, the Policy has always prohibited the use of Anthropic’s services for lethal autonomous warfare without human oversight.”

The company’s CEO, Dario Amodei, has elaborated a little bit in other public comments:

Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.

In this statement, Amodei seems to be saying that the problem is limited to Claude powering fully autonomous weapons systems based on current technology, and he defines full autonomy as systems that “take humans out of the loop entirely and automate selecting and engaging targets.”

But Amodei here is also acknowledging that such fully autonomous systems “may prove critical for our national defense” in the future and that research on such systems is therefore desirable. In other words, his objection is based on current technology and its limitations only. It is not a point of principle. And he actively wants to work on R&D towards full autonomy.

But that raises a different issue. The military actually does not currently use fully autonomous weapons without humans in the loop, nor is it clear that such weapons would pass legal review under the laws of armed conflict—for some of the same reasons Amodei articulates. So if Anthropic’s position is that research is okay and the problem is limited to actually powering deployed systems based on current technology, and the military doesn’t have any such systems it wants to power with Claude, is the dispute here based on a null set of real-world cases?

At least for the moment, I suspect the answer to this question is yes—that this issue is largely or entirely hypothetical. That said, Anthropic would benefit from a clearer public definition of the autonomy it will and won’t support, particularly with respect to defensive weapons. Specifically, Amodei defends the partial autonomy of weapons being used now in Ukraine. Would Anthropic really object to deploying Claude on more-fully autonomous drone and anti-missile defenses? These are not “lethal” in the sense that they target machines, not humans, but the falling debris from intercepted weapons kills people regularly.

Anthropic’s objection to “mass surveillance” of Americans is a bigger problem. Unlike autonomous weapons, this is not a hypothetical issue. There are presumably use cases right now in which the Defense Department wants to acquire or process data in ways it cannot under Anthropic’s Usage Policy.

But what is “mass surveillance”? It’s not a term of art in American surveillance law. In fact, it doesn’t map onto American law at all, even though civil liberties activists use it constantly.

Some mass surveillance is perfectly legal—like, for example, installing cameras outside the Pentagon and filming everyone who walks up to the door and matching their faces against images of known terrorists. Using satellites or airplanes to film automobile traffic is also mass surveillance. But I know of no law that forbids it.

By contrast, some forms of mass surveillance would be wildly illegal—for example, the targeting in bulk of American communications without warrants directed at the individual subjects.

So my first question is what Anthropic even means when it says it doesn’t want Claude engaged in mass surveillance of Americans. Does it mean it doesn’t want Claude engaged in any non-individualized surveillance—including, say, surveillance of military bases or other sensitive sites? Surely that is overbroad. Does it mean it doesn’t want Claude engaged in bulk acquisition of communications data involving Americans?

Another key question: Is the objection here limited to collection—in other words, the actual acquisition of data obtained about Americans based on any kind of non-individualized authority? Or is Anthropic also objecting to using Claude to analyze data that may have been obtained by means of non-individualized surveillance? If the latter, be careful. What about large datasets of, say, COVID vaccination patterns or other disease surveillance? Or, particularly pertinent to military applications, a large dataset of where Americans live in an area one is thinking about bombing?

One possibility here would be to define “mass surveillance” with reference to some existing category of surveillance law.

The most obvious approach would be to bar Claude from participating in unlawful surveillance (leaving aside the question of whether Claude should be allowed to analyze the fruits of poisonous trees). But that appears redundant of the Pentagon’s position. The Department of Defense, after all, is demanding access to Claude for all lawful uses. So Anthropic is presumably aiming to restrict some lawful uses that constitute mass surveillance. Assuming the folks who wrote its Usage Policy know their way around American surveillance law, there must thus be a category of lawful mass surveillance of Americans that the Usage Policy restricts.

A second possibility has a certain intuitive appeal: Restrict Claude’s surveillance of communications to statutorily authorized collection. This would allow Claude’s participation in, for example, the FISA 702 program but disallow its participation in surveillance under Executive Order 12333.

This approach has the benefit of a certain logical coherence: FISA 702 is targeted surveillance, though vast in scale. It’s not technically “mass surveillance,” and it has been specifically and repeatedly authorized by Congress. It also specifically disallows collection targeted at Americans or targeted at people believed to be in the United States. So by tying “mass surveillance” to statutory law, Anthropic would be effectively taking the position that Claude can participate in surveillance programs that are: (1) specifically contemplated and approved by Congress; (2) targeted at individual selectors (no matter how many of them); and (3) targeted at individual selectors who are both overseas and not Americans.

It’s clever, but it doesn’t quite work. Collection under 12333 against Americans is generally restricted too, after all—albeit under different rules. And FISA, in any event, is predominantly a statute limited to communications. But there are all kinds of other mass surveillance. What about satellite imagery? What about bulk acquisition of purchase records and banking transactions? What about ubiquitous cameras? What about the bulk collection of public records?

At some level, all collection and processing of very large datasets about humans involves mass surveillance. So the principle of barring Claude from mass surveillance per se is necessarily an overinclusive one. Conversely, limiting mass surveillance to mass surveillance of communications, as the above approach would do, is also an underinclusive principle. A giant DNA database of Americans overseas or a collection of medical records, for example, would be horribly intrusive but wouldn’t violate 702.

Anthropic isn’t asking for my advice, but I would suggest that the concept of mass surveillance from which Claude is barred requires refinement. Specifically, I would clarify the concept in the following directions.

First, surveillance for this purpose is the acquisition of material—for example, in offensive cyber operations—not the processing or analysis of that material. Claude should not operate under an exclusionary rule that prohibits it from thinking about material acquired by means in which it could not itself participate. In other words, Anthropic’s goal here should be to keep Claude away from spying on Americans, not to regulate the government’s internal handling of data it has lawfully acquired.

Second, surveillance for this purpose should be understood as covert surveillance only. I don’t think Anthropic wants or means to wall Claude off from mass surveillance of disease spread or COVID cases. And I don’t think it makes sense to have a policy that by its terms would prevent the study of macro-economic data. A simple rubric here is that any collection the government acknowledges doing and does in the open for purposes other than intelligence, law enforcement, or defense is presumptively outside of the “mass surveillance” walled off by the policy.

If this sounds like weakening Anthropic’s red line on mass surveillance, let me add a third point that would strengthen it: The bar against mass surveillance should not be limited to communications surveillance. If the Defense Department is using satellites in a fashion that constitutes covert mass spying, Anthropic might perfectly reasonably apply the policy there too. In other words, to make sense, the policy should apply anywhere the Defense Department is covertly acquiring or stealing large datasets on Americans, whether it is doing so legally or not.

The policy, in short, should cover the use of Claude for covert intelligence gathering, conducted without a warrant or other legal process, in circumstances in which the collection is intended to sweep up bulk data on some large number of American nationals.

Now, I know what you’re thinking: You’re thinking, wait a second, this won’t stop ICE from using Claude to locate and round up migrants. That’s probably right, but no contract with the military is going to prevent that. The solution to that problem is for Anthropic not to do business with ICE and not to make Claude available for immigration enforcement at all. According to press reports, Anthropic does not have contracts with ICE.

Anthropic’s position here is a righteous one. It is not, however, a particularly clear one. It would benefit from greater clarity. That might mean narrowing it in certain ways. But when this matter goes to court, Anthropic is going to have to explain to the courts what its position actually means. And it’s going to need to be able to do so with much greater specificity than it has done in public so far.

The Situation continues tomorrow.


Recently On Lawfare

Compiled by the estimable Marissa Wang

Military AI Policy by Contract: The Limits of Procurement as Governance

Jessica Tillipman argues that the U.S. military’s increasing use of procurement contracts to govern its AI use is inadequate and leaves unanswered questions about domestic surveillance, autonomous weapons, and democratic accountability.

Domestic surveillance, lethal targeting, and intelligence oversight are increasingly being addressed through contract carve-outs tied to legal authorities that the government itself interprets. Those carve-outs can be amended, narrowed, or reframed, and enforcement is largely post hoc. Existing legal authorities, including the Fourth Amendment, FISA, and executive orders governing intelligence activity, apply to government conduct independent of any contract. But when the application of those authorities to new AI capabilities is increasingly addressed through bilateral procurement negotiations rather than public legal and policymaking processes, the process itself fails the public interest.

Narrative Integrity Risk: The Next Frontier in Financial Stability

Chris Beall, Chris Blask, and Jen Rosiere Reynolds warn that AI-driven market manipulation is emerging as a major threat to financial stability by weakening the market’s accuracy, authenticity, and resilience to misinformation.

Most firms still treat narrative manipulation as a communications hiccup rather than an adversarial threat. These are deliberate, adaptive attacks, capable of distorting valuations and eroding reputations. Recent reports from Marsh McLennan, Swiss Re, and the World Economic Forum have already highlighted misinformation as a top global risk of instability driven by AI-accelerated narratives. The market consequence is clear: Firms that understand and anticipate narrative manipulation will outperform those that wait.

Constitutional Duels in the Court’s Rejection of Trump’s Tariffs

Michael R. Dreeben analyzes the Supreme Court’s decision striking down tariffs imposed under the International Emergency Economic Powers Act, highlighting unresolved questions about delegation, emergency powers, foreign affairs authority, and a broader dispute over how courts should protect Congress’s policymaking role against expanding presidential power.

The Supreme Court’s groundbreaking decision in Learning Resources, Inc. v. Trump had the immediate effect of removing tariffs under the International Emergency Economic Powers Act (IEEPA) as a means for President Trump to impose his will on the world. Now, new tariffing frameworks threaten to roil trade further as Trump turns to alternative legal tools in his bid to exert control over global affairs. The economic aftershocks will be felt for many months. But the time frame for understanding the jurisprudential implications of the Supreme Court’s fractured decision will likely be measured in years. This is a preliminary effort to unravel some of the deeper themes and fault lines that the Court will grapple with in the future.

Operation Epic Fury Puts Congress and the Constitution to the Test

Geoffrey S. Corn and Claire O. Finkelstein argue that the Senate’s failure to restrict President Trump’s authority after the strikes against Iran runs afoul of the War Powers Resolution, which requires express authorization from Congress for initiating hostilities absent an attack on the U.S.

The president is not vested with unilateral authority to thrust the nation into war. The Constitution demands that Congress align with any such decision. The Republican majority in Congress appears satisfied that Trump has acted constitutionally and seems content with implicit, rather than explicit, support for this latest war. That appearance of congressional acquiescence allows the Trump administration to argue that the conflict is on solid constitutional turf. Yet the public’s confusion over the president’s warmaking authority reveals that the WPR failed in its effort to ensure that Congress indicates its support for presidential warmaking authority expressly—or otherwise that it accepts the consequence of having its inaction treated as opposition.

Podcasts

On Tuesday’s Lawfare Daily, Peter Beck sits down with Seamus Hughes and Jacob Ware to discuss the FBI’s classification of nihilistic violent extremism as a category of terrorism, the online terror group “764,” and rising challenges to traditional counterterrorism approaches.

On Scaling Laws, Caleb Watney and Austin Carson join Kevin Frazier at the Ashby Workshops to discuss influential AI policy areas in the long-term, including topics such as research funding, state capacity, talent pipelines, meta-science, immigration, and congressional expertise.

On Wednesday’s Lawfare Daily, Anastasiia Lapatina sits down with Fabian Hoffman and Pavlo Litovkin to discuss what the U.S. and its allies can learn from Ukraine in rethinking air defense amidst the war with Iran.

Announcements

Lawfare is now accepting applications for our Summer 2026 internship! This is a critical role that supports Lawfare’s editorial team. Undergraduate students in their sophomore, junior, or senior year are encouraged to apply. Learn how to apply here.


It depends if you’re cold.

Some cabbages are very healthy. Other cabbages are less healthy. I think the headline writer here probably meant to ask, “How healthful is the cabbage to eat?” But if he or she was inquiring into the health of cabbages in general, I’m not sure I have a good answer.

I use steel wool.

Because a lot of women think their breasts are too big.

Dumb question, but why do people never ask this question about women who wear above-the-knee skirts in winter?

The answer is the same: Because they care about how they look more than they mind having cold legs—if they mind having cold legs at all.

Dude, it’s only the beginning of March.

I reject the premise. It has literally been weeks since I have heard anyone say the word “tranche.” Everyone? What on Earth are we talking about here?

Yes. It works. You go in. You get a table. You order food. It comes. You pay. It’s a tried and true system.

Batteries and electric blankets for every Ukrainian family.

Don’t ask me. I don’t give a damn.

Because we burn a lot of it to do stuff.

Women started wearing less clothing.

Perfect justice: Not a thing.

A lot of coders didn’t like their jobs very much.

Come on. You don’t need to read an article on this. We all know the answer: Pain is a red flag when it hurts in a way that doesn’t feel good after exercise. We all know the difference between being sore from working hard and being in agony. We all know the difference between soreness and injury pain. Puhlease.

Yes. Your shoe in particular is about to get a hole. Your left shoe. It is fate. Don’t fight it.


Today’s #BeastOfTheDay is the pileated woodpecker, whom we welcomed back to #DogShirtTV yesterday. Today’s Beast is a distinguished guest each spring as it attempts to drill holes in the sides of my house, and its voice is an essential part of the Greek Chorus. Here’s a watercolor of today’s Beast from 1585:


In honor of today’s Beast, join the Greek Chorus with drums. Or maybe power tools.
