

The specific article’s framing pisses me off…
Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.
As to who picked a fight with whom: the DoD wanted to change the terms of their contract, and Anthropic apparently compromised on every term except mass surveillance of Americans (fuck the rest of the world I guess) and fully autonomous weapons (because a human clicking “yes to confirm” makes slop-bot powered drones so much better). This wasn’t good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public, tweets first. So the article framing it as Anthropic “picking a fight” is a bullshit framing. I mean, they did kind of bring it on themselves by hyping up their slop machine like it was a sci-fi AGI, but they didn’t start the fight.
For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.
So they found a quote about contracts, but a Supply Chain Risk determination isn’t just the DoD deciding on contracts; it is a specific power with specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court’s composition) the court actually considers the requirements set out in that legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the Supreme Court has all too often been willing to simply defer to the executive branch’s judgement, even when the process behind that judgement was “Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on twitter, and the departments underneath them rushed to retroactively invent a saner rationalization.” If the DoD had simply ended the contract (without all the public threats of a Supply Chain Risk determination or invoking the Defense Production Act), Anthropic wouldn’t have been in a position to sue, and this drama wouldn’t have been nearly as publicized in the first place.
But the lawsuit itself takes a dramatically different tone.
Yeah, because one set of language is a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made, and the other is making a court case about the actual law.
If the DoD accidentally pops the AI bubble by triggering a cascade when Anthropic runs into issues; then later loses the court case in a humiliating enough way; then loses a civil case, with the money going to pay the debts owed in Anthropic’s bankruptcy proceedings; and the American public blames all of them for the following economic depression (without letting any one shift the blame to the others): the Trump administration, the Republican party, the parts of the Democratic party that acted as pathetic enablers, and the tech CEOs… I would count that as a relative win?