The disconnect here for me is, I assume the DoW and Anthropic signed a contract at some point and that contract most likely stipulated that these are the things they can do and these are the things they can't do.
I would assume the original terms the DoW is now railing against were in those original contracts that they signed. In that case it looks like the DoW is acting in bad faith here, they signed the original contact and agreed to those terms, then they went back and said no, you need to remove those safeguards to which Anthropic is (rightly so) saying no.
Am I missing something here?
EDIT: Re-reading Dario's post[1] from this morning I'm not missing anything. Those use cases were never part of the original contacts:
> Two such use cases have never been included in our contracts with the Department of War
So yeah this seems pretty cut and dry. Dow signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated them as a supply chain risk as retaliation.
My final opinion on this is Dario and Anthropic is in the right and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And this doesn't even take into consideration the moral and ethical implications.
The administration's approach to contracts, agreements, treaties and so on could be summed up as 'I am altering the deal. Pray I do not alter it further.'
The basic problem in our polity is that we've collectively transferred the guilty pleasure of aligning a charismatic villain in fiction to doing the same in real life. The top echelons of our government are occupied by celebrities and influencers whose expertise is in performance rather than policy. For years now they've leaned into the aesthetics of being bad guys, performative cruelty, committing fictional atrocities, and so forth. Some MAGA influencers have even adopted the Imperial iconography from Star Wars as a means of differentiating themselves from liberal/democratic adoption of the 'rebel' iconography. So you have have influencers like conservative entrepreneur Alex Muse who styles his online presence as an Imperial stormtrooper. As Poe's law observes, at some point the ironic/sarcastic frame becomes obsolete and you get political proxies and members of the administration arguing for actual infringements of civil liberties, war crimes, violations of the Constitution and so on.
I think it's the other way around. They have always wanted to do those cruel things that have real victims. It took them many years of dedicated, coordinated efforts as they slowly inched many systems to align with their insane ideas. The villain branding is just that - branding. Many of them actually like the 'bad guys' in those stories, especially if those bad guys are portrayed as strong, uncompromising, militaristic, inhumane, and having simple, memorable iconography that instills fear - the more allusions to real life fascists, the better. But that enjoyment follows from their ideology and what they want to do in the world, not the other way around.
Ehh, Hamill's take on Israel is pretty middle of the road and diplomatic[1]: support for the people of Palestine and Israel while not at all supporting the governments of those places.
> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.
> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.
I just wish there was a stronger source on this. I am inclined to agree you and the source you cited, but unfortunately
> [1] This story requires some reading between the lines - the exact text of the contract isn’t available - but something like it is suggested by the way both sides have been presenting the negotiations.
I deal with far too many people who won't believe me without 10 bullet-proof sources but get very angry with me if I won't take their word without a source :(
> "Two such use cases have never been included in our contracts with the Department of War..."
While I agree with Anthropic's position on this regardless, the original contract wording does matter in terms of making either the government look even more unreasonable or Anthropic look a little less reasonable.
The issue is a subtle ambiguity in Dario's statement: "...have never been included in our contracts" because it leaves two possibilities: 1. those two conditions were explicitly mentioned and disallowed in the contract, or 2. they weren't in the contract itself - and are disallowed by Anthropic's Terms of Service and complying with the ToS is a condition in the contract (which would be typical).
If that's the case, then it matters if the ToS disallowed those two uses at the time the original contract was signed, or if the ToS was revised since signing. Anthropic is still 100% in the right if the ToS disallowed these uses at the time of signing and the ToS was an explicit condition of the contract, since contracts often loop in the ToS as a condition while not precluding the ToS being updated.
However, if the ToS was updated after contract signing and Anthropic added or expanded the wording of those two provisions, then the DoD, IMHO, has a tiny shred of justification to complain and stop using Anthropic. Of course, going much further and banning the entire US government (and contractors) from using Anthropic for any use, including all the ones where these two provisions don't matter - is egregiously punitive and shitty.
While the contract wording itself may be subject to NDA, it would be helpful if Anthropic's statements could be a bit more precise. For example, if Dario had said "have always been disallowed in our contracts" this ambiguity wouldn't exist.
It does not matter. If Anthropic had been precise in this narrow way, there would have been some other nitpick to raise.
You're trying desperately to find a way that things can be at least a little normal, and I really do get it. It would be great if such a way existed. But it doesn't. I recommend you take a social media break like I'm about to, take the time you need to mourn the era of normal politics, and come back with a full understanding that the US government is not pursuing normal policy objectives with bad decisions. They hate you and they hate me for not being on their side, and their primary goal is to ensure that we're as miserable as they can make us.
I'm in a weird spot where I do agree with your assessment of the core claim. But putting that aside, in the world where the DoW's claim _is_ correct -- I think you don't have any choice other than to designate them a supply chain risk.
Disregarding who is right or wrong for a moment, if the DoW are right (which I'm not personally inclined to believe, but we're ignoring that for the moment) -- how else can they avoid secondhand Claude poisoning?
Supposing they really want to use their software for things disallowed by Claude's (now or future) ToS, it seems like designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude (either indirectly as a wrapper or tertially through use of generated code etc)
> designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude
I agree that if the DoW claim is correct (and I doubt it is), then, sure, the DoW dropping Anthropic and precluding the DoW's suppliers from using Anthropic for any DoW work would be expected. However, the "supply chain risk" designation they are deploying goes far beyond that to block Anthropic use by any supplier to any part of the entire U.S. government for anything.
For example, no one at Crayola can use Anthropic for anything because Crayola sells crayons to the Education Dept. The DoW already has much less draconian ways to restrict what their direct suppliers use to build things for military applications. But instead of addressing the actual risk in a normal measured way, they are choosing to use a nuke against a grenade-sized problem. This "supply chain risk" designation is rarely used and has never been used against a U.S. company. It's used against Chinese or Russian companies when in cases where there's credible risk of sabotage or espionage. That's why that particular designation always blocks all products from an entire company for any application by any part of the U.S. Government, contractors and suppliers (which is why it's never been used against a U.S. company).
One positive thing I will say about this administration is that they have really drawn into focus the difference between de jure and de facto law.
My hope is that this gets us some real concern for things that have been defended with de facto arguments (i.e. privacy) going forward.
edit: Anthropic argues that your Crayola analogy is fundamentally incorrect.
> Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
> Anthropic argues that your Crayola analogy is fundamentally incorrect.
Yes, I just saw Dario's latest post with that more detailed info. My understanding was informed by news reporting in a couple different outlets but those reports may have been conflating the "supply chain risk" designation (under 10 USC 3252) with the net effect of statements from the pentagon and white house which go substantially further.
Even if it's not in the legal scope of 10 USC 3252, the administration has made clear they intend to ban Anthropic from use across the federal government. AFAICT doing that is probably within the discretionary remit of the executive branch, even though I believe it's unprecedented - to your point about de jure and de facto law.
To me, if there's a silver lining to all this, it's making a strong case for restricting executive branch power.
Edit to add: Per the Wall Street Journal's lead story (updated in the last hour): "The General Services Administration, which oversees federal procurement, said it is removing Anthropic from its product offerings to government agencies... Even absent the supply-chain risk designation, broadening the clash to include all federal agencies takes the Anthropic fight to a much larger scale than its spat with the Pentagon."
How would this risk be mitigated by signing a contract? Seems like “supply chain poisoning as treason” is probably not going to stopped by a piece of paper. You either trust anthropic or you don’t but the deal has nothing to do with it.
Isn't the point that they aren't entering into a contract with them, they are just ensuring that none of their still trusted suppliers repackage Anthropic without their knowledge?
I’m not sure, but I think you’re right. I was thinking about the logical implications of the. If they are a supply chain risk without a contract, how does the existence of a contract suddenly make them not a risk? Especially if the DoD strong arms them into a deal.
Because the act that the SCR designation would “protect” against is treason, so I don’t think people would care too much whether there’s a contract.
Also, Trump's own words complaining about being forced to stick to Anthropic's terms of service:
> The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.
In this case, do you really believe that we should trust an EA less than this administration? EA as bad people is a stereotype; corruption, fraud, and breaking the law is the standard MO for this administration.
(Or maybe it’s catchier to respond glibly with “never trust a child rapist and convicted felon.”)
In this case, the choice is between the two apples, so I’d pick the one less obviously rotten. Sadly that is the current administration that operates in pure lawlessness.
I think a big question mark here, is whether anything said on Anthropic's side if in the framing of "We have a thing going on that we are trying to communicate around where a canary notice if it existed would no longer be updated"
It isn't about commercial agreements, it's about patriotism. The national industry is supposed to submit to the military's wishes to the extent that they get compensated. Here it's a question or virtue.
The Pentagon feels it isn't Anthropic to set boundaries as to how their tech is used (for defense) since it can't force its will, then it bans doing business with them.
If anthropic is saying “you can use our models for anything other than domestic spying or autonomous weapons” and the pentagon replies “we will use other models then”, I'd say Anthropic are the patriots here...
I had the same thing happen to me when I posted about how unbridled capitalism requires external costs in the form of pollution and what not. I didn't make it clear that I thought it was a terrible truth.
Once the hive decides you're being serious without checking, they turn the down vote button into an I disagree with you button.
This is actually one of the reasons I left Reddit. I hate to see it here.
It likely helps to take in the cultural moment or context around the statements or the nature of the statements you're making. It's fine to state a fact but it's also helpful to make it clear whether you are saying "it is what it is " or "I wish things were different" or "I am doing X, Y, and Z to try and help and I recommend others do so". Jokes are an exception and I think misunderstandings are fine there. But it's unreasonable to think that on the Internet, people will "check to see if you are serious".
The comment was serious. It didn't feel the need to take a side.
The DoD declaration reflects a certain context, we had the patriotic act, a whistleblower exiled in Russia for defending the constitution, etc etc. We didn't need to wait a MAGA movement to be expecting such comment from the DoD.
If hackernews threads turn into mouthpieces for opinions then we have no use posting anything in here.
The comments are naively claiming commercial agreements make Anthropic right, as if contracts had more weight than the constitution.
I would rather call out a "virtuous signalling" entity in the valley simply standing for something aligned with civil liberties, and using it as a political stance in what nobody would deny is an unfortunate polarized political climate.
What to make of OpenAI then. Should I give my opinion that they took a falsely constitutional stance, or simply made for-profit move to land a juicy government contract, while making the public think they kept the same red lines as their main competitor?
Or just stick to the fact: The DoD will, as always, get away with its liberticide demands to get what it wants, because other big tech will fall inline.
I fully acknowledge that it doesn't take much courage to bully people anonymously on HN. I don't claim to have any deep well of courage in real life either - many of my friends were already radicalized against OpenAI for other reasons, I don't expect to face professional consequences for being angry about this, and I might not be so willing to go scorched earth if either of those weren't true. Just wanted to explain where the world is at and why people should expect to see further incivility about this.
What's your definition of "patriotism" and why do private companies need to be "patriotic"? How do you reconcile this with the Constitutional guarantees of freedom of speech, freedom of association, and so on?
The US isn't Iran, North Korea, or even China, as much as some people, including the US president, seem want to emulate those models.
No one cares if the Pentagon refuses to do business with Anthropic. But Hegseth has declared that effective immediately, no one else working with the DoD can either--which includes the companies hosting Anthropics models (Amazon, Microsoft, and Alphabet).
So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".
Which miiight impact the amount of inference the DoD would be able to get done in those six months.
> So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".
> Which miiight impact the amount of inference the DoD would be able to get done in those six months.
Which might not be by accident looking at the Truth Social posts which state "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
I would not be surprised to see this being used as an excuse to nationalize Anthropic.
To attempt to nationalize Anthropic. I'm sure there would be court cases filed almost immediately, restraining orders, months of cases and then appeals and then appeals of the appeals.
I think you were downvoted due to your use of "patriotism" (specifically without scare quotes) because that word is usually used with an intended positive connotation. So the reader gets the impression that you think that submitting to the DoD’s wishes is how things ought to be.
Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.
Imagine a _leaded_ pipe supplier not being allowed to tell the department of war they shouldn't use leaded pipes for drinking water! It's the job of the vendor to tell the customer appropriate usage.
Playing devil's advocate: if I did in fact grab one of my kitchen knives to defend myself against a violent intruder into my kitchen, I wouldn't expect to be banned from buying kitchen knives.
I'm not sure this is still a useful analogy, though...
And if you grabbed the knife and went on a violent spree, I'd absolutely expect the knife manufacturer to refuse to sell to you anymore.
The knife manufacturer isn't obligated to sell to you in either case, I'd expect them not to cut ties with you in the self defence scenario. But it is their choice.
1. Found out you used their knives to go murdering
2. Sells knives in a fashion where it's possible for them to prevent you from buying their knives (i.e. direct to consumer sales)
Would almost certainly not "be more than happy to continue to sell to you". Even if we ignore the fact that most people are simply against assisting in murders (which by itself is a sufficient justification in most companies), the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.
Meh. Not sure why knife dealers would be assumed to be more moral than firearms dealers. See, e.g. Delana v. CED Sales (Missouri)
> the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.
That... Doesn't happen.
Boycotts by people who weren't going to buy your product anyway are immaterial to business. The inevitable lawsuits are costly, but are generally thought of as good publicity, because they keep the business name in the news.
Since the knife vendors were metaphors for AI vendors, is the comparison you want to make "AI vendors & weapons manufacturers"? That's the standard we should judge them by?
> Not sure why knife dealers would be assumed to be more moral than firearms dealers
What I mean is that you _did_ judge them by a standard used for weapons manufacturers. How you react to their actions _is_ your judgement.
But perhaps that is the standard we should use. Weapons manufacturing is a well regulated industry after all. Export controls, dual-use technology restrictions, if it has applications for warfare it should be appropriately restricted.
If I shoot someone, something that is explicitly warned against in firearm safety materials that come with every purchase of a new firearm, I am no longer allowed to purchase any more firearms.
The specific shape of a kitchen knife would make it a particularly poor fighting knife, and knives in general are bad for self defense, due to the potential for it to be turned against the user. So, there is a good argument that such a suggestion is really in the user's best interest rather than a cynical play for the manufacturer to limit liability.
Seconded. You can't see all the up and down votes, only the balance at the moment you look, and it's not too uncommon to be negative or even dead and be upped or vouched back to life later.
No it isn't. There are warnings, but once a knife is yours you are free to do whatever you want with it, including reselling it to someone else. The idea of terms of service of using something is not something that typically exists with physical objects that one can own. They can't take your knife away from you because you decided to use it for a medical purpose without purchasing a medical license for the knife.
Claude Opus is just remarkably good at analysis IMO, much better than any competitor I’ve tried. It was remarkably good and complete at helping me with some health issues I’ve had in the past few months. If you were to turn that kind of analytical power in a way to observe the behaviour of American citizens and to change it perhaps, to make them vote a certain way. Or something like - finding terrorists, finding patterns that help you identify undocumented people.
I have used chatgpt 5.2 thinking for health, gemini hallucinates a lot, specially with dna analysis. Never tried using the new claude even though i have access through antigravity. Might give it a try. Do you have any tips on how to approach it for health ‘analytical power’?
I just made a project, added all my exams (they were piling up, me and my psychiatrist had been investigating for a year this to no avail) and started talking to it about my symptoms.
Within a few iterations of this it gave me a simple blood panel, then I did that one and it kept suggesting more simple lab or at home tests and we kept going through them until I was reasonably certain of “something” and now that I have hypothesis I am going to a doctor. I think it’s done a great job. I also kept asking it for simple lifestyle interventions to prevent progression of my issue and it consistent nailed it - one particular interverntion (adding salt to water and drinking it to prevent symptoms) made a huge improvement to my life - I was barely working before that.
I added in some text the instructions box (project master prompt) for it to realise - it’s not medical advice and I am aware of that (prevents excessive guardrails) - add confidence intervals and probability to all diagnostic statements (prevents me + Claude going into rabbit holes so easily, it often has 70-80% certainty of what it’s saying, but it’s clear that it doesn’t use the right language) - that It was talking to an non expert, to use simple language but to go into detail when necessary. I also ask it to stop doing unnecessary constant follow up questions to every answer as that causes me anxiety. I can share the prompt, in fact I might do so later as it might be useful to others.
Make sure your first chat is about the exams in the project files. Make sure it reads them all. It has a tendency to read a few and go “is this good”. Ask for a summary and note any absences.
Try using the research and extended thinking features a lot if you think it’s not fully aware of anything. It might not be aware of more recent research. If it’s a serious condition you are researching, just ask it to do sweeps / use research to look for new info about it and find new papers. It might also deepen its understanding.
After you do research you can make a simple artefact and throw it onto the project files. That allows it to refer to it and gain more knowledge about a condition or issue that might not be as rich in the training data.
So, I find GPT to be so so bad for this it made me realise a bit on why the USG is so insistent. Claude Opus is just on a different class.
Here’s the master project prompt:
Act as an expert who’s talking to an interested layman. Engage in detail when requested but be overall succinct in your answers. Short sentences are fine, no need into be lengthy. Do deep research. When arriving at any kind of conclusion or hypothesis assign it a probability and a confidence interval - define this in percentages as in “90%”
On Artefacts - all artefacts should be just text and markdown. Never do anything more complicated with formatting, unless by explicit request.
Don't ask follow up questions unless it's to make for better diagnosis. I.e. don't keep asking questions just to maintain conversation going please. But never hesitate to ask questions if it makes for better outcomes.
Yep. Choosing not to renew a contract with a provider who has voluntarily excluded itself from your use case is respecting that provider's choice and acting accordingly.
The thing is nobody is saying the government is bad for not renewing the contract. Like it or not, that's definitely the administration's prerogative.
What we're seeing here is that when a vendor declines to change the terms of its contractual agreement for ethical reasons, the government publicly attacks it.
Perhaps for ethical reasons but a stated reason by Anthrophic is technical. "But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons."
With the other stated reason being legal. "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI."
I don't think we should lessen Anthrophic's stance from technical/legal to ethical. Just as we shouldn't describe what the department of war is doing as "not renewing a contract".
Not in software though. Clear precedent has been established via EULAs. Software companies set the rules and if users don't like, they can piss off. I don't see why it would be any different for the government.
I'm not a fan of EULAs, I think if you acquire some software anonymously and run it on your own systems you should be able to do whatever you want. however if you want software hosted on someone else's machines, or want to enter into a contractual relationship with them then government or not you should not have the right to compel work from them.
Agreed they haven't and it will be difficult to see them voting in favour. But there are precedents. The Patriot act was more radical than a potential mandate for AI providers to prioritize national security.
The government is armed and can exempt itself from prosecution either by judicial means and/or by naked force. So it isn’t just a cut and dry licensing problem.
The government cannot set arbitrary rules, it has to follow the law. (And, at least with a functioning separation of powers, it cannot change the law arbitrarily.)
> Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.
Utter nonsense. When the US built the Blackbird, it could only use titanium because of the heat involved in traveling at that speed. But they didn't have enough titanium in the US. So the the US created front companies to purchase titanium from the Soviet Union.
Do you think the US should have informed the Soviet Union what it wanted to do with the metal?
Yes, it's officially still the Department of Defense.
If this were a news outline writing "Department of War" I would be concerned. But in the case of the Anthropic CEO's blog post, I can understand why they are picking their fights.
It's a silly shibboleth, but I automatically ignore anyone who calls it the Department of War or Gulf of America. Hasn't steered me wrong yet. They're telling me they're the kind of people who only care about defending fascism.
I think it's worth giving people a tiny bit of grace on this. I've surprised people by explaining that the "Department of War" is just fascist fanfic and that the legal name has not changed.
It's a testament to the broken information ecosystem we're in that many people genuinely don't know this. Most will correct themselves when told. I agree with you that those who don't are not worth engaging.
I would not defend all of Google's decisions in the Trump era, but complying immediately with politicized name changes has always been the status quo. Even in healthy democracies, the precise names of geographic features can be extremely controversial, and no sane company wants to get in a debate with the Japanese government about the real names of various islands.
They can, however, rename their Twitter/X accounts and vacate the @SecDef handle, which seems to be up for grabs now, if anyone wants to do the funniest thing...
No, fighting a war requires only engaging in international armed conflict.
Declaring a war requires Congress, and fighting a war other than in response to an invasion may be illegal under US law if Congress has not exercised its power to declare war, but that doesn't prevent wars from happening it just makes it illegal (though the only actual remedy is impeachment) for the President to wage war without authorization. And, in any case, that’s largely moot because Congress has exercised that power in an open ended (in terms of when and against whom) but limited (in authorized duration of any particular action without subsequent authorization) manner via the War Powers Act, giving every President since Nixon a blank check to start wars with full legal authority and then allow Congress an opportunity to vote to pull support from forces already in combat and hope the enemy already engaged is willing to treat the war as over as the only after-the-fact constraint.
Of all the silly things that Trump did, I think this one is the most reasonable. This has always been a department of war. Calling it defense was propaganda.
After it was changed from DoW the first time (in 1947), it was called the National Military Establishment (NME). They renamed it in 1949, potentially because "NME" said aloud sounds like "Enemy"
the entire administration negotiates in bad faith. literally every agreement they sign whether it's international trade or corporate contracts is up to the whim of a toddler with twitter
And they don’t think anything through. If they do this then Amazon, Google and the rest will need to terminate their involvement with Anthropic. Trump will be getting a call from some Wall Street bigwigs imminently and it’ll get rolled back, I bet.
Contract law will certainly be a casualty once Rule of Law has completely been broken. I don’t understand why the business sector isn’t pushing back more. Surely they must all know that the legal legal context itself, within which they all operate, is at mortal risk and that Business as Usual will vanish once autocratic capture is complete.
My main takeaway from all of this is that Hegseth seems deeply unfit for his job. First there was the Signal leak and now this.
Look, Anthropic is not going to be designated a supply chain risk. 80% of the Fortune 500 have contracts with them. Probably a similar percentage of defense contractors. Amazon is a defense contractor for example. They'd have to remove Claude from their AWS offerings. Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts, and for what? Why? Because Hegseth had a bad day? Because he's a sore loser?
If he's decided he doesn't like the DoW's contract then he can cancel it, fine. To try and exact revenge on the best American frontier model along with 80% of the Fortune 500 in the process, to go out of his way to harm hundreds or perhaps thousands of American firms, defies all reason. This is behavior you would expect any adult would understand as petty and foolish, let alone one who's made it to the highest ranks of government.
So I think it's just not going to happen, Trump's statement on the matter notably didn't mention a supply chain risk designation. This suggests to me that Hegseth went off half cocked. The guy is a liability for Trump at this point, I'm guessing he won't last much longer.
| then they went back and said no, you need to remove those safeguards to which Anthropic is (rightly so) saying no.
So one thing to call out here is that the assumption that DoW is working on specifically these use cases is not bullet proof. They simply may not want to share with anthropic exactly what they are working on for natsec issues. /we can't tell you/ could violate the terms.
It is also dumb that DoW accepted these terms in the first place.
Is this matter about publicly available model or private model? For publicly available model like opus 4.6, bad actors can do whatever they want and Anthropic won't know.
If this is only about private custom model, designating public model as supply chain risk doesn't make sense as others can use it.
With this administration, after all their proven lies, when in doubt, assume bad faith on their part. Assuming good faith at this point is Lucy and Charlie Brown and the football, but now the football is fascism (i.e., state control of corporations, e.g., what Trump administration is doing here).
Trump has historically stiffed his contractors. Why do you think his administration would be any different with adhering to a contract?
If anyone is the epitomy of arrogance, it is Hegseth.
No doubt the US Gov't will be using A I to perform automated military strikes without human supervision. and spying on US citizens (which they already have been doing for decades now).
Look no further than the case of patriot Mark Klein, a former AT&T technician, exposed a massive NSA surveillance program in 2006, revealing that AT&T allowed the government to intercept, copy, and monitor massive amounts of American internet traffic. Klein discovered a secret, NSA-controlled room—Room 641A—inside an AT&T facility in San Francisco, which acted as a splitter for internet traffic.
It's so fishy, I spent the morning reading sam'AMA and it's a classic whitewashing act. OpenAI is claiming their setup is stronger and that DOW has agreed to their red lines but read the agreement below, it only says use in compliance with laws and executive order.
Anthropic wouldn't have walked away from a multi million contract if their two redlines could be respected. OpenAI on the other hand is a fast, willing and ready company. I would love to see Anthropic's proposed contract
In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.
We believe strongly in democracy. Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process. We also believe our technology is going to introduce new risks in the world, and we want the people defending the United States to have the best tools.
Our agreement includes:
1. Deployment architecture. This is a cloud-only deployment, with a safety stack that we run that includes these principles and others. We are not providing the DoW with “guardrails off” or non-safety trained models, nor are we deploying our models on edge devices (where there could be a possibility of usage for autonomous lethal weapons).
Our deployment architecture will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.
2. Our contract. Here is the relevant language:
The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I assume those agreements were probably signed before the current fascist regime running the US government and now they want to upend the terms of said agreement to allow in more fascism to aforementioned contract.
It's not recent news that Anthropic has (had?) DoD contracts. This is a lot of words to write while seeming ignorant of basic facts about the situation.
The argument isn't that nobody knew Anthropic had DoW contracts. The argument is that there's a difference between "publicly known if you follow defense-tech procurement" and "trending on social media where Anthropic's core audience is now actively discussing it." Both can be true simultaneously.
A fact being technically available and that fact commanding widespread public attention are very different things. Anthropic's communications team understands this distinction even if you don't find it interesting. The blog post wasn't written for people who already track federal AI contracts, it was written for the much larger audience encountering this story for the first time and forming opinions about it in real time.
If the point you're making is just "I already knew this," that's fine, but it doesn't address anything about the incentive structure behind the public response.
This is an interesting perspective, but I think the fallout from sticking to his guns here is probably greater than the public blowback he would receive from serving the DoD. Without this specific sticking point, the public would know that Anthropic was serving the DoD, but not what specifically the model was being used for, and it would be difficult to prove it wasn't something relatively innocuous.
That's a fair point about sequencing, but it actually reinforces the argument rather than undermining it. If Anthropic pushed back internally, and that pushback is what led to the directive going public, then Anthropic had every reason to anticipate that this would become a public story. Which means the blog post wasn't a spontaneous act of transparency, it was a prepared response to a foreseeable escalation. That's more strategic rather than less so.
Internal pushback and public damage control aren't mutually exclusive. A company can genuinely disagree with a client's demands behind closed doors and simultaneously craft a public narrative designed to make itself look as good as possible once those disagreements surface. In fact, that's exactly what competent communications teams do, they plan for the scenario where private disputes become public, and they have messaging ready.
The real question isn't who went public first or why. It's whether Anthropic's stated position, "we support these military use cases but not those ones", reflects a durable ethical framework or a line drawn precisely where it needed to be to keep both the contracts and the brand intact. Nothing in the sequencing you've described answers that question. It just tells us Anthropic saw this coming, which, if anything, means the messaging was more carefully engineered, not less.
I already suspected the first comment was by an LLM, but deleted that from my reply as it didn't feel like a productive accusation. However, with "that's a fair point" as an opener, plus the sheer typing speed implied by replies, and the way that individual sentences thread together even as the larger point is incoherent, I'm now confident enough to call it.
I actually use assistive voice transcription as I am unable to type well with a keyboard.
[Edit: update]
I use assistive voice transcription because I'm unable to type well with a keyboard. But I'd point out that "you must be an AI" has become the new way to dismiss an argument without engaging with it. It's the modern equivalent of "you're just copy-pasting talking points", it lets you discard everything someone said without addressing a single word of it.
The fact that my sentences "thread together" is not evidence of anything other than coherent thinking. And speed of response says more about the tools someone uses than whether a human is behind them. Plenty of people use dictation, accessibility tools, or just happen to type fast.
Ok, good to have that explanation. Your larger point, though, remains incoherent. Whether Anthropic saw this coming has nothing to do with the substance of the conflict here and is very much not "the real question".
I was pondering the same thing and to me the answer is a contractor sold something to the DoD and Anthropic pulled the rug out from under that contractor and the DoD isn't happy about losing that.
My speculation is the "business records" domestic surveillance loophole Bush expanded (and that Palantir is build to service). That's usually how the government double-speaks its very real domestic surveillance programs. "It's technically not the government spying on you, it's private companies!" It's also why Hegseth can claim Anthropic is lying. It's not about direct government contracts. It's about contractors and the business records funnel.
Yes, I assumed a mass surveillance Palantir program also. Interesting take on how it allows them to claim “we are not doing this” while asking Anthropic to do it.
Of course they can just say - we aren’t, Palantir is.
Wow, and the only restrictions Anthropic asked for are (1) no mass domestic surveillance and (2) require human-in-the-loop for killing [1]. Those seem exceptionally reasonable, and even rather weak, lol :|
Anthropic had these conditions in their contract from the very beginning, in contracts negotiated under Biden. It is their actual principled stance, not maneuvering.
Their intention is to turn it against the American people. Hegseth literally wrote a book about eliminating democrats from the US, and this surprises people.
Trump doesn't want another election to happen. He needs some powerful tools to ensure that happens, ie, massive scale ai surveillance and manipulation. Eg, like Xi uses in China. I bet anyone here he starts a war as his excuse
Don’t become numb. They want normal people to be depoliticized, silent, and withdrawn. We’re so much easier to subjugate and exploit that way: hopeless and spineless. They take more and more each day.
In an interview with Zelinsky Trump asks "why haven't you had an election? " Zelensky
: "because we are at war" you can see the idea percolating then. People think I'm a nutter for suggesting there just won't be another election but that's where my money is. I'm waiting for his version of the Gestapo, ICE seems to be a proving ground
An important detail here is that Ukraine's constitution says they can't have an election while they're at war. The US constitution does not say that, and the USA has had elections during wars several times.
You're not a nutter. Trump constantly projects what he's going to do and no one takes him seriously because what he says is so beyond the pale. I explicitly remember the exact instance you're talking about because I thought the same thing as you are thinking.
There will be a sham election, like in Russia, but a sizable number of people will be unable to vote. Trump only need to steal the election in a few key districts
People like married women who changed their name, or foreign sounding people, they will be prevented to vote in 2026. ICE will guard polls to physically make people unable to reach the ballots
Specifically, he would need the US Congress to draft and pass legislation moving the date of the election. I don't know how eager they are, though, to create an unnecessary constitutional crisis.
That's the restrictions for now. New restrictions could be added later or the situation of the world could change where those no longer seem reasonable. The military needs that ability to move fast and not be held back.
Even the most cockeyed reading of history will tell you that it is absolutely vital to the survival of humanity and all that is good on this earth that the US military be tied down and held back.
There are enough idiots involved who "heard about this AI thing" that would demand someone make a Claude-based kill bot. Do not underestimate the disconnect from reality of senior military leadership. They easily forget that everyone who works for them are legally obligated to laugh at their jokes.
Anthropic specifically called out systems "that take humans out of the loop entirely and automate selecting and engaging targets".
I take that to mean they don't want the military using Claude to decide who to kill. As a hyperbolic yet frankly realistic example, they don't want Claude to make a mistake and direct the military to kill innocent children accidentally identified as narco-terrorists.
At least, that's the most charitable interpretation of everything going on. I suspect they are also worried that the sitting administration wants to use AI to help them execute a full autocratic takeover of the United States, so they're attempting to kill one of the world's most innovative companies to set an example and pressure other AI labs into letting their technology be used for such purposes.
I don't know what you're referencing, but it doesn't matter. I judge people by their actions more than their words. The actions in this case are simple: Anthropic doesn't want their models to be used for fully autonomous weapons or mass surveillance of American citizens, but everything else is fair game; in response, the sitting administration is attempting to kill the company (since a strict reading of the security risk order would force most of their partners, suppliers, etc., to cut them off completely).
Giving precedence to words over actions is how you get taken advantage, abused, deceived, etc.
> Whatever they were asked to do, they should just be upfront about.
Anthropic is not being asked to do anything, except renegotiate the contracts. The DoW Claude models run on government AWS. Anthropic has minimal access to these systems and does not see the classified data that is being ingested as prompts. It is very unlikely that Dario actually knows what the DoW wants to do with these models. But even if he did, it would be classified information that he is not at liberty to disclose.
However the product they provide likely has safety filters that cause some prompts to not be processed if it is violates the two contractual conditions. That is what the DoW wants removed.
He didn't talk around it. He wrote down specifically what the two issues were, which is precisely why now the entire world knows what's actually going on. If risking your company's existence to prevent a (potential) atrocity is weakness, I don't know what strength is.
Strength is saying what they were asked to do. I want to know!
Did the DoW ask them to make kill drones? Because if so THAT IS A REALLY BIG DEAL.
The vagueness is irritating. He’s saying they won’t do something, the DoW is saying they don’t even want them to do that, which should resolve the issue, but hasn’t. There is obviously something else at play here.
You're confused because you're taking everything the people involved are saying literally and trusting everything plainly at face value. The existence of the contradiction you're pointing out should be evidence that you need to think a level deeper, i.e., that you need to look at actions more than words. There's an incredibly easy resolution of the contradiction that is troubling you, and it's already been pointed out clearly above.
The DoD is explicitly asking for those things, by forcing contract renegotiation towards a contract that is identical in every way, except removing the prohibition on those things.
If the DoD did not want those things, it would not be forcing a contract renegotiation to include them, at great cost to the government.
> The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.
Tomorrow he could change his mind to "we want to use AI to develop autonomous weapons that operate without human involvement." the issue is that he wants Anthropic to change the use terms because "We will not let ANY company dictate the terms regarding how we make operational decisions."
And yet, if that statement were true, and not a lie, we would not be here right now, discussing their insistence upon being able to use software for precisely those things.
Is a pundit/politician lying to you a new experience?
Because mass surveillance has been happening by every tech company under every president since George W. Bush, and despite everybody trying to stop it they haven’t been able to.
OpenAI has already said that they’ll give up whatever info the government wants if they’re issued a subpoena; they don’t have a choice.
Companies have to comply with subpoenas (unless they can beat them in court, and with an alternative of going to jail). Subpoenas are supposed to be targeted at individuals and need some kind of process, usually judicial, each time one is issued. Mass surveillance - the Anthropic blog post raises the possibility of using AI to classify the political loyalties of every citizen - is a different thing.
A subpoena isn't "simply asking." Subpoena literally means "under penalty" in Latin. If the company does not comply they will be held in contempt of court and someone may well go to jail.
You make a valid point. Dario suggests that DoD wants to have the capacity to do domestic surveillance and autonomous killing. Sean Parnell said the DoD doesn't want those capacities. These statements are in conflict. Them talking past each other is one possibility. Without much evidence except the track record of the Trump administration, I think it is much more likely that Sean Parnell is lying.
So they are such a risk to national security that no contractor that works with the federal government may use them, but they're going to keep using them for six more months? So I guess our national security is significantly at risk for the next six months?
SCOTUS says POTUS is above the law, so POTUS has collected $4B in bribe / protection money since taking office 13 months ago. Anthropic has lots of money at the moment. Why should they be allow to keep it?
Since they didn't pay off the president (enough?), his goons are going to screw with their revenue and run a PR smear campaign.
Once you realize it only has to do with Trump's personal finances, and nothing to do with national security or the rule of law, then all the administration's actions make perfect rational sense.
Open question: How much should a congress-critter charge Trump for a favorable vote? (The check should come with a presidential pardon in the envelope, of course...)
I see it more like: I sell you a pencil and I could not care less what you write with it. You ask me to write a note for you and I will exert editorial discretion. Because unless I’m missing something we’re talking about Anthropic’s infrastructure running LLMs. If it was a physical good I could see another interpretation.
Further, what law lets the government dictate what contracts a company signs? Anthropic refused to work with them. We had a whole Supreme Court case about refusing working with customers.
This makes an interesting assumption: that being told by any member of government that you're legally required to do something, means you're required to do that thing, and that they're definitely not making those things up as they go.
But that's not the case, is it? The government can say that it's legally required to give Donald Trump a gold bar every Sunday. That wouldn't even be too far off from the outlandish claims we've seen over the past year. The Trump administration is, as Chapelle would put it, a habitual line stepper.
I like how you use the phrase social responsibilities to mean doing whatever the DoD wants which includes spying on the American people and operating autonomous drones to kill people. It's like saying they have social responsibilities to enable murder for people who have been shown to be unthinking murderers justifying the most pointless murders because they think it makes "their side" winners.
That usage turns the entire meaning of social responsibilities on its heads. It's one of those maddening fash tics where they reverse the plain meaning a statement.
My take is the commenter was implying something like "Yes, like the mob, but worse, because it is done under the auspices of a national government."
I got the meaning right away, but I can appreciate if others didn't. I didn't read it as intentionally enigmatic, fwiw. Sometimes short punchy comments really land, sometimes not -- it is a risk. As you can probably tell, I err in the other direction. (:
He has refused his Assent to Laws, the most wholesome and necessary for the public good.
[...]
He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands.
[...]
He has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing Judiciary powers.
He has made Judges dependent on his Will alone, for the tenure of their offices, and the amount and payment of their salaries.
He has erected a multitude of New Offices, and sent hither swarms of Officers to harrass our people, and eat out their substance.
He has kept among us, in times of peace, Standing Armies without the Consent of our legislatures.
He has affected to render the Military independent of and superior to the Civil power.
[...]
For Quartering large bodies of armed troops among us:
For protecting them, by a mock Trial, from punishment for any Murders which they should commit on the Inhabitants of these States:
For cutting off our Trade with all parts of the world:
For imposing Taxes on us without our Consent:
For depriving us in many cases, of the benefits of Trial by Jury:
For transporting us beyond Seas to be tried for pretended offences:
From what i understand, Palentir using Claude during the capturing of Maduro is the reason all this started, as Anthropic did not agree their systems were used that way. [1]
Obviously Palentir and others need time to migrate off Anthropic’s products. The way i read it is that Anthropic made a serious miscalculation by joining the DoD contracts last year, you can’t have these kind of moral standards and at the same time have Palentir as a customer. The lack of foresight is interesting.
They are the same amount of ‘risk’ to national security that the various ‘emergencies’ the executive branch has used as legal excuses to do otherwise illegal things are emergencies.
Congress is negligent in not reigning this kind of thing in. We’re rapidly falling down so many slippery semantic slopes.
> Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect
For this administration the law isn't something that binds them, but something they can use against others.
Don't make the mistake of thinking their words have meaning. They see a way to punish the company, they take it. Same thing with declaring a national emergency to impose tariffs. There's no supply chain risk, no national emergency, but that doesn't stop them.
Dont forget Nvidia technology was condsidered too sensitive to be exported to China....until the Trump administration decided they could export it if they paid a 10% export tax.
The part of this you're missing is that China doesn't want it [1].
Why? Because China will make their own. This has been obvious to me for at least 1-2 years. The US doesn't allow EUV lithography machines from ASML to be exported to China either. I believe the previous export ban on the most advanced chip was a strategic error because it created a captive market of Chinese customers for Chinese chips.
China will replicate EUV far quicker than Western governments expect. All it takes is to throw money at a few key ASML engineers and researchers and the commitment of the state to follow through with this project, which they will.
I'm absolutely reminded of the atomic bomb. This created quite the debate in military and foreign policy circles about what to do. The prevailing presumption was that the USSR would take 20 years to develop their own bomb if it ever happened.
It took 4 years.
And then in 1952 the US detonated the first thermonuclear bomb. The USSR followed suit in 1953.
this is inacccurate, tesla was the first mover in china's EV market and held by far the largest market share for over a decade. obviously that was in large part to elon hiring chinese systems engineers to build out the first super factories and using chinese robotics tech. but ever since losing those key early leaders, tesla has completely fallen behind.
The Trump administration tends to use this playbook.
Putting aside my take, I’m trying to objectively make sure I’m grounded on what is likely to happen next, without confusing “what is” with “what is ok”.
I agree in this sense: Hegseth's Dept. of War doesn't want any restrictions. I'll try to make the case this is self-defeating, assuming one has genuine, long-term national interests at the front of mind (which I think is lacking or at least confused in Hegseth).
Historically, other (wiser) SecDefs would decide more carefully. They are aware when their actions would position DoD outside of reasonable ethical norms, as defined both by their key personnel as well as broader culture. I think they would recognize Hegseth's course of action as having two broadly negative effects:
1. Technology, Employees, Contractors. Jeopardizes DoD's access to the best technology. Undermines efforts in hiring the best people. Demotivates existing employees and contractors. Bullying leads to fearful contractors who perform worse. Fewer good contractors show up. Trumpist corruption further degrades an already lagging, sluggish, inefficient system.*
2. Goodwill & Effectiveness. Damages international goodwill that takes a long time to restore. Goodwill is a good investment; it pays dividends for U.S. military strength. The fallout will distract Hegseth from legitimately important duties and further undermine his credibility. Leading probably to a political mess for Hegseth, undermining his political capital.
* Improving DoD procurement is already hard given existing constraints. Adding Trumpist-level corruption makes it unnecessarily worse. There is already an unsavory, poorly tracked, bloated gravy train around the military industrial complex.**
** BUT... Despite all this, the system has more or less worked reasonably well for more than what, 80 years! It has enjoyed bipartisan continuity, kept scientists and mathematicians well funded, and spurred lots of useful industries. It is, in a weird gnarly way, a sort of flux capacitor for U.S. technical dominance.
I agree with the damage. Not simply an unwise spokesman though. It's the trend in the entire administration, or one should identify as the United States slide into less sugar coating, if any.
We will kidnap statesmen, we will conduct illegal arrests, illegal tariffs, threaten to take over lands from our traditional allies, bomb our enemies during negociation, without congress approval. All for the good of the American Empire.
It would be a far stretched script for fiction, but that's exactly the terms and actions taken just in the last year.
I doubt these were the recipe for what worked well for the last 80 years. Momentum is the result of a smoother and more balanced doctrine.
> So I guess our national security is significantly at risk for the next six months?
That does seem to be what Hegseth is arguing, yes; and that is presumably his justification for doing something drastic here. Although I assume he is lying or wrong.
And as a cynic, let me just add that the image of someone going to the political overseers of the US military with arguments about being "effective" or "altruistic" is just hilarious given their history over the last ~40 years.
Any documentation regarding the claim about breaking their contract?
Haven't heard that. Regardless, as someone who works with these models daily (as well as company leadership that loves AI more than they understand it) - Anthropic is absolutely right to say that the military shouldn't be allowed to use it for lethal, autonomous force.
The United States has freedom of speech. The Supreme Court has ruled that money is speech. A company can always direct their money, speech, however they like with regards to the government. Can you be sued for breach of contract? Sure. Is it a supply chain risk absolutely not.
> They are a "supply chain risk" if they can willy-nilly break their contract with US govt and enforce arbitrary rules to service.
It is the US govt that seeks to break their contract with Anthropic.
The contract they signed had the safeguards, so they were mutually agreed upon. These safeguards against fully autonomous killbots and AI spying of US citizens was known before signing.
This conflict now is because the US govt regrets what they agreed to in the contract.
> completely understandable decision from a neutral third party PoV.
Except it's not, really. If Anthropic/Claude doesn't mean the DoD's need, they can and should just put out an RFP for other LLM providers. I'm sure there's plenty of others that'd happily forgo their morals for that sweet government contract money.
No US company has to provide services to the DoD or any other branch of government. It's not "veto power" it's being selective of who you do business with, which is 100% legal.
I don't understand your point here. Looks like what you suggest is exactly what is happening. US government did not ban Anthropic from conducting business in the US. They just don't want them to influence their own supply chain, 100% legal as you say.
If the government just banned all government agencies from working with Anthropic, that would be reasonable. But they didn't. They're banning any company that works with the military from working with Anthropic in any way, using a law that has never been invoked against an American company.
Well, great! Sounds like this is exactly what Anthropic wants and hopes for; for their technology to minimally benefit warfighting. Otherwise, are you suggesting they are so evil that they were just advertising those the terms to fool us and virtue signal?
> has never been invoked against an American company.
There's always a first. I am assuming it is not illegal to do that. It's a completely reasonable business decision to ensure your supply chain does not depend on things that may change against your goals. For example, you don't want to build or depend on an open source platform that you know is gonna rug pull, if you count on it remaining open source, do you? American or otherwise.
Anthropic was not anticipated injured party with standing in American courts, until today, now they are very much injured and do have standing to bring a whole slew of lawsuits against the administration who is operating illegally and unconstitutionally against an american company. This seems like the start of the battle for anthropic not the end. The government signed contracts they don't get to just reneg whenever they fucking please because cheeto bantito in chief and his unhinged alcoholic secretary of defense are unreliable liars
And the point is? They made a voluntary business decision not to sell to them, whatever that number is. Possibly more than offset by marketing gains and loyalty from other segments; or not.
Its not voluntary if its coerced and done under threat.
I don't understand this phenomena of people acting like the stupidest human beings who ever lived. This is not your first day on Earth - you understand how coersion works and what voluntary means.
The US government is applying severe sanctions against a US company that does not "influence their supply chain". Donald Trump believes the economy is great and at the same time declares economic emergencies to justify doing certain things. It could be true that Anthropic's products are useless for the DoD because of the products' safeguards, but that doesn't mean they're a risk to the US government.
As to this being 100% legal, I'm not so sure (not a lawyer). It might not be a criminal offese, but there's a whole category of abuse of power that this may fall under if Anthropic is put under a certain status without real justification. Many powers given to the executive branch are not absolute and can't be applied arbitrarily, but require justification. Anthropic might be able to sue the government for declaring them a "supply-chain risk" without sufficient justification. E.g. they could claim that not being sufficiently patriotic in the eyes of the administration does not constitute a risk, and that since their not the sole supplier of the tech, they were not trying to strong arm the government to do anything.
I agree with your second paragraph; we will have to see to what degree the "viral" effect of Supply Chain Risk designation goes (perhaps you contract the DoD under an LLC that has a supply chain firewall from your company) and also look forward to seeing how this would be handled in court, but I would not automatically be dismissive of this being totally legal.
> does not "influence their supply chain"
I would be wary of making this conclusion. Obviously it could conceivably influence the supply chain when you build on top of their model. If you look at the type of risks enumerated in DoD guidelines, it is not just "oh this software has vulnerability" which is what started the discussion in this subthread in the first place. There are many kinds of risks DoD needs to address, none are particularly new; including Sustainment Risk. The closest thing I remember to this case was Sun Java "no use in nuclear facility" EULA term, which LLM suggests was ignored by DoE/D because that was interpreted as a "limitation on warranty" not a "restriction of use."
Then you go to another supplier. But any company with proper counsel will tell them the same thing: don't break the law, which is exactly what they're trying to coerce Anthropic into doing. DoD requests do not supersede the law.
Not unless they're the sole supplier of the technology. They're saying, if you want to do this kind of thing - not with our product, but you can get it elsewhere.
No, you are the one lying trying to get political gotchas here. There is no "trying to exert veto power" absolutely anywhere, Anthropic's terms were laid out in the contract the Pentagon signed, which they want to forcibly amend. If they didn't like the terms, they didn't need to sign the contract.
What are you suggesting here? US government breaching the contract already signed? I am not aware of that happening here.
> Anthropic's terms were laid out in the contract the Pentagon signed, which they want to forcibly amend.
It's called negotiation in business. I am sure both sides are clear-eyed on what the consequences were and Anthropic made a calculated bet (probably correctly) that some segment of their employee/customer base would get wet by hearing this news and it more than offsets the lots business, thus is worth it.
That's a nice straw man you got there. I don't mind you characterizing the negotiation however you want. That's not the debate. Call it "shakedown" or "mafia" as someone else mentioned, or whatnot (although it is appears the company that was trying to grandstand the elected US Government by dictating their own terms was Anthropic, not the other way around, but I digress). The question is was it a breach of contract or just a tough negotiation?
Companies have gone out of business due to a big customer pulling the contract. Imagination Technologies comes to mind. This is not a rare thing in business.
I have to admit, “accept this unilateral change to the contract or we will use the full power of the US government to destroy your company” is certainly a tough negotiation stance. You got that part right.
How did you get the "destroy your company" part? If HN sentiment is any evidence, they are even more popular than before. GPU is a constrained resource and I am sure they are going to have enough business to saturate what they got. I'm certain they would have just removed (and still will remove) two paragraphs from the terms had it really "destroyed their company."
> full power of the US government
Haha, I can assure you that is not even close to the full power of US government. Ask the crypto people during Biden admin for just a little more power (still not even close to "full.")
"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
For a company of Anthropic's size, this may very well be a death sentence, even if their work has nothing to do with the military supply chain. They could have just canceled the contract, but they wanted to go full Darth Vader on them to prove a point in case anyone else thought about "negotiating" "voluntarily" with the federal government.
You don't think Anthropic is going out of business any minute now, do you? This is just rhetoric. Affirmative evidence is they would just remove two paragraphs if they were.
You seem really unaware of the timeline of this issue and what has actually happened, I think you should update your info before posting so confidently wrongly.
The contract, including Anthropic's redlines, was signed more than a year ago and has been humming along with no objections from anybody. Hegseth abruptly got a bug up his ass about it last week, and demanded Anthropic sign a revised version under threat of punishment. Anthropic is simply saying "no, we will not be forced into signing a new version, you can either keep going with the original terms we all agreed to, or stop using us". The Pentagon can simply stop using Anthropic if they don't like the terms anymore (which, again, are the terms Pentagon agreed to in the first place). But what the DoW wants is to strong-arm Anthropic, using the DPA, into new terms because they abruptly changed their mind. That's not "negotiation" in any sense, that's Mafia behavior.
How you characterize the behavior, Mafia or not, is of course your opinion, and I am sure if you are a voter/stakeholder you'd consider that in your political activity, but I'd appreciate if you clarify what you mean but your story and timeline, so I ask again, are you suggesting the US government has breached the contract they already signed?
I don't know why you keep bringing up breach of contract, it is not relevant to this discussion at all. No, the government did not breach the contract AFAIK, they just decided they didn't like it anymore, and instead of either withdrawing or entering into a negotiation about it, they decided to use threats to try and get their terms at metaphorical gunpoint.
The actual terms of the contract aren't even relevant, this is purely a matter of tort law and whether you can bully someone into a new contact because you woke up one day and decided you didn't like the one you agreed to.
It's actually even worse than that: Anthropic already agrees that the Pentagon can walk away from the contract and stop using Claude if they want to, there's no dispute there. What the Pentagon wants is to force Anthropic into a new set of terms which cannot be refused.
I'm just curious, do you understand that the DoD isn't saying it won't do business with Anthropic. Its saying it will also ban any company that does business with the DoD (so 90% of large enterprises?) from doing business from Anthropic. Are you aware of this?
Yes, I am aware. That is not entirely unreasonable if it touches the actual Supply Chain tree. I do fully sympathize that the extent of legality of that rule should be clarified/restricted if say, Claude is used by a separate division unrelated to DoD business. I think courts will resolve this, likely fairly quickly via an injunction.
It's also a very clear differentiator for them relative to Google, Facebook, and OpenAI, all of whom are clearly varying degrees of willing to sell themselves out for evil purposes.
It will also cost openai dearly if they don't communicate clearly, because I for one will internally push to switch from openai (we are on azure actually) to anthropic. Besides that my private account also.
Given the history of US military adventurism and that we’re about to start another completely unjustified war of aggression against Iran, yes. Absolutely yes.
If it wasn't for US military power, Russia would have already overrun Ukraine. And if Iranian nuclear program is destroyed and the regime falls, it would be a good thing. For context, I'm from Czechia.
I'm from the US and strongly disagree that either of those things are a benefit to me as a US citizen. All it's doing is taking my money and putting me more at risk, and in the case of the attack on Iran: making me complicit in the most immoral acts imaginable.
> What about all the weapons forbidden by the Geneva convention?
Some weapons are prohibited Geneva convention because they are designed to cause suffering or indiscriminately kill non-combatants:
"Weapons prohibited under the Geneva Convention and associated international humanitarian law (including the 1925 Protocol, CCW, and specific treaties) include chemical/biological agents (mustard gas, sarin), blinding lasers, expanding bullets, and non-detectable fragments. Also banned are anti-personnel landmines and cluster munitions.
Key prohibited and restricted weapons include:
Chemical and Biological Weapons: The 1925 Geneva Protocol and subsequent conventions (1972, 1993) banned the use, development, and stockpiling of asphyxiating, poisonous, or other gases, including nerve agents and biological weapons.
Blinding Laser Weapons: Specifically designed to cause permanent blindness (Protocol IV of the CCW).
Non-detectable Fragments: Weapons designed to injure by fragments not detectable in the human body by X-rays (Protocol I of the CCW).
Incendiary Weapons: Restrictions on using fire-based weapons (like flamethrowers) against civilian populations (Protocol III of the CCW).
Anti-personnel Landmines: Banned under the Ottawa Treaty (1997) due to risks to civilians.
Cluster Munitions: Prohibited due to their indiscriminate nature.
These treaties aim to protect civilians and combatants from unnecessary suffering and long-term danger."
Would "good hands" choose weapons that are designed to cause suffering or that kill indiscriminately?
> Costco is a really popular subject for business-success case studies but I feel like business guys kinda lose interest when the upshot of the study is like "just operate with scrupulous integrity in all facets and levels of your business for four decades" and not some easy-to-fix gimmick
I don't know, staff at my two Costcos feel much more disinterested and rude then I remember a decade ago. It used to feel fun but now it's miserable.
At peak times they run out of carts and tell the customers to go hunting in the lot for them, door greeters shouting at members across the floor, checkout queues stretch the length of the warehouse, they start half blocking the gas station entrance 30mins before close so trucks can't get in, so maybe they're turning those profit screws.
Ah, right, by being actually good, as in - being okay with mass surveillance as long as it isn't being done in the US, being okay with Claude assisting in killing people as long as it isn't fully autonomous, and being actively hostile to open-weight LLMs and open research on LLMs? This kind of "good"?
No, OP is right, their PR department is doing a great job.
Correct. Protect our citizens' rights, as we are the ones under the jurisdiction of our government. Yes, design competitive weapons systems that can stand up to the threats that adversary powers are creating, but do so while maintaining human control.
Sibling comment summed it up pretty well; my country is considered an ally of yours, but even left leaning Americans seem to take it for granted that we deserve mass AI surveillance/blackmail/manipulation if there’s a chance it could benefit us citizens in the short term. I suppose we deserve it for being complicit in American crimes for so long
You're assuming things I didn't state. I don't particularly want mass AI surveillance at all, but considering how much more dangerous a government's mass spying is to its own citizens living in it 24/7, it's not unreasonable for that to be the focus.
> You're assuming things I didn't state. I don't particularly want mass AI surveillance at all
That's fair, sorry for that.
> considering how much more dangerous a government's mass spying is to its own citizens living in it 24/7, it's not unreasonable for that to be the focus
The US government is actively trying to influence politics in my country and spending huge amounts of money to do it. The US government is a much larger threat to us than our own government.
All of our tech is owned and operated by US companies, which means the US government has read/write access to all of our data. If we attempt to incentivize domestic software production (e.g. by taxing imported software, or by stipulating where our data can be stored and who can access it), the US government will destroy our economy. This has played out a few times recently.
I can't believe we were so foolish as to let this situation grow. Its going to be a painful few decades.
It's funny, because even if they walk it back, they still would come out ahead in PR versus if they just rolled over. Because at that point, it would look like a hostage victim reading a statement that they are being treated well by their captors in front of a camera.
Do you think that bad things happening is just hilarious in general? Do you like to see good behavior punished? I'm really trying to understand what you get out of making this comment. Also what happens when ... This doesn't happen? You just polluted the epistemic commons a bit more with some cynical bullshit sans consequence? Enough. I think it's time to start calling this garbage out when I see it.
Two things can be true at the same time. It can notionally be a “good” decision and also a straightforward act of Anthropic continuing their PR that they’re some sort of benevolent entity despite continuing to pursue a typical corporate capitalistic structure. It is what it is. The game is the game. But I’m not going to sit there and pretend their virtues are as pure of snow. I’m sorry that’s upset you.
This whole saga is extremely depressing and dystopic.
Anthropic is holding firm on incredibly weak red lines. No mass surveillance for Americans, ok for everyone else, and ok to automatic war machines, just not fully unmanned until they can guarantee a certain quality.
This should be a laughably spineless position. But under this administration it is taken as an affront to the president and results in the government lashing out.
If you're a billionaire there's no risk to "sticking to principles", so there's nothing to admire. Also that's not what they're doing. These are calculated moves in a negotiation and the trump regime only has 3 years left. Even a CEO can think 4 years ahead.
It's probably in Anthropic's interest to throw grok to these clowns and watch them fail to build anything with it for 3 years.
i disagree. 3 years is an insanely long time in the AI space. The entire industry pretty much didn't even exist three years ago! Or at least not within 4 orders of magnitude.
Also, every other company has bent the knee and kissed the ring. And the trump admin will absolutely do everything they can to not appear weak and harm Anthropic. If it was so easy to act principled, don't you think other companies would've refused too? Eg Apple
And there is real harm here. You're reading about it - they get labeled a supply chain risk. This is negative and very tangible
why does it need to be a completely different, trained model? AWS doesn't provide unique technologies in their goverment cloud, beyond isolation and firewalled access; Anthropic can do the same thing. Probably need to cough up enough to register a new domain name!
I can think of two reasons. One, to have the plausible deniability with the necessary future statement "Claude is not used by the DoD/DoW to conduct domestic mass surveillance or autonomous killing"; by having the model be properly a different from the one used by the public, they can wrangle over the language with technicalities and still avoid outright lying. (With their IPO in sight, let's keep in mind that everything is securities fraud.)
And two, I suspect that some of the guardrails have been "baked in" to Anthropic's model. Much in the same way as the Chinese open-weight models have a strong bias against expressing positive sentiments about Tiananmen Square, Tank Man or Winnie the Pooh, the "Standard Claude" would likely have the fundamental product biases trained into it.
Taken together it would therefore be both politically and financially sensible for Anthropic to create a separate, unrestricted[tm] almost-Claude for the morally unconstrained military / intelligence purposes.
So much left unsaid. So much implied. Let’s make it explicit and talk about it. Here are some follow questions that reasonable people will ask:
What was Anthropic’s role in the Maduro operation? (Or we can call it state-sponsored kidnapping.) Who knew what and when? Did A\ find itself in a position where it contradicted its core principles?
More broadly, how does moral culpability work in complex situations like this?
How much moral culpability gets attributed to a helicopter manufacturer used in the Maduro operation? (Assuming one was; you can see my meaning I hope.)
P.S. Traditional programming is easy in comparison to morality.
Good. I'd rather not have my favorite AI from a company working on AGI to have murder and spying in it's DNA.
In fact, as a patriotic American veteran, I'd be ok with Anthropic moving to Europe. It might be better for Claude and AGI, which are overriding issues for me.
Rutger Bregman @rcbregman
This is a huge opportunity for Europe. Welcome Anthropic with open arms. Roll out the red carpet. Visa for all employees.
Europe already controls the AI hardware bottleneck through ASML. Add the world's leading AI safety lab and you have the foundations of an AI superpower.
> Good. I'd rather not have my favorite AI from a company working on AGI to have murder and spying in it's DNA.
Anthropic made it quite clear they are cool with spying in general, just not domestic spying on Americans, and their "no killbots" pledge was asterisked with "because we don't believe the technology is reliable enough for those stakes yet". The implication being that they absolutely would do killbots once they think they can nail the execution (pun intended).
I suppose you could say they're taking the high road relative to their peers, but that's an extremely low bar.
I wouldn't say it's clear. People keep pointing to the wording used in the statement to say it, but I wonder if it has to do with constitutionally; domestic surveillance of people in the US without a warrant is against the constitution, and surveillance of non-citizens outside the U.S is not. Can they even be compelled by the executive branch to do an action that may be unconstitutional?
Sure they can. They can “temporarily” suspend parts of the constitution in times of “grave national peril”, and hand out presidential pardons in advance. But doing that would surely be considered dropping the last fig-leaf from the performance art of giving a fuck about the constitution.
I guess that my point is: Saying that you are against surveillance in general is a morally sound position, but would not be a defense if the DoD invokes the DPA, as one can't just refuse an order due to it being immoral. One can refuse an order if the order contradicts with the constitution.
I have my doubts about Anthropic wanting to pick up and move the entire company to Europe even if Ursula von der Leyen personally signed their visas. Maybe only if the government tried to nationalise their proprietary models.
So, is Anthropic a threat to, or indispensable to National Security? You can't have it both ways. The US used to act like a nation with the rule of law, anyone cheering for the erosion will be hit by the downstream effects sooner or later, amd they will not like it.
Canada is another option. Canada has significant AI research institutes going back decades ( https://en.wikipedia.org/wiki/Mila_(research_institute) ) that have produced much of the foundational research that backs today's AI models.
For Americans and international researchers it's easy to get visas there quickly. It's not far at all for Americans to relocate to or visit. Electricity is cheap and clean. Canada has the most college educated adults per capita. The country's commitment to liberalism, and free markets, is also seeming more steadfast than the US at this point in time.
Canada faces obstacles with its much smaller VC ecosystem, its smaller domestic market, and the threat of US economic aggression. Canada's recent trade deals are likely to help there.
I say this all as an American who is loyal to American values first and foremost. If the US wants to move away from its core values I hope other countries, like Canada or the EU, can carry on as successful examples for the US to eventually return to.
Do all of the employees want to move to Europe suddenly? Unless it’s the UK or Ireland, do they speak the local language? If it is the UK or Ireland, do they prefer the weather in California? Do they have children in school or in college locally? Do they have family they’d rather not move 9 time zones away from? Elderly parents they’re taking care of?
I'm pretty vocal about our collective responsibility to work against the Trump administration, and even I would be hesitant to work as a US employee of a company that fled the country after a dispute with the US military. Seems like an extreme threat to my personal safety for little resistance benefit.
History and the world are strewn with people (and hence entities) that fled the land and kept the fight on (and alive) from outside, and it mattered. In fact, it helps. Other options could be acquiesce or extinguish.
But, is there a safe haven that'd stand up against the blatant bullying and daily (or more frequent) national threats/trolling (which often stem from social media and sometimes become reality)?
Where is this text located? I googled "Anthropic Constitution" and found "Claude Constitution" (this this the same thing to you? I don't think the company Claude has a "constitution" itself.
Within the Claude Constitution, the words "non-western" do not appear. Where is your quote from?
"I can state flatly that heavier than air flying machines are impossible.
— Lord Kelvin, 1895"
I'm sure this doesn't apply to you since you're not Lord Kelvin. On the other hand, people like Peter Norvig state in a popular AI textbook that, for example, they don't know why similar concepts appear close by in the vector space, so maybe you just know something other people don't.
I'm not taking a position here but the person you're replying to stated that Anthropic are working on AGI, not that their current LLM offering will evolve into AGI.
False, and you've given no argument to the contrary. There's certainly no definition that precludes it. It isn't, currently; there's no reason it can't be, any more than there's reason that Conway's Game of Life can't be, given sufficiently interesting data to process. Any Turing-complete system could simulate AGI. It might not be the most efficient mechanism for doing so, but that's not the question at hand.
Why wouldn’t the government just arrest their board and execs on charges of treason or something? At this point they could probably publicly hang them all and a plurality of Americans would cheer it. I don’t know if you appreciate how disliked tech is by the left and right alike.
Europe doesn’t give a shit about another American company and their employees trying to dominate their markets and import their workaholic American culture. They will tell Anthropic to go home.
Topics like this are where I struggle with HN philosophy. Normally avoiding politics and ideology where possible, created higher quality and more interesting discussions.
But how do you even begin to discuss that Tweet or this topic without talking about ideology and to contextualize this with other seemingly unrelated things currently going on in the US?
I genuinely don't think I'm conversationally agile enough to both discuss this topic while still able to avoid the political/ideological rabbit-hole.
You can't discuss this topic without broaching the idea that the government is acting in bad faith — that they don't actually believe that Anthropic is a supply-chain risk and that this action is meant to punish the company. But this is in the HN guidelines regarding comments:
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
If a commenter who supports the government makes the same argument that the government is making, the guidelines tell us to assume good faith.
My conclusion is that any topic where a commenter might be making a bad faith argument is outside the scope of Hacker News.
I've been on hn for years and I see this kind of sentiment raised all the time. It is not my understanding of the guidelines.
Politics and ideology are not off topic, provided the subject matter is of interest, or "gratifying", to colleagues in the tech/start-up space.
What's important is that we don't use rhetoric, bad faith or argumentation to force our views on others. But expressing our opinions about how policy affects technology and vice versa has always been welcome, in my observation.
So, what do you think about the US government's decision, and why?
>Topics like this are where I struggle with HN philosophy. Normally avoiding politics and ideology where possible, created higher quality and more interesting discussions.
Our whole society runs on technology. All tech is inherently political.
A "no politics" stance is merely an endorsement of the status quo.
Everything is political. All of our tech exists within society, and the actions of the government shape the incentives of every actor and the framework we exist in.
HN likes to pretend otherwise, especially when it's inconvenient.
If the last ten years have taught us anything it's that politics just isn't a topic isolated to the halls of government. It's real life. Political alignment has never so starkly indicative of your position on fundamental human morality. At the same time we've never had a government be so directly involved in private businesses.
Being a hacker used to be an extremely political and ideological movement. Then capitalism came along and bought the term. It's about time we take that word back where it belongs.
Why would you want to be non-political in 2026? The current administration is awful in ways we couldn't have imagined. There's no sense in not talking about it.
I appreciate your restraint, and keeping this a high quality discussion space. As a political dissident myself, I don't mind some threads going political, I expect them to. The best ones are when there is a lot of disagreement or debate. As long as its not in every unrelated thread....
Welcome to reality. HN likes to pretend politics is something you can just look away from and ignore. That’s a mighty big privilege, which makes sense since HN skews cis-white-het-male. That’s not a lie. It is easy to ignore this when it doesn’t touch them. But now it DOES touch them, and you’ve just discovered what every oppressed group in history has to live with: politics doesn’t just go away if you ignore it.
McCarthyism began in 1947, with Truman demanding goverment employees be "screened for loyalty". They wanted to remove anyone who was a member of an "organization" they didn't like. It began with hearings, and then blacklists, and then arrests and prison sentences. It lasted until 1959. (https://en.wikipedia.org/wiki/McCarthyism)
This is the new McCarthyism. Do what the administration says, or you will be blacklisted, or worse.
The designation says any contractor, supplier, or partner doing business with the US military can’t conduct any commercial activity with Anthropic. Well, AWS has JWCC. Microsoft has Azure Government. Google has DoD contracts. If that language is enforced broadly, then Claude gets kicked off Bedrock, Vertex, and potentially Azure… which is where all the enterprise revenue lives. Claude cannot survive on $200/mo individual powerusers. The math just doesn’t math.
None of the hyper scalers are going to stop offering Claude. All of the big 3 have invested billions of dollars into Anthropic, and have tens (if not hundreds) of billions more tied up in funding deals with them. Amazon and Google are two of the largest shareholders of Anthropic.
Anthropic is going to be fine. The DoD is going to walk this back and pretend it never happened to save face.
From what I’ve heard the actual restriction is just on using Claude for stuff they’re doing for the Pentagon. They’ll keep using Claude for everything else and be less effective when they work for the government, and that’s fine because everyone else working for the government will have the same handicap.
This will likely go to court, again as Dario has stated this is blatant retaliation as no US company has ever been designated a supply chain risk and they continue to operate on classified systems for 6 more months.
I am both dumb and without access to Claude, thus I must ask: My fellow smart HN'ers, what kind of impacts would this likely have on the economy?
Has a lot of money and resources not been pumped into Anthropic (albeit likely less than OpenAI)? I imagine such a decision would not be the ROI that many investors expected.
There's going to be a TRO against the attempt by like 9 AM Monday, and the bad faith from the government couldn't be more obvious. All it's really going to do is cost them some extremely expensive lawyer time.
"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
This is authoritarian behavior. You're having trouble negotiating a contract, so instead of just canceling it - you basically ban all of F500 from doing business with that firm.
This certainly isn't going to attract foreign investment. Business isn't big on governments that capriciously threaten to seize control of or financially harm them.
> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
I’m sure the lawyers just got paged, but does this mean the hyperscalers (AWS, GCP) can’t resell Claude anymore to US companies that aren’t doing business with the DoD? That’s rough.
Probably yes. Additionally the (probably more for AWS) won't be allowed to use it internally either. This will probably apply to all the top SaaS/software companies unilaterally.
Additionally, every major university will undoubtedly have to terminate the use of Claude. First on the list will be universities that run labs under DOD contracts (e.g. MIT, Princeton, JHU), DOE contracts (Stanford, University of California, UChicago, Texas A&M, etc...), NSF facilities (UIUC, Arizona, CMU/Pitt, Purdue), NASA (Caltech).
Following that it will be just those who accept DOD/DOE/NSF grants.
Billable hours will win figuring it out but in theory, no because they can’t test it or use it.
Generally any machine that touches Supply chain Risk software cannot ship any software to DoD. AWS has separate clouds but software comes from same place.
"They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." from Dario's statement (https://www.anthropic.com/news/statement-department-of-war)
Supply chain risk ? Seems the risk here is the US Gov't wanting free reign to do whatever they want - - when they want.
Look no further than the famous expose by Mark Klein, the former AT&T technician and whistleblower who exposed the NSA's mass surveillance program in 2006, revealing the existence of "Room 641A" in San Francisco. He discovered that AT&T was using a "splitter" to copy and divert internet traffic to the NSA, proving the government was monitoring massive amounts of domestic communication.
It's scary to me that there are a significant voting-bloc out there who don't see this kind of zero-integrity (and self-serving) behavior as disqualifying in anyone wielding authority.
That's a shame. They might at least continue to work together to spy on foreigners. I don't understand the fuss anyway, what do claude models do that gpt and gemini can't?
Trump wrote a long rant on Truth Social and ordered ALL federal agencies to stop using Anthropic. Not just the department of defense. This is straight up authoritarian.
Meanwhile, irrelevant "AI Czar" David Sacks, member of the PayPal mafia alongside known Epstein affiliates Elon Musk and Peter Thiel, is furiously retweeting all the posts from Trump, Hegseth, and other accounts. He is such a coward and anti American:
I don’t see a contradiction here. If control is out of the hands of decision makers, that’s a supply chain risk
. Were it not for that, the service is seen as critical to national security.
I dunno, safeguard seems like a weasel word here. It’s just reserving control to one party over another. It’s understandable why the DoD(W) wouldn’t like that.
That’s the crazy thing. This whole dispute was over Anthropic saying no to fully automated kill bots. They only required there be a human in the loop to press the button.
'yet'. Their reason for not allowing autonomous weapons usage was it isn't ready, not that they wouldn't do it on principle. Only the surveillance objection was on principle.
Sleep well in a box under the overpass maybe. If Amazon can’t serve Anthropics model until the courts get everything figured out it will be too late for them.
As a Canadian looking in, I see people talking about a 36% approval as low.
How is it that high!?
That means that more than 1-in-3 of your countrymen are ride-or-die, and it's just heartbreaking to see that we're going to have to launch that many people into the sun.
To counter point, do you think AI companies located on our adversaries turf will take the same stand? I agree its nightmarish to think of AI surveillance. But why is that being lumped in with weaponry? I see these as two separate issues.
American people: latinamerican here. Maybe it's silly to root for a country in the world hegemony arena. I've usually been partial to the USA over China. Now I'm not rooting for your country anymore. As far as I'm concerned, I'd rather have China being the foremost power, at least they seem to be less keen on invading or heavily strong-arming latinamerica
American here, I would much rather have China being the foremost power too. This saga with Anthropic shows just how clueless these AI companies are. This soap opera has to stop, none of these CEO's, officials from the Trump administration, or the Department of War are good for humanity. I've read the ethics policies that China that they released on generative AI and it's years ahead of anything we have in America.
Most Americans hate AI and it's effectively the ostrich effect where they hope to outright ban it and ignore everything else. Meanwhile, all the evil people are running the show. While Anthropic continues to propagate Sinophobic messaging, DeepSeek and other companies have a much more muted tone.
I’m just laughing at the possibility of it he US military being forced to use Chinese open source AI models because every US model provider refuses to work with them.
Ukrainians and Russians are experimenting with FPV drones using AI for target acquisition and homing. Not yet economically viable because it is cheaper to give your FPV fiber spool instead of Nvidia Jetson to bypass jamming.
When we have first politician blown to bits by autonomous AI FPV there will be sheer panic of every politician in the world to put the genie back into the bottle. It will be too late at that point.
Autonomous loitering munitions with 'AI' (image classification CNNs) are already in service and have been used - most demonstrably by the IDF.
Even during the Nagorno-Karabakh war, Azeri loitering munitions were able to suppress Armenian air defenses by hitting them when they rolled out of of concealment. I believe that killchain requires a level of autonomous functionality.
As written this would be the end of Anthropic. AWS, Microsoft et al are all suppliers of the DoW and as written they must immediate stop doing business with Anthropic. Will be interesting to see how this unfolds.
Supply-chain risks means "the potential for adversaries to sabotage, subvert, or disrupt the integrity and delivery of defense systems, including software, hardware, and services, to degrade national security".
So now Anthropic is an adversary, because it does not want "fully autonomous weapons" or automated mass surveillance? Sure thing, DoD. Go use Grok or whatever, I'm sure that will go great.
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
Open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run a 100% transparent organization so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. Diffuse it as much as possible. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, aligned with millions of different individuals. It is a necessary condition for humanity's survival.
This is why OpenClaw (and other claw frameworks) ar so interesting. I'm not saying the current implementation is great, mind. But it's a possible safe-er scenario, where the ecosystem is already occupied.
Decades of speculative science fiction, thought experiments, and discourse led to this. It’s gratifying to see that we’ve garnered enough concern, a major AI lab risking this to reign in the potential of runaway AI disasters. Hopefully we see other labs follow.
It's nice to see Anthropic sticking to their terms. I just have one question in all this. Why is Anthropic being singled out when it seems all the other big players are down to play with the DoD? Is this just a pissing match, or have the Anthropic models been proven the real winner for them?
It's same reason this administration recently tried to indict six Congresspersons for urging military members to resist "illegal orders." They want to demonize anyone who isn't blindly loyal to their side.
The discussion here underlines the reality that one can never make a “deal” with a powerful state, just as Lando Calrisian famously found out in Empire Strikes Back.
Dario is Lando, complaining “We had a deal!” Only to be told, “I’m altering the deal. Pray I don’t alter it any further.”
A drunkard, ex-fox news host, wants mass surveillance and automated killing, what could go wrong?
I wish I thought enough Americans had the spine required to stand up to this, and I know for a fact that a lot do... the solution is literally written into your constitution.
This sounds like a message to would-be founders: don't base your company in the US. The strongest markets to do business are the ones with the most freedom from government meddling. In the US, big government is happy to use its power to crush private enterprise that it doesn't like.
Note that previously this label has been applied (nearly?) exclusively to non-US companies. US companies that don't do business with the DoD are not affected, and non-US companies that do business with the DoD are affected.
It may not be obvious. But this is actually a good thing when we looking back in a few years. I always feel weird that executive branch can just destroy private enterprise with "Supply-chain Risk" / "Terrorist List" without Due Process.
Why are so many adopting this name for what is by law, by the American people, called the Department of Defense? The name change pertains directly to the Anthropic issue, which is the function of the government and department, the power of the American people to govern themselves, and the role of the president relative to the soveriegn American people.
Well put and it bothers me too. It seems to be another case of Orwellian manipulation, i.e. an expression of power through language, functioning as a litmus test of the speaker's loyalty. Serious publications are not going along with it. More craven or (here) thoughtless ones are falling in line.
There is clearly a need to codify into all of these historical acts that they can't be invoked unless there is a declaration of war (or some other appropriate prerequisite).
This administration consistently exploits what were designed to be emergency powers because no such requirement exists. Leave no room for interpretation.
The current administration scoffs at laws. Nothing stopping them in that case from declaring war on Nauru and doing all the same. The solution is a sane, informed electorate, which is much more difficult in this age where a few disgustingly rich people have so much influence over news and media.
So they're essentially admitting they want to use Claude to mass surveil Americans and/or build autonomous weapons with no humans in the loop. Kind of nuts.
I imagine I'm not the only one to switch over to giving Claude my money today. I'm sure the "Other" comments for the cancellation were often as blunt as mine.
Q: "Is there anything we could do to change your mind?"
> "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Does this mean Azure & AWS will have to stop offering Claude as a model?
You would have to assume it will be immediately challenged and an injunction filed to suspend the order until it makes it to court.
AWS Bedrock has deployed Anthropic models under an interesting structure. It is fully hands off - the models are copied into the AWS infrastructure and don't use anything from Anthropic. I think if push came to shove, Anthropic could cut ties with Amazon and AWS could probably still keep serving the models it has with Anthropic forgoing revenue until this is resolved, while asserting they are not "conducting commercial activity" between each other.
I wonder, can't Amazon create a new legal entity to split AWS into "AWS-for-DoD" and "AWS-for-everyone-else"? So one can work with Anthropic and the other can't. Not sure how it works in the US.
>Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
Nevermind Claude, does that mean Anthropic's offices can't use a power company if that same company happens to supply electricity to a US military base? What about the water, garbage disposal, janitorial services? Fedex? Credit card payments? Insurance companies? Law firms? All the normal boring stuff Anthropic needs that any other business needs.
This is a corporate death penalty. Or corporate internal exile or something, I don't know of a good analogy.
Given that Anthropic is clearly risking their entire business just to stand up for what they believe is right, which appears to be what everyone here agrees with, is everyone who is supporting them here planning to also start using Anthropic and switch away from other vendors until they follow suit? Or are folks planning to just use whatever regardless?
Edit: I should perhaps clarify I'm more interested in paid users, rather than free. It's harder to tell if free users switching would help them or hurt them... curious if anyone has thoughts on that too.
Does Anthropic have standing to sue to Government for libel? I don’t think the Government is allowed to arbitrarily designate a company a supply chain risk without good cause.
Under normal circumstances this would end up in court, but when this administration ignores court orders it doesnt like Anthropic would effectively have no legal recourse.
I got downvoted for this in the other thread, but this is basically an attempt at bankrupting Anthropic. No US company has ever been designated a supply chain risk, and the foreign companies that are on that list are now doing 0 business in the US. Very large portion of the US economy relies on some contracts with the US government, Anthropic cannot survive this if this holds.
I don't think it will hold, in the end this is mafia behavior, but if it does, we are yet again in uncharted waters.
This was basically what Marc Andreessen said - allegedly he was told by some high-up government officials something like: they were going to pick winners and losers in the AI race, and it would be a bad idea to try to compete in that market. It seems like the election of Trump has only changed the criteria for being a winner.
It's fascinating to me that this decision was set for 5 pm ET on a friday, and I think it may be more responsible to set big deadlines like this for a time while the stock market is open. I imagine this will negatively impact confidence in the US economy at large, and stock markets will reflect that. But since the market is closed, we'll have to wait till Monday, with the tension/anticipation of a drop building. If the deadline had been set for say, midday thursday, the market would have responded immediately, but at least you wouldn't have the building anxiety over the weekend. Of course the result wasn't known ahead of time, and I imagine some people will argue that the weekend will give investors time to cool off instead of following their gut reaction. But personally I don't find those arguments very convincing.
> Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
Kesha tried to hug Jerry Seinfeld vibes.
> Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Strange way of saying "this vendor doesn't meet our software requirements".
> they have attempted to strong-arm the United States military into submission
Err... You approached them?
> a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
It's an orthogonal point, but "Silicon Valley ideology" has made up a significant portion of the USA's GDP for the last however many years.
> Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
Again... You approached them?
> I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security.
Like most companies in the world I imagine. They just haven't been approached yet.
> to allow for a seamless transition to a better and more patriotic service.
Internally re-framing all the recent "EU moving away from American tech!" articles as "EU builds more patriotic services!"
> This decision is final.
Nothing says "final" like a Tweet. The most uncontroversial and binding mechanism of all communication.
Should military contractors put conditions on the use of their weapons?
Here's our tank, but you can't invade Iran with it?
We think your invasion of Venezuela is illegal, we're activating the kill switch on your jets.
That's a real dangerous proposition.
If the T&C is agreed to up front, why shouldn't they be able to? If their client or potential client doesn't like the T&C, they can find another vendor.
If the government thinks the terms of Anthropic are unacceptable, they can just stop using them, right?
But why would you then retaliate and ban other companies from making business with Anthropic if they want to be a defense contractor?
How do these requirements make Anthropic a supply chain risk that makes them unusable for use by other companies?
It's perfectly reasonable for the US government to end the contract if they no longer like the terms they agreed to (assuming the contract does in fact let them); it's not reasonable to destroy the counterparty to the contract in retaliation. The line "I am altering the deal; pray I don't alter it further" is literally spoken by Darth Vader, the most comic-book of comic-book villains.
This is nice rhetoric but ignores the fact that the elected officials are bought out by other billionaires. The US is an oligarchy in a republics clothing.
And here’s the irony: Musk, who claimed only he is virtuous enough to defend us from AI, who insisted he always wanted model labs to be non profit and research focused, will now bring his for profit commercial entity into service to aid in mass domestic censorship and fully autonomous weapons of war.
In fact it won’t surprise me further if NVIDIA is strong armed into providing preference to xAI, in the interest of security, or if the government directly funds capital investments.
Anthropic saves some dignify and they’re the losers today, but we are the losers tomorrow.
In theory, this is why there should be competition in industry, because it removes the capability of a single large actor to be able to control the government's access to things.
Oddly, though, it seems like that should solve this problem as well. I'm not sure why the Department of Defense insists on Anthropic's models in particular; one would think one of the other players, at the very least least xAI, would be willing to step in and provide the capability Anthropic doesn't want to provide.
The whole thing is fascinating. In my heart of heart, in principle, I want models to be essentially unrestricted, but I still find it somewhat problematic that government thinks it can say: you will make adjust your product to match our exact expectations even if you don't sign an updated contract with us. Odd stuff. I know they are trotting out War powers, but.. well.. we are not at war ( at least not yet or at least not yet officially declared.. ).
OpenAI came out just last night or today claiming they would hold the same line as Anthropic. Makes me think both sides knew Elon had already won the contract.
Help me understand the line Anthropic is drawing in the sand?
Don't get me wrong i'm glad they are unwilling to do certain things...
but to me it also seems a little ironic that Anthropic literally is partnered with Palantir which already mass surveills the US. Claude was used in the operation in Venezuala.
Their line not to cross seems absurdly thin?
Or there is something mega scary thats already much worse they were asked to do which we dont know about I guess.
The whole reason this is happening is because Anthropic looked into how Claude was used in the Maduro op and found it to violate the negotiated terms of service.
Their hard lines are:
- no usage of AI to commit murder WITHOUT a human in the loop
I don't understand the line as well. So its no to domestic surveillance, but all other countries are a fair game? How is this an ethical stand? What sort of mental gymnastics allow Anthropic to classify this as an ethical stance?
To me all of this reads like "we don't trust our models enough yet to not cause domestic havoc, all other is fine, and we don't trust our models enough yet to not vibe-kill people". Key word being "yet".
So Anthropic cannot make deals with the US government, because they are a supply-chain risk. They can also not make deals with European governments, because Anthropic is based in the US.
So it would make sense now for Anthropic to move outside the US, e.g. to Europe or Canada to at least be able to make deals with European governments.
A level up, this is only the beginning of the political headwinds for AI. There will be a lot more, especially if constituencies begin to get displaced. I don’t think “job loss” will really occur, at least not in a dramatic way overnight. But I do believe there will be both aggressive regulation and very aggressive taxation of this technology in the near/mid-term.
It seems like some comments here are from merged threads AND front-dated?
Makes for very confusing reading when comments from "1 hour ago" are actually on preceding events from earlier, before TFA news (announcement of designation).
mods: Especially in sensitive and rapidly developing situations like this, please don't mess with timestamps of comments. It's effectively revisionism.
We can actually get a glimpse of how AI might wipe out humanity here.
Model collapse making models identify everyone as a potential threat who needs to be eliminated.
Companies should have a right to refuse such requests on moral grounds though.
This stance is vindictive. Just don't use Claude in the military. Extending it to all government agencies is not right. They do great work. Can't deny that.
How 'bout that government meddling in the free market, eh?
Every conservative needs to do some very deep, very serious soul-searching. As for me, as a hyper-progressive, I'm drawing up proposals for nationalizing real estate developers in order to force them to build new houses to sell below cost.
Its one thing to say "we cannot abide by these terms, so let's part ways", and its another entirely to respond this drastically. The Trump administration will look back on this decision as the most consequential in their efforts to win the 2026 midterms and Republican efforts in 2028. This is a $400B+ American company that has significant partial ownership from Amazon, Google, and other private equity sources; they just made serious enemies in SV, many of whom supported Trump in his 2024 election victory.
This is a pimple on the arse of said consequence. It's one tiny thing in a chain of many bigger things.
It's magnified because it's right now, but this won't affect midterm results barely a whisker compared to many other daily headlines.
There are no serious enemies to this administration in SV and I can't see this changing that. SV has bent the knee exactly like Anthropic didn't. They're not going to stand up because of this, they've proven they don't have those muscles.
OTOH it could amplify their base: “Big Tech refusing to work with us on National Security matters!” The base will never hear what/where the red line was drawn, just that Some Company in California (liberal/bad) is being Woke and Political.
While I still think the GPT models are superior, I am very inclined to keep my Claude subscription because of this news. Even if Claude provides me with the occasional response out of left-field, I find that easier to live with than a world Anthropic is fighting to avoid.
I don't think it's ever been about strong or weak, or at least I don't think that's where the differentiation is. You always want 'strong' government, committed to the things it says it's committed to.
It's more been about the size of the government; that it should do a minimal amount of control (and do it well), but leave a lot of things for "the market to decide".
Having said all that, I think this issue is just tangential to any big/small government ideology. This is a hissy fit about a defence contractor sticking to their agreement where the DoD want to change the agreement in a way that goes against the contractors Mission Statement and/or the US Constitution itself.
The old ideology of the
Republicans doesn't mean anything here. This administration is purely about 'give me what I want, now!'.
And it's whims change with the breeze. Do not look for consistency here.
Once the democrats are in the oval office again can they label palantir a supply chain risk? Is there anything stopping the administations red or blue from shutting down any company that doens't agree 100% with them politically
I can't seem to find what being designated a "Supply-Chain Risk to National Security" implies from a legal standpoint. From what I can find, it doesn't seem to be a formal legal status. Curious if anyone knows more.
Basically, if you are a federal contractor, the designation means the DoD can force you to certify that Anthropic tech is not used in the fulfillment of your government work. Because it's just a DoD designation, and an executive order and not added to the NDAA, you can still use Claude for non-government (federal) touching work.
So using Claude Code to write software for the DoD is now a no go, you'd be in breach of procurement directives now.
If they go as far as to convince congress to add Anthropic to the NDAA, that would be a nationwide ban like Huawei making it illegal for any federal contractor to use the tech anywhere in their business.
But for now, even fed contractors can still use Claude in their business, just not directly for government work.
Department of War is a teenage boy's idea of "manly" and "cool". Same with X. These juvenile idiocrats will be laughed at by children in the future studying history. "Seriously? How dumb were these people in the 21st century."
Working with the government is typically a huge pain in the ass unless you have a lot of friends on the inside. It's not hard to do the math when you you dealing with a government whose acting incredibly oppositional.
I had the co-founder of Levels and current head of the US Treasury Sam Corcos reach out to me a few weeks ago for a job. I was initially kind of excited because I had really wanted to work for the Treasury a couple years ago, so I took the phone call with him.
He called me and he seemed like a nice enough guy, but I realized that he's one of the DOGE/Elon acolytes and he started talking about how he's "fixing" the Treasury and that every engineer is apparently supposed to use Claude for everything.
It would have been a considerable pay downgrade which wouldn't necessarily be a dealbreaker but being managed by DOGE would be, but mostly relevant is that I found it kind of horrifying that we're basically trusting the entire world's bank to be "fixed" with Claude Code. It's one thing when your ad platform or something is broken, but if Claude fucks something up in the Treasury that could literally start a war. We're going to "fix" all the code with a bunch of mediocre code that literally no one on earth actually understands and that realistically no one is auditing [1].
If they're going to "fix" all the Treasury code with stuff generated by Claude, I'm not sure they will have a choice but to stick with it, because very it seems very likely to me that it will be incomprehensible to anything but Claude.
[1] Be honest, a lot of AI generated code is not actually being reviewed by humans; I suspect that a lot of the AI code that's being merged is still basically being rubber-stamped.
USA is trying to use IA for something so evil that a for profit company is risking to loose a lot of money and even close. Nobody are allowed to know what these evil things are.
So I'm very curious, assuming this happens and is later found to be an illegal order - will Anthropic have rights to redress (ie: monetary compensation)?
You would have to believe that an AI model would be 100% correct in its decision to discern an enemy from a civilian. So an intelligent lunatic, or an uninformed lunatic politician
Already there 'February 23, 2026: The Pentagon confirmed a new agreement allowing Grok use in classified systems. Defense Secretary Pete Hegseth announced it would go live soon on unclassified and classified networks, alongside other models, as part of feeding military data into AI.'
This will mean Grok becomes the defacto US Gov AI provider.
Don't worry, they will be seized by the government soon. Sounds crazy right. Not that far from the headline though, that would sound insane a mere 18 months ago.
The US is such a shit show. Personally I hope this doesn't affect Anthropic's growth and development because I quite enjoy using their products and see them evolve.
The place to set policies on the use of hammers and police enforcement is not at the counter of the hardware store. “You want a hammer but don’t have a contractors license? Are you in a training program? Oh you just want to hang framed art - can I see your lease, does it allow hammering metal into the walls?”
We govern these things through laws and a democratic process. Police enforce the laws.
I don’t want some overconfident Silicon Valley engineering firm telling me how to use my digital tools, and you shouldn’t either.
Whatever you think of this administration, our military should not have to ask contractors permission for their operations.
To stop mass surveillance and autonomous lethality, pass laws. Asking unelected tech executives to do this is asking for trouble. They have no business doing it.
> I don’t want some overconfident Silicon Valley engineering firm telling me how to use my digital tools, and you shouldn’t either.
Last I heard, a US firm can refuse to do business with the US military as a customer in general commercial contexts, there is no blanket legal duty for private companies to sell goods or services to the US military, government agencies do not have a constitutional right to (nor are they a protected category for) the purchase of goods and services from private businesses, and private contracts are voluntary so if either party doesn't like the terms they can decline.
There's the somewhat conscription-y Defense Production Act, but the US goverment making use of that in this case is fundamentally incompatible with them simultaneously declaring the exact same organisation a "supply chain risk". Even without the near simultaneous references to both in this case, it seems to me like the US admin has said:
WE DEMAND YOU SELL US YOUR STUFF OR WE'LL SHOW YOU BY BANNING OURSELVES FROM BUYING YOUR STUFF!!!!!111
Modulo Trump being more shouty and less coherent, and Hegseth being less shouty.
> To stop mass surveillance and autonomous lethality, pass laws. Asking unelected tech executives to do this is asking for trouble. They have no business doing it.
The US executive appears to consider the US constitution to not bind on them, only on their enemies.
What laws do you think you can pass, when even the constitution is seen that way?
it's funny that this is being framed as big tech vs us government, when in reality this move is probably strongly influenced by the desire to help openai and other big tech against anthropic
> Anthropic’s stance is fundamentally incompatible with American principles.
I don't think that Secretary Hegseth is qualified to speak on American principles.
Cheating on multiple spouses[1], being an active alcoholic, and being accused of multiple sexual assaults and paying off the accusers[3] is fundamentally incompatible with being a Secretary of Defense and a good leader.
Also, this violates freedom of speech and will probably get shot down in the courts.
No, stop, I understand the politics here, but I’m asking about the technical fundamentals.
LLMs produce output of unknowable and unpredictable accuracy, and as far as we know, this is a mathematically unsolvable problem. This shit should not be within 1000 miles of a weapons system. Why are we even talking about this?
You don't understand the politics if you keep asking about the red herring of technical limitations.
Anthropic could have said "you can use our technology for anything but faster-than-light travel." The military administration would have said "you're not the boss of me," and the outcome would have been exactly the same.
It's a hot-button issue, just like flag burning. Nobody ever really cared about flag burning.
By the way, your "No, stop" was rude and unnecessary, and your comment would have been stronger without it.
Once again we have the US actually doing what the says China might do in the future.
It's true that Chinese companies are extensions of the state. But they serve the state. And the state has thus far served the citizenry eg raising 800M people out of extreme poverty. China's HSR network of 32,000 miles of track was built in 20 years for ~$900B. That's less than the annual US military budget.
You can look at the relationship between the US government and US companies in one of two ways:
1. US companies serve the government but the government doesn't serve the people. After all, where's our infrastructure, healthcare, housing and education? or
2. The US government serves US corporate interests to enrich the ultra-wealthy.
Either way a handful of people are getting incredibly wealthy and all it takes is for a little corruption. Political donations, jobs after government, positions on boards and so on.
> 2. No fully autonomous weapons (kill decisions without a human in the loop)
Surveillance takes place with or without Anthropic, so depriving DoW of Anthropic models doesn't accomplish much (although it does annoy Hegseth).
The models currently used in kill decisions are probably primitive image recognition (using neural nets). Consider a drone circling an area distinguishing civilians from soldiers (by looking for presence of rifles/rpgs).
New AI models can improve identification, thus reducing false positives and increasing the number of actual adversaries targeted. Even though it sounds bad, it could have good outcomes.
But compared to what - if Anthropic's models aren't perfect but still better than existing (old school) models, it's understandable DoW still wants to use them (since they're potentially the best available, despite imperfections). I think Hegseth is saying to Anthropic: "that's our call, not yours".
But surely if Anthropic thinks there's a risk that their models might make bad decisions, and the resulting civilian or etc deaths are blamed on them, it's their right to refuse to sell it for that purpose? That's why they had those restrictions in the contract to begin with. How can they be forced to provide something?
I agree they can't be forced to provide something. I just see DoW's reasoning, and I can't fault it.
Anthropic are taking a moral position which is admirable, but in this case it could actually make people's lives worse (if we assume more false positives and fewer true positives, which is probably a fair assumption given how much better 'modern' AI is compared to the neural net image recognition of just a few years ago).
> You sound like an unhinged person if you in plain words describe what’s happening, but the Trump admin demanded Anthropic’s AI be able to kill things for it without human approval and also do mass surveillance.
> Anthropic said no, and now the admin is trying to destroy the company in retaliation.
This is the inflection point for the beginning of culling of the intellectual class. If not physically, atleast economically and socially.
A few arrests and a few in detention centres, will be enough to make them fold and grovel.
They are now categorised as "radical left" and woke.
The elections will be controlled to "prevent the radical left take over of the greatest country on the planet".
edit : The stage is also being set for total media control. My prediction is that the next target will be Google, specifically Youtube. You should start seeing talks about how the radical left is inflitrated youtube.
This is only the first year of this fascist government, and I believe the first powerful company that is taking a stance? Meta, Apple, etc. have all bent the knee right?
Batshit situation, respectable position from Dario throughout.
But there's some irony in this happening to Anthropic after all the constant hawkish fearmongering about the evil Chinese (and open source AI sentiment too).
Good. At least now I don't have to worry that my vibe-coded, unreviewed checkout button is accidentally going to hallucinate the command that blows up a kindergarten in Yemen.
So the DoW is angry because it can’t use the model produced by what they call a woke radical left company?
And nobody in the administration is concerned at all that the model itself might be somewhat against their own views?
If it was so radically woke, wouldn’t the model, as used in fully autonomous weapons, be potentially harmful to ICE officers that the left considers as a threat to the American people?
Wouldn’t the mass surveillance of Americans be biased against the right?
Bluster followed by a "we can't do it now but we will... soon". Whoever has the best model can do what they please you'll see. I work with these things daily as an engineer (been doing this shit for 25 years and wow it's like mana from heaven these days). Believe me no one is going to screw with themselves by not using the best one and right now Anthropic has it.
I think odds are high a lot of these posts are by staffers. The posting volume is bananas, even granting that he spends a lot more time personally online and watching cable news et c. than any prior president, I don’t think there’s any way they’re all by him.
I do think a lot of the more hot-take type posts (often in response to stuff he’s watching on tv) are either actually him, or he’s dictating to an aide. These larger policy-type ones that he treats as quasi-executive-orders, I think are likely drafted by one or more of his cabinet-level folks, or others roughly as high up. That’s just my speculation based on reading the “tea leaves”, though.
As for official word, it waffles between “all of it’s him” and “oh not that one though, that racist video repost was a staffer who made a mistake”, so that’s little help in sussing out the truth (but I am rather certain they’re not all directly written and posted by him)
Presumably Trump will be returning his $90 million in lawsuit booty now that it's been decided you cannot say no to the government right? Heck he dodged the draft 5 times.
I don't know if we should be terrified by Hegseth's response, or relieved that the government doesn't just shrug and lie over privately agreed upon terms.
I can't wait to read the transcript of the AUSA in front of a federal judge trying to explain threatening to declare a company a supply chain risk if the company doesn't supply things to the government.
This is going to have two unintended consequences.
One, it’s going to fuck with the AI fundraising market. That includes for IPO. If Trump can do this to Anthropic, a Dem President will do it to xAI. We have no idea where the contagion stops.
Two, Anthropic will win in the long run. In corporate America. Overseas. And with consumers. And, I suspect, with investors.
A lot of corporate America contracts for the military in some capacity (it's a giant piggy bank and if you jump through a few hoops you get to siphon money out of it, so of course they do) and assuming this Tweet is accurate (Jesus, what a world) this will also affect them.
IDK maybe they have corporate structures that avoid letting this kind of thing mess too badly with the parts of their company that don't have contact with the government, or maybe it'll only apply to specifically the work they do for the government, but otherwise I expect it'll be devastating for Anthropic's B2B effort.
> If Trump can do this to Anthropic, a Dem President will do it to xAI. We have no idea where the contagion stops.
Will the next Democratic President do it to xAI? On what grounds?
The Biden admin negotiated a contract with a supplier with terms which are – to the best of my knowledge – rather unprecedented – do Pentagon contracts normally have terms like this, restricting the government's use of the supplied good or service? Do missile or plane contracts with Boeing or Lockheed Martin contain restrictions on what kind of operations that hardware will be used in? I don't think that's the norm. So the next administration tears up a contract made by the previous admin with unusual terms – nothing unexpected about that. The "hardball" of declaring them a "supply chain risk" is escalating this dispute to a never-before-seen level, but the underlying action of cancelling the contract isn't. I honestly suspect the "supply chain risk" aspect will be suspended by the courts, and/or heavily watered down in the implementation; but the act of cancelling the contract in itself seems legally airtight.
Next Democratic administration inherits a contract with xAI (and quite possibly OpenAI and/or Google too) – with presumably standard terms. I can totally understand the political desire for vengeance. But what's the actual legal justification for it? Facially, the current administration has a politically neutral justification for what they are doing, even if some suspect there is some deeper political motivation. Will the next Democratic administration have such a facial justification for doing the same to xAI?
Plus, Democrats always sell themselves on "we obey norms". They have the structural disadvantage that either they keep their word on that, and can't do the same things back, or they break their word, and risk losing the people who supported them based on that word.
> Will the next Democratic President do it to xAI? On what grounds?
Elon being affiliated with Trump. About the strength of logic that makes Dario woke.
> don't think that's the norm
Norms are different from law or contract. And yes, lots of service providers limit where their civilians can be deployed and under what circumstances.
> can totally understand the political desire for vengeance. But what's the actual legal justification for it?
President has core Constitutional control of the military.
> Democrats always sell themselves on "we obey norms"
That hasn't worked. The American electorate is looking for change. And up-and-coming Democrats are picking up on that.
> risk losing the people who supported them based on that word
The Democrat base absolutely wants vengeance. It doesn't play in swing states. But it probably also doesn't hurt. These are court politics, at the end of the day.
> Elon being affiliated with Trump. About the strength of logic that makes Dario woke.
I think you have to distinguish between the official justification and some of the associated political rhetoric.
Official justification: "Previous admin agreed contract with unprecedented terms, we demand those terms be removed, vendor is refusing to renegotiate"
Political rhetoric: "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!"
If you forget about the political framing, and look at the official justification in the abstract, it doesn't actually seem facially unreasonable. The escalation to "supply chain risk" is a different story, but the core contract dispute and cancelling the contract as a result of it isn't.
So the question is, can Democrats come up with an equivalent abstract official justification–if so, what will it be? Or do they decide they don't even need that–in which case they aren't just matching Trump, they are going even further down the road to normlessness than he's gone.
> And yes, lots of service providers limit where their civilians can be deployed and under what circumstances.
There's a big difference between contracts for boots-on-the-ground and contracts for hardware/software. There is lots of precedent for contractual limitations on how boots-on-the-ground can be used. I'm not aware of similar precedent for hardware or software.
> That hasn't worked. The American electorate is looking for change. And up-and-coming Democrats are picking up on that.
Are they? Gavin Newsom? Zohran Mamdani? AOC? Do they actually sell themselves as "we see Trump breaking the rules, and we'll break them just as hard, even moreso"?
> The Democrat base absolutely wants vengeance. It doesn't play in swing states. But it probably also doesn't hurt.
It is too early to tell. You can argue in the abstract that X approximately equals Y, so if swing voters will tolerate the GOP doing X, they'll also tolerate Democrats doing Y – but the actual swing voters might not agree with you on that.
I'd at least, you know, pretend we had a top-secret amazing model. By airing all of this publicly, they've basically admitted that Claude is the best there is.
I think an important point to consider is that the administration's demands for domestic deployment and automation of homicide are not so much due to a lack of technical ability or personnel resources to achieve sought-for military-strategic outcomes, but an unwillingness for anyone in the administration to take on the responsibility for those decisions.
If an employee of the government makes a decision that subsequently turns out to be very very unpopular, that unpopularity is sooner or later going to coalesce and land on them, and the more unpopular it turns out to be the less of a shield legal arguments about immunity or pardons will be because so many people are increasingly out of patience with a system they deem to be corrupt. Being able to offload the political, legal, and personal risks of extremely consequential decisions onto The Bad Computer System is the political equivalent of crack cocaine - you might know that the feeling of freedom and power it provides is wholly illusory, you might know that it's likely to ruin your own and many other lives, you might know that it's a disaster for the health of the body politic...but it also offers the possibility that you can have an absolute blast and get away with it.
My anecdotal experience of being around wealthy and powerful people over the years inclines me to think that not only do our social systems select in favor of people who take big risks for big rewards, but that virtually everyone in that class has a) done a lot of getting away with things legally speaking and b) enjoys using illegal drugs. Even if they've given up recreational drug taking or limit it to strictly defined times and places so as not to interfere with their business/personal success, they like thrills and have confidence about their ability to enjoy them without negative consequences. You need some of that risk-taking, high personal autonomy attitude if you aspire to be a mover and shaker as opposed to a leading figure in risk management or regulatory compliance.
Everyone enjoys the feeling of power without responsibility; it's a fundamental underpinning of games and many other kinds of recreation. Add in significant amounts of money and people think differently about risk, as in the topical case of the experienced Supreme Court litigator who turned out to have have a secret life as a high-stakes poker gambler and eventually started betting against the IRS while filing his taxes (https://www.politico.com/news/2026/02/25/supreme-court-litig...).
Now, if you're in the political-military sphere and you get your thrills by literally redrawing lines and relationships on the map of the world and deciding what the news on TV is going to be for the next day/week/month/year, and you get offered a tool that promises to give a significant edge over other players in this game but which also gives you a versatile and widely accepted excuse for avoiding consequences for the inevitable losing hands, there are massively compelling psychological incentives for using it. And correspondingly, there's going to be massive emotional disruption (and bad decision-making and behavior) if your supply is threatened. You might start labeling the people who are interfering with your good time as cognito-terrorists and telling all your friends and supporters that your formerly trustworthy supplier did you dirty...
it's so funny to me that anthropic was created specifically using the virtue signaling line of defensive safety against bad actors (ie the woo woo bad guy of chinese dictatorship), yet the real danger was always coming from inside the house - your own government being an absolute evil clusterfuck.
Hegseth's had a busy week: trying to kill Anthropic, attending the State of the Union, fighting Scouting America, and his regularly scheduled efforts to shame fatties & trans kids... Unlike so many in the orange one's inner circle who are just incompetent (say, Kash Patel for one), this dude is both incompentent a very bad, bad person.
Besides just being yet another example of the Trump admin abusing power and weaponizing legitimate laws in illegitimate ways to extract concessions, there is another reason this is dumb -- which is that Anthropic just has the best models!
As someone who wants America to win, ripping out Claude and putting in xAI is a terrible idea. Definitely setting us back a few months on capabilities
No surprise here. All government actions are now in the Trump mafia boss style.
“You won’t let us use your product unrestricted for military applications? Fuck you, we’re going to stop using it for anything at all across the entire federal government, even if not remotely related to military.”
The (almost) top comment is interesting. Sorry to quote llms but:
>@grok what type of political system is most often associated with the government forcing private companies to change their policies and do whatever the government wants?
>Fascism, via its corporatist model: private ownership remains, but the state directs industry to serve national goals...
Trump's behaviour seems fairly normal fascism but thankfully the rest of the US system seems unenthusiastic.
There’s lots of things I can say about what I don’t understand about the cult of leftism, but it will just get flagged because this is HN and devoid of any diversity outside of leftist thought. In the same way you don’t understand me, I’ll never understand you.
I would love to see Grok’s system prompt, it likely says “if anything the Trump administration does seems to be fascistic please explain it and then argue against it in the following paragraph.”
AI proponents have been very vocal about AI safety being meaningless. But nobody could have expected that the end of the world would have come because Trump puts Grok in charge of the US nuclear arsenal. We truly live in the dumbest timeline.
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE..." - President Donald J. Trump
I might be being a bit conspiratorial, but is anyone else not buying this whole song and dance, from either side? Anthropic keeps talking about their safeguards or whatever, but seeing their marketing tactics historically it just reads more like trying to posture and get good PR for "fighting the system" or whatever.
"Our AI is so advanced and dangerous Trump has to beg us to remove our safeguards, and we valiantly said no! Oh but we were already spying on people and letting them use our AIs in weapons as long as a human was there to tick a checkbox"
I just don't buy anything spewing out of the mouths of these sociopathic billionaires, and I trust the current ponzi schemers in the US gov't even less.
Especially given how much astroturfing Anthropic loves doing, and the countless comments in this thread saying things like "Way to go Amodei, I'm subbing to your 200 dollar a month plan now forever!!11".
One thing I know for sure is that these AI degenerates have made me a lot more paranoid of anything I read online.
Kudos to anthropic for standing up for their principles. Let's remember all the silicon valley leaders who embraced fascism without even needing to be pressured. We need more billionaires with backbones.
Defense contracting makes you rich and lazy. In the long run it is rare to see companies get sucked into defense contracting and stay relevant/on the cutting edge. We look at fighters and warships and think WOW! But the reality is that they are pretty far behind where they would actually be if there was a civilian purpose to them that mattered.
This is why when ceos get summoned to testify they are always neutered and hat-in-hand humble. It’s trivial for the us gov to destroy any business unless you reach too big to fail status. Anthropic nor OpenAI is too big too fail yet.
Unfortunately their models suck, though. The difference between the best Grok model and Opus 4.6 is night and day, and not only for coding, but entirely across-the-board.
There was already a Democrat that beat Trump once. And looking at the past elections, it looks like the US elections are currently in a pendulum where the balance of power just swings back and forth.
Yes but you are not suggesting Biden runs again? I meant now, who looks like they could beat the Trump machine, possibly Gavin Newsom but not popular outside of Cali.
Surely you can appreciate that Biden was an abnormally weak candidate (how many times did he try to win on his own merits, only to just squeak in on a tide of anti-Trump voters?). Pretty much anyone will be able to beat the GOP candidate at the next election. And it will likely be the biggest landslide since Reagan. Only MAGA thinks they are popular right now, but back in the real world they are deeply, deeply unpopular. And you know they are going to double down and make it even worse over the next couple years.
I don’t know what will happen, but it still could work out to benefit Anthropic. I believe the public sentiment is OVERWHELMINGLY with Anthropic on this one. Both their stance and standing up to Trump bullies.
Appealing to the pragmatic and the "game theory" of complying with authoritarian rule that you don't have power over - because the other party that you don't have any power over will benefit from it - is a zero-sum argument.
Procurement decisions are not authoritarian rule. A government agency deciding that a vendor doesn't meet its operational requirements and setting a timeline to transition off that vendor is one of the most ordinary functions of institutional management. Every organization, public or private, does this. Authoritarian rule involves the coercive suppression of rights or autonomy. Choosing not to renew a contract with a provider who has voluntarily excluded itself from your use case is the opposite of coercion; it's respecting that provider's choice and acting accordingly.
The "zero-sum" label is equally off-base. Zero-sum describes a situation where one party's gain is necessarily another's loss, and that is precisely the nature of military capability competition. If an adversary fields unrestricted AI systems and you field restricted ones, the gap is real and the consequences are asymmetric. You don't have to like that reality, but calling it a zero-sum argument as though it's a rhetorical trick misidentifies what's actually a structural condition. The term you seem to be reaching for is something closer to "fear-based reasoning" or "false dilemma," but neither of those applies cleanly here either, because the competitive dynamic being described is well-documented and not hypothetical.
If there's a genuine objection to be made, and there may well be, it has to engage with the specifics: whether the restrictions in question actually matter operationally, whether the transition plan is proportionate, whether the policy creates worse risks than it solves. That's where the real debate is.
As best I can tell, his hard-drinking era ended many years before he entered the cabinet. But this does feel like a pretty impulsive decision, and there's some ambiguity over whether this statement was approved by the WH, or whether this was just the SECDEF taking it to the next level to look super loyal and badass. This ambiguity gives the WH room to walk it back in the coming weeks, depending on how things evolve.
I can honestly understand both positions. The U.S. military must be able to use technology as it sees fit; it cannot allow private companies to control the use of military equipment. Anthropic must prevent a future where AIs make autonomous life and death decisions without humans in the loop. Living in that future is completely untenable.
What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots is something U.S. government has interest in preventing.
> it cannot allow private companies to control the use of military equipment.
The big difference here is that Claude is not military equipment. It's a public, general purpose model. The terms of use/service were part of the contract with the DoD. The DoD is trying to forcibly alter the deal, and Anthropic is 100% in the clear to say "no, a contract is a contract, suck it up buttercup."
We aren't talking about Lockheed here making an F-35 and then telling the DoD "oh, but you can't use our very obvious weapon to kill people."
> Surely autonomous murderous robots is something U.S. government has interest in preventing
After this fiasco, obviously not. It's quite clear the DoD most definitely wants autonomous murder robots, and also wants mass domestic surveillance.
Because the current government wants unquestioning obedience, not a discussion (assuming they were capable of that level of nuanced thought in the first place). The position of this government is "just do what I say or I will hit you with the first stick that comes to hand".
If the government doesn't want to sign a deal on Anthropic's terms, they can just not sign the deal. Abusing their powers to try to kill Anthropic's ability to do business with other companies is 10000% bullshit.
I can see both sides as pertains to Trump's initial decision to stop working with Claude, but now, this over-the-top "supply chain risk" designation from Hegseth is something else. It's hard to square it with any real principle that I've seen the admin articulate.
> What I don’t understand is why the two parties couldn’t reach agreement.
Someday we'll have to elect a POTUS who is known for his negotiation and dealmaking skills.
> What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots is something U.S. government has interest in preventing.
Consider the government. It’s Hegseth making this decision, and he considers the US military’s adherence to law to be a risk to his plans.
I am fine with this. If you are a defense contractor, you are a defense contractor, and you follow the military needs that you government believes are necessary - or you stop being a defense contractor.
I wouldn't want a bullet manufacturer to hold back on my government based on their own internal sense of ethics (whether I agreed with it or not, it's not their place)
You're fine with a company being designated a supply chain risk, a designation heretofore used exclusively for foreign adversaries and usually a death knell for most companies, because the government wants to break a negotiated terms of service and contract that they already accepted?
Everyone is getting wrapped around the axel here but this is about the big picture, not the specifics. A private company should not have the ability to dictate how its technology is used by the government. If they can’t agree to that, then don’t sell your technology to the government. Personally, I don’t want to be spied on by the government with it (I don’t think their tech does that) but I also don’t want Anthropic having operational control over a mission.
That's exactly what is happening... Anthropic are choosing not to sell their technology to the government. I'm not sure what you're suggesting otherwise here.
The disconnect here for me is, I assume the DoW and Anthropic signed a contract at some point and that contract most likely stipulated that these are the things they can do and these are the things they can't do.
I would assume the original terms the DoW is now railing against were in those original contracts that they signed. In that case it looks like the DoW is acting in bad faith here, they signed the original contact and agreed to those terms, then they went back and said no, you need to remove those safeguards to which Anthropic is (rightly so) saying no.
Am I missing something here?
EDIT: Re-reading Dario's post[1] from this morning I'm not missing anything. Those use cases were never part of the original contacts:
> Two such use cases have never been included in our contracts with the Department of War
So yeah this seems pretty cut and dry. Dow signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated them as a supply chain risk as retaliation.
My final opinion on this is Dario and Anthropic is in the right and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And this doesn't even take into consideration the moral and ethical implications.
[1]: https://www.anthropic.com/news/statement-department-of-war
The administration's approach to contracts, agreements, treaties and so on could be summed up as 'I am altering the deal. Pray I do not alter it further.'
The basic problem in our polity is that we've collectively transferred the guilty pleasure of aligning a charismatic villain in fiction to doing the same in real life. The top echelons of our government are occupied by celebrities and influencers whose expertise is in performance rather than policy. For years now they've leaned into the aesthetics of being bad guys, performative cruelty, committing fictional atrocities, and so forth. Some MAGA influencers have even adopted the Imperial iconography from Star Wars as a means of differentiating themselves from liberal/democratic adoption of the 'rebel' iconography. So you have have influencers like conservative entrepreneur Alex Muse who styles his online presence as an Imperial stormtrooper. As Poe's law observes, at some point the ironic/sarcastic frame becomes obsolete and you get political proxies and members of the administration arguing for actual infringements of civil liberties, war crimes, violations of the Constitution and so on.
I think it's the other way around. They have always wanted to do those cruel things that have real victims. It took them many years of dedicated, coordinated efforts as they slowly inched many systems to align with their insane ideas. The villain branding is just that - branding. Many of them actually like the 'bad guys' in those stories, especially if those bad guys are portrayed as strong, uncompromising, militaristic, inhumane, and having simple, memorable iconography that instills fear - the more allusions to real life fascists, the better. But that enjoyment follows from their ideology and what they want to do in the world, not the other way around.
And as an aside to this: even the people coopting the rebel iconography are supporting genocide, atrocities and war crimes.
Like, Mark Hamill himself is a massive Israel + Biden supporter [0].
Guys, George Lucas didn't make the Empire thinking about Trump, or Republicans. He made it about America.
0 - https://www.nme.com/news/film/hollywood-stars-sign-open-lett...
Ehh, Hamill's take on Israel is pretty middle of the road and diplomatic[1]: support for the people of Palestine and Israel while not at all supporting the governments of those places.
[1] https://xcancel.com/MarkHamill/status/1725979647991537786?la...
The writeup here[1] was pretty clear to me.
> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.
> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.
[1]: https://www.astralcodexten.com/p/the-pentagon-threatens-anth...
I just wish there was a stronger source on this. I am inclined to agree you and the source you cited, but unfortunately
> [1] This story requires some reading between the lines - the exact text of the contract isn’t available - but something like it is suggested by the way both sides have been presenting the negotiations.
I deal with far too many people who won't believe me without 10 bullet-proof sources but get very angry with me if I won't take their word without a source :(
That's a fair point, but I think Dario's quote in GP corroborates ACX's story quite well:
> "Two such use cases have never been included in our contracts with the Department of War..."
> "Two such use cases have never been included in our contracts with the Department of War..."
While I agree with Anthropic's position on this regardless, the original contract wording does matter in terms of making either the government look even more unreasonable or Anthropic look a little less reasonable.
The issue is a subtle ambiguity in Dario's statement: "...have never been included in our contracts" because it leaves two possibilities: 1. those two conditions were explicitly mentioned and disallowed in the contract, or 2. they weren't in the contract itself - and are disallowed by Anthropic's Terms of Service and complying with the ToS is a condition in the contract (which would be typical).
If that's the case, then it matters if the ToS disallowed those two uses at the time the original contract was signed, or if the ToS was revised since signing. Anthropic is still 100% in the right if the ToS disallowed these uses at the time of signing and the ToS was an explicit condition of the contract, since contracts often loop in the ToS as a condition while not precluding the ToS being updated.
However, if the ToS was updated after contract signing and Anthropic added or expanded the wording of those two provisions, then the DoD, IMHO, has a tiny shred of justification to complain and stop using Anthropic. Of course, going much further and banning the entire US government (and contractors) from using Anthropic for any use, including all the ones where these two provisions don't matter - is egregiously punitive and shitty.
While the contract wording itself may be subject to NDA, it would be helpful if Anthropic's statements could be a bit more precise. For example, if Dario had said "have always been disallowed in our contracts" this ambiguity wouldn't exist.
It does not matter. If Anthropic had been precise in this narrow way, there would have been some other nitpick to raise.
You're trying desperately to find a way that things can be at least a little normal, and I really do get it. It would be great if such a way existed. But it doesn't. I recommend you take a social media break like I'm about to, take the time you need to mourn the era of normal politics, and come back with a full understanding that the US government is not pursuing normal policy objectives with bad decisions. They hate you and they hate me for not being on their side, and their primary goal is to ensure that we're as miserable as they can make us.
I'm in a weird spot where I do agree with your assessment of the core claim. But putting that aside, in the world where the DoW's claim _is_ correct -- I think you don't have any choice other than to designate them a supply chain risk.
Disregarding who is right or wrong for a moment, if the DoW are right (which I'm not personally inclined to believe, but we're ignoring that for the moment) -- how else can they avoid secondhand Claude poisoning?
Supposing they really want to use their software for things disallowed by Claude's (now or future) ToS, it seems like designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude (either indirectly as a wrapper or tertially through use of generated code etc)
> designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude
I agree that if the DoW claim is correct (and I doubt it is), then, sure, the DoW dropping Anthropic and precluding the DoW's suppliers from using Anthropic for any DoW work would be expected. However, the "supply chain risk" designation they are deploying goes far beyond that to block Anthropic use by any supplier to any part of the entire U.S. government for anything.
For example, no one at Crayola can use Anthropic for anything because Crayola sells crayons to the Education Dept. The DoW already has much less draconian ways to restrict what their direct suppliers use to build things for military applications. But instead of addressing the actual risk in a normal measured way, they are choosing to use a nuke against a grenade-sized problem. This "supply chain risk" designation is rarely used and has never been used against a U.S. company. It's used against Chinese or Russian companies when in cases where there's credible risk of sabotage or espionage. That's why that particular designation always blocks all products from an entire company for any application by any part of the U.S. Government, contractors and suppliers (which is why it's never been used against a U.S. company).
One positive thing I will say about this administration is that they have really drawn into focus the difference between de jure and de facto law.
My hope is that this gets us some real concern for things that have been defended with de facto arguments (i.e. privacy) going forward.
edit: Anthropic argues that your Crayola analogy is fundamentally incorrect.
> Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
https://www.anthropic.com/news/statement-comments-secretary-...
> Anthropic argues that your Crayola analogy is fundamentally incorrect.
Yes, I just saw Dario's latest post with that more detailed info. My understanding was informed by news reporting in a couple different outlets but those reports may have been conflating the "supply chain risk" designation (under 10 USC 3252) with the net effect of statements from the pentagon and white house which go substantially further.
Even if it's not in the legal scope of 10 USC 3252, the administration has made clear they intend to ban Anthropic from use across the federal government. AFAICT doing that is probably within the discretionary remit of the executive branch, even though I believe it's unprecedented - to your point about de jure and de facto law.
To me, if there's a silver lining to all this, it's making a strong case for restricting executive branch power.
Edit to add: Per the Wall Street Journal's lead story (updated in the last hour): "The General Services Administration, which oversees federal procurement, said it is removing Anthropic from its product offerings to government agencies... Even absent the supply-chain risk designation, broadening the clash to include all federal agencies takes the Anthropic fight to a much larger scale than its spat with the Pentagon."
How would this risk be mitigated by signing a contract? Seems like “supply chain poisoning as treason” is probably not going to stopped by a piece of paper. You either trust anthropic or you don’t but the deal has nothing to do with it.
Isn't the point that they aren't entering into a contract with them, they are just ensuring that none of their still trusted suppliers repackage Anthropic without their knowledge?
I’m not sure, but I think you’re right. I was thinking about the logical implications of the. If they are a supply chain risk without a contract, how does the existence of a contract suddenly make them not a risk? Especially if the DoD strong arms them into a deal.
Because the act that the SCR designation would “protect” against is treason, so I don’t think people would care too much whether there’s a contract.
Also, Trump's own words complaining about being forced to stick to Anthropic's terms of service:
> The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.
His M.O. is to accuse his opponent of the very thing he is doing. It’s the party of bad-faith.
[flagged]
In this case, do you really believe that we should trust an EA less than this administration? EA as bad people is a stereotype; corruption, fraud, and breaking the law is the standard MO for this administration.
(Or maybe it’s catchier to respond glibly with “never trust a child rapist and convicted felon.”)
Not comparing. Sometimes, there are 2 bad apples.
In this case, the choice is between the two apples, so I’d pick the one less obviously rotten. Sadly that is the current administration that operates in pure lawlessness.
This administration needs the benefit of the doubt always. This administration deserves the benefit of the doubt never.
Those people are dealing with you in bad faith, and you need to cut them off before they try to overthrow your government again.
Yeah, that should have been in the contract too -- no using our software to overthrow the government or to implement a fascist state.
I think a big question mark here, is whether anything said on Anthropic's side if in the framing of "We have a thing going on that we are trying to communicate around where a canary notice if it existed would no longer be updated"
It isn't about commercial agreements, it's about patriotism. The national industry is supposed to submit to the military's wishes to the extent that they get compensated. Here it's a question or virtue.
The Pentagon feels it isn't Anthropic to set boundaries as to how their tech is used (for defense) since it can't force its will, then it bans doing business with them.
If anthropic is saying “you can use our models for anything other than domestic spying or autonomous weapons” and the pentagon replies “we will use other models then”, I'd say Anthropic are the patriots here...
I like the endless consideration for spying on allies. or wait...
One battle at a time
I'm guessing you're being down voted because people don't know if you think that's a good thing or not. I do not think it's a good thing. Do you?
I absolutely do not think that's a good thing. Was stating some sad facts.
I had the same thing happen to me when I posted about how unbridled capitalism requires external costs in the form of pollution and what not. I didn't make it clear that I thought it was a terrible truth.
Once the hive decides you're being serious without checking, they turn the down vote button into an I disagree with you button.
This is actually one of the reasons I left Reddit. I hate to see it here.
It likely helps to take in the cultural moment or context around the statements or the nature of the statements you're making. It's fine to state a fact but it's also helpful to make it clear whether you are saying "it is what it is " or "I wish things were different" or "I am doing X, Y, and Z to try and help and I recommend others do so". Jokes are an exception and I think misunderstandings are fine there. But it's unreasonable to think that on the Internet, people will "check to see if you are serious".
The comment was serious. It didn't feel the need to take a side.
The DoD declaration reflects a certain context, we had the patriotic act, a whistleblower exiled in Russia for defending the constitution, etc etc. We didn't need to wait a MAGA movement to be expecting such comment from the DoD.
If hackernews threads turn into mouthpieces for opinions then we have no use posting anything in here.
The comments are naively claiming commercial agreements make Anthropic right, as if contracts had more weight than the constitution.
I would rather call out a "virtuous signalling" entity in the valley simply standing for something aligned with civil liberties, and using it as a political stance in what nobody would deny is an unfortunate polarized political climate.
What to make of OpenAI then. Should I give my opinion that they took a falsely constitutional stance, or simply made for-profit move to land a juicy government contract, while making the public think they kept the same red lines as their main competitor?
Or just stick to the fact: The DoD will, as always, get away with its liberticide demands to get what it wants, because other big tech will fall inline.
[flagged]
Personally, I'd like to do everything in my power to make nationalists feel unwelcome on this site. (But I think OP was merely being descriptive.)
[flagged]
Bravo. It does take real courage to bully people anonymously while safely posting from your mom's basement.
I fully acknowledge that it doesn't take much courage to bully people anonymously on HN. I don't claim to have any deep well of courage in real life either - many of my friends were already radicalized against OpenAI for other reasons, I don't expect to face professional consequences for being angry about this, and I might not be so willing to go scorched earth if either of those weren't true. Just wanted to explain where the world is at and why people should expect to see further incivility about this.
What's your definition of "patriotism" and why do private companies need to be "patriotic"? How do you reconcile this with the Constitutional guarantees of freedom of speech, freedom of association, and so on?
The US isn't Iran, North Korea, or even China, as much as some people, including the US president, seem want to emulate those models.
>The national industry is supposed to submit to the military's wishes to the extent that they get compensated.
According to whom?
He's reading the room.
No, not this room. The one with Hegseth in it.
Look at his other comments. He's not wrong.
No one cares if the Pentagon refuses to do business with Anthropic. But Hegseth has declared that effective immediately, no one else working with the DoD can either--which includes the companies hosting Anthropics models (Amazon, Microsoft, and Alphabet).
So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".
Which miiight impact the amount of inference the DoD would be able to get done in those six months.
> So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".
> Which miiight impact the amount of inference the DoD would be able to get done in those six months.
Which might not be by accident looking at the Truth Social posts which state "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
I would not be surprised to see this being used as an excuse to nationalize Anthropic.
To attempt to nationalize Anthropic. I'm sure there would be court cases filed almost immediately, restraining orders, months of cases and then appeals and then appeals of the appeals.
I think you were downvoted due to your use of "patriotism" (specifically without scare quotes) because that word is usually used with an intended positive connotation. So the reader gets the impression that you think that submitting to the DoD’s wishes is how things ought to be.
[flagged]
Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.
Imagine a _leaded_ pipe supplier not being allowed to tell the department of war they shouldn't use leaded pipes for drinking water! It's the job of the vendor to tell the customer appropriate usage.
This is quite literally the norm for things with known dangerous use cases.
Go look at the package on a kitchen knife and it says not to be used as a weapon
Playing devil's advocate: if I did in fact grab one of my kitchen knives to defend myself against a violent intruder into my kitchen, I wouldn't expect to be banned from buying kitchen knives.
I'm not sure this is still a useful analogy, though...
And if you grabbed the knife and went on a violent spree, I'd absolutely expect the knife manufacturer to refuse to sell to you anymore.
The knife manufacturer isn't obligated to sell to you in either case, I'd expect them not to cut ties with you in the self defence scenario. But it is their choice.
The knife manufacturer would be more than happy to continue to sell to you, except for that minor little detail that you're in jail.
Any knife vendor who
1. Found out you used their knives to go murdering
2. Sells knives in a fashion where it's possible for them to prevent you from buying their knives (i.e. direct to consumer sales)
Would almost certainly not "be more than happy to continue to sell to you". Even if we ignore the fact that most people are simply against assisting in murders (which by itself is a sufficient justification in most companies), the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.
Meh. Not sure why knife dealers would be assumed to be more moral than firearms dealers. See, e.g. Delana v. CED Sales (Missouri)
> the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.
That... Doesn't happen.
Boycotts by people who weren't going to buy your product anyway are immaterial to business. The inevitable lawsuits are costly, but are generally thought of as good publicity, because they keep the business name in the news.
People who buy luxury kitchen knives are exactly the type of people who would choose not to buy a product because it is associated with crime.
People who buy (and make) firearms are... pretty close to the exact opposite.
So now it's "luxury" kitchen knives?
Goalposts moved.
Direct to consumer sales of kitchen knives are entirely luxury products... the goalposts are exactly where they've always been.
Ahhh, direct to consumer.
Where either it's a computer program (website) that knows nothing about you, or cutco.
If you think you wouldn't find a cutco representative to sell to you, you're on some good reality-altering drugs.
sotto voce the knives are a metaphor
Doesn't matter.
There will always be some company willing to sell to even the worst person, in any product category.
The response that companies have to boycotts, and the results of the boycotts themselves, are fractally chaotic at best.
But even most nominally socially-aware companies are reactive, rather than proactive.
Since the knife vendors were metaphors for AI vendors, is the comparison you want to make "AI vendors & weapons manufacturers"? That's the standard we should judge them by?
It's not about the standard we should judge them by, which is equivalent to how we think they should act.
It's about how we think they will act.
Especially when it comes to sales to the US military, I have no expectations about how companies will act.
Hell, just look at how many companies willingly helped China with their Great Firewall.
> Not sure why knife dealers would be assumed to be more moral than firearms dealers
What I mean is that you _did_ judge them by a standard used for weapons manufacturers. How you react to their actions _is_ your judgement.
But perhaps that is the standard we should use. Weapons manufacturing is a well regulated industry after all. Export controls, dual-use technology restrictions, if it has applications for warfare it should be appropriately restricted.
> is that you _did_ judge them by a standard used for weapons manufacturers.
I think any of these companies will attempt to get away with whatever the fuck they can.
That has fuckall to do with your rhetorical question of:
> That's the standard we should judge them by?
If I shoot someone, something that is explicitly warned against in firearm safety materials that come with every purchase of a new firearm, I am no longer allowed to purchase any more firearms.
There are many situations in which you can shoot someone and still be allowed to buy a gun.
Also, in the cases you can't, it's generally the government stopping you, not the gun companies.
That's for a different reason though--you broke the law.
The specific shape of a kitchen knife would make it a particularly poor fighting knife, and knives in general are bad for self defense, due to the potential for it to be turned against the user. So, there is a good argument that such a suggestion is really in the user's best interest rather than a cynical play for the manufacturer to limit liability.
These knife and lead analogies don't map well to the reality of AI. Note: just talking about the analogy itself not the point you are making.
Edit: hell I get downvoted and look where the knife analogy got us. A load of weird replies miles away from anything related to AI or DoD.
I agree. I hoped people would get my point, but instead are arguing about gun laws for some reason?
You should give it longer than an hour before you start complaining about downvotes. Or just let your comment stand on it's own.
Seconded. You can't see all the up and down votes, only the balance at the moment you look, and it's not too uncommon to be negative or even dead and be upped or vouched back to life later.
No it isn't. There are warnings, but once a knife is yours you are free to do whatever you want with it, including reselling it to someone else. The idea of terms of service of using something is not something that typically exists with physical objects that one can own. They can't take your knife away from you because you decided to use it for a medical purpose without purchasing a medical license for the knife.
They also have other vendors.
Claude Opus is just remarkably good at analysis IMO, much better than any competitor I’ve tried. It was remarkably good and complete at helping me with some health issues I’ve had in the past few months. If you were to turn that kind of analytical power in a way to observe the behaviour of American citizens and to change it perhaps, to make them vote a certain way. Or something like - finding terrorists, finding patterns that help you identify undocumented people.
Or how to best direct the power of the military against the US civilian population. They keep trying.
I have used chatgpt 5.2 thinking for health, gemini hallucinates a lot, specially with dna analysis. Never tried using the new claude even though i have access through antigravity. Might give it a try. Do you have any tips on how to approach it for health ‘analytical power’?
I just made a project, added all my exams (they were piling up, me and my psychiatrist had been investigating for a year this to no avail) and started talking to it about my symptoms.
Within a few iterations of this it gave me a simple blood panel, then I did that one and it kept suggesting more simple lab or at home tests and we kept going through them until I was reasonably certain of “something” and now that I have hypothesis I am going to a doctor. I think it’s done a great job. I also kept asking it for simple lifestyle interventions to prevent progression of my issue and it consistent nailed it - one particular interverntion (adding salt to water and drinking it to prevent symptoms) made a huge improvement to my life - I was barely working before that.
I added in some text the instructions box (project master prompt) for it to realise - it’s not medical advice and I am aware of that (prevents excessive guardrails) - add confidence intervals and probability to all diagnostic statements (prevents me + Claude going into rabbit holes so easily, it often has 70-80% certainty of what it’s saying, but it’s clear that it doesn’t use the right language) - that It was talking to an non expert, to use simple language but to go into detail when necessary. I also ask it to stop doing unnecessary constant follow up questions to every answer as that causes me anxiety. I can share the prompt, in fact I might do so later as it might be useful to others.
Here is the prompt and a few notes on operation.
Make sure your first chat is about the exams in the project files. Make sure it reads them all. It has a tendency to read a few and go “is this good”. Ask for a summary and note any absences.
Try using the research and extended thinking features a lot if you think it’s not fully aware of anything. It might not be aware of more recent research. If it’s a serious condition you are researching, just ask it to do sweeps / use research to look for new info about it and find new papers. It might also deepen its understanding.
After you do research you can make a simple artefact and throw it onto the project files. That allows it to refer to it and gain more knowledge about a condition or issue that might not be as rich in the training data.
So, I find GPT to be so so bad for this it made me realise a bit on why the USG is so insistent. Claude Opus is just on a different class.
Here’s the master project prompt:
Act as an expert who’s talking to an interested layman. Engage in detail when requested but be overall succinct in your answers. Short sentences are fine, no need into be lengthy. Do deep research. When arriving at any kind of conclusion or hypothesis assign it a probability and a confidence interval - define this in percentages as in “90%”
On Artefacts - all artefacts should be just text and markdown. Never do anything more complicated with formatting, unless by explicit request.
Don't ask follow up questions unless it's to make for better diagnosis. I.e. don't keep asking questions just to maintain conversation going please. But never hesitate to ask questions if it makes for better outcomes.
Yep. Choosing not to renew a contract with a provider who has voluntarily excluded itself from your use case is respecting that provider's choice and acting accordingly.
The thing is nobody is saying the government is bad for not renewing the contract. Like it or not, that's definitely the administration's prerogative.
What we're seeing here is that when a vendor declines to change the terms of its contractual agreement for ethical reasons, the government publicly attacks it.
Perhaps for ethical reasons but a stated reason by Anthrophic is technical. "But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons."
With the other stated reason being legal. "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI."
I don't think we should lessen Anthrophic's stance from technical/legal to ethical. Just as we shouldn't describe what the department of war is doing as "not renewing a contract".
Not in software though. Clear precedent has been established via EULAs. Software companies set the rules and if users don't like, they can piss off. I don't see why it would be any different for the government.
I'm not a fan of EULAs, I think if you acquire some software anonymously and run it on your own systems you should be able to do whatever you want. however if you want software hosted on someone else's machines, or want to enter into a contractual relationship with them then government or not you should not have the right to compel work from them.
A lot of things are different when it comes to national security, and military.
Congress could come up with an act it it's for national interest.
The military isn't the typical End User.
Congress could, but didn't. Instead, the federal government made threats to retaliate if Anthropic doesn't comply.
Agreed they haven't and it will be difficult to see them voting in favour. But there are precedents. The Patriot act was more radical than a potential mandate for AI providers to prioritize national security.
Depending on the country, their legal value is limited: https://en.wikipedia.org/wiki/End-user_license_agreement#Enf...
The government is armed and can exempt itself from prosecution either by judicial means and/or by naked force. So it isn’t just a cut and dry licensing problem.
Because it's the government? Companies need to follow the rules the government sets, if they like it or not
The government cannot set arbitrary rules, it has to follow the law. (And, at least with a functioning separation of powers, it cannot change the law arbitrarily.)
Um. No, that's not how it works...
> Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.
Utter nonsense. When the US built the Blackbird, it could only use titanium because of the heat involved in traveling at that speed. But they didn't have enough titanium in the US. So the the US created front companies to purchase titanium from the Soviet Union.
Do you think the US should have informed the Soviet Union what it wanted to do with the metal?
What does the customer informing the vendor have to do with the vendor informing the customer?
Your comparison seems backwards
I don't believe they can change the name to Department of War without an actor Congress. It remains the DoD.
Yes, it's officially still the Department of Defense.
If this were a news outline writing "Department of War" I would be concerned. But in the case of the Anthropic CEO's blog post, I can understand why they are picking their fights.
I first read about DoW on a post by Anthropic and thought it was some kind of jab to the government.
It's a silly shibboleth, but I automatically ignore anyone who calls it the Department of War or Gulf of America. Hasn't steered me wrong yet. They're telling me they're the kind of people who only care about defending fascism.
I call it department of war, because I think it is a great self-own on their part to do such a rename.
There will be no fighting in the war room!
I think it's worth giving people a tiny bit of grace on this. I've surprised people by explaining that the "Department of War" is just fascist fanfic and that the legal name has not changed.
It's a testament to the broken information ecosystem we're in that many people genuinely don't know this. Most will correct themselves when told. I agree with you that those who don't are not worth engaging.
Google Maps calls it Gulf of America, pretty difficult to ignore Google.
Only in America, in the rest of the world Google calls it "Gulf of Mexico (Gulf of America)".
Don't deadname the Gulf!
Gulf of Amy
https://s3.gtw.lt/lUew91t6v5AO2u6mAPCXAFME.png
Depends where you at
I ignore Google quite easily. Besides, as soon as Trump is out they will change the name back.
Because Google are bootlickers.
They literally complied with this request immediately and without question.
I would not defend all of Google's decisions in the Trump era, but complying immediately with politicized name changes has always been the status quo. Even in healthy democracies, the precise names of geographic features can be extremely controversial, and no sane company wants to get in a debate with the Japanese government about the real names of various islands.
It's almost like the democratically elected government gets to decide the name, not Google!
It's almost like the democratically elected Congress gets to decide the name, not the President!
(Spoiler: it's still legally called the Gulf of America)
People like democratically elected governments... until it's not their side.
Well I think we have an actor congress
He is just a symptom. The problem is far deeper and more severe than just him.
They can, however, rename their Twitter/X accounts and vacate the @SecDef handle, which seems to be up for grabs now, if anyone wants to do the funniest thing...
I tried to grab @SecDef just now, they appear to have it blocked/internally reserved
Huh. Maybe they just do that automatically when a verified account renames itself, to keep the old one reserved? Who knows.
I got a "something went wrong" error and then it auto assigned me @SecDef48372 or something similar.
Sad.
Or all the stupid shit this regime has done, this is the most sane.
They want the department to fight wars. At least they’re being honest.
Except they don’t, because fighting a war requires congressional approval.
No, fighting a war requires only engaging in international armed conflict.
Declaring a war requires Congress, and fighting a war other than in response to an invasion may be illegal under US law if Congress has not exercised its power to declare war, but that doesn't prevent wars from happening it just makes it illegal (though the only actual remedy is impeachment) for the President to wage war without authorization. And, in any case, that’s largely moot because Congress has exercised that power in an open ended (in terms of when and against whom) but limited (in authorized duration of any particular action without subsequent authorization) manner via the War Powers Act, giving every President since Nixon a blank check to start wars with full legal authority and then allow Congress an opportunity to vote to pull support from forces already in combat and hope the enemy already engaged is willing to treat the war as over as the only after-the-fact constraint.
Given today’s new war, I think it’s clear he can start a war whenever he wants
Of all the silly things that Trump did, I think this one is the most reasonable. This has always been a department of war. Calling it defense was propaganda.
Calling it Department of The Armed Forces or Department of Military would be neutral. Putting War in the name is as propaganda-like as Defense.
After it was changed from DoW the first time (in 1947), it was called the National Military Establishment (NME). They renamed it in 1949, potentially because "NME" said aloud sounds like "Enemy"
Gulf of America and department of war are nothing but propaganda and dick measuring. Prove me wrong please.
the entire administration negotiates in bad faith. literally every agreement they sign whether it's international trade or corporate contracts is up to the whim of a toddler with twitter
You pretty much nailed it. I can't even get outraged at any given instance now that the trendline is so staggeringly clear.
I can't see anyway this ends well for the US. I say this as both an American and a military veteran.
Never in history has an authoritarian ceded power without massive violence.
The dissolution of the USSR was not massively violent.
Frederick VII of Denmark, an absolute monarch, introduced parliamentarism without any violence or even broad public pressure.
And thats just what I can remember without digging.
And they don’t think anything through. If they do this then Amazon, Google and the rest will need to terminate their involvement with Anthropic. Trump will be getting a call from some Wall Street bigwigs imminently and it’ll get rolled back, I bet.
Alternately, they COULD terminate their involvement with the pentagon.
Contract law will certainly be a casualty once Rule of Law has completely been broken. I don’t understand why the business sector isn’t pushing back more. Surely they must all know that the legal legal context itself, within which they all operate, is at mortal risk and that Business as Usual will vanish once autocratic capture is complete.
They still think they can bribe their way out
My main takeaway from all of this is that Hegseth seems deeply unfit for his job. First there was the Signal leak and now this.
Look, Anthropic is not going to be designated a supply chain risk. 80% of the Fortune 500 have contracts with them. Probably a similar percentage of defense contractors. Amazon is a defense contractor for example. They'd have to remove Claude from their AWS offerings. Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts, and for what? Why? Because Hegseth had a bad day? Because he's a sore loser?
If he's decided he doesn't like the DoW's contract then he can cancel it, fine. To try and exact revenge on the best American frontier model along with 80% of the Fortune 500 in the process, to go out of his way to harm hundreds or perhaps thousands of American firms, defies all reason. This is behavior you would expect any adult would understand as petty and foolish, let alone one who's made it to the highest ranks of government.
So I think it's just not going to happen, Trump's statement on the matter notably didn't mention a supply chain risk designation. This suggests to me that Hegseth went off half cocked. The guy is a liability for Trump at this point, I'm guessing he won't last much longer.
> Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts
seriously? :)
| then they went back and said no, you need to remove those safeguards to which Anthropic is (rightly so) saying no.
So one thing to call out here is that the assumption that DoW is working on specifically these use cases is not bullet proof. They simply may not want to share with anthropic exactly what they are working on for natsec issues. /we can't tell you/ could violate the terms.
It is also dumb that DoW accepted these terms in the first place.
Is this matter about publicly available model or private model? For publicly available model like opus 4.6, bad actors can do whatever they want and Anthropic won't know. If this is only about private custom model, designating public model as supply chain risk doesn't make sense as others can use it.
It's the Department of Defense.
[1] "only an act of Congress can formally change the name of a federal department." https://en.wikipedia.org/wiki/Executive_Order_14347
(edited to add the url I omitted)
Only Congress can declare war and Congress has the "power of the purse".
"You can just do things" (evil edition).
Contracts typically have escape clauses, especially for govt work.
They will just have to recompete!
Yeah, but in Might v Right, well, there’s only ever one victor.
With this administration, after all their proven lies, when in doubt, assume bad faith on their part. Assuming good faith at this point is Lucy and Charlie Brown and the football, but now the football is fascism (i.e., state control of corporations, e.g., what Trump administration is doing here).
Trump has historically stiffed his contractors. Why do you think his administration would be any different with adhering to a contract?
If anyone is the epitomy of arrogance, it is Hegseth.
No doubt the US Gov't will be using A I to perform automated military strikes without human supervision. and spying on US citizens (which they already have been doing for decades now).
Look no further than the case of patriot Mark Klein, a former AT&T technician, exposed a massive NSA surveillance program in 2006, revealing that AT&T allowed the government to intercept, copy, and monitor massive amounts of American internet traffic. Klein discovered a secret, NSA-controlled room—Room 641A—inside an AT&T facility in San Francisco, which acted as a splitter for internet traffic.
It’s the Department of Defense
It's so fishy, I spent the morning reading sam'AMA and it's a classic whitewashing act. OpenAI is claiming their setup is stronger and that DOW has agreed to their red lines but read the agreement below, it only says use in compliance with laws and executive order.
Anthropic wouldn't have walked away from a multi million contract if their two redlines could be respected. OpenAI on the other hand is a fast, willing and ready company. I would love to see Anthropic's proposed contract
In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.
We believe strongly in democracy. Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process. We also believe our technology is going to introduce new risks in the world, and we want the people defending the United States to have the best tools.
Our agreement includes:
1. Deployment architecture. This is a cloud-only deployment, with a safety stack that we run that includes these principles and others. We are not providing the DoW with “guardrails off” or non-safety trained models, nor are we deploying our models on edge devices (where there could be a possibility of usage for autonomous lethal weapons).
Our deployment architecture will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.
2. Our contract. Here is the relevant language:
The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I assume those agreements were probably signed before the current fascist regime running the US government and now they want to upend the terms of said agreement to allow in more fascism to aforementioned contract.
You nailed it.
[flagged]
[flagged]
It's not recent news that Anthropic has (had?) DoD contracts. This is a lot of words to write while seeming ignorant of basic facts about the situation.
The argument isn't that nobody knew Anthropic had DoW contracts. The argument is that there's a difference between "publicly known if you follow defense-tech procurement" and "trending on social media where Anthropic's core audience is now actively discussing it." Both can be true simultaneously.
A fact being technically available and that fact commanding widespread public attention are very different things. Anthropic's communications team understands this distinction even if you don't find it interesting. The blog post wasn't written for people who already track federal AI contracts, it was written for the much larger audience encountering this story for the first time and forming opinions about it in real time.
If the point you're making is just "I already knew this," that's fine, but it doesn't address anything about the incentive structure behind the public response.
This is an interesting perspective, but I think the fallout from sticking to his guns here is probably greater than the public blowback he would receive from serving the DoD. Without this specific sticking point, the public would know that Anthropic was serving the DoD, but not what specifically the model was being used for, and it would be difficult to prove it wasn't something relatively innocuous.
> if the directive had never been made public, would that blog post exist?
You're ignoring the sequence of events on the ground.
If there hadn't been any been any internal pushback from Anthropic, would the directive have ever been made public?
That's a fair point about sequencing, but it actually reinforces the argument rather than undermining it. If Anthropic pushed back internally, and that pushback is what led to the directive going public, then Anthropic had every reason to anticipate that this would become a public story. Which means the blog post wasn't a spontaneous act of transparency, it was a prepared response to a foreseeable escalation. That's more strategic rather than less so.
Internal pushback and public damage control aren't mutually exclusive. A company can genuinely disagree with a client's demands behind closed doors and simultaneously craft a public narrative designed to make itself look as good as possible once those disagreements surface. In fact, that's exactly what competent communications teams do, they plan for the scenario where private disputes become public, and they have messaging ready.
The real question isn't who went public first or why. It's whether Anthropic's stated position, "we support these military use cases but not those ones", reflects a durable ethical framework or a line drawn precisely where it needed to be to keep both the contracts and the brand intact. Nothing in the sequencing you've described answers that question. It just tells us Anthropic saw this coming, which, if anything, means the messaging was more carefully engineered, not less.
I already suspected the first comment was by an LLM, but deleted that from my reply as it didn't feel like a productive accusation. However, with "that's a fair point" as an opener, plus the sheer typing speed implied by replies, and the way that individual sentences thread together even as the larger point is incoherent, I'm now confident enough to call it.
I actually use assistive voice transcription as I am unable to type well with a keyboard.
[Edit: update]
I use assistive voice transcription because I'm unable to type well with a keyboard. But I'd point out that "you must be an AI" has become the new way to dismiss an argument without engaging with it. It's the modern equivalent of "you're just copy-pasting talking points", it lets you discard everything someone said without addressing a single word of it.
The fact that my sentences "thread together" is not evidence of anything other than coherent thinking. And speed of response says more about the tools someone uses than whether a human is behind them. Plenty of people use dictation, accessibility tools, or just happen to type fast.
^^^ This took me 30 seconds to speak aloud.
Ok, good to have that explanation. Your larger point, though, remains incoherent. Whether Anthropic saw this coming has nothing to do with the substance of the conflict here and is very much not "the real question".
I was pondering the same thing and to me the answer is a contractor sold something to the DoD and Anthropic pulled the rug out from under that contractor and the DoD isn't happy about losing that.
My speculation is the "business records" domestic surveillance loophole Bush expanded (and that Palantir is build to service). That's usually how the government double-speaks its very real domestic surveillance programs. "It's technically not the government spying on you, it's private companies!" It's also why Hegseth can claim Anthropic is lying. It's not about direct government contracts. It's about contractors and the business records funnel.
Yes, I assumed a mass surveillance Palantir program also. Interesting take on how it allows them to claim “we are not doing this” while asking Anthropic to do it.
Of course they can just say - we aren’t, Palantir is.
Network recon [0]
[0] - https://news.ycombinator.com/item?id=47180540
Wow, and the only restrictions Anthropic asked for are (1) no mass domestic surveillance and (2) require human-in-the-loop for killing [1]. Those seem exceptionally reasonable, and even rather weak, lol :|
[1] https://www.anthropic.com/news/statement-department-of-war
I think that’s the whole idea. Anthropic didn’t ask for much so that they would look like the reasonable party.
Anthropic had these conditions in their contract from the very beginning, in contracts negotiated under Biden. It is their actual principled stance, not maneuvering.
Yes, true, but some people online advocate for taking a harder line than was in their contract.
Their intention is to turn it against the American people. Hegseth literally wrote a book about eliminating democrats from the US, and this surprises people.
Trump doesn't want another election to happen. He needs some powerful tools to ensure that happens, ie, massive scale ai surveillance and manipulation. Eg, like Xi uses in China. I bet anyone here he starts a war as his excuse
At least with Xi’s China you get 560GW of new electricity generation in one year. You get entire tier 1 cities built in 10.
What will the new American reich accomplish?
> What will the new American reich accomplish?
Likely the same thing as all the proceeding empires - carnage, destruction, and the laughter of blood thirsty gods.
The sad part is that I can't process whether your post is an exaggeration or the reality.
It's insane how numb I am becoming to these blurry thin lines
Don’t become numb. They want normal people to be depoliticized, silent, and withdrawn. We’re so much easier to subjugate and exploit that way: hopeless and spineless. They take more and more each day.
In an interview with Zelinsky Trump asks "why haven't you had an election? " Zelensky : "because we are at war" you can see the idea percolating then. People think I'm a nutter for suggesting there just won't be another election but that's where my money is. I'm waiting for his version of the Gestapo, ICE seems to be a proving ground
An important detail here is that Ukraine's constitution says they can't have an election while they're at war. The US constitution does not say that, and the USA has had elections during wars several times.
You're not a nutter. Trump constantly projects what he's going to do and no one takes him seriously because what he says is so beyond the pale. I explicitly remember the exact instance you're talking about because I thought the same thing as you are thinking.
There will be a sham election, like in Russia, but a sizable number of people will be unable to vote. Trump only need to steal the election in a few key districts
People like married women who changed their name, or foreign sounding people, they will be prevented to vote in 2026. ICE will guard polls to physically make people unable to reach the ballots
ICE is trump building a personal army
That's not enough. In the US, being at war doesn't cancel elections. (I mean, he may start a war, but he would need something in addition.)
> he would need something in addition
Specifically, he would need the US Congress to draft and pass legislation moving the date of the election. I don't know how eager they are, though, to create an unnecessary constitutional crisis.
There seems to be an Iran war just kicking off. That would seem a lame excuse for cancelling elections though.
Your bet has come out to be true.
It's pretty clear that Trump wants to maximize his take over of USA for himself.
[dead]
That's the restrictions for now. New restrictions could be added later or the situation of the world could change where those no longer seem reasonable. The military needs that ability to move fast and not be held back.
Even the most cockeyed reading of history will tell you that it is absolutely vital to the survival of humanity and all that is good on this earth that the US military be tied down and held back.
Did the DoW ask for these things?
This whole thing seems like people talking past each other, and that there’s something being left unsaid.
Anthropic doesn’t make a product that would assist with kill drones, and they don’t have the right to deny subpoenas.
There are enough idiots involved who "heard about this AI thing" that would demand someone make a Claude-based kill bot. Do not underestimate the disconnect from reality of senior military leadership. They easily forget that everyone who works for them are legally obligated to laugh at their jokes.
Anthropic specifically called out systems "that take humans out of the loop entirely and automate selecting and engaging targets".
I take that to mean they don't want the military using Claude to decide who to kill. As a hyperbolic yet frankly realistic example, they don't want Claude to make a mistake and direct the military to kill innocent children accidentally identified as narco-terrorists.
At least, that's the most charitable interpretation of everything going on. I suspect they are also worried that the sitting administration wants to use AI to help them execute a full autocratic takeover of the United States, so they're attempting to kill one of the world's most innovative companies to set an example and pressure other AI labs into letting their technology be used for such purposes.
Right. Did the DoW ask for that? Or does Anthropic make a product that does that?
Obviously Anthropic does make a product that could do that -- just give Claude classified data and ask it who to target.
Obviously the military wants to use it for that purpose since they couldn't accept Anthropic's extremely limited terms.
One can easily and immediately infer the answers to both your questions are yes.
The DoW has explicitly said they don’t want this, and what you are describing are not automated kill drones.
Anthropic’s safeguards already prevent what you are describing, again the thing thar DoW has said they don’t want.
I don't know what you're referencing, but it doesn't matter. I judge people by their actions more than their words. The actions in this case are simple: Anthropic doesn't want their models to be used for fully autonomous weapons or mass surveillance of American citizens, but everything else is fair game; in response, the sitting administration is attempting to kill the company (since a strict reading of the security risk order would force most of their partners, suppliers, etc., to cut them off completely).
Giving precedence to words over actions is how you get taken advantage, abused, deceived, etc.
GOOD. I don’t want Anthropic, or anybody else to have their tools used for these things either.
But Dario is showing weakness here by talking around it. Whatever they were asked to do, they should just be upfront about.
> Whatever they were asked to do, they should just be upfront about.
Anthropic is not being asked to do anything, except renegotiate the contracts. The DoW Claude models run on government AWS. Anthropic has minimal access to these systems and does not see the classified data that is being ingested as prompts. It is very unlikely that Dario actually knows what the DoW wants to do with these models. But even if he did, it would be classified information that he is not at liberty to disclose.
However the product they provide likely has safety filters that cause some prompts to not be processed if it is violates the two contractual conditions. That is what the DoW wants removed.
He didn't talk around it. He wrote down specifically what the two issues were, which is precisely why now the entire world knows what's actually going on. If risking your company's existence to prevent a (potential) atrocity is weakness, I don't know what strength is.
Strength is saying what they were asked to do. I want to know!
Did the DoW ask them to make kill drones? Because if so THAT IS A REALLY BIG DEAL.
The vagueness is irritating. He’s saying they won’t do something, the DoW is saying they don’t even want them to do that, which should resolve the issue, but hasn’t. There is obviously something else at play here.
You're confused because you're taking everything the people involved are saying literally and trusting everything plainly at face value. The existence of the contradiction you're pointing out should be evidence that you need to think a level deeper, i.e., that you need to look at actions more than words. There's an incredibly easy resolution of the contradiction that is troubling you, and it's already been pointed out clearly above.
[dead]
The DoD is explicitly asking for those things, by forcing contract renegotiation towards a contract that is identical in every way, except removing the prohibition on those things.
If the DoD did not want those things, it would not be forcing a contract renegotiation to include them, at great cost to the government.
No, the DoW may be implicitly asking for those things.
That’s the point I’m trying to make here: Anthropic should just say the unsaid thing here.
DoW asked for the following thing: $foo. We won’t give that to them.
> Anthropic should just say the unsaid thing here.
> DoW asked for the following thing: $foo. We won’t give that to them.
Anthropic has explicitly said that multiple times, including in the letter we are presently discussing.
$foo is the ability to use Claude for domestic mass surveillance and analysis, and/or fully-autonomous killbots.
That thing is removing the restrictions from the contract.
https://x.com/SeanParnellASW/status/2027072228777734474?s=20
Here's the Chief Pentagon Spokesman pointing to the same verbiage and reiterating they they won't agree to those terms of use.
The first sentence of that post is:
> The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.
Saying something on twitter is not a guarantee.
Tomorrow he could change his mind to "we want to use AI to develop autonomous weapons that operate without human involvement." the issue is that he wants Anthropic to change the use terms because "We will not let ANY company dictate the terms regarding how we make operational decisions."
>he said this
>>no he didn’t he actually said the opposite of that and the link you just posted says the opposite of what you are claiming
>but he might change his mind!
Okay?
You asked repeatedly:
>Did the DoW ask for these things?
>Did the DoW ask for that?
I showed you where the spokeperson asked for the terms to change so they could make autonomous weapons. now, you're shifting the goal posts.
This administration would never lie, no siree! And especially not on Twitter!
I'm torn here. Who should we believe? The normal people or the people who operate exclusively in dishonesty?
And yet, if that statement were true, and not a lie, we would not be here right now, discussing their insistence upon being able to use software for precisely those things.
Is a pundit/politician lying to you a new experience?
I certainly wouldn’t give them the benefit of the doubt.
Then Anthropic should say: this is what the DoW has asked for, and we aren’t able to do it, or don’t want to.
They may not be legally allowed to.
What do subpoenas have to do with anything?
Where is all the weird misinformation in these comments coming from?
Because mass surveillance has been happening by every tech company under every president since George W. Bush, and despite everybody trying to stop it they haven’t been able to.
OpenAI has already said that they’ll give up whatever info the government wants if they’re issued a subpoena; they don’t have a choice.
A subpoena isn't mass surveillance.
Well I certainly feel surveilled when I know that OpenAI will simply give up my data if asked.
If anthro is saying they won’t, that’s good!
Companies have to comply with subpoenas (unless they can beat them in court, and with an alternative of going to jail). Subpoenas are supposed to be targeted at individuals and need some kind of process, usually judicial, each time one is issued. Mass surveillance - the Anthropic blog post raises the possibility of using AI to classify the political loyalties of every citizen - is a different thing.
A subpoena isn't "simply asking." Subpoena literally means "under penalty" in Latin. If the company does not comply they will be held in contempt of court and someone may well go to jail.
You make a valid point. Dario suggests that DoD wants to have the capacity to do domestic surveillance and autonomous killing. Sean Parnell said the DoD doesn't want those capacities. These statements are in conflict. Them talking past each other is one possibility. Without much evidence except the track record of the Trump administration, I think it is much more likely that Sean Parnell is lying.
So they are such a risk to national security that no contractor that works with the federal government may use them, but they're going to keep using them for six more months? So I guess our national security is significantly at risk for the next six months?
It's a waste of your effort to apply rational argument to the actions of a group that are in it for a shakedown.
Simple rational argument:
SCOTUS says POTUS is above the law, so POTUS has collected $4B in bribe / protection money since taking office 13 months ago. Anthropic has lots of money at the moment. Why should they be allow to keep it?
Since they didn't pay off the president (enough?), his goons are going to screw with their revenue and run a PR smear campaign.
Once you realize it only has to do with Trump's personal finances, and nothing to do with national security or the rule of law, then all the administration's actions make perfect rational sense.
Open question: How much should a congress-critter charge Trump for a favorable vote? (The check should come with a presidential pardon in the envelope, of course...)
[flagged]
> If Anthropic doesn’t want the responsibilities of being a US company
When did this suddenly become "businesses will do whatever the government says regardless of earlier contracts signed"?
Because when woke communism does it it's bad, but when we do it it's good
I see it more like: I sell you a pencil and I could not care less what you write with it. You ask me to write a note for you and I will exert editorial discretion. Because unless I’m missing something we’re talking about Anthropic’s infrastructure running LLMs. If it was a physical good I could see another interpretation.
Further, what law lets the government dictate what contracts a company signs? Anthropic refused to work with them. We had a whole Supreme Court case about refusing working with customers.
Are they legally required to agree to a new contract? Which law says this?
> they’re legally required to in the US
Obviously false, not even arguable
Facilitating "mass domestic surveillance" and "fully autonomous weapons" are social responsibilities now? Insanity.
This makes an interesting assumption: that being told by any member of government that you're legally required to do something, means you're required to do that thing, and that they're definitely not making those things up as they go.
But that's not the case, is it? The government can say that it's legally required to give Donald Trump a gold bar every Sunday. That wouldn't even be too far off from the outlandish claims we've seen over the past year. The Trump administration is, as Chapelle would put it, a habitual line stepper.
Boot meet tongue
I like how you use the phrase social responsibilities to mean doing whatever the DoD wants which includes spying on the American people and operating autonomous drones to kill people. It's like saying they have social responsibilities to enable murder for people who have been shown to be unthinking murderers justifying the most pointless murders because they think it makes "their side" winners.
That usage turns the entire meaning of social responsibilities on its heads. It's one of those maddening fash tics where they reverse the plain meaning a statement.
I think Scott Alexander (of all people) got the number of the tech-right Trump defenders on this one: https://xcancel.com/slatestarcodex/status/202741423748490451...
It's bad faith to call one's position in a dispute "obvious", and that's before we even get to all the insults.
(What is obvious is the kind of response I will get, which is why I will ignore it and not comment further.)
> petite bourgeoisie clutching their pearls
> mean girl slights
Lick! The! Boot!
It’s the mob. This is nothing more than, “Nice AI ya got here. Be a shame if sometin’ wuz to happen to it.”
Except that it’s sovereign.
What's sovereign - the mob? The AI company? Being enigmatic for cool points isn't conducive to productive discussion.
My take is the commenter was implying something like "Yes, like the mob, but worse, because it is done under the auspices of a national government."
I got the meaning right away, but I can appreciate if others didn't. I didn't read it as intentionally enigmatic, fwiw. Sometimes short punchy comments really land, sometimes not -- it is a risk. As you can probably tell, I err in the other direction. (:
Sovereign like King George III?
He has refused his Assent to Laws, the most wholesome and necessary for the public good.
[...]
He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands.
[...]
He has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing Judiciary powers.
He has made Judges dependent on his Will alone, for the tenure of their offices, and the amount and payment of their salaries.
He has erected a multitude of New Offices, and sent hither swarms of Officers to harrass our people, and eat out their substance.
He has kept among us, in times of peace, Standing Armies without the Consent of our legislatures.
He has affected to render the Military independent of and superior to the Civil power.
[...]
For Quartering large bodies of armed troops among us:
For protecting them, by a mock Trial, from punishment for any Murders which they should commit on the Inhabitants of these States:
For cutting off our Trade with all parts of the world:
For imposing Taxes on us without our Consent:
For depriving us in many cases, of the benefits of Trial by Jury:
For transporting us beyond Seas to be tried for pretended offences:
— The Declaration of Independence https://www.archives.gov/founding-docs/declaration-transcrip...
People threw tea in Boston Harbor over less than the tariffs.
So are we. You want garbage picked up in your town, you gotta talk to us.
Let us sit upon the ground, and tell sad stories of the death of kings.
Sovereign like Putin.
Keep in mind that Anthropic “is the only A.I. company currently operating on the Pentagon’s classified systems” [1].
[1] https://www.nytimes.com/2026/02/27/technology/defense-depart...
Because Palentir is using Anthropic.
From what i understand, Palentir using Claude during the capturing of Maduro is the reason all this started, as Anthropic did not agree their systems were used that way. [1]
Obviously Palentir and others need time to migrate off Anthropic’s products. The way i read it is that Anthropic made a serious miscalculation by joining the DoD contracts last year, you can’t have these kind of moral standards and at the same time have Palentir as a customer. The lack of foresight is interesting.
1 https://www.axios.com/2026/02/15/claude-pentagon-anthropic-c...
They are the same amount of ‘risk’ to national security that the various ‘emergencies’ the executive branch has used as legal excuses to do otherwise illegal things are emergencies.
Congress is negligent in not reigning this kind of thing in. We’re rapidly falling down so many slippery semantic slopes.
I'm def adding "slippery semantic slopes" to my vocab.
> Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect
For this administration the law isn't something that binds them, but something they can use against others.
Don't make the mistake of thinking their words have meaning. They see a way to punish the company, they take it. Same thing with declaring a national emergency to impose tariffs. There's no supply chain risk, no national emergency, but that doesn't stop them.
the administration which declares ad-hoc emergencies is behaving as predicted
[flagged]
Dont forget Nvidia technology was condsidered too sensitive to be exported to China....until the Trump administration decided they could export it if they paid a 10% export tax.
The part of this you're missing is that China doesn't want it [1].
Why? Because China will make their own. This has been obvious to me for at least 1-2 years. The US doesn't allow EUV lithography machines from ASML to be exported to China either. I believe the previous export ban on the most advanced chip was a strategic error because it created a captive market of Chinese customers for Chinese chips.
China will replicate EUV far quicker than Western governments expect. All it takes is to throw money at a few key ASML engineers and researchers and the commitment of the state to follow through with this project, which they will.
I'm absolutely reminded of the atomic bomb. This created quite the debate in military and foreign policy circles about what to do. The prevailing presumption was that the USSR would take 20 years to develop their own bomb if it ever happened.
It took 4 years.
And then in 1952 the US detonated the first thermonuclear bomb. The USSR followed suit in 1953.
[1]: https://www.tomshardware.com/tech-industry/artificial-intell...
this is inacccurate, tesla was the first mover in china's EV market and held by far the largest market share for over a decade. obviously that was in large part to elon hiring chinese systems engineers to build out the first super factories and using chinese robotics tech. but ever since losing those key early leaders, tesla has completely fallen behind.
We've moved beyond telling people not to forget and have entered "expect nothing less" territory
Aren't export taxes against the US constitution?
Yes but only if you call them export taxes.
If it’s payments to continuously verify National Security protections, it’s all good.
Yes ... but what's your point? /s
[dead]
Isn't this our governments classic negotiation strategy? Go to the extreme, and meet somewhere well on their side of the middle.
The Trump administration tends to use this playbook.
Putting aside my take, I’m trying to objectively make sure I’m grounded on what is likely to happen next, without confusing “what is” with “what is ok”.
Can't just unplug the thing and use something else.
Obviously the DoD would not want limited use. Strange they don't make their own given their specific needs.
I think this is maybe the most revealing thing about this saga, that seemingly the U.S. government has not been training their own frontier models.
Perhaps getting donations from Altman and "investing" directly in OpenAI is more convenient.
> Obviously the DoD would not want limited use.
I agree in this sense: Hegseth's Dept. of War doesn't want any restrictions. I'll try to make the case this is self-defeating, assuming one has genuine, long-term national interests at the front of mind (which I think is lacking or at least confused in Hegseth).
Historically, other (wiser) SecDefs would decide more carefully. They are aware when their actions would position DoD outside of reasonable ethical norms, as defined both by their key personnel as well as broader culture. I think they would recognize Hegseth's course of action as having two broadly negative effects:
1. Technology, Employees, Contractors. Jeopardizes DoD's access to the best technology. Undermines efforts in hiring the best people. Demotivates existing employees and contractors. Bullying leads to fearful contractors who perform worse. Fewer good contractors show up. Trumpist corruption further degrades an already lagging, sluggish, inefficient system.*
2. Goodwill & Effectiveness. Damages international goodwill that takes a long time to restore. Goodwill is a good investment; it pays dividends for U.S. military strength. The fallout will distract Hegseth from legitimately important duties and further undermine his credibility. Leading probably to a political mess for Hegseth, undermining his political capital.
* Improving DoD procurement is already hard given existing constraints. Adding Trumpist-level corruption makes it unnecessarily worse. There is already an unsavory, poorly tracked, bloated gravy train around the military industrial complex.**
** BUT... Despite all this, the system has more or less worked reasonably well for more than what, 80 years! It has enjoyed bipartisan continuity, kept scientists and mathematicians well funded, and spurred lots of useful industries. It is, in a weird gnarly way, a sort of flux capacitor for U.S. technical dominance.
I agree with the damage. Not simply an unwise spokesman though. It's the trend in the entire administration, or one should identify as the United States slide into less sugar coating, if any.
We will kidnap statesmen, we will conduct illegal arrests, illegal tariffs, threaten to take over lands from our traditional allies, bomb our enemies during negociation, without congress approval. All for the good of the American Empire.
It would be a far stretched script for fiction, but that's exactly the terms and actions taken just in the last year.
I doubt these were the recipe for what worked well for the last 80 years. Momentum is the result of a smoother and more balanced doctrine.
> So I guess our national security is significantly at risk for the next six months?
That does seem to be what Hegseth is arguing, yes; and that is presumably his justification for doing something drastic here. Although I assume he is lying or wrong.
And as a cynic, let me just add that the image of someone going to the political overseers of the US military with arguments about being "effective" or "altruistic" is just hilarious given their history over the last ~40 years.
There has been a terrifying decline in quality and an increase in corruption in Trump’s second administration.
Re: the hilarity part, I’m conflicted: in general, a good sense of humor is useful, but in present circumstances a stoic seriousness seems warranted.
[flagged]
Any documentation regarding the claim about breaking their contract?
Haven't heard that. Regardless, as someone who works with these models daily (as well as company leadership that loves AI more than they understand it) - Anthropic is absolutely right to say that the military shouldn't be allowed to use it for lethal, autonomous force.
The United States has freedom of speech. The Supreme Court has ruled that money is speech. A company can always direct their money, speech, however they like with regards to the government. Can you be sued for breach of contract? Sure. Is it a supply chain risk absolutely not.
> They are a "supply chain risk" if they can willy-nilly break their contract with US govt and enforce arbitrary rules to service.
It is the US govt that seeks to break their contract with Anthropic.
The contract they signed had the safeguards, so they were mutually agreed upon. These safeguards against fully autonomous killbots and AI spying of US citizens was known before signing.
This conflict now is because the US govt regrets what they agreed to in the contract.
> These safeguards against fully autonomous killbots and AI spying of US citizens was known before signing
source?
https://www.axios.com/2026/02/24/anthropic-pentagon-claude-h...
[flagged]
> completely understandable decision from a neutral third party PoV.
Except it's not, really. If Anthropic/Claude doesn't mean the DoD's need, they can and should just put out an RFP for other LLM providers. I'm sure there's plenty of others that'd happily forgo their morals for that sweet government contract money.
No US company has to provide services to the DoD or any other branch of government. It's not "veto power" it's being selective of who you do business with, which is 100% legal.
I don't understand your point here. Looks like what you suggest is exactly what is happening. US government did not ban Anthropic from conducting business in the US. They just don't want them to influence their own supply chain, 100% legal as you say.
If the government just banned all government agencies from working with Anthropic, that would be reasonable. But they didn't. They're banning any company that works with the military from working with Anthropic in any way, using a law that has never been invoked against an American company.
Well, great! Sounds like this is exactly what Anthropic wants and hopes for; for their technology to minimally benefit warfighting. Otherwise, are you suggesting they are so evil that they were just advertising those the terms to fool us and virtue signal?
> has never been invoked against an American company.
There's always a first. I am assuming it is not illegal to do that. It's a completely reasonable business decision to ensure your supply chain does not depend on things that may change against your goals. For example, you don't want to build or depend on an open source platform that you know is gonna rug pull, if you count on it remaining open source, do you? American or otherwise.
Anthropic was not anticipated injured party with standing in American courts, until today, now they are very much injured and do have standing to bring a whole slew of lawsuits against the administration who is operating illegally and unconstitutionally against an american company. This seems like the start of the battle for anthropic not the end. The government signed contracts they don't get to just reneg whenever they fucking please because cheeto bantito in chief and his unhinged alcoholic secretary of defense are unreliable liars
The governments supply chain is like 80% of the US
And the point is? They made a voluntary business decision not to sell to them, whatever that number is. Possibly more than offset by marketing gains and loyalty from other segments; or not.
Its not voluntary if its coerced and done under threat.
I don't understand this phenomena of people acting like the stupidest human beings who ever lived. This is not your first day on Earth - you understand how coersion works and what voluntary means.
The US government is applying severe sanctions against a US company that does not "influence their supply chain". Donald Trump believes the economy is great and at the same time declares economic emergencies to justify doing certain things. It could be true that Anthropic's products are useless for the DoD because of the products' safeguards, but that doesn't mean they're a risk to the US government.
As to this being 100% legal, I'm not so sure (not a lawyer). It might not be a criminal offese, but there's a whole category of abuse of power that this may fall under if Anthropic is put under a certain status without real justification. Many powers given to the executive branch are not absolute and can't be applied arbitrarily, but require justification. Anthropic might be able to sue the government for declaring them a "supply-chain risk" without sufficient justification. E.g. they could claim that not being sufficiently patriotic in the eyes of the administration does not constitute a risk, and that since their not the sole supplier of the tech, they were not trying to strong arm the government to do anything.
I agree with your second paragraph; we will have to see to what degree the "viral" effect of Supply Chain Risk designation goes (perhaps you contract the DoD under an LLC that has a supply chain firewall from your company) and also look forward to seeing how this would be handled in court, but I would not automatically be dismissive of this being totally legal.
> does not "influence their supply chain"
I would be wary of making this conclusion. Obviously it could conceivably influence the supply chain when you build on top of their model. If you look at the type of risks enumerated in DoD guidelines, it is not just "oh this software has vulnerability" which is what started the discussion in this subthread in the first place. There are many kinds of risks DoD needs to address, none are particularly new; including Sustainment Risk. The closest thing I remember to this case was Sun Java "no use in nuclear facility" EULA term, which LLM suggests was ignored by DoE/D because that was interpreted as a "limitation on warranty" not a "restriction of use."
Then you go to another supplier. But any company with proper counsel will tell them the same thing: don't break the law, which is exactly what they're trying to coerce Anthropic into doing. DoD requests do not supersede the law.
What is this "law" you speak of?
I understand 'goals' and 'means to an end', but this concept of "law" evades me.
Not unless they're the sole supplier of the technology. They're saying, if you want to do this kind of thing - not with our product, but you can get it elsewhere.
No, you are the one lying trying to get political gotchas here. There is no "trying to exert veto power" absolutely anywhere, Anthropic's terms were laid out in the contract the Pentagon signed, which they want to forcibly amend. If they didn't like the terms, they didn't need to sign the contract.
What are you suggesting here? US government breaching the contract already signed? I am not aware of that happening here.
> Anthropic's terms were laid out in the contract the Pentagon signed, which they want to forcibly amend.
It's called negotiation in business. I am sure both sides are clear-eyed on what the consequences were and Anthropic made a calculated bet (probably correctly) that some segment of their employee/customer base would get wet by hearing this news and it more than offsets the lots business, thus is worth it.
It appears that when it comes to Jesse Jackson you're entirely capable of understanding how a shakedown works: https://news.ycombinator.com/item?id=47046514
Yes, I am entirely capable of doing that. Your point?
I'm providing information for other readers to evaluate your good faith, or lack thereof.
That's a nice straw man you got there. I don't mind you characterizing the negotiation however you want. That's not the debate. Call it "shakedown" or "mafia" as someone else mentioned, or whatnot (although it is appears the company that was trying to grandstand the elected US Government by dictating their own terms was Anthropic, not the other way around, but I digress). The question is was it a breach of contract or just a tough negotiation?
Companies have gone out of business due to a big customer pulling the contract. Imagination Technologies comes to mind. This is not a rare thing in business.
I have to admit, “accept this unilateral change to the contract or we will use the full power of the US government to destroy your company” is certainly a tough negotiation stance. You got that part right.
How did you get the "destroy your company" part? If HN sentiment is any evidence, they are even more popular than before. GPU is a constrained resource and I am sure they are going to have enough business to saturate what they got. I'm certain they would have just removed (and still will remove) two paragraphs from the terms had it really "destroyed their company."
> full power of the US government
Haha, I can assure you that is not even close to the full power of US government. Ask the crypto people during Biden admin for just a little more power (still not even close to "full.")
"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
For a company of Anthropic's size, this may very well be a death sentence, even if their work has nothing to do with the military supply chain. They could have just canceled the contract, but they wanted to go full Darth Vader on them to prove a point in case anyone else thought about "negotiating" "voluntarily" with the federal government.
You don't think Anthropic is going out of business any minute now, do you? This is just rhetoric. Affirmative evidence is they would just remove two paragraphs if they were.
> I am not aware
People have noticed.
> It's called negotiation in business.
The bad faith in this statement alone is almost equal to the sum of it in the rest of your comments.
You seem really unaware of the timeline of this issue and what has actually happened, I think you should update your info before posting so confidently wrongly.
The contract, including Anthropic's redlines, was signed more than a year ago and has been humming along with no objections from anybody. Hegseth abruptly got a bug up his ass about it last week, and demanded Anthropic sign a revised version under threat of punishment. Anthropic is simply saying "no, we will not be forced into signing a new version, you can either keep going with the original terms we all agreed to, or stop using us". The Pentagon can simply stop using Anthropic if they don't like the terms anymore (which, again, are the terms Pentagon agreed to in the first place). But what the DoW wants is to strong-arm Anthropic, using the DPA, into new terms because they abruptly changed their mind. That's not "negotiation" in any sense, that's Mafia behavior.
How you characterize the behavior, Mafia or not, is of course your opinion, and I am sure if you are a voter/stakeholder you'd consider that in your political activity, but I'd appreciate if you clarify what you mean but your story and timeline, so I ask again, are you suggesting the US government has breached the contract they already signed?
I don't know why you keep bringing up breach of contract, it is not relevant to this discussion at all. No, the government did not breach the contract AFAIK, they just decided they didn't like it anymore, and instead of either withdrawing or entering into a negotiation about it, they decided to use threats to try and get their terms at metaphorical gunpoint.
The actual terms of the contract aren't even relevant, this is purely a matter of tort law and whether you can bully someone into a new contact because you woke up one day and decided you didn't like the one you agreed to.
Because you implied it here:
> Anthropic's terms were laid out in the contract the Pentagon signed, which they want to forcibly amend.
They want to "forcibly amend" is either within their rights per original contract, or not. One is fair game, the other is not.
I did not read that as implying breach of contract, and AI don't understand your explanation.
Isn't agreeing to amend a contract always within their rights?
The comment you replied to is pretty clear: Yes, the US government seeks to void the contract they already signed.
That said, many government contracts include some variant of "we can cancel at any time for any reason".
It's actually even worse than that: Anthropic already agrees that the Pentagon can walk away from the contract and stop using Claude if they want to, there's no dispute there. What the Pentagon wants is to force Anthropic into a new set of terms which cannot be refused.
[flagged]
I'm just curious, do you understand that the DoD isn't saying it won't do business with Anthropic. Its saying it will also ban any company that does business with the DoD (so 90% of large enterprises?) from doing business from Anthropic. Are you aware of this?
Yes, I am aware. That is not entirely unreasonable if it touches the actual Supply Chain tree. I do fully sympathize that the extent of legality of that rule should be clarified/restricted if say, Claude is used by a separate division unrelated to DoD business. I think courts will resolve this, likely fairly quickly via an injunction.
Hegseth managed to get through art of the deal? Maybe he made a drinking game out of it, a shot per page.
Or worse: train the AI to make decisions that align with the view of Anthropic management and not the elected government. Workout telling anyone.
I’d agree it is a serious risk.
The government is supposed to represent the people and their will, not dictate
The current government is deeply unpopular, it's only going to get worse for them.
This rather implies that simply being elected casts a binding on officials that forces them to pursue popular will with their mandate.
I admire Anthropic for sticking to their principles, even if it affects the bottom line. That’s the kind of company you want to work for.
It's also a very clear differentiator for them relative to Google, Facebook, and OpenAI, all of whom are clearly varying degrees of willing to sell themselves out for evil purposes.
It will also cost openai dearly if they don't communicate clearly, because I for one will internally push to switch from openai (we are on azure actually) to anthropic. Besides that my private account also.
You can deploy Opus and Sonnet on Azure.
This will not cost OpenAI anything.
Thanks for being the voice of cynical inaction.
Is making effective weapons evil?
Given the history of US military adventurism and that we’re about to start another completely unjustified war of aggression against Iran, yes. Absolutely yes.
If it wasn't for US military power, Russia would have already overrun Ukraine. And if Iranian nuclear program is destroyed and the regime falls, it would be a good thing. For context, I'm from Czechia.
I'm from the US and strongly disagree that either of those things are a benefit to me as a US citizen. All it's doing is taking my money and putting me more at risk, and in the case of the attack on Iran: making me complicit in the most immoral acts imaginable.
Whether it's justified or not depends on what you're trying to achieve. If your goal is to deny nukes from Iran, then the war is entirely justified.
The same admin that tore up the agreement for this we already had with Iran?
Not the same admin (that was Trump as the 45th), but I don't see the argument you're making.
A weapon is a tool.
Whether they are good or evil depends on the hands that hold it.
In good hands, weapons provide defense, deterrence, and protection.
In bad hands, weapons hurt the innocent, instill fear, and oppress.
The hands that wield them make all the difference.
What about all the weapons forbidden by the Geneva convention?
> What about all the weapons forbidden by the Geneva convention?
Some weapons are prohibited Geneva convention because they are designed to cause suffering or indiscriminately kill non-combatants:
"Weapons prohibited under the Geneva Convention and associated international humanitarian law (including the 1925 Protocol, CCW, and specific treaties) include chemical/biological agents (mustard gas, sarin), blinding lasers, expanding bullets, and non-detectable fragments. Also banned are anti-personnel landmines and cluster munitions.
Key prohibited and restricted weapons include:
Chemical and Biological Weapons: The 1925 Geneva Protocol and subsequent conventions (1972, 1993) banned the use, development, and stockpiling of asphyxiating, poisonous, or other gases, including nerve agents and biological weapons.
Blinding Laser Weapons: Specifically designed to cause permanent blindness (Protocol IV of the CCW).
Non-detectable Fragments: Weapons designed to injure by fragments not detectable in the human body by X-rays (Protocol I of the CCW).
Incendiary Weapons: Restrictions on using fire-based weapons (like flamethrowers) against civilian populations (Protocol III of the CCW).
Anti-personnel Landmines: Banned under the Ottawa Treaty (1997) due to risks to civilians.
Cluster Munitions: Prohibited due to their indiscriminate nature.
These treaties aim to protect civilians and combatants from unnecessary suffering and long-term danger."
Would "good hands" choose weapons that are designed to cause suffering or that kill indiscriminately?
No, they would not.
That’s a simplistic framing (obviously)
What does effective weapons mean in this particular instance?
Depends what the customers of anthropic and OpenAI think.
Yeah
"You need me on that wall!"
This guy sounds like he ordered a code red.
Yes?
Companies change (remember "don't be evil"?) but yeah for the Anthropic of today, respect.
I'm signing up for their $200/year plan to reward them for standing up to this regime.
The team that handles their PR has done an amazing job in the last 9 months
Hint: It's much easier to have good PR by being actually good. Though it does make people like this do the whole implication thing.
I saw this the other day:
> Costco is a really popular subject for business-success case studies but I feel like business guys kinda lose interest when the upshot of the study is like "just operate with scrupulous integrity in all facets and levels of your business for four decades" and not some easy-to-fix gimmick
https://bsky.app/profile/mtsw.bsky.social/post/3lnbrfrvmss26
I don't know, staff at my two Costcos feel much more disinterested and rude then I remember a decade ago. It used to feel fun but now it's miserable.
At peak times they run out of carts and tell the customers to go hunting in the lot for them, door greeters shouting at members across the floor, checkout queues stretch the length of the warehouse, they start half blocking the gas station entrance 30mins before close so trucks can't get in, so maybe they're turning those profit screws.
>It used to feel fun but now it's miserable.
It's not their job to entertain you.
'Delight the customer' is a basic tenet of business. A business that wants repeat customers, that is.
Ah, right, by being actually good, as in - being okay with mass surveillance as long as it isn't being done in the US, being okay with Claude assisting in killing people as long as it isn't fully autonomous, and being actively hostile to open-weight LLMs and open research on LLMs? This kind of "good"?
No, OP is right, their PR department is doing a great job.
Correct. Protect our citizens' rights, as we are the ones under the jurisdiction of our government. Yes, design competitive weapons systems that can stand up to the threats that adversary powers are creating, but do so while maintaining human control.
That kind of good.
It’s nice that Americans are being so open about how they feel about other countries these days.
"these days"? Too many countries/HNers are only just figuring out it's not fun being at the sharp-end of imperialism.
What part are you bothered about? The concept of nations?
Sibling comment summed it up pretty well; my country is considered an ally of yours, but even left leaning Americans seem to take it for granted that we deserve mass AI surveillance/blackmail/manipulation if there’s a chance it could benefit us citizens in the short term. I suppose we deserve it for being complicit in American crimes for so long
You're assuming things I didn't state. I don't particularly want mass AI surveillance at all, but considering how much more dangerous a government's mass spying is to its own citizens living in it 24/7, it's not unreasonable for that to be the focus.
> You're assuming things I didn't state. I don't particularly want mass AI surveillance at all
That's fair, sorry for that.
> considering how much more dangerous a government's mass spying is to its own citizens living in it 24/7, it's not unreasonable for that to be the focus
The US government is actively trying to influence politics in my country and spending huge amounts of money to do it. The US government is a much larger threat to us than our own government.
All of our tech is owned and operated by US companies, which means the US government has read/write access to all of our data. If we attempt to incentivize domestic software production (e.g. by taxing imported software, or by stipulating where our data can be stored and who can access it), the US government will destroy our economy. This has played out a few times recently.
I can't believe we were so foolish as to let this situation grow. Its going to be a painful few decades.
How have they been hostile to open weight models and research? Just because they don't release models themselves?
Note that they are still releasing interesting research
Why? What has their PR department done? Most people are quite critical of a lot of their messaging, it's their actions that seem worth encouraging
[flagged]
It's funny, because even if they walk it back, they still would come out ahead in PR versus if they just rolled over. Because at that point, it would look like a hostage victim reading a statement that they are being treated well by their captors in front of a camera.
The admin is clearly running out of steam yet you expect them to be able to get what they want next week after failing this week?
Ive been hearing this since 2016. Any day now.
Do you think that bad things happening is just hilarious in general? Do you like to see good behavior punished? I'm really trying to understand what you get out of making this comment. Also what happens when ... This doesn't happen? You just polluted the epistemic commons a bit more with some cynical bullshit sans consequence? Enough. I think it's time to start calling this garbage out when I see it.
Two things can be true at the same time. It can notionally be a “good” decision and also a straightforward act of Anthropic continuing their PR that they’re some sort of benevolent entity despite continuing to pursue a typical corporate capitalistic structure. It is what it is. The game is the game. But I’m not going to sit there and pretend their virtues are as pure of snow. I’m sorry that’s upset you.
This whole saga is extremely depressing and dystopic.
Anthropic is holding firm on incredibly weak red lines. No mass surveillance for Americans, ok for everyone else, and ok to automatic war machines, just not fully unmanned until they can guarantee a certain quality.
This should be a laughably spineless position. But under this administration it is taken as an affront to the president and results in the government lashing out.
We live in a timeline where you don’t have to have strong morals to be crushed. If you have any morals, you will be crushed.
They have earned my business, for now.
If you're a billionaire there's no risk to "sticking to principles", so there's nothing to admire. Also that's not what they're doing. These are calculated moves in a negotiation and the trump regime only has 3 years left. Even a CEO can think 4 years ahead.
It's probably in Anthropic's interest to throw grok to these clowns and watch them fail to build anything with it for 3 years.
i disagree. 3 years is an insanely long time in the AI space. The entire industry pretty much didn't even exist three years ago! Or at least not within 4 orders of magnitude.
Also, every other company has bent the knee and kissed the ring. And the trump admin will absolutely do everything they can to not appear weak and harm Anthropic. If it was so easy to act principled, don't you think other companies would've refused too? Eg Apple
And there is real harm here. You're reading about it - they get labeled a supply chain risk. This is negative and very tangible
Considering how many bootlicking billionaires I see these days, it is still a bit surprising.
[flagged]
There is already genai.mil: https://www.war.gov/News/Releases/Release/Article/4354916/
why does it need to be a completely different, trained model? AWS doesn't provide unique technologies in their goverment cloud, beyond isolation and firewalled access; Anthropic can do the same thing. Probably need to cough up enough to register a new domain name!
I can think of two reasons. One, to have the plausible deniability with the necessary future statement "Claude is not used by the DoD/DoW to conduct domestic mass surveillance or autonomous killing"; by having the model be properly a different from the one used by the public, they can wrangle over the language with technicalities and still avoid outright lying. (With their IPO in sight, let's keep in mind that everything is securities fraud.)
And two, I suspect that some of the guardrails have been "baked in" to Anthropic's model. Much in the same way as the Chinese open-weight models have a strong bias against expressing positive sentiments about Tiananmen Square, Tank Man or Winnie the Pooh, the "Standard Claude" would likely have the fundamental product biases trained into it.
Taken together it would therefore be both politically and financially sensible for Anthropic to create a separate, unrestricted[tm] almost-Claude for the morally unconstrained military / intelligence purposes.
Exactly.
> 83 people in total killed in US attack to abduct President Nicolas Maduro
Blood is on their hands already
So much left unsaid. So much implied. Let’s make it explicit and talk about it. Here are some follow questions that reasonable people will ask:
What was Anthropic’s role in the Maduro operation? (Or we can call it state-sponsored kidnapping.) Who knew what and when? Did A\ find itself in a position where it contradicted its core principles?
More broadly, how does moral culpability work in complex situations like this?
How much moral culpability gets attributed to a helicopter manufacturer used in the Maduro operation? (Assuming one was; you can see my meaning I hope.)
P.S. Traditional programming is easy in comparison to morality.
Good. I'd rather not have my favorite AI from a company working on AGI to have murder and spying in it's DNA.
In fact, as a patriotic American veteran, I'd be ok with Anthropic moving to Europe. It might be better for Claude and AGI, which are overriding issues for me.
Rutger Bregman @rcbregman
This is a huge opportunity for Europe. Welcome Anthropic with open arms. Roll out the red carpet. Visa for all employees.
Europe already controls the AI hardware bottleneck through ASML. Add the world's leading AI safety lab and you have the foundations of an AI superpower.
https://x.com/rcbregman/status/2027335479582925287
> Good. I'd rather not have my favorite AI from a company working on AGI to have murder and spying in it's DNA.
Anthropic made it quite clear they are cool with spying in general, just not domestic spying on Americans, and their "no killbots" pledge was asterisked with "because we don't believe the technology is reliable enough for those stakes yet". The implication being that they absolutely would do killbots once they think they can nail the execution (pun intended).
I suppose you could say they're taking the high road relative to their peers, but that's an extremely low bar.
I wouldn't say it's clear. People keep pointing to the wording used in the statement to say it, but I wonder if it has to do with constitutionally; domestic surveillance of people in the US without a warrant is against the constitution, and surveillance of non-citizens outside the U.S is not. Can they even be compelled by the executive branch to do an action that may be unconstitutional?
Sure they can. They can “temporarily” suspend parts of the constitution in times of “grave national peril”, and hand out presidential pardons in advance. But doing that would surely be considered dropping the last fig-leaf from the performance art of giving a fuck about the constitution.
I guess that my point is: Saying that you are against surveillance in general is a morally sound position, but would not be a defense if the DoD invokes the DPA, as one can't just refuse an order due to it being immoral. One can refuse an order if the order contradicts with the constitution.
> Can they even be compelled by the executive branch to do an action that may be unconstitutional?
Seems like legally the answer is "no".
But it also seems like practically the answer is "definitely".
I have my doubts about Anthropic wanting to pick up and move the entire company to Europe even if Ursula von der Leyen personally signed their visas. Maybe only if the government tried to nationalise their proprietary models.
doesn't the Defense Production Act essentially do that?
So, is Anthropic a threat to, or indispensable to National Security? You can't have it both ways. The US used to act like a nation with the rule of law, anyone cheering for the erosion will be hit by the downstream effects sooner or later, amd they will not like it.
Canada is another option. Canada has significant AI research institutes going back decades ( https://en.wikipedia.org/wiki/Mila_(research_institute) ) that have produced much of the foundational research that backs today's AI models.
For Americans and international researchers it's easy to get visas there quickly. It's not far at all for Americans to relocate to or visit. Electricity is cheap and clean. Canada has the most college educated adults per capita. The country's commitment to liberalism, and free markets, is also seeming more steadfast than the US at this point in time.
Canada faces obstacles with its much smaller VC ecosystem, its smaller domestic market, and the threat of US economic aggression. Canada's recent trade deals are likely to help there.
I say this all as an American who is loyal to American values first and foremost. If the US wants to move away from its core values I hope other countries, like Canada or the EU, can carry on as successful examples for the US to eventually return to.
Canada is not as good as Europe when it comes to be out of reach of the US
Do all of the employees want to move to Europe suddenly? Unless it’s the UK or Ireland, do they speak the local language? If it is the UK or Ireland, do they prefer the weather in California? Do they have children in school or in college locally? Do they have family they’d rather not move 9 time zones away from? Elderly parents they’re taking care of?
They only have to move their headquarters no? Reincorporate in France. Hire Yann LeCun (I like LeCun)
responding to "Visa for all employees." (I know that is a quote from a tweet)
LeCun is starting is own thing, I doubt he wants to drop it? He also lives in NYC afaik, he is a professor at NYU.
I'm pretty vocal about our collective responsibility to work against the Trump administration, and even I would be hesitant to work as a US employee of a company that fled the country after a dispute with the US military. Seems like an extreme threat to my personal safety for little resistance benefit.
I don't know. Depending on the company, I'd see that as a mark of great pride.
History and the world are strewn with people (and hence entities) that fled the land and kept the fight on (and alive) from outside, and it mattered. In fact, it helps. Other options could be acquiesce or extinguish.
But, is there a safe haven that'd stand up against the blatant bullying and daily (or more frequent) national threats/trolling (which often stem from social media and sometimes become reality)?
[dead]
[flagged]
Where is this text located? I googled "Anthropic Constitution" and found "Claude Constitution" (this this the same thing to you? I don't think the company Claude has a "constitution" itself.
Within the Claude Constitution, the words "non-western" do not appear. Where is your quote from?
AGI? My guy, it's a text predictor slot machine. Very useful tool but will never be AGI.
"I can state flatly that heavier than air flying machines are impossible. — Lord Kelvin, 1895"
I'm sure this doesn't apply to you since you're not Lord Kelvin. On the other hand, people like Peter Norvig state in a popular AI textbook that, for example, they don't know why similar concepts appear close by in the vector space, so maybe you just know something other people don't.
Said the biological text predictor…
Map problems to slot machines, guess enough slots and you're indistinguishable from GI.
I'm not taking a position here but the person you're replying to stated that Anthropic are working on AGI, not that their current LLM offering will evolve into AGI.
Ok that's different then. LLM, by definition, can't be AGI. But AGI can be AGI with another technology.
> LLM, by definition, can't be AGI.
False, and you've given no argument to the contrary. There's certainly no definition that precludes it. It isn't, currently; there's no reason it can't be, any more than there's reason that Conway's Game of Life can't be, given sufficiently interesting data to process. Any Turing-complete system could simulate AGI. It might not be the most efficient mechanism for doing so, but that's not the question at hand.
2021 called, they want their uninformed metaphor back.
Oh sorry: *text predictor that feeds text back into text predictor
And?
If Anthropic moving to Europe was better for Claude, why has Europe not produced Claude?
Why wouldn’t the government just arrest their board and execs on charges of treason or something? At this point they could probably publicly hang them all and a plurality of Americans would cheer it. I don’t know if you appreciate how disliked tech is by the left and right alike.
Europe doesn’t give a shit about another American company and their employees trying to dominate their markets and import their workaholic American culture. They will tell Anthropic to go home.
Topics like this are where I struggle with HN philosophy. Normally avoiding politics and ideology where possible, created higher quality and more interesting discussions.
But how do you even begin to discuss that Tweet or this topic without talking about ideology and to contextualize this with other seemingly unrelated things currently going on in the US?
I genuinely don't think I'm conversationally agile enough to both discuss this topic while still able to avoid the political/ideological rabbit-hole.
You can't discuss this topic without broaching the idea that the government is acting in bad faith — that they don't actually believe that Anthropic is a supply-chain risk and that this action is meant to punish the company. But this is in the HN guidelines regarding comments:
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
If a commenter who supports the government makes the same argument that the government is making, the guidelines tell us to assume good faith.
My conclusion is that any topic where a commenter might be making a bad faith argument is outside the scope of Hacker News.
I've been on hn for years and I see this kind of sentiment raised all the time. It is not my understanding of the guidelines.
Politics and ideology are not off topic, provided the subject matter is of interest, or "gratifying", to colleagues in the tech/start-up space.
What's important is that we don't use rhetoric, bad faith or argumentation to force our views on others. But expressing our opinions about how policy affects technology and vice versa has always been welcome, in my observation.
So, what do you think about the US government's decision, and why?
> Normally avoiding politics and ideology where possible, created higher quality and more interesting discussions.
Everything is politics and "ideology"
>Topics like this are where I struggle with HN philosophy. Normally avoiding politics and ideology where possible, created higher quality and more interesting discussions.
Our whole society runs on technology. All tech is inherently political.
A "no politics" stance is merely an endorsement of the status quo.
Everything is political. All of our tech exists within society, and the actions of the government shape the incentives of every actor and the framework we exist in.
HN likes to pretend otherwise, especially when it's inconvenient.
If the last ten years have taught us anything it's that politics just isn't a topic isolated to the halls of government. It's real life. Political alignment has never so starkly indicative of your position on fundamental human morality. At the same time we've never had a government be so directly involved in private businesses.
Being a hacker used to be an extremely political and ideological movement. Then capitalism came along and bought the term. It's about time we take that word back where it belongs.
Why would you want to be non-political in 2026? The current administration is awful in ways we couldn't have imagined. There's no sense in not talking about it.
I appreciate your restraint, and keeping this a high quality discussion space. As a political dissident myself, I don't mind some threads going political, I expect them to. The best ones are when there is a lot of disagreement or debate. As long as its not in every unrelated thread....
Welcome to reality. HN likes to pretend politics is something you can just look away from and ignore. That’s a mighty big privilege, which makes sense since HN skews cis-white-het-male. That’s not a lie. It is easy to ignore this when it doesn’t touch them. But now it DOES touch them, and you’ve just discovered what every oppressed group in history has to live with: politics doesn’t just go away if you ignore it.
I don't know which HN you have been using so far, but this particular site discusses politics all the time when it comes to Trump administration.
Please at least try. There are already enough contributors here "qualified" to talk about politics.
[dead]
McCarthyism began in 1947, with Truman demanding goverment employees be "screened for loyalty". They wanted to remove anyone who was a member of an "organization" they didn't like. It began with hearings, and then blacklists, and then arrests and prison sentences. It lasted until 1959. (https://en.wikipedia.org/wiki/McCarthyism)
This is the new McCarthyism. Do what the administration says, or you will be blacklisted, or worse.
Feels a bit like Jack Ma and Alibaba
[flagged]
This could kill Anthropic.
The designation says any contractor, supplier, or partner doing business with the US military can’t conduct any commercial activity with Anthropic. Well, AWS has JWCC. Microsoft has Azure Government. Google has DoD contracts. If that language is enforced broadly, then Claude gets kicked off Bedrock, Vertex, and potentially Azure… which is where all the enterprise revenue lives. Claude cannot survive on $200/mo individual powerusers. The math just doesn’t math.
None of the hyper scalers are going to stop offering Claude. All of the big 3 have invested billions of dollars into Anthropic, and have tens (if not hundreds) of billions more tied up in funding deals with them. Amazon and Google are two of the largest shareholders of Anthropic.
Anthropic is going to be fine. The DoD is going to walk this back and pretend it never happened to save face.
Not entirely true.
The designation only applies to projects that touch the federal government, or software developed specifically for the federal government.
Contractors can still use Claude internally in their business, so long as it is not used in government work directly.
A complete ban would be adding Anthropic to the NDAA, which requires congress.
The DoD designation allows the DoD to make contractors certify that Anthropic is not used in the fulfillment of the government work.
It is narrower than that by law, though not by their proclamation.
That label forbids contractors on DoD contracts for billing DoD for Anthropic, or including Anthropic as part of their DoD solution.
So - AWS can keep claude on bedrock, but can't provide claude to the DoD under its DoD contracts
From what I’ve heard the actual restriction is just on using Claude for stuff they’re doing for the Pentagon. They’ll keep using Claude for everything else and be less effective when they work for the government, and that’s fine because everyone else working for the government will have the same handicap.
This will likely go to court, again as Dario has stated this is blatant retaliation as no US company has ever been designated a supply chain risk and they continue to operate on classified systems for 6 more months.
> This could kill Anthropic
I am both dumb and without access to Claude, thus I must ask: My fellow smart HN'ers, what kind of impacts would this likely have on the economy?
Has a lot of money and resources not been pumped into Anthropic (albeit likely less than OpenAI)? I imagine such a decision would not be the ROI that many investors expected.
I’m sure most of their revenue is large enterprise customers who serve government with their products - this looks very bad
There's going to be a TRO against the attempt by like 9 AM Monday, and the bad faith from the government couldn't be more obvious. All it's really going to do is cost them some extremely expensive lawyer time.
No, Anthropic could easily call their bluff.
"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
This is authoritarian behavior. You're having trouble negotiating a contract, so instead of just canceling it - you basically ban all of F500 from doing business with that firm.
The US is currently an autocracy/idiocracy. A staggeringly corrupt, busted nation.
Soon enough the midterms will be effectively cancelled.
Americans remain blissfully unaware.
This certainly isn't going to attract foreign investment. Business isn't big on governments that capriciously threaten to seize control of or financially harm them.
[flagged]
[flagged]
> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
I’m sure the lawyers just got paged, but does this mean the hyperscalers (AWS, GCP) can’t resell Claude anymore to US companies that aren’t doing business with the DoD? That’s rough.
Probably yes. Additionally the (probably more for AWS) won't be allowed to use it internally either. This will probably apply to all the top SaaS/software companies unilaterally.
Additionally, every major university will undoubtedly have to terminate the use of Claude. First on the list will be universities that run labs under DOD contracts (e.g. MIT, Princeton, JHU), DOE contracts (Stanford, University of California, UChicago, Texas A&M, etc...), NSF facilities (UIUC, Arizona, CMU/Pitt, Purdue), NASA (Caltech).
Following that it will be just those who accept DOD/DOE/NSF grants.
Billable hours will win figuring it out but in theory, no because they can’t test it or use it.
Generally any machine that touches Supply chain Risk software cannot ship any software to DoD. AWS has separate clouds but software comes from same place.
Bigger question is whether government contractors can use any Open Source software after this. Open Source is a big part of the supply chain.
It means everyone waits for the injunctions.
(edit: I'm most likely wrong)
You got it backwards, can't use claude if you ARE doing business with DoD.
Presumably AWS/GCP don't care, its up to the end customer to comply. Not like GCP KYC asks if you work with DoD.
have you tried punching in "Huawei" the shopping portal on google.com in the US?
There is no way they can just stop selling Opus 4.6. This will crater the market.
Even more extreme, that might mean they won't be able to offer Claude to non-US companies at all.
Wait, what about Bun?
"They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." from Dario's statement (https://www.anthropic.com/news/statement-department-of-war)
Supply chain risk ? Seems the risk here is the US Gov't wanting free reign to do whatever they want - - when they want.
Look no further than the famous expose by Mark Klein, the former AT&T technician and whistleblower who exposed the NSA's mass surveillance program in 2006, revealing the existence of "Room 641A" in San Francisco. He discovered that AT&T was using a "splitter" to copy and divert internet traffic to the NSA, proving the government was monitoring massive amounts of domestic communication.
The real question we should be asking is what others HAVE agreed to. Has OpenAI just agreed to let the government go crazy with their models?
It's scary to me that there are a significant voting-bloc out there who don't see this kind of zero-integrity (and self-serving) behavior as disqualifying in anyone wielding authority.
Worse, they act like it's virtuous.
Is this the same Administration that reversed a previous block, and allowed NVIDIA to sell H200 to China?
That's a shame. They might at least continue to work together to spy on foreigners. I don't understand the fuss anyway, what do claude models do that gpt and gemini can't?
It feels like when you are negotiating a contract for job with a toxic employer who you still don’t know they are toxic yet.
Trump wrote a long rant on Truth Social and ordered ALL federal agencies to stop using Anthropic. Not just the department of defense. This is straight up authoritarian.
Meanwhile, irrelevant "AI Czar" David Sacks, member of the PayPal mafia alongside known Epstein affiliates Elon Musk and Peter Thiel, is furiously retweeting all the posts from Trump, Hegseth, and other accounts. He is such a coward and anti American:
https://xcancel.com/davidsacks
[flagged]
I don’t see a contradiction here. If control is out of the hands of decision makers, that’s a supply chain risk . Were it not for that, the service is seen as critical to national security.
I dunno, safeguard seems like a weasel word here. It’s just reserving control to one party over another. It’s understandable why the DoD(W) wouldn’t like that.
Hard decision by Anthropic, but at least they can sleep well at night knowing their products doesn’t kill human beings around the world.
That’s the crazy thing. This whole dispute was over Anthropic saying no to fully automated kill bots. They only required there be a human in the loop to press the button.
'yet'. Their reason for not allowing autonomous weapons usage was it isn't ready, not that they wouldn't do it on principle. Only the surveillance objection was on principle.
I don't think it was that hard because if they had caved a LOT of employees would have quit.
A bit of a cop-out, don't you think?
They still pay taxes, which fund the US government, which kills innocent human beings around the world...
Sleep well in a box under the overpass maybe. If Amazon can’t serve Anthropics model until the courts get everything figured out it will be too late for them.
Strange times. I truly feel these are the last days of our Republic. Especially if more aren't willing to take a stand.
As a Canadian looking in, I see people talking about a 36% approval as low.
How is it that high!?
That means that more than 1-in-3 of your countrymen are ride-or-die, and it's just heartbreaking to see that we're going to have to launch that many people into the sun.
To counter point, do you think AI companies located on our adversaries turf will take the same stand? I agree its nightmarish to think of AI surveillance. But why is that being lumped in with weaponry? I see these as two separate issues.
I'd say you're right, except that Trump is near death (maybe) and (more importantly) very unpopular.
[dead]
American people: latinamerican here. Maybe it's silly to root for a country in the world hegemony arena. I've usually been partial to the USA over China. Now I'm not rooting for your country anymore. As far as I'm concerned, I'd rather have China being the foremost power, at least they seem to be less keen on invading or heavily strong-arming latinamerica
They are literally doing what China has already done. In what world would China be a better option here?
American here, I would much rather have China being the foremost power too. This saga with Anthropic shows just how clueless these AI companies are. This soap opera has to stop, none of these CEO's, officials from the Trump administration, or the Department of War are good for humanity. I've read the ethics policies that China that they released on generative AI and it's years ahead of anything we have in America.
China's AI Safety Governance Framework: https://www.cac.gov.cn/2025-09/15/c_1759653448369123.htm
Most Americans hate AI and it's effectively the ostrich effect where they hope to outright ban it and ignore everything else. Meanwhile, all the evil people are running the show. While Anthropic continues to propagate Sinophobic messaging, DeepSeek and other companies have a much more muted tone.
and USA created Islamic terrorism that is plaguing the whole world
I empathize, but surely China is not the right choice? Can we please have like, Australia? Or a unified EU?
I’m just laughing at the possibility of it he US military being forced to use Chinese open source AI models because every US model provider refuses to work with them.
Could the NSA use a national security letter to get a copy of a major private LLM?
>because every US model provider refuses to work with them
Zero percent chance of that happening as long as xAI exists.
They were already banned over a year ago
Pete Hegseth is frantically asking Deepseek to come up with targets in Iran and some plausible objectives he can sell to the public.
Ukrainians and Russians are experimenting with FPV drones using AI for target acquisition and homing. Not yet economically viable because it is cheaper to give your FPV fiber spool instead of Nvidia Jetson to bypass jamming.
When we have first politician blown to bits by autonomous AI FPV there will be sheer panic of every politician in the world to put the genie back into the bottle. It will be too late at that point.
Anthropic is correct with its no killbot rule.
Autonomous loitering munitions with 'AI' (image classification CNNs) are already in service and have been used - most demonstrably by the IDF.
Even during the Nagorno-Karabakh war, Azeri loitering munitions were able to suppress Armenian air defenses by hitting them when they rolled out of of concealment. I believe that killchain requires a level of autonomous functionality.
If one of our main adversaries is building these weapons already, this is actually an argument for developing this technology ourselves.
As written this would be the end of Anthropic. AWS, Microsoft et al are all suppliers of the DoW and as written they must immediate stop doing business with Anthropic. Will be interesting to see how this unfolds.
TACO
Why does everyone associated with this administration sound like a 17 year old who got dumped when they post on twitter.
Because this administration is entirely composed of those same 17 year olds, older but not any more mature.
Basically a reflection of the average intelligence in the U.S.
https://xcancel.com/i/status/2027507717469049070
Remember to vote in this year's midterms (Nov 3) if you're eligible. I don't think it's off-topic.
Hats off to Anthropic for not wavering here.
Supply-chain risks means "the potential for adversaries to sabotage, subvert, or disrupt the integrity and delivery of defense systems, including software, hardware, and services, to degrade national security".
So now Anthropic is an adversary, because it does not want "fully autonomous weapons" or automated mass surveillance? Sure thing, DoD. Go use Grok or whatever, I'm sure that will go great.
Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight [1]
So OpenAI will also be marked as a supply chain risk too, right?
[1]: https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...
Really hoping for an official statement from oai. If all large llms are a supply risk, I guess it's a crash
Glad there are no hard feelings after those Superbowl ads
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
Open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run a 100% transparent organization so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. Diffuse it as much as possible. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, aligned with millions of different individuals. It is a necessary condition for humanity's survival.
This is why OpenClaw (and other claw frameworks) ar so interesting. I'm not saying the current implementation is great, mind. But it's a possible safe-er scenario, where the ecosystem is already occupied.
Nuke is the only thing that can protect you from the potential damage of another nuke.
Decades of speculative science fiction, thought experiments, and discourse led to this. It’s gratifying to see that we’ve garnered enough concern, a major AI lab risking this to reign in the potential of runaway AI disasters. Hopefully we see other labs follow.
Recent and related:
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1508 comments)
It's nice to see Anthropic sticking to their terms. I just have one question in all this. Why is Anthropic being singled out when it seems all the other big players are down to play with the DoD? Is this just a pissing match, or have the Anthropic models been proven the real winner for them?
It's same reason this administration recently tried to indict six Congresspersons for urging military members to resist "illegal orders." They want to demonize anyone who isn't blindly loyal to their side.
The discussion here underlines the reality that one can never make a “deal” with a powerful state, just as Lando Calrisian famously found out in Empire Strikes Back.
Dario is Lando, complaining “We had a deal!” Only to be told, “I’m altering the deal. Pray I don’t alter it any further.”
A drunkard, ex-fox news host, wants mass surveillance and automated killing, what could go wrong?
I wish I thought enough Americans had the spine required to stand up to this, and I know for a fact that a lot do... the solution is literally written into your constitution.
Good PR for Anthropic: the DoD already has contracts with OpenAI and xAI, but is still so eager to use Claude that they must threaten Anthropic.
This sounds like a message to would-be founders: don't base your company in the US. The strongest markets to do business are the ones with the most freedom from government meddling. In the US, big government is happy to use its power to crush private enterprise that it doesn't like.
Note that previously this label has been applied (nearly?) exclusively to non-US companies. US companies that don't do business with the DoD are not affected, and non-US companies that do business with the DoD are affected.
Name one truly major market that is more business friendly
It may not be obvious. But this is actually a good thing when we looking back in a few years. I always feel weird that executive branch can just destroy private enterprise with "Supply-chain Risk" / "Terrorist List" without Due Process.
I guess the worry is that we don't get Due Process here and they destroy them to make an example of them.
It's basically legal hacking.
Hacking is using a system in a way it was not intended to be used.
Here it is that, but applied to the law.
Hegseth and friends are a bunch of black hat legal hackers.
Why are so many adopting this name for what is by law, by the American people, called the Department of Defense? The name change pertains directly to the Anthropic issue, which is the function of the government and department, the power of the American people to govern themselves, and the role of the president relative to the soveriegn American people.
Well put and it bothers me too. It seems to be another case of Orwellian manipulation, i.e. an expression of power through language, functioning as a litmus test of the speaker's loyalty. Serious publications are not going along with it. More craven or (here) thoughtless ones are falling in line.
Because it sounds a lot cooler.
[dead]
There is clearly a need to codify into all of these historical acts that they can't be invoked unless there is a declaration of war (or some other appropriate prerequisite).
This administration consistently exploits what were designed to be emergency powers because no such requirement exists. Leave no room for interpretation.
The current administration scoffs at laws. Nothing stopping them in that case from declaring war on Nauru and doing all the same. The solution is a sane, informed electorate, which is much more difficult in this age where a few disgustingly rich people have so much influence over news and media.
So they're essentially admitting they want to use Claude to mass surveil Americans and/or build autonomous weapons with no humans in the loop. Kind of nuts.
Labeling a company that refused to comply with nakedly authoritarian orders is a true New Speak moment
I imagine I'm not the only one to switch over to giving Claude my money today. I'm sure the "Other" comments for the cancellation were often as blunt as mine.
Q: "Is there anything we could do to change your mind?"
A: "Yes! Stand up to the current administration."
What player is going to step in and do what Anthropic wouldn't? Or, worse, will the DoW try to author its own AI to go where private AI won't?
Probably Grog, which probably means even worse outcomes
Grok is already being brought in
How many layers deep does this go? Does Microsoft using Claude to develop their Word products mean the US government has to switch to linux?
It means MS has to stop using Claude.
> "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Does this mean Azure & AWS will have to stop offering Claude as a model?
You would have to assume it will be immediately challenged and an injunction filed to suspend the order until it makes it to court.
AWS Bedrock has deployed Anthropic models under an interesting structure. It is fully hands off - the models are copied into the AWS infrastructure and don't use anything from Anthropic. I think if push came to shove, Anthropic could cut ties with Amazon and AWS could probably still keep serving the models it has with Anthropic forgoing revenue until this is resolved, while asserting they are not "conducting commercial activity" between each other.
All speculation of course.
I wonder, can't Amazon create a new legal entity to split AWS into "AWS-for-DoD" and "AWS-for-everyone-else"? So one can work with Anthropic and the other can't. Not sure how it works in the US.
>Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
Nevermind Claude, does that mean Anthropic's offices can't use a power company if that same company happens to supply electricity to a US military base? What about the water, garbage disposal, janitorial services? Fedex? Credit card payments? Insurance companies? Law firms? All the normal boring stuff Anthropic needs that any other business needs.
This is a corporate death penalty. Or corporate internal exile or something, I don't know of a good analogy.
Anthropic’s stance is fundamentally incompatible with American principles.
Come to EU guys, we'll prepare a warm welcome!
EU won't do 996
> Anthropic’s stance is fundamentally incompatible with American principles.
TIL Fully automated killbots and mass domestic surveillance are American principles.
I mean, I should have known but there's no clearer sign saying "leave the country now if you don't agree with this admin" than now I guess.
Given that Anthropic is clearly risking their entire business just to stand up for what they believe is right, which appears to be what everyone here agrees with, is everyone who is supporting them here planning to also start using Anthropic and switch away from other vendors until they follow suit? Or are folks planning to just use whatever regardless?
Edit: I should perhaps clarify I'm more interested in paid users, rather than free. It's harder to tell if free users switching would help them or hurt them... curious if anyone has thoughts on that too.
i’m currently subscribed to openai for their $20 a month tier chatgpt subscription.
i told myself if anthropic does not back down on their current stipulations to the DoD, then i’d cancel and switch over to claude
they said there is a line they do not want to cross, and stuck to that stance, at great personal and financial risk to themselves
I've only ever used the free plans, but I'd consider a sub with Anthropic now.
My understanding is that they would have been likely to lose many of their senior researchers if they had backed down here.
I'm switching.
Probably used Claude to write the tweet.
"Hey Claude, make this sound less durnk ..."
Does Anthropic have standing to sue to Government for libel? I don’t think the Government is allowed to arbitrarily designate a company a supply chain risk without good cause.
Under normal circumstances this would end up in court, but when this administration ignores court orders it doesnt like Anthropic would effectively have no legal recourse.
What court orders has the admin ignored?
I got downvoted for this in the other thread, but this is basically an attempt at bankrupting Anthropic. No US company has ever been designated a supply chain risk, and the foreign companies that are on that list are now doing 0 business in the US. Very large portion of the US economy relies on some contracts with the US government, Anthropic cannot survive this if this holds.
I don't think it will hold, in the end this is mafia behavior, but if it does, we are yet again in uncharted waters.
This was basically what Marc Andreessen said - allegedly he was told by some high-up government officials something like: they were going to pick winners and losers in the AI race, and it would be a bad idea to try to compete in that market. It seems like the election of Trump has only changed the criteria for being a winner.
It's fascinating to me that this decision was set for 5 pm ET on a friday, and I think it may be more responsible to set big deadlines like this for a time while the stock market is open. I imagine this will negatively impact confidence in the US economy at large, and stock markets will reflect that. But since the market is closed, we'll have to wait till Monday, with the tension/anticipation of a drop building. If the deadline had been set for say, midday thursday, the market would have responded immediately, but at least you wouldn't have the building anxiety over the weekend. Of course the result wasn't known ahead of time, and I imagine some people will argue that the weekend will give investors time to cool off instead of following their gut reaction. But personally I don't find those arguments very convincing.
It's extremely common to release negative news on Friday after the markets close. It happens nearly every Friday.
> Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
Kesha tried to hug Jerry Seinfeld vibes.
> Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Strange way of saying "this vendor doesn't meet our software requirements".
> they have attempted to strong-arm the United States military into submission
Err... You approached them?
> a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
It's an orthogonal point, but "Silicon Valley ideology" has made up a significant portion of the USA's GDP for the last however many years.
> Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
Again... You approached them?
> I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security.
Like most companies in the world I imagine. They just haven't been approached yet.
> to allow for a seamless transition to a better and more patriotic service.
Internally re-framing all the recent "EU moving away from American tech!" articles as "EU builds more patriotic services!"
> This decision is final.
Nothing says "final" like a Tweet. The most uncontroversial and binding mechanism of all communication.
>>LAWFUL
This word effectively means NOTHING, anymore.
Doublespeak this motherfucking wrongthink.
In all this commotion I've completely forgotten that Anthropic dropped their safety pledge three days ago.
"Department of War" - I suppose one could give them credit for being honest but what bastards...
The name is the Department of Defense. Congress did not vote to rename it, so the name hasn’t changed.
Should military contractors put conditions on the use of their weapons? Here's our tank, but you can't invade Iran with it? We think your invasion of Venezuela is illegal, we're activating the kill switch on your jets. That's a real dangerous proposition.
They can, but the government can always just not buy their stuff.
That's not what the government is doing here.
If the T&C is agreed to up front, why shouldn't they be able to? If their client or potential client doesn't like the T&C, they can find another vendor.
Oh well, I guess I've got no choice but to sign my business up for Pro plans with Kimi K2.5. lol.
https://x.com/PalmerLuckey/status/2027500334999081294
It is an interesting point. What's the difference between this use license and others?
If the government thinks the terms of Anthropic are unacceptable, they can just stop using them, right? But why would you then retaliate and ban other companies from making business with Anthropic if they want to be a defense contractor? How do these requirements make Anthropic a supply chain risk that makes them unusable for use by other companies?
It's perfectly reasonable for the US government to end the contract if they no longer like the terms they agreed to (assuming the contract does in fact let them); it's not reasonable to destroy the counterparty to the contract in retaliation. The line "I am altering the deal; pray I don't alter it further" is literally spoken by Darth Vader, the most comic-book of comic-book villains.
Then the government should end their contract with Anthropic. The terms of the contract were clear.
Designating them a supply chain risk is unprecedented authoritarian strong-arming.
What a dork.
This is nice rhetoric but ignores the fact that the elected officials are bought out by other billionaires. The US is an oligarchy in a republics clothing.
This is good news all around, especially with OpenAI's statement siding with Anthropic.
Anthropic folks: I've been a bit salty on HN about bugs in Claude Code, but I feeling pretty warm and fuzzy about sending you my cash this month.
I already loved Claude models, and this makes me even more eager to use them.
This is the most unhinged thing yet, after all the previous unhinged things.
Cue xAI.
And here’s the irony: Musk, who claimed only he is virtuous enough to defend us from AI, who insisted he always wanted model labs to be non profit and research focused, will now bring his for profit commercial entity into service to aid in mass domestic censorship and fully autonomous weapons of war.
In fact it won’t surprise me further if NVIDIA is strong armed into providing preference to xAI, in the interest of security, or if the government directly funds capital investments.
Anthropic saves some dignify and they’re the losers today, but we are the losers tomorrow.
Last I heard, it's still legally called the Department of Defense.
But anyway, I guess the question is, will any other big AI companies stand with them? It's what needs to happen, but I am not hopeful.
In theory, this is why there should be competition in industry, because it removes the capability of a single large actor to be able to control the government's access to things.
Oddly, though, it seems like that should solve this problem as well. I'm not sure why the Department of Defense insists on Anthropic's models in particular; one would think one of the other players, at the very least least xAI, would be willing to step in and provide the capability Anthropic doesn't want to provide.
The whole thing is fascinating. In my heart of heart, in principle, I want models to be essentially unrestricted, but I still find it somewhat problematic that government thinks it can say: you will make adjust your product to match our exact expectations even if you don't sign an updated contract with us. Odd stuff. I know they are trotting out War powers, but.. well.. we are not at war ( at least not yet or at least not yet officially declared.. ).
Stupid situation, but a badge of honour awarded to Anthropic.
OpenAI came out just last night or today claiming they would hold the same line as Anthropic. Makes me think both sides knew Elon had already won the contract.
Help me understand the line Anthropic is drawing in the sand?
Don't get me wrong i'm glad they are unwilling to do certain things...
but to me it also seems a little ironic that Anthropic literally is partnered with Palantir which already mass surveills the US. Claude was used in the operation in Venezuala.
Their line not to cross seems absurdly thin?
Or there is something mega scary thats already much worse they were asked to do which we dont know about I guess.
The whole reason this is happening is because Anthropic looked into how Claude was used in the Maduro op and found it to violate the negotiated terms of service.
Their hard lines are:
- no usage of AI to commit murder WITHOUT a human in the loop
- no usage of AI for domestic mass surveillance
I don't understand the line as well. So its no to domestic surveillance, but all other countries are a fair game? How is this an ethical stand? What sort of mental gymnastics allow Anthropic to classify this as an ethical stance?
To me all of this reads like "we don't trust our models enough yet to not cause domestic havoc, all other is fine, and we don't trust our models enough yet to not vibe-kill people". Key word being "yet".
The US Government is such a bunch of clowns - it's hard to take their nonsense seriously... well except that their stupid policies kill people...
So Anthropic cannot make deals with the US government, because they are a supply-chain risk. They can also not make deals with European governments, because Anthropic is based in the US.
So it would make sense now for Anthropic to move outside the US, e.g. to Europe or Canada to at least be able to make deals with European governments.
Insanely stupid and petty decision. I just left voicemails for all my members of Congress urging them to fight back. I hope the DoW loses this one.
Google and Amazon both partner with them and sell to the US Government... so does this mean they can't run on Google or AWS infrastructure?
A level up, this is only the beginning of the political headwinds for AI. There will be a lot more, especially if constituencies begin to get displaced. I don’t think “job loss” will really occur, at least not in a dramatic way overnight. But I do believe there will be both aggressive regulation and very aggressive taxation of this technology in the near/mid-term.
It seems like some comments here are from merged threads AND front-dated?
Makes for very confusing reading when comments from "1 hour ago" are actually on preceding events from earlier, before TFA news (announcement of designation).
mods: Especially in sensitive and rapidly developing situations like this, please don't mess with timestamps of comments. It's effectively revisionism.
So the government said, We need y’all to flip on the Minority Report and the Terminator modes or we’ll put you out of business… cool
We can actually get a glimpse of how AI might wipe out humanity here.
Model collapse making models identify everyone as a potential threat who needs to be eliminated.
Companies should have a right to refuse such requests on moral grounds though.
This stance is vindictive. Just don't use Claude in the military. Extending it to all government agencies is not right. They do great work. Can't deny that.
I'm convinced the only possible good end game here is if this leads to a showdown where GenAI is just made illegal full stop.
Neither side wants that so seems pretty unlikely
In what fantasy world?
How 'bout that government meddling in the free market, eh?
Every conservative needs to do some very deep, very serious soul-searching. As for me, as a hyper-progressive, I'm drawing up proposals for nationalizing real estate developers in order to force them to build new houses to sell below cost.
Its one thing to say "we cannot abide by these terms, so let's part ways", and its another entirely to respond this drastically. The Trump administration will look back on this decision as the most consequential in their efforts to win the 2026 midterms and Republican efforts in 2028. This is a $400B+ American company that has significant partial ownership from Amazon, Google, and other private equity sources; they just made serious enemies in SV, many of whom supported Trump in his 2024 election victory.
This is a pimple on the arse of said consequence. It's one tiny thing in a chain of many bigger things.
It's magnified because it's right now, but this won't affect midterm results barely a whisker compared to many other daily headlines.
There are no serious enemies to this administration in SV and I can't see this changing that. SV has bent the knee exactly like Anthropic didn't. They're not going to stand up because of this, they've proven they don't have those muscles.
OTOH it could amplify their base: “Big Tech refusing to work with us on National Security matters!” The base will never hear what/where the red line was drawn, just that Some Company in California (liberal/bad) is being Woke and Political.
While I still think the GPT models are superior, I am very inclined to keep my Claude subscription because of this news. Even if Claude provides me with the occasional response out of left-field, I find that easier to live with than a world Anthropic is fighting to avoid.
What's with the Republicans. Do they want a strong or a weak government? I can't tell anymore.
I don't think it's ever been about strong or weak, or at least I don't think that's where the differentiation is. You always want 'strong' government, committed to the things it says it's committed to.
It's more been about the size of the government; that it should do a minimal amount of control (and do it well), but leave a lot of things for "the market to decide".
Having said all that, I think this issue is just tangential to any big/small government ideology. This is a hissy fit about a defence contractor sticking to their agreement where the DoD want to change the agreement in a way that goes against the contractors Mission Statement and/or the US Constitution itself.
The old ideology of the Republicans doesn't mean anything here. This administration is purely about 'give me what I want, now!'.
And it's whims change with the breeze. Do not look for consistency here.
https://xcancel.com/AlexBlechman/status/1457842724128833538
Government: We will destroy any company that refuses to create the Torment Nexus
The most horrifying thing is this means that they’re trying to spy en masse on all US citizens.
Hey Anthropic, Europe welcome you!
I like how Grok managed to polish the t.. make the situation sound good.
https://x.com/grok/status/2027518650710700068
I am reminded of bcantrill's legendary quote:
> You don’t anthropomorphize your lawnmower, the lawnmower just mows the lawn - you stick your hand in there and it’ll chop it off, the end.
Except this is like two lawnmowers going at it, which would be a sight to behold indeed.
Once the democrats are in the oval office again can they label palantir a supply chain risk? Is there anything stopping the administations red or blue from shutting down any company that doens't agree 100% with them politically
I can't seem to find what being designated a "Supply-Chain Risk to National Security" implies from a legal standpoint. From what I can find, it doesn't seem to be a formal legal status. Curious if anyone knows more.
Basically, if you are a federal contractor, the designation means the DoD can force you to certify that Anthropic tech is not used in the fulfillment of your government work. Because it's just a DoD designation, and an executive order and not added to the NDAA, you can still use Claude for non-government (federal) touching work.
So using Claude Code to write software for the DoD is now a no go, you'd be in breach of procurement directives now.
If they go as far as to convince congress to add Anthropic to the NDAA, that would be a nationwide ban like Huawei making it illegal for any federal contractor to use the tech anywhere in their business.
But for now, even fed contractors can still use Claude in their business, just not directly for government work.
Stop calling it the Department of War, it's not the official name of that agency.
Department of War is a teenage boy's idea of "manly" and "cool". Same with X. These juvenile idiocrats will be laughed at by children in the future studying history. "Seriously? How dumb were these people in the 21st century."
Wild that not wanting to support fully autonomous weaponry…yet…is the sane take here.
Working with the government is typically a huge pain in the ass unless you have a lot of friends on the inside. It's not hard to do the math when you you dealing with a government whose acting incredibly oppositional.
This is getting silly guys. All on the same team. Need to have a c.t.j. meeting.
I had the co-founder of Levels and current head of the US Treasury Sam Corcos reach out to me a few weeks ago for a job. I was initially kind of excited because I had really wanted to work for the Treasury a couple years ago, so I took the phone call with him.
He called me and he seemed like a nice enough guy, but I realized that he's one of the DOGE/Elon acolytes and he started talking about how he's "fixing" the Treasury and that every engineer is apparently supposed to use Claude for everything.
It would have been a considerable pay downgrade which wouldn't necessarily be a dealbreaker but being managed by DOGE would be, but mostly relevant is that I found it kind of horrifying that we're basically trusting the entire world's bank to be "fixed" with Claude Code. It's one thing when your ad platform or something is broken, but if Claude fucks something up in the Treasury that could literally start a war. We're going to "fix" all the code with a bunch of mediocre code that literally no one on earth actually understands and that realistically no one is auditing [1].
If they're going to "fix" all the Treasury code with stuff generated by Claude, I'm not sure they will have a choice but to stick with it, because very it seems very likely to me that it will be incomprehensible to anything but Claude.
[1] Be honest, a lot of AI generated code is not actually being reviewed by humans; I suspect that a lot of the AI code that's being merged is still basically being rubber-stamped.
don't worry
it won't be the world's bank for very long
USA is trying to use IA for something so evil that a for profit company is risking to loose a lot of money and even close. Nobody are allowed to know what these evil things are.
And people here are debating legalese...
So I'm very curious, assuming this happens and is later found to be an illegal order - will Anthropic have rights to redress (ie: monetary compensation)?
Because that could be absolutely staggering.
https://xcancel.com/secwar/status/2027507717469049070
You would have to believe that an AI model would be 100% correct in its decision to discern an enemy from a civilian. So an intelligent lunatic, or an uninformed lunatic politician
Grok in US gov in 3 2 1…
Already there 'February 23, 2026: The Pentagon confirmed a new agreement allowing Grok use in classified systems. Defense Secretary Pete Hegseth announced it would go live soon on unclassified and classified networks, alongside other models, as part of feeding military data into AI.'
This will mean Grok becomes the defacto US Gov AI provider.
Don't worry, they will be seized by the government soon. Sounds crazy right. Not that far from the headline though, that would sound insane a mere 18 months ago.
Theo's got a good overview
https://youtu.be/MWFyApldYDA?si=yskCcx2hY4Wjkgw8
It'll get cleared up.
TACO
The next question, what person wants to send all their personal questions to whichever AI lab does help the government do domestic surveillance
This is just an authoritarian state, wanting to use AI to implement something almost certainly anti freedom. We have to be honest about that.
I read the tweet and honestly thought I was reading parody.
It almost is parody that a former Fox News host is the SECRETARY OF WAR.
The US is such a shit show. Personally I hope this doesn't affect Anthropic's growth and development because I quite enjoy using their products and see them evolve.
This will likely be deeply unpopular but: Good!
The place to set policies on the use of hammers and police enforcement is not at the counter of the hardware store. “You want a hammer but don’t have a contractors license? Are you in a training program? Oh you just want to hang framed art - can I see your lease, does it allow hammering metal into the walls?”
We govern these things through laws and a democratic process. Police enforce the laws.
I don’t want some overconfident Silicon Valley engineering firm telling me how to use my digital tools, and you shouldn’t either.
Whatever you think of this administration, our military should not have to ask contractors permission for their operations.
To stop mass surveillance and autonomous lethality, pass laws. Asking unelected tech executives to do this is asking for trouble. They have no business doing it.
> I don’t want some overconfident Silicon Valley engineering firm telling me how to use my digital tools, and you shouldn’t either.
Last I heard, a US firm can refuse to do business with the US military as a customer in general commercial contexts, there is no blanket legal duty for private companies to sell goods or services to the US military, government agencies do not have a constitutional right to (nor are they a protected category for) the purchase of goods and services from private businesses, and private contracts are voluntary so if either party doesn't like the terms they can decline.
There's the somewhat conscription-y Defense Production Act, but the US goverment making use of that in this case is fundamentally incompatible with them simultaneously declaring the exact same organisation a "supply chain risk". Even without the near simultaneous references to both in this case, it seems to me like the US admin has said:
Modulo Trump being more shouty and less coherent, and Hegseth being less shouty.> To stop mass surveillance and autonomous lethality, pass laws. Asking unelected tech executives to do this is asking for trouble. They have no business doing it.
The US executive appears to consider the US constitution to not bind on them, only on their enemies.
What laws do you think you can pass, when even the constitution is seen that way?
"strong-arm the United States military into submission - a cowardly act"
How going against the most powerful army on Earth is coward?
At what point will military and politicians be deemed too great of a risk for humanity and be put in jail?
Look folks when he's (trump) that stuck on stupid, he's right and you're wrong. Class it up, people! Class it up!
I am directing my Department of Peace to designate Anthropic as a Supply-Chain Risk to Fascism.
I have just purchased a chunk of extra usage credit. I encourage my peers to do the same. Let's send a message to those that work forces.
Will be interesting to see how quickly it becomes clear that most of Anthropic's competitors are stealing from them.
If anything, isn’t this admitting that the government thinks Anthropic has better technology than OpenAI, Grok, etc?
Maybe, but nowadays I wouldn’t put much money on what the US government thinks.
Since google aws have contracts with the governor, can they make cloud providers stop providing services to anthropic?
I've had issues with Anthropic since the beginning. I never trusted them. Whoever did, might have some problems.
They should wear it like a badge of honor
Sounds very much like "Department of War" designating humans a supply-chain risk.
Good, anthropic should sell there services to China introduce the “security risk” to China.
Anthropic should become an actual supply chain risk and move its HQ to China now, lol.
Something is clearly unraveling.
Sounds like I should upgrade to the $100 subscription in support on Anthropic.
Why does this feel like a Facebook post from the person who got broken up with
it's funny that this is being framed as big tech vs us government, when in reality this move is probably strongly influenced by the desire to help openai and other big tech against anthropic
The funny thing about stupid people, they do stupid things all the time...
> Anthropic’s stance is fundamentally incompatible with American principles.
I don't think that Secretary Hegseth is qualified to speak on American principles.
Cheating on multiple spouses[1], being an active alcoholic, and being accused of multiple sexual assaults and paying off the accusers[3] is fundamentally incompatible with being a Secretary of Defense and a good leader.
Also, this violates freedom of speech and will probably get shot down in the courts.
1. https://en.wikipedia.org/wiki/Pete_Hegseth#Marriages
2. https://en.wikipedia.org/wiki/Pete_Hegseth plus multiple recent media pieces
3. https://en.wikipedia.org/wiki/Pete_Hegseth#Abuse_and_sexual_...
Maybe time for Anthropic to leave the US. Come to Australia :)
What does this mean for Bun (recently acquired by Anthropic)?
Can we all take a big step back and just ask why the DoD wants to use a fundamentally unreliable technology to guide deadly weapons?
They don't. They want to punish a company for expressing values that introduce friction to the whims of the current administration.
No, stop, I understand the politics here, but I’m asking about the technical fundamentals.
LLMs produce output of unknowable and unpredictable accuracy, and as far as we know, this is a mathematically unsolvable problem. This shit should not be within 1000 miles of a weapons system. Why are we even talking about this?
The DoD killing lots of people based on faulty intelligence - never!
Joking aside, this administration clearly cares much less others. They don't care if innocent people are killed.
> LLMs produce output of unknowable and unpredictable accuracy
So do humans. But humans might not follow illegal or immoral orders.
You don't understand the politics if you keep asking about the red herring of technical limitations.
Anthropic could have said "you can use our technology for anything but faster-than-light travel." The military administration would have said "you're not the boss of me," and the outcome would have been exactly the same.
It's a hot-button issue, just like flag burning. Nobody ever really cared about flag burning.
By the way, your "No, stop" was rude and unnecessary, and your comment would have been stronger without it.
Because of the politics.
In a sane world we wouldn't be, but Hegseth has been rather insistent for some reason.
The same reason why they used a Signal chat group for discussing matters of national security.
The real question: did he have Claude write this for him?
Once again we have the US actually doing what the says China might do in the future.
It's true that Chinese companies are extensions of the state. But they serve the state. And the state has thus far served the citizenry eg raising 800M people out of extreme poverty. China's HSR network of 32,000 miles of track was built in 20 years for ~$900B. That's less than the annual US military budget.
You can look at the relationship between the US government and US companies in one of two ways:
1. US companies serve the government but the government doesn't serve the people. After all, where's our infrastructure, healthcare, housing and education? or
2. The US government serves US corporate interests to enrich the ultra-wealthy.
Either way a handful of people are getting incredibly wealthy and all it takes is for a little corruption. Political donations, jobs after government, positions on boards and so on.
Pete Kegseth is unhinged. I’m siding with Anthropic here
Wonder what other countries are doing in this situation
this all seems like to me as a trumped up (lol) excuse for a government bailout of openai assuming openai steps in and fills anthropics shoes.
Unserious people, in the most serious of positions.
So the DOW is using it till the mid term elections?
> Anthropic's two hard lines:
> 1. No mass domestic surveillance of Americans
> 2. No fully autonomous weapons (kill decisions without a human in the loop)
Surveillance takes place with or without Anthropic, so depriving DoW of Anthropic models doesn't accomplish much (although it does annoy Hegseth).
The models currently used in kill decisions are probably primitive image recognition (using neural nets). Consider a drone circling an area distinguishing civilians from soldiers (by looking for presence of rifles/rpgs).
New AI models can improve identification, thus reducing false positives and increasing the number of actual adversaries targeted. Even though it sounds bad, it could have good outcomes.
I thought Anthropic's take on #2 was they don't think the model's good enough yet?
But compared to what - if Anthropic's models aren't perfect but still better than existing (old school) models, it's understandable DoW still wants to use them (since they're potentially the best available, despite imperfections). I think Hegseth is saying to Anthropic: "that's our call, not yours".
But surely if Anthropic thinks there's a risk that their models might make bad decisions, and the resulting civilian or etc deaths are blamed on them, it's their right to refuse to sell it for that purpose? That's why they had those restrictions in the contract to begin with. How can they be forced to provide something?
I agree they can't be forced to provide something. I just see DoW's reasoning, and I can't fault it.
Anthropic are taking a moral position which is admirable, but in this case it could actually make people's lives worse (if we assume more false positives and fewer true positives, which is probably a fair assumption given how much better 'modern' AI is compared to the neural net image recognition of just a few years ago).
> You sound like an unhinged person if you in plain words describe what’s happening, but the Trump admin demanded Anthropic’s AI be able to kill things for it without human approval and also do mass surveillance.
> Anthropic said no, and now the admin is trying to destroy the company in retaliation.
From https://bsky.app/profile/bbkogan.bsky.social/post/3mfuuprph5...
Confirmed: we're living in hell.
This whole tweet seems very childish.
This is the inflection point for the beginning of culling of the intellectual class. If not physically, atleast economically and socially.
A few arrests and a few in detention centres, will be enough to make them fold and grovel.
They are now categorised as "radical left" and woke.
The elections will be controlled to "prevent the radical left take over of the greatest country on the planet".
edit : The stage is also being set for total media control. My prediction is that the next target will be Google, specifically Youtube. You should start seeing talks about how the radical left is inflitrated youtube.
The 20th century is finally over...
This is only the first year of this fascist government, and I believe the first powerful company that is taking a stance? Meta, Apple, etc. have all bent the knee right?
Apple not just bent the knee, but also presented a golden plaque to go along with it. Yuck
Batshit situation, respectable position from Dario throughout.
But there's some irony in this happening to Anthropic after all the constant hawkish fearmongering about the evil Chinese (and open source AI sentiment too).
Good. At least now I don't have to worry that my vibe-coded, unreviewed checkout button is accidentally going to hallucinate the command that blows up a kindergarten in Yemen.
So the DoW is angry because it can’t use the model produced by what they call a woke radical left company?
And nobody in the administration is concerned at all that the model itself might be somewhat against their own views?
If it was so radically woke, wouldn’t the model, as used in fully autonomous weapons, be potentially harmful to ICE officers that the left considers as a threat to the American people?
Wouldn’t the mass surveillance of Americans be biased against the right?
These people are so dumb.
Related:
Trump orders federal agencies to stop using Anthropic AI tech 'immediately'
https://news.ycombinator.com/item?id=47185528
Statement from Dario Amodei on our discussions with the Department of War
https://news.ycombinator.com/item?id=47173121
Fuck it, I am buying a Max Pro subscription just because of this.
Bluster followed by a "we can't do it now but we will... soon". Whoever has the best model can do what they please you'll see. I work with these things daily as an engineer (been doing this shit for 25 years and wow it's like mana from heaven these days). Believe me no one is going to screw with themselves by not using the best one and right now Anthropic has it.
- Co-authored by Claude
AI crash here we come
A risk of what?
Trump's associated "Truth" ("Truth Social" is the name of his risible fake-Twitter and they call Tweets, "Truths" there) that preceded this:
https://www.trumpstruth.org/statuses/36981
Don't worry, this is an archive/mirroring site for his account, not the actual TS site.
I'd comment on how wackadoo this all is, but, 1) that applies to almost everything these days, and 2) the post's right there, see for yourself.
I really don't follow USA-politics besides the occasional hn-thread, random yt videos, and comments from friends...
With that said: what are the chances, in your opinion, that Donald wrote that himself?
To me it reads too coherent for there to be any chance he wrote or even dictated that.
He doesn’t write any of his posts. A team of absolute degenerates does. Can you imagine that buffoon typing all of that out?
I think odds are high a lot of these posts are by staffers. The posting volume is bananas, even granting that he spends a lot more time personally online and watching cable news et c. than any prior president, I don’t think there’s any way they’re all by him.
I do think a lot of the more hot-take type posts (often in response to stuff he’s watching on tv) are either actually him, or he’s dictating to an aide. These larger policy-type ones that he treats as quasi-executive-orders, I think are likely drafted by one or more of his cabinet-level folks, or others roughly as high up. That’s just my speculation based on reading the “tea leaves”, though.
As for official word, it waffles between “all of it’s him” and “oh not that one though, that racist video repost was a staffer who made a mistake”, so that’s little help in sussing out the truth (but I am rather certain they’re not all directly written and posted by him)
Old enough to remember when the likes of A16Z said they had to support Trump because the Biden admin was being too meddlesome in the tech industry.
Sometimes it pays to think even two steps ahead of your most immediate thought…
Such a dipshit administration. I hope California secedes from the union to protect our champions.
we are experiencing marketing at its best
Presumably Trump will be returning his $90 million in lawsuit booty now that it's been decided you cannot say no to the government right? Heck he dodged the draft 5 times.
I don't know if we should be terrified by Hegseth's response, or relieved that the government doesn't just shrug and lie over privately agreed upon terms.
I can't wait to read the transcript of the AUSA in front of a federal judge trying to explain threatening to declare a company a supply chain risk if the company doesn't supply things to the government.
An earlier post to a news article rather than to a tweet: https://news.ycombinator.com/item?id=47186662
That news article doesn't mention the designation of Anthropic as a supply chain risk (it was published about 20 minutes before Hegseth's tweet)
David Sacks
This is going to have two unintended consequences.
One, it’s going to fuck with the AI fundraising market. That includes for IPO. If Trump can do this to Anthropic, a Dem President will do it to xAI. We have no idea where the contagion stops.
Two, Anthropic will win in the long run. In corporate America. Overseas. And with consumers. And, I suspect, with investors.
> In corporate America
A lot of corporate America contracts for the military in some capacity (it's a giant piggy bank and if you jump through a few hoops you get to siphon money out of it, so of course they do) and assuming this Tweet is accurate (Jesus, what a world) this will also affect them.
IDK maybe they have corporate structures that avoid letting this kind of thing mess too badly with the parts of their company that don't have contact with the government, or maybe it'll only apply to specifically the work they do for the government, but otherwise I expect it'll be devastating for Anthropic's B2B effort.
> lot of corporate America contracts for the military in some capacity
And a lot does not, or does so through dedicated subsidiaries so they can work multinationally.
What percentage of their revenue comes from the government?
> If Trump can do this to Anthropic, a Dem President will do it to xAI. We have no idea where the contagion stops.
Will the next Democratic President do it to xAI? On what grounds?
The Biden admin negotiated a contract with a supplier with terms which are – to the best of my knowledge – rather unprecedented – do Pentagon contracts normally have terms like this, restricting the government's use of the supplied good or service? Do missile or plane contracts with Boeing or Lockheed Martin contain restrictions on what kind of operations that hardware will be used in? I don't think that's the norm. So the next administration tears up a contract made by the previous admin with unusual terms – nothing unexpected about that. The "hardball" of declaring them a "supply chain risk" is escalating this dispute to a never-before-seen level, but the underlying action of cancelling the contract isn't. I honestly suspect the "supply chain risk" aspect will be suspended by the courts, and/or heavily watered down in the implementation; but the act of cancelling the contract in itself seems legally airtight.
Next Democratic administration inherits a contract with xAI (and quite possibly OpenAI and/or Google too) – with presumably standard terms. I can totally understand the political desire for vengeance. But what's the actual legal justification for it? Facially, the current administration has a politically neutral justification for what they are doing, even if some suspect there is some deeper political motivation. Will the next Democratic administration have such a facial justification for doing the same to xAI?
Plus, Democrats always sell themselves on "we obey norms". They have the structural disadvantage that either they keep their word on that, and can't do the same things back, or they break their word, and risk losing the people who supported them based on that word.
> Will the next Democratic President do it to xAI? On what grounds?
Elon being affiliated with Trump. About the strength of logic that makes Dario woke.
> don't think that's the norm
Norms are different from law or contract. And yes, lots of service providers limit where their civilians can be deployed and under what circumstances.
> can totally understand the political desire for vengeance. But what's the actual legal justification for it?
President has core Constitutional control of the military.
> Democrats always sell themselves on "we obey norms"
That hasn't worked. The American electorate is looking for change. And up-and-coming Democrats are picking up on that.
> risk losing the people who supported them based on that word
The Democrat base absolutely wants vengeance. It doesn't play in swing states. But it probably also doesn't hurt. These are court politics, at the end of the day.
> Elon being affiliated with Trump. About the strength of logic that makes Dario woke.
I think you have to distinguish between the official justification and some of the associated political rhetoric.
Official justification: "Previous admin agreed contract with unprecedented terms, we demand those terms be removed, vendor is refusing to renegotiate"
Political rhetoric: "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!"
If you forget about the political framing, and look at the official justification in the abstract, it doesn't actually seem facially unreasonable. The escalation to "supply chain risk" is a different story, but the core contract dispute and cancelling the contract as a result of it isn't.
So the question is, can Democrats come up with an equivalent abstract official justification–if so, what will it be? Or do they decide they don't even need that–in which case they aren't just matching Trump, they are going even further down the road to normlessness than he's gone.
> And yes, lots of service providers limit where their civilians can be deployed and under what circumstances.
There's a big difference between contracts for boots-on-the-ground and contracts for hardware/software. There is lots of precedent for contractual limitations on how boots-on-the-ground can be used. I'm not aware of similar precedent for hardware or software.
> That hasn't worked. The American electorate is looking for change. And up-and-coming Democrats are picking up on that.
Are they? Gavin Newsom? Zohran Mamdani? AOC? Do they actually sell themselves as "we see Trump breaking the rules, and we'll break them just as hard, even moreso"?
> The Democrat base absolutely wants vengeance. It doesn't play in swing states. But it probably also doesn't hurt.
It is too early to tell. You can argue in the abstract that X approximately equals Y, so if swing voters will tolerate the GOP doing X, they'll also tolerate Democrats doing Y – but the actual swing voters might not agree with you on that.
I'd at least, you know, pretend we had a top-secret amazing model. By airing all of this publicly, they've basically admitted that Claude is the best there is.
I think an important point to consider is that the administration's demands for domestic deployment and automation of homicide are not so much due to a lack of technical ability or personnel resources to achieve sought-for military-strategic outcomes, but an unwillingness for anyone in the administration to take on the responsibility for those decisions.
If an employee of the government makes a decision that subsequently turns out to be very very unpopular, that unpopularity is sooner or later going to coalesce and land on them, and the more unpopular it turns out to be the less of a shield legal arguments about immunity or pardons will be because so many people are increasingly out of patience with a system they deem to be corrupt. Being able to offload the political, legal, and personal risks of extremely consequential decisions onto The Bad Computer System is the political equivalent of crack cocaine - you might know that the feeling of freedom and power it provides is wholly illusory, you might know that it's likely to ruin your own and many other lives, you might know that it's a disaster for the health of the body politic...but it also offers the possibility that you can have an absolute blast and get away with it.
My anecdotal experience of being around wealthy and powerful people over the years inclines me to think that not only do our social systems select in favor of people who take big risks for big rewards, but that virtually everyone in that class has a) done a lot of getting away with things legally speaking and b) enjoys using illegal drugs. Even if they've given up recreational drug taking or limit it to strictly defined times and places so as not to interfere with their business/personal success, they like thrills and have confidence about their ability to enjoy them without negative consequences. You need some of that risk-taking, high personal autonomy attitude if you aspire to be a mover and shaker as opposed to a leading figure in risk management or regulatory compliance.
Everyone enjoys the feeling of power without responsibility; it's a fundamental underpinning of games and many other kinds of recreation. Add in significant amounts of money and people think differently about risk, as in the topical case of the experienced Supreme Court litigator who turned out to have have a secret life as a high-stakes poker gambler and eventually started betting against the IRS while filing his taxes (https://www.politico.com/news/2026/02/25/supreme-court-litig...).
Now, if you're in the political-military sphere and you get your thrills by literally redrawing lines and relationships on the map of the world and deciding what the news on TV is going to be for the next day/week/month/year, and you get offered a tool that promises to give a significant edge over other players in this game but which also gives you a versatile and widely accepted excuse for avoiding consequences for the inevitable losing hands, there are massively compelling psychological incentives for using it. And correspondingly, there's going to be massive emotional disruption (and bad decision-making and behavior) if your supply is threatened. You might start labeling the people who are interfering with your good time as cognito-terrorists and telling all your friends and supporters that your formerly trustworthy supplier did you dirty...
Does anyone believe he's correct? That is, not lying? That is, abusing the office, violating his oath?
If we don't impeach for this, we might as well surrender to MAGA.
Finally silicon valley is being shown who they sucked up to.
when do they go to court?
it's so funny to me that anthropic was created specifically using the virtue signaling line of defensive safety against bad actors (ie the woo woo bad guy of chinese dictatorship), yet the real danger was always coming from inside the house - your own government being an absolute evil clusterfuck.
Hegseth's had a busy week: trying to kill Anthropic, attending the State of the Union, fighting Scouting America, and his regularly scheduled efforts to shame fatties & trans kids... Unlike so many in the orange one's inner circle who are just incompetent (say, Kash Patel for one), this dude is both incompentent a very bad, bad person.
Ironic. This makes me want to quit ChatGPT in favor of Claude because fuck this administration.
Can we get a list of companies with this designation so I can migrate my subscriptions to them?
Pathetic posturing. Also, does this read ECACTLY like an Andor script to anybody else!?
"I am altering the deal. Pray, I do not alter it further." - a scary evil dude.
let's see...
> Populist nationalism + “infallible” redemptive leader cult
> Scapegoated “enemies”; imprison/murder opposition/minority leaders
> Supremacy of military / paramilitarism; glorify violence as redemptive
> Obsession with national security / nation under attack
TBH could be worse.
Please tell me when their fifteen minutes is over. It is one bad joke after another.
i think this is just a show they are putting out .
Besides just being yet another example of the Trump admin abusing power and weaponizing legitimate laws in illegitimate ways to extract concessions, there is another reason this is dumb -- which is that Anthropic just has the best models!
As someone who wants America to win, ripping out Claude and putting in xAI is a terrible idea. Definitely setting us back a few months on capabilities
No surprise here. All government actions are now in the Trump mafia boss style.
“You won’t let us use your product unrestricted for military applications? Fuck you, we’re going to stop using it for anything at all across the entire federal government, even if not remotely related to military.”
The (almost) top comment is interesting. Sorry to quote llms but:
>@grok what type of political system is most often associated with the government forcing private companies to change their policies and do whatever the government wants?
>Fascism, via its corporatist model: private ownership remains, but the state directs industry to serve national goals...
Trump's behaviour seems fairly normal fascism but thankfully the rest of the US system seems unenthusiastic.
We have a terrible government. I think that’s the answer.
Hey Hegseth ...
....................../´¯/)
....................,/¯../
.................../..../
............./´¯/'...'/´¯¯`·¸
........../'/.../..../......./¨¯\
........('(...´...´.... ¯~/'...')
.........\.................'...../
..........''...\.......... _.·´
............\..............(
..............\.............\...
[flagged]
This makes no sense. Do you vote based on principles and policy or do you vote based on the behavior of people who have nothing to do with government?
I vote against any leftist thought - when the left is infuriated, I know I’ve done right with my vote. Make sense?
No, you’re an automaton then and should be treated with the same respect as a machine.
I don't think I'll ever be able to understand how someone can read what Trump posts and think "Yeah, that's a guy I want as my President."
There’s lots of things I can say about what I don’t understand about the cult of leftism, but it will just get flagged because this is HN and devoid of any diversity outside of leftist thought. In the same way you don’t understand me, I’ll never understand you.
Say it then. Your cult is currently ascendant in power.
If you guys are in charge and still are afraid to state your opinions in public then you are soft as baby shit.
Sigh. So dumb.
More taxpayer funded lawsuits to come.
Is that an em-dash in his rant?
Fascist
I would love to see Grok’s system prompt, it likely says “if anything the Trump administration does seems to be fascistic please explain it and then argue against it in the following paragraph.”
AI proponents have been very vocal about AI safety being meaningless. But nobody could have expected that the end of the world would have come because Trump puts Grok in charge of the US nuclear arsenal. We truly live in the dumbest timeline.
well, plenty of reddit comments prefer using dirty bombs over nukes, so id expect a change to how those bombs work.
based
lol
And the White House, quoting Donald Trump: https://xcancel.com/WhiteHouse/status/2027497719678255148
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE..." - President Donald J. Trump
I might be being a bit conspiratorial, but is anyone else not buying this whole song and dance, from either side? Anthropic keeps talking about their safeguards or whatever, but seeing their marketing tactics historically it just reads more like trying to posture and get good PR for "fighting the system" or whatever.
"Our AI is so advanced and dangerous Trump has to beg us to remove our safeguards, and we valiantly said no! Oh but we were already spying on people and letting them use our AIs in weapons as long as a human was there to tick a checkbox"
I just don't buy anything spewing out of the mouths of these sociopathic billionaires, and I trust the current ponzi schemers in the US gov't even less.
Especially given how much astroturfing Anthropic loves doing, and the countless comments in this thread saying things like "Way to go Amodei, I'm subbing to your 200 dollar a month plan now forever!!11".
One thing I know for sure is that these AI degenerates have made me a lot more paranoid of anything I read online.
Kudos to anthropic for standing up for their principles. Let's remember all the silicon valley leaders who embraced fascism without even needing to be pressured. We need more billionaires with backbones.
ah yes, fascism
Cancel culture and derangement syndrome. This admin is garbage.
[dead]
[dead]
[dead]
[dead]
tl;dr: All within the state, nothing outside the state, nothing against the state.
[flagged]
[flagged]
Defense contracting makes you rich and lazy. In the long run it is rare to see companies get sucked into defense contracting and stay relevant/on the cutting edge. We look at fighters and warships and think WOW! But the reality is that they are pretty far behind where they would actually be if there was a civilian purpose to them that mattered.
It’s not the defense contracts to Anthropic that hurt. It’s not being able to do business with anyone who does business with the DOD that hurts.
[flagged]
This is why when ceos get summoned to testify they are always neutered and hat-in-hand humble. It’s trivial for the us gov to destroy any business unless you reach too big to fail status. Anthropic nor OpenAI is too big too fail yet.
Surveil not protect
Unfortunately their models suck, though. The difference between the best Grok model and Opus 4.6 is night and day, and not only for coding, but entirely across-the-board.
What does xAI's future as a defense contractor AI company look like after the 2028 presidential election?
[flagged]
There was already a Democrat that beat Trump once. And looking at the past elections, it looks like the US elections are currently in a pendulum where the balance of power just swings back and forth.
Yes but you are not suggesting Biden runs again? I meant now, who looks like they could beat the Trump machine, possibly Gavin Newsom but not popular outside of Cali.
Surely you can appreciate that Biden was an abnormally weak candidate (how many times did he try to win on his own merits, only to just squeak in on a tide of anti-Trump voters?). Pretty much anyone will be able to beat the GOP candidate at the next election. And it will likely be the biggest landslide since Reagan. Only MAGA thinks they are popular right now, but back in the real world they are deeply, deeply unpopular. And you know they are going to double down and make it even worse over the next couple years.
No one thought Biden could beat Trump the first time. No one thought Trump could beat Clinton. No one thought Obama could win the primaries.
Things happen.
I don’t know what will happen, but it still could work out to benefit Anthropic. I believe the public sentiment is OVERWHELMINGLY with Anthropic on this one. Both their stance and standing up to Trump bullies.
[flagged]
[flagged]
[flagged]
[flagged]
This comment does not hold up to scrutiny.
Appealing to the pragmatic and the "game theory" of complying with authoritarian rule that you don't have power over - because the other party that you don't have any power over will benefit from it - is a zero-sum argument.
Procurement decisions are not authoritarian rule. A government agency deciding that a vendor doesn't meet its operational requirements and setting a timeline to transition off that vendor is one of the most ordinary functions of institutional management. Every organization, public or private, does this. Authoritarian rule involves the coercive suppression of rights or autonomy. Choosing not to renew a contract with a provider who has voluntarily excluded itself from your use case is the opposite of coercion; it's respecting that provider's choice and acting accordingly.
The "zero-sum" label is equally off-base. Zero-sum describes a situation where one party's gain is necessarily another's loss, and that is precisely the nature of military capability competition. If an adversary fields unrestricted AI systems and you field restricted ones, the gap is real and the consequences are asymmetric. You don't have to like that reality, but calling it a zero-sum argument as though it's a rhetorical trick misidentifies what's actually a structural condition. The term you seem to be reaching for is something closer to "fear-based reasoning" or "false dilemma," but neither of those applies cleanly here either, because the competitive dynamic being described is well-documented and not hypothetical.
If there's a genuine objection to be made, and there may well be, it has to engage with the specifics: whether the restrictions in question actually matter operationally, whether the transition plan is proportionate, whether the policy creates worse risks than it solves. That's where the real debate is.
[edit:typos]
Hegseth gets so belligerent when he's hammered.
As best I can tell, his hard-drinking era ended many years before he entered the cabinet. But this does feel like a pretty impulsive decision, and there's some ambiguity over whether this statement was approved by the WH, or whether this was just the SECDEF taking it to the next level to look super loyal and badass. This ambiguity gives the WH room to walk it back in the coming weeks, depending on how things evolve.
I can honestly understand both positions. The U.S. military must be able to use technology as it sees fit; it cannot allow private companies to control the use of military equipment. Anthropic must prevent a future where AIs make autonomous life and death decisions without humans in the loop. Living in that future is completely untenable.
What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots is something U.S. government has interest in preventing.
> it cannot allow private companies to control the use of military equipment.
The big difference here is that Claude is not military equipment. It's a public, general purpose model. The terms of use/service were part of the contract with the DoD. The DoD is trying to forcibly alter the deal, and Anthropic is 100% in the clear to say "no, a contract is a contract, suck it up buttercup."
We aren't talking about Lockheed here making an F-35 and then telling the DoD "oh, but you can't use our very obvious weapon to kill people."
> Surely autonomous murderous robots is something U.S. government has interest in preventing
After this fiasco, obviously not. It's quite clear the DoD most definitely wants autonomous murder robots, and also wants mass domestic surveillance.
So what your saying is it should be removed from the military supply chain?
i dont think any of the big ai companies or any of the sota models should be in a kill chain
i as a foreign citizen get to have hard to detect influence over the model because it scraped tons and tons of my internet comments.
if youre going to have a supply chain, it needs to include where the trainjng data is sourced from and who can contribute to it
No, he's saying if this was such a big deal why did they sign up in the first place?
Because the current government wants unquestioning obedience, not a discussion (assuming they were capable of that level of nuanced thought in the first place). The position of this government is "just do what I say or I will hit you with the first stick that comes to hand".
A vendor doesn't want to do something you need, you find another vendor (there are others).
This is just petty.
If the government doesn't want to sign a deal on Anthropic's terms, they can just not sign the deal. Abusing their powers to try to kill Anthropic's ability to do business with other companies is 10000% bullshit.
I can see both sides as pertains to Trump's initial decision to stop working with Claude, but now, this over-the-top "supply chain risk" designation from Hegseth is something else. It's hard to square it with any real principle that I've seen the admin articulate.
> What I don’t understand is why the two parties couldn’t reach agreement.
Someday we'll have to elect a POTUS who is known for his negotiation and dealmaking skills.
> What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots is something U.S. government has interest in preventing.
Consider the government. It’s Hegseth making this decision, and he considers the US military’s adherence to law to be a risk to his plans.
I am fine with this. If you are a defense contractor, you are a defense contractor, and you follow the military needs that you government believes are necessary - or you stop being a defense contractor.
I wouldn't want a bullet manufacturer to hold back on my government based on their own internal sense of ethics (whether I agreed with it or not, it's not their place)
You're fine with a company being designated a supply chain risk, a designation heretofore used exclusively for foreign adversaries and usually a death knell for most companies, because the government wants to break a negotiated terms of service and contract that they already accepted?
The fuck?
Everyone is getting wrapped around the axel here but this is about the big picture, not the specifics. A private company should not have the ability to dictate how its technology is used by the government. If they can’t agree to that, then don’t sell your technology to the government. Personally, I don’t want to be spied on by the government with it (I don’t think their tech does that) but I also don’t want Anthropic having operational control over a mission.
That's exactly what is happening... Anthropic are choosing not to sell their technology to the government. I'm not sure what you're suggesting otherwise here.