I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].
I think many people on HN have a cynical reaction to Anthropic's actions due to their own lived experiences with tech companies. Sometimes, that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.
I do not think this is a calculated ploy that's driven by making money. I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.
So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.
However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.
Yeah, the alternative is to be OK with their product being used for surveillance.
Not sure why it's controversial that they said no, regardless of the reasoning. Yeah there's a lot of marketing speak and things to cover their asses. Let's call them out on that later. Right now let's applaud them for doing the right thing.
FWIW I do not think they are the "good guys" (if I had a dollar for every company that had a policy of not being evil...). But they are certainly not siding with the bad guys here.
> Let's call them out on that later. Right now let's applaud them for doing the right thing.
Yes, yes, yes. When I first read the stuff about this yesterday, my immediate thought was "wait, these are the only two things they have a problem with?"
But they made a stand, and that still matters. We shouldn't let the perfect be the enemy of the good. At least it's not Grok.
For if you don't, the next step is cynicism maximally operationalized: what, you're not doing game-playing/political BS to get ahead? What are you, a chump? An idiot?
That kind of stuff spreads like wildfire, making corporate America ... something else, to put it politely.
Doing the right thing has cost me big time here and there. I don't care. Simultaneously, orgs are not all bad; that's another distortion we can do without.
I think it's definitely true that you should never count on a company to do principled things forever. But that doesn't mean that nothing is real or good.
Like Google's support for the open web: They very sincerely did support it, they did a lot of good things for it. And then later, they decided that they didn't care as much. It was wrong to put your faith in them forever, but also wrong to treat that earlier sincerity as lies.
In this case, Anthropic was doing a good thing, and they got punished for it, and if you agree with their stand, you should take their side.
Google's support for the open web is a great example because it was obviously a good thing but also obviously built into their business model that they'd take that position. That made them a much more trustworthy company in those days, because abandoning that position would have required not just losing money for a while but changing their internal structure.
Many of us remember that OpenAI was also started by people with strong personal values. Their charter said that they would not monetize after reaching AGI, their fiduciary duty is to humanity, and the non-profit board would curtail the ambitions of the for-profit incentives. Was this not also believed by a sizeable portion of the employees there at the time? And what is left of these values after the financial incentives grew?
The market forces from the huge economic upside of AI devalue individual values in two ways. First, they reward those who choose whatever accelerates AI the most over individuals who are more careful and act on their own values - the latter simply lose power in the long run until their virtue has no influence. As Anthropic says in its mission statements, it is not of much use to humanity to be virtuous if you are irrelevant. Second, as is true for many technologies, economic prosperity is deeply linked to human welfare, and slowing or limiting progress leads to real, immediate harm to the human population. Thus any government regulation against AI progress will always be unpopular, because the values warning of future harm from AI are fighting against the values of saving people from disease and starvation today.
> However, in this instance, it does seem that Anthropic is walking away from money.
The supply chain risk designation will be overturned in court, and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers. Not to mention that giving in would mean they lose lots of their employees who would refuse to work under those terms. In this case, the principles are less than free.
> ...the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.
In fact, a friend heard about this and immediately signed up for a $200/year Claude Pro plan. This is someone who has been only a very occasional user of ChatGPT and never used Claude before.
I told my friend "You could just sign up for the free plan and upgrade after you try it out."
"No, I want to send them this tangible message of support right now!"
The consumer goodwill is working then - it pushed me to upgrade my plan on March 1st...
(do they bill on rolling 30 day cycle ? or calendar-month to calendar-month?)
> The supply chain risk designation will be overturned in court,
I'm honestly uncertain how the courts will rule. You could be right, but it isn't guaranteed. I think a judicial narrowing of it is more likely than a complete overturn.
OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer. I don't think Hegseth actually wants to put them in that position – he probably honestly doesn't realise that's what he's potentially doing. In any event, Microsoft/AWS/etc's lobbyists will talk him out of it.
And the more the government waters it down, the greater the likelihood the courts will ultimately uphold it.
> and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.
Maybe. The problem is B2B/enterprise is arguably a much bigger market than B2C. And the US federal contracting ban may have a chilling effect on B2B firms who also do business with the federal government, who may worry that their use of Claude might have some negative impact on their ability to win US federal deals, and may view OpenAI/xAI (and maybe Google too) as safer options.
I guess the issue is nobody yet knows exactly how wide or narrow the US government is going to interpret their "ban on Anthropic". And even if they decide to interpret it relatively narrowly, there is always the risk they might shift to a broader reading in the future. Possibly, some of Anthropic's competitors may end up quietly lobbying behind the scenes for the Trump admin to adopt broader readings of it.
> OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer.
A tweet does not have the force of law. Being designated a supply chain risk does not mean that companies who do business with the government cannot do business with Anthropic. Hegseth just has the law wrong. The government does not have the power to prevent companies from doing business with Anthropic.
The issue is, even if the Trump admin is misrepresenting what the law actually says, federal contractors may decide it is safer to comply with the administration's reading. The risk is the administration may use their reading to reject a bid. And even if they could potentially challenge that in court and win, they may decide the cheaper and less risky option is to choose OpenAI (or whoever) instead.
They would have a very good case against the government if that were to happen. I suspect that the supply chain risk designation will not last long (if it goes into effect).
Some vendors will decide to sue the government. Others may decide that switching to another LLM supplier is cheaper and lower risk.
And I'm not sure your confidence in how the courts will rule is justified. Learning Resources Inc v Trump (the IEEPA tariffs case) proves the SCOTUS conservatives – or at least a large enough subset of them to join with the liberals to produce a majority – are willing sometimes to push back on Trump. Yet there are plenty of other cases in which they've let him have his way. Are you sure you know how they'll judge this case?
> Are you sure you know how they'll judge this case?
I'm not even sure it will get that far. There are a million different ways this could go that mean it won't ever come before the Supreme Court. The designation isn't even in effect yet.
I do think if it goes into effect it will eventually be overturned (by the Supreme Court or otherwise). There just isn't a serious argument that they qualify as a supply chain risk, and there is no precedent for it.
That's what worries me so much about the development that OpenAI is stepping in. OpenAI's claim is that they have the same principles as Anthropic, but that claim is easy because it's free right now - the govt wants to sell the "old bad, new good" story.
Surely OpenAI cannot but notice that those values, held firmly, make you an enemy of the state?
My reading is that OpenAI is paying lip service. Altman is basically saying "OF COURSE we don't want to spy on Americans or murderdrone randos, but OF COURSE the government would never do that, they just told me so (except for the fact that they just cut ties with Anthropic because Anthropic wouldn't let them do that)"
It's much simpler than that. OpenAI is losing significant market share, and this is a Hail Mary hoping the government will force troves of companies to leave Anthropic.
Indeed. If everything is a priority, nothing is a priority; you only know that something is a real priority when you get an answer to the question "what will you sacrifice for this".
I call this being ethically convenient. I think Anthropic is playing to the crowd. This admin will be gone soon enough, so no need to drag the brand into the mud - they just need to hold out. They have enough money that walking away from this money isn't impressive. But pissing off the gov is pretty fun and far more interesting.
Anthropic's principles are extraordinarily weak from an absolute point of view.
Don't surveil the US populace? Don't automate killing, make sure a human is in the loop? No, sorry: don't automate killing yet.
Yeah dude, I'm sure just about any burglar I pull out of prison will agree.
Listen, yes, it's good compared to like 99% of US companies. But that really speaks more to the absolute moral bankruptcy of most companies, and not to Anthropic's principles.
That being said, yes we should applaud anthropic. Because yes this is rare and yes this is a step in the right direction. I just think we all need to acknowledge where we are right now, which is... not a good place.
You applaud Anthropic's choice to enhance mass surveillance of non-US people? If Anthropic wants mass surveillance, it should limit it to its own country, not extend it to all other countries, IMO.
The funniest, or perhaps saddest (depending on your view), part is that the "principles" we're talking about and apparently celebrating here are that they don't want to do DOMESTIC surveillance, and they don't want FULLY autonomous kill bots... yet - because, according to the CEO, the models aren't there yet.
Meaning, they're a-okay with:
- Mass surveillance of non-US peoples (and let's be completely real here, they're in bed with Palantir already, so they're obviously okay with mass surveillance of everyone as long as they're not the ones that will be held culpable)
- Autonomous murder bots. For now they want a human in the loop to rubberstamp things, but eventually "when the models improve" enough, they're just fine and dandy with their AIs being used as autonomous weapons.
What the fuck are the principles we're talking about here? Why are they being celebrated for this psychotic viewpoint, exactly?
> I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.
The entire problem is that this lasts as long as those people are in charge. Every incentive is aligned with eventually replacing them with people who don’t share those values, or eventually Anthropic will be out-competed by people who have no hesitation to put profit before principle.
I mean, yeah. How else could it be? Xerox, GE, IBM (1990s, pre-Gerstner) and a zillion other rock stars fell hard and had to be overhauled. That's why continuous improvement is a thing, and why a platonic take on the world was never a thing.
If you're going to be cynical, at least credit them with some brains:
MAGA isn't going to last forever, and when it collapses, the ones who publicly stood up to it will be better positioned to, I don't know, not face massive legal problems under whatever administration comes next. A government elected by middle-aged moms who use "Fuck ICE" as a friendly greeting isn't going to have any incentives to go easy on Palantir and Tesla.
Cynical or not, I think it was an absolutely brilliant move: "Mass domestic surveillance of Americans constitutes a violation of fundamental rights". I think they placed their bets on Sama signing a contract with the DoD and here we are, one day later the news that OpenAI signed a contract is out. An absolute PR disaster for OpenAI. And an absolute PR victory for Anthropic.
I think OpenAI's IPO will be interesting. Not even the conservative media will be happy about mass surveillance of Americans.
For non-Americans not much change, they don't really care about your rights more than about a pile of dog poo.
Would the people who have invested in the company like that? Or would they like the company to make some money? Are they going to piss off their investors by being "driven by values"?
I mean, please explain to me how "driven by values" can be done when you are riding investor money. Or maybe I am wrong and this company does not take investments.
So in the end you are either
1. funding yourselves, then you are in control, so there is at least a justification for someone to believe you when you say that the company is "driven by values".
2. Or have taken investments, then you are NOT in control, then anyone who trusts you when you say the company is "driven by values", is plain stupid.
In other words, when you start taking investment, you forego your right to claim to be virtuous.
The only claim that you can expect anyone to believe is "MY COMPANY WILL MAKE A TRUCKLOAD OF MONEY !!!!"
As an investor in Anthropic, I'd say that anyone who wasn't aware of where they stood on various values issues the whole time should not have been putting money in, it was not hidden.
How much is your investment (you don't have to be exact)?
The bottom line is that if the investment is not profitable, there will be less and less investment, because fewer and fewer can afford to lose money and stick to their values, until no one is investing at all - however high your values might be...
Sticking to your values when it costs growth is not sustainable for publicly traded companies...
Anthropic is a public benefit corporation. Investors who put money in knew this. It's in the corporate charter. The corporate charter is a public document.
Fiduciary duty means the board and officers must act in accordance with the governing documents of the corporation.
Even a regular corporation doesn't need to be just for the purpose of "money goes up". The board has discretion on how they create value.
Why did they work with Palantir then, which is the integrator in the DoD? It does not take a genius to figure out where this was going.
I don't know why a personal testimony to the effect that "these are the good guys" needs to be at the top of every Anthropic thread. With respect to astroturfing and stealth marketing they are clearly the bad guys.
Anthropic's stance is "we believe in the use of our tools, with safeguards, to assist the defense of the US".
So of course they would work with Palantir to deploy those tools.
The issue we're seeing is because the DoW decided they no longer like the "with safeguards" part of the above and is trying to force Anthropic to remove them.
This they say they don’t like. The qualifiers tell you they’re totally fine with mass surveillance of Palestinians, or anyone else really, otherwise they could have said “mass surveillance”.
> fully autonomous weapons
And they’re pretty obviously fine with killing machines using their AI as long as they’re not fully autonomous (at the moment, they say the tech is not there yet).
All things considered they’re still a bit better than their competitors, I suppose.
It doesn’t seem like anybody has addressed “If they are the good guys with principles why did they work with Palantir?”
There’s a comment that’s sort of handwaving and saying “because America”, but I would imagine that someone with direct knowledge of the people involved would have something more substantive than “thems the breaks” when it comes to working with Palantir
Anthropic makes it kind of clear in all of their statements that they are not opposed to working with the surveillance state, with the military industrial complex, etc. Their central philosophy, it seems, is not incongruent with working with entities, public or private, that can be construed as imperialist or capitalistic or a combination of both. I actually appreciate their honesty here.
They exist within the regime of capital and imperialism that all of us who are American citizens exist within. This isn't a cop-out or cope. It's just the reality of the world that we live in. If you are an American and somehow above it, let me know how you live.
God has been used as a justification for a lot of human suffering.
My personal belief is that the closer to god you are, the more easily you can justify evil. How could you not? If my entire belief system is derived from faith, then there are *no* conclusions I could not come to, and therefore anything can be justified.
Practically the entire tech industry, including many of the higher ups currently camping out on the right, used to be firmly in a sort of centrist-with-social-justice-characteristics camp. Then many of those same people enthusiastically stood with Trump at his inauguration. It's completely reasonable that people have their doubts now.
It's also completely reasonable to expect that if Anthropic is the real deal and opposed to where the current agenda setters want to take things, they'll be destroyed for it.
I think "enthusiastically" looks different. They had to choose between kissing Trumps butt to make good business for 4 years or see their companies at a severe disadvantage. I'm not saying what they did was good, nor do I support it. But from a business angle it's not hard to see why they chose to do that. If you'd ask them privately off the record then I'm sure most of them would tell you that Trump is an idiot and dangerous.
Mark Zuckerberg was in a big hurry to call Trump a "badass" in the wake of the Butler hoax, and is clearly trying to appeal to the right with his cultivated jiu jitsu Chad image. It doesn't mean a damn thing what these CEOs are willing to say behind closed doors when their public decisions are to remain in lockstep with the agenda and fire anyone who asks questions about whether it's the right one.
This is a pretty classic mistake most people who are in high-profile companies make. They think that some degree of appealing to people who were their erstwhile opponents will win them allies. But modern popular ethics are the Grim Trigger and the Copenhagen Interpretation of Ethics. You cannot pass the purity test. One might even speculate that passing the purity test wouldn't do anything to get you acceptance.
Personally, I wish that the political alignment I favour was as Big Tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants. But it just so happens that the other side isn't so. So such is life. We lose and our allies dwindle since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.
Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.
> political alignment I favour was as Big Tent as Donald Trump's administration is
I'm not sure how accurate this sentiment is. Your desire is to embrace your enemy without resolving the differences, and get what you want. It's not clear here if you're advocating compromise and negotiation, or just embracing for the sake of embracing while just doing what you wanted all along.
And evaluating Trump's actions against this sentiment doesn't seem to be the negotiation and compromise interpretation. Given the situation with tariffs and ICE enforcement, there is no indication of negotiation or compromise other than complete fealty/domination.
So as grandiose and noble as your sentiment is, Donald Trump is hardly the epitome of it, as you seem to suggest.
I think the differences in this situation were that I do not want AI used in domestic surveillance or autonomous weapons, and Anthropic holds to that position.
I think Donald Trump has pretty much let Zohran Mamdani operate without applying the kind of political pressure he has applied to other people, notably his predecessor Eric Adams. Also, I think saying "let people be your allies when they take your position" is less "grandiose and noble" than demanding someone agree on all counts before you will accept any political alignment. But it's fine if everyone else disagrees. It's possible there really just isn't a political group which will accept my views and while that's unfortunate because it means I can't get all that I want, I think it'll be okay.
One could reasonably argue that the meta-position is to either join the Republicans full-bore (somewhat unavailable to me) or to at least play the purity test game solely because that's the only way to have any influence on outcomes. If it comes to that, I'll do it.
You are making a mistake in thinking that Trump thinks of these things in political terms. Trump sees a charismatic and popular politician and he wants to associate with them on that basis alone, because Trump has a deep psychological need to be liked. Mamdani understands his psychology and is able to exploit it well by playing his own attributes to his advantage.
Politically, it's not like Trump tolerates dissent within the Republican party, he constantly threatens and berates anyone who shows defiance into submission. It's precisely because Mamdani is not in his tent and not really much of a threat to his power that he is willing to deal with him that way.
I don't understand, your position is the same as Anthropic, yet you disagree with their stance?
And I wouldn't take the case of Trump and Mamdani as the exemplar of Trump's overall behavior towards opponents. The amount of evidence to the contrary is overwhelming.
Anthropic's adherence to their stated principles was never tested and their willingness to work with DoD made it seem like they didn't stand by them strongly so I wasn't happy with that. This action shows that they are willing to lose big contracts in order to stand by their stated principles. I like that.
In any case, I think I've said all there is for me to say on the subject and everyone seems to disagree. I'll take the hint.
> he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants.
Is your perception that warped? Mamdani is the one who knows how to play Trump as a fiddle, and the one who walks away with something from the exchange.
I think you are extrapolating a bit too far from an outlier data point. Trump has had several other meetings (eg. Zelenskyy) go sideways for no apparent reason.
Yes, the Trump administration is big tent of politicians who hold incompatible opinions and are allowed to stay as long as they display personal allegiance to Trump.
“I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.”
Every single one of these CEOs happily pirated unimaginable amounts of copyrighted content. That directly hurt millions of real human beings. Not just the prior creators but also crushing the future potential for success of future ones.
Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.
It’s not just admirable, it’s the obvious position to take, and any alternative is head-scratching.
It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.
If it's a loyalty test then you'd think the DoD would be willing to let them "fail" and simply drop the contract, but instead they're threatening to label Anthropic a supply chain risk.
If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.
This administration has repeatedly shown it will bully or take an outrageous negotiating position just to gain fealty. Whether they get anything, or whether the dispute is actually about what the label says, should always be treated with skepticism, especially these days with social media information wars. That’s the benefit of realpolitik when you’re a superpower: you often don’t actually need anything, you can just make an example of people to keep the flock in check.
It seems like they'd have a stronger negotiating position if they had an alternative contractor waiting in the wings before they accused Anthropic of being woke traitors, as opposed to a threat to migrate away over the next 6 months.
But again, the sophistication of their strategery might also have a negative correlation with Hegseth's BAC.
No one accused them of being competent negotiators. Remember, the secret behind the "Art of the Deal" is to be obstinate and abusive until everyone settles just to stop dealing with you.
They're not threatening to do that. They just did. Read the tweet linked in the article.
> In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. https://x.com/SecWar/status/2027507717469049070?s=20
This has never happened before. It just goes to show how overextended the USG is these days. America is broke. Anthropic is about to IPO. Most stock market money comes from foreign countries like Japan these days. All those people are going to trust Anthropic more if they believe the company is neutral among nations and acting as a check and balance to power.
U.S. authorities labeled them a supply chain risk. The military went on Twitter and basically labeled Anthropic an enemy of the state. The most popular company on Earth. They did that. If USG was able to issue some kind of secret court order compelling them to act and keep it covert then they would have done it.
> If it's a loyalty test then you'd think the DoD would be willing to let them "fail" and simply drop the contract, but instead they're threatening to label Anthropic a supply chain risk.
It is not just a test, it is PR of sorts. They want to bully everyone into loyalty.
> If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.
If we're going by Occam's razor, then we should cut away the drinks. The USSR started its terror not because someone was drunk; it was a deliberate action to make everyone afraid to do anything. They targeted people at random and executed them, accusing them of counterrevolution or espionage. The goal was to instill fear.
Now Putin's regime does the same: it instills fear in people. It is a basic authoritarian reflex to make people afraid of being marked as disloyal. They prefer to do it in unpredictable ways, to create uncertainty about where the red lines are so people don't even try to toe them.
Trump is not very skilled in the mechanics of terror. He is predictable, which is unfortunate for a would-be dictator. That is incompetence, and if a hypothesis resorts to it, that is a bad sign for the hypothesis. But AFAIK no hypothesis explaining Trump can avoid bringing his incompetence into the picture. In that light, a hypothesis's reliance on incompetence loses its discriminatory power.
Everyone in the administration is completely drunk on power, they truly believe the government should be allowed to do whatever they please, despite being vehemently against previous governments telling their constituents what to do. Such nonsense, they hold no values, they only want complete power.
I don't know how the business leadership community could watch this whole affair and still be in support of them AT ALL. This is well past getting a crappy twitter rant from Trump on the weekend that maybe one could ignore until the next rant.
My interpretation is that this is what happens when you make a Fox News host Secretary of Defense.
I think he is just too dumb to figure out a way to "finesse" the situation so the NSA etc. can use it however they want, or at least to know that it's politically intractable to make it a public fight.
I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".
I don't know if I like Anthropic more, but I certainly like their competitors much less now.
The new thing that I know about leading AI companies that aren't Anthropic (i.e. OpenAI, Google, Grok, etc) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.
The other companies have signed the waiver; however, they aren’t being used in classified systems currently, so that type of use is already extremely limited for them. Once they enter into contracts to be used in those systems without these protections, I will cancel my subs to them and switch to Anthropic. xAI entered into that contract last week. Altman is now publicly siding with Anthropic, so he’d better stand by that position with OpenAI, as they are currently negotiating for use in those systems.
Exactly - the implication is that every other company is absolutely open to surveilling you and killing you. They’re complicit. They participate in whatever the regime calls for.
Actually why is nobody in Cali just trying to join Canada - would be better for everyone in terms of more similar culture and values. Weird that it isn't discussed more
If I had to guess as a lifelong California resident, I'd say the salary discrepancy is probably the biggest factor. I'd also guess the weather and lack of available jobs would be the next biggest factors, not necessarily in that order.
A friend (he is from mostly warm and sunlit South India) who moved to Canada from California says he just can't take the weather anymore. So maybe weather is a huge factor? You deal with it not just every day but every hour, every second, year-round.
50% paycut for similar cost of living. Do you want to put 3 kids into a 2-bedroom apartment on your US$120k salary, with $10k of RSUs the government takes 53% of? In addition to a 13% sales tax?
I'm not American, but I pay a few hundred dollars a year for the premium health insurance plan at my company. I also pay tens of thousands of dollars more in taxes to grant me the ability to wait for 72 hours in an ER hallway whenever I can't wait weeks for an appointment because urgent care isn't a thing.
My take home would triple if I lived in the USA as a new graduate because of things like favourable treatment of stock grants, less income tax, and the fact my salary would double.
I'm sure the $80,000 extra dollars is enough to pay for the healthcare premiums and daycare. My effective hourly rate would be high enough that going from a 72 hour wait to a few hours would be worth the thousands of dollars in ER bills. If I worked for the government or another lower paying profession it would not be good, but I am a well-compensated software engineer.
There's a reason why 90% of Waterloo immediately moves to the United States after graduation.
My apologies. I thought $10/day daycare was universal in Canada. I guess my broader point was the difference in disposable income between U.S. workers vs. other countries becomes a lot more nuanced when things like healthcare, childcare, retirement, and taxes are taken into account.
> I'm sure the $80,000 extra dollars is enough to pay for the healthcare premiums and daycare.
You're probably right. That said, SWE compensation in the U.S. has been quite an anomaly compared to the vast majority of American labor, especially in the last 5-10 years. I don't think those who are comfortable right now are thinking far enough ahead about what they'll do if that changes, or perhaps when that changes. If this AI hype has taught me anything it's that those with capital cannot wait to start trimming their pesky engineers with those high salaries. And maybe that's always been the case, but seeing them go full mask off hits different.
Unrelated, I like your website. It's simple and the color scheme is aesthetically pleasing.
> My apologies. I thought $10/day daycare was universal in Canada.
In my understanding, it exists but it's vaporware and not really accessible.
> That said, SWE compensation in the U.S. has been quite an anomaly compared to the vast majority of American labor, especially in the last 5-10 years.
The discrepancy isn't an anomaly for Canadians in high-income fields, like law, medicine, or finance. As a rule of thumb, the USA pays twice as much but expects more productivity and less stability. I'm entitled to minimum vacation, I can't be fired at will without huge severance packages, I get lengthy parental leave, etc.
> If this AI hype has taught me anything it's that those with capital cannot wait to start trimming their pesky engineers with those high salaries.
This is the goal, but I see the opposite.
The engineers with the capacity to use AI to replace many others are extremely rare at my company since it requires breadth of knowledge to step in when you realize an LLM can't solve a problem itself, reading comprehension to understand the copious amounts of code/English the AI wants you to review, and extreme paranoia because these things lie constantly.
Otherwise, you end up in an AI delusion loop. The AI tells you it is fixing the problem but due to some fundamental misunderstanding it is unable to accomplish the task and lies to you about its ability. This ends when you are sufficiently fooled to approve the code or when you give up.
I think we're seeing signs of this as companies start replacing SaaS products.
> Unrelated, I like your website. It's simple and the color scheme is aesthetically pleasing.
Thank you! I'm attempting a blog with simple interactive components to ensure people use the site instead of summarizing it with AI.
I'm out of the loop, but the last local tech job I had was with Instant Domains Inc. That was great. These days I'm doing marine/geo science work with an NGO and I don't hear much about the local scene. A lot of the old players are still around, but there must be something new and interesting happening.
A coworker mentioned there's an autonomous marine sensing startup right in the downtown area. I want to look into that.
The reason that no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used "warfighter" as a standard term for military personnel since the late 1980s or early 1990s; see, e.g., Earl L. Wiener et al., eds., Human Factors in Aviation, 1988.
Warfighter is literally the Department of War's Amazonian or Googler or any other cringe term you'd see in company PR or recruiting material.
Based on this and several other of your responses below, would you say that it's fair to conclude that it's been a term for a long time, perhaps more in military/defense circles, but recently has gotten more mainstream media use?
I find it otherwise peculiar some feel like it appeared out of thin air, while others feel like it's always been a thing.
It isn't a new thing at all, and the term has been around for a while. I was an Infantryman from 05-08 and heard it back then. I have also more recently been a defense contractor. I don't think members of the military prefer any title, honestly. In the most broad sense, good terms are soldiers, sailors, airmen, marines. Defense Contractors constantly refer to the military as "warfighter" and have for a while. In short, nobody in the military is going to flinch one way or the other if you use either term. Just don't call marines anything but marines.
"Warfighters" has been used for decades to describe service members, though usage picked up (in my experience) some time in the late 00s or 2010s. It's actually pretty common to describe "serving the warfighter" for all the all the missions that support combat roles but aren't combat roles themselves.
It has been in use for at least a decade, since the Obama administration if not earlier.
We have soldiers, sailors, airmen/women, Marines (who really do not like being called soldiers), Coast Guardsmen/women, and now the Space Force. Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.
> Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.
Yeah, it's basically this. "service member" is clunky, like saying "person with enlistment".
Warfighter has its own issues as a descriptor but it at least rolls off the tongue better and is easier to read through in policy and regulation to the millions in the DoD.
No it's 100% these idiots pushing their fascist propaganda just like they tried to "rename" the Department of Defense to the Department of War. Most members of the military never even see actual fighting.
It’s been a term in rare-to-moderate use since the 1990s — Trump/Hegseth ramped it up to 11 and it’s every 3rd word out of Hegseth’s mouth because he thinks it sounds tough.
If you think a gender-neutral term used for decades within their own circles as a form of inclusive corporate-speak is "fascist propaganda" then I'm sorry to say you have serious issues.
> "I learned the word a week ago therefore it is new."
This isn't true, and there's no need to flame and be disingenuous.
> The term—and its use in the now-Department of War—dates back to the late 80s.
Maybe you can provide evidence instead of restating the same claim that sibling comments to mine have made?
I've already admitted that it wasn't invented by Hegseth. My claim is that he is popularizing it. In fact, your comment further down agrees with this:
> It really isn't—it's all perception. Hegseth has a much more outgoing and public persona so it's more visible.
Heck, can you even name the last 5 Secretaries that preceded him? I can't.
As you say, he has a much more public persona - as does his jingoistic rhetoric.
It always takes a ton of work to roll back state overreach. The Bound By Oath podcast by the Institute for Justice has a whole season about how hard it is to bring civil rights claims against the government or government officials.
And it gets harder in a country where even the judges are political appointees, and apparently that's by design. (I resisted adding a smiley here because this is rather sad.)
The courts are actually striking down a lot of government overreach recently. The tariffs were just overturned, and the administration was blocked from using the national guard for law enforcement. In fact this administration has lost more Supreme Court cases than any other administration at only 1 year in.
The usual suspects have stood up to it. Ben & Jerry's, Patagonia. In the former case it led to an illegal takeover by Unilever for which they're now being sued (or more accurately, the spinoff). Capgemini sold a US division over working with ICE, though that's a French company.
I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, then had it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.
I had cancelled my Claude sub after they banned OAuth in external tools, but I just renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits, and I'm happy to support them as a customer while they keep theirs.
What's stopping the government from using the usual nasty tricks the world has known about for decades?
DPA?
All Writs Act?
Force them to comply and then prevent them talking about it with NSLs?
I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic don't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're ok with those use-cases so long as the "safety barriers" are still up. Not really the best look, IMHO.
So what happens when we all get rosy eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the gov use the various instruments at their disposal to just force anthropic to do what they want, and then force them to never disclose it?
This is kind of crazy. Instead of just cancelling a mutually-agreed upon contract where Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company as a "supply chain risk" which is a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).
It's also incoherent that the DoD/DoW was threatening either to invoke the Defense Production Act or to classify them as a "supply chain risk". Either they're uniquely critical to national defense, or they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use. It can't be both.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms can suddenly be questioned months later and used in such devastating ways against them? Setting the morals/principles aside, how is it a rational business decision to work with a counterparty that behaves this way?
They have not; a social media post does not satisfy the requirements of 10 USC section 3252.
They are required to notify Congress (they have not), prepare a report with specific sections (they have not), and the reasons must fall within a set of categories outlined by statute (this does not).
There will be a court fight and they will lose, just as they lost the tariff battle, through sheer incompetence.
(Trump's post on Truth Social was actually fine. He said the USG would stop doing business with Anthropic, which is within its legal right. Hegseth's follow-on post is the problem. It is possible that Trump did not expect or want Hegseth to do that, that this was meant as bluster to bump along the negotiations; Hegseth has a recent history of stepping out of line within the administration and irritating people like Rubio.)
If the USG can mandate that everyone who works for a company that ever took a federal contract be genetically engineered, then I think they can tell people to not use Claude.
That's part of the recurrent confusion with this administration. In previous administrations, including Trump 1, people didn't need to spend a ton of time thinking about what it means to make a legally effective proclamation, because there was a baseline of competence. When a government official announced "We're doing X", they would do so as a summary of a large amount of legal process with the intent and effect of causing X to be true. If you went to challenge it in court of course, you'd have to identify some specific action as the label, but everyone would understand that this is a formalism.
Here, Hegseth has simply made a social media post. He did not publish any official investigation which led to the report. He did not explain what legal power would permit him to impose all the restrictions the post claims to impose. There is not, five hours later, any order on an official government website about it. So we have a real question. If a Cabinet secretary posts "I am directing the Department of War to designate...", does that in and of itself perform the designation, or is it simply an informal notice that the Department of Fascist Neologisms will perform the designation soon?
It is indeed kind of crazy. That's because the current US administration is composed of people whose sole qualification is being able to work for Donald Trump. Being competent, rational or ethical is career-limiting.
A question - being considered a supply chain risk is the same as being sanctioned? Or does it only affect their ability to be a defense supplier in the US (even if transitively?)
It's an honest question, by the way - not trying to throw any gotchas.
Just trying to understand whether companies or people that don't orbit defense contracting are still free to work with Anthropic, or risk being sanctioned too.
It's not the same thing as being sanctioned. In broad outline, a supply chain risk is a company that can't sell to or have its products or components resold to USG; whereas, a sanctioned entity is one that can't do business with anyone -- anyone who does so will be punished.
Yeah, but you can’t contract your software to the department of defense and then demand that they not use it to surveil foreigners. If that’s the line you want to draw, you’d have to avoid working with them in the first place.
Congrats Anthropic, you deserve to be applauded for this. Seeing a company being willing to stand up to authoritarianism in this time is a rarity. Stay strong.
What happens if somebody (maybe Anthropic!) uses Claude Code Security to find & fix a vulnerability in some piece of open-source software (openssh, the Linux kernel, that sort of thing)? Can the DoW use the resulting fix?
Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.
The word "Constitution" refers to the document listing the principles on which Claude is trained. It's not clear to me that Anthropic can defer to it - it is phrased throughout as a description of how Claude behaves, not as a description of how Anthropic or any other entity behaves. Can you be more precise about what you mean?
One interesting change between the last statement and this one: In the last statement Dario said that this designation had “never before been applied to an American company”. In the latest one the phrase is “never before publicly applied to an American company”.
I’m not sure what you’re referring to. It’s not (typically, as far as we know) a secret designation. We know of other companies designated as supply chain risks: Huawei, ZTE, and Kaspersky are the first ones that come to mind.
Generally, I am supportive of that move. One thing leaves me nonplussed as a non-US citizen: the "mass domestic surveillance of Americans" exception. That means Claude can still be used for mass surveillance of everybody else on the planet, right?
Why is DoD contracting with Anthropic exclusively rather than OpenAI or Google? Their models are all roughly as powerful and they seem both more capable and more willing to cozy up with the military (and this administration) than a relatively scrappy startup focused on model sentience and well-being. Hell, even Grok would be a better fit ideologically and temperamentally.
I am genuinely shocked that a tech company actually stood on principle. My doubts about AI, Anthropic, and Mr. Amodei remain, but man, I got the warm and fuzzies seeing them stick to their principles on this - even if one clause (autonomous weapons) is less principled and more, “it’s not ready yet”.
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected.
If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
I'm wondering how this plays out in practice. Does the administration decide to strongarm contractors into cutting all ties? Will that extend to someone like Google, who provides compute to Anthropic? Will the administration just plain ignore any court ruling? (as they've shown they're ready to do recently with the tariffs situation)
If the legal system works as intended, the blast radius isn't too big here, and it's something Anthropic will accept even if it hurts them. Maybe they even win and get the supply chain risk designation lifted. But I have zero faith that the legal system will make a difference here. It all comes down to how far the administration wants to go in imposing its will.
They can also classify it as restricted data -- like nuclear weapons technology.
Sure, there will be a court battle, but I don't think these companies want to take that chance. They'll capitulate after the lawyers realize that option is on the table.
> They can also classify it as restricted data -- like nuclear weapons technology.
Nuclear weapons technology is restricted under very specific legislative authority; where is the corresponding authority that could be selectively applied to a particular vendor's AI models or services?
Agreed, but the current administration is pretty adept at using the slimmest margin as justification, and it benefits from the fact that the legal process playing out over years is extremely detrimental to everyone but the government.
Many conservative commentators and Palmer Luckey have been all over Twitter saying, "it's not Anthropic's job to set policy," which reminds me of the classic tune from Tom Lehrer:
"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
This basically means that the government is already using OpenAI, Gemini, and other AI systems for large scale surveillance. They just wanted to add Anthropic to the list, and Anthropic said no.
The most important point of this story is that this is already happening. And it will likely continue regardless of who is elected.
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
You're absolutely right to point that out -- thank you for catching it. I made a mistake in my previous response and that last act appears to have caused civilian casualties. Let me take a closer look and clarify the correct details for you.
(Will leave you to imagine the bullseye emoji, etc.)
Could this escalate to the point that Anthropic exits the US and sets up shop elsewhere? Or would the company cease to exist before it got to that point?
It gets so much money, compute and US user data. It won’t be allowed to operate as is as a foreign entity
Best scenario it will get TikTok-ed, otherwise it will become the real national security risk
If the exit did happen: the US has a near-monopoly on compute on this planet for at least the next 2-3 years, so even if they took the researchers with them, the company would certainly cease to exist as it exists now.
Would the US government attempt to apply export controls on the technology and prohibit this? I'm sure Lockheed Martin couldn't decide to move their proprietary technology to another country.
Hegseth's statement already leans towards accusations of treason and duplicity, I would say people trying to export the company would face significant risk of arrest or worse.
Just off the top of my head, Canada, Switzerland, Iceland, Norway, Denmark, and Sweden would all seem to be pretty good counterexamples to your assertion.
Looking through the comments here, I am repeatedly surprised how quickly we seem to have lost a shared understanding of fundamentals since Trump came into power. I am still adjusting internally to the realization that it must always have been a misunderstanding.
1. I don’t understand what is controversial about any supplier making their own rules for trade. I don’t have to agree with their beliefs, but I find it the basis of a functioning society to allow others to hold beliefs that I do not share, and to develop products as they want, as long as they don’t pose any dangers. If I don’t like the product, I am free to shop elsewhere or develop my own.
2. I thought that there was a shared understanding of the line between voluntary business deals on the one hand, and coercion and punishment on the other. I thought we agreed that the law should protect people and businesses so that nobody can exert power over another. Not on the hows, but on the whys. And not based on ethical considerations (beliefs), but purely on the logical grounds that we know violence begets violence, that use of it will only escalate conflict, and that we will ultimately lose.
3. I thought we all agreed that government agencies were bound by the law and its policies. If you were to use the designation of Supply Chain Risk, you would at least have to provide sufficient logical arguments. Here, they even openly disclose that they plan to use the mechanism purely as punishment, against the spirit of the law: not because the product carries any risk and should be limited, but because it is too limited.
Is this some form of collective narcissistic psychosis? The desire to burn it all down in suicide?
The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.
DoD/DoW can't strong-arm these companies into unreasonable demands if they present a united front... and that's exactly why collective action (or even unionization) matters.
If the government really wants to, it could try building its "Skynet" on open-source Chinese models.. which would be deeply ironic.
This is ridiculous. These aren't unreasonable demands and the government has tools to compel tech companies to support the country regardless of any "collective action" shenanigans -- ask your AI to tell you about the Defense Production Act and the history of its use.
The demands are not only unreasonable, they are in violation of the contract the DoD signed. Do you really think LLMs should be used in autonomous weapons systems? Do you think the government should use them in mass domestic surveillance? That is reasonable?
Are you an American? Do you understand that your safe easy life depends on a mostly autonomous nuclear deterrence capability maintained by the military you oppose? Deeply think about why you still have right to free speech, and what it takes to sustain those rights.
But even if it did, the nuclear bit is a bold claim, especially when one of the most famous nuclear escalations in U.S. history was resolved by cooler heads in charge going around the traditional war hawks and negotiating instead.
What a uniquely American view of the world - yes the only reason you have free speech is by threatening to nuke out of existence the rest of the world lmfao get a load of yourself
So your position is that the United States doesn't get to have its own Skynet, because Skynet is bad, and that if it really wants to, it should fork the Chinese Skynet so that it can have a Skynet if it wants it so much.
Do you see the problem here? I genuinely don't think we would've won WWII if these people were running things back then.
Without English and German scientists and engineers, the United States would not have had a first nuclear weapon or the first successful rocket to land on the moon.
The United States government held scientists at essentially gunpoint in secret towns to make the bomb happen. Not sure what your point is, other than to note that in a previous era people had a better gauge of what time it was.
What a ridiculously nonsensical statement. Several scientists refused to participate, and at least one left part way through. Nobody was held at gunpoint.
Are you saying that we should consider the Chinese government to be an existential threat and menace to world peace on the same level as Nazi Germany?
What if the side that did Operation Paperclip and is currently champing at the bit to impose Total Surveillance on its own citizenry maybe isn't The Good Guys?
There is no evidence that this was a condition of the deal for working with the government on this. PRC already is a Total Surveillance state. The claim made by Anthropic is very specific, and it's that they feel that the law has not caught up to how AI can be used to aggregate very large amounts of data that can be obtained without a warrant through data brokers. The government already does this. Maybe you agree with Anthropic's point here, and it's certainly a good one, but they are building up a face-saving argument over what is already established precedent. An is vs. ought dichotomy and raising it as a redline is ridiculous.
At the end of the day I think many people simply want the United States to lose this race so they can feel good about their principles.
Okay but then why is that also seemingly a red line must have for the Department of War? Isn't it just a tool of domestic surveillance and counterinsurgency for them? Seems like a distraction from any real U.S. national security objectives.
It’s not, the memo that set all this off says nothing about the Terminator or Big Brother. The real objective in this case is that if Anthropic sells the United States a weapon then the United States’ elected leadership gets to decide how to use it. It is not more complicated than this.
Also people like me who are paying for a 20x Claude Max subscription and am feeling really good about it right now. I'll never even glance at OpenAI Codex or Gemini. Not to mention my divestment of OpenAI. It's just a drop I guess, but it's probably not the only one.
No offense, but this is where having immigrants throughout the power structure of these companies becomes an issue. We have an administration that is clearly not above using all avenues to apply pressure to get the things they want done.
How can we expect the VPs of these companies to make tough decisions like this when half of them can be pressured via their immigration status? It's hard enough for a normal citizen to stick their neck out in these circumstances.
None of them are 'good'. Execs at Anthropic just perceive the long-term damage from a potential Snowden-level leak showing how their model directed a drone strike against a bunch of civilians higher than short-term loss of revenue from the DoD contracts.
This letter is a public part of the negotiation process. It shouldn't be surprising that they are primarily using arguments that are, at least on the face, "patriotic".
Remember when A16Z and a bunch of other muppets insisted they had to back Trump because Biden was too hostile to private companies, especially AI ones? Incredible.
> If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.
/In theory./
In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
This makes it seem like they really like the Anthropic product and are using it quite a bit more than the others? Or is it just me making random connections?
People can still brush this off by saying Anthropic is doing this to create more buzz for its next round. But they are taking unpopular stances and could be burning bridges. Simply take a look at PLTR and it's obviously more lucrative to lean the other way.
I'm of the opinion that anthropic's "moral" stances are bullshit, not particularly coherent when you dig deep and more about branding. If so, this is grade A marketing.
They want to present themselves as moral. What better endorsement than by being rejected by the US military under Trump? You get the people who hate trump and the people who hate the military in one swoop.
At the same time, it's kind of a non-story. Anthropic says it doesn't want its products used in certain ways; the US military says fine, you can't be part of the project where we're going to make the AI do those things. Isn't that a win for both sides? What's the problem?
It would be like someone part of a boycott movement being surprised the company they are boycotting doesn't want to hire them.
They want their products to not be used for some purposes. That's fine; that is their right. But that doesn't just stop at direct purchases. If the US buys from a defense contractor who bought from Anthropic, that really isn't that different from buying direct. The moral hazard is still there, and the risk that Anthropic will try to prevent their product from being used in that fashion is still there.
I think Anthropic wants to have their cake and eat it too. You can't take a principled stand against something and then be shocked that the thing you are taking a principled stand against might think you are a risk.
> I think Anthropic wants to have their cake and eat it too. You can't take a principled stand against something and then be shocked that the thing you are taking a principled stand against might think you are a risk.
Is it a principled stand or not? In your first comment, you said 'anthropic's "moral" stances are bullshit', their actions here are merely (or at least primarily) a successful marketing exercise, and the result is "a win for both sides". Are you now acknowledging that it's a costly, risky action on Anthropic's part? Because you haven't said anything to refute that; you've just changed the subject.
I believe that Anthropic is trying to frame it that way. My point is that if you accept their framing, then this whole thing falls apart. That is true regardless of whether it's actually principled or not.
> Are you now acknowledging that it's a costly, risky action on Anthropic's part?
I'll acknowledge it's a risky strategy. Whether it's costly depends on the result of that risk.
> If the US buys from a defense contractor who bought from Anthropic, that really isn't that different from buying direct. The moral hazard is still there and the risk that Anthropic will try to prevent their product from being used in that fashion is still there.
You need to look closer at how the government is trying to use the 'supply chain risk' designation. Hegseth said this:
> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
It remains to be seen whether they'll actually be able to enforce this. But it clearly goes far beyond what would be justified by the kind of supply chain risk you are describing.
>You need to look closer at how the government is trying to use the 'supply chain risk' designation. Hegseth said this:
>> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
If Anthropic is really serious about their moral stances, they could themselves refuse to sell indirectly to the US military. Militaries are ultimately about killing people. So yes, if the supply chain risk is that Anthropic might suddenly pull out of military projects and leave people depending on them high and dry, this seems like an appropriate response.
Everyone close to Anthropic leadership has claimed they’re the real deal and it’s not a stunt. I don’t think it’s bull. They are trying to find a reasonable middle ground and settled on some red lines they won’t cross.
Instead of just canceling the contract, the DoD is trying to destroy Anthropic to make it comply with their whims.
IMO this will probably be quickly defeated in court.
If it isn't, comrade Hegseth will have done an impressive job of weakening the American empire. You simply can't do business with an entity that would try to destroy you over dumb bullshit like this.
You know what? I have not seen an American company take a stand like this… uh ever. I don’t think there should be any engagement with the military what so ever but I will offer a kudos to Anthropic.
I don’t really expect this to last but if it does I will happily continue to offer this kudos on an indefinite basis.
How long can we push this narrative? It was a terrible situation and I can't imagine the minutes of complete fear she must have felt. I pray for her family. But to then draw a conclusion that this is evidence we are in some sort of fascist decline, because of this incident, takes away from the innocent loss of life. And it greatly exaggerates the skill and aptitude of the killer. People spew the fascist narrative every chance they get. I'm sure most of us who like strawberries will be picking strawberries come June.
Yes I understand. And given the heaviness of the situation I could have chosen a better way to phrase that I completely disagree with it being evidence that we're on the road to fascism.
It's not because of one incident. And the fascist part of these incidents isn't just the killing, it's the official response to it. They immediately claim the victims are terrorists and assassins and suppress investigation of it. Let's not pretend this is just some sad accident.
You have an unrealistic picture of what fascism looks like. Most people got to pick strawberries throughout the Spanish, Italian, and even German fascist periods.
The problem isn't that fascism will kill all of us, but that you will not get to choose. If the regime decides that your city, your company, or your friends are an enemy, they will destroy you, and if your fellow strawberry-pickers bother to read about it in the paper they'll be told that you were an anti-government radical who had it coming.
> Protect _Americans_ from mass surveillance
> Protect _American_ forces
What the actual fuck. How can anyone side with Anthropic. They are not the good guys by any means whatsoever. Mass surveillance against anyone is wrong and having killbots "when AI is ready" is totally fucked and dystopian. Imagine killbots rampaging while the good American people are at home living a nice peaceful life. Fuck any of that, fuck Anthropic, fuck ClosedAI, fuck Google, fuck Trump, fuck the DoD and fuck every American who is patriotic to the monster their country became. Fuck every country that also tries to do stuff like this. Fuck all companies taking part in such insanity.
There is literally no world where I take any organization that has been strong-armed by fucking Pete Hegseth seriously, lmao. Thank you Anthropic both for building the best models for general engineering and for having a fucking backbone.
This is what real leadership looks like. Not the silence and complicity that you see from big tech, who regularly bend the knee and bestow bribes and gifts onto the Trump administration.
I think that choice of words to call them the Department of War and Secretary of War multiple times in that statement was very much intentional. And a point well made.
Title is off: "Statement on the comments from Secretary of War Pete Hegseth"
This is another statement, to their customers about Hegseth's social post, but perhaps resulting in further escalation because you know the other side doesn't like having their weaknesses pointed out.
This is an extremely polite "fuck you, make me". It's good to see that they have principles, and I suspect strongly that Anthropic will come out on top here if they stand firm.
If the Trump admin so chooses, they could absolutely obliterate Anthropic in an instant. They don't really care about tricky things like 'legality' or 'the court of law', they could just force everyone to stop interacting with them, raid their offices and steal all their shit.
Perhaps they should've found their spine a year earlier; right now their only hope is that the admin isn't stupid enough to crash the propped-up economy over petty bullshit. But knowing how they behave, well.
> They don't really care about tricky things like 'legality' or 'the court of law', they could just force everyone to stop interacting with them, raid their offices and steal all their shit.
This is criticism that I would use to describe countries like China and Russia, and many other poorer ones. Were the Trump administration to do this, it would be unequivocal evidence that we are dealing with an unlawful insurgent government. I doubt it will happen, but I'm often wrong.
This is all stuff they've already done in the past few months alone. I think it's time for people to take their heads out of the sand and look what's been happening around them.
Doesn't NSA have a backdoor to all these companies by default? I could have sworn I read somewhere years ago that the government demands a backdoor to all US companies if they can't get in on their own.
1) The US gov generally does have close partnerships with most large-scale, mature tech companies. Sometimes this is just a division dedicated to handling their requests, often it’s a special portal or API they can use to “lawfully” grab information from for their investigations. Often times these function somewhat like backdoors. Anthropic is large, but not mature. Additional changes must still take place for “backdoor” style partnerships to be effected.
2) The NSA can pretty much use any computer system they set their eyes on - famously including computers that were never connected to the internet secured in the middle of a mountain (Stuxnet). If they wanted to secretly utilize the Claude API without Claude finding out, that is within their capabilities. Google had to encrypt all their internal datacenter traffic to try to prevent the NSA from logging all their server-to-server traffic, after mistakenly thinking their internal networks were secure enough not to need that.
3) This isn’t about being “able” to do whatever the administration wants. This is the administration demonstrating the consequences of perceived insubordination to make other companies think twice about ever trying to limit use of corporate technology.
On point 3, are you saying this will dissuade other companies from taking Anthropic's stance? Somehow I actually thought this would set precedent for how to actually stand up to gov. Quite interesting how we see the same situation and come up with totally different conclusions.
They're describing the intent of the administration, not predicting the future impact on other companies. Essentially they're making the point that your original question about the NSA being able to get whatever it wants clandestinely isn't actually relevant, because Hegseth/Trump don't actually care that much about Claude doing X or Y - they were trying to make an example of Anthropic by punishing it, expecting it would immediately crumble like the rest of Big Tech, as a warning for anyone else to stay in line and keep their mouths shut.
The NSA legally isn't allowed to spy on US citizens directly, because the NSA is a US military organization and the Posse Comitatus Act prohibits the US military from being used as a US policing force.
It's one of the hidden and forgotten revelations about the Snowden leaks, where he showed that the NSA had a bunch of filters in their top-secret classified systems to filter out communications from US citizens. Those filters exist because of Posse Comitatus.
How does the filter work? Identity first? As in, do they access the data/activity first and stop when they realize the person is a citizen? Otherwise how do they approach it?
A backdoor is a completely different thing when it comes to an AI company, as compared to a social media company. Not really even sure what it would mean when it comes to doing inference on an LLM. Having access to the weights, training data and inference engine?
The model of Claude the DoD is asking for more than likely doesn't even exist in a production ready form. The post-training would have to be completely different for the model the DoD is asking for.
I have worked at a number of software companies that would be "interesting" to get access to, with enough intimate information to know if there was a super-sekret backdoor. If "all US companies" had to comply .. well .. I guess I was really lucky to work for those that somehow fell through the cracks.
I think Anthropic sounds well-intentioned but is blundering this incident in a big way. They really needed to work toward a deal instead of isolating themselves with a "principled stance" that sets up a competitor to swoop in and take the contracts they had.
And which one of their competitors do you imagine would swoop in and take their contracts while admitting to the rest of their customers that they're okay with their models being used for autonomous weapons and surveillance?
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].
I think many people on HN have a cynical reaction to Anthropic's actions due to their own lived experiences with tech companies. Sometimes, that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.
I do not think this is a calculated ploy that's driven by making money. I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
[1]: https://news.ycombinator.com/item?id=47174423
[2]: https://news.ycombinator.com/item?id=47149908
My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.
So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.
However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.
HN is pretty polarised about this - they are either “the good guys” or “doing it for positive marketing”.
I’m in the camp of “the world is so fucked up, take the good when you can find it”.
Beggars can’t be choosers when it comes to taking a stand against dictatorships.
Yeah, the alternative is to be OK with their product being used for surveillance.
Not sure why it's controversial that they said no, regardless of the reasoning. Yeah there's a lot of marketing speak and things to cover their asses. Let's call them out on that later. Right now let's applaud them for doing the right thing.
FWIW I do not think they are the "good guys" (if I had a dollar for every company that had a policy of not being evil...). But they are certainly not siding with the bad guys here.
> Let's call them out on that later. Right now let's applaud them for doing the right thing.
Yes, yes, yes. When I first read the stuff about this yesterday, my immediate thought was "wait, these are the only two things they have a problem with?"
But they made a stand, and that still matters. We shouldn't let the perfect be the enemy of the good. At least it's not Grok.
It's gotta be thus.
For if you don't, the next step is cynicism maximally operationalized: what, you're not doing game-playing/political BS to get ahead? What are you? A chump? An idiot?
That kind of stuff spreads like wildfire, making corporate America ... something else, to put it politely.
Doing the right thing has cost me big time here and there. I don't care. Simultaneously, orgs are not all bad; that's another distortion we can do without.
No surprise many people on YC's site align with Sam Altman's view of the world - right or wrong.
I’m just surprised the alignment guy is struggling with alignment. Dodged a bullet I guess.
If I remember my D&D, Lawful Evil is an alignment.
I think it's definitely true that you should never count on a company to do principled things forever. But that doesn't mean that nothing is real or good.
Like Google's support for the open web: They very sincerely did support it, they did a lot of good things for it. And then later, they decided that they didn't care as much. It was wrong to put your faith in them forever, but also wrong to treat that earlier sincerity as lies.
In this case, Anthropic was doing a good thing, and they got punished for it, and if you agree with their stand, you should take their side.
Google's support for the open web is a great example because it was obviously a good thing but also obviously built into their business model that they'd take that position. That made them a much more trustworthy company in those days, because abandoning that position would have required not just losing money for a while but changing their internal structure.
How much value is there in individual values?
Many of us remember that OpenAI was also started by people with strong personal values. Their charter said that they would not monetize after reaching AGI, their fiduciary duty is to humanity, and the non-profit board would curtail the ambitions of the for-profit incentives. Was this not also believed by a sizeable portion of the employees there at the time? And what is left of these values after the financial incentives grew?
The market forces from the huge economic upside of AI devalue individual values in two ways. First, they reward those who choose whatever accelerates AI the most over individuals who are more careful and act on their values - the careful simply lose power in the long run until their virtue has no influence. As Anthropic says in its mission statements, it is not of much use to humanity to be virtuous if you are irrelevant. Second, as is true for many technologies, economic prosperity is deeply linked to human welfare, so slowing or limiting progress leads to real, immediate harm to the human population. Thus any government regulation against AI progress will always be unpopular, because the values arguing about the future harm of AI are fighting against the values of saving people from disease and starvation today.
> However, in this instance, it does seem that Anthropic is walking away from money.
The supply chain risk designation will be overturned in court, and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers. Not to mention that giving in would mean they lose lots of their employees who would refuse to work under those terms. In this case, the principles are less than free.
> ...the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.
In fact, a friend heard about this and immediately signed up for a $200/year Claude Pro plan. This is someone who has been only a very occasional user of ChatGPT and never used Claude before.
I told my friend "You could just sign up for the free plan and upgrade after you try it out."
"No, I want to send them this tangible message of support right now!"
Still, you’d need a million people to do that to compensate for the $200M military contract.
As an aside, there are probably lots of companies that serve the government seriously considering cutting the government as a customer.
Simply because the money/efficiency they will lose from cutting Claude will surpass the revenue they get from the gov.
Does the military pay $200m per month?
As the parent stated, the Claude Pro plan is $200 per year, not per month.
Gotcha, mixed it up with the Max plan.
Is the government contract 200m per year? Or for a longer period?
Not all that many people
Unclear how much damage the designation will do to their dealmaking ability in the meantime. How long will it take for the court to reverse the order?
The longer it takes, the better the impact on their reputation.
The consumer goodwill is working then - it pushed me to upgrade my plan on March 1st... (do they bill on a rolling 30-day cycle? or calendar-month to calendar-month?)
It’s not rolling 30 days. Lost 2 days of use by subscribing in February.
Thanks! I appreciate the heads up!
> The supply chain risk designation will be overturned in court,
I'm honestly uncertain how the courts will rule. You could be right, but it isn't guaranteed. I think a judicial narrowing of it is more likely than a complete overturn.
OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer. I don't think Hegseth actually wants to put them in that position – he probably honestly doesn't realise that's what he's potentially doing. In any event, Microsoft/AWS/etc's lobbyists will talk him out of it.
And the more the government waters it down, the greater the likelihood the courts will ultimately uphold it.
> and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.
Maybe. The problem is B2B/enterprise is arguably a much bigger market than B2C. And the US federal contracting ban may have a chilling effect on B2B firms who also do business with the federal government, who may worry that their use of Claude might have some negative impact on their ability to win US federal deals, and may view OpenAI/xAI (and maybe Google too) as safer options.
I guess the issue is nobody yet knows exactly how wide or narrow the US government is going to interpret their "ban on Anthropic". And even if they decide to interpret it relatively narrowly, there is always the risk they might shift to a broader reading in the future. Possibly, some of Anthropic's competitors may end up quietly lobbying behind the scenes for the Trump admin to adopt broader readings of it.
> OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer.
A tweet does not have the force of law. Being designated a supply chain risk does not mean that companies who do business with the government cannot do business with Anthropic. Hegseth just has the law wrong. The government does not have the power to prevent companies from doing business with Anthropic.
The issue is, even if the Trump admin is misrepresenting what the law actually says, federal contractors may decide it is safer to comply with the administration’s reading. The risk is the administration may use their reading to reject a bid. And even if they could potentially challenge that in court and win, they may decide the cheaper and less risky option is to choose OpenAI (or whoever) instead
They would have a very good case against the government if that were to happen. I suspect that the supply chain risk designation will not last long (if it goes into effect).
Some vendors will decide to sue the government. Others may decide that switching to another LLM supplier is cheaper and lower risk.
And I'm not sure your confidence in how the courts will rule is justified. Learning Resources Inc v Trump (the IEEPA tariffs case) proves the SCOTUS conservatives – or at least a large enough subset of them to join with the liberals to produce a majority – are willing sometimes to push back on Trump. Yet there are plenty of other cases in which they've let him have his way. Are you sure you know how they'll judge this case?
> Are you sure you know how they'll judge this case?
I'm not even sure it will get that far. There's a million different ways that this could go that mean it won't ever come before the supreme court. The designation isn't even in effect yet.
I do think if it goes into effect it will eventually be overturned (Supreme Court or otherwise). There just isn't a serious argument to make that they qualify as a supply chain risk, and there is no precedent for it.
That's what worries me so much about the development that OpenAI is stepping in. OpenAI's claim is that they have the same principles as Anthropic, but that claim is easy because it's free right now: the govt wants to sell the "old bad, new good" story.
Surely OpenAI cannot but notice that those values, held firmly, make you an enemy of the state?
My reading is that OpenAI is paying lip service. Altman is basically saying "OF COURSE we don't want to spy on Americans or murderdrone randos, but OF COURSE the government would never do that, they just told me so (except for the fact that they just cut ties with Anthropic because Anthropic wouldn't let them do that)"
It's much simpler than that. OpenAI is losing significant market share, and this is a Hail Mary that the government will force troves of companies to leave Anthropic.
principles are easy when they're free
Indeed. If everything is a priority, nothing is a priority; you only know that something is a real priority when you get an answer to the question "what will you sacrifice for this".
I call this being ethically convenient. I think Anthropic is playing to the crowd. This admin will be gone soon enough, so no need dragging the brand into the mud. Just need to hold out. They have enough money that walking away from the money isn't impressive. But pissing off the gov is pretty fun and far more interesting.
Anthropic's principles are extraordinarily weak from an absolute point of view.
Don't surveil the US populace? Don't automate killing, make sure a human is in the loop? No, sorry, don't automate killing yet.
Yeah dude, I'm sure just about any burglar I pull out of prison will agree.
Listen, yes, it's good compared to like 99% of US companies. But that really speaks more to the absolute moral bankruptcy of most companies, and not to Anthropic's principles.
That being said, yes we should applaud anthropic. Because yes this is rare and yes this is a step in the right direction. I just think we all need to acknowledge where we are right now, which is... not a good place.
This is an absolute rarity these days. Very appreciative of the true leadership on display here
I applaud Anthropic's choice. Choosing principle over money is a hard choice. I love Anthropic's products and wish them success!
You applaud Anthropic's choice to enhance mass surveillance of non-US people? If Anthropic wants mass surveillance, they should limit it to their own country, not extend it to all other countries, IMO.
The funniest, or perhaps saddest (depending on your view), part is that the "principles" we're talking about and apparently celebrating here are that they don't want to do DOMESTIC surveillance, and they don't want FULLY autonomous kill bots... yet, because according to the CEO the models aren't there yet.
Meaning, they're a-okay with:
- Mass surveillance of non-US peoples (and let's be completely real here, they're in bed with Palantir already, so they're obviously okay with mass surveillance of everyone as long as they're not the ones that will be held culpable)
- Autonomous murder bots. For now they want a human in the loop to rubberstamp things, but eventually "when the models improve" enough, they're just fine and dandy with their AIs being used as autonomous weapons.
What the fuck are the principles we're talking about here? Why are they being celebrated for this psychotic viewpoint, exactly?
> I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.
The entire problem is that this lasts as long as those people are in charge. Every incentive is aligned with eventually replacing them with people who don’t share those values, or eventually Anthropic will be out-competed by people who have no hesitation to put profit before principle.
I mean, yeah. How else could it be? Xerox, GE, IBM (1990, Gerstner), and a zillion other rock stars fell hard and had to be overhauled. That's why continuous improvement is a thing, and why a Platonic take on the world was never a thing.
I also think this will ultimately benefit Anthropic in the long run. Outlined in this article: https://open.substack.com/pub/zeitgeistml/p/murder-is-coming...
If you're going to be cynical, at least credit them with some brains:
MAGA isn't going to last forever, and when it collapses, the ones who publicly stood up to it will be better positioned to, I don't know, not face massive legal problems under whatever administration comes next. A government elected by middle-aged moms who use "Fuck ICE" as a friendly greeting isn't going to have any incentives to go easy on Palantir and Tesla.
Cynical or not, I think it was an absolutely brilliant move: "Mass domestic surveillance of Americans constitutes a violation of fundamental rights". I think they placed their bets on Sama signing a contract with the DoD and here we are, one day later the news that OpenAI signed a contract is out. An absolute PR disaster for OpenAI. And an absolute PR victory for Anthropic.
I think OpenAI's IPO will be interesting. Not even the conservative media will be happy about mass surveillance of Americans.
For non-Americans, not much changes; they don't really care about your rights any more than about a pile of dog poo.
>driven by values
Would the people who have invested in the company like that? Or would they like the company to make some money? Are they going to piss off their investors by being "driven by values"?
I mean, please explain it to me how "driven by values" can be done when you are riding investor money. Or may be I am wrong and this company does not take investments.
So in the end you are either
1. funding yourselves, then you are in control, so there is at least a justification for someone to believe you when you say that the company is "driven by values".
2. Or have taken investments, then you are NOT in control, then anyone who trusts you when you say the company is "driven by values", is plain stupid.
In other words, when you start taking investment, you forego your right to claim virtue. The only claim anyone can be expected to believe is "MY COMPANY WILL MAKE A TRUCKLOAD OF MONEY!!!!"
As an investor in Anthropic, I'd say that anyone who wasn't aware of where they stood on various values issues the whole time should not have been putting money in, it was not hidden.
How much is your investment (you don't have to be exact)?
The bottom line is that if the investment is not profitable, there will be less and less investment, because fewer and fewer can afford to lose money and stick to their values, until no one is investing at all, however high your values might be...
Sticking to your values when it costs growth is not sustainable for publicly traded companies...
Anthropic is a public benefit corporation. Investors who put money in knew this. It's in the corporate charter. The corporate charter is a public document.
Fiduciary duty means the board and officers must act in accordance with the governing documents of the corporation.
Even a regular corporation doesn't need to be just for the purpose of "money goes up". The board has discretion on how they create value.
Why did they work with Palantir then, which is the integrator in the DoD? It does not take a genius to figure out where this was going.
I don't know why a personal testimony to the effect that "these are the good guys" needs to be at the top of every Anthropic thread. With respect to astroturfing and stealth marketing they are clearly the bad guys.
Anthropic's stance is "we believe in the use of our tools, with safeguards, to assist the defense of the US".
So of course they would work with Palantir to deploy those tools.
The issue we're seeing is because the DoW decided they no longer like the "with safeguards" part of the above and is trying to force Anthropic to remove them.
They are pretty clear about this:
> the mass domestic surveillance of Americans
This they say they don’t like. The qualifiers tell you they’re totally fine with mass surveillance of Palestinians, or anyone else really, otherwise they could have said “mass surveillance”.
> fully autonomous weapons
And they’re pretty obviously fine with killing machines using their AI as long as they’re not fully autonomous (at the moment, they say the tech is not there yet).
All things considered they’re still a bit better than their competitors, I suppose.
Others have addressed the first half of your comment, so I'll focus on the astroturfing claim.
While I've talked a lot about Anthropic this week, if I was astroturfing for a positive image, I'd be very bad at it [1][2][3].
[1]: https://news.ycombinator.com/item?id=47150170
[2]: https://news.ycombinator.com/item?id=47163143
[3]: https://news.ycombinator.com/item?id=47174814
It doesn’t seem like anybody has addressed “If they are the good guys with principles why did they work with Palantir?”
There’s a comment that’s sort of handwaving and saying “because America”, but I would imagine that someone with direct knowledge of the people involved would have something more substantive than “thems the breaks” when it comes to working with Palantir
Anthropic makes it kind of clear in all of their statements that they are not opposed to working with the surveillance state, with the military industrial complex, etc. Their central philosophy, it seems, is not incongruent with working with entities, public or private, that can be construed as imperialist or capitalistic or a combination of both. I actually appreciate their honesty here.
They exist within the regime of capital and imperialism that all of us who are American citizens exist within. This isn't a cop-out or cope. It's just the reality of the world that we live in. If you are an American and somehow above it, let me know how you live.
The further away from God, the more need to believe there are good guys.
God has been used as a justification for a lot of human suffering.
My personal belief is that the closer to god you are, the more easily you can justify evil. How could you not? If my entire belief system is derived from faith, then there are *no* conclusions I could not come to, and therefore anything can be justified.
>further away from god
What is that? Some new bit you're working on?
So many tech companies have the "high values" screed that it really just seems like a standard step in the money plan.
Practically the entire tech industry, including many of the higher ups currently camping out on the right, used to be firmly in a sort of centrist-with-social-justice-characteristics camp. Then many of those same people enthusiastically stood with Trump at his inauguration. It's completely reasonable that people have their doubts now.
It's also completely reasonable to expect that if Anthropic is the real deal and opposed to where the current agenda setters want to take things, they'll be destroyed for it.
> enthusiastically stood with Trump
I think "enthusiastically" looks different. They had to choose between kissing Trump's butt to make good business for 4 years or seeing their companies at a severe disadvantage. I'm not saying what they did was good, nor do I support it. But from a business angle it's not hard to see why they chose to do that. If you asked them privately, off the record, I'm sure most of them would tell you that Trump is an idiot and dangerous.
Mark Zuckerberg was in a big hurry to call Trump a "badass" in the wake of the Butler hoax, and is clearly trying to appeal to the right with his cultivated jiu jitsu Chad image. It doesn't mean a damn thing what these CEOs are willing to say behind closed doors when their public decisions are to remain in lockstep with the agenda and fire anyone who asks questions about whether it's the right one.
Destroyed? No. But a new sharif is gonna show up while the existing exit stage left with big bags of nuts.
This is a pretty classic mistake most people who are in high-profile companies make. They think that some degree of appealing to people who were their erstwhile opponents will win them allies. But modern popular ethics are the Grim Trigger and the Copenhagen Interpretation of Ethics. You cannot pass the purity test. One might even speculate that passing the purity test wouldn't do anything to get you acceptance.
Personally, I wish that the political alignment I favour was as Big Tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants. But it just so happens that the other side isn't so. So such is life. We lose and our allies dwindle since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.
Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.
> political alignment I favour was as Big Tent as Donald Trump's administration is
I'm not sure how accurate this sentiment is. Your desire is to embrace your enemy without resolving the differences, and get what you want. It's not clear here if you're advocating compromise and negotiation, or just embracing for the sake of embracing while just doing what you wanted all along.
And evaluating Trump's actions against this sentiment doesn't seem to be the negotiation and compromise interpretation. Given the situation with tariffs and ICE enforcement, there is no indication of negotiation or compromise other than complete fealty/domination.
So as grandiose and noble your sentiment is, Donald Trump is hardly the epitome of it as you seem to suggest.
I think the differences in this situation were that I do not want AI used in domestic surveillance or autonomous weapons, and Anthropic holds to that position.
I think Donald Trump has pretty much let Zohran Mamdani operate without applying the kind of political pressure he has applied to other people, notably his predecessor Eric Adams. Also, I think saying "let people be your allies when they take your position" is less "grandiose and noble" than demanding someone agree on all counts before you will accept any political alignment. But it's fine if everyone else disagrees. It's possible there really just isn't a political group which will accept my views and while that's unfortunate because it means I can't get all that I want, I think it'll be okay.
One could reasonably argue that the meta-position is to either join the Republicans full-bore (somewhat unavailable to me) or to at least play the purity test game solely because that's the only way to have any influence on outcomes. If it comes to that, I'll do it.
You are making a mistake in thinking that Trump thinks of these things in political terms. Trump sees a charismatic and popular politician and he wants to associate with them on that basis alone, because Trump has a deep psychological need to be liked. Mamdani understands his psychology and is able to exploit it well by playing his own attributes to his advantage.
Politically, it's not like Trump tolerates dissent within the Republican party, he constantly threatens and berates anyone who shows defiance into submission. It's precisely because Mamdani is not in his tent and not really much of a threat to his power that he is willing to deal with him that way.
I don't understand, your position is the same as Anthropic, yet you disagree with their stance?
And I wouldn't take the case of Trump and Mamdani as the exemplar of Trump's overall behavior towards opponents. The amount of evidence to the contrary is overwhelming.
Anthropic's adherence to their stated principles was never tested and their willingness to work with DoD made it seem like they didn't stand by them strongly so I wasn't happy with that. This action shows that they are willing to lose big contracts in order to stand by their stated principles. I like that.
In any case, I think I've said all there is for me to say on the subject and everyone seems to disagree. I'll take the hint.
> he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants.
Is your perception that warped? Mamdani is the one who knows how to play Trump as a fiddle, and the one who walks away with something from the exchange.
Zohran Mamdani has yet to demonstrate that he poses any serious impediment to Trump and the agenda of Trump's owners.
I think there is a marked difference in Trump's rhetoric v Mamdani prior to the meeting at the White House and after.
I think you are extrapolating a bit too far from an outlier data point. Trump has had several other meetings (eg. Zelenskyy) go sideways for no apparent reason.
and he has had several meetings change his opinion of the other party for no apparent reason (eg. Zelenskyy)
extrapolation is futile
Your contention that Trump's administration is big tent is risible.
Political witch hunts, women and minorities forced out of the military, and kicking out all the allied countries that used to be in the tent with us?
Bullshit of the finest caliber.
Yes, the Trump administration is a big tent of politicians who hold incompatible opinions and are allowed to stay as long as they display personal allegiance to Trump.
“I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.”
Every single one of these CEOs happily pirated unimaginable amounts of copyrighted content. That directly hurt millions of real human beings. Not just the prior creators but also crushing the future potential for success of future ones.
https://www.susmangodfrey.com/wins/susman-godfrey-secures-1-...
Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.
It’s not just admirable it’s the obvious position to take and any alternative is head scratching.
It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.
If it's a loyalty test then you'd think the DoD would be willing to let them "fail" and simply drop the contract, but instead they're threatening to label Anthropic a supply chain risk.
If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.
This administration has repeatedly shown it will try to bully or take an outrageous negotiating position just to gain fealty. Whether they get anything, or whether the dispute is actually what the label says, should always be treated with skepticism, especially these days with social media information wars. That's the benefit of realpolitik when you're a superpower: you often don't actually need anything, you can just make an example of people to keep the flock in check.
It seems like they'd have a stronger negotiating position if they had an alternative contractor waiting in the wings before they accused Anthropic of being woke traitors, as opposed to a threat to migrate away over the next 6 months.
But again, the sophistication of their strategery might also have a negative correlation with Hegseth's BAC.
Grok was approved for DoD work only a few days ago, they have an alternative if they want.
The Pentagon, much like everyone else, will only want to use the best model available though.
No one accused them of being competent negotiators. Remember, the secret behind the "Art of the Deal" is to be obstinate and abusive until everyone settles just to stop dealing with you.
They're not threatening to do that. They just did. Read the tweet linked in the article.
> In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. https://x.com/SecWar/status/2027507717469049070?s=20
This has never happened before. It just goes to show how overextended the USG is these days. America is broke. Anthropic is about to IPO. Most stock market money comes from foreign countries like Japan these days. All those people are going to trust Anthropic more if they believe the company is neutral among nations and acting as a check and balance to power.
"This has never happened before." US could compel Anthropic to act; simply not doing business with them is restraint, not escalation.
U.S. authorities labeled them a supply chain risk. The military went on Twitter and basically labeled Anthropic an enemy of the state. The most popular company on Earth. They did that. If USG was able to issue some kind of secret court order compelling them to act and keep it covert then they would have done it.
> If it's a loyalty test then you'd think the DoD would be willing to let them "fail" and simply drop the contract, but instead they're threatening to label Anthropic a supply chain risk.
It is not just a test, it is PR of sorts. They want to bully everyone into loyalty.
> If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.
If we're going by Occam's razor, then we should cut away the drinks. USSR started its terror not because someone was drunk, it was a deliberate action to make everyone afraid to do anything. They targeted people at random and executed them accusing them of counterrevolution or espionage. The goal was to instill fear.
Now the Putin regime does the same: they are instilling fear in people. It is a basic authoritarian reflex to make people afraid of being marked as disloyal. They prefer to do it in unpredictable ways to create uncertainty about where the red lines are, so people don't even try to toe them.
Trump is not very skilled in the mechanics of terror. He is predictable, which is unfortunate for a would-be dictator. It is incompetence, and if a hypothesis resorts to it, that is a bad sign for the hypothesis. But AFAIK no hypothesis explaining Trump can avoid introducing his incompetence into the picture. In this light, a hypothesis's reliance on incompetence loses its discriminatory power.
Everyone in the administration is completely drunk on power, they truly believe the government should be allowed to do whatever they please, despite being vehemently against previous governments telling their constituents what to do. Such nonsense, they hold no values, they only want complete power.
I don't know how the business leadership community could watch this whole affair and still be in support of them AT ALL. This is well past getting a crappy twitter rant from Trump on the weekend that maybe one could ignore until the next rant.
My interpretation is that this is what happens when you make a Fox News host Secretary of Defense.
I think he is just too dumb to figure out a way to "finesse" the situation so the NSA etc. can use it however they want, or at least to know that it's politically intractable to make it a public fight.
I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".
Stay strong Anthropic. We just like you more for this.
I don't know if I like Anthropic more, but I certainly like their competitors much less now.
The new thing that I know about leading AI companies that aren't Anthropic (i.e. OpenAI, Google, Grok, etc) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.
Is that actually the case? or are they just not supplying LLMs to DoW and Anthropic is?
The other companies have signed the waiver, however they aren't being used in classified systems currently. So that type of use is already extremely limited for them. Once they enter into contracts to be used in those systems without these protections, I will cancel my subs to them and switch to Anthropic. xAI entered into that contract last week. Altman is now publicly siding with Anthropic, so he'd better stand on that position with OpenAI, as they are currently negotiating for use in those systems.
We might not hear about any contracts that happen.
https://news.ycombinator.com/item?id=47189650
It looks like OpenAI soon will be: https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...
Exactly - the implication is that every other company is absolutely open to surveilling you and killing you. They’re complicit. They participate in whatever the regime calls for.
Dear Anthropic,
Europe is a nice place, too. In case you need GPUs, we have AI factories for you: https://digital-strategy.ec.europa.eu/en/policies/ai-factori...
We also don't engage in mass surveillance or develop autonomous weapons.
> europe
> we don't engage in mass surveillance
you're incredibly naive
ASML is European too! Then they could make a deal with them. wink wink
> We also don't engage in mass surveillance
What?
Anthropic is welcome to set up shop here in Canada! I hear Victoria BC is great. Absolutely brimming, overflowing with technology talent
Actually why is nobody in Cali just trying to join Canada - would be better for everyone in terms of more similar culture and values. Weird that it isn't discussed more
If I had to guess as a lifelong California resident, I'd say the salary discrepancy is probably the biggest factor. I'd also guess the weather and lack of available jobs would be the next biggest factors, not necessarily in that order.
No, imagine the salary potential, not the discrepancy. Ape stronger together. We'd be a new world super power
Yep, tech peeps living in Canada want to work in Cali, not so much the other way around, in my experience.
Someone has to stay to fight the shit happening in the US! The problem won't just go away if people move.
A friend (he is from mostly warm and sunlit South India) who moved to Canada from California says he just can’t take that weather anymore. So maybe weather is a huge factor? You deal with that not everyday in your life but every hour..second and year round.
victoria itself is a sunnier, drier seattle. from LA or san diego it's real different, but as you go north it all gets about the same.
if they went to toronto or montreal or something, that would be wildly different
50% paycut for similar cost of living. Do you want to put 3 kids into a 2-bedroom apartment on your US$120k salary, with $10k of RSUs the government takes 53% of? In addition to a 13% sales tax?
How much of your U.S. paycheck goes towards healthcare premiums?
How much is daycare in the U.S. for 3 children? Conservative estimates put that at $4,000-$5,000 per month, and that's after tax.
Is there a reason your comment omits these key differences when comparing a SWE's quality of life living in the United States vs. Canada?
I don't get free daycare in Canada either.
I'm not American, but I pay a few hundred dollars a year for the premium health insurance plan at my company. I also pay tens of thousands of dollars more in taxes to grant me the ability to wait for 72 hours in an ER hallway whenever I can't wait weeks for an appointment because urgent care isn't a thing.
My take home would triple if I lived in the USA as a new graduate because of things like favourable treatment of stock grants, less income tax, and the fact my salary would double.
I'm sure the $80,000 extra dollars is enough to pay for the healthcare premiums and daycare. My effective hourly rate would be high enough that going from a 72 hour wait to a few hours would be worth the thousands of dollars in ER bills. If I worked for the government or another lower paying profession it would not be good, but I am a well-compensated software engineer.
There's a reason why 90% of Waterloo immediately moves to the United States after graduation.
My apologies. I thought $10/day daycare was universal in Canada. I guess my broader point was the difference in disposable income between U.S. workers vs. other countries becomes a lot more nuanced when things like healthcare, childcare, retirement, and taxes are taken into account.
> I'm sure the $80,000 extra dollars is enough to pay for the healthcare premiums and daycare.
You're probably right. That said, SWE compensation in the U.S. has been quite an anomaly compared to the vast majority of American labor, especially in the last 5-10 years. I don't think those who are comfortable right now are thinking far enough ahead about what they'll do if that changes, or perhaps when that changes. If this AI hype has taught me anything it's that those with capital cannot wait to start trimming their pesky engineers with those high salaries. And maybe that's always been the case, but seeing them go full mask off hits different.
Unrelated, I like your website. It's simple and the color scheme is aesthetically pleasing.
> My apologies. I thought $10/day daycare was universal in Canada.
In my understanding, it exists but it's vaporware and not really accessible.
> That said, SWE compensation in the U.S. has been quite an anomaly compared to the vast majority of American labor, especially in the last 5-10 years.
The discrepancy isn't an anomaly for Canadians in high-income fields, like law, medicine, or finance. As a rule of thumb, the USA pays twice as much but expects more productivity and less stability. I'm entitled to minimum vacation, I can't be fired at will without huge severance packages, I get lengthy parental leave, etc.
> If this AI hype has taught me anything it's that those with capital cannot wait to start trimming their pesky engineers with those high salaries.
This is the goal, but I see the opposite.
The engineers with the capacity to use AI to replace many others are extremely rare at my company since it requires breadth of knowledge to step in when you realize an LLM can't solve a problem itself, reading comprehension to understand the copious amounts of code/English the AI wants you to review, and extreme paranoia because these things lie constantly.
Otherwise, you end up in an AI delusion loop. The AI tells you it is fixing the problem but due to some fundamental misunderstanding it is unable to accomplish the task and lies to you about its ability. This ends when you are sufficiently fooled to approve the code or when you give up.
I think we're seeing signs of this as companies start replacing SaaS products.
> Unrelated, I like your website. It's simple and the color scheme is aesthetically pleasing.
Thank you! I'm attempting a blog with simple interactive components to ensure people use the site instead of summarizing it with AI.
Those of us in the Cascadian movement have been talking about it for decades!
Canada isn't interested in being part of a country that's 50% American either.
whats going on round tectoria/viatec nowadays? im looking to go buy a house there next
I'm out of the loop, but the last local tech job I had was with instant domains inc. That was great. These days I'm doing marine/geo science work with an NGO and I don't hear much about the local scene. A lot of the old players are still around, but there must be something new and interesting happening.
A coworker mentioned there's an autonomous marine sensing startup right in the downtown area. I want to look into that.
Any specific areas you want to buy in?
oh, i think a friend might work there if theyre partnering with uvic at all.
generally okalands/fernwood, though ive been eyeing some spots on the gorge
Not to intentionally sidetrack the conversation, but when did we start calling service members 'warfighters?'
I've been seeing it a lot lately, but don't remember ever really seeing it before. Do members of the military prefer this title?
https://languagelog.ldc.upenn.edu/nll/?p=4339
The reason that no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used "warfighter" as a standard term for military personnel since the late 1980s or early 1990s: Thus Earl L. Wiener et al., Eds. Human Factors in Aviation, 1988
Warfighter is literally the Department of War's Amazonian or Googler or any other cringe term you'd see in company PR or recruiting material.
Based on this and several other of your responses below, would you say that it's fair to conclude that it's been a term for a long time, perhaps more in military/defense circles, but recently has gotten more mainstream media use?
I find it otherwise peculiar some feel like it appeared out of thin air, while others feel like it's always been a thing.
It isn't a new thing at all, and the term has been around for a while. I was an Infantryman from 05-08 and heard it back then. I have also more recently been a defense contractor. I don't think members of the military prefer any title, honestly. In the most broad sense, good terms are soldiers, sailors, airmen, marines. Defense Contractors constantly refer to the military as "warfighter" and have for a while. In short, nobody in the military is going to flinch one way or the other if you use either term. Just don't call marines anything but marines.
> Just don't call marines anything but marines.
I thought the marines were just the ones in the navy that couldn’t stop eating the crayons? :P
Interesting, I guess I have less exposure not being in defense or military circles. Thank you for the level response.
They want to make sure the whole Diversity of our armed forces (soldiers, sailors, marines, …) feel an Equitable and Inclusive share of the mention.
"Warfighters" has been used for decades to describe service members, though usage picked up (in my experience) some time in the late 00s or 2010s. It's actually pretty common to describe "serving the warfighter" for all the all the missions that support combat roles but aren't combat roles themselves.
I’ve always heard this term in use from a defense contractor
It's a term that's been used at least back to the Bush 43 administration, probably older than that.
I always associate it with fighter aircraft
It's been controversial since 2002: https://bracingviews.com/2024/08/03/generation-warfighter-ne...
It has been in use for at least a decade, since the Obama administration if not earlier.
We have soldiers, sailors, airmen/women, Marines (who really do not like being called soldiers), Coast Guardsmen/women, and now the Space Force. Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.
> Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.
Yeah, it's basically this. "service member" is clunky, like saying "person with enlistment".
Warfighter has its own issues as a descriptor but it at least rolls off the tongue better and is easier to read through in policy and regulation to the millions in the DoD.
Around the time Hegseth was appointed secretary of war. It's a trump thing.
Edit: so it's been around for longer, but the Trump regime seems to love it bigly so I'm sticking with my observation.
It's a trump regime thing.
this is false, it's been around for a while
Been around yes but the popularization of the term is entirely from low tier war hawks who think force and aggression and violence is a virtue.
No it's 100% these idiots pushing their fascist propaganda just like they tried to "rename" the Department of Defense to the Department of War. Most members of the military never even see actual fighting.
It is not a Trumpism. As an example, it has been on Wiktionary since 2008, well before Trump.
https://en.wiktionary.org/w/index.php?title=warfighter&actio...
It’s been a term in rare-to-moderate use since the 1990s — Trump/Hegseth ramped it up to 11 and it’s every 3rd word out of Hegseth’s mouth because he thinks it sounds tough.
If you think a gender-neutral term used for decades within their own circles as a form of inclusive corporate-speak is "fascist propaganda" then I'm sorry to say you have serious issues.
When Hegseth finds out it's gender neutral he'll stop using it.
It's a Hegseth malapropism, which is why it's slightly disturbing that Dario continues to use it.
edit: To be clear, Hegseth didn't create it, merely has popularized its use recently. Eg his speech at Quantico last Sept
"I learned the word a week ago therefore it is new."
The term—and its use in the now-Department of War—dates back to the late 80s.
It is so clearly being used to a much greater and more deliberate degree during this administration. Pretending otherwise is foolish
It really isn't—it's all perception. Hegseth has a much more outgoing and public persona so it's more visible.
Heck, can you even name the last 5 Secretaries that preceded him? I can't.
The last one that was this widely known was probably Rumsfeld (Bush II) or Robert Gates during Obama I (bin Laden raid).
The new term Hegseth is boosting is "warrior", not "warfighter".
> "I learned the word a week ago therefore it is new."
This isn't true, and there's no need to flame and be disingenuous.
> The term—and its use in the now-Department of War—dates back to the late 80s.
Maybe you can provide evidence instead of restating the same claim that sibling comments to mine have made?
I've already admitted that it wasn't invented by Hegseth. My claim is that he is popularizing it. In fact, your comment further down agrees with this:
> It really isn't—it's all perception. Hegseth has a much more outgoing and public persona so it's more visible. Heck, can you even name the last 5 Secretaries that preceded him? I can't.
As you say, he has a much more public persona - as does his jingoistic rhetoric.
This part stood out to me:
“To the best of our knowledge, these exceptions have not affected a single government mission to date.”
I had assumed these exceptions (on domestic surveillance and autonomous drones) were more than presuppositions.
Heck yeah, so happy to see Anthropic fighting. This is what real leadership looks like. I'd love to see the same from Google and OpenAI.
Is this the first company to actually face to face stand up to the current administration?
Costco has been. When every other major company was scuttling their DEI initiatives Costco doubled down. Doesn’t seem to have impacted them yet.
Costco also actually sued the Trump administration over the Tariffs, probably the largest and most popular to do so
No, a few law firms targeted by EOs fought them in court last year and won.
Also the case against tariffs, a quick (maybe AI hallucinated) search shows `Victor Owen Schwartz` was part of the challenge.
Democracy isn't dead folks, but it takes more work than usual.
The problem is that it's a never ending game of attrition, and the government can always outspend you.
For example, in case of tariffs, they found another loophole and went on their way.
It's nice to have a little guy take a stand, but without major collective pressure, nothing will change.
We are in a period of resistance, fighting for the next election so we can apply major collective pressure. Then he will be the lamest of ducks.
It always takes a ton of work to roll back state over reach. The Bound By Oath podcast by the Institute for Justice has a whole season about how hard it is to bring civil rights claims against the government or government officials.
And gets harder in a country where even the judges are political appointees and apparently that’s by design. (I resisted adding a smiley here because this is rather sad)
The courts are actually striking down a lot of government overreach recently. The tariffs were just overturned, and the administration was blocked from using the national guard for law enforcement. In fact this administration has lost more Supreme Court cases than any other administration at only 1 year in.
The usual suspects have stood up to it. Ben & Jerry's, Patagonia. In the former case it led to an illegal takeover by Unilever for which they're now being sued (or more accurately, the spinoff). Capgemini sold a US division over working with ICE, though that's a French company.
So yeah, extremely few have.
Harvard is an analogue in the academic sphere, if you include organizations beyond just companies.
The Supreme Court decision striking down IEEPA tariffs was from a number of small businesses standing up against the current administration. [1]
[1] https://en.wikipedia.org/wiki/Learning_Resources,_Inc._v._Tr...
Hundreds of companies have filed lawsuits against the admin over the tariffs.
You get much fewer points for standing up to the admin when it has damaged your wallet.
I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, only to have it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.
Had cancelled my Claude sub after they banned OAuth in external tools, but just renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits, and I'm happy to support them as a customer for as long as they hold to them.
What's stopping the government from using the usual nasty tricks the world has known about for decades?
DPA? All Writs Act?
Force them to comply and then prevent them talking about it with NSLs?
I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic don't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're ok with those use-cases so long as the "safety barriers" are still up. Not really the best look, IMHO.
So what happens when we all get rosy eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the gov use the various instruments at their disposal to just force anthropic to do what they want, and then force them to never disclose it?
Did the world learn nothing from Snowden?
This is kind of crazy. Instead of just cancelling a mutually-agreed upon contract where Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company as a "supply chain risk" which is a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).
It's also incoherent that the DoD/DoW was threatening to invoke the Defense Production Act OR classifying them as "supply chain risk". They're either too uniquely critical to national defense OR they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontracts) to use.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms are suddenly questioned months later and can be used in such devastating ways against them? Setting morals/principles aside, how does it make rational business sense to work with a counterparty that behaves this way?
Are they just threatening to label? It seems to me like they have already labeled.
They have not; a social media post does not satisfy the requirements of 10 USC section 3252.
They are required to notify Congress (they have not), prepare a report with specific sections (they have not), and the reasons must fall within a set of categories outlined by statute (this does not).
There will be a court fight and they will lose, just like they lost the tariff battle, because of poor competence.
(Trump's post on Truth Social was actually fine. He said the USG would stop doing business with Anthropic, which is within its legal right. Hegseth's follow-on post is the problem. It is possible that Trump did not expect or want Hegseth to do that, that this was meant as bluster to bump along the negotiations; Hegseth has a recent history of stepping out of line within the administration and irritating people like Rubio.)
If the USG can mandate that everyone who works for a company that ever took a federal contract be genetically engineered, then I think they can tell people to not use Claude.
What.
That's part of the recurrent confusion with this administration. In previous administrations, including Trump 1, people didn't need to spend a ton of time thinking about what it means to make a legally effective proclamation, because there was a baseline of competence. When a government official announced "We're doing X", they would do so as a summary of a large amount of legal process with the intent and effect of causing X to be true. If you went to challenge it in court of course, you'd have to identify some specific action as the label, but everyone would understand that this is a formalism.
Here, Hegseth has simply made a social media post. He did not publish any official investigation which led to the report. He did not explain what legal power would permit him to impose all the restrictions the post claims to impose. There is not, five hours later, any order on an official government website about it. So we have a real question. If a Cabinet secretary posts "I am directing the Department of War to designate...", does that in and of itself perform the designation, or is it simply an informal notice that the Department of Fascist Neologisms will perform the designation soon?
It is indeed kind of crazy. That's because the current US administration is composed of people whose sole qualification is being able to work for Donald Trump. Being competent, rational or ethical is career-limiting.
A question - being considered a supply chain risk is the same as being sanctioned? Or does it only affect their ability to be a defense supplier in the US (even if transitively?)
It's an honest question, by the way - not trying to throw any gotchas.
Just trying to understand whether companies or people that don't orbit defense contracting are still free to operate with Anthropic, or risk being sanctioned too.
It's not the same thing as being sanctioned. In broad outline, a supply chain risk is a company that can't sell to or have its products or components resold to USG; whereas, a sanctioned entity is one that can't do business with anyone -- anyone who does so will be punished.
Thanks for clarifying! After I asked this I found similar information buried across threads.
> we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights
Mass surveillance of people constitutes a violation of fundamental rights. The red line is in the wrong place.
Yeah, but you can’t contract your software to the department of defense and then demand that they not use it to surveil foreigners. If that’s the line you want to draw, you’d have to avoid working with them in the first place.
Americans cannot even be bothered to care about Americans; what makes you think they can be bothered to care about foreigners?
Congrats Anthropic, you deserve to be applauded for this. Seeing a company being willing to stand up to authoritarianism in this time is a rarity. Stay strong.
What happens if somebody (maybe anthropic!) uses Claude Code Security to find & fix a vulnerability in some piece of open-source software---openssh, linux kernel, that sort of thing? Can the DoW use the resulting fix?
Just don’t help big brother see more. If your job leads to such results, think hard about whether that’s what you should be doing.
Perhaps it’s time or even past time to think of ways of screwing up their training sets.
Was bracing for another rug pull around all this, but kudos to Dario and co for their continued vigilance. Refreshing to see.
Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.
It's kind of sad how an AI Startup defers more to its constitution than the actual government
The word "Constitution" refers to the document listing the principles on which Claude is trained. It's not clear to me that Anthropic can defer to it - it is phrased throughout as a description of how Claude behaves, not as a description of how Anthropic or any other entity behaves. Can you be more precise about what you mean?
Remember "small government"?
Smaller government has always been code for bigger me, at least in recent American politics. Now me is government, so bigger government.
One interesting change between the last statement and this one: In the last statement Dario said that this designation had “never before been applied to an American company”. In the latest one the phrase is “never before publicly applied to an American company”.
How do you imagine a secret designation would work..?
I’m not sure what you’re referring to. It’s not (typically, as far as we know) a secret designation. We know of other companies designated as supply chain risks: Huawei, ZTE, and Kaspersky are the first ones that come to mind.
Happy to be a paying Anthropic customer right now.
Generally, I am supportive of that move. One thing leaves me nonplussed as a non-US citizen: the "mass domestic surveillance of Americans" exception. That means Claude can still be used for mass surveillance of everybody else on the planet, right?
Yup. That’s pretty much how they make money. (Edit - in the context of contracting services to US government)
But of course, wholesale surveillance on the rest of the world is fine.
I guess our democracies don't count and we don't have any rights.
Why is DoD contracting with Anthropic exclusively rather than OpenAI or Google? Their models are all roughly as powerful and they seem both more capable and more willing to cozy up with the military (and this administration) than a relatively scrappy startup focused on model sentience and well-being. Hell, even Grok would be a better fit ideologically and temperamentally.
I am genuinely shocked that a tech company actually stood on principle. My doubts about AI, Anthropic, and Mr. Amodei remain, but man, I got the warm and fuzzies seeing them stick to their principles on this - even if one clause (autonomous weapons) is less principled and more, “it’s not ready yet”.
I had subscriptions to both Anthropic and Openai. Cancelled my openai subscriptions. Companies without a modicum of ethics deserve to go extinct.
I'm a lot happier now being an anthropic customer.
This is an appropriate rebuke of unreasonable behavior.
I applaud Anthropic's candor in the public sphere. Unfortunately the counterparty is unworthy of such applause.
The gap between Anthropic and the other guys keeps growing
From the statement:
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
I'm wondering how this plays out in practice. Does the administration decide to strongarm contractors into cutting all ties? Will that extend to someone like Google, who provides compute to Anthropic? Will the administration just plain ignore any court ruling? (as they've shown they're ready to do recently with the tariffs situation)
If the legal system works as intended, the blast radius isn't too big here, and it's something Anthropic will accept even if it hurts them. Maybe they even win and get the supply chain risk designation lifted. But I have zero faith that the legal system will make a difference here. It all comes down to how far the administration wants to go in imposing its will.
Bleak.
It does NOT extend to compute.
GCP and AWS cannot use Claude to build anything part of a DoD contract, but they do not need to deny Anthropic access to compute itself.
> conduct any commercial activity with Anthropic
Surely that would cover both buying things from and selling things to Anthropic.
Yes but that part is an overreach (they don't actually have the authority to do this, regardless of what they say.)
They can also classify it as restricted data -- like nuclear weapons technology.
Sure, there will be a court battle, but I don't think these companies want to take that chance. They'll capitulate after the lawyers realize that option is on the table.
> They'll capitulate after the lawyers realize that option is on the table.
Hopefully their lawyers read HN comments so they can negotiate with your deeper understanding of the legal landscape.
> They can also classify it as restricted data -- like nuclear weapons technology.
Nuclear weapons technology is restricted under very specific legislative authority, where is the corresponding authority that could be selectively applied to a particular vendors AI models or services?
agreed but the current administration is pretty adept at using the slimmest margin for justification and benefiting from the fact that the legal process playing out over years is extremely detrimental to everyone but the government
EDA software, software to design computer chips in general, has been classified as ITAR now under this administration. Trump can do that to AI.
Many conservative commentators and Palmer Luckey have been all over Twitter saying, "it's not Anthropic's job to set policy," which reminds me of the classic tune from Tom Lehrer:
"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
This basically means that the government is already using OpenAI, Gemini, and other AI systems for large scale surveillance. They just wanted to add Anthropic to the list, and Anthropic said no.
The most import point of this story is that this is already happening. And it will likely continue regardless of who is elected.
This has been an exceptional publicity campaign for anthropic, among others
Based on the replies so far, Hacker News is ideologically captured.
Hours ago, OpenAI raised $110B.
Any commentary about how adversaries won't have regulations?
Don't worry, OpenAI will kneel for the king:
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
https://news.ycombinator.com/item?id=47188698
Fuck this authoritarian bullshit.
Can just see it now.
You're absolutely right to point that out -- thank you for catching it. I made a mistake in my previous response and that last act appears to have caused civilian casualties. Let me take a closer look and clarify the correct details for you.
(Will leave you to imagine the bullseye emoji, etc.)
Hopefully this causes an exodus of top talent from OpenAI. Anthropic needs all the help it can get.
The OpenAI contract was signed once Anthropic failed to yield before the deadline https://x.com/sama/status/2027578652477821175
I just want to point out how 1984 fascist dictatorship it still feels to call it “the department of war”. That’s not normal. None of this is normal.
In 1984, they called it the “ministry of peace”. If anything “defense” is more euphemistic than “war”.
Could this escalate to the point that Anthropic exits the US and sets up shop elsewhere? Or would the company cease to exist before it got to that point?
It gets so much money, compute and US user data. It won’t be allowed to operate as is as a foreign entity
Best scenario it will get TikTok-ed, otherwise it will become the real national security risk
Were the exit to happen, well, as the US has a monopoly on compute on this planet for the next 2-3 years at least, the company, even if it took its researchers with it, would certainly cease to exist as it exists now.
Would the US government attempt to apply export controls on the technology and prohibit this? I'm sure Lockheed Martin couldn't decide to move their proprietary technology to another country.
Hegseth's statement already leans towards accusations of treason and duplicity, I would say people trying to export the company would face significant risk of arrest or worse.
Every other country is significantly less free than the US. America is freedom's last stand.
Just off the top of my head, Canada, Switzerland, Iceland, Norway, Denmark, and Sweden would all seem to be pretty good counterexamples to your assertion.
Free to do anything other than say no to Donald Trump.
To hell with America
Looking through the comments here, I am repeatedly surprised how quickly we seem to have lost a shared understanding of fundamentals since Trump came into power. I am still adjusting internally to the realization that it must always have been a misunderstanding.
1. I don’t understand what is controversial about any supplier making their own rules for trade. I don’t have to agree with their beliefs, but I find it the basis of a functioning society to allow others to hold beliefs that I do not share, and to develop products as they want, as long as they don’t pose any dangers. If I don’t like the product, I am free to shop elsewhere or develop my own.
2. I thought there was a shared understanding of the line between voluntary business deals, and coercion and punishment. I thought we agreed that the law should protect people and businesses in ways so that nobody can exert power over another. Not on the hows, but on the why. And not based on ethical considerations (beliefs) but purely on logical grounds: we know that violence begets violence, that use of it will only escalate conflict, and that we will ultimately lose.
3. I thought we all agreed that government agencies were bound by the law and its policies. If you were to use the designation of Supply Chain Risk, you would at least have to sufficiently provide logical arguments. Here, they even openly disclose how they plan to use the mechanism purely as punishment, against the spirit of the law, not because a product carries any risk and should be limited, but because it is too limited.
Is this some form of collective narcissistic psychosis? The desire to burn it all down in suicide?
The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.
DoD/DoW can't strong-arm these companies into unreasonable demands if they present a united front... and that's exactly why collective action (or even unionization) matters.
If the government really wants to, it could try building its "Skynet" on open-source Chinese models.. which would be deeply ironic.
This is ridiculous. These aren't unreasonable demands and the government has tools to compel tech companies to support the country regardless of any "collective action" shenanigans -- ask your AI to tell you about the Defense Production Act and the history of its use.
The demands are not only unreasonable they are in violation of the contract the DoD signed. Do you really think LLMs should be used in autonomous weapons systems? Do you think they government should use them in mass domestic surveillance? That is reasonable?
Are you an American? Do you understand that your safe easy life depends on a mostly autonomous nuclear deterrence capability maintained by the military you oppose? Deeply think about why you still have right to free speech, and what it takes to sustain those rights.
"safe easy life" != "free speech"
but even if it did, the nuclear bit is a bold claim, especially when one of the most famous nuclear escalations in the US was resolved by cooler heads in charge going around traditional war hawks and negotiating instead.
Mostly autonomous is extremely different from fully autonomous.
Answer the question instead of deflecting.
What a uniquely American view of the world - yes the only reason you have free speech is by threatening to nuke out of existence the rest of the world lmfao get a load of yourself
So your position is that the United States doesn't get to have its own Skynet, because Skynet is bad, and that if it really wants to it should fork the Chinese Skynet so that it can have a Skynet if it wants it so much.
Do you see the problem here. Genuinely don't think we would've won WWII if these people were running things back then.
Without English and German scientists and engineers, the United States would not have had a first nuclear weapon or the first successful rocket to land on the moon.
The United States government held scientists at essentially gunpoint in secret towns to make the bomb happen. Not sure what your point is, other than to note that in a previous era people had a better gauge of what time it was.
What a ridiculously nonsensical statement. Several scientists refused to participate, and at least one left part way through. Nobody was held at gunpoint.
Are you saying that we should consider the Chinese government to be an existential threat and menace to world peace on the same level as Nazi Germany?
What if the side that did Operation Paperclip and is currently champing at the bit to impose Total Surveillance on its own citizenry maybe isn't The Good Guys?
There is no evidence that this was a condition of the deal for working with the government on this. PRC already is a Total Surveillance state. The claim made by Anthropic is very specific, and it's that they feel that the law has not caught up to how AI can be used to aggregate very large amounts of data that can be obtained without a warrant through data brokers. The government already does this. Maybe you agree with Anthropic's point here, and it's certainly a good one, but they are building up a face-saving argument over what is already established precedent. An is vs. ought dichotomy and raising it as a redline is ridiculous.
At the end of the day I think many people simply want the United States to lose this race so they can feel good about their principles.
Okay but then why is that also seemingly a red line must have for the Department of War? Isn't it just a tool of domestic surveillance and counterinsurgency for them? Seems like a distraction from any real U.S. national security objectives.
It’s not, the memo that set all this off says nothing about the Terminator or Big Brother. The real objective in this case is that if Anthropic sells the United States a weapon then the United States’ elected leadership gets to decide how to use it. It is not more complicated than this.
Skynet nukes humanity.
Also people like me who are paying for a 20x Claude Max subscription and am feeling really good about it right now. I'll never even glance at OpenAI Codex or Gemini. Not to mention my divestment of OpenAI. It's just a drop I guess, but it's probably not the only one.
No offense, but this is where having immigrants throughout the power structure of these companies becomes an issue. We have an administration that is clearly not above using all avenues to apply pressure to get what it wants done.
How can we expect the VPs of these companies to make tough decisions like this when half of them can be pressured via immigration status? It’s hard enough being a normal citizen sticking your neck out in these circumstances.
Google walked out in 2018 from project Maven, which is what this is about:
https://en.wikipedia.org/wiki/Project_Maven
The Epstein adjacent crew (Palantir) took over. Palantir was using Anthropic. No one could possibly have foreseen this. /s
None of them are 'good'. Execs at Anthropic just perceive the long-term damage from a potential Snowden-level leak showing how their model directed a drone strike against a bunch of civilians higher than short-term loss of revenue from the DoD contracts.
I understand why you are cynical, but you should read more about the people who founded Anthropic, and specifically why they left OpenAI.
> Allowing current models to be used in this way would endanger America’s warfighters and civilians.
That’s okay! The use of autonomous weapons is only risky for the civilians of the country you’re destabilizing this week!
This letter is a public part of the negotiation process. It shouldn't be surprising that they are primarily using arguments that are, at least on the face, "patriotic".
It’s not surprising, I agree. Criticism is part of the price for choosing that tack.
Previous discussion : https://news.ycombinator.com/item?id=47186677
This is the response to said tweet.
Remember when A16Z and a bunch of other muppets insisted they had to back Trump because Biden was too hostile to private companies, especially AI ones? Incredible.
> If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.
/In theory./
In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
I'm not sure if OpenAI knows that scooping this might hurt their brand by a lot.
This makes it seem like they really like the Anthropic product and are using it quite a bit more than the others? Or is it just me making random connections?
People can still brush this off by saying Anthropic is doing this to create more buzz for its next round. But they are taking unpopular stances and could be burning bridges. Simply take a look at PLTR and it's obviously more lucrative to lean the other way.
I'm of the opinion that anthropic's "moral" stances are bullshit, not particularly coherent when you dig deep and more about branding. If so, this is grade A marketing.
They want to present themselves as moral. What better endorsement than by being rejected by the US military under Trump? You get the people who hate trump and the people who hate the military in one swoop.
At the same time its kind of a non story. Anrhropic says it doesn't want its products used in certain ways, US military says fine, you can't be part of the project where we are going to make the AI do those things. Isn't that a win for both sides ? What's the problem?
It would be like someone part of a boycott movement being surprised the company they are boycotting doesn't want to hire them.
> What's the problem?
Think. The problem is that being branded a "supply chain risk" prohibits vast chunks of the US corporate landscape from doing business with Anthropic.
The problem is that the government is attempting to destroy a company rather than simply terminate their contract.
Isn't it a literal supply chain risk, though?
They want their products to not be used for some purposes. That's fine; that is their right. But that doesn't just stop at direct purchases. If the US buys from a defense contractor who bought from Anthropic, that really isn't that different from buying direct. The moral hazard is still there, and so is the risk that Anthropic will try to prevent their product from being used in that fashion.
I think anthropic wants their cake and to eat it too. You can't take a principled stand against something and then be shocked the thing you are taking a principled stand against might think you are a risk.
> I think anthropic wants their cake and to eat it too. You can't take a principled stand against something and then be shocked the thing you are taking a principled stand against might think you are a risk.
Is it a principled stand or not? In your first comment, you said 'anthropic's "moral" stances are bullshit', their actions here are merely (or at least primarily) a successful marketing exercise, and the result is "a win for both sides". Are you now acknowledging that it's a costly, risky action on Anthropic's part? Because you haven't said anything to refute that; you've just changed the subject.
> Is it a principled stand or not
I believe that anthropic is trying to frame it that way. My point is that if you accept their framing then this whole thing falls apart. That is true regardless of if its actually principled or not.
> Are you now acknowledging that it's a costly, risky action on Anthropic's part?
I'll acknowledge its a risky strategy. Whether its costly depends on the result of that risk.
> If the US buys from a defense contractor who bought from Anthropic, that really isn't that different from buying direct. The moral hazard is still there and the risk that Anthropic will try to prevent their product from being used in that fashion is still there.
You need to look closer at how the government is trying to use the 'supply chain risk' designation. Hegseth said this:
> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
It remains to be seen whether they'll actually be able to enforce this. But it clearly goes far beyond what would be justified by the kind of supply chain risk you are describing.
>You need to look closer at how the government is trying to use the 'supply chain risk' designation. Hegseth said this:
>> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
If Anthropic is really serious about their moral stances, they could themselves refuse to sell indirectly to the US military. Militaries are ultimately about killing people. So yes, if the supply chain risk is that Anthropic might suddenly pull out of military projects and leave people depending on them high and dry, this seems like an appropriate response.
Everyone close to Anthropic leadership has claimed they’re the real deal and it’s not a stunt. I don’t think it’s bull. They are trying to find a reasonable middle ground and settled on some red lines they won’t cross.
You believe the "reasonable middle ground" is using their models to kill people and spy on citizens?
> What's the problem?
Instead of just canceling the contract, the DoD is trying to destroy Anthropic to make it comply with their whims.
IMO this will probably be quickly defeated in court.
If it isn't, comrade Hegseth will have done an impressive job of weakening the American empire. You simply can't do business with an entity that would try to destroy you over dumb bullshit like this.
that doesn't even remotely represent what is happening here.
You know what? I have not seen an American company take a stand like this… uh ever. I don’t think there should be any engagement with the military what so ever but I will offer a kudos to Anthropic.
I don’t really expect this to last but if it does I will happily continue to offer this kudos on an indefinite basis.
This is what fighting early stage facism looks like.
Early stage? The brownshirts shooting a woman in the face in her car for the crime of driving off is not early stage, my dude.
How long can we push this narrative? It was a terrible situation, and I can't imagine the minutes of complete fear she must have felt. I pray for her family. But to then conclude that this incident is evidence we are in some sort of fascist decline takes away from the innocent loss of life, and greatly exaggerates the skill and aptitude of the killer. People spew the fascist narrative every chance they get. I'm sure most of us who like strawberries will be picking strawberries come June.
Neither Renee Good or Alex Pretti or any of the other innocents that the brownshirts killed will pick strawberries ever again.
Yes I understand. And given the heaviness of the situation I could have chosen a better way to phrase that I completely disagree with it being evidence that we're on the road to fascism.
What would have to occur, hypothetically, for you to conclude that the US is on the road to fascism?
It's not because of one incident. And the fascist part of these incidents isn't just the killing, it's the official response to it. They immediately claim the victims are terrorists and assassins and suppress investigation of it. Let's not pretend this is just some sad accident.
You have an unrealistic picture of what fascism looks like. Most people got to pick strawberries throughout the Spanish, Italian, and even German fascist periods.
The problem isn't that fascism will kill all of us, but that you will not get to choose. If the regime decides that your city, your company, or your friends are an enemy, they will destroy you, and if your fellow strawberry-pickers bother to read about it in the paper they'll be told that you were an anti-government radical who had it coming.
Anthropic knew they were going to lose this contract to OpenAI, and this is an attempt to salvage publicity from the loss.
This administration is comfortable with blatantly picking winners and OpenAI is better connected with the admin than Anthropic.
> Protect _Americans_ from mass surveillance

> Protect _American_ forces
What the actual fuck. How can anyone side with Anthropic. They are not the good guys by any means whatsoever. Mass surveillance against anyone is wrong and having killbots "when AI is ready" is totally fucked and dystopian. Imagine killbots rampaging while the good American people are at home living a nice peaceful life. Fuck any of that, fuck Anthropic, fuck ClosedAI, fuck Google, fuck Trump, fuck the DoD and fuck every American who is patriotic to the monster their country became. Fuck every country that also tries to do stuff like this. Fuck all companies taking part in such insanity.
There is literally no world where I take seriously any organization that has been strong-armed by fucking Pete Hegseth lmao. Thank you Anthropic, both for building the best models for general engineering and for having a fucking backbone.
Amazing that Pete Hegseth is even a person that anyone would ever need to take seriously.
ChatGPT wasted no time bending over backwards to appease Trump.
"We'd sure love to turn our AI into a mass surveillance tool! Please, aim it at the Americans Population! And Kill Bots, we can't wait!"
Turns out that Dario was lying about not having heard from the Department of War, as reported by Undersecretary Emil Michael:
https://x.com/USWREMichael/status/2027568070034608173
This is what real leadership looks like. Not the silence and complicity that you see from big tech, who regularly bend the knee and bestow bribes and gifts onto the Trump administration.
Hegseth is the ultra-unqualified Secretary of Defense. Defense. JFC, even when "pushing back" everyone is capitulating.
I think the choice to call them the Department of War and Secretary of War multiple times in that statement was very much intentional. And a point well made.
Hegseth is so pathetic.
Title is off: "Statement on the comments from Secretary of War Pete Hegseth"
This is another statement, to their customers about Hegseth's social post, but perhaps resulting in further escalation because you know the other side doesn't like having their weaknesses pointed out.
Fixed, thanks!
This is an extremely polite "fuck you, make me". It's good to see that they have principles, and I strongly suspect that Anthropic will come out on top here if they stand firm.
If the Trump admin so chooses, they could absolutely obliterate Anthropic in an instant. They don't really care about tricky things like 'legality' or 'the court of law', they could just force everyone to stop interacting with them, raid their offices and steal all their shit.
Perhaps they should've found their spine a year earlier; right now their only hope is that the admin isn't stupid enough to crash the propped-up economy over petty bullshit. But knowing how they behave, well.
> They don't really care about tricky things like 'legality' or 'the court of law', they could just force everyone to stop interacting with them, raid their offices and steal all their shit.
This is criticism that I would use to describe countries like China and Russia, and many other poorer ones. Were the Trump administration to do this, it would be unequivocal evidence that we are dealing with an unlawful insurgent government. I doubt it will happen, but I'm often wrong.
This is all stuff they've already done in the past few months alone. I think it's time for people to take their heads out of the sand and look what's been happening around them.
The Epstein administration has a very poor success record in court, I would expect Anthropic to win on vindictive prosecution or similar.
And the government will win when they strip clearances and ignore said courts.
Anthropic will then find out that you don’t oppose a hyperpower - the United States.
hyperpower? please
The entire world is waking up to the reality of the hypocrisy and unreliability. Look how fast Trump TACO'd when China stopped the flow of rare earths.
I fundamentally do not like the idea of one adult determining what knowledge another adult is entitled to.
It’s the library of Alexandria all over again.
Doesn't NSA have a backdoor to all these companies by default? I could have sworn I read somewhere years ago that the government demands a backdoor to all US companies if they can't get in on their own.
3 parts to this:
1) The US gov generally does have close partnerships with most large-scale, mature tech companies. Sometimes this is just a division dedicated to handling their requests; often it's a special portal or API they can use to "lawfully" grab information for their investigations. These often function somewhat like backdoors. Anthropic is large, but not mature. Additional changes would still have to take place for "backdoor"-style partnerships to take effect.
2) The NSA can pretty much use any computer system they set their eyes on — famously including air-gapped computers that were never connected to the internet, secured in the middle of a mountain (Stuxnet). If they wanted to secretly utilize the Claude API without Anthropic finding out, that is within their capabilities. Google had to encrypt all their internal datacenter traffic to try to prevent the NSA from logging their server-to-server traffic, after mistakenly assuming their internal networks were secure enough not to need it.
3) This isn’t about being “able” to do whatever the administration wants. This is the administration demonstrating the consequences of perceived insubordination to make other companies think twice about ever trying to limit use of corporate technology.
Interesting.
On point 3, are you saying this will dissuade other companies from taking Anthropic's stance? Somehow I actually thought this would set precedent for how to actually stand up to gov. Quite interesting how we see the same situation and come up with totally different conclusions.
They're describing the intent of the administration, not predicting the future impact on other companies. Essentially they're making the point that your original question about the NSA being able to get whatever they want clandestinely isn't actually relevant, because Hegseth/Trump don't care this much about Claude doing X or Y — they were trying to make an example of Anthropic, expecting them to immediately crumble like the rest of Big Tech, as a warning for everyone else to stay in line and keep their mouths shut.
The NSA legally isn't allowed to spy on US citizens directly, because the NSA is a US military organization and the Posse Comitatus Act prohibits the US military from being used as a domestic police force.
It's one of the hidden and forgotten revelations about the Snowden leaks, where he showed that the NSA had a bunch of filters in their top-secret classified systems to filter out communications from US citizens. Those filters exist because of Posse Comitatus.
How does the filter work? Identity first? As in, do they access the data/activity first and stop when they realize the person is a citizen? Otherwise how do they approach it?
A backdoor is a completely different thing when it comes to an AI company, as compared to a social media company. Not really even sure what it would mean when it comes to doing inference on an LLM. Having access to the weights, training data and inference engine?
The model of Claude the DoD is asking for more than likely doesn't even exist in a production ready form. The post-training would have to be completely different for the model the DoD is asking for.
I have worked at a number of software companies that would be "interesting" to get access to, with enough intimate information to know if there was a super-sekret backdoor. If "all US companies" had to comply .. well .. I guess I was really lucky to work for those that somehow fell through the cracks.
I think Anthropic sounds well-intentioned but is blundering this incident in a big way. They really needed to work toward a deal instead of isolating themselves with a "principled stance" that sets up a competitor to swoop in and take the contracts they had.
And which one of their competitors do you imagine would swoop in and take their contracts while admitting to the rest of their customers that they're okay with their models being used for autonomous weapons and surveillance?