
Foundational model provider manifesto:

‘While there’s value in safety, we value the Pentagon’s dollars more’




It turns out the biggest threat to AI safety is capitalism, who would have thought

Certainly not the prior century-and-a-half's worth of books and films.

And I still run into naysayers claiming that we cannot extract valuable opinions or warnings from fiction because "they're fictional". Fiction comes from ideas; it is not meant to model reality but to approximate it, in order to make a point either explicitly or implicitly.

Just because these works aren't 1:1 models of reality, or predictions, doesn't mean the ideas they communicate are worthless.


Anthropic is a public benefit corp

And OpenAI was founded as a non profit, back in the time it was open

Exactly. Neither firm would have been (successfully) sued by its shareholders for failing to make significant profits, so let's not blame capitalism for what is really the individual greed driving these decisions. In fact, OpenAI is now going to trial because it gave up its non-profit status, reneging on the commitments it made to its backers (fraud, by another name).

I don’t get it. Even the Soviet Union used money. Simply paying for stuff isn’t necessarily capitalism? Or are you suggesting Anthropic should be state-owned?

No, capitalism is prioritising profit above all other values, as we see happening here.

That's not capitalism. It's extreme capitalism or some other specific philosophy

Because otherwise there is no word for regular capitalism that's based on money and property but with other values too.


Using money as a medium to facilitate the exchange of goods and services is not capitalism. Abandoning one of your core principles in the pursuit of money (or, more charitably, because not doing so means your competitors will make more money and overtake you in the marketplace) is an outgrowth of capitalism.

In the Soviet Union the reasons might have been "to beat the Capitalists", "for the pride of our country" or "Stalin asked us to and saying no means we get sent to Siberia". Though a variant of the last one may well have happened here, and the justification we read is just the one less damaging to everyone involved


>Though a variant of the last one may well have happened here, and the justification we read is just the one less damaging to everyone involved

Hegseth was planning on obtaining the model via the Defense Production Act, or on killing Anthropic via a supply-chain-risk classification that would prevent any other company working with the Pentagon from working with Anthropic. So while it wasn't Siberia, it was about as close as the US can get without declaring Claude a terrorist. Which I'm sure is on the table regardless.


And you know Claude will be on the hook for any bad "decision" the military makes. So this will end poorly for them, anyway.

So this isn’t really capitalism then. Crony capitalism is closer to a planned economy than it is to a free market.

This. Anthropic didn't really have a choice, at that point, short of killing its company and closing its doors ahead of time.

"Pentagon officials said the Defense Department is planning to keep using Anthropic's tools, regardless of the company's wishes."

NPR - Hegseth threatens to blacklist Anthropic over 'woke AI' concerns

Clearly the threat to go to Grok was just bluster, which speaks volumes about what the admin thinks of Grok vs Claude.


Nick Land has basically been saying this since the 90s, if you can look past all the rhetoric.

Exactly. He recently said the following in an interview:

"AI safety and anti-capitalism [...] are at least strongly analogous, if not exactly the same thing." [0]

[0] Vincent Lê, "A Conversation with Nick Land (Part 2)", Architechtonics Substack, 2026. Retrieved from vincentl3.substack.com/p/a-conversation-with-nick-land-part-a4f



