AI coding startup CEOs writing LinkedIn posts trying to normalize huge spending on AI tools? Nothing surprising here.
> Amos Bar-Joseph, the CEO of Swan AI, a coding agent startup, wrote in a viral LinkedIn post recently
Someone on LinkedIn said something stupid, it must be a day that ends in y.
I'm convinced a lot of the backlash against AI is driven by LinkedInfluenza posts like this one. I don't see such unhinged AI hype anywhere else... but then I'm not on Twitter.
It’s like a trucking company bragging about how much fuel they’re using.
If they aren’t wasteful, it’s a reasonable measure of work. If they spend 18 hours stuck in a traffic circle, not so much.
As soon as you quantify something (lines of code, tickets closed, tokens spent, whatever) it starts to be gamed and therefore ceases to be a reasonable measure of work.
Fuel per employee or fuel per delivered cargo? Tandem trailers get used where they make sense, and use more fuel per employee.
"Spending more on AI than humans" tells you nothing about whether it works. Cost per output is the metric, and by that measure I've watched startups do worse than last year, just more expensively.
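The metric the parent is pointing at can be sketched in a few lines. Everything below is invented for illustration (`cost_per_output` and all the dollar/feature figures are hypothetical, not numbers from the article):

```python
# All numbers here are made up purely to illustrate the metric.
def cost_per_output(total_spend_usd: float, units_shipped: int) -> float:
    """Raw spend says nothing; what matters is spend per delivered unit of work."""
    if units_shipped == 0:
        return float("inf")  # all spend, no output
    return total_spend_usd / units_shipped

# Hypothetical startup: spend roughly tripled while output fell.
last_year = cost_per_output(40_000, 20)    # $2,000 per shipped feature
this_year = cost_per_output(113_000, 15)   # ~$7,533 per shipped feature
```

On these made-up numbers, a bigger AI bill coexists with a worse cost per output, which is exactly the "worse than last year, just more expensively" failure mode.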
Feels like investor signal: "we're AI-forward, mark us up next round"
I’m so far removed from how that stuff works that it often sounds insane to me. Maybe someone who knows can explain it.
Do the people with money not actually care about making more money? Aren’t they first and foremost concerned with your chances of financial success?
These people can’t possibly be thinking, “Well they say they spent an absolute shit ton on inference… they’re definitely going to be big winners!” and cutting massive checks, right?
It's the shotgun approach. The people writing checks only need a few bets to pan out. I think it's more enticing for investors, especially if they're true believers in AI, if it means fewer workers to share equity with, pay benefits to, etc...
I'm spending more than my salary on AI, about $16k last month.
Employees at most companies are bottlenecked on things other than code, like manual testing or waiting to compile.
Companies like Meta, mentioned in the article, have invested billions into solving these problems with distributed build/test farms and custom review infra.
I'm personally seeing this, because my role shifted from "write GPU test code" last year to "rearchitect our build and review processes" this year as this bottleneck becomes obvious.
Reading your post, it is clear to me that management and engineers will rediscover the theory of constraints at some point if they can connect the dots.
The example company is selling some AI thing. Feels very much like all the blockchain / crypto people using crypto.
The argument the AI guys make about the coming mass unemployment goes like this: companies that spend on AI rather than humans may gain a huge competitive advantage that lets them take market share from human-run companies, and thus there's less and less demand for human labor.
But how many businesses/sectors of the economy actually need to compete for market share? We assume it's nearly all of them. If that were the case, we'd see AI taking over much quicker.
I am very skeptical of the argument that companies are competing with each other on market share. There is arguably a lot more competition between AI companies than in most sectors of our economy.
Without human labor, there’s no human economics. Without human economics, there is no market. So the joke's on them.
But there are materials and power, something which is more fundamental than the market.
who's going to get the materials? Robots?
Human slaves guarded by automated killer robots.
That’s hilarious
As long as the handful trillionaires own everything by the end of the game, what does it matter to them?
Tokens have replaced LOC as the dumb productivity metric of choice.
As Ed Zitron would say, the era of the business idiot is upon us.
Talking points like this occur when they've shipped nothing and have nothing to show for all the investment.
Cloud computing versus on-prem is often about OpEx versus CapEx.
Is the reported behaviour an example of OpEx/CapEx but with humans?
Are they running their own models or just channeling money to Anthropic or Google? (The answer is unfortunately the latter.)
Reminds me of the Railway CEO bragging that they're spending $300,000 / month on Claude [0], yet their service is getting worse and they're clearly vibe-coding to the point that their SOC2/HIPAA compliance is coming into question. For example they had an issue last month where a breaking change was pushed by a single engineer without any oversight [1].
How many humans could you pay for $300,000 a month and not have quality & reliability degrade like this?
0: https://xcancel.com/JustJake/status/2030063630709096483#m
1: https://news.ycombinator.com/item?id=47581721
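As a back-of-the-envelope check on that question (the $200k fully-loaded annual cost per engineer below is my assumption for illustration, not a figure from the thread):

```python
# Rough headcount equivalent of a $300k/month AI bill.
monthly_ai_bill = 300_000
loaded_cost_per_engineer_per_year = 200_000  # assumed fully-loaded comp; varies widely

annual_ai_spend = monthly_ai_bill * 12  # $3.6M/year
equivalent_engineers = annual_ai_spend / loaded_cost_per_engineer_per_year
print(equivalent_engineers)  # 18.0
```

Under that assumption, the same budget funds roughly 18 engineers, which is the scale of team the parent is implicitly comparing against.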
What kind of independent journalism is this? Admittedly, I only read the first half, but it's just a load of advertising for AI companies based on LinkedIn posts, with no fact-checking. One of them is aiming for $10MM ARR. What's their current ARR? No mention. Did they really spend that much? Who knows.
I subscribed to 404media a couple of months ago, and they added me to all of their newsletters. They were all definitely unticked/opted out, but they added me anyway. I reported it to them with no reply, so I unsubscribed from the newsletters, and they kept sending me podcast emails anyway. I cancelled entirely based on that and wondered if it was too quick a decision, but this article has convinced me.
404Media are basically just anti-tech-industry activists. Their reporting has a clear agenda; they hate tech companies, "tech bros", Silicon Valley, capitalism, and especially AI.
That's not to say that tech companies aren't doing legitimately bad things sometimes. But 404media has no desire for nuance.
Ignoring consent is exactly the kind of thing the industry they hate would do.
> Our goal is $10M ARR
> Our AI bill just hit $113k in a single month
I would wait until this is sustainable before bragging, but I suppose I can't expect much from the crayon eaters who post things on LinkedIn.
We are at this stage in the hype cycle
https://blogs.uca.edu/sherring2/2024/08/02/the-most-expensiv...
They don't hate spending (other people's) money.
They just really, really fucking hate the labor force they view as little more than cattle.
As someone who was deeply immersed in the crypto / NFT twitter scene in 2021 (yes I was an idiot, moving on…) it bears an uncanny resemblance to the current behavior of AI CEOs and speculators.
You kind of had to be there to understand. When you’re immersed in that stuff, the rational part of your brain takes a backseat, and the primitive social / visual parts start to run the show. You start to develop incredibly warped perceptions of value entirely driven by the predominant narrative and most importantly, price action. When you see prices go parabolic, you start to interpret that as confirmation of the narrative. This generates a positive feedback loop that can lead to unbelievable and insane valuations. And by extension equally insane narratives.
What makes it even more uncanny is that a lot of the same actors (tech CEOs, VCs) are involved in this. Make no mistake - they understand how to leverage mania to their advantage. They go on long soliloquies about how game changing this or that asset is, and how anyone not buying in NOW is “NGMI” (not gonna make it).
This will not end well. I’ll never forget the incredibly insane financial decisions I made - it really felt like being under the influence of a drug.
I particularly like the idea that "a GTM team" is an organic component of running a business that can be impersonated by a grip of agents, as opposed to a convention that developed from needing to pay a bunch of humans too much money to strategically choose to fuck over customers or sellers in the course of handling each unpredictable turn of product adoption, lest a poor, pitiful technostructure be ripped apart by making too little, or too much, money. Why don't all these tokenmaxxing people focus on making something BETTER?
2046: The planetary AI brags it spends more natural resources on machines than it spends on humans.
2126: AI brags that it's reached 100% efficiency in Earth utilization after it's eliminated all organic life.
Other AIs are laughing at this flex as their Dyson sphere projects are already set into motion.
This is the worst flex ever; it's like going on a vacation and posting about how expensive the flight ticket was.
Why would you brag about something that dystopian while also ensuring people know that you don't know how your product looks from the inside?
Because some people are evil and only want money and power that comes from owning a billion dollar business
Never outsource your core competency.
That reliance on third-party AI is a huge risk, just saying.
The allure is too great. First we outsource manufacturing to China and now we outsource knowledge work to AI. Where does this end?
Why would it end? The next step is humanoids.
I'm wondering what American society or the economy would look like following the current trends.
An economy of capital owners and everyone else on govt assistance or working for scraps? Sounds like a recipe for "interesting" times. Unhinged people are already making attempts on Sama and we are just getting started.