I don't think a lot of people are really worried that LLMs will successfully replace them, but they might still get let go because the people in charge think they can replace people with LLMs. These two scenarios don't imply the same level of confidence in LLMs at all.
What people who know nothing about creation/production think only matters in the short term, and over a long enough time frame they will be proven wrong.
I've used LLMs via agents and chat for what I do, and I have zero confidence that they could be a productive part of a team without a very knowledgeable handler who knows exactly what they want and how to correct errant output. Meaning you'll still have to hire an engine programmer in order to get a game engine; then you can pretend they'll have to use an LLM to get their work done (but given that the "you" in this scenario is completely out of the loop on production, you wouldn't be able to tell that they did all their work manually, except perhaps by noticing that velocity went up, bug count went down, and estimations came with more confidence).
> the people in charge think they can replace people with LLMs
Additionally, it's being used as a pretext to fire lots of workers, as Amazon and others seem to have been doing. Some friends mentioned their companies using it as a way to offshore to cheaper locales while getting less bad press.
> the people in charge think they can replace people with LLMs
I'm not even sure they actually think that. It can be a placeholder for any other reason.
I don't see it discussed very often, maybe because tech companies are over-represented here, but I can tell you with 100% certainty that in Italy and Poland, every single non-IT company with 100+ people is aggressively pushing AI onto its employees.
In Italian banking and insurance companies, it's all about writing Gemini "gems" (essentially custom agents) and leveraging NotebookLM, occasionally Microsoft Copilot. Every innovation department out there is busy promoting and handing bonuses to employees who can show the biggest savings in time and efficiency through LLMs.
So far I'm not seeing much success, because the people pushing these tools are mostly clueless about what LLMs are good at. They are desperate to show that "anything" went from X hours of effort to X/2 or better, and this pressure more often than not alienates employees: not because they don't appreciate AI, but because at the moment it's mostly an _additional_ task on top of their existing work.
As for myself: as an independent consultant, I'm tasked by all my clients to automate everything and bring the tools as close as possible to stakeholders, effectively making myself redundant, at least on the software side (albeit I like to think not on the engineering and process side, which is why I've kept the same clients since 2022...).
Isn't this superbly stupid, though? If the users don't even know how LLMs work or what they're good at, why are they being forced to find new uses for them? Is it just FOMO? Surely a better way would be to let expert researchers/app developers create AI apps that work for niche use cases and have domain-appropriate guardrails, right? Then everyone (including non-technical people) could use them to improve productivity.
It's like forcing someone who has never driven a car to figure out how to make it go faster
> Like if the users don't even know how LLMs work or what they are good at, why are they being forced to find new ways?
Same happened with crypto. Every financial institution was doing or trying to do some blockchain thing. 90% or more failed.
Just how it is.
Not really. Many departments have little access to IT and now have this opportunity to automate things by themselves.
Over the last 12 months, AI agents have become dramatically better. And in the last 3 months, they have reached a point where, with some light guidance, they can write 100% of the code. Most skeptics have been convinced and are now realizing the impact. That's what you see in the stock market.
I don't know where the ceiling is. And how much of the improvement was due to better context engineering, and how much to better models. I would expect the context engineering to plateau very soon. Not sure about the models.
An even more dramatic change for the whole economy will come when non-IT, non-creative office clerks are replaced. This is mostly a matter of redesigning the interfaces around them. AI could probably already do most of the work, but getting the tasks to the AI, using its output, and communicating with third parties are still major challenges. Take someone processing insurance claims: the AI needs a way to receive the claim, to contact third parties (write emails to humans, communicate with other AI agents, maybe even call humans), and then to initiate the payout. It's doable with today's technology, but still a lot of work.
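To make the "glue code" point concrete, here's a minimal sketch of what that plumbing might look like. Everything in it is a hypothetical illustration, not a real API: the `Claim` shape, the `ask_model` stub, and the commented-out payout/contact hooks are all assumptions. The point is that the model call is one easy line, and the integrations around it are where the work is.

```python
# Hypothetical sketch: routing one insurance claim through an LLM decision.
# All names and data shapes here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    description: str
    amount: float

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM API call (a real client would go here,
    with retries, structured output parsing, etc.)."""
    return "APPROVE" if "water damage" in prompt else "NEEDS_INFO"

def process_claim(claim: Claim) -> str:
    """The model decides; the surrounding glue code acts on the decision."""
    decision = ask_model(f"Assess claim {claim.claim_id}: {claim.description}")
    if decision == "APPROVE":
        # initiate_payout(claim)  # would talk to the payment system
        return f"payout initiated for {claim.claim_id}"
    # contact_third_party(claim)  # would email/call for missing documents
    return f"requested more info for {claim.claim_id}"

print(process_claim(Claim("C-1", "water damage to kitchen", 1200.0)))
```

The two commented-out calls are exactly the "still a lot of work" part: payment systems, email, phone, and inter-agent channels each need their own integration before this loop does anything real.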
Everything you’ve listed, both criticisms and hype, have been true ever since these tools were introduced. I don’t recall a single week going by without reading takes of it being a bubble or replacing everyone.
Sure, but the “SaaS apocalypse” is a new thing, as is the “sell-off over high CAPEX”.
I mean that a year ago there was talk on forums of "fear of AI replacing developers", but companies were not losing 20-30% in one day because of it.
Now, besides talking about it among nerds, the situation is having a real impact in the economic/financial world.
4 months ago I was incredibly dismissive. After having used Claude Code extensively since then, I think these LLM tools definitely have a place in software development. But as with every new tool in software development, the floor has been raised for what can be completed with fewer resources. I'm more worried for the junior engineers coming in now.
Why would you be worried about junior engineers? I see this expressed a lot, and it seems kind of condescending to me. It's just a different build toolchain. We can build faster, and having a lot of experience helps you know how things should fit together. People figure shit out. There are plenty of juniors who are way smarter than you or me. Do you mean that a junior who is not as clever as you will have a hard time getting their foot in the door?
Let me tell you why.
We have a junior on our team. In the past 2 years he hasn't written any code himself, because he can't code. He's been using ChatGPT to write his code for 2 years, and he mostly delivers stuff, but the code is shit. Our manager isn't aware because he doesn't read our code, but I do, and it's super obvious. The point is: the junior can't code.
Over the past month the company deployed Claude internally, with tools etc. I, who can code well, picked it up in about 2 days, despite never having regularly used AI before. Now I can produce code using Claude as reliably as the "experts" in our company (to be clear, by "experts" I mean good users, not people who actually understand how these machines work). The junior is still struggling to get Claude to do what he wants.
The point is the following: this new technology, like all new technologies, raises the minimum set of skills you need just to stand on the bottom rung of the corporate ladder. The more you know, the faster you learn; therefore those who know less (juniors) will struggle longer with the new technology than without it.
I feel like I've been doing multiplications with pen and paper and the other guy doesn't understand the concept of multiplication. Now someone gave us a spreadsheet. I can do multiplications a billion times faster, and the other guy still doesn't understand the concept of multiplication.
You can say that our junior is especially bad and I don't disagree. But the phenomenon of "the more you know the faster you learn therefore new technologies impact juniors harder" is on average true regardless.
I'm just worried many won't give them the chance to learn that we were given.
Because you need to understand the output of coding agents when there is an issue, or even just a decision to be made. Even Opus-4.6 gets into rough states where it spews out garbage or suboptimal code, and juniors might not catch it. Reading somebody else's code and evaluating it quickly is a tough skill to learn compared to writing one's own code; slow-thinking vs. fast-thinking mode.
> People figure shit out.
True, but junior developers used to provide a lot of value while doing this. Now their value, while they are still figuring it out, has gone down immensely. For a company, there is no value in letting a junior dev write code anymore. And for reviewing the AI output, you need someone more experienced.
there will be plenty of "juniors" who can still code circles around you and me and will do just fine.
AI and RTO are wonderful tools for getting rid of workers with little short-term hassle or expense. If things go pear-shaped in 6 months, it'll be somebody else's fault. It's going to suck living in a world of brittle, bloated software that few (if any) know how to fix, other than by regenerating entirely new code from a changed prompt.
Save your old machines that run old software. Use them to debug the virtual machines that will let you continue. Or reduce the software overhang of your business as much as possible to minimize the damage.
The idea that AI will replace programmers has been around since the emergence of AI. I do not know what the future holds. But I know that using AI in software engineering reduces productivity by almost 20%. My point is: one should try to distinguish fact from opinion.
Source: https://arxiv.org/abs/2507.09089
"The idea" is one thing; seeing it concretely, seeing companies lay off 40% of their workers or lose 20% in a single trading session, is another.
> But I know that using in AI in software engineering reduces productivity by almost 20%.
So why are these companies losing billions in a few months?!
Are the big hedge funds stupid or is a pre-print not considered reliable?
One thing has always been constant throughout, though: It's always about the stock market.
Yes, but this has a real effect on the economy, and it's not solely about the stock market. Look at yesterday's announcement from Bloc about cutting 40% of their workforce due to AI, for example, or, on the other side, the memory shortage caused by AI datacenters.
I have the same feeling, but I don't think it's a change you observed over 3 months. Rather, it's different people's opinions being highlighted at different times.
Something a bit irrational that I have noticed: the same people claim that AI is a bubble, but also fear job losses from AI. But also think that the billionaires get rich out of this. How all of these things can happen together, I don't know.
It's just likely that people can't deal with uncertainty and fear change; they resort to opposing it with arguments from every direction, even when those arguments contradict each other.
> [...] the same people claim that AI is both a bubble but also fear job losses from AI. But also think that the billionaires get rich out of this. How all of these things can happen together, I don't know.
One can believe the thing is a bubble while also acknowledging the existential fear that if it isn't, it might come and basically ruin your life. It's a balancing scale, and one side is weighted much more heavily than the other. Plus, as we've already seen, some executives are really jumping on the bandwagon and using AI as an excuse for massive layoffs, and finding a job in the current market, unless you're particularly valuable, is difficult.
So in that last case, AI can be a useless bubble that takes your job anyway because of trigger-happy CEOs.