Personally I spent the vast majority of my software engineering career without access to AI and feel relatively confident that I could pick up right where I left off.
Humans are pretty adaptable. For example, like many others, I became pretty reliant on Google Maps to get me wherever I wanted to go in the early 2000s. Later, I moved to Taiwan, and my phone didn’t have anywhere near the same capabilities - zero access to any of these smart map systems.
I found my ability to navigate and locate places spatially came back, just like riding a bicycle. I don’t think those skills permanently atrophy; I think they just go a bit dormant.
That being said, I can't speak for newer, more junior engineers, especially the coming generations who will grow up always having had access to LLMs.
I'm running local. It'd take several acts of god to interrupt my workflow.
This is a good idea. What setup do you use and how does the model perform?
AMD 395+ w/128GB. Qwen3-coder-next running from llama.cpp; opencode seems to have the best depth. It can get to around 80k context before pooping out.
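For anyone wanting to try something similar, the serving side is basically a one-liner. This is just a sketch, assuming llama.cpp's `llama-server` binary and a local GGUF quant of the model (the filename here is made up):

```shell
# Hypothetical filename; point -m at whatever GGUF quant you actually have.
# -c sets the context window (~80k tokens, matching the limit mentioned above),
# -ngl 99 offloads all layers to the GPU/iGPU, and the server exposes an
# OpenAI-compatible API on the given port that a client like opencode can use.
llama-server -m qwen3-coder-next.gguf -c 81920 -ngl 99 --port 8080
```

With that running, opencode (or any OpenAI-compatible client) can be pointed at `http://localhost:8080` as its base URL.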