The US at only 38%... I really didn't expect that. Japanese news always reports that the US is vigorously pushing AI forward, but it turns out it's just 38%. It seems America still has a long way to go.
I am Korean. First of all, I often receive freelance requests from China (because I am cheaper than Chinese developers), and in China, starting with the 996 work culture, they exploit human labor to an extreme degree.
To be clear right off the bat, the anti-AI protesters mentioned in the article (researchers, engineers, artists, etc.) are essentially the "establishment" who have enjoyed the greatest benefits under the current system. They instinctively sense that AI will dismantle their high-value-added jobs, and they are causing system bottlenecks in the most primal ways—through NIMBY opposition to data center construction and physical violence (like the terror attack on Sam Altman's home).
On the other hand, countries like Singapore and Indonesia have a relatively smaller legacy ecosystem to protect. For them, AI is a tool for a *'quantum leap'* that allows them to skip traditional stages of industrial development. The video of dancing robots during the Chinese Spring Festival is not simple optimism; it is the manifestation of a ruthless pragmatism that says, "We will thoroughly exploit technology as a tool."
Why has this pragmatism developed? It is likely a complex mix of reasons. Fundamentally, however, it is because Asian governments exercise strong central control. Starting with South Korea, China, and Japan, Asian governments are inherently authoritarian.
This means it is easy for them to control the public's anxiety about AI. Herein lies the problem. What is the greatest function of the media? It is 'agenda setting'. However, in Korea, China, and Japan, the level of criticism against the government is weaker than one might think, and social agendas are not formed around articles critical of the government. The media is failing to play its proper role.
People know the narrative that "if you don't use AI, you will fall behind." But they also see an opportunity: the chance to close the knowledge gap with the English-speaking world. The issue is the underlying belief that "I will be the beneficiary of that Golden Ticket." In reality, of course, the vast majority will never receive that Golden Ticket.
As for the media, since most traditional outlets have hit their growth ceilings, they sell the anxiety of "falling behind without AI" for their own profit. I consider this an 'Anxiety Business'. Most legacy media can no longer grow, and in a society where they must compete with "new media" like YouTube, they ultimately needed a new card to play—and I believe that card is the AI business.
In truth, modern society is already fundamentally abundant in productivity. So what is the problem? Most sectors of business have become bloated, and capital needs new places to invest. Businesses that have grown sufficiently large in a specific sector effectively have a growth rate identical to the industry's growth rate, yet the growth rate of the internet is slowing down. The core issue was that there was a limit to how much labor costs could be cut to drive profit.
In modern industry, labor accounts for roughly 20-25% of the price of manufactured goods. Is the world currently unable to overproduce? Not at all. We are in a state where overproduction is intentionally restricted for the sake of fuel costs, real estate, and corporate profits. In other words, it is not an era where we cannot overproduce, but an era where we do not. With technology advancing like this, I have no idea how people are supposed to consume or how companies will actually increase their revenues. The current wave of AI replacement feels exactly like the shift to self-serve kiosks.
They eliminated the labor cost of store clerks, but the prices remained the same. Even when you actually use an AI customer service agent, you end up getting completely pissed off during the call, only to finally face a human agent in a state of rage. If it had just been a human agent from the start, there would have been no reason to get angry... Most of these inbound AI calls are failures. Companies are just intoxicated by the AI hype and adopting it, but overall, this AI is mostly of lower quality than humans or inherently flawed. Ultimately, the vast majority of AI services only amplify human negative emotions, leaving humans to clean up the mess. No matter how I look at it, this seems to be the 'Doorman Fallacy'—the mistaken belief that a subordinate's job is easy and easily replaceable.
I digressed a bit, but the fundamental reason Asia is optimistic about AI is likely because, within the society's overarching Golden Ticket syndrome, everyone believes they will be the lucky exception to secure one.
It's simply that no one still in the job market in China has ever witnessed a recession or high unemployment. They have had non-stop, rapid growth since 1978. They have no mental concept of it, so they live in blissful ignorance.
The PRC axed 30-40 million state "iron rice bowl" jobs (about a third of them) in the late 1990s and early 2000s through SOE reforms ahead of WTO accession, creating the northeastern rust belts and so on. That was roughly 20% of the urban workforce, and those workers lost income, housing, healthcare, state schooling, and pensions. It's comparable to the US Great Depression era, or 30 years of USSR shock therapy compressed into about 5 years. No modern Western country has gone through anything remotely similar. Modernization and tech eventually lifted all boats, so in that sense it's either optimism or acceptance: tech is inevitable, and it's better to embrace it than be left behind, which historically hasn't worked out well (century of humiliation and all that).
Look at the countries listed in the article: it's not just China, and I don't agree with your assessment that they are optimistic because they've never seen a recession or high unemployment. Many of those countries already have high unemployment rates.
I suspect the AI optimism has more to do with cultural differences than with differences in how people perceive the impact on the job market. When you work extensively with APAC companies, you quickly come to understand the cultural differences and how you have to deal with them. I believe this is why they are so open to AI.
I find AI to be an incredibly powerful tool in the right hands, but without keeping a close eye on it and guarding what it does, it often does very stupid things. It's not that it lacks the knowledge to do things right; it's that it isn't supplied with the correct context, or it takes things too literally. US engineers and programmers are expected to be more well-rounded: they are expected to push back on bad ideas, to individually evaluate whether something is the right thing to do, and to raise issues when they see them.
Culturally, in the APAC region this is not the norm. A few people will do that, but they are the minority. Most people will simply take the spec and shovel it through in an attempt to deliver, even if they know it's wrong, even if they know there are problems. They will literally comply with the spec and deliver you a steaming turd if you didn't describe it perfectly and spell out all of the use cases and all of the tests, because culturally they don't view it as their role to raise these red flags. So a lot of them are quite intelligent, but they act like AI. And now they've gotten a tool that basically does what they do; it empowers them to do more of the same.
For me, AI does allow me to do more, but it feels more like the AI is training me than like I have an AI that understands what should be done. It doesn't feel very intelligent to me, but it does feel very knowledgeable about a lot of subjects.