Zohran Mamdani, AI, and the Job Apocalypse
The disaffected laptop class fears the artificial intelligence revolution.
Mark P. Mills
https://www.city-journal.org/article/zohran-mamdani-artificial-intelligence-jobs
Does Zohran Mamdani, an unapologetic socialist, owe his political rise as New York City’s leading mayoral candidate to artificial intelligence?
We’re not referring to whether Mamdani, a TikTok and Instagram virtuoso, used AI to help propel himself to victory in the primaries (he may have). Instead, consider the anxieties that AI is fueling in the demographic that voted for him. Ever since ChatGPT ignited the modern AI era, we’ve seen a stream of headlines and studies predicting that AI will soon perform virtually all knowledge work. Mamdani captured his big majorities among the laptop class of middle- and upper-middle-income citizens, not in working-class neighborhoods. Socialism’s central nostrum—that well-intentioned experts and ruling elites should tame the predations of market and technology disruptions—becomes more appealing during periods of social and economic upheaval.
There is no shortage of reasons for anxiety and unhappiness today, not least the intensity of political and cultural debates over “woke” ideas, “social justice,” the impact of social media, natural disasters blamed on human behavior (the climate-disaster thesis), and ongoing wars. Now, added to this already turbulent backdrop, comes the fear of an AI-driven jobs apocalypse. Few concerns, aside from health issues, cause more stress than the threat or fact of job loss. A May 2025 survey by the American Psychological Association found that three-fourths of employees feel stressed over job insecurity.
These worries are not unfounded. Recently, Salesforce CEO Marc Benioff said that the company would hire 30 percent to 50 percent fewer people because of AI, while Amazon CEO Andy Jassy aroused employee ire for observing that, with AI, “we expect the total number of employees to decrease over the coming years.” Ford CEO Jim Farley asserted that AI “is going to replace literally half of all white-collar workers in the U.S.” A Wall Street Journal headline echoed the point: “CEOs Start Saying the Quiet Part Out Loud: AI Will Wipe Out Jobs.”
No wonder a recent Pew survey found “half of workers or more across age groups say they feel worried about the use of AI in the workplace,” and “the youngest workers are most likely to say they feel overwhelmed: 40 percent of workers ages 18 to 29 say this.” Mamdani dominated in that age cohort.
Anxieties persist despite a low overall U.S. unemployment rate and data showing plenty of job vacancies. But the details of the latest jobs report show that more than half of private employers cut jobs in the knowledge-work sectors that account for most nongovernment employment, while government hiring accounted for most of the new positions. Where the private sector did add jobs, it was mainly in health care, social services, and leisure.
Job seekers, whether recently laid off or looking to change careers, grasp this new reality. One sign of growing anxiety is the collapse of the Great Resignation trend that surged during the Covid era. At the time, record numbers of employees quit their jobs, confident they could easily find new work. That indicator has fallen below its 2019 level as more people cling to their jobs.
The key factors creating hiring headwinds are, as the Wall Street Journal reports, business uncertainty over high interest rates, changes in taxes, and tariffs—all government policies—not technology. Policy uncertainty is standard fare after elections. This time, it’s accompanied by fears that AI will reduce hiring needs, permanently and at unprecedented scale.
Add to that a collateral and recent invention, the Cloud: an entirely new infrastructure that is rapidly democratizing AI. Every infrastructure revolution has far-reaching consequences. And because such revolutions occur only rarely, living through one understandably prompts the feeling that “this time it’s different.” Any period of disruption is, by definition, different from the periods of relative stasis that lie between technological upheavals. As economic historians have documented, every infrastructure revolution—whether railroads, electrification, highways, or the Internet—disrupts markets and jobs and, in turn, reshapes politics.
Every such disruption involves “creative destruction”—the disappearance of many kinds of businesses and jobs, even as new ones emerge. For those who lose out in these transformations, it’s little consolation that others benefit, or that society as a whole becomes more prosperous. As historian Crane Brinton noted in his seminal 1938 book, The Anatomy of Revolution, it’s not the downtrodden who foment revolution (they’re too busy surviving); it’s the middle class, when they find their “expectations dashed” and feel that “they have less than they believe they deserve.”
The question is whether we’re entering a period that echoes earlier technological revolutions—ones we’ve muddled through reasonably well—or whether AI represents something truly unique. Alarmists warn that AI is “disrupting labor at a speed and scale unseen in prior industrial revolutions.” This claim is anchored in three propositions: that AI’s productivity gains (by definition, less labor for equal output) are unprecedented, that AI’s reach is also unprecedented and will impact everyone, and that the technology is being deployed at an “unprecedented pace.” In short, everything about it is unprecedented.
First, let’s set aside the notion that the invention of the AI chip itself proves an “accelerating pace of innovation.” The media and commentariat often mistake a commercial “inflection point” (such as the November 2022 release of ChatGPT) for evidence of an overnight revolution. But history shows otherwise: after a breakthrough invention, it typically takes two decades of behind-the-scenes engineering to achieve commercial viability, followed by another two decades before market applications begin to scale.
Consider: it took 20 years from the invention of the automobile to the first practical product—the 1908 Model T—and another 20 before car sales reached their inflection point. Similarly, two decades passed between the invention of the lithium battery and its commercialization, and nearly 20 more before the arrival of the first viable electric car (Tesla). It was 20 years from first nuclear fission to the first commercial reactor in 1958, and another 20 before fission provided 5 percent of global electricity. Likewise, 20 years after the first electronic computer, commercial machines began appearing in the mid-1950s; two decades later came the desktop computing boom and the invention of the Internet, followed by another 20 before the emergence of the first Cloud data centers.
Back to our future: the learning algorithm at the heart of modern machine learning—backpropagation for training neural networks—was introduced in a seminal 1986 paper coauthored by Geoffrey Hinton, who went on to share the 2024 Nobel Prize in Physics. It took two decades before silicon hardware became powerful enough to perform the massively parallel computations that Hinton (and fellow laureate John Hopfield) envisioned. True to pattern, ChatGPT launched roughly another two decades after that.

Now, the scale of the Cloud/AI infrastructure buildout is indeed remarkable. This year alone, the top seven tech companies collectively plan to spend around $300 billion on AI infrastructure, a figure expected to triple within six years. That’s Department of Defense-level spending, rare but not without precedent in private markets. The telecom boom from 1990 to 2000, which launched the World Wide Web, saw a similar tripling in investment, peaking at roughly $250 billion (in 2025 dollars). Comparable bursts of private investment also occurred in the 1920s with communications and during the early peak-growth years of the electric grid.
It’s possible that AI infrastructure spending could yet rival history’s record, the construction of the continental railroad system, when spending tripled from 1844 to 1854, peaking at a 4 percent share of the nation’s GDP. To match that as a share of GDP, AI spending by 2033 would have to reach nearly $2 trillion a year, a level in the range of some forecasts.
Of course, the disruption brought by the railroad era marked a tectonic economic and social shift. As every schoolchild knows, agriculture had long dominated the nation’s economy and workforce. Railroad technology truly “changed everything.” So did the electric motor, the telegraph, the automobile . . . and, in time, so will the AI-infused Cloud.
But what about the central claim that AI promises unmatched productivity and an unprecedented ability to replace human labor? So far, the productivity gains attributed to AI, as reflected in the layoff levels cited by CEOs, remain modest by historical standards. Few AI applications have yet achieved the kinds of transformative productivity leaps seen in past technological revolutions, which repeatedly delivered astonishing economic gains.
The Middle Ages saw a flourishing of machine inventions during what historian Jean Gimpel called an “age of reason and mathematics,” chronicled in his book The Medieval Machine. Europe’s explosive wealth expansion at the time was driven by machines that cut the labor hours needed for a given task by factors of two to ten—a doubling to tenfold rise in productivity, far surpassing CEO Jim Farley’s projection of a mere 50 percent. Consider also the advent of the “automated” loom around 1812, which dramatically reduced labor inputs. It triggered the infamous Luddite riots, yes, but also made clothing more affordable in an era when textiles were among Europe’s largest industries.
Return to the railroad era, which delivered dramatic productivity gains not just in speed (three times faster than horse-and-wagon) but in cost, with a 25-fold drop in the ton-mile cost of moving people or goods. The early telegraph, likewise, delivered a productivity gain not only in speed—nearly instantaneous transmission of information—but also in cost, at roughly one-tenth that of the Pony Express. Later, between 1910 and 1930, technology cut the labor hours needed to manufacture a car by nearly a factor of four, and those needed to produce a ton of steel by a factor of seven.
The 1960s saw another episode of productivity progress, as automation boosted car production while lowering employment. President John F. Kennedy created an Office of Automation and Manpower to tackle what he called “the major domestic challenge of the Sixties: to maintain full employment at a time when automation, of course, is replacing men.” President Lyndon Johnson followed with a blue-ribbon National Commission on Technology, Automation, and Economic Progress, which recommended “a guaranteed minimum income,” an idea repackaged today as a universal basic income (UBI).
It was precisely these extraordinary gains in technology-driven productivity that powered the United States’ unparalleled expansion of wealth in the twentieth century. And despite all the astounding advances in labor-saving technologies, mass unemployment did not follow.
Of course, one glaring difference separates then from now: previous waves of disruption primarily affected manual labor, which was then the dominant form of employment, just as knowledge work is today.
Another key difference: today’s disrupted class holds disproportionate sway over media and policymaking. Their influence is evident in the growing calls for more government regulation, the push to expand safety nets like the UBI, and the embrace of policies designed to “nudge” (the new euphemism for mandate) markets in the name of protecting the public.
Which brings us back to candidate Mamdani and his fellow travelers, who see socialist solutions as the answer to market disruptions.
Few doubt that the AI revolution holds enormous economic potential. (Indeed, I’ve written an entire book about that, as have others recently.) Private companies don’t invest at this scale—especially without subsidies or mandates—unless real market demand exists for the products and services being developed. Even if the current level of AI spending turns out to be a “bubble,” the expansion remains market-driven. And if AI lives up to its productivity promise, the resulting boost to economic growth will be the only effective path to reducing the federal deficit.
Still, disruption is inevitable. How we navigate this upheaval will be one of the defining challenges of our time. At its core, the debate is about how far government should go in directing and controlling private enterprise, or whether we will continue to rely on market forces. Will we veer toward European-style socialism, or reaffirm our commitment to American capitalism?
The rise of the PC- and smartphone-based Internet offers a telling precedent. Its expansion unfolded with relatively little government direction, unleashing a wave of innovation and a blizzard of startups, most of them American. Out of that free-market efflorescence of small businesses emerged today’s tech giants. The combined market value of the so-called Magnificent Seven—Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla, all of them once small U.S. firms—now exceeds that of the entire European capital market, despite Europe’s many capable tech companies.
No planner could have predicted which among all the startups would become titans, or how the infrastructure itself would evolve. Yet today’s industrial-policy advocates believe they can and should steer this next revolution from above. We should hope that the political impulses of the disaffected laptop class don’t dictate the course of AI. The best way to nurture the next wave of innovation is not through central planning, but through the messy, decentralized dynamism of the market.