For a short month, February’s final week packed in events guaranteed to stay with us for a lifetime. It wasn’t just the United States and Israel going to war with Iran; hidden behind the fury unfolding across the Middle East was a meaningful week for AI, with effects that will be felt by all of humanity. Consider a sampling:
Besides the usual warfighting inventory—cruise missiles, warships, B-2 stealth bombers, high-mobility artillery rocket systems, drones—U.S. Central Command reportedly deployed Claude in the Iran operations for intelligence assessments, target identification, and battle simulations. AI’s use offers clues to how the U.S.-Israeli forces pulled off a stunningly swift decapitation of Iran’s leadership, even though it offered no guarantee of strike accuracy. A collateral hit on the Shajarah Tayyebeh girls’ elementary school in southern Iran killed 175 people, most of them likely children, according to Iranian authorities.
Meanwhile, Anthropic, the maker of Claude, was expelled by the Pentagon for refusing to allow its models to be used to power autonomous weapons and mass surveillance. It was replaced by its rival, OpenAI, which promises guardrails but offers no evidence they exist.
Separately, on Feb. 26, one of the most closely followed tech luminaries, Jack Dorsey, announced layoffs of 40 percent of the workforce at his company, Block, saying that AI could take over the work; the stock market cheered. Just four days before that announcement, a piece of speculative financial fiction from Citrini Research painted a scenario of widespread AI-induced job displacement in 2028, bringing prosperity for a few alongside broad societal decline. The Dow reacted by dropping over 800 points. That same week, software-sector stocks rebounded after shedding $1 trillion in value earlier in February, when investors feared that AI-generated code could do those companies’ work.
These developments came on the heels of a viral post warning that AI is advancing faster than most users realize. Yet February also closed with news of data center construction cancellations in the United States, due in part to widespread community resistance. And given the new regional fragility, pledges by Saudi Arabia, Qatar, and the United Arab Emirates to establish an AI infrastructure hub are now in question.
The war on Iran has had other domino effects, too. The semiconductors that the AI industry relies on need critical supplies of helium and sulfur that pass through the Strait of Hormuz, which is now effectively unnavigable.
These are not separate stories. They are facets of a single phenomenon: a convergence of technological, economic, geopolitical, and institutional risks that has ratcheted up recently, suggesting that we are lurching toward an “AI doomsday”—that is, a situation in which, despite its many benefits, the technology makes society significantly worse overall. This is not the work of a single force, such as existential threats, job devastation, or the autonomous deployment of weapons. Rather, it is a system of connected AI-related forces producing problems that go unchecked, paired with inadequate institutional coordination and a dearth of leadership with the imagination to get ahead of those problems before they escalate.
For over 35 years, I have studied and worked on the impact of AI and digital technologies: through high-tech research and development; by advising tech industry leaders; and by leading Digital Planet, a research center on the global digital economy, for 15 years. Let me also declare that I’ve been an AI enthusiast since 1991, through the second “AI winter,” when the technology was far from cool.
My own experience and research over the years suggest that the technology can be transformational across a breadth of areas. As a researcher, I experience the power of AI tools and appreciate their drawbacks, along with potential solutions.
Yet I am now placing a high probability on an AI doomsday. Let me count the seven horsemen of a possible AI apocalypse.
Jobs Displacement
Our new Digital Planet study, “Will Wired Belts Become the New Rust Belts?,” analyzes AI’s impact on 784 occupations in every major U.S. industry and the economic effects on locations across the country. We find that every 1 percentage point of job automation is accompanied by a 0.75 percentage point job loss. Workers whose tasks are most enhanced by AI are also the most likely to be replaced by it. In less than five years, the United States could lose the equivalent of the economy of Belgium to AI-led job displacement, and up to the equivalent of the economy of South Korea if AI adoption happens faster. Major job hubs such as the Washington, D.C., metro area, San Francisco, and the Boston region will be the most affected; 40 percent of job losses will fall in California, Texas, New York, Florida, and Illinois. This suggests the arrival of a new Engels’ pause, the period during the early stages of the Industrial Revolution when working-class wages stagnated even as industrial productivity rose, producing extreme income inequality with social and political fallout.
Epistemic Crisis
According to the research institute Model Evaluation and Threat Research, the length of tasks AI can complete has been doubling roughly every seven months. However, there are several ways the technology can degrade from here. One is data scarcity. A 2024 study suggested that the supply of new human-generated text suitable for training could be exhausted by 2032, leaving few options other than having AI generate the data for its own training. When models are repeatedly trained on such synthetic inputs, degradation of the output is inevitable, leading to what’s known as “model collapse.”
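The model-collapse dynamic can be made concrete with a toy simulation (an entirely hypothetical sketch, not the training pipeline of any real system): the “model” here is just a fitted Gaussian, and each generation is fit only to synthetic samples drawn from the previous generation’s fit. Diversity in the data steadily disappears.

```python
import random
import statistics

# Toy illustration of "model collapse": each generation's "model"
# (a Gaussian summarized by mean and standard deviation) is fit to a
# finite sample drawn from the previous generation's model, with no
# fresh human-generated data ever added.

random.seed(42)

def fit_and_resample(data, n):
    """Fit a Gaussian to `data`, then draw n synthetic samples from it."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE estimate (divides by n)
    return [random.gauss(mu, sigma) for _ in range(n)]

n = 100
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # the "human" data
initial_spread = statistics.pstdev(data)

for generation in range(1000):
    data = fit_and_resample(data, n)  # train only on synthetic output

final_spread = statistics.pstdev(data)
print(f"spread of the data: {initial_spread:.3f} -> {final_spread:.6f}")

# The spread shrinks across generations: the MLE variance estimate is
# biased low by a factor of (n - 1) / n, so each fit-and-resample cycle
# loses a little of the distribution's tails, compounding toward collapse.
```

The same logic is why researchers worry about training on a web increasingly filled with model output: small per-generation losses of diversity compound.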
A second cause is the quality of the data that remains: by some estimates, over 90 percent of all web content may already be AI-generated, and the proliferation of AI slop (large volumes of low-quality content) and disinformation, already amplified during elections, conflicts, and other critical sociopolitical transitions, further degrades the inputs.
A third cause of the looming epistemic crisis has to do with users themselves. Users lose the capacity to distinguish real from AI-generated content and stop trying; persistent reliance on AI could degrade skills among students and across occupations, from coders to medical professionals to employees throughout industries. No amount of productivity enhancement can compensate for cognitive abilities lost to over-reliance on AI.
Infrastructure Chokepoints

Guests in Abu Dhabi look at a model of what is expected to be the largest data center in the United Arab Emirates when construction is complete, on Nov. 3, 2025. Giuseppe Cacace/AFP via Getty Images