Stop pretending you know what AI does to the economy
By Noah Smith
Noah Smith critiques the growing tendency—particularly in the U.S.—to assume that artificial intelligence (AI) will inevitably harm the economy and society. He likens this to a mental “brainworm” that spreads through confirmation bias: once someone adopts a pessimistic narrative about AI, they begin interpreting every new development as evidence of its impending doom. Smith argues that while skepticism can be healthy, the prevailing pessimism is often disconnected from actual outcomes and driven more by ideology than empirical evidence.
A particularly striking observation is that even many people within the AI industry harbor bleak views. Some engineers, Smith notes, privately confess they believe their work will eventually make humans obsolete, even as they work to profit from it in the short term. Similarly, many center-left policy thinkers have approached AI with a mindset of preemptive restriction, treating it not as a tool for progress but as a dangerous force to be curtailed. Notably, Smith references economist Daron Acemoglu’s thesis that AI may increase inequality without improving productivity—a stance Smith considers overly fatalistic and insufficiently supported by current data.
The skepticism isn't confined to experts. A survey from the Pew Research Center illustrates a deep divide between AI experts and the general U.S. public. While only 17% of U.S. adults believe AI will have a positive effect over the next 20 years, 56% of AI experts hold a positive view. Meanwhile, 35% of the public expects negative outcomes, compared to only 15% of experts. This gap underscores Smith's argument that public fear of AI may be exaggerated and poorly aligned with the opinions of those most familiar with the technology.
Adding an international perspective, Smith presents data on how people across different countries emotionally respond to AI. The Anglosphere (United States, UK, Canada, Australia, etc.) shows high levels of nervousness and relatively low excitement. In contrast, Asian countries like China, Indonesia, and South Korea demonstrate high enthusiasm and relatively lower concern. Europe occupies the middle ground. This geographic split reinforces Smith’s view that the cultural environment plays a major role in shaping AI perceptions, and that American pessimism isn’t globally representative.
Despite the ongoing alarmism, Smith argues that AI has not yet had a demonstrably negative impact on employment or inequality in the U.S. In fact, the job market remains historically strong, suggesting that fears of mass unemployment are, at least for now, unfounded. Previous AI-related panics, such as those over job automation or biased algorithms, have often failed to materialize in any broad or lasting way when subjected to scrutiny.
Smith doesn’t dismiss the possibility that AI could become a serious economic disruptor in the future. However, he emphasizes the importance of waiting for clear evidence before forming strong conclusions. Blind pessimism, he argues, not only misdiagnoses the present but can also hinder society from realizing the benefits of a transformative technology. He urges readers to avoid ideological rigidity and remain open to the complexity and unpredictability of economic change.
In sum, the article encourages a more balanced and empirically grounded approach to thinking about AI’s economic impact. The divide between public perception and expert opinion, as well as the variation in attitudes across regions, suggests that fears about AI may be more culturally and emotionally driven than rationally justified. Until there is definitive evidence of harm, Smith advises, it’s premature to assume that AI spells economic disaster.