
AI Publications


The 2028 Global Intelligence Crisis


Citrini Research’s “The 2028 Global Intelligence Crisis” is framed as a thought exercise written from the future (June 2028), not as a literal forecast. The authors are explicit about that. Their goal is to model a scenario that many AI bulls underweight: what if AI progress keeps exceeding expectations — and that very success becomes macroeconomically destabilizing? They position the essay as a left-tail risk map, not a doomer manifesto. 

 

The piece opens with a striking fictional macro snapshot: U.S. unemployment at 10.2%, the S&P down 38% from October 2026 highs, and markets already desensitized to bad labor prints. This future voice matters because the essay is written as a post-mortem, reconstructing how a “contained” AI disruption metastasized into a broad economic crisis in just two years. 

 

A central idea is the distinction between headline productivity and human economic participation. In the scenario, nominal GDP and productivity initially look strong, corporate margins expand, and markets rally because AI agents dramatically reduce labor costs. But the gains accrue mainly to capital owners and compute owners, while wage growth and job quality deteriorate. 

 

The authors call this mismatch “Ghost GDP”: output that appears in national accounts but does not circulate through the consumer economy. In their telling, AI is working too well for firms, while households lose income and spending power. That is the core paradox of the essay: a productivity boom that coexists with broad demand fragility. 

 

The scenario’s engine is a self-reinforcing loop the authors name the “human intelligence displacement spiral”: AI improves, firms cut white-collar payroll, spending weakens, firms face margin pressure and buy more AI, which further improves AI and enables more cuts. The essay argues this loop has no built-in brake, unlike normal cyclical recessions.
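That loop can be sketched as a toy simulation. All numbers here are invented for illustration (they are not from the essay); the point is only the shape of the dynamic: because the driver (AI capability) improves every quarter, payroll and spending ratchet down with no endogenous floor.

```python
# Toy sketch of the "displacement spiral": AI capability improves each
# quarter, firms cut payroll in proportion, and household spending
# follows income with a lag. Illustrative parameters only.

def simulate(quarters: int = 8):
    capability = 1.0   # relative AI capability index
    payroll = 100.0    # white-collar payroll, indexed
    spending = 100.0   # consumer spending, indexed
    history = []
    for q in range(quarters):
        capability *= 1.10                  # AI keeps improving every quarter
        cuts = payroll * 0.02 * capability  # better AI -> deeper payroll cuts
        payroll -= cuts
        spending = 0.7 * spending + 0.3 * payroll  # spending tracks income with a lag
        history.append((q + 1, round(payroll, 1), round(spending, 1)))
    return history

for q, p, s in simulate():
    print(f"Q{q}: payroll={p}, spending={s}")
```

Note what is missing compared with an inventory or overbuilding cycle: nothing in the loop ever pushes `capability` down, so the decline never self-corrects.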

 

The story begins in late 2025 with a step-change in agentic coding tools. The authors argue that once capable developers using tools like Claude Code or Codex can reproduce “good enough” versions of mid-market SaaS products in weeks, procurement behavior changes even before full replacement happens. The threat itself becomes bargaining power. 

 

That procurement dynamic is illustrated through a Fortune 500 anecdote: a vendor expecting standard price increases instead accepts a 30% discount after the buyer signals they may use AI plus forward-deployed engineers to replace the software entirely. The point is not that every enterprise immediately rebuilds everything in-house; it’s that SaaS pricing power weakens once the outside option becomes credible. 

 

The essay argues the long tail of SaaS gets hit first, while “systems of record” are initially assumed to be safer. But then a large incumbent example (ServiceNow in the scenario) reveals a reflexive mechanism: slowing growth, workforce cuts, and structural efficiency programs signal that incumbents themselves are being forced to use AI aggressively to defend margins. 

 

That reflexivity is one of the essay’s strongest analytical claims. Historically, incumbents often resisted disruptive technology and were outcompeted by entrants. Here, the authors argue, incumbents cannot afford to resist. They become the fastest adopters of the very tools undermining their own labor-intensive cost structures. Rational at the firm level, destructive in aggregate. 

 

The piece then broadens the lens beyond software into what it calls the intermediation layer—business models that monetize friction, inertia, complexity, or human impatience. As consumer agents become default and operate continuously in the background, many of the psychological and behavioral quirks those firms exploit become less valuable. 

 

The authors imagine agentic assistants shifting commerce from discrete human decisions to 24/7 machine optimization, with users setting preferences and agents continuously re-shopping, renegotiating, and routing transactions. In that world, customer lifetime value erodes because passive renewals, trial traps, and convenience premiums get systematically attacked by software. 
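A minimal sketch of that re-shopping logic (provider names, prices, and the `switch_cost` parameter are all hypothetical, chosen only to show the mechanism):

```python
# Sketch of an agent continuously re-shopping a recurring subscription.
# The agent never renews out of inertia: it switches whenever a
# qualifying offer beats the current price by more than the switch cost.

from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    monthly_price: float
    meets_preferences: bool  # set by the user once, enforced by the agent

def reshop(current: Offer, market: list[Offer], switch_cost: float = 2.0) -> Offer:
    candidates = [o for o in market if o.meets_preferences]
    best = min(candidates, key=lambda o: o.monthly_price, default=current)
    if best.monthly_price + switch_cost < current.monthly_price:
        return best
    return current

current = Offer("IncumbentCo", 29.0, True)
market = [Offer("IncumbentCo", 29.0, True),
          Offer("EntrantA", 19.0, True),
          Offer("EntrantB", 9.0, False)]  # cheapest, but fails preferences
print(reshop(current, market).provider)   # → EntrantA
```

Run on every billing cycle, a routine like this is exactly what erodes passive renewals and trial traps: the incumbent’s price is re-tested continuously instead of once at signup.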

 

Travel booking is presented as an early casualty because it is structured, comparison-heavy, and amenable to optimization. The same logic extends to insurance renewals, tax prep, routine legal work, and other services where providers historically monetized complexity that consumers found tedious. The essay’s claim is that agents do not experience tedium, so a major source of human-market rents disappears. 

 

A particularly vivid subsection concerns habitual intermediation—moats built on app placement, brand default behavior, and user laziness. The authors argue that machine buyers have no “favorite app,” no impulse shortcuts, and no loyalty in the human sense. That destroys a class of consumer-platform moat that has been extremely profitable in the smartphone era. 

 

DoorDash and food delivery are used as the poster example. In the scenario, coding agents lower barriers to entry so dramatically that many new delivery apps emerge quickly, drivers use multi-app dashboards, and take-rates compress as entrants pass most of the fee through to drivers. Consumer agents then intensify price competition further by routing orders across the many options automatically.

 

The essay extends this logic into payments infrastructure, especially card interchange. Once agents optimize machine-to-machine transactions at scale, the authors argue they will target fees directly and prefer cheaper rails, including stablecoins on high-throughput blockchains or L2s, where settlement is faster and cheaper than traditional card rails. 
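A back-of-envelope comparison shows why agents would target fees first. The rates below are rough illustrative figures (not from the essay): US credit interchange typically runs around 1.5–2.5% plus a fixed per-transaction fee, while transfers on high-throughput L2s are often a flat sub-cent to few-cent cost.

```python
# Illustrative per-transaction fee comparison between card rails and a
# stablecoin transfer on an L2. Rates are rough assumptions, not quotes.

def card_fee(amount: float, rate: float = 0.021, fixed: float = 0.10) -> float:
    # percentage interchange plus a fixed fee per transaction
    return amount * rate + fixed

def stablecoin_fee(amount: float, flat: float = 0.01) -> float:
    # flat network cost, independent of transaction size
    return flat

basket = 80.00
print(f"card:       ${card_fee(basket):.2f}")        # → card:       $1.78
print(f"stablecoin: ${stablecoin_fee(basket):.2f}")  # → stablecoin: $0.01
```

For a human, $1.78 of invisible fees on an $80 basket is not worth optimizing; for an agent executing thousands of transactions, it is a standing arbitrage.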

 

In the scenario, this becomes a major market signal when a Mastercard earnings report references agent-led optimization and discretionary pressure, prompting a sharp market reaction. The authors’ point is that agentic commerce stops being just a product feature story and becomes a plumbing story—a threat to the transactional toll booths underneath consumer finance. 

 

They note that card-focused lenders and mono-line issuers are particularly exposed because they depend heavily on interchange economics and on affluent white-collar consumers. In the fictional sequence, American Express and several issuers fall sharply as both ends of the model are pressured: fewer high-income customers and lower fee capture per transaction. 

 

At this stage, markets still treat the downside as a sector story: software, consulting, payments, and “toll booths” are hurt, but the broader economy seems resilient. The essay argues this was a category error because the U.S. is fundamentally a white-collar services economy, so “sector-specific” damage to those jobs is actually damage to the consumption core. 

 

The authors directly confront the standard historical automation rebuttal—technology destroys jobs, then creates more jobs. They acknowledge that this pattern held repeatedly (ATMs, internet disruption, retail shifts), but argue AI is different because it is a general-purpose substitute that keeps improving in the same cognitive domains workers would normally redeploy into. 

 

That point is sharpened with an explicit contrast: prior waves of automation still created jobs that required humans. In the essay’s scenario, AI systems already handle long R&D tasks, write most code, and keep getting better and cheaper, so the “new jobs” AI creates are too few, and pay too little, to absorb the workers displaced. Humans remain in the loop, but in thinner, lower-paid coordination roles.

 

Labor data in the scenario begins to reflect this imbalance. White-collar openings fall sharply, hiring weakens, and job churn concentrates in office-based roles even while some blue-collar categories remain more stable. Equity markets stay preoccupied with AI infrastructure upside, but bond markets begin to price the coming consumption slowdown earlier. 

 

A key macro claim of the essay is that this downturn is not cyclical. In a typical recession, overbuilding or inventory corrections eventually sow the seeds of recovery. Here, the cause keeps improving each quarter: cheaper, better AI capabilities continually intensify displacement, which suppresses spending and encourages further AI adoption as firms protect margins. 

 

By 2027, the story becomes socially visible. The essay emphasizes “dinner table” evidence before formal statistics: former high-paid knowledge workers moving into gig jobs, flooding labor supply in lower-wage sectors, and compressing wages there too. This converts sector disruption into broad-based wage pressure and worsens inequality across labor segments. 

 

The narrative then shifts into finance through private credit and PE-backed software. The authors argue that many loans and valuations were underwritten on assumptions of persistent ARR growth that no longer held once AI began eroding pricing power and reducing the need for traditional software layers. Marks lag public-market reality, creating latent stress. 

 

Moody’s downgrades of PE-backed software debt in the scenario act as a trigger, followed by defaults and restructurings in software and related information services. Zendesk is used as a symbolic turning point: an ARR-backed loan structure fails because the “recurring” revenue assumption was secularly impaired by AI-native customer-service automation. 

 

Importantly, the essay concedes that private credit alone should have been survivable because it is structurally more insulated from classic bank-run dynamics. The deeper risk arises when exposures are linked into insurers, asset managers, offshore reinsurance entities, and SPVs through opaque structures and regulatory arbitrage. That is where a contained credit problem becomes a system-confidence problem. 

 

This is the “daisy chain of correlated bets” thesis: many seemingly different positions ultimately depend on the same underlying assumption—continued white-collar income growth and stable monetization of productivity. Once that assumption is challenged across software, intermediation, consumer demand, and credit, correlations converge in the worst way. 

 

The essay’s final major acceleration channel is housing. The authors focus not on immediate 2008-style collapse, but on trajectory risk: prime borrowers in tech/finance-heavy ZIP codes remain current by cutting discretionary spending, draining savings, and deferring maintenance—until delinquencies begin rising in places previously considered “bulletproof.” That is how labor displacement threatens the mortgage complex. 
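That trajectory can be made concrete with a toy household cashflow model. All figures below are hypothetical, but they show the essay’s point: a prime borrower can stay current for many months after a layoff, so the mortgage stress shows up with a long lag.

```python
# Toy model of a laid-off "prime" household staying current on the
# mortgage by zeroing discretionary spending and draining savings.
# Illustrative numbers only.

def months_until_delinquent(savings: float, severance_months: int,
                            monthly_income: float, mortgage: float,
                            essentials: float) -> int:
    month = 0
    while True:
        month += 1
        income = monthly_income if month <= severance_months else 0.0
        savings += income - mortgage - essentials  # discretionary already cut to zero
        if savings < 0:
            return month  # first month the mortgage can no longer be covered

print(months_until_delinquent(savings=40_000, severance_months=3,
                              monthly_income=10_000, mortgage=4_500,
                              essentials=3_000))  # → 10
```

Ten months of apparent resilience is exactly what makes the delinquency data lag the labor shock: the "bulletproof" ZIP codes look fine right up until the savings runway ends.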

 

By the time the essay reaches “The Battle Against Time,” its argument is that the crisis is now twin-looped: a real-economy loop (AI → layoffs → weaker spending → more AI) and a financial loop (income impairment → mortgage stress → tighter credit → weaker wealth effect). Traditional monetary tools may cushion finance but cannot fix the underlying issue: AI is reducing the scarcity premium of human intelligence. 

 

The policy section argues the state is structurally unprepared because tax systems are built around human labor income. In the scenario, payroll and income tax receipts fall while productivity rises, because gains accrue to capital and compute rather than workers. Meanwhile, automatic stabilizers were designed for temporary unemployment, not persistent wage replacement by improving machines. 

 

The authors introduce fictional policy responses such as a “Transition Economy Act” (direct transfers funded partly by deficit spending and an AI inference tax) and a more radical “Shared AI Prosperity Act” (a public claim on returns from intelligence infrastructure, somewhere between a sovereign wealth fund and royalties on AI-generated output). These proposals symbolize the essay’s broader point: redistribution and institutional redesign move slower than technical capability. 

 

Socially, the essay imagines growing backlash against AI labs and their investors as wealth concentration accelerates. Protest movements, media attention, and political polarization all intensify, not because productivity gains vanish, but because the gains are distributed too narrowly and too quickly relative to institutional adaptation. The “villain,” the authors say, is ultimately time. 

 

The closing section, “The Intelligence Premium Unwind,” is the conceptual core of the whole piece. The authors argue that modern economies were built on the assumption that human intelligence was scarce and therefore valuable. If machine intelligence becomes a competent, scalable substitute for many cognitive tasks, then labor markets, mortgages, tax systems, and financial valuations all require repricing. 

 

Crucially, the essay does not end in deterministic collapse. It says repricing is painful and disorderly, but not the same as permanent breakdown. The economy can find a new equilibrium, but only if societies create new frameworks fast enough. That is why the final rhetorical pivot back to February 2026 matters: the canary is still alive, meaning preparation is still possible.
