

Project Iceberg

🔗 https://iceberg.mit.edu/


Introduction


Project Iceberg starts from a simple observation: when AI changes tasks inside jobs, the economic shockwaves don’t stay neatly inside “tech.” If AI automates quality control in a factory, the consequences can ripple through suppliers, logistics, and local service economies—yet most planning tools only notice the disruption after it shows up in employment or wage statistics.


The report argues that this is a measurement problem as much as a technology problem. Traditional labor metrics were built for a “human-only” economy; they track workers, wages, and outcomes, but not where AI capability overlaps with human skills before adoption takes off. That overlap is the early-warning signal policymakers and business leaders need when they’re committing billions to training, energy, and infrastructure.


To fill that gap, the team introduces a national-scale simulation framework (“Project Iceberg”) and a new KPI (“the Iceberg Index”) that estimates technical exposure: the share of wage value tied to skills that current AI systems—via real, deployed tools—can perform. It’s explicitly not a prediction of job losses; it’s a map of where the ground is likely to shift first.


Why today’s workforce planning is missing the point


The report describes a fast-moving policy environment: federal and state actors are investing heavily in AI-related workforce initiatives and infrastructure. The risk is timing. If the labor market changes faster than planning cycles, governments can end up building programs for skills that are already being restructured.


A key reason is what the paper calls the “invisible economy”: AI adoption and worker behavior increasingly occur on digital platforms (copilots, tool ecosystems, gig/freelance marketplaces) that don’t flow cleanly into official reporting. By the time disruption appears in surveys, the window for proactive reskilling or regional strategy may have narrowed.


The deeper argument is that the unit of analysis must change from jobs to skills. Past economic eras required new measurements (e.g., output-per-hour for industrial productivity; digital economy accounts for internet-era services). In the “intelligence era,” the relevant question is: which skills inside occupations can AI systems perform, and how much wage value is tied to them? 


This is also why “AI impact” can look small if you only track visible adoption. The public story often centers on consumer tools and software jobs, but firms are quietly reorganizing workflows across finance, healthcare, logistics, and administration—often through tool-enabled automation and augmentation rather than headline-grabbing layoffs.


What Project Iceberg is building


Project Iceberg is positioned as a “sandbox” for scenario planning: a computational model that simulates the emerging human–AI labor market, so leaders can test strategies before committing real resources. It’s built on Large Population Models and runs at national scale (151 million workers).


The platform works by mapping both sides of the market into a shared representation. On the human side, it builds a digital workforce across 923 occupations and ~3,000 counties, representing more than 32,000 distinct skills. Each worker is modeled as an agent with attributes like skills, tasks, and location—enabling analysis such as reskilling pathways and occupational similarity.
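

To make that concrete, here is a minimal sketch (in Python) of what such an agent record might look like. The field names (occupation_code, county_fips) and the overlap helper are illustrative assumptions, not the report's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class WorkerAgent:
    worker_id: str
    occupation_code: str   # e.g. an SOC-style occupation code (assumption)
    county_fips: str       # location, enabling county-level analysis
    annual_wage: float
    skills: dict[str, float] = field(default_factory=dict)  # skill name -> importance weight


def skill_overlap(a: WorkerAgent, b: WorkerAgent) -> float:
    """Jaccard overlap of two agents' skill sets: a crude proxy for
    occupational similarity and reskilling distance."""
    sa, sb = set(a.skills), set(b.skills)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0
```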


On the AI side, the project catalogs 13,000+ AI tools (copilots, automation systems, and other production systems) using the same skill taxonomy. That alignment matters because it turns “AI is improving” into a measurable comparison: which tools map onto which skills, and where that overlaps with real occupational skill bundles.
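

A rough sketch of that alignment, assuming tools and occupations are both expressed against one skill taxonomy; the catalog entries and the coverage metric below are invented for illustration, not drawn from the report.

```python
# AI tool catalog: tool name -> set of skills it can perform (illustrative)
tool_skills: dict[str, set[str]] = {
    "invoice-copilot": {"data entry", "document review", "basic accounting"},
    "support-bot": {"customer inquiry handling", "ticket triage"},
}

# Occupation skill bundle: skill -> importance weight (sums to 1.0, illustrative)
occupation_skills: dict[str, float] = {
    "data entry": 0.30,
    "basic accounting": 0.25,
    "customer inquiry handling": 0.20,
    "on-site inspection": 0.25,
}

# Skills that at least one deployed tool can perform
covered = set().union(*tool_skills.values())

# Importance-weighted share of this occupation's skills that tools can perform
coverage = sum(w for s, w in occupation_skills.items() if s in covered)
print(f"tool coverage of this occupation: {coverage:.0%}")  # -> 75%
```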


Finally, the model simulates interactions between workers, skills, and AI tools—incorporating assumptions about technology readiness, adoption behavior, and regional variation. The output is not a single forecast, but a way to compare interventions (training programs, incentives, regulatory choices) and see how impacts could cascade across sectors and geographies.
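

As a toy illustration of the sandbox idea, the snippet below holds technical exposure fixed and varies assumed readiness and adoption rates across scenarios; the scenario names and numbers are invented, and the real model operates at the level of individual agents and regions rather than one aggregate multiplier.

```python
def realized_exposure(technical_exposure: float, readiness: float, adoption: float) -> float:
    # Technical exposure is an upper bound; readiness and adoption discount it.
    return technical_exposure * readiness * adoption


scenarios = {
    "slow_adoption": {"readiness": 0.6, "adoption": 0.2},
    "fast_adoption": {"readiness": 0.9, "adoption": 0.5},
}

for name, params in scenarios.items():
    print(name, f"{realized_exposure(0.117, **params):.1%}")
# slow_adoption 1.4%, fast_adoption 5.3%
```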


The Iceberg Index, what it reveals, and what it doesn’t claim


The Iceberg Index is introduced as a skills-centered measure of workforce exposure. It quantifies the wage value of occupational work that AI systems can technically perform, based on demonstrated capability through deployable tools. It aggregates skill importance, automatability, and prevalence into an exposure value that can be compared across occupations, industries, and regions.
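

The report's exact weighting scheme isn't reproduced here, but the shape of the calculation can be sketched as a wage-weighted sum over skills of importance times automatability, with prevalence entering through each occupation's total wage bill. All figures below are made up for illustration.

```python
occupations = {
    # occupation: (total wage bill, {skill: (importance weight, automatable by deployed tools?)})
    "bookkeeping_clerk": (1_000_000.0, {
        "data entry":      (0.4, True),
        "reconciliation":  (0.4, True),
        "client meetings": (0.2, False),
    }),
    "field_technician": (800_000.0, {
        "equipment repair": (0.7, False),
        "report writing":   (0.3, True),
    }),
}

total_wages = sum(wages for wages, _ in occupations.values())
exposed_value = sum(
    wages * importance
    for wages, skills in occupations.values()
    for importance, automatable in skills.values()
    if automatable
)
iceberg_index = exposed_value / total_wages
print(f"exposure: ${exposed_value:,.0f} of ${total_wages:,.0f} ({iceberg_index:.1%})")
# -> exposure: $1,040,000 of $1,800,000 (57.8%)
```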


A major emphasis is interpretation: the Index measures technical exposure, not displacement outcomes or adoption timelines. In other words, it does not claim “X% of jobs will disappear.” It claims “X% of wage value is tied to skills where AI tools exist that can do that work,” which is a different (and earlier) signal for planning.


To build confidence, the report includes validation steps: skill-based representations predict a large share of observed career transitions (85% for highly similar occupation pairs), and predicted tech-sector exposure shows substantial geographic agreement (69%) with observed usage patterns from external AI-adoption data sources.
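

The skill-similarity idea behind that first check can be sketched as a cosine similarity between occupational skill vectors; the vectors below are illustrative, not the report's data.

```python
import math


def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


bookkeeper = {"data entry": 0.5, "reconciliation": 0.4, "client meetings": 0.1}
payroll_clerk = {"data entry": 0.5, "payroll processing": 0.4, "reconciliation": 0.1}

print(f"skill similarity: {cosine(bookkeeper, payroll_clerk):.2f}")
# High-similarity pairs like this are where transitions are predicted most
# reliably (the 85% figure).
```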


Then comes the headline "iceberg" finding. The Surface Index, which measures exposure where adoption is visibly concentrated today (computing and tech), comes to 2.2% of labor-market wage value, or about $211B. But when the same approach is extended to broader cognitive and administrative work, the Iceberg Index averages 11.7%, about $1.2T in wage value, roughly five times larger than the visible tip.
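

A quick back-of-the-envelope check shows those figures are internally consistent:

```python
surface_share, surface_value = 0.022, 211e9      # 2.2%, ~$211B
iceberg_share = 0.117                            # 11.7%

total_wage_base = surface_value / surface_share  # implied wage base, ~$9.6T
iceberg_value = iceberg_share * total_wage_base  # ~$1.1T, consistent with "~$1.2T"
ratio = iceberg_share / surface_share            # ~5.3x, "roughly five times larger"

print(f"implied wage base: ${total_wage_base/1e12:.1f}T, "
      f"iceberg value: ${iceberg_value/1e12:.1f}T, ratio: {ratio:.1f}x")
```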


Crucially, the geography changes. Visible tech exposure clusters in coastal hubs, while broader cognitive/administrative exposure is distributed across states—including places with small tech sectors. The report frames this as a “blind spot” risk: regions may see little visible adoption yet still have large underlying overlap in finance, admin, and professional services that support their industrial base.


Finally, the report argues that traditional economic indicators (GDP, income, unemployment) align somewhat with today’s visible tech disruption, but explain less than 5% of the variation in the broader Iceberg Index. That’s the point: if you steer using only yesterday’s dashboard, you’ll underestimate where AI capability is poised to reshape work next.
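

The "less than 5% of the variation" claim is, in effect, an R² statement. A sketch of how one would compute it, with random placeholder data rather than the report's, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions = 50
traditional = rng.normal(size=(n_regions, 3))   # columns: GDP, income, unemployment (placeholders)
iceberg_index = rng.normal(size=n_regions)      # placeholder exposure values

X = np.column_stack([np.ones(n_regions), traditional])   # add intercept
coef, *_ = np.linalg.lstsq(X, iceberg_index, rcond=None)
residuals = iceberg_index - X @ coef
r_squared = 1 - residuals.var() / iceberg_index.var()
print(f"R^2 of traditional indicators vs Iceberg Index: {r_squared:.2f}")
```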


The limitations section is explicit: real-world outcomes depend on firm strategy, worker adaptation, societal acceptance, and policy choices; the Index is a capability map meant for scenario planning, not deterministic forecasting. It also notes scope choices (a focus on production tools rather than frontier benchmark performance, and the exclusion of physical robotics due to immature adoption data) and modeling assumptions such as skill transferability and wage-value weighting.


Conclusion


Project Iceberg’s core contribution is shifting the conversation from “Which jobs will AI replace?” to “Where is AI–human skill overlap already high enough that work will be reorganized?” It offers a national-scale simulation plus a practical KPI (the Iceberg Index) designed to detect exposure before disruption crystallizes in official statistics.


If the report is right, the visible AI story (tech layoffs, coding copilots) is only the surface narrative. The larger transformation is the quiet spread of cognitive automation into administrative, financial, and professional services—distributed across the entire economy. The takeaway is not panic; it’s steering: measure the right thing early, test interventions cheaply in simulation, and invest where exposure is forming rather than where the headlines already are.
