Project Iceberg
https://iceberg.mit.edu/
Introduction
Project Iceberg starts from a simple observation: when AI changes tasks inside jobs, the economic shockwaves don't stay neatly inside "tech." If AI automates quality control in a factory, the consequences can ripple through suppliers, logistics, and local service economies; yet most planning tools only notice the disruption after it shows up in employment or wage statistics.
The report argues that this is a measurement problem as much as a technology problem. Traditional labor metrics were built for a "human-only" economy; they track workers, wages, and outcomes, but not where AI capability overlaps with human skills before adoption takes off. That overlap is the early-warning signal policymakers and business leaders need when they're committing billions to training, energy, and infrastructure.
To fill that gap, the team introduces a national-scale simulation framework ("Project Iceberg") and a new KPI (the "Iceberg Index") that estimates technical exposure: the share of wage value tied to skills that current AI systems, via real, deployed tools, can perform. It's explicitly not a prediction of job losses; it's a map of where the ground is likely to shift first.
Why today's workforce planning is missing the point
The report describes a fast-moving policy environment: federal and state actors are investing heavily in AI-related workforce initiatives and infrastructure. The risk is timing. If the labor market changes faster than planning cycles, governments can end up building programs for skills that are already being restructured.
A key reason is what the paper calls the "invisible economy": AI adoption and worker behavior increasingly occur on digital platforms (copilots, tool ecosystems, gig/freelance marketplaces) that don't flow cleanly into official reporting. By the time disruption appears in surveys, the window for proactive reskilling or regional strategy may have narrowed.
The deeper argument is that the unit of analysis must change from jobs to skills. Past economic eras required new measurements (e.g., output-per-hour for industrial productivity; digital economy accounts for internet-era services). In the "intelligence era," the relevant question is: which skills inside occupations can AI systems perform, and how much wage value is tied to them?
This is also why "AI impact" can look small if you only track visible adoption. The public story often centers on consumer tools and software jobs, but firms are quietly reorganizing workflows across finance, healthcare, logistics, and administration, often through tool-enabled automation and augmentation rather than headline-grabbing layoffs.
What Project Iceberg is building
Project Iceberg is positioned as a "sandbox" for scenario planning: a computational model that simulates the emerging human-AI labor market, so leaders can test strategies before committing real resources. It's built on Large Population Models and runs at national scale (151 million workers).
The platform works by mapping both sides of the market into a shared representation. On the human side, it builds a digital workforce across 923 occupations and ~3,000 counties, representing more than 32,000 distinct skills. Each worker is modeled as an agent with attributes like skills, tasks, and location, enabling analysis such as reskilling pathways and occupational similarity.
On the AI side, the project catalogs 13,000+ AI tools (copilots, automation systems, and other production systems) using the same skill taxonomy. That alignment matters because it turns "AI is improving" into a measurable comparison: which tools map onto which skills, and where that overlaps with real occupational skill bundles.
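A minimal sketch of what this shared representation might look like. All class names, fields, and the toy data below are hypothetical illustrations, not the project's actual schema; the key idea is simply that workers and tools index into the same skill taxonomy, so overlap becomes a set intersection:

```python
from dataclasses import dataclass

@dataclass
class WorkerAgent:
    occupation: str
    county: str
    skills: dict  # skill name -> wage-value weight for this occupation

@dataclass
class AITool:
    name: str
    covered_skills: set  # skills the tool can demonstrably perform

def exposed_skills(worker: WorkerAgent, tools: list) -> set:
    """Skills in this worker's bundle that at least one deployed tool covers."""
    covered = set().union(*(t.covered_skills for t in tools)) if tools else set()
    return set(worker.skills) & covered

# Toy usage with invented skills and a fictional tool name
w = WorkerAgent("Bookkeeper", "Example County", {"data entry": 0.4, "client advising": 0.6})
t = AITool("ledger-copilot", {"data entry", "invoice matching"})
print(exposed_skills(w, [t]))  # {'data entry'}
```

Because both sides share one taxonomy, "which work is exposed" reduces to a set operation rather than a judgment call per occupation.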
Finally, the model simulates interactions between workers, skills, and AI tools, incorporating assumptions about technology readiness, adoption behavior, and regional variation. The output is not a single forecast, but a way to compare interventions (training programs, incentives, regulatory choices) and see how impacts could cascade across sectors and geographies.
The Iceberg Index, what it reveals, and what it doesn't claim
The Iceberg Index is introduced as a skills-centered measure of workforce exposure. It quantifies the wage value of occupational work that AI systems can technically perform, based on demonstrated capability through deployable tools. It aggregates skill importance, automatability, and prevalence into an exposure value that can be compared across occupations, industries, and regions.
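The report does not publish the exact aggregation formula, so the following is one plausible reading of "aggregates skill importance, automatability, and prevalence into an exposure value": wage-value weighting, with exposure as the coverable share of the total wage bill. Function and field names are assumptions for illustration only:

```python
def exposure_index(occupations):
    """Share of wage value tied to AI-coverable skills, under an assumed
    aggregation: exposure = sum(wage_value * coverable) / sum(wage_value).
    Each occupation: {"wage_bill": float, "skills": [{"importance": float,
    "ai_coverable": bool}, ...]} -- importance acts as the skill's weight
    within the occupation's wage bill."""
    exposed = total = 0.0
    for occ in occupations:
        for skill in occ["skills"]:
            value = occ["wage_bill"] * skill["importance"]
            total += value
            if skill["ai_coverable"]:
                exposed += value
    return exposed / total if total else 0.0
```

Because both numerator and denominator are in wage-value terms, the result is directly comparable across occupations, industries, and regions, which is what lets the report roll it up to state and national figures.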
A major emphasis is interpretation: the Index measures technical exposure, not displacement outcomes or adoption timelines. In other words, it does not claim "X% of jobs will disappear." It claims "X% of wage value is tied to skills where AI tools exist that can do that work," which is a different (and earlier) signal for planning.
To build confidence, the report includes validation steps. It reports that skill-based representations predict a large share of observed career transitions (85% for highly similar occupation pairs) and shows substantial geographic agreement (69%) between predicted exposure in tech and observed usage patterns from external AI adoption data sources.
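As a generic illustration of how skill-based occupational similarity can be scored (this is a standard cosine-similarity sketch, not the report's actual transition model; the skill vectors are invented):

```python
import math

def skill_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two skill-importance vectors
    (dicts mapping skill name -> importance weight)."""
    common = set(a) & set(b)
    dot = sum(a[s] * b[s] for s in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Occupations with heavily overlapping skill bundles score near 1.0,
# which is the regime where the report's 85% transition-prediction figure applies.
paralegal = {"document review": 0.5, "legal research": 0.3, "scheduling": 0.2}
clerk = {"document review": 0.4, "scheduling": 0.4, "data entry": 0.2}
print(round(skill_similarity(paralegal, clerk), 2))
```

A high score between two occupations suggests a plausible reskilling pathway; validating that such scores line up with observed career moves is what gives the skill representation empirical grounding.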
Then comes the headline "iceberg" finding. The Surface Index, which measures exposure where adoption is visibly concentrated today (computing/tech), comes to 2.2% of labor-market wage value (about $211B). But when the same approach is extended to broader cognitive and administrative work, the Iceberg Index averages 11.7%, about $1.2T in wage value: roughly five times larger than the visible tip.
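These figures are internally consistent, which is worth checking: the Surface Index numbers imply a total wage base, and the Iceberg percentage applied to that base reproduces the dollar figure (variable names are ours):

```python
# Reported Surface Index: 2.2% of wage value ~= $211B
surface_share, surface_value = 0.022, 211e9

# Implied total labor-market wage base
wage_base = surface_value / surface_share   # ~= $9.6T

# Reported Iceberg Index: 11.7% of the same base
iceberg_share = 0.117
iceberg_value = iceberg_share * wage_base   # ~= $1.12T, i.e. "about $1.2T"

# "Roughly five times larger than the visible tip"
ratio = iceberg_share / surface_share       # ~= 5.3
```
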
Crucially, the geography changes. Visible tech exposure clusters in coastal hubs, while broader cognitive/administrative exposure is distributed across states, including places with small tech sectors. The report frames this as a "blind spot" risk: regions may see little visible adoption yet still have large underlying overlap in finance, admin, and professional services that support their industrial base.
Finally, the report argues that traditional economic indicators (GDP, income, unemployment) align somewhat with today's visible tech disruption, but explain less than 5% of the variation in the broader Iceberg Index. That's the point: if you steer using only yesterday's dashboard, you'll underestimate where AI capability is poised to reshape work next.
The limitations section is explicit: real-world outcomes depend on firm strategy, worker adaptation, societal acceptance, and policy choices; the Index is a capability map meant for scenario planning, not deterministic forecasting. It also notes scope choices (focus on production tools over frontier benchmark performance; excluding physical robotics due to immature adoption data) and modeling assumptions like skill transferability and wage-value weighting.
Conclusion
Project Iceberg's core contribution is shifting the conversation from "Which jobs will AI replace?" to "Where is AI-human skill overlap already high enough that work will be reorganized?" It offers a national-scale simulation plus a practical KPI (the Iceberg Index) designed to detect exposure before disruption crystallizes in official statistics.
If the report is right, the visible AI story (tech layoffs, coding copilots) is only the surface narrative. The larger transformation is the quiet spread of cognitive automation into administrative, financial, and professional services, distributed across the entire economy. The takeaway is not panic; it's steering: measure the right thing early, test interventions cheaply in simulation, and invest where exposure is forming rather than where the headlines already are.




