Labor market impacts of AI: A new measure and early evidence
🔗https://www.anthropic.com/research/labor-market-impacts
Measuring AI’s labor-market impact requires caution, because earlier attempts to predict disruption from new technologies have often proven unreliable. The paper notes that past forecasts around offshoring, robot adoption, and even official occupational projections have had mixed or limited predictive value.
Anthropic presents this study as an attempt to build a more practical framework for tracking AI’s labor effects early, before the evidence becomes obvious in headline employment data. The authors say their goal is not to claim that major labor disruption has already happened, but to create a measurement system that can be updated over time and may detect vulnerability before displacement is visible.
AI’s labor effects are unlikely to look like a sudden shock such as COVID, where the signal was so large that causal inference was relatively straightforward. Instead, AI may resemble slower-moving structural changes like the spread of the internet or the effects of trade shocks, where aggregate data alone can be misleading because other economic forces blur the picture.
To deal with that, the authors adopt a task-based comparison framework: workers, occupations, or industries can be compared based on how exposed their tasks are to AI. Their contribution is to combine task-level theoretical capability with real-world usage and then aggregate that information up to occupations, creating a more grounded measure of exposure than purely theoretical estimates.
Anthropic combines three sources: the O*NET database of occupational tasks, Claude usage data from the Anthropic Economic Index, and earlier task-level exposure estimates from Eloundou et al. (2023), which classify whether an LLM could theoretically cut task time by at least half.
The article stresses that theoretical capability and actual usage are not the same thing: many tasks may be feasible for AI in principle but not yet widely performed in practice because of legal constraints, workflow bottlenecks, verification needs, software dependencies, or slow adoption. Even so, the two measures are strongly related: the paper says 97% of tasks observed across the previous four Economic Index reports fall into categories rated as theoretically feasible by the earlier exposure framework.
Anthropic’s new metric, called observed exposure, is then linked to broader labor-market patterns. The measure assigns higher exposure to occupations whose tasks are theoretically feasible for AI, actually performed with Claude in work contexts, more often automated than merely assisted, and more central to the occupation’s time allocation.
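To make the construction concrete, here is a minimal sketch of how task-level signals could be rolled up into an occupation-level score. The paper does not publish its exact formula, so the column names and the simple multiplicative weighting below are assumptions for illustration, not Anthropic’s specification.

```python
# Illustrative sketch only: column names and the weighting scheme are assumptions,
# not the paper's actual formula.
import pandas as pd

def observed_exposure(tasks: pd.DataFrame) -> pd.Series:
    """Aggregate task-level signals into an occupation-level exposure score.

    Expected columns (all hypothetical):
      soc_code         - occupation identifier (O*NET / SOC code)
      feasible         - 1 if the task is rated LLM-feasible (Eloundou et al. style)
      usage_share      - share of observed Claude work usage mapped to the task
      automation_share - fraction of that usage that is automation vs. assistance
      task_time_share  - how central the task is to the occupation's time allocation
    """
    # Feasibility gates the signal, usage and automation scale it, and time share
    # weights it by how much of the job the task represents.
    task_score = (
        tasks["feasible"]
        * tasks["usage_share"]
        * tasks["automation_share"]
        * tasks["task_time_share"]
    )
    return task_score.groupby(tasks["soc_code"]).sum().rename("observed_exposure")

# Example: two occupations with different task profiles.
df = pd.DataFrame({
    "soc_code": ["15-1251", "15-1251", "43-4051"],
    "feasible": [1, 1, 0],
    "usage_share": [0.30, 0.10, 0.05],
    "automation_share": [0.6, 0.4, 0.5],
    "task_time_share": [0.5, 0.2, 0.3],
})
print(observed_exposure(df))
```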
When this is compared with US Bureau of Labor Statistics employment projections for 2024–2034, the paper finds a mild negative relationship: for every 10 percentage point increase in observed exposure, projected job growth falls by 0.6 percentage points.
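For illustration, that relationship can be read as a simple cross-occupation regression of projected growth on observed exposure. The sketch below uses synthetic data; only the roughly −0.06 slope (0.6 percentage points per 10 points of exposure) comes from the paper.

```python
# Minimal sketch of the cross-occupation relationship described above, using
# synthetic data; the -0.06 slope is the paper's reported figure, everything
# else here is made up for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
exposure = rng.uniform(0, 60, size=300)                   # observed exposure, in pp
growth = 5.0 - 0.06 * exposure + rng.normal(0, 2, 300)    # projected 2024-2034 growth, pp

X = sm.add_constant(exposure)
fit = sm.OLS(growth, X).fit()
print(fit.params)  # slope near -0.06: each +10 pp of exposure ~ -0.6 pp projected growth
```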
The article also finds that workers in the most exposed occupations differ notably from those in unexposed roles: they are more likely to be female, white or Asian, more educated, and significantly higher paid, with average earnings reported as 47% higher. Graduate-degree holders, for example, are much more common in the highly exposed group than in the unexposed one.
The authors then ask which variable should matter most when trying to detect whether AI is harming workers. They review other approaches, such as changes in the occupational mix, job postings, payroll data, or hiring trends, but argue that unemployment is the most direct signal of economic harm because it captures workers who want jobs and cannot find them.
A fall in job postings or a shift in employment composition may matter, but those changes do not automatically imply worker distress if displaced labor is reabsorbed elsewhere. For that reason, the paper prioritizes unemployment from the Current Population Survey, which also identifies the worker’s previous occupation and industry, making it more suitable for this kind of early-warning analysis.
Anthropic compares workers in the top quartile of observed exposure with workers in the least exposed group and examines unemployment trends since 2016, especially after the release of ChatGPT. The central result is that there is no statistically meaningful increase in unemployment among workers in the most exposed occupations since late 2022.
The exposed group may show a slight rise, but the paper says the effect is indistinguishable from zero in its difference-in-differences analysis. At the same time, the study highlights one possible early signal among younger workers aged 22–25: hiring into highly exposed occupations appears to have weakened.
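A bare-bones version of that comparison might look like the difference-in-differences regression below. The variable names and the CPS-style person-month extract are assumptions for illustration; this is not the authors’ actual specification.

```python
# Hedged sketch of a difference-in-differences comparison like the one described
# above; the input file, column names, and specification are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# cps is assumed to be a person-month extract with one row per worker-month:
#   unemployed : 1 if the worker is unemployed that month
#   exposed    : 1 if the worker's (previous) occupation is in the top exposure
#                quartile, 0 if it is in the least-exposed group
#   post       : 1 for months after ChatGPT's release (late 2022)
#   occupation : occupation code, used here only for clustering standard errors
cps = pd.read_csv("cps_extract.csv")  # hypothetical file

# The interaction term exposed:post is the difference-in-differences estimate:
# the extra change in unemployment for exposed workers after late 2022.
did = smf.ols("unemployed ~ exposed + post + exposed:post", data=cps).fit(
    cov_type="cluster", cov_kwds={"groups": cps["occupation"]}
)
print(did.params["exposed:post"])  # the paper reports this is indistinguishable from zero
```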
The article reports that job finding rates into exposed occupations visually diverge in 2024 and that the estimated post-ChatGPT effect is a 14% drop in the job finding rate relative to 2022, although the paper describes this result as only barely statistically significant and open to alternative interpretations.
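Note that the 14% figure is a relative decline in the job finding rate, not a drop of 14 percentage points; a quick calculation with a made-up baseline shows the distinction.

```python
# The 30% baseline below is invented; only the 14% relative decline comes from the paper.
baseline_rate = 0.30                     # hypothetical monthly job finding rate in 2022
post_rate = baseline_rate * (1 - 0.14)   # a 14% relative decline, not 14 pp
print(f"{baseline_rate:.1%} -> {post_rate:.1%}")  # 30.0% -> 25.8%
```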
Anthropic concludes that jobs such as computer programmers, customer service representatives, and financial analysts are among the most exposed under its method, yet the study still finds no clear rise in unemployment for the workers in those occupations so far. The more tentative signal is a possible slowdown in hiring for young entrants into exposed professions.
The authors emphasize that this framework should improve as new usage and labor data become available, and they explicitly say the current work can be updated over time. They also note that one obvious next step is to study recent graduates entering exposed fields, since that may be where early labor-market effects appear first.

