Anthropic Released a Shocking List of Jobs AI Will Take Over



Let’s be honest: we’ve all seen the breathless headlines. “AI will replace 85 million jobs.” “Robots are coming for your career.” “No profession is safe.” For years, we’ve been marinating in predictions about artificial intelligence and the workforce that range from mildly alarming to outright apocalyptic. Most of those forecasts were based on theoretical modeling: computer scientists and economists essentially asking, “What could AI do?” and extrapolating from there.

Anthropic just changed the conversation. In a major piece of research published in early 2025, the AI safety company behind the Claude family of models took a radically different approach. Instead of asking what AI is theoretically capable of, they looked at what people are actually using it for, right now, in their professional lives. The result is a study that’s at once more grounded and more urgent than anything we’ve seen before, and it’s spreading rapidly across social media, with good reason.

What they found paints a more complicated and, in some ways, more honest portrait of the AI-and-work landscape than we’ve been given before. Yes, certain high-paying, white-collar professions are deeply exposed. But the gap between AI’s theoretical capability and its observed usage across most industries is enormous, and that gap itself is one of the most interesting findings in the entire study.

In this piece, we’re going to unpack the full picture: what Anthropic actually did, what they found, what it means for specific occupations, and, most importantly, what you should actually do about it.

How Anthropic Actually Built This Research

Most job displacement research is conducted from the outside in. Researchers look at a job description, study what tasks it involves, assess whether a language model or automation system could theoretically perform those tasks, and then assign a risk score. It’s a reasonable methodology, but it has a fatal flaw: it tells you nothing about whether or how AI is actually being adopted in those roles.

Anthropic’s Economic Index took a fundamentally different route. The company analyzed tens of millions of conversations on Claude, its own AI assistant, to see which professions were using it, how they were using it, and whether they were using it to augment their work (doing it alongside AI) or to automate it (replacing tasks entirely with AI output).

This means the dataset isn’t hypothetical. It reflects real usage by real professionals in 2024 and early 2025: lawyers asking Claude to draft motions, software engineers using it to debug code, teachers asking it to build lesson plans, sales reps using it to write cold emails. Actual behavior, at scale.

The team then mapped this observed usage against occupational categories defined by the O*NET system, the standard U.S. government taxonomy for classifying jobs by their tasks, skills, and work activities. This gave them a structured way to compare across industries and to contrast what the AI could theoretically handle with what professionals were actually asking it to do.
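
To make that roll-up concrete, here is a minimal, hypothetical sketch of the mapping step in Python. The keyword matcher below is a deliberately crude stand-in for the model-based classification Anthropic actually used, and the keyword-to-group table is illustrative, not drawn from the study:

```python
from collections import Counter

# Illustrative stand-in for the real pipeline: map each conversation to an
# occupational major group, then tally usage by group. Anthropic's actual
# study classified conversations against O*NET task statements with a model;
# this toy version uses naive keywords purely to show the roll-up logic.
KEYWORD_TO_GROUP = {
    "debug": "Computer & Math",
    "unit test": "Computer & Math",
    "contract": "Legal",
    "demand letter": "Legal",
    "lesson plan": "Education & Library",
    "cold email": "Sales",
}

def classify(conversation: str) -> str:
    """Return the occupational group whose keyword first matches."""
    text = conversation.lower()
    for keyword, group in KEYWORD_TO_GROUP.items():
        if keyword in text:
            return group
    return "Unclassified"

conversations = [
    "Help me debug this Python function that parses server logs",
    "Draft a demand letter for a breach-of-contract dispute",
    "Build a lesson plan on photosynthesis for 7th graders",
]

usage_by_group = Counter(classify(c) for c in conversations)
print(usage_by_group)
# Counter({'Computer & Math': 1, 'Legal': 1, 'Education & Library': 1})
```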

The Radar Chart That’s Shocking People Online

If you’ve seen this story circulating on X, LinkedIn, or Instagram, you’ve probably seen a version of the spider/radar chart from Anthropic’s study. Here’s an interactive recreation of that data, built from the published figures:

[Interactive radar chart recreated from Anthropic’s published figures: theoretical LLM capability (blue) vs. observed Claude usage (red) by occupational group. Source: Anthropic]

What jumps out immediately from this visualization is the dramatic asymmetry. The blue area, representing what Claude and similar LLMs are theoretically capable of across job categories, is large and broadly spread. The red area, actual observed usage, is far smaller, concentrated in a few key sectors, and in many industries barely registers at all.

Management, Business & Finance, Computer & Math, and Legal are where the observed usage most closely approaches the theoretical ceiling. These are knowledge-intensive sectors where the nature of the work (reading, writing, analyzing, synthesizing) maps naturally to what a language model does well.

By contrast, sectors like Agriculture, Construction, Installation & Repair, Food & Serving, and Transportation show almost no observed AI usage, even though, in certain administrative dimensions, those jobs do have theoretical LLM touchpoints (paperwork, scheduling, reporting). The gap is not because the AI can’t help at all; it’s because the core, value-generating tasks in those jobs remain stubbornly physical and context-dependent.
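
If you want to reproduce the shape of that asymmetry yourself, here is a minimal matplotlib sketch of the same chart type. The values below are rough placeholders chosen only to illustrate the blue-versus-red gap; they are not Anthropic’s published figures:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder values on a 0-1 scale, chosen only to mimic the chart's shape.
# These are NOT Anthropic's published figures.
sectors = ["Computer & Math", "Legal", "Business & Finance", "Management",
           "Arts & Media", "Education", "Construction", "Transportation"]
theoretical = [0.9, 0.8, 0.8, 0.7, 0.7, 0.6, 0.2, 0.2]    # blue area
observed    = [0.7, 0.4, 0.4, 0.3, 0.3, 0.2, 0.05, 0.05]  # red area

# One angle per sector; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(sectors), endpoint=False).tolist()
angles += angles[:1]
theoretical += theoretical[:1]
observed += observed[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, theoretical, color="tab:blue", label="Theoretical capability")
ax.fill(angles, theoretical, color="tab:blue", alpha=0.25)
ax.plot(angles, observed, color="tab:red", label="Observed usage")
ax.fill(angles, observed, color="tab:red", alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(sectors, fontsize=8)
ax.set_ylim(0, 1)
ax.legend(loc="upper right", bbox_to_anchor=(1.35, 1.1))
plt.tight_layout()
plt.show()
```

Repeating the first point at the end of each series is what closes the polygon; without it, the outline would have a visible gap.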

Jobs at Higher Risk: A Sector-by-Sector Breakdown


1. Computer & Math: The Irony for Tech Workers

Here’s the richest irony of this entire study: the sector most exposed to AI disruption, according to both theoretical capability and observed usage, is Computer & Math, the field that built the AI in the first place. Software engineers, data scientists, mathematicians, statisticians, and systems analysts are among the heaviest users of Claude and similar tools, and the tasks AI can handle in their domain are genuinely substantive.

Code generation, debugging, documentation, code review, test writing: these are all tasks that language models can now perform at a level that genuinely competes with junior and sometimes mid-level engineers. GitHub Copilot, Claude, and ChatGPT are regularly described by software professionals not as toys but as colleagues. Multiple studies from productivity researchers have shown that experienced developers using AI assistants complete certain coding tasks 40–60% faster.

The implication isn’t that software engineers are about to be unemployed en masse; demand for the field remains enormous. But it does suggest that fewer engineers may be needed to ship the same volume of code, which will change hiring patterns, salary curves, and the value placed on junior roles. The floor of the market, entry-level coding jobs, is the most exposed segment.

2. Legal: Where AI Handles the Grunt Work

The legal industry is undergoing a quiet earthquake right now. Law firms, corporate legal departments, and even courts are experimenting with AI for document review, contract analysis, legal research, brief drafting, and due diligence. The theoretical coverage for legal tasks is extremely high, because so much of what lawyers do at the associate level is fundamentally information processing: reading, summarizing, pattern matching, and drafting according to templates.

Observed usage in legal tasks on Claude is already significant. Users are asking the model to analyze contracts, identify risk clauses, summarize depositions, draft demand letters, and research case law. It’s not perfect (hallucination in legal contexts is a serious problem), but the direction of travel is clear. The work of a first-year law associate, which was already being commoditized by LegalZoom and similar services, is now genuinely under pressure from a much more capable technology.

What’s harder to replace? The courtroom work. The judgment calls. The strategic advice that partners give clients based on decades of experience and relationship capital. The AI can read every case ever decided; it can’t read a room, understand a client’s real risk tolerance, or know when to settle.

3. Business & Finance: The White-Collar Core

Finance is another sector where the core productive activity (analyzing data, building models, writing reports, processing information) maps almost precisely onto what LLMs do well. Financial analysts who spend their days reading earnings reports, building Excel models, and drafting investment memos are, frankly, doing work that AI can substantially assist with or, for the more routine portions, replace.

The consulting world is particularly interesting here. Firms like McKinsey, BCG, and Deloitte have been among the most aggressive early adopters of AI tools, and the reason is obvious: a huge portion of consulting deliverables are polished PowerPoints and Word documents summarizing analysis. An AI that can draft the first version of a market analysis report in 20 minutes isn’t threatening the senior partner, but it is fundamentally changing what a team of analysts is for.

4. Management: The Surprise Entry

Management is an interesting and perhaps underappreciated entry on the high-risk list. When we think of vulnerable jobs, we tend to think of repetitive or routine work. Management doesn’t fit that mental model. But the research suggests a significant portion of what managers actually spend their time on (communicating, writing reports, scheduling, synthesizing information, preparing presentations) is deeply AI-susceptible.

Middle management, particularly, is at an inflection point. The traditional function of a middle manager is to serve as an information relay between frontline workers and executive leadership: collecting data, summarizing it, and communicating decisions. AI is extraordinarily good at exactly this kind of information intermediation. This doesn’t mean all managers are threatened; the judgment, people, and culture work they do remains genuinely hard to automate. But the administrative spine of management is certainly exposed.

5. Arts & Media: The Creative Paradox

No area of the AI-and-work debate has generated more heat than creative work. And the Anthropic data reflects why: Arts & Media shows both high theoretical coverage and a meaningful level of observed usage. Copywriters, journalists, graphic-concept artists, and content creators are already using these tools in their workflows.

The nuance here is crucial and frequently lost in the debate. What’s being threatened isn’t creativity itself; it’s commoditized creativity. The stock image industry has already been largely disrupted by AI-generated images. The content mills, websites that hire writers at $10 an article to produce SEO fodder, are essentially gone. But thoughtful, voice-driven journalism, original long-form writing, genuinely novel artistic vision? Those are doing fine. The market is bifurcating: AI handles the commodity tier, humans own the premium tier.

6–11. Education & Library, Architecture & Engineering, Life & Social Sciences, Sales, Office & Administrative, Social Services

Each of these sectors presents its own specific vulnerability profile, but they share a common thread: they involve substantial amounts of information handling, communication, and standardized knowledge application, all areas where language models are genuinely competent. Education faces AI-powered tutoring and curriculum generation. Architectural and engineering firms are seeing AI assist with design iteration and documentation. Sales is increasingly powered by AI-generated outreach, call scoring, and CRM updates.

Social Services is perhaps the most ethically fraught entry on this list. The actual human work of social services (building trust with vulnerable clients, navigating complex family dynamics, making nuanced judgment calls about safety) is deeply human. But the case management, documentation, and administrative burden that buries social workers is absolutely AI-addressable, which could either free them up or, if handled poorly by institutions focused on cost-cutting, become a pretext for headcount reductions.


Jobs Safer From AI: Why Physical Work Has a Moat

The jobs on the “safer” side of Anthropic’s divide share a characteristic that doesn’t show up in any job description but that shapes them profoundly: they require operating in the physical world, in real time, under dynamic and unpredictable conditions. Language models live entirely in the realm of tokens and text. They cannot hold a wrench, pour concrete, navigate a forklift through a crowded warehouse, or perform a physical assessment of a patient.

This creates what economists are starting to call the “physical moat”: a genuine, durable barrier to AI replacement that exists not because the cognitive content of a job is complex, but because the job’s value is fundamentally embedded in embodied action.

Construction & Trades: The Billion-Dollar Skills Shortage Gets Bigger

America already faces a severe shortage of skilled tradespeople. The plumbers, electricians, HVAC technicians, carpenters, and construction managers the country desperately needs are both hard to find and increasingly well compensated. According to the Bureau of Labor Statistics, electricians earned a median annual wage of $61,590 in 2023, with master electricians in many markets clearing six figures. The median for plumbers was $61,550. These aren’t poverty wages, and they’re going up.

Anthropic’s data confirms what the trades industry already suspected: AI has very little to offer in terms of replacing the core work. A language model can help a contractor write a bid proposal or estimate materials; it cannot assess the specific challenges of a 1940s-era electrical panel in a 108-year-old building, improvise when the crawl space turns out to be half-flooded, or make the dozens of real-time physical judgment calls that define a day in the trades.

If there’s an irony here, it’s a sharp one: many of the white-collar workers now anxiously reskilling and pivoting because of AI would have been better served, by purely financial metrics, by going to a vocational program instead of college.

Healthcare Practitioners: Protected by Liability, Complexity, and Touch

The inclusion of Healthcare Practitioners on the “safer” list deserves careful examination, because AI is actually making enormous inroads into healthcare. Diagnostic AI systems are demonstrating radiologist-level performance in reading certain scans. AI models are being used in drug discovery. Clinical documentation tools are reducing the administrative load on physicians substantially.

And yet (and this is critical) the practice of medicine at the point of care remains deeply human. Patients do not want to be examined by a robot. The physician-patient relationship has therapeutic value independent of the information-processing content of the appointment. The liability and regulatory frameworks around clinical practice are extraordinarily robust. And the judgment involved in integrating a patient’s medical history, current presentation, patient preferences, family situation, and the probabilistic reality of diagnosis is genuinely complex in ways that current AI handles poorly.

What AI will change in healthcare is not whether doctors exist, but what they spend their time on: shifting them toward the complex, the ambiguous, and the relational work that only humans can do, while AI handles the documentation, research, and pattern recognition that currently consume hours of clinical time.

Transportation & Logistics: Autonomous Vehicles Haven’t Arrived Yet

This one might surprise you, given how much has been written about self-driving cars, autonomous trucks, and delivery robots. And yes, theoretically, transportation is under long-term pressure from autonomous systems. But the observed data reflects reality rather than hype: fully autonomous vehicles at commercial scale have repeatedly proven harder to achieve than predicted, and the regulatory and insurance frameworks for widespread deployment don’t yet exist.

Truck drivers, delivery drivers, bus operators, and train operators are in a more secure position than the headlines of five years ago suggested. The timeline for meaningful displacement in transportation has been pushed out considerably by the engineering realities of operating autonomously in complex, human-shared environments. This is a sector where the long-term risk is real, but the medium-term disruption has been substantially overestimated.

Food & Serving, Personal Care, Grounds Maintenance: The Dignity Economy

There’s something philosophically interesting about the fact that some of the most AI-resistant jobs are the ones where the human interaction itself is central to the product. When someone hires a personal care aide, they are not just buying a set of physical tasks. They are buying presence, reliability, warmth, and the specific comfort of being cared for by another human being. A robot, however capable, cannot fully substitute for this.

Similarly, the experience of a well-executed meal in a restaurant, of a barber who remembers your preferences, of a landscaper who understands the specific microclimate of your property and has tended it for years: these services have a human component that customers are willing to pay for, even as AI reshapes the broader economy.

The Huge Gap Between What AI Can Do and What People Actually Use It For

Of everything in this study, the finding that deserves the most attention, and has received the least in the popular coverage, is the enormous gap between theoretical AI capability and observed usage across most industries.

Look at the radar chart again. In sector after sector, the blue (theoretical coverage) extends far further from the center than the red (observed usage). In many cases (Agriculture, Construction, Transportation, Food & Serving), observed AI usage is nearly invisible, even though some administrative dimensions of those jobs are theoretically LLM-addressable.

Why does this matter? Because it complicates the simple story. If AI disruption were purely driven by what AI can technically do, we would expect to see much broader adoption already. But the observed data suggests that adoption is being shaped by a complex mix of factors: worker familiarity and training, employer policies and infrastructure, sector culture, regulatory constraints, and the fundamental reality that most workers haven’t been given AI tools or taught how to use them effectively.


The gap data also implies that the disruption, when it comes more fully, may be more sudden than gradual. As AI tools become more integrated into enterprise software, as employees receive more training, and as employers implement AI workflows more systematically, there could be threshold moments, particularly in legal, finance, and education, where adoption crosses a tipping point and the displacement effect accelerates rapidly.

This Isn’t a Doom-and-Gloom Story: Here’s the Nuance

Reading this study as a simple “your job is doomed / your job is safe” binary would be a mistake. The picture is considerably more textured than that, and Anthropic is explicit about several important qualifications.

Augmentation Is Currently More Common Than Automation

One of the most interesting findings from the observed usage data is that the predominant pattern of AI use in professional contexts is augmentation (workers using AI to do their existing jobs better) rather than automation (where the AI entirely replaces a human task). Writers using Claude to help brainstorm are still writers. Engineers using it to debug code are still engineers. Lawyers using it for initial research drafts are still doing the legal judgment work themselves.

This doesn’t mean automation isn’t happening; it is, in pockets. But the story of the current moment is mostly about AI making workers more productive, not about it replacing them wholesale. The ratio of augmentation to automation, in Anthropic’s data, skews substantially toward augmentation.

New Jobs Are Being Created

Every major technological wave has both destroyed jobs and created new ones, often in ways that were hard to predict in advance. AI is no exception. The fields of prompt engineering, AI model evaluation, AI ethics and governance, AI training data curation, and AI customer success have all emerged from near-zero to significant employment within the past two years. More will follow. The question of whether new AI-created jobs will be enough to offset displaced ones, and whether the timing will work for current workers, is genuinely uncertain, but historical precedent suggests the economy does adapt.

Geography and Sector Concentration Matter Enormously

The AI-and-jobs impact will not be distributed evenly. San Francisco, New York, and Boston, where the financial, legal, and tech industries concentrate, are far more exposed to AI disruption than the Midwest manufacturing belt or rural agricultural regions. This has profound implications for regional economic policy. The communities most at risk of disruption are not necessarily the ones that have the most political visibility or the most robust safety nets.

The Highest-Paid Roles Within Exposed Fields Are Safer

Within every “high-risk” occupation, there is a distribution of roles. The partner at a law firm who has spent 30 years building expertise, judgment, and client relationships is far less exposed than the first-year associate doing document review. The senior financial analyst who synthesizes complex, novel investment theses is more protected than the junior analyst running DCF models from templates. The pattern across high-risk sectors is consistent: AI threatens the routine and the commoditized; it amplifies and extends the expert and the deeply skilled.

What Does This Mean For You, The Worker?

Okay, enough about the macro picture. What should an individual person (someone with a career to protect, bills to pay, and maybe a family to support) actually take from this research?

If You’re in a High-Risk Field: Don’t Panic, But Do Move

The study does not say your job will be gone by Tuesday. It says your job is exposed to AI disruption over the coming years, and that adoption is accelerating. The right response is not paralysis, and it’s not dismissal. It’s a deliberate, structured strategy for moving up the value chain within your field.

For a lawyer, that means developing the expertise, judgment, and client relationship skills that AI cannot replicate. For a software engineer, it means shifting from being someone who writes code to someone who architects systems, makes product decisions, and leads engineering teams. For a financial analyst, it means building the interpretive and strategic judgment that transforms raw data into actionable decisions that a CFO or board can act on.

In every case, the direction is the same: move toward the work that requires irreducible human judgment, relational intelligence, ethical reasoning, and contextual wisdom. Move away from the work that is primarily information processing, template application, or routine pattern matching.

If You’re in a “Safer” Field: You’re Not Immune, You’re Just Later

The trades, healthcare, and other physical/relational sectors face less immediate disruption. But this isn’t a permanent exemption certificate. The long-term trajectory of robotics and physical AI is real, even if the timeline is longer than AI optimists have predicted. Workers in these fields should be monitoring developments, developing skills in AI tool usage where relevant to their administrative work, and building the deep expertise and reputation that will differentiate them regardless of what technology does.

Develop AI Fluency Regardless of Where You Work

The worker who understands how to use AI tools effectively, not just as a gimmick but as a genuine productivity multiplier, will have a structural advantage in virtually every field over the coming decade. This isn’t about becoming a programmer. It’s about developing the judgment to know when AI helps, how to prompt it effectively, how to verify its outputs, and where its limitations create risks. These are learnable skills that are becoming table stakes in a growing share of professional work.

What Policymakers and Companies Are Getting Wrong

The Anthropic study has significant implications beyond individual career decisions. For the institutions (employers, governments, educational systems) tasked with managing the transition, there are several key points that the current policy debate is largely missing.

The Problem Isn’t Just Job Loss; It’s Transition Speed

Economic theory teaches us that technological disruption creates new jobs to replace old ones. The historical record broadly supports this: from the agricultural revolution to industrialization to the internet, humans have consistently found productive things to do even as technology rendered old roles obsolete. The problem is that history also shows this transition can be brutal for the specific workers whose skills become less valuable, especially if they are mid-career and lack easy retraining pathways.

The pace of AI development makes this particularly acute. Previous technological transitions unfolded over decades, giving education systems, labor markets, and social safety nets time to adapt. AI is moving faster. The gap between “this is coming” and “this is here” is compressing in a way that our institutions are not designed to handle.

The Education System Is Misaligned With What’s Coming

American higher education continues to produce large numbers of graduates in fields (paralegal work, certain financial analysis roles, junior software development) that are in the direct path of AI disruption. Meanwhile, vocational and technical education, which produces workers for the relatively AI-resistant skilled trades, remains chronically underfunded and culturally stigmatized.

The status hierarchy that places a four-year English degree above a two-year HVAC certification is going to look increasingly archaic as the economy adjusts to AI’s effects on white-collar work. Policy interventions that make vocational training more accessible, more funded, and more socially valued would go a long way toward preparing the workforce for what’s coming.

Companies That Use AI to Simply Cut Headcount Miss the Point

Anthropic’s data on augmentation vs. automation suggests something that competitive businesses should take seriously: the firms getting the most value from AI aren’t the ones using it to fire people. They’re the ones using it to make their existing people dramatically more capable, getting consultant-quality work out of analyst-level employees, enabling smaller teams to tackle bigger projects, and reducing the administrative burden that keeps high-value workers from doing high-value work.

The organizations that approach AI as a cost-cutting tool rather than a capability-multiplying tool are likely to discover that they’ve destroyed institutional knowledge, morale, and competitive capacity in the process. The smart play, and the more ethical play, is to ask how AI can enable each person to do more of what they’re uniquely good at, rather than simply asking how many people can be replaced.

What Anthropic’s Study Really Tells Us

Anthropic’s Economic Index research is significant not because it tells a simple story (it doesn’t). It’s significant because it’s the first major study grounded in what people are actually doing with AI, rather than what AI can theoretically do. That distinction matters enormously.

What the data shows is a technology that is already genuinely impacting a specific tier of knowledge work (the information-processing, writing, analysis, and reasoning-intensive tasks that dominate white-collar professional life) while having a much more limited real-world footprint in the physical, relational, and embodied work that makes up the other half of the economy.

It shows a huge gap between theoretical capability and observed adoption that will likely narrow over time as tools improve, as workers become more familiar with them, as employers integrate them into workflows, and as the regulatory and liability environments evolve.

And it shows that the pattern of disruption, so far, leans more toward augmentation than replacement: more toward AI as a tool that makes humans better than AI as a wholesale replacement for humans. This is not guaranteed to remain true as the technology develops; it’s a description of now, not a prediction of forever.

What should you do with all of this? The worst response is nothing: assuming either that your job is definitely fine or that disruption is inevitable and there’s nothing to be done. The best response is honest self-assessment: What tasks in your work does AI already do better than you? What’s the irreducible human value you bring to your role? Where should you be building skills and expertise right now, before the pressure becomes acute?

The window for proactive adaptation is still open. Anthropic’s study is, in a sense, a map not of a disaster, but of terrain that every thoughtful worker, educator, and policymaker should be studying carefully. The disruption is real, but it is not evenly distributed, it is not total, and it is not happening overnight. There is still time to navigate toward solid ground. The question is whether we’ll use it.
