The Highest-Paid Jobs Are the Most Exposed to AI, and Leaders Haven't Realized It Yet
On March 15, 2026, Andrej Karpathy, co-founder of OpenAI and former director of AI at Tesla, published what he described as a project created in "two hours on a Saturday morning": an interactive map assigning AI exposure scores to 342 occupations from the U.S. Department of Labor. The scale ran from 0 to 10. Physical jobs, such as roofing or construction labor, hovered around 0 or 1, while software developers, financial analysts, lawyers, writers, and mathematicians clustered between 9 and 10. Karpathy deleted the project hours later, saying it had been "wildly misinterpreted." But the map had already circulated, and the data it presented didn't disappear with it.
The central finding is not that AI threatens jobs; that much we already knew. What Karpathy made visible, through a visualization any executive could read in thirty seconds, is the direct correlation between salary and vulnerability: positions paying over $100,000 annually averaged an exposure score of 6.7, the highest of all salary ranges analyzed. Those earning less than $35,000 averaged 3.4. Approximately 60 million U.S. jobs were marked as highly exposed, with total annual salaries nearing $3.7 trillion.
That's not just a human resources statistic. It's a signal about organizational architecture.
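For readers who want to interrogate the numbers rather than the headline, the aggregation behind figures like these is easy to reproduce. Below is a minimal Python sketch, assuming a hypothetical CSV export of the map with one row per occupation and median_salary and exposure columns; the file name and column names are illustrative, not Karpathy's actual schema.

import csv
from collections import defaultdict

# Salary bands mirroring the article's cut points (illustrative).
BANDS = [(0, 35_000, "<$35k"),
         (35_000, 100_000, "$35k-$100k"),
         (100_000, float("inf"), ">$100k")]

def band(salary: float) -> str:
    for low, high, label in BANDS:
        if low <= salary < high:
            return label
    raise ValueError(salary)

# band label -> [sum of exposure scores, occupation count]
totals = defaultdict(lambda: [0.0, 0])

# "exposure_map.csv" is a hypothetical export: one row per occupation,
# with a 0-10 exposure score and a median annual salary.
with open("exposure_map.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = band(float(row["median_salary"]))
        totals[key][0] += float(row["exposure"])
        totals[key][1] += 1

for label, (total, count) in totals.items():
    print(f"{label}: mean exposure {total / count:.1f} across {count} occupations")

Loaded with the published scores and salaries, the three printed means are the numbers the salary-versus-vulnerability claim rests on; anyone disputing the gap between 6.7 and 3.4 can rerun the arithmetic in seconds.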
The Map No One Wanted to Read Aloud
The first thing to understand about Karpathy's analysis is that it wasn't designed as a verdict. He described it as an exploratory tool, inspired by a book he was reading, meant to let others visualize BLS data differently. It was neither a predictive model nor a roadmap for mass layoffs. In his words, it was a weekend experiment.
However, the reaction it stirred was disproportionate precisely because it touched on something organizations have avoided naming for years: the roles highest in intellectual content and pay are exactly the ones that large language models replicate most easily. Data analysis, structured writing, legal review, financial modeling, code generation: activities carried out on screens, all sequential, all documented, all trainable.
Elon Musk responded that same day on X with his usual prediction: "All jobs will be optional. There will be a high universal income." The phrase is familiar; Musk has repeated it across various platforms, including a December 2025 post about robot- and AI-driven abundance. What matters strategically is not whether Musk is right about his utopian horizon, but that his response to Karpathy's map was immediate and unqualified. That says more about the state of the executive debate than about AI itself: the C-suite hovers between the fatalism of "everything will change" and the denial of "that doesn't affect our core business."
Neither position constitutes a strategy. Both are ways of avoiding decision-making.
The Problem Is Not Automation. It’s Selective Paralysis
The Anthropic study published in early March 2026, weeks before Karpathy's map, added a dimension that many media outlets overlooked: workers most exposed to AI tend to be older, better educated, higher paid, and, in many sectors, women. And while there has been no systematic increase in unemployment since late 2022, there has been a slowdown in hiring younger workers into high-exposure roles. This is not mass layoffs. It is silent substitution: vacancies that simply never get filled.
This distinction matters more than it seems. A company that stops hiring junior analysts because its AI models process the same reports isn't making visible cuts. It is reshaping its talent pyramid without declaring it as policy, with medium-term organizational consequences that few boards are measuring: erosion of the internal talent base, concentration of knowledge in senior layers with no trained successors, and growing dependence on tools that no internal team understands thoroughly.
Citadel Securities reported 11% year-on-year growth in demand for software engineers in 2026, which suggests automation isn't collapsing specific job markets overnight. But that figure coexists with Anthropic's findings without any real contradiction: demand for senior profiles persists while the training of new generations into those roles slows down. The market keeps buying the finished product while increasingly neglecting investment in the supply chain that produces it.
For a CFO eyeing the quarter, this may seem efficient. For a CEO planning five years ahead, it’s a way to cannibalize future capabilities.
What Karpathy’s Map Demands from the C-Level Today
There's an understandable temptation to treat the AI exposure analysis as a talent or technology problem. It is not. It's a matter of resource allocation and of defining bets. When the roles that concentrate the most intellectual capital in an organization are simultaneously the most replicable by language models, the question leadership must answer is not "When do we automate?" but "In which dimensions of human work will we build advantages that cannot be automated?"
This decision involves real sacrifice. It means ceasing to invest in processes that AI can execute at a fraction of the cost and redirecting those resources toward capabilities that current models cannot reach: judgment under severe ambiguity, trust built through long-term relationships, leadership in contexts of high uncertainty, and the design of frameworks in domains where the models themselves lack sufficient training data. These are not romantic functions. They are functions that no 2026 language model performs consistently without significant human oversight.
Organizations that keep assigning their best talent to tasks a model can complete in seconds are not being prudent. They are paying strategic-asset prices for what is becoming a commodity. And the market will eventually adjust that differential, with or without warning.
The average exposure across all analyzed jobs was 5.3 out of 10. That is neither apocalypse nor a comfortable margin. It signals a transition that has already begun and will not wait for the next budget cycle to be accounted for.
The discipline that separates organizations navigating this transition from those suffering through it is not the speed of technological adoption. It’s the clarity to decide, without ambiguity, which functions they will protect as sources of differential advantage and which they will deliberately surrender to automation. Doing both half-heartedly, out of fear of the implications of choosing, is the only gamble that guarantees irrelevance.