Want to be AI-proof? Be the person who gets sued.
AI-proof jobs will be those that sit at critical junctures where failure demands a human scapegoat.
Yesterday's massive power outage across Spain, Portugal, and other parts of Europe sent ripples through the continent. EU officials insist there's no evidence of a cyberattack—yet. Time will tell.
Regardless of what actually caused this continental blackout, the incident illuminates a profound truth about which careers will not just survive but remain indispensable as AI reshapes everything around us.
The Silent Service
There exists a "silent service" in society. I don't mean this in the traditional sense, as the nickname for the submarine force, but rather a class of people whose work is indisputably mission-critical. These are the individuals whose daily labor no lockdown, no crisis, and no wave of automation could ever pause.
During COVID, they didn't show up because they wanted to—they showed up because society would collapse without them. And I'm not talking about marginally essential roles like baristas (no matter how desperately society seemed to crave those pumpkin spice lattes during lockdown).
I’m really referring to the jobs subject to absolutely zero debate over their essential nature. The ER doctors and EMTs racing toward disaster while everyone else flees. The nuclear power plant manager making split-second decisions during system anomalies. The "dude named Ben" who keeps the government network running. These are the professionals who remain invisible to most people—until the moment they become the only people who matter.
These jobs share a common trait: they're mostly thankless. And when the people who occupy these roles make mistakes, the consequences are extraordinary. They don't just get fired—they get crucified.
The Accountability Imperative
There exists a set of jobs so critical that people, at least within any one generation, simply won't accept those jobs being taken over by AI. As the now-famous 1979 IBM slide presciently declared: "A computer can never be held accountable. Therefore a computer must never make a management decision."
This gets at something primal in human nature. We don't just need positive results—we need someone to blame when things go wrong. We require a flesh-and-blood human being who can be hauled into court, fired in disgrace, or subjected to a congressional tongue-lashing, even if doing so changes nothing and deters no one. Justice, or at least the theater of justice, demands a human sacrifice.
The Black Box Problem
Computers have steadily crept into decision-making roles—first in seemingly trivial domains like fraud detection, then gradually in increasingly consequential fields like disease diagnosis.
The discourse around how to reason about "black box" algorithms—systems whose internal logic even their designers struggle to comprehend—has been a fixture in Silicon Valley for decades. But this conversation only truly captured mainstream attention following the May 6, 2010 Flash Crash. The spectacle of quantitative trading savants attempting to explain biologically-inspired, self-correcting genetic algorithms to Congressmen who could barely operate their smartphones was both comedic and tragic. Ultimately, no one in Washington DC could bring themselves to understand how gradient-boosted trees work. So what did regulators do? They found a lone trader in the UK named Navinder Sarao, who traded a modest account of a few million dollars from a bedroom in his parents' house, and pinned the entire market collapse on him. (Side note: 15 years later, I might still be the only person in Washington DC who intimately understands XGBoost.)
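For the curious, the core idea really is graspable in a dozen lines. Here's a minimal sketch of gradient boosting for regression, assuming numpy and scikit-learn are available—a toy illustration on made-up data, not XGBoost's regularized implementation and certainly not whatever was running on Wall Street in 2010. You fit a small tree to whatever the ensemble still gets wrong, add it to the stack, and repeat:

```python
# Toy sketch of gradient boosting for regression (squared loss).
# Each shallow tree is fit to the residuals of the ensemble so far;
# the final prediction is the running sum of all the trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))          # toy feature
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)  # toy target

learning_rate, n_rounds = 0.1, 50
prediction = np.zeros_like(y)  # start from a constant zero model
ensemble = []

for _ in range(n_rounds):
    residual = y - prediction  # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
    prediction += learning_rate * tree.predict(X)
    ensemble.append(tree)

print(f"training MSE after boosting: {np.mean((y - prediction) ** 2):.4f}")
```

That's the whole trick: a stack of shallow trees, each one correcting the last. The opacity comes from the sheer number of them, not from any single unfathomable step.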
The Flash Crash marked the inflection point at which the public, and the policymakers they elect, became simultaneously aware of and completely mystified by "black box" algorithms. While the obsession with algorithmic trading may have faded—bull markets have a way of healing all wounds—the unease directed toward probabilistic algorithms has only intensified. Just search "algorithm congressional hearing," and you'll witness similarly absurd attempts by legislators to grasp the use of algorithms in credit decisions, public safety monitoring, health insurance provision, military targeting, image generation, and social media feeds.
The Paradox of Acceptance
Despite this nearly universal lack of understanding about how probabilistic algorithms function, people have paradoxically grown more comfortable with "black box" systems making increasingly consequential decisions. Whether it's a Tesla navigating rush hour traffic, a spacecraft autonomously docking with the International Space Station, or a neural network-powered firewall determining who can access sensitive government networks—we've gradually ceded control to systems we don't fully comprehend.
Notwithstanding this trend toward greater acceptance of "black box" systems in critical infrastructure, one truth remains crystal clear: if you want a career that will endure through the most explosive growth in AI capability anyone has ever witnessed, then position yourself as the person who can be blamed when things inevitably go wrong.
The Accountability Advantage
The AI models will certainly master your skills. First they conquered creative writing, then computer programming and vehicle operation. Their initial attempts to replicate human performance in these domains always appear laughably inadequate—until suddenly, they don't.
What the AI models will never be able to do, however, is satisfy that fundamental human need to have someone—a real person with a beating heart—to blame, and subsequently punish.
So when plotting your career trajectory, gravitate toward roles that require you to carry malpractice insurance. Become the individual personally named in the lawsuit against the hospital, the professional who might one day be summoned before a congressional committee to face aggressive questioning about alleged misconduct. These positions will remain maximally AI-resistant, if only to sustain the continued growth of the insurance and litigation industries.
Because when the system eventually crashes, people won't demand to speak with the algorithm. They'll demand the name of the human responsible.
TL;DR: The survival of certain professions in the age of AI will be anchored not just in the need for skill, but in the irreplaceable human need for accountability.