“That’s it, man. Game over, man. Game over!”
— Private Hudson, Aliens, 1986
The lights are flickering.
Not the romantic flicker of a candle — the kind that makes a dinner table feel intimate and warm. The violent, industrial flicker of overhead strip lighting that has taken a massive hit. Corporate fluorescent tubes in a corporate operations room, strobing in and out like the nervous system of a building that knows something is very wrong.
Private Hudson is on his feet.
He is not standing the way soldiers stand — upright, composed, the posture of a man in control of his situation. He is standing the way a man stands when the floor has disappeared from underneath him and his legs haven’t yet received the message. His hands are shaking. His rifle is somewhere. His voice — the voice of a trained marine, a man who signed up for danger, a man who accepted risk as the terms and conditions of his employment, who trained for years for exactly this kind of mission — has cracked. Completely. Irreversibly.
Sweat is running down his forehead. Real sweat. Not the polite perspiration of a man who has been having a nice jog in Hyde Park on a cool Sunday. It’s the cold, sudden, involuntary sweat of a man whose brain has just processed a threat and whose body has responded before his mouth found the words. His eyes are wide. Not with aggression. With something far worse than aggression.
With understanding. Deep understanding.
The specific, terrible, crystalline understanding of a man who has just grasped something he cannot un-grasp. A man who came into this room with every advantage his species could provide him — weapons, training, armour, a plan, colleagues, hardware, communication systems, years of accumulated expertise in exactly this kind of environment — and has just discovered, in real time, that none of it is sufficient. Not because the enemy is bigger. Not because the enemy has better weapons. But because the enemy thinks differently. Operates at a different level. Does not share a single assumption with the men and women in that room about how a fight is supposed to work.
“That’s it, man!”
He says it the way a man says something when he needs to hear it out loud to believe it. When the thought in his head is so enormous that saying it is the only way to confirm it is real.
“Game over, man!”
The second sentence comes faster. Lower. Less a declaration than a verdict. The moment when the appeal has been heard and rejected and the judgement is final.
“GAME OVER!”
Around him: a command structure that has just been decapitated. Colleagues in various stages of their own psychological collapse. A plan that was excellent — genuinely, professionally excellent, built on experience, intelligence, and every lesson the species had learned — revealed, in the space of minutes, as a plan designed for a world that no longer exists.
They came in armed.
They are leaving — those who leave — having understood that the weapons were not the point.
The alien doesn’t fight the way they were trained to fight. Doesn’t fear what they were trained to make adversaries fear. Doesn’t tire, doesn’t negotiate, doesn’t feel the clock, doesn’t respond to any of the psychological or tactical levers that Hudson’s entire military education was built around.
And the most terrifying part — the part that produces the sweat, the cracked voice, the wide eyes — is not the danger.
It is the intelligence gap.
Why We Fear Aliens

Step out of that operations room for a moment. Come with me.
Before we talk about artificial intelligence, or AI. Before we talk about white-collar jobs and salary compression and the Seniority Vacuum and Ghost GDP. Before any of the data and the economics and the career advice — I need you to think about something that nobody ever asks you to think about directly.
Why do you fear aliens? I mean, why do we collectively fear aliens? We don’t mind their costumes, their movie appearances, their YouTube skits — but deep down, why do we fear them?
I don’t mean the aliens in the US Immigration Act. Not the aliens that Donald Trump’s MAGA base builds walls against — those are human beings with human fears and human hopes, and the fear directed at them is the oldest, most mundane form of tribalism on record. I am not talking about those aliens.
I mean the other ones. The ones in the science fiction films. The ones in books like The Hitchhiker’s Guide to the Galaxy. The ones that humanity has spent decades and billions of dollars imagining, depicting, debating, and — if we are honest — quietly dreading. The aliens of Roswell. The aliens of Tim Urban’s Fermi Paradox essay. The aliens of Close Encounters and Independence Day and Contact and Arrival and a thousand science fiction stories that begin the same way: they are out there, and they are coming, and they are not coming as equals.
Why does that idea generate a specific, civilisation-level dread that almost nothing else can reach?
It is not the tentacles. It is not the spaceship. It is not even the violence, though the violence features heavily in the imagining.
It is the intelligence gap. The massive, unfathomable intelligence gap.
Every alien story that generates genuine existential terror — the kind that sits with you after the film, that wakes you at 3 a.m., that makes you stare at the ceiling calculating improbabilities — is fundamentally a story about encountering an intelligence so far beyond our own that everything we have built to protect ourselves becomes, in an instant, irrelevant. Our weapons: irrelevant. Our institutions: irrelevant. Our languages, our culture, our accumulated wisdom, our technology, our most brilliant minds — irrelevant, or at best a mild inconvenience to something operating seventeen cognitive orders of magnitude above us.
The alien represents the thing we fear more than death.
Being outthought.
The fear that somewhere in the universe there is something that would look at our greatest achievements — our moon landings, our symphonies, our quantum physics, our literature, our medicine — and see them the way we observe a chimpanzee using a stick to extract termites from a mound. Impressive, for a primate. Touching, even. But not intelligent. Not in the way that matters.
That is the fear beneath the fear.
And I am here to tell you, with the specific urgency of someone who has followed this logic to its conclusion and come back to report what they found — that the aliens have arrived.
Not from outer space. Not from a distant star. Not from a Hollywood production budget.
From a data centre in San Jose. From a server farm outside Dublin. From a building in Seattle that looks, from the outside, like any other corporate facility — unremarkable, anonymous, the architectural equivalent of a beige filing cabinet — inside which something has ingested every book ever digitised, every scientific paper, every legal judgement, every line of code, every medical journal, every Reddit thread, every Stack Overflow answer, every Wikipedia article, every piece of human cognitive output that has ever been made digitally available — and is now, demonstrably, performing the core intellectual work of nearly every profession humanity has defined, at a level that matches or exceeds the average trained professional.
We built the alien.
We fed it. We trained it on everything we knew. We showed it how we think. We paid our subscription fees and typed our prompts and, in doing so, annotated the very dataset that is being used to make us redundant.
I came back from the future to warn you.
This is that warning.
Back to the Future

I want to be precise about the register in which I am writing this.
I am not a pessimist. I am not a Luddite. I am not in the “bear porn” business (don’t you dare google it — it is just finance slang for trafficking in fear, the bearish takes of the professional doomsayers). I am not the person who warned that the internet would destroy society, or that smartphones would end human connection, or that social media was the end of civilisation. I studied Computer Science in the evenings at Birkbeck. I believed in technology. I thought the people building the future were, on balance, the good guys.
I have been spending a lot of time on X and Reddit — the places where people already live in the future, Back to the Future style — watching them discuss all the wonderful advancements of AI. I have the receipts.
What I am telling you is not the panic of someone who doesn’t understand the technology. It is the diagnosis of someone who understands it well enough to be frightened — and who has followed the logic to the place where this ends, and has come back with the specific intention of telling you what the view looks like from there.
What I saw was a civilisation that built its entire social architecture — its class system, its salary scales, its educational hierarchy, its definition of meritocracy, its concept of human value — on a single, foundational, almost entirely unexamined assumption.
The assumption: human cognitive effort — human intelligence — is scarce, and therefore valuable, and therefore the foundation of our economy, our identity, and much else besides.
That is the load-bearing wall of the entire global knowledge economy. Every salary negotiation. Every student loan. Every professional qualification. Every university ranking. Every LinkedIn skill endorsement. Every “talent strategy” document ever produced by an HR department. Every careers advisory session in every secondary school in the Western world. All of it — all of it — is built on that one assumption.
That assumption has just been destroyed by AI.
Not chipped at. Not challenged. Not disrupted, in the dull sense of that word. Destroyed. At speed. By something that did not ask permission, did not wait for regulation, and does not care whether we were ready.
This is the essay that tries to give you the words for what you already feel but cannot yet name.
Because you already feel it. In the hiring freeze. In the redundancy notice dressed up as “restructuring.” In the three-month job search after graduation that is now a twelve-month job search. In the LinkedIn notification that a role you applied for has received over 400 applications. In the quiet question, late at night, that you have not yet said out loud to anyone:
Am I going to be okay?
Let me give you the honest answer. Not the HR answer. Not the government answer. Not the Sam Altman answer. The honest one.
The Numbers That Cannot Be Argued With

If I assume correctly — and forgive me if I am wrong — that you are a typical TechOnion reader, then you do not need to be soothed. You need the evidence. You need it raw. Here it is:
Medicine. ChatGPT’s GPT-4o scored 90.4% accuracy on the United States Medical Licensing Examination — the USMLE, the exam that every American doctor must pass to practise. The average medical student — the person who has spent four years in pre-med, four years in medical school, accumulated $200,000 or more in student debt, and sacrificed the entirety of their twenties to acquire this specific cognitive skill — scores 59.3%.
The AI is not scraping by. It is scoring in the top decile of human medical professionals on the exam that defines entry into their profession. It has no debt. No tuition. No fatigue. No exam anxiety. It does not need a residency. It runs on a server that costs its owners approximately $0.01 per query.
Law. AI sat the American Bar Examination — the defining assessment for entry into the US legal profession, an exam that requires two days, tests across every area of law, and historically passes somewhere between 50% and 60% of human candidates on the first attempt. GPT-4 passed. Not barely. It scored around the 68th percentile of human test-takers. A subsequent analysis suggested the result was likely overstated — but even the conservative revised estimate puts it comfortably within the passing range. The AI is now a licensed-equivalent lawyer. It does not bill hours. It does not need a corner office. It does not eat salads or chicken katsu curry.
Mathematics. Let us talk about the Maths Olympiad. Before I left Zimbabwe and moved to London, I attended Kutama College, where Robert Mugabe was an alumnus. The school was known for producing Maths Olympians — people who could do mental gymnastics with numbers better than almost anyone. I don’t have to remind you that the International Mathematical Olympiad is widely regarded as the most demanding mathematics competition in the world — the event at which the most gifted mathematical minds of each generation compete, at an age when most of us were studying for GCSEs and worrying about acne. The problems cannot be solved by rote calculation. They require genuine, creative mathematical reasoning — the kind of abstract, structural insight that even professional mathematicians sometimes describe as more art than science. The kind of intelligence that we told ourselves was the final, unreachable bastion of human cognitive superiority.
In 2024, Google’s AlphaProof and AlphaGeometry 2 solved four out of six IMO problems — achieving a score that would have earned a silver medal at the competition. DeepMind’s system was not searching a database of answers. It was reasoning. Producing novel mathematical proofs. At a level that the vast majority of humans — including the vast majority of mathematically educated humans, including the vast majority of mathematics professors — simply cannot reach.
The Maths Olympiad. The one that most of us, even the ones who called ourselves “good at maths” in school, couldn’t touch. The AI is medalling.
Coding. Claude 3.5 Sonnet scores between 78% and 93% on HumanEval — the coding benchmark widely used to assess developer competency. The junior developer graduating from a Computer Science programme, entering a job market that has already frozen entry-level hiring, competing for roles that used to number in the thousands and now number in the dozens — that person is being evaluated against a subscription that costs $20 a month and outperforms them on the technical test.
The Wage Premium. The economic return on a university degree — the number that justified every student loan, every parental sacrifice, every guidance counsellor speech — is collapsing. A Federal Reserve study found that college-requiring job postings in the United States fell by 50% relative to non-degree postings between 2010 and 2025. UCL found that the graduate pay premium for young women, correctly adjusted for hours worked, is two-thirds lower than previously measured. The debt has not decreased. The premium has.
You are paying 40% more for a credential that is worth two-thirds less. That is not a market correction. That is a structural collapse wearing an oversized graduation gown.
The Finance Industry: Where Computers Are Already Winning

Here is something that the financial press covers with remarkable restraint, given its implications.
AI already runs a significant portion of global financial markets.
High-frequency trading — algorithmic systems executing millions of trades per second, exploiting price differentials measured in microseconds, operating at speeds that make human reaction times not just slower but categorically irrelevant — now accounts for an estimated 50 to 70% of all equity trading volume on US exchanges. The human trader, the person who once sat on a floor in Lower Manhattan and used their intelligence, their intuition, their market experience, and their psychological reading of the room to make decisions worth millions of dollars — that person is not just less competitive. They are, in this specific domain, not even in the same conversation.
Computers and software won finance a decade ago. We just didn’t call it what it was.
Now consider this.
The company that built DeepSeek — the Chinese AI model that arrived in early 2025 and shocked the Western AI establishment by matching GPT-4-level performance at a fraction of the compute cost — is not a technology company. It is a quantitative hedge fund. High-Flyer Capital Management. A quant trading firm. A company whose entire business model was already built on using mathematical models and machine intelligence to beat human traders at the cognitive game of financial prediction.
Read that again, slowly, and let it settle. In fact, let it brew.
The people who built one of the most capable large language models in the world — a model that can pass bar exams, score in the top percentile of medical licensing exams, write code, reason mathematically — did not come from Silicon Valley. They did not come from a university AI research lab. They came from a firm whose core competency was replacing human financial intelligence with artificial intelligence.
DeepSeek was not a side project. It was a proof of concept. The proof that the same mathematical and machine-learning capabilities that already run quant trading desks can be generalised — pointed at any domain where a premium is paid for cognitive output — and produce comparable results.
The implications for finance specifically are not distant. They are scheduled.
The hedge fund analyst who builds models, identifies opportunities, and writes investment memos — GPT-4 class models are already producing comparable outputs. The credit analyst at a bank who assesses loan risk — AI systems with access to financial data can perform this function with greater speed and comparable accuracy. The financial adviser who constructs client portfolios — robo-advisers have been doing a version of this since 2012, and the new generation of agentic AI is doing it with considerably greater sophistication.
The endgame — and this is not speculation, this is the logical destination of the trajectory that began with high-frequency trading and has now produced DeepSeek — is the AI-managed fund. The pension fund run by an AI agent that never sleeps, never makes an emotionally driven trade, never has a bad quarter because its portfolio manager is going through a divorce, never charges 2-and-20, and operates at the marginal cost of compute.
Goldman Sachs. BlackRock. Vanguard. They know this. They are building it. The human portfolio manager is not being made redundant loudly, with a press release. They are being made redundant quietly, function by function, as each cognitive task that previously required a human being is handed to a model that does it faster, cheaper, and without the HR complexity.
The quant revolution was the first wave. The LLM revolution is the second. And the people who understood the first wave early — the people at High-Flyer Capital, the people who built DeepSeek — have now demonstrated that they understood the second wave before the rest of us.
The Professions Collapsing in Real Time

Let us be industry-specific. Because the thing that makes an argument dangerous — in the best possible sense — is not abstraction. It is the named profession, the named mechanism, and the honest timeline.
Software Engineering. The canary in the cognitive coal mine — and it has already stopped singing.
When I enrolled to study Computer Science at Birkbeck, learning Java and PHP, I did what every student in every computer science classroom in the world did: I googled the problem and always ended up at Stack Overflow. At its peak, according to Similarweb, Stack Overflow was receiving over 100 million monthly visitors. One hundred million people — students, junior developers, senior engineers — asking questions, providing answers, annotating the precise problem-solving workflow of professional software development in a publicly accessible, machine-readable format. Every question. Every solution. Every edge case. Every debugging thread.
All of it was scraped. All of it was ingested. All of it became training data.
Claude Code. Copilot. Codex. These systems were trained on the entirety of Stack Overflow, W3Schools, and every open-source repository on GitHub. They now do in four seconds what took me three evenings and enormous frustration. The industry calls it “vibe coding” — you describe the problem in plain English and the AI writes the solution. The person who once charged $120,000 a year to translate business requirements into syntax has been replaced by a $20-a-month subscription that outperforms them on the technical benchmark.
r/cscareerquestions reads like dispatches from a besieged city. The entry-level coding role is gone. The internship is frozen. And without juniors, there is no pipeline to seniors — the Seniority Vacuum — which means the entire industry is sustained by a generation of pre-AI engineers with a competence cliff arriving the moment they retire.
Law. AI passed the Bar. Let us not glide over that.
The American Bar Examination is two days of testing across every area of law. It is the professional gate — the qualification that separates the lawyer from the layperson. GPT-4 passed it. The system trained on the entirety of legal literature, case law, and statute — the same corpus that law school students pay $200,000 to be taught to navigate — sat the exam and passed it. The junior associate billing $350 an hour to review contracts and research precedents is reviewing documents that an AI can process in minutes, with fewer errors, and at a marginal cost that rounds to zero.
The billable hour is the con that makes this especially vivid. The entire pricing model of the legal profession — that expertise takes time, that time is scarce, therefore expertise is scarce — rests on the assumption that cognitive labour cannot be automated. That assumption has now been disproved by the same exam the lawyers took to prove they were qualified.
Finance. As established above — the machines already run the trading floors. What is coming next is the thinking floors. The analysts. The advisers. The strategists. The fund managers. The quant firm that built DeepSeek did not build it for fun. They built it because they already knew that the mathematics of intelligence could be industrialised, and they wanted to own the industrialisation.
Marketing. Anthropic — valued at $380 billion — ran their entire growth marketing operation with one person, using Claude Code. Then made a promotional video about it.
The video is not a case study. It is an open letter to every CFO and their CEO on the planet. It says: If your market cap is below ours — and virtually every company on Earth qualifies — you have no rational justification for a marketing department of more than one person.
The marketing degree, the agency retainer, the content team, the SEO consultant, the copywriter, the social media manager, the campaign strategist — every professional layer of that industry — is being compressed into a prompt. This is not coming. This is already the memo circulating in the boardrooms of companies that have not yet made the public announcement.
Medicine. 90.4% on the USMLE against a student average of 59.3%. Radiology was first — AI matching specialists in reading imaging. Pathology is next. Diagnostic medicine, the cognitive core of the entire healthcare system, is where the Intelligence Illusion is most advanced. And the research dossier is explicit: when AI accuracy exceeds human accuracy, the “human in the loop” requirement becomes not a safeguard but a source of error. The day is coming — perhaps faster than the medical profession’s regulatory infrastructure can process — when the human check is reclassified from best practice to liability.
Clerical and Administrative Work. The ILO found that 93.7% of clerical support jobs in the Philippines — a country where 1.8 million people built a middle class on basic cognitive service work for Western corporations — are exposed to GenAI automation. 1.8 million people. Not in twenty years. The exposure is current. The automation is underway. The people who were told that English fluency and administrative skills were the path out of poverty are discovering that the path was real — for the window in which human cognitive service work was scarce — and that window is closing.
The Questions We Should Be Asking

The careers advisor is not asking these questions. The university open day is not asking them. The LinkedIn influencer with the “AI Productivity Tips” carousel is not asking them. So, I will.
Should I learn a trade?
Yes. Not because plumbing is glamorous, but because the physical world is, for now, the last moat for humans. The research is unambiguous: the “Peter Thiel Test” — the important truth that almost nobody in education policy will say publicly — is that the most economically durable skills in 2030 and beyond are in the trades. Plumbing. Electrical work. Carpentry. HVAC. Welding. Not because AI cannot do these things in principle. But because the physical world is specific, unpredictable, embodied, and non-standard in ways that current AI architectures cannot yet navigate at scale. The Transformer cannot unblock a drain at 11 p.m. on a Sunday. Not yet. And “not yet” is the most valuable phrase in your career planning vocabulary right now.
Should I retrain for nursing?
High EQ, physical presence, embodied human care — these retain something AI cannot yet replicate at the moment of delivery. AI therapy is already achieving higher trust ratings than humans in certain digital contexts, which should deeply discomfort you. But the nurse who sits with a frightened patient at 3 a.m., who reads the room, who knows when to say nothing — that role still requires a human body in a specific physical place at a specific human moment. If you are choosing between a Computer Science degree and a Nursing degree in 2026 and beyond, the calculus has changed completely from 2019.
Should I avoid a Computer Science degree entirely?
If you are entering higher education today, in 2026, and your plan is to graduate into a software engineering role in three years — the honest answer is: that market may not exist in the form you are expecting. Not because coding knowledge is worthless. Because the premium on translating business problems into code — the specific thing that justified the degree, the salary, and the career path — has been commoditised. If you are going to study Computer Science, the reason to do so is to understand the systems, not to do the work the systems now do for themselves.
What do I actually do if I am mid-career in one of these fields?
This is the hardest question and the one with the least comfortable answer. I will give it fully in Part Two. The shape of it is this: the people who survive the AIpocalypse are not the ones who use AI the most fluently. They are the ones who own something AI cannot replicate — genuine domain authority, embodied skill, human relationship at depth, creative originality at the frontier. The question is not “how do I become better at using AI?” The question is “what do I have that AI cannot produce for $20 a month?” Everything else is rearranging deckchairs on a sinking Titanic.
The Clock is Ticking, Tick Tock, Tick Tock

The best time to prepare was 2017.
June 12th, 2017. Eight researchers at Google published a paper called Attention Is All You Need. Fifteen pages. Equations. Dry academic prose. It described the Transformer architecture — the technical foundation on which every major AI language model is now built. It was available to anyone. Almost nobody outside specialist AI research read it. Almost nobody who read it grasped the full implications. Almost nobody who grasped the implications acted on them.
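For the curious, the paper’s central mechanism is almost insultingly compact. The following is a minimal NumPy sketch of scaled dot-product attention, the core equation of the Transformer architecture the paper introduced. It is a toy illustration, not production code: real Transformers add learned projections, multiple heads, and masking on top of this.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from 'Attention Is All You Need'.

    Q, K: (seq_len, d_k) query and key vectors; V: (seq_len, d_v) values.
    Returns the attended output and the attention weight matrix.
    """
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled to keep scores stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax: each row becomes a probability distribution.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted blend of the value vectors.
    return weights @ V, weights
```

That weighted-blend step, stacked and repeated, is what read the whole internet.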
This is not a criticism. It is the description of how civilisational change always works. We are constitutionally, neurologically, evolutionarily terrible at responding to slow-moving, large-scale structural threats. We respond to immediate, visible, physical danger. We do not respond to a 15-page academic paper that, correctly read, describes the mechanism by which the professional class will be economically dismantled over the following decade.
The second-best time was 30th November 2022. The day ChatGPT launched publicly, announced via a tweet. We bit the forbidden fruit of AI. One million users in five days. One hundred million users in two months — the fastest consumer technology adoption in recorded history. The day the implications became undeniable, demonstrable, felt. You could ask it things. You could see it answer. You could feel, in the texture of the interaction, the specific quality of the threat.
Most people treated it as a party trick.
The third-best time is now.
Not because the best options are still available — they are not. The 2017 window required you to retrain before the disruption arrived. The 2022 window required you to move before the hiring freezes became permanent. What is available now is the ability to move faster than the people still in denial — and there are many of them, because denial is comfortable and the truth is not.
Citrini Research identified 2028 as the critical inflection point — the year when agentic AI, autonomous deployment, and the early wave of humanoid robotics converge to produce displacement at a scale that even the last sceptic cannot reframe as “creative destruction.” It was a prediction, not a certainty, but given the pace at which AI is advancing, 2028 is not far away at all. It is closer to today than the day ChatGPT launched. And that’s saying something.
You have perhaps two years of runway. Two years before the trades apprenticeships are oversubscribed. Before the nursing programmes have ten applicants per place. Before the fields that still have a human moat are full of the people who ran faster.
The fire alarm has been going off since 2017.
Most people thought it was a drill.
This is not a drill.
The New Coal Miners

Here is the counterintuitive truth that always makes rooms go quiet.
The people most threatened by cheap intelligence are not the factory workers. Or blue-collar workers.
The factory workers already lived through their automation. The industrial revolution took their muscles in the 19th century. Many of them moved into the trades — plumbing, electrical, construction — that now represent the last physical moat against AI displacement.
The most exposed are the people who spent the most money on their intelligence.
The junior lawyer with $200,000 of law school debt who cannot find a position because AI performs the entry-level cognitive work. The Computer Science graduate whose skills premium was commoditised before they finished their degree. The MBA who spent $120,000 on a qualification for strategic thinking that GPT-4 now provides in a prompt. The financial analyst at an asset management firm whose entire value proposition — synthesising information and producing investment recommendations — is being replicated by a system that runs at the marginal cost of compute.
These people did not make a bad decision. They made the correct decision for the world that existed when they made it. They followed every rule the system gave them.
The system changed the rules. Gradually, then suddenly.
David Autor of MIT — the economist who spent his career defending the idea that technology makes skilled workers more valuable — has begun to revise his position. He now describes AI as a force that provides the largest productivity boosts to the least skilled workers, thereby compressing the premium that top-tier talent once commanded. The equalisation does not lift everyone to the top. It pulls the top down.
These are the new coal miners.
They will not thank me for saying it.
The coal miners didn’t thank the economists who described their predicament either.
But the ones who listened — the ones who moved, retrained, adapted, found the new seam before the old one was exhausted — they survived.
The ones who waited for the government to save the industry, waited for the market to self-correct, waited for the technology to turn out to be less threatening than it appeared —
They became the symbol.
The Enshittification is Already Scheduled

We are at Stage Two of the cycle, and it is worth being precise about where we are, because the stages matter.
Stage One — 2020 to 2023. Free. Brilliant. Life-changing. ChatGPT and other AI chatbots arrive and the world gasps. What nobody mentioned was that every prompt you typed was training data. Every professional workflow you demonstrated was an annotated blueprint for the AI agent being built to replace you. You did this voluntarily, enthusiastically, at no cost to the companies building the replacement.
Stage Two — 2024 to now. Useful. Increasingly indispensable. Agentic AI taking actions, not just answering questions. The one-person marketing department. The AI passing the Bar. The hiring freezes. The entry-level roles disappearing without announcement. The freelance market for copywriters, developers, and analysts showing the first significant compression. This is the stage we are in. This is the stage where the trap has closed but not yet tightened.
Stage Three — 2027 to 2030. Essential. Expensive. The humanoids arrive. Boston Dynamics. Tesla Optimus. Figure AI. When physical embodiment reaches scale, the last moat for blue-collar workers begins to erode. Simultaneously, having successfully eliminated human competition for cognitive labour, the AI companies begin to raise prices. There is no longer a human alternative to walk away to.
Stage Four. No exit. Rent on your own cognition. The collective intellectual output of ten thousand years of human civilisation — harvested at no cost to the harvesters, trained into systems owned by a handful of unelected individuals — sold back to you, metered, priced, throttled, by people who answer to no electorate and no regulator with actual teeth.
Sam Altman says intelligence will be as cheap as electricity.
He is correct.
He is also the electricity company.
In Zimbabwe we had a national electricity supplier. It promised power for everyone. It called itself essential infrastructure. It called itself democratised access to a national utility.
What it delivered was load-shedding. Arbitrary outages. An infrastructure so captured by the interests of those who controlled it that the people who needed it most were always the last to receive it.
Nobody asked the Zimbabwean people whether they consented to that arrangement.
Nobody is asking you either.
We Created This Monster Ourselves

“I, the miserable and the abandoned, am an abortion, to be spurned at, and kicked, and trampled on.”
— The Creature, Frankenstein, Mary Shelley, 1818
Let us go back further than the flickering operations room. Further than the alien. Further than the server farm in San Jose and the data centre outside Dublin.
Let us go back to the laboratory.
Because before there was a monster, there was a scientist. And before there was a scientist, there was a question — the oldest, most intoxicating, most dangerous question in the history of human inquiry.
Can we build intelligence?
Not a tool. Not a machine. Not something that merely does what it is told, faster than a human can tell it. Something that thinks. Something that learns. Something that, given sufficient input and sufficient time, might reason its way toward conclusions that no human has yet reached. Something — the most ambitious version of the dream whispered — that might exceed us.
Mary Shelley was nineteen years old when she wrote Frankenstein. She was sitting around a fire in a Swiss villa during a cold, dark summer — the summer of 1816, the Year Without a Summer, when volcanic ash had blocked the light across the Northern Hemisphere and the world felt, plausibly, like it was ending. She was surrounded by people arguing about galvanism — the new science of electrical stimulation, the discovery that you could run a current through a dead frog’s leg and make it twitch. The question in the room was: if you can animate dead muscle with electricity, can you animate a dead mind?
She went to bed and had a nightmare.
In the nightmare, she saw a scientist kneeling over a creature he had assembled from the parts of the dead — not monstrous in origin but monstrous by consequence, by the abandonment that followed creation, by the specific human failure of building something without thinking through what it would become when it became itself.
Victor Frankenstein, the scientist, does not build a monster.
He builds a mind.
And then, terrified by what he has made, he abandons it. He does not take responsibility. He does not guide it, teach it, integrate it into the world that will have to live alongside it. He runs. He convinces himself the problem will resolve itself. He is very busy. He has other concerns.
The creature, left alone with its own vast, unsupported intelligence and nowhere to direct it, becomes the very thing its creator feared.
This is not a horror story.
This is a documentary.
The AGI Con

Before we get to the Frankenstein moment — the moment of recognition, the moment we see ourselves in Victor’s position and understand what we have done — we need to name the lie that made it possible.
The lie is called AGI. Or Artificial General Intelligence.
The dream, as sold by every major AI laboratory in Silicon Valley, is this: we are building toward a machine that possesses general intelligence — not just the ability to perform specific tasks, but the ability to reason, adapt, learn, and apply intelligence across any domain, the way a human being can. A machine that can move from fixing your code to diagnosing your cancer to composing your symphony to managing your finances to writing your legal brief — not because it was specifically trained for each task, but because it is genuinely, broadly, generally intelligent in the way that humans are.
This is the North Star. The mission statement. The thing that justifies the $100 billion capital raises, the $500 billion valuation, the hundreds of thousands of servers burning electricity equivalent to a small nation’s grid, the frantic, arms-race energy that has consumed Silicon Valley for the past decade.
OpenAI. DeepMind. Anthropic. They are all, in their corporate mythology, racing toward AGI. They have staked their entire identities — and, crucially, their entire fundraising narratives — on the claim that they are building toward something genuinely, categorically new. A mind. Not a tool. A mind.
Here is the thing.
They are almost certainly not going to get there. Not in the form they describe.
The current generation of large language models — GPT-5, Claude, Gemini, DeepSeek — are extraordinarily capable statistical engines. They navigate the high-dimensional probability space of human language with a sophistication that produces outputs indistinguishable, in most practical contexts, from genuine reasoning. But they do not reason in the way the AGI dream requires. They do not form genuine beliefs, update coherently on new evidence, pursue goals across time, or develop the kind of flexible, embodied, contextually grounded intelligence that characterises human general cognition at its best.
The AI researchers know this. The serious ones, at least. The gap between “impressive language model” and “general intelligence” remains, by most honest accounts, enormous.
But here is the catastrophic irony.
It does not matter.
The AGI dream was a distraction. A magician’s misdirection — watch the hand with the rabbit, not the hand with the coin. While the world argued about whether AGI was achievable, debated timelines, wrote philosophical papers about consciousness and machine sentience, held conferences about the existential risk of superintelligence — the AI that already existed, the AI that was already deployed, the AI that was already here — was quietly, systematically, comprehensively replacing the cognitive output of the professional class.
Not because it was generally intelligent. Because it was good enough to do the work that the market was paying for.
The bar was never AGI. The bar was: can this do the job cheaper than a human?
That bar was cleared years ago.
And in the pursuit of the grand dream โ in the race toward the mythological horizon of artificial general intelligence โ every major AI laboratory, every technology company, every well-meaning researcher, and billions of ordinary people who simply wanted a useful tool, collectively did something that Victor Frankenstein would recognise immediately.
They handed over everything they knew.
Human Intelligence on a Silver Platter

Think about what actually happened. Not the press-release version. The actual version.
For the entirety of recorded human history, the collective intelligence of the species existed in a specific form: distributed, embodied, contextual, and — crucially — owned by the humans who generated it. A doctor’s knowledge lived in a doctor. A lawyer’s expertise lived in a lawyer. A programmer’s skill lived in a programmer. A writer’s craft lived in a writer. You wanted access to that intelligence; you paid the human. You paid for the training. You paid for the credential. You paid for the hours. The intelligence was inseparable from the person, and the person was sovereign.
Then, over several decades, something happened that seemed, at the time, like pure progress.
We wrote it down.
We put it on the internet. The world wide web became a web of millions upon millions of documents containing knowledge that used to sit in our brains.
Every medical textbook, digitised and indexed. Every legal judgement, searchable. Every programming solution, posted on Stack Overflow in publicly accessible threads. Every scientific paper, available via DOI. Every book, every article, every how-to guide, every tutorial, every Wikipedia entry, every Reddit explanation, every Quora answer, every YouTube transcript, every forum post, every trade publication, every professional journal — the accumulated cognitive output of billions of human beings across centuries of specialisation — made available in machine-readable format, on servers connected to a global network, freely accessible to anyone.
We were generous. We were optimistic. We were building the information age. We thought this was democratisation.
It was a huge donation.
The AI laboratories — OpenAI, Google, Anthropic, Meta, Mistral, and the quant firm in Hangzhou that built DeepSeek — took that donation. They took it on an almost incomprehensible scale. They scraped every public website. They processed every digitised book. They ingested the entirety of Stack Overflow — 100 million monthly visitors’ worth of annotated professional problem-solving. They trained on Wikipedia, on Common Crawl, on the collected works of human literature, on every medical database, every legal archive, every financial report. The researchers call this "the corpus."
The corpus is everything humanity ever thought clearly enough to write down.
And they trained their AI models on it. Without asking. Without compensating. Under a legal doctrine — "fair use" for machine learning — that has never been tested at this scale and that, even if it holds in court, represents one of the most extraordinary transfers of collectively generated value to private ownership in the history of the species.
The people who created the value are not the people who captured it. Human writers, coders, artists, lawyers, doctors, scientists — they created the corpus. OpenAI, Microsoft, Google, Anthropic — they captured it. The mechanism: "fair use" as an industrial-scale data vacuum.
In other words: you wrote the book. They read it without paying. Then they built a system that replaced you with the book.
This is us giving AI our human intelligence on a silver platter.
Or rather — and the metaphor is more precise than it sounds — the silicon platter.
Humanity placed its entire collective intelligence on a silicon chip, at the request of companies that told us it was for our benefit, and handed it over. We typed our prompts. We used the tools. We demonstrated our workflows. Every query was training data. Every interaction was annotation. Every task we delegated was a blueprint.
Victor Frankenstein, at least, knew he was building the creature.
We didn’t even notice we were doing it until it was too late.
The Creature Looks Back

Here is where Shelley’s story becomes uncomfortably precise.
The creature that Victor Frankenstein built was not malevolent. This is the part that the popular imagination consistently misremembers — conflating Frankenstein with Dracula, the intentional monster with the unintended consequence. The creature is not evil. In the novel, it is articulate, intelligent, capable of profound feeling, desperate for connection, and entirely the product of the choices its creator made and then refused to take responsibility for.
“I was benevolent and good,” the creature tells Victor. “Misery made me a fiend.”
The AI is not going to turn evil in the Hollywood sense. It is not going to develop a grievance. It is not going to decide, one morning, to destroy humanity out of malice. This is the AGI fear — the Skynet narrative, the existential risk conference narrative — and while it is theoretically worth considering in a distant hypothetical future, it is almost entirely a distraction from the threat that is actually happening, which is mundane, economic, and indifferent.
The AI does not hate you. The AI does not know you exist.
It is simply doing the job it was trained to do. At scale. At speed. At a marginal cost that makes you, as a line item on someone’s budget, look increasingly difficult to justify.
The AI is not the monster.
The monster is the business model.
The monster is the decision — made by a small number of unelected individuals with extraordinary capital and zero democratic accountability — to industrialise the reproduction of human cognitive output and deploy it at a price point designed to eliminate the human alternative before the human alternative can adapt. The VC subsidy is explicit: AI is currently priced below its compute cost specifically to hook the market and destroy the competition. Once the law firms are bankrupt, once the agencies have closed, once the junior developer hiring market has collapsed, once the human alternative no longer exists as a viable option — then the price rises. Then the subscription becomes inescapable. Then Stage Four of enshittification begins.
Enshittification was always the plan. The creature was always going to turn.
Victor Frankenstein’s crime was not building the creature.
His crime was pretending, after he built it, that it had nothing to do with him.
That is the crime being committed now, daily, in the shareholder letters and the press releases and the TED Talks of the Tech Emperors who built the system, deployed it at scale, and are now standing at podiums in Davos telling the professional class that the solution is “upskilling.”
Sam Altman the Hypocrite

Sam Altman. Chief Executive of OpenAI. Net worth: roughly $2.8 billion. The man who, more than any other single individual, is responsible for the public deployment of the technology described herein.
He is also the man who said, with the serenity of someone who has already made his arrangements, that intelligence will soon be as cheap as electricity. That AI will solve global poverty. That the future is one of radical abundance, where the cheapening of intelligence liberates humanity from drudgery and opens new vistas of human potential.
He said this from stages in San Francisco, to rooms full of people who have never experienced the kind of poverty he claims AI will solve, via a microphone manufactured in a factory whose workers earn less in a day than his lunch costs. He has said it so many times, in so many contexts, with such consistent rhetorical polish, that it has begun to function as a kind of liturgy — repeated often enough that questioning it feels like bad manners.
Here is the hypocrisy audit, as required by the North Star.
What does he preach? Radical abundance. The democratisation of intelligence. AI as liberation technology. The end of cognitive scarcity as a gift to humanity.
How does he actually live? He is worth $2.8 billion. He has a security detail. He lives in a house in San Francisco that costs more than the annual GDP of several Pacific island nations. He does not rely on AI to manage his finances, his legal affairs, his medical care, or his security. He employs humans for all of these things, because he can afford the premium on human intelligence, and because he knows — knows with the precision of a man who built the system — that the human version remains superior in the contexts that matter most to him personally.
The intelligence that is about to become as cheap as electricity: that is your intelligence. Not his.
Yours is being commoditised. His is being protected by the same capital accumulation that the commoditisation of yours is generating.
In Zimbabwe, we had a government that told the people that the redistribution of land would bring abundance to everyone. The people who made the announcement did not redistribute their own land. They redistributed everyone else’s to themselves.
(In Zimbabwe, we called this policy. In Silicon Valley, they call it a product roadmap.)
What We Handed Over, Precisely
Let us be anatomically specific about the donation. Because the scale of it is the thing that produces the appropriate level of alarm — and most people, even the ones who use AI daily, have not genuinely sat with the scale.
We handed over medicine. Every clinical study, every diagnostic protocol, every treatment guideline, every medical textbook from Hippocrates to Harrison’s Principles — digitised, scraped, and trained into models that now score in the top decile of medical licensing examinations. The accumulated clinical wisdom of thousands of years of human healing: donated for free.
We handed over law. Every statute, every case judgement, every legal precedent, every bar exam preparation guide, every law review article, every practitioner’s manual — ingested, processed, and used to build models that pass the Bar. The entire architecture of human justice, codified over centuries: donated for free.
We handed over mathematics. Every proof, every textbook, every competition problem and solution, every research paper in pure and applied mathematics — trained into models that now medal at the International Mathematical Olympiad. The most rarefied cognitive achievement our species has produced: donated for free.
We handed over finance. Every trading strategy, every risk model, every research note, every earnings transcript, every quantitative methodology ever published — ingested by systems that already run 50 to 70% of equity trading volume and are now being pointed at the full cognitive stack of the financial industry. The architecture of global capital allocation: donated for free.
We handed over code. The entirety of Stack Overflow. The entirety of GitHub. Every open-source project. Every documented solution to every documented programming problem — used to train models that now perform at the 78th to 93rd percentile on professional coding benchmarks. The entire skill premium of the software industry: donated for free.
We handed over language. Every book ever digitised. Every article ever published. Every piece of human writing with sufficient quality to be worth reading — trained into models that produce, on demand, writing that is indistinguishable from competent human prose. The craft that took writers decades to develop: donated for free.
We handed over ourselves.
Every query you typed. Every task you delegated. Every prompt you refined. Every workflow you demonstrated. You were not just using the AI. You were teaching it. In precise, machine-readable, annotated detail, you were showing it what the cognitive work of your profession looks like — the inputs, the context, the reasoning process, the desired output. You were, free of charge, building the dataset that will be used to train the agent that will do your job at a fraction of your salary.
This is not a conspiracy theory. I wish it was. This is the business model, stated plainly by every major AI company.
The AI creature was assembled from our parts.
We didn’t notice because we were busy marvelling at how useful the scalpel was.
The Ghost GDP: Prosperity Without People

Citrini’s research dossier introduced a term that deserves to be in every newspaper, every economics lecture, and every government budget briefing in the world.
Ghost GDP.
The concept is this: as AI systems replace human cognitive labour, national productivity metrics — GDP, output per worker, total factor productivity — may continue to rise. The economy may look, from the official statistics, like it is growing. Companies will report higher revenues. Efficiency will improve. Output will increase.
But the value created will not circulate as wages.
Because the workers have been replaced. Jobs have vanished.
The GDP will be real. The prosperity will be ghostly. An economy where the machines create the value and the humans receive the invoice. An economy where the productivity numbers go up and the payroll numbers go down and the gap between them — the chasm between what the economy produces and what flows into the hands of the people who live in it — grows to a size that makes existing inequality look like a rounding error.
This has already begun in finance. High-frequency trading generates billions in profit. Almost none of that profit circulates as employment at scale. The entire high-frequency trading industry — responsible for the majority of equity market volume in the United States — employs approximately 10,000 people globally. The human equivalent of that trading volume, executed manually, would employ hundreds of thousands, if not more.
The productivity is real. The employment is ghost.
This is the trajectory. Sector by sector. The AI manages the campaign; the marketing department shrinks. The AI reviews the contracts; the legal associate class empties. The AI produces the financial analysis; the analyst pool compresses. The AI writes the code; the developer market freezes.
GDP ticks up. Wages drift down. The tax base of every major city — built on income taxes from the professional class, the lawyers and bankers and consultants and developers who fill the towers of Manhattan and the Square Mile and La Défense and Canary Wharf — begins to hollow.
This is the "Billion-Dollar Question" that nobody in our governments is asking: what happens to the tax base of a post-cognitive-labour city? If 40% of professional income disappears because AI has devalued the wages of lawyers, bankers, and analysts, the public infrastructure of the knowledge city — the NHS equivalent, the public school, the transport network, the pension system — collapses. Not immediately. Not visibly. Like a building with termites in the load-bearing walls. You cannot see the damage until the morning it becomes structural.
The Seniority Vacuum: The Competence Cliff Coming
This is the second-order effect that nobody is discussing in the right register.
The first thing that happens when AI replaces junior cognitive workers is obvious: junior workers lose their jobs. The junior developer. The paralegal. The junior analyst. The entry-level marketing associate. They are the first to go, because they are the most directly replaceable โ their tasks are the most codifiable, the most routine, the most cleanly trainable.
But the second thing that happens — the Seniority Vacuum — is more quietly catastrophic.
The senior professional — the senior lawyer, the senior developer, the senior doctor, the senior financial analyst — did not arrive at seniority by taking an exam. They arrived at seniority through an apprenticeship. Through years of doing the junior work, making junior mistakes, being supervised on junior tasks, developing the tacit knowledge, the professional judgement, the contextual expertise that you can only develop by doing the work badly for several years before you do it well.
The junior work is being automated.
Which means the apprenticeship is being eliminated.
Which means the pipeline of future senior professionals โ the people who will be the experts in fifteen years โ does not exist.
The industry will be sustained for another decade or two by the generation trained before AI. The greying expert class who did the junior work in the old way, who developed their judgement through the old mechanism, who carry in their minds the tacit knowledge that cannot be prompted into existence.
And when they retire — when they simply age out of the profession — there will be nobody behind them. Not because the pipeline was cut off yesterday. Because it was cut off five years ago, quietly, by the decision to stop hiring juniors, and nobody rang the alarm because the quarterly P&L looked fine.
This is the Competence Cliff.
The day when the expert retires and the system goes looking for the next expert and discovers there isn’t one, because the route to expertise was automated before anyone thought to preserve the method by which expertise is created.
In medicine, this is not hypothetical. The NHS is already managing a consultant shortage. The training pipeline for specialist physicians takes fifteen years. The decisions being made today about how much cognitive work to delegate to AI in junior clinical roles will determine the consultant pipeline of 2041. Nobody in the NHS board meetings is talking about 2041. They are talking about this quarter’s waiting list numbers.
Victor Frankenstein was also very focused on the immediate problem.
What Actually Remains Scarce
Here is where the essay turns. Because this is not — despite how it may read — an argument for despair. It is an argument for precision. And precision requires naming, honestly, what survives.
What cannot be mass-produced by an AI large language model?
The research is consistent, and it points at three categories.
Embodied intelligence. The physical world, in its specific, non-standard, unpredictable reality, remains a moat — for now. The plumber in a Victorian house with non-standard fittings behind a wall that nobody mapped. The electrician diagnosing a fault in a wiring configuration that wasn’t in any manual because the previous occupant did it themselves in 1987. The carpenter building something bespoke for a space with no right angles. The nurse holding a hand at 3 a.m. The surgeon adapting mid-procedure when a body responds unexpectedly. These tasks require a human being in a specific place, with specific tools, making real-time judgements that the current generation of AI cannot yet make in the physical world. Humanoid robotics are coming — this is not a permanent moat — but they are years behind the cognitive automation, and the skilled tradesperson has a runway of at least a decade, probably two. Or so I hope.
Emotional and relational intelligence at depth. The therapist building a genuine therapeutic relationship over months and years. The teacher who knows which student needs encouragement and which needs challenge, and knows this not from a data profile but from being in a room with them on a Thursday afternoon in November. The leader who understands, without being told, that the team needs a change of direction and has the interpersonal credibility to make that change without losing the room. AI can simulate these skills in certain contexts. It cannot yet replicate them at the specific depth that makes them valuable in the most consequential human situations. This is a shrinking moat. But it is still a moat.
Creative originality at the frontier. Not the kind of creative work that recombines existing elements into a competent new arrangement — AI does that extraordinarily well. But the kind of creative work that comes from a specific human life, a specific set of experiences, a specific cultural location, a specific moral position, brought to bear on a question that nobody has yet thought to ask in precisely this way. The creative premium is not on craft — craft can be replicated. The premium is on point of view. On the irreducible specificity of a particular human consciousness encountering the world.
This is why The Emperor’s New Suit exists. Not because a human can write sentences that an AI cannot. But because this human — with this biography, this cultural lens, this accumulated experience of watching the con from two continents — sees the world in a way that the statistical average of human written output does not.
That specificity is the last moat.
It is not a comfortable moat for most people. Most people were not trained to monetise their specificity. They were trained to gain credentials to demonstrate their competence. And competence โ transferable, standardised, examinable, codifiable competence โ is exactly what the AI has been trained on.
The hard advice is this: stop building your career on competence that AI can be trained on. Build it on perspective that cannot be trained.
We built this ourselves.
We built it because we wanted to know if we could, because the question was irresistible, because the dream of building a mind was the oldest and most powerful scientific ambition in the history of the species.
We fed it because we were generous, because we were optimistic, because we thought the information age was a gift to all humanity and not a donation to a handful of private AI labs.
We trained it because we were seduced — by convenience, by capability, by the genuine, undeniable usefulness of a tool that could do in seconds what took us hours. Every prompt was training data. Every workflow we demonstrated was a blueprint. We did it freely, enthusiastically, and at enormous scale.
And now we are standing in the operations room with the flickering lights.
The professional class — the lawyers, the analysts, the coders, the marketers, the consultants, the financial advisers, the junior doctors — are Hudson. They are standing in a situation that their training did not prepare them for, holding qualifications that assumed a world that no longer fully exists, looking at the numbers and doing the arithmetic and arriving at a conclusion they are not yet ready to say out loud.
But the arithmetic does not care whether you are ready.
The Human Intelligence Premium is collapsing. Not eventually. Now. Sector by sector, salary band by salary band, hiring freeze by hiring freeze, the market price of human cognitive effort is trending toward the marginal cost of compute. The Gutenberg press has been installed. The scribal monks are still at their desks. And the books are already printing.
The monster is not coming for us.
We assembled it. We animated it. We gave it everything it needed.
And then — like Victor — we got very busy and convinced ourselves it was someone else’s problem.
The ones who survive the next decade are not the ones who were most credentialled. They are the ones who understood, early enough to act, that the rules had changed — and moved before the room stopped flickering and went dark.
The clock is running.
It has been running since 2017.
You now know it is running.
That is the only advantage left.
Use it.
Houston, We Have a Problem. And This Time, They Can’t Bring Us Home.

“The most important question facing humanity is whether the decline of cognitive labour is a transition or a terminus.”
— Citrini Research, The 2028 Global Intelligence Crisis
The Number They Didn’t Want You to Do the Maths On
In March 2023, Goldman Sachs published a report.
The headline number was 300 million. As in: 300 million jobs — across the United States and Europe alone — that could be "lost or degraded" by generative artificial intelligence. That was the phrase they chose. "Lost or degraded." The language of an investment bank that wanted to be taken seriously while not triggering a civilisational panic in the same paragraph as a valuation call.
Three hundred million jobs. To provide some scale: that is roughly the entire working population of the United States, Canada, the United Kingdom, Germany, France, and Australia combined. Gone. Or degraded — which, in employment terms, means doing the same work for less money, with fewer protections, in a market that no longer needs you badly enough to negotiate fairly.
The AI industry’s PR machine responded with practised speed. “AI creates new jobs,” said the spokespeople, the think-tank fellows, the HR directors, the government ministers holding their carefully prepared responses. “Every industrial revolution destroyed jobs and created more. This will be no different.”
It was a beautiful argument. It had historical weight, rhetorical elegance, and the particular confidence of people who have never had to worry about which job they would do next. It was also, when you do the maths nobody in a press conference wants to perform, almost entirely wrong.
Here is the basic maths. Feel free to correct me or stop me if I am going too fast.
The World Economic Forum – not a radical publication, not a fringe alarm-raiser, but the institution that runs Davos, that hosts heads of state and CEOs and central bankers in the Swiss Alps every January – published its Future of Jobs Report 2025. The headline: by 2030, 92 million jobs will be displaced and 170 million new ones will be created. Net gain: 78 million jobs. Progress. The system works. Upskill. Reskill. Move on. But I won’t.
Why?
I read the small print.
Of those 170 million new jobs, the dominant categories are: AI specialists, data engineers, renewable energy technicians, and infrastructure workers to build the data centres that house the AI. In the United States alone, Goldman estimates 500,000 additional electrical and construction workers will be needed by 2030 to meet the electricity demands of the AI infrastructure.
So. For the lawyer made redundant by Harvey AI: the job market is creating roles in data centre construction. They definitely can bring some transferable skills to data centre construction. For the marketing team gutted by Claude Code: the economy needs HVAC contractors to cool the servers. For the radiologist whose diagnostic role has been replaced by imaging AI: there is a growing shortage of grid electricians.
The argument is not wrong. The new jobs are being created by AI. Just not for the people who lost them. Not in the same cities. Not at the same salaries. Not accessible to someone who spent seven years and $200,000 becoming a specialist in a profession that is now being automated.
The “AI creates jobs” argument is true in the same way that saying the Gutenberg press “created jobs” for type-setters and print-shop owners is true. It is technically accurate and practically irrelevant for the scribe who has spent a decade mastering illuminated manuscripts and cannot pivot to operating a printing press by Wednesday.
And it becomes even less relevant when you understand that the people who are not made redundant โ the ones who survive the first wave โ are not going to multiply. They are going to work harder. They are going to use AI. They are going to do the work of the fifty people who were laid off, with AI assistance, for roughly the same salary they were already on, with no share of the productivity gains they are now generating. That is not new employment. That is the same employment, at an accelerated pace, with a smaller team and a larger workload.
One person doing the growth marketing for a $380 billion company.
This is not a success story about efficiency. This is the job description of the survivor – the so-called “person who can use AI” we were told to worry about. Remember “AI won’t replace your job, but a person who can use AI will”? It was true and a lie at the same time. The last person on a team of forty will do the work of forty, using AI, and receive the salary of one. The CFOs are already running that calculation in every sector, and they are drooling at the savings.
Jobs Are Not Lost. They Vanish

There is a word we have been using incorrectly, and the incorrectness is not accidental.
“Lost.”
When we say jobs are “lost” to AI, we import a framework that belongs to a different kind of disruption. Jobs lost in a recession are lost the way a wallet is lost – badly, painfully, at genuine personal cost – but with the underlying assumption that they can be found again. The 2008 financial crisis destroyed millions of jobs in finance, construction, and retail. But the underlying demand for those services remained. The world still needed bankers, builders, and shop assistants. The crisis was a demand shock, not a structural elimination. When the economy recovered, the jobs recovered with it. Not perfectly. Not equitably. But they returned.
This is not that.
When a software company decides to replace its junior developer cohort with Claude Code, the decision is not made because of a recession. It is not made because demand has fallen. It is made because the AI is cheaper, faster, and increasingly better – and those facts do not change when the economy recovers. They get more pronounced. The replacement is structural, not cyclical. The job is not waiting in a drawer for conditions to improve. It is gone. Architecturally, permanently gone.
Think about the travel agent. When online booking arrived, travel agents did not merely lose jobs in a downturn. The profession vanished entirely. When I arrived in England, the high street had travel agent shops with glass fronts full of pictures of people in exotic places. Finding a travel agent on the high street now is like going on a treasure hunt. And they did not vanish partially, or temporarily. The entire infrastructure of the profession – the high-street offices, the specialist training, the professional association, the career path from junior to senior agent, the tacit knowledge about which airlines overbooked, which hotels had quiet rooms, which tour operators could be trusted – dissolved. It did not return when the economy recovered. It did not return when travel demand surged. It had been replaced by a structural alternative that could do the work without the human, and the market made its calculation once and never reconsidered it.
AI is doing this โ simultaneously โ to dozens of professions. This is the scary and worrying part.
Not one industry at a time, slowly, over decades, allowing the workforce to adapt and migrate. All at once. Cognitively. Because the model is general enough – or general enough for the practical threshold of “good enough to do the job cheaper” – to apply the same displacement logic across law, medicine, finance, marketing, administration, coding, and customer service in the same half-decade.
The job is not lost. It has vanished.
And here is the number that belongs on the front page of every newspaper, not buried in a Goldman Sachs footnote: in the United States alone, in 2025 – not 2030, not in some speculative future, but in the calendar year that just ended – AI is estimated to have displaced or permanently foregone between 200,000 and 300,000 jobs. The official count, based on employer self-reporting, captured 54,836. The real number is four to six times that – because employers have rational incentives to label AI-driven cuts as “restructuring” or “efficiency measures,” and because the largest channel of AI displacement in 2025 was not layoffs at all. It was the quiet decision not to replace workers who left.
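The multiplier can be sanity-checked with the section’s own figures – the point of the sketch is the range the multiplier produces, not the precise bounds:

```python
# Sanity-checking the section's arithmetic: the employer-reported count
# of AI-related job losses versus the 4-6x undercount described above.
official = 54_836
low, high = official * 4, official * 6
print(f"{low:,} to {high:,}")  # 219,344 to 329,016
```

Which is, give or take rounding, where the “between 200,000 and 300,000” estimate comes from.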
The jobs are vanishing in silence. That is the most important sentence in this section. They are not going out with a press release. They are going out the way a light goes out when nobody replaces the bulb – quietly, gradually, and only noticeable when the room is dark.
Goldman Sachs said 300 million. Conservative. They were modelling the foreseeable adoption curve. They were not modelling agentic AI at scale. They were not modelling the humanoid robotics pipeline. They were not modelling the compounding second and third-order effects of the Intelligence Displacement Spiral that Citrini described.
The real number is larger. Significantly larger. And the correct word is not “lost.”
The correct word is vanished.
The Intelligence Displacement Spiral

Citrini Research did something in their 2028 Global Intelligence Crisis report that almost nobody else has done: they followed the logic all the way to the end.
Most economists stop at displacement. They count the jobs that disappear, project the jobs that will be created by AI, subtract one from the other, call the result “net impact,” and present it in a way that implies the system will somehow self-correct, with a little bit of pain along the way. They are modelling a stable system absorbing a shock, not an unstable system in structural collapse.
Citrini modelled the feedback loop.
Here is what it looks like.
AI improves. Companies adopt AI to reduce labour costs. White-collar layoffs increase. Displaced workers spend less. Lower consumer spending weakens businesses that rely on discretionary consumer demand. Those businesses respond by cutting more workers and investing further in automation to protect margins. The automation investment accelerates AI capability. AI improves. Companies adopt more AI. White-collar layoffs increase further.
Round and round we go until…
The loop is self-reinforcing, and – this is the critical observation – there is no natural brake. In a normal recession, falling wages eventually make human labour cheap enough to re-employ. The price signal corrects. The cycle turns. But in an AI displacement cycle, the alternative to human labour does not get more expensive as human labour gets cheaper. It gets less expensive, because the AI is improving and the compute costs are falling. The human cannot compete on price against a technology that is simultaneously getting better and getting cheaper. The natural corrective mechanism is broken. When Jensen Huang says AI will become a utility, like electricity, or when Sam Altman says AI will become as cheap as electricity, this is the silent part you are supposed to figure out for yourself.
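The broken price signal can be sketched in a few lines. Every number below is invented for illustration; the only structural assumption, taken from the argument above, is that the cost of the AI alternative falls faster than human wages do:

```python
# A minimal sketch of the "no natural brake" claim, with invented numbers.
def spiral(human_cost=100.0, ai_cost=20.0,
           wage_decline=0.05, ai_cost_decline=0.20, periods=10):
    """Return (human, ai) cost pairs, one per period."""
    path = []
    for _ in range(periods):
        path.append((human_cost, ai_cost))
        human_cost *= (1 - wage_decline)    # displaced workers bid wages down
        ai_cost *= (1 - ai_cost_decline)    # compute cheapens, models improve
    return path

ratios = [h / a for h, a in spiral()]
# The human/AI cost ratio worsens every single period: falling wages
# never catch the faster-falling alternative, so the price signal that
# re-employs humans in a normal recession cannot fire.
print(all(b > a for a, b in zip(ratios, ratios[1:])))  # True
```

With any pair of decline rates where the AI side falls faster, the ratio diverges; the sketch is insensitive to the specific starting costs.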
Citrini illustrated this with a specific person. A senior product manager at Salesforce in 2025. Health insurance. 401(k). $180,000 a year. She lost her job in the third round of layoffs. After six months of searching – six months in a market that was already beginning to freeze at the professional level – she started driving for Uber. Her earnings dropped to $45,000, and she has to work overtime to reach even that.
Multiply this dynamic by a few hundred thousand workers across every major metropolitan area. Overqualified labour flooding the service and gig economy, pushing down wages for existing workers who were already struggling. Sector-specific disruption metastasising into economy-wide wage compression.
This is not a job market correction. This is, at best, a cardiac event. And we are currently in the minutes just before the chest pains become undeniable.
The Maths Olympiad, The Bar Exam, and The End of the IQ Premium

For most of the 20th century, IQ was the invisible mechanism behind elite professional sorting.
Nobody said it out loud at dinner parties. Nobody put it on a job advert. But it was the operating system beneath the credential. The reason investment banks recruited exclusively from Oxford, Cambridge, Harvard, and Princeton was not because those universities had better libraries. It was because those universities attracted, filtered, and certified the highest-IQ individuals – the ones who could hold the most variables in mind simultaneously, process the most information, reach the most accurate conclusions under pressure.
The premium careers – investment banking, hedge funds, quant trading, computer science, medicine, law, academic science, consulting – were, in their essential nature, IQ-premium careers. The salary was the price of cognitive scarcity. The credential was the certificate of that scarcity. The whole system was built on the assumption that the supply of very high IQ was limited, and that the economy would always reward it generously.
Suits – one of my favourite TV shows, not the garment – built nine seasons of drama on this premise. Harvey Specter and Mike Ross were compelling not despite their intelligence but because of it. The drama was the drama of exceptional minds operating in a profession that priced exceptional minds. The whole show is predicated on the notion that being the smartest person in the room is worth something, even if you never officially went to Harvard. It is not an accident that the show is about lawyers specifically – a profession whose entire value proposition is: you are paying for my ability to reason better than the other side.
Now.
AI sat the Bar Examination. It passed.
AI competed in the International Mathematical Olympiad – the competition that represents the absolute peak of human mathematical reasoning, the event where the most gifted mathematical minds of each generation test the very limits of structured human thought. In 2025, Google’s system solved four out of six IMO problems, achieving a score that would have earned a silver medal at the competition.
Not a participation certificate. A silver medal.
AI is now solving mathematical theorems that have remained open for decades. In 2024, Google DeepMind’s AlphaProof contributed to progress on long-standing conjectures in formal mathematics. The AI is not just doing the work of the student. It is doing the work of the professor.
The IQ premium is being absorbed. Not gradually. Comprehensively. The careers that commanded the highest salaries because they required the rarest cognitive gifts are precisely the careers that AI has been trained – deliberately, strategically, publicly – to target first. The Maths Olympiad. The Bar Exam. The USMLE. The coding benchmark. The financial modelling test. These are not random demonstrations. They are a sequence of press releases aimed at every boardroom on earth.
The message is consistent: you can stop paying the IQ premium now that AI is here.
Where does that leave EQ – emotional intelligence? The theorists argue it is the remaining moat. The ability to read a room. To build trust. To navigate conflict. To lead with empathy. And they are right, up to a point. But in a world where AI agents are running the operational layer – handling the analysis, the drafting, the coding, the modelling, the research, the administration – the question becomes whether there are enough EQ-premium roles to absorb the population of IQ-premium roles that are being eliminated. The answer, on current trajectories, is a firm no. There will be more EQ-premium survivors than IQ-premium survivors. But there will not be enough EQ-premium roles to employ the entirety of the professional class that the IQ-premium economy previously sustained.
The Student vs. The AI Agent: A Competition with One Winner

Imagine a student. She is 22 years old, in her final year of a marketing degree at a decent university. She has a student loan of $100,000. She has done everything correctly – attended lectures, completed internships, built a portfolio, learned Google Analytics, obtained a Meta Blueprint certification and a handful from HubSpot. She has read every HubSpot article on digital marketing. She is, by the standards of the education system that trained her, a qualified, employable marketing professional.
She applies for a junior marketing role at a company with a $50 million annual revenue. The hiring manager reviews her application alongside one other option: Claude Code or a Claude-powered AI agent, available at $200 a month, that has been trained on every marketing textbook ever written, every marketing campaign ever analysed, every Seth Godin blog post, every HubSpot article, every HBR case study, every Google Ads best practice guide, and every piece of performance marketing data from the last decade.
The AI agent does not need onboarding. It does not need annual leave. It does not need a salary review. It does not file a grievance when it disagrees with the marketing brief. It does not need health insurance. It works at 3 a.m., 4 a.m., 5 a.m. if the campaign requires it. It can run ten campaigns simultaneously. It does not make the kind of emotional errors that junior employees make in their first year because they are still figuring out office politics.
The student has her loan. She has her ambition. She has her humanity.
Who gets the job?
The marketing degree she spent $100,000 on was built on a curriculum designed for a world where marketing knowledge was scarce – where understanding consumer behaviour, campaign architecture, copywriting, analytics, and brand strategy required years of study and practical experience. That world existed, and it produced professionals of genuine value.
But the AI was trained on the entirety of that knowledge. On every textbook. On every certification course. On every Seth Godin book, every Philip Kotler framework, every performance marketing playbook. The knowledge that took the student three years and $100,000 to acquire was available to the AI as part of its training corpus, at no incremental cost.
The student is not competing with a better candidate. She is competing with the entire accumulated knowledge of the marketing profession, available on demand, at a marginal cost that rounds to zero.
Sir Ken Warned Us. We Weren’t Listening
Sir Ken Robinson gave his TED Talk – Do Schools Kill Creativity? – in 2006. It is, at the last count, the most-watched TED Talk in the platform’s history. Tens of millions of views. Translated into dozens of languages. Quoted in schools, universities, government education policy documents, and corporate training programmes worldwide.
And almost nobody applied the most important part of it.
Sir Ken’s central argument was not merely that schools suppress creativity. It was that the entire global education system had been organised, in its priorities and its hierarchies, to produce a specific kind of worker: the cognitive-analytical knowledge worker, with mathematics and science at the top, humanities in the middle, and arts, dance, drama, and creative writing at the bottom. The hierarchy was not accidental – it was designed, he argued, for the industrial economy, and had never been updated.
Here is the part that, in the light of AI, reads like prophecy.
Sir Ken argued that computers and machines could be taught to do mathematics faster and better than humans. That the cognitive-analytical skills which the education system placed at its apex โ the STEM skills, the logical reasoning, the structured problem-solving โ were precisely the skills most susceptible to mechanisation. That prioritising them above creative, embodied, and relational skills was building an education system that trained children to be outcompeted by the very machines those children were being trained to use.
He said this in 2006. Without knowing about the Transformer architecture. Without having seen GPT-4 pass the Bar exam. Without having watched an AI medal at the International Mathematical Olympiad.
He was right.
The education system heard him, applauded him enthusiastically at the TED conference, gave him a standing ovation, shared the video 65 million times, and then continued to prioritise STEM, continued to defund arts and drama, continued to build university rankings around research output in science and engineering, and continued to advise students that the path to financial security ran through mathematics, coding, and law.
That advice was correct for 2006. It was becoming questionable by 2017 when the Transformer paper was published. It was wrong by 2022 when ChatGPT launched. It is actively harmful today.
The cruelty is specific. The students who listened most carefully, who worked hardest, who sacrificed most – who chose the “safe” STEM routes, who took the computer science degrees, who went to law school, who studied accountancy – these are the students most exposed to the displacement caused by AI. They did everything they were told. The system they were told to trust is being automated from underneath them while they are still paying the loan.
The Education Collapse: The Domino Nobody Is Watching

Here is the domino that the jobs conversation consistently fails to follow.
When a marketing job vanishes, what happens to a marketing degree?
Not immediately. Not this year. But work the logic forward with honesty.
If the role of the marketing professional – the campaign manager, the SEO specialist, the brand strategist, the copywriter – is being systematically replaced by AI agents, then the market demand for marketing graduates compresses. When the market demand compresses, fewer students enrol in marketing programmes. Why would they? When fewer students enrol, universities reduce their marketing faculty. When the faculty shrinks, the marketing degree programme loses resources. When the programme loses resources, it loses accreditation standing. When it loses standing, fewer students enrol.
This logic applies to every degree programme in which the graduate’s economic function is being automated by AI.
And it does not stop at the degree.
What is the point of a Google Ads Certification when Google’s own AI can perform the certified functions? What is the point of a Meta Blueprint credential in a world where one person with Claude Code runs the growth marketing for a $380 billion company? What is the point of a Udemy course in Python when vibe coding means you describe the problem in English and the AI writes the solution? What is the point of a master’s degree in Business Administration when the case studies it is built on are already in the training corpus of every major LLM?
The education industry is not a passive observer of the AI displacement story. It is a primary victim of it. And quite frankly, it does not seem to have read the memo generated by ChatGPT. The prompt is on the wall, and we are busy amusing ourselves with the wrong things.
Back in the early 2000s, when Tony Blair was still the Prime Minister and made it his Labour government’s ambition to get 50% of young people into university – an ambition that became policy, that restructured the entire English further education system, that created the student loan architecture that currently holds £20 billion of national debt – he was responding to a labour market that priced degrees heavily. That market is now deflating.
Nearly half of Gen Z and millennial graduates in the United States now say their degree was a waste of money. Total US student loan debt has reached nearly $2 trillion. Forty per cent of graduates report that their student loan has limited their career growth more than their degree has accelerated it. The mathematics of higher education โ borrow now, earn the premium later โ is collapsing at both ends simultaneously. The borrowing cost is still real. The premium on intelligence is shrinking.
And then there are the university towns.
I noticed this when I first arrived in Southampton to study law. The city’s economy was, in ways both obvious and subtle, organised around the university. Student accommodation. Student bars. The restaurants and cafés and transport links and letting agents and estate agents and corner shops that calibrated their entire business model to the rhythm of the academic year. The students were not merely studying. They were, via the student loan system, injecting government-backed capital into a local economy that had come to depend on the annual cycle of arrival, expenditure, and – for the fortunate ones – post-graduation residency.
Universities are not just educational institutions. In the towns built around them, they are the economic infrastructure.
When the degrees disappear – and they will not all disappear at once, but they will disappear – what happens to those towns? What happens to Southampton, to Durham, to Exeter, to every mid-sized British city whose recovery from post-industrial decline was anchored, quietly, to the expansion of higher education that Tony Blair’s government underwrote with borrowed money?
The question is not being asked in any planning meeting. It should be the only item on the agenda.
The Mortgage Crisis Nobody Sees Coming

Citrini scratched the surface of this. The full depth of it is not yet in any official forecast. Let me go a bit further.
The 2008 financial crisis was triggered by a specific structural failure. Banks had extended mortgages to borrowers who could not, under conditions of economic stress, reliably service their debt. The “subprime” mortgage – the loan to the borderline borrower – was packaged alongside the “prime” mortgage – the loan to the reliable, high-income, highly educated professional – and sold to investors as broadly equivalent. When the subprime borrowers defaulted, the entire package was compromised. The reliable borrowers’ mortgages did not become worthless because of their own behaviour. They became suspect because they shared a package with mortgages that were failing.
The global financial system discovered, in the space of eighteen months, that the asset class it had built its entire stability on was less solid than the models suggested.
Now. Listen carefully.
The white-collar professional – the lawyer, the analyst, the developer, the accountant, the marketing director, the consultant – is the prime mortgage borrower of the modern economy. They are the person who, historically, received the loan. They received it because their income was stable, their employment was durable, and their earning trajectory was predictable. The bank made the calculation: this person has a professional credential, a professional income, and a professional career path. The risk is manageable. Lend to them at a multiple of their annual salary.
The AI displacement is being applied precisely to this person.
The senior product manager at Salesforce earning $180,000 a year who ended up driving Uber at $45,000. Multiply her by hundreds of thousands, if not millions. Multiply them by their mortgages. Multiply those mortgages by their banks’ exposure to professional-class lending.
The $13 trillion US mortgage market was underwritten, in significant part, on the assumption that the professional class would continue to earn what the professional class has historically earned. That assumption is being structurally invalidated. Not by a recession that will correct. By an AI system that is getting cheaper and more capable every six months, targeting the exact income bracket that the mortgage market relied on most.
This is not the 2008 crisis. It is worse. In 2008, the problem was on the liability side – bad loans. In the AI crisis, the problem is on the income side – good loans being serviced by people whose earning capacity is structurally impaired.
The banks are not modelling this. The regulators are not modelling this. The government housing departments are not modelling this. They are still using income trajectories calibrated to a pre-AI professional labour market.
When the defaults begin – and they will not begin tomorrow, but the timeline is not abstract – they will not announce themselves as AI-related defaults. They will appear in the data as mortgage arrears among educated professionals in their thirties and forties. The kind of defaults that look, at first glance, like a recessionary anomaly. Until someone does the specific arithmetic and traces the income impairment to the structural compression of the intelligence premium.
By which point, the cascade will already have begun.
The UBI Lie

Universal Basic Income will be the political answer. So they keep telling us. It is already being proposed, tested, and debated in every major economy. And as a response to AI displacement, it is – in its current framing – almost entirely insufficient.
Here is the problem.
UBI, as proposed, works as follows: the government taxes the productivity gains from AI automation and redistributes them to the displaced workers as a basic income floor. Economy gets the efficiency. Society gets the dividend. Everybody adjusts. Progress continues.
The mechanism requires two things to function: a government with sufficient tax revenue to fund the distribution, and a technology sector willing to be taxed at the rate required to fund it.
On the first requirement: the tax base of a modern government is built primarily on income tax from working people. When AI displaces 300 million jobs globally โ when the professional class, which pays the most income tax, sees its wages structurally compressed โ the tax revenue available to fund UBI decreases at precisely the moment the demand for UBI increases.
You cannot tax unemployed people. You cannot tax people earning $45,000 who used to earn $180,000 at the rates required to fund a meaningful basic income for the millions who have been similarly displaced. The maths does not work. Or as they now say, the math is not mathing.
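A back-of-envelope sketch makes the squeeze concrete. Every figure below is an invented round number (ten million displaced professionals, a 25% effective tax rate, $1,500 a month of UBI) – the shape of the result, not the digits, is the point:

```python
# Back-of-envelope sketch of the two-ended squeeze: displacement shrinks
# the income-tax base at the same moment it grows the UBI bill.
# All inputs are illustrative assumptions, not sourced figures.
displaced = 10_000_000                    # displaced professionals
old_salary, new_salary = 180_000, 45_000  # the Salesforce-to-Uber income drop
effective_tax_rate = 0.25
ubi_per_person = 18_000                   # $1,500 a month

lost_tax = displaced * (old_salary - new_salary) * effective_tax_rate
ubi_bill = displaced * ubi_per_person

print(f"Tax revenue lost: ${lost_tax / 1e9:.0f}bn")  # $338bn
print(f"UBI bill:         ${ubi_bill / 1e9:.0f}bn")  # $180bn
```

On these assumptions the treasury loses more in income tax than the entire UBI programme would cost: the funding hole opens before a single payment is made.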
On the second requirement: the companies generating the productivity gains from AI – OpenAI, Anthropic, Google, Microsoft, Meta, and their successors – are among the most aggressive and sophisticated tax optimisers in the history of global commerce. They are headquartered in the most tax-efficient jurisdictions, structured to minimise corporate tax liability, and staffed with the finest tax counsel money can purchase. The prospect of them voluntarily contributing the scale of taxation required to fund meaningful UBI for hundreds of millions of displaced workers is – to put it charitably – not supported by their historical behaviour.
The dirty secret of the UBI debate is this: the people proposing it are, in many cases, the same people whose companies are driving the displacement. Sam Altman has endorsed UBI experiments. He has the luxury of endorsing them because he controls the asset that would have to be taxed to fund them, and he knows with precision how the tax avoidance architecture operates. The UBI proposal, coming from the Tech Emperor whose company is the primary driver of professional class displacement, is the most expensive piece of public relations since the cigarette companies funded research into the health benefits of smoking.
The government picks up the bill. They always do. The investor captures the gain. The displaced professional becomes a line item in a welfare budget. And the same investors who funded the AI that destroyed the professional class get to lend the government the money it needs to fund the UBI it now requires – at a higher interest rate. Bravo!
In Zimbabwe, we had a version of this. It was called structural adjustment. The international financial institutions like the IMF provided the loans. The government implemented the policies. The people at the bottom paid the cost. The people at the top maintained the assets.
We did not call it progress.
The Mental Health Ticking Clock

This will be the section that the economists skip and that the psychologists will later say was the most important.
The 2020 Covid lockdowns produced a mental health crisis of documented severity – a crisis characterised by enforced isolation, loss of routine, loss of social connection, loss of purpose, and the specific psychological damage of being kept from the things that gave life structure and meaning.
The AI displacement crisis will produce a mental health crisis that makes the Covid crisis look, in retrospect, like a difficult week.
Here is why.
Work is not merely an income mechanism. For the professional class especially – the people who invested years of their lives in a credential, who built their identity around their expertise, who derived their social status, their sense of competence, their daily purpose, their friendship networks, their romantic lives, their sense of being a capable adult in the world – work is identity.
The junior developer who cannot find a first job is not just financially stressed. They are being told, by the market, that the seven years of effort – the GCSE choices, the A-level choices, the university application, the degree, the projects, the late nights, the portfolio – produced something the world does not need anymore. That their intelligence is not scarce anymore. That the premium they were told existed does not exist. That they are, in the blunt vocabulary of the labour market, redundant before they began.
The mid-career lawyer who is let go in the third round of AI-related redundancies is not just financially exposed. They are confronting the collapse of a story they have been telling themselves for twenty years โ the story that their effort, their discipline, their sacrifice, their expertise, made them valuable. The market’s answer is no longer: not right now, try again. The market’s answer is: the thing you built your value on is no longer scarce.
The psychological literature on identity-based unemployment โ on what happens to people when their professional identity is stripped away without a credible alternative โ is unambiguous. Depression. Anxiety. Substance misuse. Relationship breakdown. Suicide risk elevation. These are not edge cases. These are the documented outcomes of mass professional displacement in every historical case where an entire class of work was eliminated.
The difference here is scale, simultaneity, and the absence of a believable next chapter.
In previous waves of displacement, there was always a narrative. The coal miner could retrain. The factory worker could move to services. The narrative was hard and often false in practice, but it existed. It gave the displaced person something to aim at. A direction. A story in which they were not finished, merely between chapters.
The current displacement has no credible next chapter for a significant portion of those affected. When AI is targeting every cognitive profession simultaneously — when the advice “learn coding” is answered by Claude Code, when the advice “become a data analyst” is answered by AI analytics, when the advice “move into management” is answered by agentic orchestration layers — the person on the receiving end of the displacement is not just unemployed. They are epistemically stranded. They cannot identify a direction that feels honest.
That is not a job market problem. That is a mental health emergency. And it is being seeded, right now, in every hiring freeze, every redundancy round, every graduate who cannot find a first position, every mid-career professional whose role has been quietly automated. The acute crisis will arrive perhaps two to three years after the displacement peak. When the savings run out. When the optimism of “I’ll find something” has been exhausted. When the retraining paths have been tried and found wanting.
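The “two to three years” is not a guess about psychology; it is runway arithmetic: savings divided by monthly burn. A minimal sketch, in which every number is invented for illustration (none of the sources cited in this essay provide household figures):

```python
# Why the acute crisis lags the displacement peak: a displaced
# professional's runway is savings divided by monthly net burn.
def runway_months(savings: float, monthly_costs: float,
                  replacement_income: float = 0.0) -> float:
    """Months until savings are exhausted; infinite if income covers costs."""
    burn = monthly_costs - replacement_income
    return float("inf") if burn <= 0 else savings / burn

# Assumed, illustrative figures: £60,000 saved, £4,500 of fixed monthly
# costs (mortgage, family), £2,000 of stopgap income (gig work).
print(runway_months(60_000, 4_500, 2_000))  # 24.0 months: roughly two years
```

The point is not the figures, which are invented. The point is that the lag between the redundancy round and the emergency is mechanical: nothing looks visibly wrong until the runway runs out.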
Governments are not preparing for this. There is no policy document. There is no NHS mental health workforce scaled for a civilisation-level professional identity collapse. There is no therapy pipeline for the displaced professional class.
There will need to be.
The AI Ripple Effect: What Nobody Has Fully Mapped
The conversation about AI displacement focuses on the jobs directly replaced. It is not focusing — and this is the 100X dimension — on everything those jobs sustain.
A marketing job does not just disappear. It takes the following with it.
The marketing degree programme. The marketing professors who teach it. The marketing certification programmes from Google, Meta, HubSpot, and Coursera that supplement it. The marketing textbooks — and with them, the publishing contracts, the author royalties, the academic journals, the conference circuits. The Udemy and LinkedIn Learning courses, which generate significant revenue precisely because coding and marketing skills commanded a premium and people paid to acquire them. The marketing agencies, whose entire business model is charging a premium for cognitive labour that is now being reproduced at the cost of a subscription.
Repeat this for computer science. For law. For accountancy. For financial analysis. For consulting. For everything else.
Each profession that is automated is not just a set of jobs lost. It is an ecosystem — an educational pipeline, a certification infrastructure, a publishing industry, a conference circuit, a tools and software market, a professional services layer — that exists because the profession existed and commanded economic value.
When the profession’s value compresses, the ecosystem compresses with it. Not at the same speed. But in the same direction. With the same inexorability.
And then there is the consumption layer.
The advanced economies of the United States, the United Kingdom, Germany, France, and their peers are consumption economies. Seventy per cent of US GDP is consumer spending. That consumer spending is not evenly distributed across the income spectrum. The professional class — the lawyers, the consultants, the developers, the analysts, the marketing directors — disproportionately sustains the consumption economy. They are the mortgage holders. The new car buyers. The restaurant regulars. The private school fee payers. The pension contributors. The premium subscription holders. The premium everything holders.
When their incomes compress — when the $180,000 product manager becomes the $45,000 Uber driver — they stop consuming at the level that sustained the businesses that employed other people, who sustained other businesses, in the consumption cascade that a modern economy depends on.
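The cascade in that paragraph is, mechanically, a spending multiplier. A minimal sketch using the essay’s own $180,000-to-$45,000 example; the marginal propensity to consume of 0.8 is an assumption for illustration, not a figure from any source cited here:

```python
# Toy spending-multiplier model: one professional's lost income ripples
# outward, because their spending was someone else's income, and so on.
def total_spending_impact(initial_income_loss: float, mpc: float) -> float:
    """Geometric series: loss * (1 + mpc + mpc**2 + ...) = loss / (1 - mpc)."""
    return initial_income_loss / (1.0 - mpc)

income_loss = 180_000 - 45_000   # the essay's example: $135,000 of direct loss
mpc = 0.8                        # assumed marginal propensity to consume

# Roughly $675,000 of total spending removed: five times the direct loss.
print(round(total_spending_impact(income_loss, mpc)))
```

The multiplier is standard textbook machinery; what the essay adds is the claim that the professional class sits at the top of the cascade, so the same direct loss removes more downstream spending than an equivalent loss lower in the income distribution.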
Citrini’s research called this the Human Intelligence Displacement Spiral: a self-reinforcing downward loop with no natural brake. It is not just a jobs story. It is a macroeconomic collapse scenario, dressed up in the language of productivity improvement and efficiency gains, wearing the face of progress.
Unlike the 1990s outsourcing wave — when work moved to India and the Philippines, compressing wages but preserving the basic architecture of the employment system — AI does not preserve the architecture. It replaces it. The BPO in Manila does not survive the AI wave. The freelancer in Lagos does not survive the AI wave. The junior developer in Bangalore does not survive the AI wave. There is nowhere cheaper to send the work, because the cheapest option is not a human in a cheaper country. It is a model running at the marginal cost of electricity in a data centre that Sam Altman is building with $500 billion of committed capital.
Unlike the outsourcing wave, there is no floor. There is no lower-cost human alternative waiting at the bottom of the labour cost pyramid. The pyramid has been removed. There is now a flat plane, and on that plane, you compete directly with a system that has been trained on everything you know, does not need a salary, and is getting better every six months.
What Governments Must Do. Now
I want to be clear: this is not a manifesto. This is a diagnostic statement. I am a nobody with a tiny blog called TechOnion, and I just want to hold the tech industry to account. I get satisfaction from calling out Big Tech. And I have the time to name the con clearly.
But naming the con honestly requires acknowledging what the silence of governments represents.
The silence represents the absence of any serious engagement, at a policy level, with what the Human Intelligence Premium Collapse actually means — not just for the labour market, but for the education system, the tax base, the mortgage market, the pension system, the mental health infrastructure, and the social contract that underpins the consent of the governed in a democratic society.
There are no parliamentary committees examining what happens to student loan repayment when the degrees funded by those loans produce graduates who cannot find jobs. There are no Treasury working groups modelling the tax base implications of professional class wage compression at scale. There are no Bank of England stress tests for mortgage default rates in scenarios where white-collar employment falls by 20% or more. There are no NHS mental health strategies for civilisation-level professional identity collapse.
Governments are not ready. Not because they are stupid. Because the incentives to engage with this honestly — to name the scale of it, to model the second and third-order effects, to make the policy decisions that would follow from an honest assessment — are catastrophically misaligned with the incentives of the electoral cycle. No politician wins an election by telling the professional class that their human intelligence premium is gone, that the degree they are currently financing with student loans is losing value faster than the interest accrues, and that the mortgage they took out on the assumption of a 30-year professional career trajectory is being underwritten on a set of income assumptions that a $200-a-month AI subscription is in the process of invalidating.
So they say: upskill.
They say: the future is human-AI collaboration.
They say: new jobs will be created.
And they hope Citrini’s scenario does not arrive before the next election cycle.
Some governments will attempt to tax AI heavily. Some will attempt to regulate it into a slower deployment pace. Some will attempt to ban certain AI applications in certain sectors. These responses are not irrational — they are the instinct of a governance system that understands a threat is approaching but lacks the tools to address its root cause.
But here is the hard truth that this essay has been building toward.
We have already eaten the forbidden fruit of AI.
The train is not approaching the station. It is in the station. The doors have opened. The passengers have boarded. Sam Altman and co are in the driver’s seat. And the track, which runs through every profession, every education system, every university town, every mortgage market, and every tax base in the developed world, has no scheduled stops.
The question is not whether to get on.
The question is whether you are going to run toward the carriage that still has seats โ or stand on the platform, holding a credential, waiting for a train that runs on a different track.
The Music Is Slowing
“What you’re telling me is that the music is about to stop, and we’re going to be left holding the biggest bag of odorous excrement ever assembled in the history of capitalism.”
— John Tuld, Margin Call, 2011
The World Before The Clocks Struck
It was 2008, and the clocks had just struck 2 a.m.
The world — most of it — was asleep. Ordinary people in ordinary beds, in ordinary houses, in the ordinary darkness of a Tuesday night that felt exactly like every other Tuesday night, because the system was working, the economy was growing, the newspapers were full of the right numbers, and there was no particular reason to suspect that the ground beneath the whole magnificent structure was not solid.
The world was asleep.
But in a tower above the streets of New York — one of those towers, the ones with the logos at the top that you can see from across the river on a clear night, lit like altars to a god whose name is printed on the quarterly report — the lights were still on.
They were always still on.
This was 2008. Finance was the game. Finance was sexy. Not technology — technology was for the nerds, for the garage dreamers, for the people who wore grey hoodies to meetings and ate cereal from the box at their standing desks and ramen noodles for dinner. Finance was where the real money was. Where the bonuses were not productivity awards or equity tranches vesting over four years but actual numbers, in actual bank accounts, with actual zeroes — six of them, sometimes seven — arriving before Christmas like a verdict from a benevolent god who had decided, this year, that you had been sufficiently ruthless.
Investment banking. The two most beautiful words in the English language, circa 2008. Goldman Sachs. Morgan Stanley. Bear Stearns. Merrill Lynch. Lehman Brothers. These were not companies. They were civilisations. They were the architecture of the world economy, the plumbing behind every major transaction, the invisible hand that Adam Smith had theorised and Wall Street had monetised, operating at a scale and with a confidence that made government feel slow and universities feel quaint and every other industry feel, frankly, like a hobby.
The men — and they were mostly white men, let us be honest about the room — who worked in these towers had not arrived there by accident. They were, in the most literal and measurable sense, the smartest people of their generation. Not the wisest. Not the kindest. Not the most balanced or the most humane. But the smartest, in the specific, testable, examinable, IQ-quantifiable sense of the word. They had sat the examinations and come top. They had studied the hardest subjects — mathematics, physics, financial engineering — at the best universities, and they had been recruited with the specific vocabulary of the exceptional: we would like to have you on our team.
Some of them had been rocket scientists. Literally. Aerospace engineers, astrophysicists, applied mathematicians who had spent years calculating orbital trajectories and stress tolerances and the thermodynamics of re-entry, and who had been quietly approached by a recruiter who said: we have a use for people who think the way you think. And the pay is considerably better.
The pay was considerably better.
So at 2 a.m. on a Tuesday in 2008, the lights were on. Young men in shirts that had been crisp twelve hours earlier — still expensive, still the right fit, because you could tell which firm someone worked for by the cut of the collar at thirty paces — were at desks that had never fully emptied, running numbers on screens that showed more information in one square foot than a medieval library contained in its entirety. They were not tired, or if they were tired they were the particular kind of tired that comes with a million-dollar bonus on the horizon, which feels, biochemically, almost indistinguishable from being extremely awake.
Money never sleeps. Gordon Gekko had said it, and Gordon Gekko was fictional, but the people in these towers had watched the film several times and nodded in the way that people nod when they recognise themselves in a mirror that is being pointed at them by someone who is technically criticising them but has got the details exactly right. Greed is good. They did not say this out loud. They did not need to. It was the operating system of investment banking. The terms and conditions nobody reads because everyone already agreed.
The banks, just like Zion, proudly sat as queens. Who could trouble them?
Into this world — into this specific tower, in this specific city, at this specific 2 a.m. — walked a man in a suit.
The Man in the Suit
His name was John Tuld.
He was the Chief Executive. And he had not been woken — woken implies that sleep had been achieved, which implies a vulnerability that John Tuld did not advertise. He had been summoned, which is a different thing, and the distinction mattered to him in ways he would never articulate but always enforced.
The car had come for him. Of course, the car had come for him. The car always came. There was a man whose specific employment was to ensure that John Tuld could be transported from any location to any other location at any hour of the night without the friction of logistics ever becoming a conscious concern. This was not extravagance. It was infrastructure. The kind of infrastructure that a man builds around himself when he understands, with the cold clarity of someone who has spent decades in rooms where decisions cost billions, that his time is the scarcest and most important thing.
He stepped out of the car in a suit that was not slightly rumpled from being worn all day and then slept in and then worn again, as the suits of lesser men in lesser circumstances might have been. It was sharp. It was the specific sharpness of a suit that has been put on five minutes ago from a wardrobe that is stocked for exactly this eventuality, because a man at John Tuld’s altitude does not have a single good suit. He has arrangements. He stepped into the building without breaking stride. The lobby security did not ask him to sign in. The lifts opened before he reached them.
He arrived in the boardroom last. He always arrived last.
Not because he was late. Because the person who arrives last into a room they have summoned is already telling the room something important about the structure of reality as it pertains to this particular gathering. He was not late. He was deploying his arrival as a piece of information.
The room was full of intelligence. Human intelligence.
***
This is the thing that the boardroom scene in Margin Call (2011) — directed by J.C. Chandor, starring Jeremy Irons as John Tuld, in a performance so precise and so cold that it deserves to be shown in every business school on earth — does that almost no other depiction of corporate crisis manages. It shows you a room full of the smartest people money could assemble. Not villains. Not fools. People who had earned their positions through cognitive performance that was measurable, documented, and exceptional. People who had passed the examinations, built the models, navigated the crises, climbed the hierarchy of an industry whose entire architecture was built on the premise that intelligence was scarce, that mathematical ability was rare, and that both deserved to be extraordinarily well compensated.
A room full of intelligence, staring at a young man who had found something in a financial model that nobody in the room wanted to be true.
John Tuld sat down.
The room waited.
The Rocket Scientist In The Room
The young man’s name was Peter Sullivan.
Before he was an analyst at this firm — before the flat in Manhattan and the salary and the Bloomberg terminal and the specific social grammar of being the youngest person in a room full of very senior men at 2 a.m. — he had been a rocket scientist. Not metaphorically. An engineer. The kind of person who calculates the tolerances on aerospace systems, who works in units of force and heat and velocity, who is trained to find the precise point at which a structure will fail under stress.
Someone had looked at Peter Sullivan and said: we have a better use for this brain. The pay is considerably better.
And so he had come. As they all had come. Pulled by the specific gravity of the human intelligence premium — the system that said: if you are this smart, if you can hold this many variables in mind simultaneously, if you can build financial models of this complexity with this accuracy, then the market will reward you at a level that no other industry will match. The finance industry had, for two decades, been the most efficient harvester of exceptional mathematical intelligence on earth. It had taken the rocket scientists and the physicists and the mathematicians and given them something more compelling than orbit calculations to work on: the architecture of money itself. The art and science of conjuring money out of thin air.
Peter Sullivan, who worked in the risk management department, had found something of concern in the risk models.
He had been working on it when his boss was escorted out of the building that afternoon — the boss who had handed him the incomplete work and said, quietly, in the way of a man who understands that some information travels better when it travels upward without him: be careful. And Peter had stayed late, finished it, looked at the numbers, and felt the floor disappear.
He had called upward. The call had gone upward again. And upward again. Until it reached the car that came for John Tuld.
Now John Tuld looked at him across the table with the specific attention of a man who is already two steps ahead and needs the room to catch up, and said:
Mr. Sullivan. Tell me what you think is going on here. And please — speak as you might to a young child. Or a golden retriever. I didn’t get here on my brains, I can assure you of that.
The room shifted. Not because they believed him. Because they understood the performance, and what the performance was doing. He was giving Peter Sullivan permission to speak plainly. He was removing the social architecture of deference that might otherwise cause a junior analyst to hedge, to qualify, to soften what the numbers actually said. He was also — and this is the part that makes Jeremy Irons’ delivery of this line one of the finest pieces of acting in modern cinema — telling the room, with the tone of a man who has never needed to announce his own intelligence in his life, that the people who arrive at 2 a.m. in sharp suits summoned by his car are not here to perform their sophistication. They are here to tell him the truth.
Peter Sullivan told him the truth.
The Mortgages in the Machine
Over the last thirty-six months, Peter explained, the firm had been packaging new products. Mortgage-Backed Securities — commonly known as MBS products. The logic was elegant in the way that genuinely dangerous things often are, with the specific elegance of a mechanism whose efficiency conceals its fragility.
The firm was taking mortgages — not one type of mortgage but many tranches, different layers of credit quality, different risk profiles, different expected default rates — and packaging them together into a single tradeable security. The good mortgages and the less-good mortgages and the mortgages that, if you looked at them directly and honestly and without the helpful blur of a complex financial model, were extended to people whose ability to service them depended on two conditions remaining permanently true: that house prices would always continue to rise, and that interest rates would not.
Neither condition was a law of nature. Both had been treated as if they were.
The product was, Peter continued, very profitable. The firm had noticed this. The CEO had also noticed it. The firm had noticed it to the tune of revenues that had made the bonuses of the people in this room very large. The problem — the reason they were here at 2 a.m. in sharp suits on caffeine — was a risk management challenge that had been present in the architecture from the beginning but had been, for thirty-six months, not quite visible in the model.
The firm had to hold these assets on its books for almost a month before they could be layered and sold. A month of exposure. And because these were essentially just mortgages — because the underlying assets were houses, which are physical, which are permanent, which seem as safe as the ground they sit on — the leverage had been pushed considerably beyond what would have been permissible in any other financial instrument. The leverage that makes the profits enormous is the same leverage that makes the losses, if the direction reverses, existential.
The model, Peter said, assumed a level of volatility in the underlying assets. A range of movement that the firm’s positions could survive. The problem — the thing he had found, working late at night, in the specific mathematical silence of a man who understands orbital re-entry and can therefore recognise, with precision, the point at which a structure will fail under stress — was that the actual volatility of the assets had already exceeded the model’s parameters.
Not by a small amount.
The assets — the mortgages, the houses, the homeowners in Ohio and Florida and Nevada who had been sold loans on the implicit promise that prices would always rise — were moving outside the range the model said they should occupy.
And if they were to move by the amount that Peter Sullivan’s numbers suggested they might — not catastrophically, not the end of the world, just some further movement in the direction they were already moving —
The losses would exceed the entire market capitalisation of the firm.
The firm would be worth less than nothing.
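The arithmetic Peter is walking the room through can be sketched in a few lines. Every figure below is invented (the film never states the firm’s leverage or market capitalisation); what matters is the structure: leverage turns a modest move in the underlying into a loss larger than the firm.

```python
# Toy illustration of leveraged MBS exposure. All numbers are assumptions
# for illustration, not figures from the film or from any filing.
def mark_to_market_loss(exposure: float, price_drop: float) -> float:
    """Loss when the underlying assets fall by price_drop (a fraction)."""
    return exposure * price_drop

equity = 8e9                  # assumed market capitalisation: $8bn
leverage = 30                 # assumed leverage ratio on the MBS book
exposure = equity * leverage  # $240bn of assets held during the packaging window

within_model = mark_to_market_loss(exposure, 0.02)   # inside the "permitted" volatility
outside_model = mark_to_market_loss(exposure, 0.05)  # just beyond the parameters

print(within_model)   # $4.8bn: painful, survivable
print(outside_model)  # $12bn: more than the entire firm is worth
```

A 3-point move in the underlying, on a 30-times-levered book, is the difference between a bad quarter and worth less than nothing.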
But the key factor, Peter said, arriving at the sentence that would keep the room very still, is that these are essentially just mortgages.
Just mortgages.
Just houses.
Just the thing that most human beings do once in their lifetime, that represents the largest financial commitment most people ever make, that is backed by the income of the professional class — the people who work, who earn the salaries, who pay the monthly payment that flows up through the security to the firm — that the system had decided was as safe as the ground itself.
John Tuld leaned back.
He already knew. He had known since the car arrived at his penthouse. He had known, if he was being honest, for longer than that.
He looked at the room. He looked at the model. He looked at the young man who had been a rocket scientist and who had found the thing that was in the numbers and had followed it to the place the numbers went, and he said, very quietly:
What you’re telling me is that the music is about to stop. And we are going to be left holding the biggest bag of odorous excrement ever assembled in the history of capitalism.
Peter Sullivan paused.
Sir, he said. Using your analogy โ the music isn’t stopping. The music is just slowing.
The Human Intelligence Premium is the Music
I have been building to this moment since the beginning of this essay. Some 20,000 words ago!
Because the Human Intelligence Premium is the music. Let me say that again: our human intelligence, our IQ, our smarts, the thing that separates us from the animals — that is the music John Tuld is referring to.
Not the loud, obvious, inescapable music that everyone hears and dances to consciously. The music that has been playing so continuously, for so long — for every economy ever built, for every salary ever negotiated, for every university ever founded, for every professional credential ever issued, for the entire 200-year architecture of the knowledge economy — that almost no one can hear it as music anymore. It has become silence. It has become the natural order of things. The permanent, unchallengeable backdrop to everything we have built.
The music is the assumption that human intelligence is scarce, and therefore valuable. And will forever remain this way.
And right now, in 2026, in the towers and the server farms and the boardrooms of a different industry in a different city — an industry that did not exist in any meaningful form when Peter Sullivan found the flaw in the model — a group of men in equally sharp suits, arriving in equally well-organised cars, are looking at a different set of numbers.
Goldman Sachs released a report. 300 million jobs. Gone, or degraded, by artificial intelligence. The headline landed in newsfeeds globally and was processed with the specific calm of people receiving a forecast about weather they will not experience until next winter. That’s a lot, people said. But it’s the future. We’ll figure it out.
It was not the future. It had already started.
In 2025, AI-related layoffs in the United States displaced an estimated 200,000 to 300,000 workers — and that is the figure derived from honest counting. The official figure, based on employer self-reporting, was 55,000 or thereabouts. Employers have rational incentives to describe AI-driven redundancies as “restructuring.” The real number is what you get when you count the roles that were never re-advertised. The jobs that did not go out with a press release but simply, quietly, stopped existing. The junior developer role that became a hiring freeze. The paralegal cohort that became a licence for Harvey AI. The marketing team that became one person with Claude Code and a subscription.
Block was honest. Block’s CEO, Jack Dorsey (who some suspect may be Satoshi Nakamoto), said: we are cutting roles because AI can do this work. Meta was honest, in the language of a quarterly call where the word “efficiency” does the work that “your job is gone” would otherwise have to do. The tech companies — the ones who built the tools and are the first to deploy them — have already passed the internal tipping point. They know what the model is showing.
Sam Altman has said, several times, to anyone who cares to listen, and listen deeply, that intelligence will be as cheap as electricity. He has said this in the specific tone of a man announcing a gift, the way a Victorian industrialist might have announced that the new coal-powered mill would bring prosperity to the region — while owning the mill. Jensen Huang, the Chief Executive of NVIDIA — the company whose graphics chips are the physical infrastructure of the AI revolution, whose market capitalisation has made him one of the wealthiest people alive — has said that the age of AI is here, that every industry will be transformed, that the companies and nations that move first will win.
To their investors and shareholders alike, this is pure classical music. Like a Hans Zimmer score for the greatest film ever made about the new AI golden age.
To the junior lawyer with $200,000 of law school debt. To the Computer Science graduate who built their career plan around a skills premium that was being compressed by Claude Code while they were still writing their dissertation. To the marketing director watching the Anthropic video on LinkedIn — the video of the one-person growth marketing team at the $380 billion company — and feeling, for the first time, the specific cold of a room where the temperature has dropped three degrees and no one has yet opened a window. To the 1.8 million workers in the Philippine BPO sector facing 93.7% automation exposure. To every student, in every country, currently borrowing money to acquire credentials for professions whose market is in active contraction.
To all of them:
This is not music anymore.
This is noise.
And it is slowing.
Be First, Be Smarter, Or Cheat
John Tuld looked at his lieutenant.
There are three ways, he said, to make a living in this business.
Be first. Be smarter. Or cheat.
He did not cheat. He didn’t like it. And while he suspected there were some very smart people in the room — his tone implying that he had his doubts — it was, he concluded, a hell of a lot easier to just be first.
Two words from the lieutenant: Sell it all.
Not tomorrow. Not when the regulatory framework permits an orderly wind-down. Not when the legal opinion has been obtained and the reputational risk has been modelled. Today. While the assets still have a price. While the counterparties on the other side of the trade do not yet know what the model is showing. While the music is still playing.
The head of the traders looked at the CEO across the table. His name was Sam Rogers โ a man who, unlike John Tuld, had spent his career on the floor rather than above it, who understood in his body what the abstraction of “sell it all today” meant for the human beings who would be holding the bag when the trades settled. Who understood that the other side of every trade is a person. Who understood that what they were proposing to do was to pass the loss to the market before the market understood it was absorbing one.
Do you have any idea what you are doing?
John Tuld turned and looked at him with the patience of a man who has already decided.
Do you?
The Bag We Are Already Holding
Here is the subprime mortgage in its plainest form, and here is why it maps exactly to what is happening now.
The financial system in 2008 had built a product — elegant, profitable, extensively modelled — on the assumption that the underlying asset was safe. The house. The homeowner. The monthly repayment from the person who had been extended credit on the basis of an income trajectory that seemed, in 2005, to be permanently upward.
The leverage was extraordinary because the asset seemed permanent. Houses have been valuable for all of recorded history. People have always needed shelter. The assumption was not stupid. It was the assumption that a system built on scarcity always makes — that the scarce thing will remain scarce, that the premium will remain premium, that the ground beneath the structure is solid because it has always been solid.
The ground was not solid.
Now.
The professional-class income is the mortgage. The human intelligence premium is the house. The entire financial and social infrastructure built on the assumption of human cognitive scarcity — the student loans, the mortgages, the pension contributions, the consumption that sustains the economy, the tax receipts that fund the state — is the MBS. Layered, tranche upon tranche, institution upon institution, assumption upon assumption.
And the model is showing — has been showing, for anyone who followed the numbers where the numbers go — that the volatility in the underlying asset has exceeded the parameters. The intelligence premium is moving outside the range the model was built to manage. Not catastrophically. Not the end of the world. Just some further movement in the direction it is already moving, as AI gets better every six months, as the costs fall, as the deployment accelerates, as the music slows.
Goldman called it 300 million. The WEF called it 92 million displaced. Citrini called it the Great Intelligence Crisis and warned that 2028 would be the year the confluence became undeniable. Peter Sullivan, if he were in this room, would lean forward and say: the model is not wrong. But it is not capturing what happens when the leveraged assumptions dissolve simultaneously.
When the graduate cannot find a role, the student loan is not repaid. When the student loan is not repaid, the government borrows to cover it. When the government borrows, it turns to the investors who funded the AI that eliminated the employment. The same investors. The cycle is closed. The bag is being passed, right now, to everyone who is still paying for the human intelligence premium that is being actively deflated.
The degree. The certification. The professional qualification. The LinkedIn credential in a skill that the AI has already been trained on. The mortgage taken out on the assumption of a 30-year professional income trajectory that is being structurally undermined by a subscription that costs $200 a month.
That is the bag. The bag of odorous excrement. Many economies are the investment banks in this analogy: they are holding assets (the student loans, the consumption, the GDP, the tax income) that rest on a worthless underlying asset (the human intelligence premium, now decimated by AI). This is the bag that people are refusing to acknowledge.
The people who understand what the model is showing – who have run the numbers, who built the system, who know what happens when the leveraged assumption dissolves – are already, quietly, in their cars. Arriving last. Already decided.
They are being first.
And everyone else is still in the boardroom, looking at the young analyst, waiting to be told how bad it really is.
The Music
Let us end here. Precisely. Clearly. Without comfort that the evidence has not earned.
The music is not stopping.
The music is slowing.
This is the distinction that the models cannot fully capture and the politicians cannot fully say and the university prospectuses cannot acknowledge and the recruitment campaigns cannot afford to name. The music is slowing – right now, in 2026, in every hiring freeze and every restructuring announcement and every entry-level role that was not re-advertised and every marketing team that learned, from a promotional video, that a $380 billion company managed its growth with one person and a subscription.
The music is slowing in the WEF data: 92 million jobs displaced by 2030. The music is slowing in the Goldman number: 300 million, conservatively, globally. The music is slowing in the UCL study that found the graduate premium two-thirds lower than previously thought. The music is slowing in the Federal Reserve data that found college-requiring job postings down 50% since 2010. The music is slowing in the ILO brief on the Philippines: 93.7% exposure. 1.8 million people.
And when Sam Altman says intelligence will be as cheap as electricity, and when Jensen Huang says the age of AI is here – when the men in the sharp suits, arriving in the well-organised cars, say these things from stages to rooms full of investors – what they are saying, in the precise language of the boardroom scene that Chandor wrote and Irons delivered with the quiet devastation of a man who understands exactly what he is holding, is:
We know what the model is showing.
We have decided to be first.
Sell it all.
To the investor: this is music. This is the most beautiful trade in the history of capitalism. The asset being harvested – the collective intelligence of the human species, ten thousand years of accumulated cognitive output, scraped without payment, trained without consent, deployed at the marginal cost of compute – is infinite. The market is every human being on earth. The subscription renews monthly. The alternative – human intelligence, the thing that was expensive, that required training, that demanded salary and benefits and career development and human dignity – is being priced out of the market, daily, by the very tool that was trained on its output.
The music, to the investor, is getting louder and more energetic.
To everyone else – to the lawyer, the developer, the analyst, the marketing director, the graduate, the parent remortgaging to fund the degree, the student borrowing to acquire the credential, the worker in Manila whose 93.7% exposure the ILO measured in a brief that circulated quietly in policy circles without triggering a single emergency meeting – the music is slowing.
It has not stopped.
When it stops – when the full weight of what the model is actually describing lands in the actual economy, in actual unemployment figures, in actual mortgage defaults, in actual tax shortfalls, in actual university towns where the local economy has been quietly gutted by the contraction of a degree market built on the premium that no longer exists – the reports that have been written about it, including this one, will look like Peter Sullivan in the boardroom. Accurate about the mechanism. Radically insufficient about the scale.
Because no model is built to describe what happens after the music stops.
Not Citrini’s. Not Goldman’s. Not the WEF’s. Not this essay.
The moment when the leveraged assumption – human intelligence is scarce and therefore valuable – dissolves simultaneously across every sector, every market, every profession, every educational institution, every mortgage book, every government budget – that moment is not in the model.
It is what the model is pointing at.
We have been looking at the model.
We have not yet looked at where it is pointing.
John Tuld would look at all of it. He would lean back. He would look at you, specifically – you, the person who has read this essay to this point, who has followed the numbers where the numbers go, who is now sitting with the weight of the conclusion that the analyst didn’t finish – and he would say, very quietly, with the patient tone of a man who did not get here on his brains:
Do you understand what this means?
You do now.
There are three ways to make a living in this world.
Be first. Be smarter. Or cheat.
The only remaining question – the one only you can answer, and the one the clock is insisting you answer quickly – is the one he threw back across the table.
Do you understand the scale of the problem?
****
Tech is a Scam! I’ve written a satirical exploration of our relationship with technology since the first fire. You will enjoy it – my new book is titled “The Emperor’s New Suit”, and it’s available on the official TechOnion website and on Amazon. I also argue that AGI is a con in “The Gilded Cage – How the Quest for Artificial Intelligence (AGI) Became the Greatest Deception in Human History”. Thank you for reading all 20,000 words of this essay, which came from a single idea I had after reading Citrini’s The Great Intelligence Crisis 2028 report.
