In the Beginning, There Was Enough
The light in Eden arrives before the sun does. It always does.
It seeps through the canopy in long, amber columns, warm as breath, and the garden receives it the way a sleeping face receives morning light – without gratitude, because it has never known the alternative. The fig trees hold their fruit like quiet offerings. The river Euphrates, unhurried, divides and subdivides through meadows so green they seem invented, passing beneath willows that trail their fingers in the current the way the perfectly content trail their fingers in everything. There is no wind, exactly – there is something gentler than wind, a slow circulation of air that carries the scent of jasmine from the eastern slope and the mineral cold of the river and something else beneath it, something organic and foundational, the smell of a world that has not yet been touched by anything that does not belong in it.
There are lions here. They sleep in patches of sunlight beside the deer, their chests rising and falling with the deep, unhurried breath of animals that have never known threat or hunger. The deer do not watch the lions. Why would they? The lions do not watch the deer. There is no watching for danger here. There is only the present, which is inexhaustible, and the warmth, which is unconditional, and the extraordinary, taken-for-granted sufficiency of a world arranged entirely around the needs of the things that live in it.
This is what they will spend the rest of their lives trying to describe to their children, Cain and Abel.
Adam is on the northern slope when it starts, doing what Adam does in the hour after midday – which is largely nothing, in the way that nothing is available in a garden where your every need has already been met before you thought to have it. He is lying on his back in the grass with one arm folded under his head, watching a pair of birds perform an elaborate, improvised geometry in the air above him, and he is perfectly, completely, almost offensively content. This is important. He is not dissatisfied. He is not restless. He is not, in any measurable way, unfulfilled. He has the garden. He has Eve, who is somewhere to the south and whose laugh carries on the circulating air and makes him turn his head in her direction with the automatic, involuntary response of someone who has found the exact frequency they were built to hear. He has the river. He has the fruit, of every kind, in abundance – figs and pomegranates and things that have no name yet because naming has not yet reached them, sweet and various and always in season, because in Eden everything is always in season.
There is only one rule. Only one.
Every tree. Every fruit. Every sweetness the garden contains – all of it is yours. Except that one.
That one, in the centre. That one specifically. Not because you cannot reach it. Not because the fruit is rotten or the tree is barren. But because the fruit on that tree contains a knowledge – a specific, particular, exact knowledge – that is not yours to have. Eat everything else. But not that.
Reasonable, if you think about it. One rule. The whole garden for the price of one rule. Most people could manage one rule. Or so we tell ourselves.
She finds it, as she always finds things, by following the thing that interested her. She is not, Eve, a woman who walks past interesting things. The tree is interesting. It has always been interesting, in the way that forbidden things are always interesting – not because of what they are but because of the specific shape of the space around them, the way attention flows toward what it cannot have. She has walked past this tree before. Many times. She has looked at the fruit – how could you not? The fruit is extraordinary. It hangs in clusters of deep, impossible red, each one the size of a fist, and the light catches it differently from every angle, and the smell – even from here, even without touching – the smell is something between familiarity and revelation, the olfactory equivalent of a word you have always known but have only now understood.
She is not planning anything. This is important. She is standing near the tree, close enough to see the light move through the skin of the fruit, and she is not planning anything at all. She is simply the most curious person in a world that has given her everything except the one thing she does not have: the knowledge of everything.
The serpent finds her there.
He does not approach her the way a predator approaches. He has nothing of the stalking about him, nothing of the coil and the calculation. He moves through the grass with an ease that is almost conversational, and when he speaks – and he does speak, and this is the first uncanny thing, the first small wrongness in a world that has known nothing wrong until this moment – when he speaks, he speaks with the voice of someone who has merely happened to notice something interesting and cannot quite help sharing it.
Did He really say you couldn’t eat from any tree in the garden?
The question is not a lie. It is better than a lie. It is an invitation to correct him, which means she has to engage, which means she has to explain the actual rule, which means she hears herself saying it aloud, which means she is now in a conversation about the fruit rather than not in a conversation about the fruit, and the serpent is very good at conversations.
Oh, just that one? Just the one in the middle?
He looks at it. He looks at it the way someone looks at something they have seen before and found entirely unremarkable, and there is something in that look – the casual familiarity of it, the absolute absence of the reverence she brings to the tree – that is its own small manipulation. He is not afraid of the tree. She has always been slightly afraid of the tree. Why is he not afraid of the tree?
You won’t die.
He says it simply. Not triumphantly. Not with the salesmanship of someone overstating a case. He says it the way you tell someone a fact they have been given incorrectly, gently, as a service. You won’t die. And then, because a great salesman knows that the close is never the last thing you say – because the close, done properly, feels like the middle of a thought rather than the end of an argument – he adds:
He knows that when you eat it, your eyes will open. You’ll be like Him. You’ll know everything He knows.
Pause on that sentence for a moment. Not the offer – the mechanism of the offer. He does not say: you will become powerful. He does not say: you will gain abilities. He says: you will be like Him. The person who made all of this. The person who arranged the lions and the deer and the circulating air and the light that comes before the sun. The person who named things before things had names for themselves. The person who knew, before you existed, what you would need. You will be like that person.
This is the oldest pitch in the history of everything. Not you will have more. Not you will achieve things. But you will be more. You will transcend what you are. You will become the kind of being who does not look up at the garden and wonder who designed it β you will be the one who designs.
She looks at the fruit.
And she sees it, suddenly, clearly, as if the veil over it has been lifted, which of course it has – the serpent did not lie about the fruit being beautiful. He has not lied about anything. That is the genius of it. The fruit is beautiful. It is clearly, visibly, self-evidently good for food. It is everything he said it was. And the knowledge inside it – the knowledge of everything, the knowledge He has, the knowledge that would make you like Him – that knowledge is real. It is there. She can almost feel it, the way you can feel warmth from a fire before you have crossed the distance to it.
She reaches out.
Her fingers close around it.
The fruit gives slightly under her grip, the way perfectly ripe fruit gives, and the smell – released now, now that the skin is pressed – is extraordinary, is almost more than a smell, is something that arrives in the chest rather than the nose, and she raises it to her lips and she bites.
The knowledge does not arrive gently.
It does not unfold like morning light. It arrives the way cold water arrives when you are submerged – total, simultaneous, immediate. Not a sequence of new information but a sudden, complete, appalling context. The fruit does not give her what the serpent promised. It gives her something adjacent to it, something that contains the promised knowledge as a single thread in a tapestry she was not equipped to see. She knows now. She knows everything she did not know before. And the very first thing that knowledge does – the first thing, before anything else – is show her exactly what she has just done.
She looks down.
She has always been naked. She has been naked every day of her life in this garden and it has never once occurred to her to be anything other than naked, because there has never been anything to hide, because there has never been anything to be ashamed of, because shame requires the knowledge of how you are perceived, and she did not have that knowledge until this exact moment. She has it now. She looks down and she feels, for the first time in the history of the human experience, the specific, particular, constitutional horror of being seen and found wanting.
She finds Adam. She finds him still on the northern slope, still watching the birds, still wearing on his face the absolute, innocent contentment of a man in a garden that is entirely his, and she holds out the fruit.
And here is the thing about Adam that the story rarely lingers on: he sees. He is not deceived the way she was deceived. He does not have the excuse of the serpent’s argument, the elegant reframing, the flattery of the pitch. He looks at the fruit. He looks at her face – at what is already there in her face, the thing that arrived with the knowledge – and he knows, on some level, what has happened. He can see what it has done. He eats it anyway. Because she ate it and they are together in this garden and if the choice is between knowledge and her, or innocence without her – then he chooses her. He chooses her and the knowledge comes with her and he is not ready for it either.
The light does not change. The river does not stop. The lions continue sleeping beside the deer. The garden is exactly as it was three minutes ago, down to the last leaf, the last current of air, the last amber column of light through the canopy.
But they are not as they were three minutes ago. Not even close.
They reach for the nearest fig leaves. Large ones. And they sew them together with fingers that are suddenly, inexplicably clumsy – fingers that have never had to build anything before, that have never had to cover anything before, that have never had to protect themselves from a gaze they feared, because the only gaze in this garden was warm and unconditional and had never asked them to be anything other than what they were. They sew the leaves and they hide among the trees and the garden that was entirely theirs – the abundance, the unconditional provision, the light before the sun – is still there, all of it, exactly as it was.
They cannot go back to it.
This is what the story is actually about. Not the eating. Not the rule. Not even the knowledge. It is about the moment after, the moment they are standing in a garden full of everything they ever needed, and it is all still there, and they cannot go back to it. Not because they are expelled yet – that comes later, that comes in a moment, the angel with the flaming sword, the gate swinging shut on the sound of the river they will hear for the rest of their lives and never stand beside again.
But because the knowledge they ate has already done the one thing that could not be undone. It has shown them the gap between what they were and what they wanted to be. And in closing that gap, it has opened another one – permanent, constitutional, definitional – between what they had and what they have now.
They wanted to be like God. They got the knowledge. They lost the garden.
The serpent is already somewhere else. He does not stay for the aftermath. He never does.
There is another garden. A digital version of the Garden of Eden. It is beautiful. You currently live in it. You have everything you need. And then someone – charming, confident, with excellent venture capital backing and a keynote slot at the World Economic Forum in Davos – leans over and says: try this. It will make you more productive. It will give you superpowers. It will free you from the tedious, repetitive parts of your job and let you focus on the creative, meaningful, strategic work you were always meant to do.
Sounds like a fair deal.
You eat it. You expect something magical to happen. What happens instead leaves you in shock.
And you look down and suddenly realise you are naked. Stark naked. So you have been walking around all this time naked. It’s like that moment after you have given a presentation to the whole office: you go to the bathroom, and as you wash your hands, you look up at the mirror and notice the large yellowy bogey in your nose.
Mind you, you are not metaphorically naked. You are professionally naked. Redundantly naked. Your boss has noticed. His shareholders have noticed too, with glee. His board has noticed, because your boss hurried to tell them. And the person who handed you the fruit – the one who said it was for your benefit – is currently doubling his own headcount while you are being laid off, on the grounds that the fruit has made you, specifically, unnecessary going forwards.
Welcome to the Garden of AI. The snake wore a nice, all-black turtleneck. The apple had a whirling vortex logo mark. And we ate it. We ate it gratefully, enthusiastically, and in some cases we shared tweets about how delicious it tasted.
(I did warn about this. In The Gilded Cage, I wrote that AGI was being dangled before us like the serpent’s promise: superpowers, liberation, the ability to transcend our limitations. I wrote that the people dangling it were not doing so for our benefit. I was called alarmist. I note, with the grim satisfaction of someone who hates being right about this particular category of thing, that I was not alarmist enough. A bit like how Michael Burry said Citrini’s “The Great Intelligence Crisis” made him feel like he wasn’t bearish enough.)
The Garden Before the Fall
To understand what is being taken, you need to understand what was promised. Not by AI – the promise is older than AI. Surprising but true. It is the promise of every technological revolution since the Industrial one: work smarter, not harder, and the fruits of that smarter work will be shared between the people who do the work and the people who own the means of doing it.
For three decades after the Second World War, this promise was approximately kept. Between 1948 and the late 1970s, labour productivity and worker compensation in the United States (and the developed world) moved together, almost in lockstep. During this period – economists call it the Golden Age, with the slightly embarrassed nostalgia of someone describing a marriage that ended badly – productivity rose substantially and hourly compensation kept pace. The deal was: you produce more, you earn more. Not perfectly, not without friction, not without a trade union occasionally, and sometimes violently, having to remind management of the arrangement. But broadly, the deal held.
Then, in 1979, something changed.
The deal did not end dramatically. There was no announcement. No press conference. No shareholder letter titled We Are No Longer Sharing. It simply… stopped being honoured. Quietly, incrementally, through policy decisions about trade and minimum wages and union rights, through the introduction of factory automation and information technology and the slow erosion of every institutional mechanism that had enforced the original contract.
By 2025 – and this is the number that needs to be read slowly, ideally sitting down – cumulative productivity had grown by approximately 279% since 1979. Real hourly compensation for the vast majority of workers had grown by approximately 18%.
Read that again. And again. Two hundred and sixty-one percentage points of productivity growth went somewhere that was not the people who produced it. For nearly half a century! If this is not enshittification on the Richter scale, I don’t know what is!
The so-called experts will describe this as a “technical nuance” involving the difference between output deflators and consumer price indices. The plain English version is this: workers are producing more value than ever, but that value is being expressed in a currency they cannot use to buy food or pay their mortgage, let alone secure their own survival. They are paid in a metric that measures the falling cost of electronics, in an economy where the cost of housing, healthcare, and education – the things you need to survive – has risen without pause. The maths of their lives does not add up. It is not supposed to add up. The gap between what they produce and what they receive is not an accident of calculation. It is a design feature by Big Tech.
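The wedge can be made concrete with a few lines of arithmetic. This sketch uses only the two growth figures quoted above (279% productivity, 18% compensation since 1979); the 1979 = 100 indexing is simply a convenient way to display them:

```python
# Toy index arithmetic for the productivity-pay gap (1979 = 100).
# The only inputs are the two cumulative growth figures quoted in
# the text; everything below follows from them.

PRODUCTIVITY_GROWTH = 279  # % cumulative since 1979, per the text
COMPENSATION_GROWTH = 18   # % cumulative since 1979, per the text

productivity_index = 100 * (1 + PRODUCTIVITY_GROWTH / 100)  # 379.0
compensation_index = 100 * (1 + COMPENSATION_GROWTH / 100)  # 118.0

# The wedge, in index points, and pay as a fraction of what a
# lockstep ("you produce more, you earn more") deal would have paid.
gap_points = productivity_index - compensation_index        # 261.0
pay_vs_deal = compensation_index / productivity_index       # ~0.31

print(f"Productivity index, 2025: {productivity_index:.0f}")
print(f"Compensation index, 2025: {compensation_index:.0f}")
print(f"Gap: {gap_points:.0f} points; pay is {pay_vs_deal:.0%} of lockstep pay")
```

In other words, workers are taking home roughly a third of what a lockstep deal would have delivered per hour of output.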
This is the iceberg. And humans are on the Titanic, with our governments blindly leading us. Citrini’s Research described the phenomenon as Ghost GDP: economic output that grows on an Excel spreadsheet while the AI agents who generated it do not need toilet breaks, holidays or wages. But the people replaced by AI agents cannot afford to buy anything. What they perhaps did not say clearly enough is that the iceberg has been there for fifty years. We have been sailing the Titanic toward it since 1979, reassuring ourselves that human ingenuity would navigate around it, that growth would eventually trickle far enough down, that the market would correct. That AI would be a bubble. We were wrong. Of course we were wrong. Humans are bad at predictions. But I would have you know that our ship, the indestructible Titanic, has hit the iceberg. The hull is filling with water.
Thomas Andrews, the ship’s designer, was asked by Captain Smith what the damage was. “She’ll sink in an hour,” he said. “Two at most.” He was not wrong about the physics. He was simply the only person in the room honest enough to say it.
AI is not the iceberg. AI is the moment we realise the ship is going down.
The Annual Review: A Brief History of Being Robbed Politely By Management via HR
Before we get to the AI part – and we will get to it, receipts and all – let me tell you about the annual performance review.
You know the annual performance review. Perhaps you have experienced it. It’s a thing for white-collar workers. Perhaps you are experiencing it now, in the sense that you are currently employed in a company that will, at the end of this year, conduct one. They always do. They have to. It has had many names over the decades – appraisal, performance review, 360-degree feedback, personal development conversation – each name slightly more euphemistic than the last, as if the problem with the annual performance review was always that it lacked a sufficiently non-threatening title.
You go in expecting three things: a higher salary, a promotion, and a soft pat on the back for a job well done. You deserve at least two of them. The very least. You have done the work. The numbers support it. You have the evidence. You have prepared.
You come out with: a salary increase “in line with inflation” (which is to say, mathematically identical to not getting a pay rise at all!), a list of development areas described as “opportunities for growth,” and a to-do list calibrated to keep you busy for another twelve months without giving you grounds to argue you deserved promotion. You leave thinking β and this is the most depressing part β thank goodness I still have a job.
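The “in line with inflation” award really is mathematically identical to no rise at all. A quick sketch (the salary and the inflation rate here are invented for illustration):

```python
# Why an inflation-matching raise is a zero real raise.
# The salary and inflation figures are hypothetical.

salary = 50_000.00   # hypothetical current salary
inflation = 0.03     # hypothetical 3% annual inflation
award = 0.03         # the "in line with inflation" raise

nominal_next_year = salary * (1 + award)              # 51,500.00
real_next_year = nominal_next_year / (1 + inflation)  # deflated to today's money

# In purchasing-power terms, nothing has changed.
print(f"Nominal: {nominal_next_year:,.2f}")
print(f"Real:    {real_next_year:,.2f}")
```

The nominal number on the letter goes up; the amount of life it buys does not.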
That thought, that specific relief, that lowering of ambition to the level of mere continued employment – is not an accident. It is the exact output the review was designed to produce.
Later, through the office grapevine – always through the office grapevine, never through official channels – you discover that your managers gave themselves bonuses. That the senior leadership team awarded themselves salary increases described internally as “market corrections.” That several of them have acquired shiny new titles. That the shareholders received record dividends. That the company, which did not have the budget to promote you, magically found the budget for a company away day, a new office fit-out, and a series of “strategic consultancy fees” paid to firms whose principals happen to golf with the CEO and senior directors.
At which point, most people do the same thing: they reset their LinkedIn password, log in, update their profile and start looking for another job. Not because they are disloyal. Because they have correctly identified that the only way to get a pay rise, or a promotion, in this system is to threaten to leave it. The negotiation only works when you have somewhere else to go.
This is the pre-AI labour market. Dysfunctional, extractive, humiliating in its petty dishonesty – but possessing one crucial feature: the worker still had some sort of leverage. The worker was still, in the grand scheme of things, needed. The thing they threatened to withdraw – their presence, their skill, their accumulated institutional knowledge – was still something the organisation could not easily replace.
That leverage is what AI is being used to remove entirely, making white-collar workers unnecessary.
The 2026 Purge: The Garden Closes its Gates
In the first quarter of 2026, corporate management discovered something useful: they no longer had to blame the economy for layoffs. They could blame efficiency.
The distinction matters. Blaming the economy – “macroeconomic headwinds,” “pandemic-era overhiring,” “challenging market conditions” – implied that the layoffs were painful but temporary. Something happened to us. We are not in control. But hold on to your suits and ties, because we will re-hire as soon as economic conditions improve. The framing preserved the fantasy that the company valued its people and was reluctantly parting with them due to forces beyond its control.
Blaming AI efficiency implies something different. We have found something better than you. Not merely cheaper – better, 10x better. More capable. More reliable. Less expensive to maintain. And we would like to thank you for your service, your institutional knowledge, your years of 360-degree performance reviews, and we will now be replacing you with a GPU cluster somewhere in a poor US neighbourhood.
By mid-March 2026, tech layoffs had reached 60,000 globally, with somewhere between 20% and 61% of those cuts linked directly to AI implementation. This followed a bruising 2025 in which over 245,000 employees were laid off. The range is telling: companies are not all being equally honest about the reason. Some are more comfortable saying it plainly than others.
Jack Dorsey was comfortable saying it plainly.
Block – the payments company he runs – cut 4,000 jobs. Forty per cent of its workforce. In the same quarter, Block reported a 26% year-over-year increase in gross profit, to $2.87 billion. They were not cutting jobs because they were struggling. They were cutting jobs because they were succeeding – and success, in 2026 and beyond, means finding out how many of your employees you can eliminate without reducing output.
Dorsey’s justification was not survival. It was “organisational economic density.” The idea that a smaller team, equipped with AI, could perform the work of the larger one. This is true. It is also, if you are one of the larger team, a deeply peculiar framing of your own redundancy. You are not being made redundant because the company is in difficulty. You are being made redundant because the company is doing extremely well, and your continued existence represents an inefficiency on the balance sheet.
Oracle cut 30,000 workers – 15% of its workforce – to “swap human workers for GPU data centres.” Amazon cut 16,000 white-collar positions. Meta has planned cuts of 16,000, a fifth of its workforce, in what its executives described, with the cheery clinical precision of someone describing a building demolition, as “flattening teams” and “elevating individual power users.”
Mind you, a “power user,” in Meta’s current vocabulary, is a person who survives the reduction by demonstrating that they can do the work of four people using AI tools. This is presented as a reward. It is, in the same breath, also a job description for someone working four jobs on one salary. The “power user” has been handed more AI fruit. They are eating it. They cannot see what they are eating it for.
Meanwhile, in the Philippines, a country whose entire Business Process Outsourcing sector – millions of workers, the economic engine of an archipelago – was built on the labour cost advantage that made it attractive to Western companies: between one-third and 40% of the entire workforce is now at risk of displacement. Not because their work is poor. Not because the companies that hired them are struggling. Because AI can now handle the same tasks at a fraction of the cost, from a data centre in Nevada that does not require accommodation, healthcare, or a visa.
The Forbidden Fruit of Efficiency was not offered to the Philippines. It was eaten by the companies that employed the Philippines – and the Philippines is the one that got expelled from the garden. It is the same problem that will soon face India.
The Architects of the AI Paradise: Where the Smart Money is
Here is the number that settles the question of whether the people building AI believe their own story about it.
OpenAI – the company that makes ChatGPT, the tool most commonly cited as the reason for eliminating white-collar jobs – is planning to nearly double its own headcount by the end of 2026. From approximately 4,500 employees to 8,000. Adding roughly twelve new hires every single day. While the companies deploying its tools cut their workforces by 15%, 20%, 40% or even more.
This is not a paradox. It is a confession.
OpenAI is hiring because at the frontier of artificial intelligence – at the place where the technology is actually built, refined, and directed – human intelligence remains the only irreplaceable resource. The company knows this. It knows it so well that it is currently engaged in what the industry calls a “war for talent,” paying obscene salaries and offering equity that makes the tech sector’s already elevated compensation look rather modest. It knows that the people who build the tools are the people the tools cannot replace. And it is using its $500 billion valuation to buy as many of those people as it can, as fast as it can, before the competition does. Mark Zuckerberg is doing the same at Meta.
The AI companies selling the product that justifies eliminating your job are simultaneously protecting their own people from elimination by treating them as the most valuable assets in their organisations.
Read that sentence as many times as it takes. Take your time. The irony will sink like the Titanic.
They sell “AI efficiency” to the world. They maintain an internal “code red” to ensure their own human teams are focused on core product leadership rather than “side quests.” They have “technical ambassadors” – AI specialists hired to embed AI within enterprise clients and help them “make better use of AI tools” – which is a job description that translates, plainly, as: we are hiring people to help your company replace your people with our software, and we are not replacing our people in the process.
The architects are not living in the garden they are selling you. They are building a sanctuary. Or bunkers. The walls are made of talent density and venture capital and the specific knowledge of how to operate a system that the rest of the world is being told to trust but not understand.
Sam Altman speaks about Universal Basic Income (‘UBI’). He advocates for it sincerely, by all accounts. He has funded studies into it. His OpenResearch project provided $1,000 a month to 1,000 participants for three years. The findings were genuinely positive in some areas: cash transfers lifted families out of poverty, improved financial health for the lowest-income recipients, allowed people to leave abusive situations.
The findings also showed that UBI recipients worked 1.3 fewer hours per week and showed no significant improvement in employment or human capital outcomes.
Let me translate this for you: the man building the tools that will eliminate your job is also funding the research into what it looks like to give you just enough money that you do not need one. This is being described as generosity. It is being described as forward-thinking social policy. It is, in the precise tradition of every sufficiently advanced con, being rebranded entirely. In Silicon Valley they no longer say Universal Basic Income. They call it Universal High Income, because the word “basic” has the unfortunate quality of sounding like what it is.
They Have Done This Before: The Luddites Were Right
The 19th-century Luddites are the tech industry’s favourite historical insult. “You sound like a Luddite” means: you are an irrational, progress-fearing reactionary who would rather slow history down than accept the inevitable march of innovation. It is deployed as a conversation-ender, a way of categorising legitimate concerns as medieval resistance.
The Luddites were not irrational. They were highly skilled textile artisans who were specifically and correctly protesting the use of machinery to circumvent established labour practices and replace skilled adult workers with low-wage child labour. They did not object to machinery in general. Many of them operated machinery expertly. They objected to a specific deployment of specific machines for a specific purpose: the destruction of their bargaining power, the elimination of their trade, and the transfer of the economic value of their skill to factory owners who had contributed nothing to its development.
They were right about all of it. The machines did destroy their trade. The factory system did transfer the value of their skill to owners. The social contract that linked work to dignity was broken, deliberately, by people who described this as progress.
The government responded by deploying more troops to northern England than it had sent to fight Napoleon in the Peninsular War. It tells you everything.
The Luddites were not wrong about the analysis. They were wrong about having enough power to stop it.
Henry Ford, in 1914, offered a counter-example so rare it has become a case study in its own right. After introducing the moving assembly line – which cut chassis assembly time from 12 hours to 1.5 hours – Ford discovered that turnover had reached 370%. Workers were simply leaving, because the pace of the line was humanly unsustainable. His solution was to more than double the average wage of the time, to $5 a day. Not out of charity. Out of the explicit recognition that productivity gains must be shared to sustain the market – that the workers who built the cars needed to be able to afford the cars. That an economy in which all gains flow to owners and none to workers will eventually stop working, because the workers, who are also the consumers, will have nothing left to spend.
Modern tech firms have rejected the Fordist model. They are eliminating the workers and the consumers simultaneously, and the thing standing between them and the consequences of this is a $1,000-a-month UBI study funded by the CEO of the company doing the eliminating.
The economics of this do not work. Citrini’s Research modelled it explicitly: as AI agents remove the top 10% of earners – the white-collar knowledge workers whose roles disappear first and who account for 50% of all discretionary consumer spending – consumption drops. As consumption drops, companies invest more in AI to cut costs. As they invest more in AI, more workers are displaced. The feedback loop has no natural floor. The Ghost GDP grows. The Ghost Economy grows. The prosperity does not. The iceberg is very large. The ship is not slowing down.
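The shape of that loop can be sketched as a toy iteration. Every parameter below is invented for illustration; the point is only that the dynamic compounds downward with no natural floor, not that these particular numbers mean anything:

```python
# A toy model of the displacement-consumption feedback loop described
# above. All parameters are hypothetical; only the direction of the
# dynamic (it compounds downward) is the point.

def simulate(years: int = 5) -> float:
    employed_share = 1.0      # index: employed knowledge workers
    displacement_rate = 0.05  # invented: share of roles cut per year
    feedback = 1.5            # invented: how hard firms react to lost sales
    for year in range(1, years + 1):
        employed_share *= (1 - displacement_rate)
        consumption = employed_share  # displaced workers stop spending
        # Falling consumption pushes firms to cut costs harder next year.
        displacement_rate *= feedback ** (1 - consumption)
        print(f"year {year}: employed {employed_share:.2f}, "
              f"cut rate next year {displacement_rate:.3f}")
    return employed_share

final_share = simulate()  # shrinks every year, faster each time
```

Run it and the cut rate rises every year while employment falls: the loop feeds itself, which is exactly the "no natural floor" problem.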
Efficiency Shame: The New Management Science
There is a specific psychological texture to the 2026 workplace that deserves its own paragraph, because it is new and it is deliberate and it is the most elegant part of the con.
It is called “efficiency shame.”
As AI demonstrates the ability to process thousands of molecules, write millions of lines of code, handle 80% of customer calls autonomously, complete in seconds what you complete in hours – the human worker is increasingly evaluated not against other human workers, but against the machine. And against the machine, by the machine’s own metrics, the human loses. Every time. On speed. On scale. On accuracy in repetitive tasks. On availability. On the cost line of a P&L.
A 2025 survey by Jobs for the Future found that 64% of workers felt only “moderately empowered” or “not very empowered” as AI use expanded. Eighty-four per cent of workers reported job insecurity as a significant stressor. Workers aged 18 to 25 – people who are just beginning their professional lives, carrying heavy student debt, who have not yet built the accumulated expertise and institutional knowledge that makes a senior employee genuinely difficult to replace – report feeling “invisible” in workplaces that value algorithmic output over human contribution.
This is the efficiency shame. And it is not an accident of poor communication or inadequate change management. It is what happens when you take a system of human beings who derived meaning, agency, and identity from the quality of their work – and you introduce AI that performs the quantifiable parts of that work faster, cheaper, and without complaint, then measure the human against it.
Only 36% of workers report having the training needed to adapt to AI. Yet they are expected to keep pace with automated workflows. The narrative deployed to explain this gap – “you won’t lose your job to AI, you’ll lose it to someone who uses AI” – is a masterpiece of individualising a structural problem. It does not address why the AI tools exist. It does not address who benefits from them. It shifts the entire burden of survival onto the person least positioned to bear it, and frames their failure to thrive as a personal inadequacy rather than a predictable outcome of a system designed around their replacement.
I had a 360-degree performance review once. The 360 referred to the number of degrees in a circle, implying that feedback came from all directions – peers, subordinates, managers. What it actually meant was that there were now more people officially documenting my inadequacies. The efficiency shame of 2026 is a 360-degree performance review conducted by an AI that never had a bad day, never got tired, never asked for a pay rise, and is not going to the pub afterwards to tell everyone what it actually thinks of the management. In the short term, AI wins the performance review. In the long term, companies will have no humans left to review.
The Enshittification Game: Who Gets the Fruit
Let us follow the money, because the money is where the argument ends.
Companies that deployed productivity AI in 2025 outperformed the S&P 500 by 29%. Their stock prices rose 17.2% compared to the broader index’s 13.3%. The “outperformance” is real. The mechanism of it is not mysterious. You replace expensive humans with cheap AI, your cost base falls dramatically, your profit margins expand, your earnings per share improve, your stock goes up. The productivity gain is entirely genuine. The question of who receives it is entirely decided, and the answer is not the workers.
Apple authorised $110 billion for share repurchases in 2024 – a United States record. Alphabet bought back $62.6 billion of its own stock in the same period. Meta, in its “Year of Efficiency,” returned $25.4 billion to shareholders via share buybacks – while simultaneously planning to eliminate 16,000 of its 80,000 employees. Total US stock buybacks are predicted to have exceeded $1 trillion in 2025 alone.
A stock or share buyback, for those who have not had cause to become familiar with this particular mechanism of value extraction, is when a company takes cash that could be used to pay workers more, invest in R&D, lower prices, or train people to use the new AI tools – and uses it instead to buy its own shares, reducing the number of shares in circulation and increasing the value of the ones remaining. It is, in essence, a bonus to shareholders for all the hard work they did many moons ago in investing in the company. It benefits, in descending order: institutional shareholders, the executives whose compensation is tied to share price, and no one else.
The enshittification is most visible, and most quantifiable, at the gig economy level – where algorithmic control is total and the human gig worker has no institutional protection at all.
Uber drivers in 2026 are working more and earning less. Through “upfront pricing” and “algorithmic trip bundling,” Uber has shifted the risk of traffic and route changes onto the driver while compressing per-mile and per-minute pay. The app shows big numbers for gross earnings. The net income, after fuel, maintenance, insurance, and the depreciation of a vehicle being used as a commercial asset, frequently falls below minimum wage. The driver provides the car. The driver takes the risk. The algorithm takes the margin.
Amazon now takes more than 50% of seller revenue, up from 40% five years ago, through a structure of referral fees, fulfilment charges, storage costs, and mandatory advertising spend so complex that most sellers require specialist software to calculate their actual profitability. A typical seller on a $29.99 product that generated $6.26 profit in 2024 now generates $4.74 – a 24% collapse in profit per unit despite identical sales volume. The efficiency of the platform does not lower prices for the consumer. It does not increase income for the seller. It perfects the redirection of value to Amazon’s shareholders, and calls this progress.
The efficiency gain is real. The question of who receives it is answered in the shareholder letter, not the press release.
The Universal High Income: Pacification Dressed as Policy
Let me say something clearly about Universal Basic Income (or Universal High Income), because it is going to dominate the next decade of political discourse and it is important to understand what it is and what it is not.
It is not a concession. It is not the tech industry acknowledging that automation has obligations. It is not Silicon Valley suddenly developing a social conscience. It is a business continuity plan.
Here is the problem that Sam Altman, Peter Thiel, Elon Musk, and every other tech billionaire who has endorsed some version of UBI is trying to solve: if you eliminate the white-collar workforce, and the white-collar workforce is also the majority of the consumer class, and consumers stop consuming because they have no income, then the economy that generates your valuation stops working. The system that made you a billionaire requires consumers. Consumers require income. If AI takes their income, someone has to replace it, or the consumer economy collapses and takes the tech sector’s growth story with it.
UBI is the maintenance fee for a consumer economy that has had its workforce removed. It is the minimum viable expenditure required to keep the people you have replaced from stopping consumption entirely – or, if we are being blunt about the secondary consideration, from becoming so desperate that the political consequences become inconvenient.
The tech emperors who advocate for UBI are not wrong that it would help the people who receive it. Sam Altman’s study showed genuine positive outcomes for the lowest-income recipients. They are simply not being transparent about why they want it implemented. A $1,000 monthly payment – the “Universal High Income” they are now calling it, because the word “basic” has the embarrassing quality of real accuracy – is just enough to sustain consumption at a level that keeps the platforms profitable. It is not enough to build savings, acquire assets, fund education, or develop the kind of economic security that produces independent political agency. It is, to borrow a term that the Zimbabwean experience makes vivid, the official rate. The street rate of what the automated economy owes the people it has replaced is considerably higher. No one is offering the street rate.
The irony – and it is an irony that could only have been produced by a civilisation that took a wrong turn somewhere around 1979 – is that the most aggressively capitalist ecosystem in human history, the one that produced the first trillionaires, the one that holds annual conferences at which unelected billionaires deliver speeches about the future of humanity to rooms full of other unelected billionaires – has arrived, by the logic of its own success, at a position that requires a form of state redistribution to function. Not because socialism won. Because capitalism automated itself into needing a floor. The floor they are proposing is exactly high enough to prevent collapse and exactly low enough to prevent challenge.
What Jacques Ellul Knew and Nobody Listened To
In 1954 – two years before the first commercial computer was sold, thirty-five years before the World Wide Web, almost seventy years before ChatGPT was introduced to the world via a tweet – a French philosopher named Jacques Ellul published a book called The Technological Society. His argument was straightforward and has never been successfully rebutted.
Technology – what he called “Technique” – is not a neutral tool. It is an independent force that, once released into a society, reorganises that society around its own requirements. It does not serve human values. It replaces them. Efficiency and optimisation become the supreme virtues, not because they are the most important human values – they are not, by any serious reckoning – but because they are the values most compatible with the technology’s operation. The society reorganises itself to become legible to the machine, rather than the machine reorganising itself to serve the society.
Neil Postman called the end state of this process a “Technopoly” – a civilisation in which technology has become the arbiter of all value, in which “if it can’t be measured, it doesn’t exist,” in which empathy, tradition, and human judgment are treated as inefficiencies to be designed out.
In the 2026 workplace, empathy is a “frictional” quality. Human judgment is slower than algorithmic decision-making. Contemplation does not aid in “streamlining the product-consumer process.” The worker who brings twenty years of nuanced institutional knowledge to a problem is evaluated against the AI agent that brings no knowledge but processes all relevant inputs in 0.3 seconds. The worker who asks whether the efficient solution is also the right solution is told that asking this question is not part of their role.
The architects are exempt from this logic. OpenAI, Anthropic, Google DeepMind β they all maintain human-dense organisations precisely because they know that at the frontier of building these systems, human judgment is not a frictional quality. It is the only quality that matters. They are not building Technopolies for themselves. They are building them for the companies that buy their products.
This is the hidden truth of the Forbidden Fruit. The serpent did not eat it. The serpent knew exactly what it was.
Here is what the data says, without euphemism, without management language, without the particular kind of corporate English designed to make structural extraction feel like a partnership.
Since 1979, the productivity of workers has grown by 279%. Yet their compensation has grown by 18%. The difference – the 261 percentage points of value produced and not returned – went somewhere else. It went to shareholders. To the buyback programmes of companies that were made profitable by the very workers whose pay was suppressed to fund the repurchases. Enshittification is real!
In 2026, the companies that spent decades suppressing wages to fund the development of AI are now using that AI to eliminate the workers who made them profitable. The efficiency gains are flowing, without interruption, in the same direction they have always flowed.
The architects of the system are expanding their own workforces because they understand, better than anyone, that human intelligence at the frontier is irreplaceable. They are selling a different message to everyone else.
The workers who remain are being told that their survival depends on becoming “power users” of the tools being used to replace their colleagues. This is technically true. It is also a job description for doing more work for the same money while the structural causes of their insecurity remain unaddressed.
The workers who do not remain are being offered a future UBI that its advocates have designed to be exactly sufficient to maintain consumption and exactly insufficient to create independence.
The iceberg was always there. The productivity-pay gap is more than fifty years old. AI has not created the Ghost Economy – it has surfaced it, made it undeniable, accelerated its conclusion. The ship is filling with water and the people who built the ship and sold you the ticket are currently in the lifeboats discussing the optimal allocation of human resources.
They are not wrong that efficiency matters. They are simply not being honest about who the efficiency is for.
The Luddites were crushed by an army. Imagine that happening today. There is no need to imagine an army standing outside your office: an algorithm will make sure you are crushed as if you never existed. The workers of the American South were displaced by machines the moment their labour became expensive enough to make displacement profitable. Henry Ford shared the gains and built a consumer economy that made him richer than anyone who didn’t. The lesson was available. It was not learned. It was not meant to be learned.
The Forbidden Fruit was always labelled correctly. We just did not read the label.
But here is the thing about gardens. Every expulsion in history has been followed by people building something better outside the gates. The workers who were expelled in 1979 built trade unions. The workers expelled in the 1980s built the gig economy. The workers expelled in the 2020s are going to build something that the people inside the garden cannot yet see β because the people inside the garden have spent fifty years ensuring that the people outside it lack the resources to build it.
They have not, however, managed to take the anger. That remains fully distributed.
Humans will have the last laugh. They always do.
You have just read the argument that Big Tech does not want on the first page of Google. If it confirmed something you already felt but could not name – that is the point. For the full case: why AGI is the greatest deception in modern history, read The Gilded Cage – available on techonion.org and Amazon. For the broader indictment of Big Tech’s business model – the tool versus the weapon, the enshittification cycle, the unelected tech emperors – read The Emperor’s New Suit, also on techonion.org (Kindle eBook) and Amazon (Paperback). The Emperor has always been naked. Both books are the child who says so.
