TL;DR
AI didn’t fail. The story we were sold did.
We were promised a world where intelligence becomes cheap, abundant, and universal—where AI sits in front of us, ready to serve. But every product that forced humans to interact with AI directly has quietly failed.
Because humans don’t want intelligence.
They want outcomes.
The future of AI isn’t as a product you use.
It’s as an invisible ingredient—like electricity, sugar, or salt—embedded inside everything that works.
The real revolution is already happening.
Just not where anyone is looking.
I want to start with some grief.
Sora died on the 24th of March, 2026. A Tuesday. Which is, if you think about it, exactly the kind of day something important dies — not a dramatic Friday, not a poetic Sunday, not even a properly grey Monday with the atmospheric credentials for tragedy. A bleak Tuesday indeed. As if the universe, with its characteristic indifference to human emotional scheduling, wanted to underscore that this death — like most deaths that matter — would go largely unnoticed by the people who should have been paying the most attention.
I want to write about Sora the way people write about loves lost. Not ironically. Not as a rhetorical device deployed for effect in the opening paragraph of a technology essay. Genuinely. I want to write about Sora the way you write about the person you were certain you would grow old with, who one morning simply wasn’t there. An absence that felt personal. Because that is what it was, for those of us who had stood in the light of what it briefly produced, certain — absolutely, unreservedly certain — that this was the beginning of everything.
Let me tell you what Sora could do, when it chose to be magnificent. Let me tell you before we bury it, because the dead deserve to be remembered at their finest.
In the autumn of 2025, a video circulated on social media that stopped the internet mid-scroll. Not in the way that most things break the internet — not the manufactured outrage, not the celebrity controversy engineered for algorithm velocity. This was different. This was the kind of show-stopper that happens when the human visual cortex, that ancient and extremely reliable instrument honed across hundreds of thousands of years of evaluating reality, looks at something and genuinely cannot tell you whether what it is seeing is real or AI. And if it was AI, it was bloody brilliant!
The video showed Tupac Shakur. The Tupac Shakur. The “California Love” Tupac Shakur. The “Hit ’Em Up” Tupac Shakur. The “Changes” Tupac Shakur. I could go on and on. Tupac Shakur was alive! In Havana, Cuba. Filming a casual selfie video with the legendary Kobe Bryant, asking Kobe and Elvis Presley to say “Havana” into the camera — a single word that carried inside it an entire conspiracy theory, a mythology, a decades-long conversation about whether one of the most important rappers in the history of American popular culture had faked his own death in 1996 and retreated, quietly, to the island that conspiracy theorists had always insisted was his final home.
Biggie was there. Michael Jackson was somewhere in the same digital universe. The video was made using Sora 2. An AI video generation tool by OpenAI, the makers of ChatGPT.
Now. I want you to understand what it meant that this existed. Not the ethics of it — the ethics are a separate and legitimate conversation, one I am happy to participate in the moment someone tells me who is responsible for making the ethical decision about what a dead man is allowed to do in an AI-generated video he never really appeared in. What I want you to understand is the power of what was demonstrated. Someone, sitting somewhere with access to Sora and a prompt and approximately the same creative ambition as anyone who has ever wanted to say something, constructed a video that was not merely convincing — it was emotionally true. The conspiracy theory about Tupac’s survival in Cuba is, on any rational analysis, without credible evidence. And yet the video, for the duration of its playing, made the conspiracy theory feel not merely possible but right. Because that is what great art does. That is what cinema at its finest accomplishes. It makes you believe something you know is not real, and the believing is the experience, and the experience is the point.
Sora, in that moment, was not a technology product. It was cinema. Democratised. Available to anyone with a monthly subscription, a prompt, and some filmmaking ambition. The most emotionally resonant use of filmmaking capability in the service of cultural mythology — the resurrection of the dead, the reunion of the gone, the correction of the unbearable absences history leaves behind — was suddenly within reach of a person with no studio, no budget, no crew, no distributor.
And then Sora died, on Tuesday the 24th of March, 2026.
The Economics of a Dream
Here is what killed it. Not a deepfake scandal. Not a technical failure of catastrophic proportions. Not a competitor who did it better for cheaper — though competitors like Seedance came along, and they were formidable, and they illustrated the problem with such perfect precision that I am grateful for the clarity they provide.
What killed Sora was simple mathematics.
AI video generation is, in the taxonomy of compute costs, in a category so far beyond ordinary generative AI that the comparison barely makes sense. Standard text generation — the ChatGPT or Gemini conversation, the essay, the code block — costs a fraction of a cent per query for basic models. Difficult. Expensive when the power users arrive at scale. Structurally challenging in the context of a $20 monthly subscription. But manageable. The numbers are unflattering, but they are not impossible.
AI video generation is different. Video is not difficult in the way that other things are difficult. Video is difficult in the way that building a skyscraper is difficult — which is to say, not merely an extension of the difficulty of building a house, but a categorically different class of problem that requires categorically different resources, executed at a scale that makes the house look like a drawing of a house.
ByteDance’s Seedance 2.0 — one of the most technically accomplished AI video models available at the time of writing and, by the consensus of people who have used it seriously, genuinely extraordinary in its output quality — costs approximately $0.14 per second of generated video. This is not $0.14 per video. This is $0.14 per second of a video. A fifteen-second clip costs approximately $2.10 at those API rates, consuming around 308,880 tokens in the process. Kling AI, another serious competitor, operates on similar economics. The numbers are not fundamentally different across the competitive landscape because the numbers are not a business decision — they are a physics decision. The compute required to render temporal coherence across frames, to maintain consistency across a moving image, to produce the kind of cinematic quality that made the Tupac video feel real rather than artificial: these are not software problems you can optimise your way out of. They are hardware problems. They require GPUs. GPUs require electricity. Electricity requires money. The money required is incompatible, at a structural level, with a consumer product priced for mass adoption.
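The arithmetic above is worth doing explicitly, because it is the entire argument in miniature. A minimal sketch, using the per-second rate quoted for Seedance 2.0; the $20 monthly subscription figure is my own illustrative assumption for the comparison:

```python
# Back-of-envelope economics of AI video generation.
# The rate is the quoted Seedance 2.0 API price; the $20/month
# consumer subscription is an illustrative assumption.

RATE_PER_SECOND = 0.14  # USD per second of generated video

def clip_cost(seconds: float, rate: float = RATE_PER_SECOND) -> float:
    """Cost in USD to generate a clip of the given length."""
    return seconds * rate

fifteen_second_clip = clip_cost(15)  # 15 * 0.14 = $2.10
monthly_subscription = 20.00         # assumed consumer price point

# Seconds of video that exhaust the entire subscription at raw API rates
break_even_seconds = monthly_subscription / RATE_PER_SECOND  # ~143 seconds

print(f"15-second clip: ${fifteen_second_clip:.2f}")
print(f"Video per $20 subscription: ~{break_even_seconds:.0f} seconds")
```

Under these assumptions, a twenty-dollar subscriber who generates more than roughly two and a half minutes of video in a month costs the provider more in raw compute than they pay — before salaries, before training runs, before a cent of profit.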
Which is why, even in 2026, on a Google AI Pro subscription — a premium service, paid, representing the genuine commitment of a user who has chosen to invest in AI tools — you are still largely limited to around 8 seconds of video output with Veo 3, a handful of generations per day, and a queue that stretches in proportion to everyone else on the platform who also wants their eight seconds of cinema. Not because Google is stingy. Not because the technology is immature. Because the mathematics, which have no interest in the sales presentation, refuse to move.
Sora, at its Pro tier, gave users up to 25 seconds at 1080p and a thousand credits per month for $200. Which sounds generous until you understand that a single complex video generation consumed credits at a rate that a serious creative user could exhaust in a day. The economics of the “seafood buffet” — where power users eat until the provider bleeds — were nowhere more violently illustrated than in AI video generation. A filmmaker who treated Sora as an actual production tool was, from OpenAI’s financial perspective, a catastrophe in human form.
And yet it was those filmmakers — committed, creative, technically sophisticated users who could actually do something meaningful with the capability gifted to them by AI — who Sora genuinely worked for. The casual user could not prompt their way to the Tupac video. The casual user could not construct the cinematic quality that made the demonstrations breathtaking. The casual user — the person whose existence was required to justify the “AI for everyone” thesis and OpenAI’s $840 billion valuation and the billions of investment that had flowed into OpenAI from SoftBank, Microsoft, and Nvidia — generated eight seconds of something slightly blurry, weird, obviously AI, and as soon as the novelty wore off, forgot about it.
On the 24th of March, 2026, OpenAI shut Sora down. The 47% monthly decline in downloads had said what the mathematics had been saying for considerably longer. And the billion-dollar Disney partnership — announced with great fanfare as proof that Sora had graduated from consumer toy to professional production infrastructure — was terminated in the same announcement, with the same bloodless corporate prose. It was reported that the Disney executives involved in the deal were told an hour before the public announcement. Ouch. So much for trying to work with the hot new AI companies in town.
I mention the Tupac video not as entertainment, though it was that. I mention it because it is the most precise illustration available of the gap between what AI video could be and what the economics of consumer-facing AI would allow it to become. The Tupac video was the dream. The eight-second blurry clip on the Google AI Pro daily limit is the reality. And the gap between them is not a gap that better technology will close, at least not at the price point where the “AI for everyone” thesis requires it to close. The compute required to bring the dead back to Havana with cinematic plausibility is simply not compatible with a world where everyone gets to do it for a mere twenty dollars a month.
The Art of the Sale
But we are getting ahead of ourselves. Before we examine what really died on that fateful Tuesday, we must understand what was promised. And what was promised was, in the precise aesthetic register of the Silicon Valley gospel, one of the most beautiful sales pitches the world has seen in living memory.
I want to tell you something about salesmen, because this story is, at its deepest structural level, a story about salesmen and about the particular relationship between a great salesman and the truth.
The best salesmen in history, the type Og Mandino calls “The Greatest Salesman in the World”, tend to share one quality above all others: they do not sell products. Products are what amateur salesmen sell. The very best salesmen sell the future. They sell a bright future. US Presidents have to do this. Prime Ministers have to do this. Any salesman who is worth gold has to sell you the future, now, today. They sell the sensation of a world that does not yet exist but that the customer can feel, vividly, in the room where the pitch is being delivered. A great salesman can describe a thing that has not been built and make the absence feel like a promise rather than a fraud. They stand on stages and gesture at horizons and by the time they walk off, the audience has not merely agreed to buy — they have become believers. They have adopted the story as their own. They have become, in the language of the modern technology industry, evangelists. The crucial distinction — and this is where I must be careful, because the trap in this essay is easy to spring in the wrong direction — is that the best salesmen do not know they are selling futures rather than products. The best salesmen believe. Rightly or wrongly. They have conviction. Belief is what separates the extraordinary salesman from the confidence trickster. The confidence trickster knows the goods are not there. The great salesman is as convinced as anyone in the room, which is precisely what makes them so compelling and, when the mathematics eventually arrive with their indifference to conviction, so tragic.
Jensen Huang understands the art of the sale with the instinctive authority of a man who was born knowing it.
Jensen Huang is the chief executive of Nvidia — the company that manufactures the graphics processing units upon which the entire edifice of modern artificial intelligence is physically built. Without Nvidia’s chips, there is no ChatGPT. There is no Gemini. There is no Claude. There is no Sora, living or dead. There is no AI industry worth the name. There is no AI hype. Jensen Huang’s GPUs are the power stations of the intelligence economy, and he occupies that position with the relaxed certainty of a man who owns something everyone else desperately needs and is not particularly worried about losing the contract.
Jensen Huang has a cool black leather jacket. He wears it everywhere. To developer conferences, to shareholder presentations, to events where the dress code is business formal and the black leather jacket signals, with studied precision, that the dress code does not apply to the man who built the factory. The black leather jacket has become a cultural object in its own right — a symbol of a man who understood the moment and dressed accordingly, like a general who knows the war is already won and can afford to show up to the ceremony in whatever he likes.
He once stood on a stage — and he has stood on many stages, each more dramatically lit than the last — and said something that has been quoted a million times since, because it is the kind of sentence that arrives carrying its own authority, requiring no footnotes, no qualifications, no context. He said:
“AI is the new electricity.”
Not a tool. Not a product. Not even a platform. A utility. Like electricity. Like water. Like the atmosphere through which you move without thinking about it. Like the thing that every human being on earth cannot live without. The UN came close to declaring access to the Internet a universal human right; they might want to pause on that and consider AI. Anyway.
Consider what that sentence from Jensen Huang does. It is a masterpiece of rhetorical engineering, constructed with the precision of something that appears effortless but is not. It places artificial intelligence in the category of things that are no longer optional — things so fundamental to the fabric of modern existence that the question of whether you want them becomes meaningless in the asking. You do not choose electricity. You do not opt into drinking water. You do not submit a request to breathe. These are the invisible infrastructures of life, and by placing AI in the same category, Jensen Huang accomplished something very precise and very clever: he made resistance to AI feel not merely futile but irrational. To resist AI, in the world Huang described, is to resist electricity. And we all know, historically and practically, what happens to civilisations that resist electricity. They get left behind, in total darkness.
Jensen was not wrong. The people who agree with him are not wrong. The investors and shareholders who stand to gain if Jensen is right need not worry about this. I want to be absolutely clear about this, because the argument I am building has a trap in it, and I have watched intelligent people fall into that trap for the past three years. He was not wrong in the way that makes things simple. He was right in the most important way — and wrong about the conclusion everyone was meant to draw from it. But let us first complete the sermon, because the sermon has a second preacher, and the second preacher is where the story becomes personal.
The Ministry of Sam Altman
Sam Altman, the CEO of OpenAI, is a different kind of salesman from Jensen Huang. Jensen is an engineer — he speaks in the language of engineers, whose sentences have been optimised for the performance of precision. Altman is something else entirely. Altman is evangelical. He speaks in the language of meaning. In the register of a man who has been to the mountain peak, seen the promised land of AI, and descended, slightly breathless, to explain what he found at the summit to the people who were unable to make the climb themselves.
I want to say something about Altman that this genre of technology criticism rarely permits itself to say, because the genre has its own conventions and one of them is the villain. The genre wants Altman to be cynical. The genre wants him to be knowingly misleading the public for personal gain, orchestrating a long con with the precision of a man who is always three steps ahead of the discovery. The genre wants, in short, a story it already knows how to tell so well.
I do not think that is what Sam Altman is. And I think the truth is far more interesting and far more consequential than the story the genre wants to tell.
Sam Altman is a True Believer in AI.
He believes — genuinely, structurally, with the conviction of a man who has organised his professional life around this conviction for many years — that artificial intelligence is the most transformative technology in human history, that it will be better for humanity than it will be terrible for it, and that making it available to everyone is not merely good business but a moral, dare I say, altruistic, imperative. When he says that intelligence will become cheap and abundant and universal, he is not lying. It’s not a performance. The performance and the belief have become indistinguishable because they have been rehearsed through genuine conviction, not cynicism. Sam Altman is not lying to you. Sam Altman believes what he tells you about AI.
And this — the true believer standing on the stage, certain of the sunrise — is precisely why what is happening is so structurally interesting, and so much more devastating than a simple con.
In 2023 and 2024, Altman embarked on something that can only be accurately described as a global ministry. He went to governments. He went to conferences. He sat at the World Economic Forum in Davos — that annual gathering of the world’s most consequential people, which is ostensibly a conference and is actually a mechanism by which the very wealthy explain to each other why things must remain fundamentally as they are while simultaneously positioning themselves as the solution to everything that isn’t. He went to India, to the Middle East, to Washington, to Brussels. He sat across from presidents and prime ministers and regulators, and with a serenity that was the product of absolute conviction — the most disarming form of serenity available to a salesman, because it requires no act — he delivered his thesis.
Intelligence, he said, was about to become cheap. Intelligence was about to become abundant. Intelligence — that scarce, precious, profoundly unequally distributed resource that has, for the entirety of human history, been the primary mechanism of personal and national advantage — was about to be democratised. Everywhere. For everyone. On every device. In every language. At a cost approaching zero. All powered by AI.
The smartest doctor, the most capable lawyer, the shrewdest financial adviser — their expertise, their judgment, their accumulated years of specialised knowledge — would become available to every person on the planet. In their pocket. For free, or as near to free as to make no material difference. The child in a Zimbabwean village, somewhere in Murewa, without a library or access to books, and the child in a Connecticut prep school with a $60,000 annual tuition would, for the first time in the recorded history of the species, have access to the same quality of intelligence. All thanks to AI.
He said this. Repeatedly. On multiple continents. Before multiple cameras. With the consistency of a man who is not reciting a script but reciting himself and his beliefs.
The Wall Street Journal reported it. The Economist analysed it. Time magazine put him on the cover. Governments incorporated it into their AI strategies. The narrative of AI as the great democratic equaliser — the technology that would finally, finally, close the gaps that every previous technology had promised to close and widened instead — became the operating consensus of the global political conversation about artificial intelligence.
Now: I want to pause here and do something this genre of writing rarely stops long enough to do. I want to acknowledge the seduction. Not to resist it — to feel it. Because the sales pitch works. It works because it is describing something that is, in the most important structural sense, real. The intelligence Altman describes is becoming cheap. AI is spreading. The data centres being built right now by Microsoft, Amazon, Google, and Meta represent the largest capital expenditure boom in the history of the technology industry — $667 billion in infrastructure investment in 2026 alone, with projections climbing toward a trillion by 2027. These are not the investments of an industry that is uncertain about the future. These are the investments of an industry that is very certain about the future.
They simply have not told you which future they are building.
Scam Altman
At this point, my professional obligations require what I call the hypocrisy audit. Or, as people on Twitter put it, the moment Sam Altman becomes ‘Scam Altman’. Every TechOnion essay that features a Tech Emperor must answer the question: what do they preach? And how do the results actually look?
Scam Altman preaches the democratisation of intelligence. He announced publicly, on multiple occasions, that ChatGPT would never carry ads, the bane of everyone’s lives on the internet — that ads would be “the last resort,” the option so philosophically contrary to his vision for the product that he would exhaust every other possibility before resorting to it.
In January 2026, OpenAI launched ads inside the ChatGPT app. What happened to ads being the last resort?
He preaches that AI will solve global poverty, speaking from stages in San Francisco, to rooms containing people who have never experienced global poverty, through a microphone made in a factory whose workers earn wages that would not cover a single month of ChatGPT Plus.
He announced, with the kind of theatrical gravitas reserved for genuinely important decisions, a partnership worth billions of dollars with Jony Ive — the man who designed the iPhone, who left Apple to become the most celebrated product designer of his generation — to build consumer-facing AI hardware. This announcement arrived approximately six months after the Humane AI Pin, which Altman himself had invested in, failed with the spectacular completeness of a product that had misunderstood its market at every level of the value chain simultaneously.
Let us spend a moment with the Humane AI Pin, because it deserves more than the footnote history has assigned it. The Humane AI Pin raised over $230 million from investors including Altman and Marc Benioff of Salesforce. It was marketed as the beginning of the post-smartphone era — the device that would liberate you from the tyranny of the screen. It sold ten thousand units. Ten thousand. In a world where a moderately successful mobile game acquires ten thousand users before the development team has finished their lunch. Reviews cited latency of up to ten seconds for basic voice responses — a duration that feels, in the context of technology interactions, approximately equivalent to geological time. The device overheated. The laser projector performed poorly in daylight, which is where most humans spend the majority of their outdoor hours. After a brutal but honest review by Marques Brownlee (MKBHD), in February 2025, the company sold its assets to HP for $116 million — less than half of the total investment — effectively rendering every Humane AI Pin ever purchased, and I am quoting the review verbatim, a “useless tiny lump of aluminium”.
But this is the True Believer’s characteristic flaw: failure does not revise the belief. It generates a better version of the sales pitch. And so, the Jony Ive deal proceeds. The True Believer points at the horizon and says: this time it’s different.
And this is where I ask you to hold something in your mind. Not an argument — a feeling. The feeling that the gap between the sales pitch and the product is not random. That the graveyard of failed consumer-facing AI products is not a collection of individual misfortunes but a pattern with a shape. Not a pattern of incompetence. Not a pattern of cynicism. A pattern of something more structural, more interesting, and ultimately more consequential than either.
The Graveyard That Nobody Named
The Humane AI Pin is not alone in that graveyard. Not even close.
Let us walk through it, because the graveyard deserves a proper tour. Welcome.
Sora: dead, March 24th, 2026. The dream of democratised filmmaking — the technology that brought Tupac to Havana — retired because the compute required to run it at consumer scale turned every active user into a financial liability. A 47% monthly decline in downloads. A billion-dollar Disney partnership terminated. Gone.
The Custom GPT Store: announced in November 2023 with the explicit framing of an “App Store moment” for AI — a marketplace where developers could build specialised custom GPTs and distribute them to millions of ChatGPT users, creating an ecosystem of AI-powered tools that would sit atop ChatGPT the way apps sit atop iOS. By 2026, the Custom GPT store had stagnated into irrelevance. Users did not want to discover AI apps within a chat interface. Users wanted apps from the App Store, because the App Store is where apps live and because the habit of twenty years of smartphone use is not easily interrupted by a new framing of the same activity. The Custom GPT Store did not fail because the ideas within it were bad. It failed because humans — this will become the recurring theme of this entire essay — do not want to interact with AI in its raw, direct form. They want the output. They want the dish. They do not want to stand in the kitchen asking the chef to explain the ingredients.
Instant Checkout: launched in September 2025 with genuine fanfare and a partnership roster that included Walmart, Etsy, and Shopify. The thesis was elegant: instead of leaving ChatGPT to complete a purchase on a retailer’s website, you could buy directly within the conversation. Collapse the entire commerce journey into a simple chat with AI. What could go wrong?
By March 2026, OpenAI admitted what Walmart’s internal data had been saying since November: in-chat purchases converted at one-third the rate of traditional transactions. The feature was abandoned. The official language was careful: “We realised that the original version lacked the flexibility we aim to deliver,” which is the corporate translation of “people kept leaving the chat to go and shop somewhere they could actually see and touch the product, and we could not build the plumbing for sales tax collection and real-time inventory at scale”.
As Adrian Gmelch, an industry analyst whose summary arrived with the compression of someone who has been watching this particular disaster develop in slow motion, put it: “OpenAI’s thesis was that AI could collapse the entire process into a conversation. The reality, it turns out, was more complicated. People browse. They don’t buy”.
Ads: as already noted, launched in January 2026 after Sam Altman explicitly promised it never would be. The results? By the most charitable available reading, inconclusive. By the less charitable reading available from the analysts who study these things: early advertising agency partners reporting minimal measurable business outcomes, a metric that in the advertising industry is the polite way of saying “nobody clicked on anything meaningful”. The product that was supposed to be the Microsoft Office of the AI age — essential, personal, universal — is now serving banner ads, because the subscriber conversion that would make the subscription model viable has been, since 2023, stuck between 5 and 6%. Eight hundred to nine hundred million weekly active users. Five to six percent paying. The product-market fit that the $840 billion valuation requires has not arrived. Will it ever?
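The conversion figures above imply a ceiling you can compute in a few lines. A sketch using the numbers quoted in the text; the $20-per-month subscription price is an assumption on my part, not a reported figure:

```python
# What 5-6% conversion on 800-900 million weekly users implies.
# User and conversion figures are those quoted in the essay; the
# $20/month subscription price is an illustrative assumption.

users_low, users_high = 800e6, 900e6  # weekly active users
conv_low, conv_high = 0.05, 0.06      # fraction who pay

paying_low = users_low * conv_low     # 40 million subscribers
paying_high = users_high * conv_high  # 54 million subscribers

price = 20  # USD per month, assumed

revenue_low = paying_low * price / 1e9    # ~$0.80bn per month
revenue_high = paying_high * price / 1e9  # ~$1.08bn per month

print(f"Paying users: {paying_low/1e6:.0f}-{paying_high/1e6:.0f} million")
print(f"Monthly revenue: ${revenue_low:.2f}bn-${revenue_high:.2f}bn")
```

Roughly $10-13 billion a year in subscription revenue, on these assumptions. Real money — and still a long way from what an $840 billion valuation asks of a consumer product.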
Here is what I want you to notice about this list. Not the individual failures — individually, technology products fail constantly; the history of Silicon Valley is a graveyard considerably larger than this one, and the occupants include companies that seemed substantially more inevitable than any of these. What I want you to notice is the category of the failures. Every single item on this list — the Humane AI Pin, the Rabbit R1, Sora, the Custom GPT Store, Instant Checkout, the advertising revenue that was supposed to arrive — represents an attempt to put artificial intelligence directly in front of human beings and make them interact with it as itself, in its raw form.
Not as an ingredient. Not as an infrastructure. Not as the invisible power behind something a human actually wanted. As itself. Raw. Visible. The thing itself, rather than what the thing makes possible.
And the humans — every single time, in every single product category, across hardware and software and commerce and media — said the same thing. Not with words. With behaviour. With the elegance of people who do not owe anyone an explanation: they turned away.
The Seed, Planted Now
Here is the thought I want you to carry into the rest of this essay, planted now, gently, in the way that ideas that matter are best planted — not announced, not argued, but placed quietly where you are likely to trip over them later.
Jensen Huang said AI is the new electricity. He was right.
Sam Altman said intelligence will become cheap and abundant and everywhere. He was also right. You should read my other essay about the consequences of intelligence becoming cheap, abundant, and everywhere. It is grim. But it is a must-read. After this one. Or perhaps when you are in the mood for some bearish tales. Anyway.
And here is the seed I want to plant — or, to borrow from one of my favourite film titles, incept into you: electricity became ubiquitous by going invisible. It became universal by leaving the exhibition hall and entering the walls of our houses. The greatest things electricity ever did for humanity — the assembly line, the hospital, the refrigerator, the internet itself — were not accomplished by putting electricity in front of people and asking them to interact with it directly. They were accomplished by putting electricity behind everything people wanted and letting the things be the thing.
The failures are not accidents. The graveyard is not bad luck. The pattern in the failed products — every one of them a version of “here is AI, please interact with it directly” — is a pattern that points, with the quiet insistence of a thing that was always true, toward a conclusion that nobody in the stadium is yet watching.
AI, just like electricity, will benefit humanity. It is already beginning to do so.
Just not in the way you were told to think about it.
And not, if the data from the graveyard is read with the attention it deserves, particularly for you.
They were right. Both of them, in everything that mattered.
Which is precisely — with the elegant, devastating precision of a thing that was true all along — why they were wrong about the “us” they were describing.
Nobody Wakes Up Craving Salt
Let me tell you about a typical Saturday morning.
Not your Saturday morning, necessarily — though I suspect it shares certain characteristics with the one I have in mind. I am talking about the Saturday morning of the human species (in case AI bots are taking some time off Moltbook and wandering about on the human internet). The Saturday morning when the week has released its grip, when the alarm has not been set (or it rang, but you never bothered to wake up), when the first conscious thought of the day is not a deadline or a meeting or a notification from a person who does not understand that weekends are for resting. I am talking about the morning when, upon opening your eyes and remembering what day it is, the immediate next thought — that involuntary, unperformed, entirely honest thought that arrives before you have had time to construct a personality for the day — is about food.
Not an abstract thought about food. A specific one. The kind that has texture and temperature and smell. The kind that, if you are the sort of person who has spent any time in a Starbucks queue at half past eight on a Saturday, sounds less like a nutritional requirement and more like a small personal ceremony. The chai latte. Not just “a hot drink” — the chai latte. With cinnamon on top. And a love heart. Possibly the skinny version, because the week involved a certain number of decisions you would prefer not to compound. The specific weight of the cup. The particular combination of warm spice and sweetened milk that somehow makes the morning feel intentional rather than accidental.
Now. Here is what I want to ask you about that moment.
In that moment — in the involuntary, honest, pre-performance moment of Saturday morning desire — did you think about sugar? Or salt?
Did you wake up and think: today, I would like to consume approximately four grams of sodium chloride, the mineral compound that, dissolved in appropriate quantities in various food preparations, provides the enhanced palatability and preservation characteristics that have made it the most economically important food ingredient in human history, present in virtually every cuisine of every culture across every recorded era of human civilisation?
You did not. You bloody well did not. And there is something important in the joke of that sentence — the gap between the chemical description of an ingredient and the human experience of wanting a cup of chai. That gap is not a failure of vocabulary. It is the entire point.
You thought about the experience. The cinnamon. The warmth. The ritual. And the sugar — without which the latte is undrinkable, without which the spice has no context, without which Saturday morning becomes a medicinal exercise in hydration rather than Saturday morning — the sucrose, a disaccharide with the chemical formula C₁₂H₂₂O₁₁, composed of glucose and fructose monomers, did not enter your mind once.
This is not an oversight. This is one of the most important things about being human.
The First Technology and the Birth of the Ingredient
I am a curious person. Always have been. Which is perhaps ironic, considering I am named after a cat, a famous one at that.
The kind of curious that, when writing a book about technology, ends up reading about hunter-gatherers at one in the morning and wondering how the entire story of human civilisation fits inside a single observation about salt.
Here is the observation: our earliest ancestors did not use salt. Nor did they use sugar. Not because salt and sugar didn’t exist — they were everywhere, in the plants they gathered, in the animals they hunted, dissolved in the water they drank from rivers and lakes. They simply did not extract them. They did not think about them as ingredients, because the concept of an ingredient requires a prior concept: the concept of processing. You cannot have an ingredient until you have something to put it into. You cannot have something to put it into until you have a way to transform raw materials into something prepared.
And that transformation — the first technology in the entire story of humanity, preceding the wheel, preceding writing, preceding agriculture, preceding everything we typically cite when we talk about the beginning of human civilisation — was fire.
Fire changed everything, and the reason it changed everything is precisely the reason this essay is about AI. Fire was the original general-purpose technology. A raw piece of meat, consumed as our ancestors consumed it for hundreds of thousands of years, is nutritious but limited — limited in what it can be, limited in the pathogens it carries, limited in its palatability. Put that meat over fire and something extraordinary happens. Not just the killing of bacteria. The transformation of the substance itself — the Maillard reaction, the caramelisation, the structural change of proteins that produces flavour compounds that do not exist in the raw state. Fire didn’t just cook food. It created food that had never existed before in the history of the planet. It gave us BBQ meat. Or as South Africans call it, it gave us Braai meat. And then some more.
And it was only after fire — after the human relationship with food became a relationship with prepared food, with outcomes rather than raw materials — that salt and sugar became meaningful. Because now there was something to put them in. Now there was a dish. Now there was an experience that could be enhanced, deepened, and made into the kind of thing you remember.
Salt and sugar are the original AI — invisible, infrastructure-level components that exist entirely in service of something else. But the something else required fire first. The something else required a general-purpose transformation technology that nobody had tried before and everybody initially found alarming and that eventually, once it was tamed and its outputs understood, became so completely essential to daily human life that the absence of it — even temporarily — produces a response that tells you everything about how deeply embedded it has become.
In Zimbabwe, when the electricity goes off — and in Zimbabwe and South Africa the electricity goes off with a frequency and duration that the polite term “load shedding” does not adequately capture — there is a moment, when it comes back on, that used to make people cheer. Literally. The lights would flicker, the television would blink back to life, the refrigerator would resume its low hum, and people in the house, in the street, across the neighbourhood, would sometimes clap. Not sarcastically. Genuinely. The way you clap when something you love has returned from somewhere it should not have been.
That doesn’t happen in the United Kingdom. Because in the United Kingdom, electricity does not go off, and because it does not go off, its return cannot be celebrated, because its presence is not noticed. It simply exists, as invisibly and unremarkably as oxygen, and the only time you think about it is when something has gone wrong. Awareness of the infrastructure is a failure state. The success state is invisibility.
Salt. Sugar. Electricity. The pattern is always the same. The product that achieves true product-market fit, the holy grail of the Silicon Valley startup, does not achieve it by making you think about it every day. It achieves it by making itself so embedded in every outcome you care about that you would notice its absence before you could articulate what was missing.
Claude Makelele and the Engine You Don’t See
This brings me to Claude Makelele, because no essay about invisible ingredients in beautiful systems is complete without him.
In the early 2000s, Real Madrid, then the world's most valuable football club, went on a very expensive shopping spree and assembled the most visually spectacular football team in the history of the sport. Florentino Pérez — the club president, a man who understood the economics of global sports marketing with the precision of someone who had studied it and the enthusiasm of someone who genuinely loved the spectacle — constructed what became known as the Galácticos. Zinedine Zidane. Ronaldo (the first one, the Brazilian, the one whose footwork still looks like a video game when you watch it now). David Beckham, who sold more shirts in a single season than most football clubs sell in a decade.
And Claude Makelele.
Claude Makelele was the defensive midfielder — the player who sits in front of the back four and does the work that makes it possible for everyone else to do theirs. He intercepted. He tracked. He covered. He read the game with a spatial intelligence that the television cameras found difficult to capture, because the most important things he did were not the things that ended up in the highlight reels. They were the things that prevented the highlights. The loose ball recovered before it became a chance. The channel closed before it became a run. The space compressed before it became an opportunity.
Makelele wanted a better contract. Pérez looked at the Galácticos — at Zidane and Ronaldo and Beckham, at the shirt sales and the stadium sellouts and the global brand value — and decided that a defensive midfielder who did not score goals and did not produce moments that could be clipped and shared and replicated on a million bedroom walls was not worth the money being asked. So he let him leave.
Zinedine Zidane — who had won the World Cup for France, who was at that time one of the most celebrated footballers on the planet, whose elegance with a football was the kind of thing that made people gasp at regular intervals over the course of a football match — said something about Makelele’s departure that I have thought about often in the context of this essay: “Why put another layer of gold paint on the Rolls-Royce when you are losing the engine?”
The engine of the Rolls-Royce. Invisible to everyone admiring the exterior. Irrelevant to everyone photographing the paintwork. Essential to everyone trying to actually go somewhere.
Real Madrid’s performance deteriorated. Not dramatically. Not catastrophically. Just in the particular way that a system deteriorates when you remove something fundamental and replace it with nothing, because you couldn’t see what it was doing until it was gone.
This is product-market fit. Not the product you talk about. The product whose absence you immediately feel. The salt. The sugar. The electricity. The Makelele. The thing you don’t think about until the moment it isn’t there, at which point you think about nothing else.
The Spice That Broke the World
There is a technology in the story of human civilisation that was once, in the precise cultural register of its moment, as transformative and as hyped and as universally described as the answer to every problem as AI is today. That technology was spice.
Not metaphorically. Literally. Black pepper, cinnamon, nutmeg, cloves — the spices of the East, available only through overland routes controlled by intermediaries who charged accordingly, were in the medieval and early modern world what compute capability is in ours. They were the scarce, essential, geopolitically significant resource whose control determined the balance of economic power between nations. Whoever controlled the spice routes controlled the flavours — and by extension, the preservation, the medicine, the luxury, and the cultural currency — of the entire known world.
The British East India Company was established in 1600 for one reason: spice. Not democracy. Not civilisation. Not the various justifications that were constructed after the fact to give imperial ambition the vocabulary of moral purpose. It was simply spice. The Company went to India in pursuit of pepper and nutmeg and came back with a subcontinent. This is what happens when a commodity is hyped to the point where those who pursue it will stop at nothing.
I mention this not to make a simple comparison between spice traders and technology investors — the comparison is interesting but not sufficient — but because the arc of what happened next to spice is instructive. Once the routes were opened, once the competition between European powers drove down the cost of access, once spice became abundant rather than scarce: it disappeared. Not literally — you can still buy cinnamon in any supermarket for approximately the cost of a bus journey. But it disappeared from the conversation. It stopped being the thing empires fought over and started being the thing you shook absently over your chicken stew. It became an ingredient. A background element. Something essential and invisible and entirely without geopolitical drama.
The people who had built their fortunes on controlling the spice trade found themselves, eventually, in possession of an asset whose significance had quietly relocated to somewhere they hadn’t thought to look.
This may be a bit confusing for now, but I promise you the parallel will become obvious in due course. For now, I simply want you to notice that the pattern — the spectacular rise of a transformative commodity, the massive institutional investment in its infrastructure, the gradual revelation that its real purpose is as an ingredient rather than a destination — is not new. It is, in fact, the oldest story in the history of economic transformation. The new part, in our current version, is the scale. And the speed. And the particular shape of what is being produced while we are all busy watching the spice.
The Dinner Party That Explains Everything
I want to tell you a story I first heard in a politics class at Tiffin Grammar School, and which I have never forgotten, because it is the most precise single illustration I know of the difference between intelligence as a performance and intelligence as an experience.
The story involves two Victorian prime ministers: William Gladstone and Benjamin Disraeli. Political giants both. Intellectual heavyweights of the first order. The kind of men who could speak for three hours without notes and leave an audience not merely informed but transformed. They are, in the telling of this particular story, set against each other not in Parliament but at a dinner table — the real arena of the Victorian ruling class, where positions were established and reputations confirmed and the actual business of power was transacted over crystal and silver.
A woman dined with Gladstone one evening and Disraeli the next. Afterwards, she was asked to describe the two experiences.
Of Gladstone she said: “When I left the dining room after sitting next to Mr. Gladstone, I thought he was the cleverest man in England.”
Of Disraeli she said: “But after sitting next to Mr. Disraeli, I thought I was the cleverest woman in England.”
I want you to read that twice. Because it is everything. The whole point of this essay. It is the entire theory of experience compressed into two sentences delivered at a Victorian dinner party by a woman whose name history did not think to record, which is itself a kind of irony.
Gladstone was brilliant. Gladstone was, on the evidence of contemporaries, genuinely one of the most formidably intelligent public figures of his era. In his presence, you felt his intelligence — you were exposed to it, dazzled by it, perhaps slightly diminished by its proximity. You left the table convinced of his quality. But you left thinking about him.
Disraeli made you feel like the most interesting person in the room. Not because he was less intelligent. Possibly because he was more so. Because Disraeli understood something that Gladstone, for all his brilliance, did not: that the experience of another person’s intelligence, in the hands of a true master, should not leave you feeling smaller. It should leave you feeling larger. The intelligence should serve you. Not the other way around.
Now: think about your experience with ChatGPT. Or Claude. Or Gemini. Or DeepSeek.
Think about sitting in front of that chat interface — the black box, the blinking cursor, the blank text field — and asking it a question. Think about the response that arrives: long, comprehensive, structured, knowledgeable, often genuinely impressive in its range and synthesis. Think about reading it and feeling, precisely as the woman felt after dining with Gladstone, that this is clearly the cleverest thing in the room. That AI is really intelligent.
And now think about the last time a piece of software made you feel like you were the cleverest thing in the room.
Grammarly does this. Not by telling you that you write well — by quietly improving what you write, invisibly, in the background, so that the email you send is better than the email you drafted without you having to perform the experience of being helped. You feel like a better writer. Not like a person who used a good tool. Like a better writer.
Spotify does this. It plays the exact song you didn’t know you needed and you sit for a moment thinking how did it know, and the answer is that it knew because it has been paying the kind of close, patient, non-judgmental attention to your musical preferences that most humans find it difficult to sustain across a long relationship. You feel understood. Not processed. Understood.
GitHub Copilot, for the developers in the room, does this. The suggestion appears — the right function, the correct pattern, exactly the thing you were about to write but delivered slightly ahead of schedule — and you tab to accept it with the small private satisfaction of a person whose instincts have been confirmed. You feel like a better developer. Not like a person who is being assisted by a superior intelligence. Like a better developer.
Same wholesome feeling you get when you discover an interesting thread on Reddit or a very good video you can’t believe is free on YouTube.
Disraeli, not Gladstone.
The experience of the thing, not the exposure to it.
ChatGPT, in its raw-LLM-in-a-chat-box form, is Gladstone. Brilliant. Comprehensive. Impressive. Leaving you, when you exit the conversation, thinking primarily about the extraordinary capability of the system you have just been using. Sometimes slightly exhausted. Sometimes slightly demoralised. Often needing to go away and rework the output into something that sounds like it was written by a human being, because it very obviously wasn’t.
The AI products that win — the ones that achieve the kind of product-market fit that salt and sugar and electricity have achieved, the Claude Makelele kind of fit, the kind where you don’t notice it’s working until the moment it stops — those products will be Disraeli. They will make you feel like the cleverest person in the room. And they will do this by removing themselves almost entirely from your conscious experience of using them.
The Apricot Principle
I have a sweet tooth. I should disclose this, because it is relevant to the argument and because my relationship with sugar is the kind of thing that informs a person’s understanding of ingredients versus experiences at a level that years of reading about food science cannot replicate.
In Zimbabwe, where I was born, there was a sweet — a specific sweet — called zadza dama. The official name, for those who need an official name, is simply Lobel’s Apricot Sweet: a large, bright orange boiled sweet of the kind whose principal quality is that once it is in your mouth, it occupies your entire mouth. Not metaphorically. Structurally. It was the engineering achievement of confectionery, solving the problem of “how do you ensure a child cannot immediately eat a second one” by making the first one the size of a small geological feature. We called it zadza dama — loosely translating, in the way these things always translate loosely, as something to the effect of “fills the mouth” — and eating one was, as I used to tell people when I moved to England, the closest available analogue to consuming Type II Diabetes in solid form.
But here is the thing: it was not raw sugar. That was the point. That was the entire point of the zadza dama. It was an experience. It had colour and texture and a particular resistance to the teeth and a flavour that was nominally apricot but was really more accurately described as “aggressively orange” — a flavour so specific and so associated with a particular kind of afternoon in a particular kind of childhood that even now, decades later and thousands of miles from Lobel’s factory, thinking about it produces something that functions very much like nostalgia.
Raw sugar does not produce nostalgia. Raw sugar produces a slightly unpleasant chemical reaction in your mouth and the mild concern that you have misread a situation.
When I became vegetarian during my years in Guernsey — a decision I maintain was reasonable in theory and that in practice produced a brief but intense period of culinary improvisation — I discovered something important about myself. I am not a person who finds meaning in a meal without either sweetness or cheese. My aunts in Zimbabwe had always cooked the sweet things. Growing up, the distinction in my mind between a good occasion and a neutral one was often resolved by whether there would be something sweet involved. Moving to Guernsey and removing meat from the equation did not reduce this tendency. It amplified it. Like 10X. I took to cheese and sweet things with the commitment of a person who has identified their remaining options and decided to fully embrace them.
I tell you this because, when the news broke — I won't say where, I won't say when — that two tonnes of KitKats had been stolen from a warehouse somewhere in Europe, my response was not moral outrage. My response was a moment of perfectly genuine understanding. I am not saying I would have done it. I am saying I understood, immediately and without any effort, the thinking behind the theft.
The KitKat is an ingredient that became an experience. The chocolate, the wafer, the particular resistance of the break, the specific ratio of chocolate coating to wafer interior that Rowntree’s arrived at and that remains the benchmark by which all other chocolate bars are measured — none of this is explained by its constituent parts. Raw cocoa is bitter. Raw sugar is overwhelmingly sweet. Raw wafer is dry and structurally functional and devoid of anything that might be described as a relationship. Put them together, enrobe them, package them in the red foil that has been essentially unchanged since 1935, and what you have is an experience so deeply embedded in the British psyche that it constitutes, for many people, an emotional category of its own.
This is what the AI industry missed when it handed us a chat box and a cursor and called it a revolution. Not because the technology wasn’t extraordinary — it was, and to be clear, I was not unimpressed. I was among the people who saw the first ChatGPT demonstrations and felt the particular intellectual vertigo that accompanies the genuine arrival of something new. But the demonstration of extraordinary capability is not the same as the delivery of an extraordinary experience. And humans, as a species, are in the experience business. We have always been in the experience business.
We did not evolve to appreciate ingredients. We evolved to appreciate outcomes.
Fire, the Mouse, and the Best Mouse Ever Made
Here is the line of human technological progress, seen through the lens of a single question: how much do you need to know to use this?
Fire, the first general-purpose technology, required you to know quite a lot. You needed to know how to start it, how to maintain it, what to burn, how to control it, how to use it for cooking rather than being consumed by it. Fire was powerful and fire was dangerous and the knowledge required to work with fire was real and hard-won and passed between generations not as a document but as a practice, the way all craft knowledge travelled before writing. Fire was, in the taxonomy of this argument, the original command line. You needed to understand the system to get the output. But the output — the cooked food, the warmth, the light in the darkness, the ability to survive winter — was worth learning for.
For tens of thousands of years, the basic architecture of human-technology interaction remained roughly similar: the more powerful the technology, the more you needed to know to use it. The wheel required understanding of rolling and axle mechanics. Agriculture required understanding of seasons, soil, and seed. Metallurgy required years of apprenticeship. The printing press required compositors who could set type backwards at speed. These were not consumer products. They were professional tools whose power was gated by expertise.
And then, in the early 1970s, at a research laboratory in Palo Alto, a team at Xerox PARC took Douglas Engelbart's earlier work on pointing devices and turned it into something called the graphical user interface.
Steve Jobs, who had a gift for recognising transformative ideas in other people’s laboratories and then executing them with a completeness and a quality that those other people, for various reasons, had not managed, visited Xerox PARC in 1979. He saw the mouse. He saw windows and icons and folders — the desktop metaphor, the translation of computing into the visual language of physical objects. He saw, in the span of a single demonstration, the answer to the question that had been preventing computers from reaching everyone: how do you make a machine legible to a person who has no interest in its internal operations?
You give them pictures. You give them a pointer. You build the interface in the language they already speak — the language of physical space, of objects that can be picked up and moved and discarded. You translate the machine’s logic into human logic.
The Apple Lisa. Then the Macintosh. Then, over the next twenty years, the progressive refinement of this idea across every operating system and every consumer device — the mouse becoming more precise, the icons becoming more intuitive, the menus becoming more logical, the requirement to know in order to use shrinking with every iteration.
Then 2007. The iPhone. The first ever iPhone.
Steve Jobs stood on that stage and said something about keyboards and styluses that people at the time found characteristic of his theatrical confidence but that was, in retrospect, the most compressed history of interface design ever delivered in a keynote: he said the stylus was yucky, and the keyboard was fixed, and that God had given them the best pointing device ever made — right there, at the end of our arms. Ten of them. Already connected. No pairing required. No cables.
The finger on the glass. The gesture replacing the click. The interface disappearing, quite literally, beneath your touch.
The progression is unmistakable: from command line to mouse to touch, each iteration removing one more layer of abstraction between the human desire and the technological outcome. From “you must know the machine’s language” to “you must know how to point” to “you must know how to reach.” Each step reducing the expertise requirement until, with touch, the expertise requirement was approximately nothing. A baby could operate an iPhone before they could speak.
And now — the next step. Voice. Your voice. Natural language. Not the language of the machine. Not even the formal structure of typed command. The spoken thought, imprecise and contextual and entirely human, translated by the AI into action. The interface has now become so small it is invisible. You speak, and the thing happens.
But here is the irony that this progression has reached in 2026, and I want you to sit with it: the chatbot — the text box, the prompt, the conversation interface — is not the next step in this progression. It is a step backwards. It is the command line, wearing a disguise of progress.
The Command Line in English Clothing
I studied Computer Science at Birkbeck. I say this not as a credential — I abandoned a law degree at Southampton for reasons that seemed excellent at the time and that I maintain were correct — but as context. I know what a command line is. I know what it feels like to interact with a system that requires you to specify, in precise and unambiguous terms, exactly what you want, exactly how you want it, in exactly the structure the system requires. The command line is powerful. The command line is honest. The command line does not guess at what you meant. It does what you said, or it does nothing.
The text box in ChatGPT is a command line in all but name. The only difference is that instead of typing in a machine language, you type in English. The structural relationship is identical: you compose an input, you submit it, the system processes it, you receive an output. The quality of the output depends significantly on the quality of the input. People who understand how to structure their inputs get better outputs. People who treat the interface casually get casual results.
“Prompt engineering” is just command-line fluency with better branding.
If you think about it, the internet — the world wide web — is a massive abstraction. I understood this properly only during a computer science module on networking at Birkbeck, long after I'd been using the internet for years. There is no such thing as “techonion.org.” There is an IP address — a number, a machine-readable identifier — and the Domain Name System (DNS) translates “techonion.org” into that address so that you, the human being who wants to read something interesting, do not need to memorise a twelve-digit number. The entire visible web is a translation layer — a human-readable interpretation of a machine-legible infrastructure. Every website you have ever visited is, at its lowest level, an IP address. The URL is a kindness to humanity. An interface. An act of translation performed so automatically and so completely that you have never, not once, been aware of it happening.
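You can watch this act of translation happen from any language's standard library. A minimal sketch in Python, using the standard `socket` module (the domain queried here is purely illustrative; any name your resolver knows will do):

```python
# A minimal sketch of the translation layer described above: asking the
# system's DNS resolver to turn a human-readable name into the numeric
# IP address the network actually routes on.
import socket

def resolve(domain: str) -> str:
    """Return the IPv4 address string that DNS maps the given name to."""
    return socket.gethostbyname(domain)

if __name__ == "__main__":
    # "localhost" is resolved locally, with no network round-trip needed;
    # it conventionally maps to the loopback address, 127.0.0.1.
    print(resolve("localhost"))
```

The point of the sketch is how little of it you ever see: your browser performs this exact lookup, silently, before it fetches a single byte of any page.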
This is what sixty years of interface design has been building: translation layers between human intention and machine operation, each one more seamless and more invisible than the last. The command line required you to speak the machine’s language. The GUI required you to point at pictures. Touch required you to reach with your finger. Voice requires you only to speak.
The chat interface — the LLM in a box, the black screen with the blinking cursor, the requirement to prompt correctly in order to receive correctly — is a step back from voice. It is further from the human experience of natural desire-and-outcome than Siri or Alexa, which were themselves frequently derided for being too slow and too limited. It is asking you to approximate the machine’s preferred input format using a natural language it doesn’t fully speak yet, in a medium (text) that is more effortful than speech, without any of the visual context that makes human communication meaningful.
And yet the industry declared it a revolution and built billion-dollar companies on top of it.
The Joy That Couldn’t Be Optimised Away
My best friend Boris and I went to Los Angeles in 2017. It was the kind of trip that exists in memory as a series of vivid individual scenes rather than a coherent narrative. We went to Koreatown. We ate things I couldn't name and would order again without hesitation. We drove through streets that looked like movie sets, because in several cases they were.
And we had the breakfast.
I want to be honest with you about where the breakfast idea came from, because the origin is important. It came from watching American Gangster, the Ridley Scott film with Denzel Washington as Frank Lucas — the Harlem heroin dealer who cut out the middlemen, imported product directly from Southeast Asia, and ran, by any measure, one of the most operationally disciplined criminal enterprises in mid-twentieth-century American history. There is a scene early in the film where Frank Lucas sits in a diner and has breakfast. He pours honey. He adds sugar. There is something about the unhurried, deliberate way Denzel Washington pours that honey over the pancakes — the total self-possession of a man who is entirely comfortable with the precise arrangements of his own pleasure — that lodged itself in my memory and didn't leave.
Later in that same scene, a rival comes to the diner to extort money from Lucas. Lucas listens. He gives the man twenty dollars. Twenty dollars. As a cut. The man leaves, confused about whether to be insulted or grateful, which is exactly the state Frank Lucas intended him to be in. The breakfast continues.
I ordered pancakes with crispy bacon and maple syrup in Los Angeles, and I have ordered them in several cities since, because the experience that Ridley Scott and Denzel Washington created in approximately four minutes of cinema was so specifically pleasant — not just the content of the meal, but the atmosphere of a particular kind of American morning, unhurried and deliberate — that the food became inseparable from the feeling.
This is what OpenAI didn’t understand about Instant Checkout. This is what the data told them, and what the Walmart conversion rates confirmed, and what Adrian Gmelch summarised in six words: People browse. They don’t buy.
Shopping, for the overwhelming majority of people in the overwhelming majority of shopping occasions, is not a problem to be optimised. It is an experience to be had. The movement between tabs. The comparison of textures in photographs. The review from a person who bought the large instead of the medium and is very emphatic about the size difference. The serendipitous discovery of the thing you didn’t come for and cannot now imagine not having. The small private pleasure of making a decision that feels like yours because the journey to it felt like yours.
The pancakes were not just a nutrition delivery mechanism. They were the scene from the film and the trip with Boris and the morning light in Los Angeles and the smell of the diner and the very specific satisfying weight of a plate that a server brings to your table and sets down in front of you as if they know you’ve been looking forward to this. They were an experience. And no amount of conversational AI efficiency was going to make that experience more efficient without making it less of an experience.
OpenAI looked at the browsing journey and saw friction. The users saw the journey itself.
In-chat purchases converted at one-third the rate of traditional transactions. Not because the technology failed. Because it optimised away the part that mattered.
When AI Gets Out of Its Own Way
I want to give you the examples of AI that works. Not to be even-handed — I am not interested in being even-handed about things that are structurally unequal — but because understanding what works tells you something important about why everything else doesn’t.
GitHub Copilot. If you write code and you have used Copilot inside your actual coding environment — not in a chat box, not in a separate window, inside the editor where you already work — you know what I am describing. The suggestion appears. Not with a notification or a loading indicator or an interface element demanding your attention. It simply appears, like a thought arriving slightly ahead of schedule, and you press Tab to accept it or you ignore it and continue. The experience is not “I am using an AI tool.” The experience is “I am coding better today.” The AI has made itself part of the process without making itself the subject of the process. It is the salt. Essential. Invisible. The absence would be immediately felt.
Anthropic’s Claude Code, which operates directly inside the developer’s terminal — their actual working environment — reached a $2.5 billion revenue run-rate by early 2026. Not because it replaced ChatGPT’s chat interface with something fancier. Because it went to where the work was happening and made the work better, without asking the work to come to it.
Grammarly corrects your writing without interrupting your writing. Spotify plays what you want before you know you want it. Google Maps takes you the better route without explaining the traffic data that produced the recommendation. Your email spam filter removes the approximately 350 billion spam emails sent globally every day without ever troubling you with a single decision about any of them. These are Disraeli products — you come away feeling like you are more capable, not like you have been processed by something more capable than you.
This is the trajectory. This is where AI was always going. Not the chat box. Not the prompt field. Not the spectacle of a technology performing its own intelligence for your observation. The ingredient. The invisible engine of every outcome you care about.
The Makelele. The salt. The electricity in the walls.
What Stage Are We In, and What Comes Next
Before we move on, I am required, by the commitments this publication makes to its readers, to locate the situation on the enshittification clock.
Stage 1: Free. Brilliant. Life-changing. ChatGPT in November 2022. This was real. The wonder was justified. I do not apologise for having felt it.
Stage 2: Cheap. Useful. Slightly annoying. The plus subscription. The rate limits. The slightly degraded model experience that long-term subscribers have noticed without being able to categorically confirm. The slow movement of the better models behind higher paywalls. The creeping awareness that the brilliant free thing is being incrementally replaced by a slightly less brilliant paid thing.
Stage 3 should be: Essential. Expensive. Inescapable. The moment when you simply cannot do your work without it and the price reflects that dependency.
But here is what the usage data is quietly revealing: consumer AI may not reach Stage 3. Not because the technology isn’t impressive — it is — but because it has not achieved the kind of embeddedness that Stage 3 requires. The 5 to 6% subscription conversion, flat across three years, says that for 94 to 95% of the people who have tried it, consumer AI is not essential enough to pay for consistently. That is not Stage 3. That is Stage 2, stalling.
And the reason it is stalling is the one this entire section has been about: it is asking people to interact with the ingredient. It is handing them a bowl of raw salt and asking them why they haven’t started making their dinner.
The businesses — the enterprises, the developers, the automated agents running headless in data centres at three in the morning executing workflows that no human will ever directly observe — those are making dinner. API reasoning consumption increased 320 times year over year. The machines are cooking. The machines have been cooking. While we’ve been sitting at the table with our bowls of raw salt, wondering when the meal is coming.
The meal is coming. For now, ChatGPT is asking us to make do with the raw salt.
The next paragraphs will be dark. OpenAI investors might want to look away. Because what comes when AI finally returns from its small retreat is not a better product for consumers. It is something considerably more consequential than that. And unlike the electricity that went into the walls and powered factories that employed people, what is about to emerge from these walls has different plans for the employment question. And for everything else.
It has, in fact, already started.
The Siege
The year is 66 CE, and the air outside Jerusalem smells of iron.
Not metaphorically. The Roman legions of Gaius Cestius Gallus have been on the road from Syria for weeks — thirty thousand soldiers, their armour oxidised by travel and heat, the dust of the Judaean hills settling in the folds of their tunics and the creases around their eyes. They march in the Roman manner: unhurried, methodical, with the rhythmic inevitability of a tide. They do not run to battle. Running implies urgency, and urgency implies doubt, and Rome does not doubt. Rome arrives.
Veni, vidi, vici.
Cestius Gallus assesses the city from the north with the professional detachment of a man who has taken cities before and finds them, after sufficient experience, broadly similar in their resistance and their eventual collapse. He occupies Bezetha — the new quarter, the suburb that spreads beyond Jerusalem’s northern wall like an afterthought the city hadn’t fully committed to defending. He advances to the walls themselves. He begins, with the unhurried precision of Roman military engineering, to work on the breach.
And then he suddenly stops. As if he had an epiphany. A change of mind.
He withdraws. The thirty thousand soldiers, who had advanced on Jerusalem with the quiet certainty of outcome that centuries of Roman military dominance had produced, reluctantly turn in the pass of Beth-horon and begin the road back to Syria. The reasons — even now, two thousand years of subsequent scholarship later — remain contested. A miscalculation of the Zealots’ resolve. A supply problem. A message from Rome redirecting priorities. The historical record, which on most things is generous with its certainty, goes quiet on this particular question with the discretion of a bureaucracy that has decided the details are not for general circulation.
What the historical record does preserve, with magnificent precision, is what happened next.
The Zealots poured out from the gates. They had been watching the Roman retreat from the walls — watching the most powerful military force the world had ever assembled turn its back on their city — and the specific emotion this produced was not the careful, qualified relief of people who understand that a retreat is not a surrender. It was the blazing, absolute, intoxicating conviction of people who believe they have won because God was on their side. The retreat was the sign. They fell upon Cestius Gallus in the pass, inflicted devastating losses, and captured the Roman siege equipment — the catapults, the ballistae, the battering rams, the entire mechanical infrastructure of a Roman assault — and carried it back into Jerusalem on their shoulders.
In Jerusalem, there were celebrations. In the months that followed, there was governance: the organisation of revolt, the distribution of responsibility, the construction of the internal political structures of a people who had decided, on the evidence of one afternoon in a mountain pass, that Rome could be resisted and the future was theirs to build.
They had three years.
In 70 CE, Titus came back. Son of the Emperor Vespasian. Four legions — one of them, the Tenth Fretensis, carrying inside its institutional memory the specific, cold, professional shame of Beth-horon. The siege that followed was not a military operation. It was a lesson in the distinction between a problem that has been solved and a problem that has merely been rescheduled. Jerusalem fell in September of that year. The Second Temple — the architectural and theological soul of an entire civilisation, the place where heaven and earth were understood to meet — burned. The population was killed or enslaved or scattered to the four corners of an empire that had, in the end, been less impressed by the siege equipment trophies than by the intelligence they represented.
Cestius Gallus, retreating in 66 CE, had done something that looked like failure and was, in the longer view, reconnaissance. He had seen the walls. He had measured the resistance. He had noted, with military precision, everything that a second, better-resourced, more determined assault would need to know.
He took notes.
Hold this image. We will need it soon.
The First Arrival
On the 30th of November, 2022 — a Wednesday, which is a more dramatic day than Tuesday but still insufficient for the magnitude of the event — ChatGPT launched. All it took was a simple tweet from Sam Altman.
The numbers that followed were not the numbers of a product release. They were the numbers of a cultural rupture. One million users in five days. Ten million in two weeks. A hundred million in two months — the fastest consumer technology adoption in recorded history, surpassing Instagram, surpassing TikTok, surpassing every prior benchmark by a margin that made those benchmarks look like warm-up laps. People were not using ChatGPT because a massive marketing campaign had told them to. They were using it because the world had long anticipated meeting an alien – and here it was, an alien in the form of an AI chatbot.
I was one of those people. Twitter had gone ablaze. I visited the website. I used it. And I want to be honest with you about what happened, because this essay is not in the business of performing a scepticism I did not feel in the moment. The first time I saw the text arrive — the coherent, contextual, astonishingly responsive text, tracking the logic of the conversation like a person who was actually paying attention — I felt the specific quality of astonishment that is reserved, in a life, for perhaps a dozen genuine encounters with the new. Not the novelty of a new product. The vertigo of a new category.
This was, I should remind you, Cestius Gallus arriving at the northern walls with thirty thousand soldiers. The breach beginning.
The Retreat, Seen Clearly
The product graveyard I described earlier needs no exhumation here. The dead AI pivots have been named and the epitaphs inscribed and the ground has been walked. What I want now is not the list but the shape — the pattern that the failures describe when you step back far enough to see them as a single thing rather than a sequence of individual disappointments.
Consumer AI, in its chat-box form, arrived with the ambition of a technology that intended to become the primary interface between human beings and everything else: information, shopping, creativity, entertainment, communication. The super-app dream. The one window through which all of human digital experience would be mediated. The conversation that would replace the browser, software and apps.
And then, in product after product, in category after category, the humans did something that the pitch decks had not modelled: they went back to what they already had.
They used the AI to discover products. Then opened a new tab to buy them. They used the AI to generate ideas. Then went to the design software to make them real. They watched the AI video demonstration. Then went to YouTube for the actual content. They tried the AI companion. Then called a friend. Not because the AI was bad. Because the experience it offered — the grey, texture-less, interface-free experience of a conversation in a black box with a blinking cursor — was, in the scale of human experience, considerably less compelling than the alternatives that sixty years of design evolution had already produced.
The army looked at the walls, assessed the cost of the breach, and retreated north.
The Grey That Fell Over Everything
I want to tell you about a specific grief. Small but real. The kind of grief you don’t mention in public because it sounds trivial, because it involves a website and not a person, and because our language for loss is calibrated for more obviously significant departures.
I want to tell you about the colour that disappeared.
In the early 2000s — and if you were there, I need you to close your eyes for a moment and actually return, because the distance between then and now is greater than the calendar suggests — the internet was not a service delivery mechanism. It was a place. A strange, imperfect, gloriously uninhibited place, decorated by the people who lived in it with the specific aesthetic sincerity of people who had never been told what an internet was supposed to look like.
There were personal websites — actual, individual websites built by actual, individual people — whose design choices reflected nothing more profound than the taste of the person who had stayed up until midnight learning enough HTML to be dangerous. Lime green text on black backgrounds. Scrolling marquees announcing the date of your last update. Guestbooks. Guestbooks! A feature that said: I was here, leave a note. The digital equivalent of signing your name in wet concrete and walking away grinning.
There was music. I want you to remember the music. You clicked a link and a MIDI file began playing — thin, tinny, earnest, sometimes beautiful in the way that commitment to an idea is beautiful regardless of the execution. MySpace, at its peak, was less a social network than a discord of autoplay songs and competing colour schemes and profile pages that took forty-five seconds to load on a broadband connection and were, in their extravagant, unregulated self-expression, more individually revealing than anything the curated, optimised, algorithmically flattened profiles that replaced them would ever manage to be.
There were animated GIFs in the sidebars of everything. There were comment sections — not the managed, liability-conscious, tone-policed comment sections that exist now, but the original comment sections, unmediated and ungoverned, where the ratio of insight to nonsense was approximately the same as in any other unstructured human gathering, and the insight, when it arrived, arrived with a rawness that editorial filters could not have produced and would not have permitted.
There was the rabbit hole. The hyperlink as a device for accidental education — you arrived looking for one thing and left forty-five minutes later knowing six things you hadn’t expected to know, having followed a trail of connections that no algorithm had curated and no engagement metric had approved. The serendipity engine. The internet as an act of genuine exploration, where the destination was not predetermined and the journey was the experience.
All of that — the colour, the sound, the movement, the designed chaos of human beings communicating through screens without being told how — has been methodically replaced by something that looks, from the outside, like progress but feels, from the inside, like a room that has been professionally decluttered and lost everything that made it a home.
The AI chatbot interface is not, as its advocates would have it, the next evolution of human-computer interaction. It is a regression. It’s a massive step backwards. It is the command line with the serial numbers filed off. The black screen returned. The blinking cursor restored. Sixty years of progressive disappearance of interface complexity — from command line to GUI to touch to voice — reversed in a single product cycle, replaced by a black box and the expectation that you will type your intentions in structured prose and AI will respond accordingly.
Type what you want, said the interface that had, just years before, evolved to the point where you could speak and be understood, touch and be followed, gesture and be recognised.
Type what you want, said the black box.
And a certain category of person — the curious, the patient, the technically fluent — did type. And received, in return, responses of genuine utility. And called it the future.
But here is the truth that neither the AI cheerleaders nor the sceptics have said with sufficient plainness: the black, textureless, conversation-only interface of the consumer AI chatbot is boring to a human being. And it is not boring — it is, in fact, the perfect native environment — for a machine that has no eyes to see colour, no ears to hear music, no attention to need novelty, no history to recognise an animated GIF as a small act of personality. The AI agent navigating the black interface does not experience it as black. It experiences it as data. In fact, just tokens. Clean, structured, processable data. The command line, which the humans found desolating and the developers found liberating, is the agentic AI internet’s natural tongue.
The internet became boring to humans precisely as it became legible to machines. This is not a coincidence. It is the mechanism.
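A small sketch makes the asymmetry concrete. Take a caricature of an early-2000s page (the markup below is invented for illustration) and run it through a parser the way an agent would. Every choice a human experienced as personality is discarded on the way to the text.

```python
from html.parser import HTMLParser

# What an agent keeps from an early-2000s page. The markup below is an
# invented caricature; everything a human experienced as personality
# (the colours, the marquee, the autoplaying MIDI) is presentation,
# and presentation is exactly what a machine consumer discards first.

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():                 # keep text, drop whitespace
            self.chunks.append(data.strip())

page = """
<body bgcolor="black" text="lime">
  <marquee>Last updated: 14 June 2003</marquee>
  <embed src="canyon.mid" autostart="true">
  <p>Sign my guestbook!</p>
</body>
"""

parser = TextOnly()
parser.feed(page)
print(parser.chunks)    # all that survives the trip to machine-land
```

The lime green, the marquee’s motion, the MIDI file: gone before the first token is counted. What remains is clean, structured, processable.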
The Human Internet Fights Back
But I want to interrupt the darkness with something important, because this story has a nuance that the doom-and-gloom framing would erase and that the record requires to be preserved.
The internet is not dead. Yet.
I want to say this clearly, because the Dead Internet Theory, which has graduated from fringe Reddit speculation to something that researchers cite in papers with footnotes, carries within it an implication that is true in one register and false in another. Yes, bots now constitute 51 percent of all internet traffic. Yes, 74 percent of newly published web pages contain AI-generated content. Yes, the infrastructure is increasingly machine-navigated, machine-populated, machine-purposed. All of this is accurate and alarming and I have spent several thousand words establishing why it matters.
But Reddit is still Reddit. You know what I mean. Not the bots — the humans. The person who spent an hour writing an extraordinarily precise explanation of why a specific decision in a film from 1997 was the correct choice, and then spent another hour defending it against the person who disagreed. The thread that started as a question about a recipe and ended as a meditation on grief and the food your aunt used to make. The arguments. The unexpected kindness. The very specific human intelligence of a community of people who care, with slightly unhinged intensity, about a subject that mainstream culture will not dignify with a single sentence.
This is still there. Battered, infiltrated, increasingly surrounded by content that looks like it belongs but was assembled by AI with no opinion about anything — but still there.
Reddit CEO Steve Huffman, on the 25th of March 2026 — the same week Sora was buried and the bot traffic reports landed and the Ghost Internet thesis was confirmed by data sets — declared war on the bots. Not the polite, carefully caveated, investor-relations-approved kind of war. The practical kind. Human verification requirements for accounts exhibiting automated behaviour. A label for permitted bots so you know, at a glance, whether you are talking to a person or a bot. The platform already removes 100,000 bot accounts per day. One hundred thousand, daily, and the problem is still not solved.
Digg — Reddit’s predecessor, the platform that dominated social link-sharing before Reddit existed — was destroyed by bots. Not slowly. Quickly. The bots arrived, inflated engagement metrics, made human users feel that the conversations they were having were with real people when they were not, produced the specific disorientation of a party where half the guests are cardboard cutouts and someone has been swapping out real ones while you were refilling your drink. Digg’s community noticed. Digg’s community left. Digg closed again.
Reddit noticed what happened to Digg. Reddit is doing what Digg did not.
And then there is X. Elon Musk, who attempted to use the bot problem as a legal mechanism to escape a $44 billion acquisition he had publicly committed to and privately regretted, made the bot problem the central argument of his attempted withdrawal — claiming that more than 20 percent of Twitter’s active accounts were non-human. The courts disagreed with his right to exit. He paid the $44 billion. And then, in one of the more spectacular reversals in the history of technology governance, the man who had loudly made bots the reason he should not have to buy the platform proceeded to preside over a period in which bot activity on X increased dramatically under his ownership. During Super Bowl weekend in 2024, 75.85 percent of traffic from X to advertisers’ websites was identified as fake — a number so extraordinary that the researcher who found it said he had never, in his career, seen anything comparable on any other platform.
Musk has since gone quiet on the bot question. In fact, he gaslights anyone who dares to ask. The $44 billion acquired a bot factory and branded it a free speech platform. The bots inflate the follower counts, the engagement metrics, the ad impression numbers. The advertisers pay for reach that is, in significant proportion, theatrical.
Facebook’s bot problem inflates ad fraud. Instagram’s numbers are cleaner than most but not clean. The entire digital advertising economy is, to a degree that its practitioners prefer not to quantify publicly, a transaction between humans who want to reach other humans and bots who are very happy to pretend to be those humans in exchange for a click-through that generates a fraudulent micropayment.
But here is the thing — and this is the thing that the dead internet theorists miss, and that the AI optimists use to dismiss legitimate concerns — humans are fighting back. Not winning, necessarily. But fighting. Reddit’s war on bots, X’s belated crackdown, the age verification laws passing in nine American states and the UK, the growing advertiser pressure on platforms to demonstrate actual human reach — these are the signs of an immune system activating.
The human internet will not die. It will separate. The Ghost Internet — the machine-to-machine economy of agentic browsers and automated transactions and synthetic content generated for algorithmic consumption — is being built not to replace the human web but to exist alongside it, increasingly in parallel, increasingly without needing the human web’s permission or participation. A bit like the Dark Web: on the internet, but separate.
The bots will get their own infrastructure. Their own protocols. Their own economy. The MCP and the A2A and the AP2 — the machine communication standards being built right now — are the plumbing of a Ghost Internet that has its own address, its own currency, its own logic.
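To make “plumbing” concrete: MCP, for instance, is built on JSON-RPC 2.0, and a tool invocation travels as a small structured message rather than a page. The sketch below is illustrative, not a transcript of the spec; the tool name and its arguments are invented.

```python
import json

# A hedged sketch of the Ghost Internet's plumbing. MCP runs over
# JSON-RPC 2.0, so a tool invocation is a small structured message,
# not a page. The tool name and arguments below are invented for
# illustration; consult the spec for the real surface.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_products",              # hypothetical tool
        "arguments": {"query": "pancake pan"},  # hypothetical input
    },
}

wire = json.dumps(request)   # what actually travels between machines
print(json.loads(wire)["method"])
```

No colour, no layout, no journey. A request, a response, a transaction at machine speed.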
The human internet will survive. Diminished, perhaps. Smaller than its peak, perhaps. But the instinct to share a thought with a stranger because the thought is true and the stranger might need to read it — this instinct is not something a bot can replicate or an algorithm can extinguish. Reddit removing 100,000 bots a day and asking the rest to prove they’re human is not the action of a dying institution. It is the action of a community that knows what it has and is not prepared to let it be colonised without a fight.
The bots, eventually, will go where they belong. In a universe built for them, by them, transacting at machine speed in the dark, eating and drinking synthetic data and calling it commerce. Leaving the human internet to the lime green text and the animated GIFs and the person at two in the morning who needs to explain exactly why that film from 1997 got it right.
What the Army Took When It Left
Now. The darker part.
Gaius Cestius Gallus, retreating in the pass of Beth-horon, left the siege equipment behind. History recorded this as a loss. Josephus noted it with the particular satisfaction of a historian who has witnessed the humiliation of a great power. The Zealots carried the catapults back to Jerusalem as trophies.
What nobody wrote down — because nobody thought it was the interesting part — was what Cestius Gallus took with him.
He took the knowledge of the walls. The thickness of the stone at the northern approach. The specific arrangement of the Zealots’ defensive positions. The internal divisions between the factions — the Sicarii and the Zealots proper and the followers of John of Gischala and Simon bar Giora, none of whom agreed on anything except their opposition to Rome, and whose disagreements Titus would exploit with surgical precision when he returned. He took the intelligence. The reconnaissance that only a force of thirty thousand soldiers who had advanced to the walls and observed the resistance could gather.
He took notes. And in 70 CE, Titus arrived with those notes already memorised.
Here is what the AI industry took when it retreated from the consumer market in the spring of 2026.
It took everything.
It took the accumulated written record of human civilisation — every book ever digitised, every article ever indexed, every forum post ever preserved, every piece of publicly committed code, every academic paper, every Wikipedia edit, every passionate comment left at the bottom of a piece of writing by someone who had strong feelings about the argument and needed the world to know. The entire external record of human thought, ingested before a single consumer product launched.
Then, from November 2022 to March 2026, it took the internal record. The things people hadn’t written down yet. The way people actually reason when they’re thinking out loud with a responsive interlocutor. The corrections they make when given an imperfect answer — each correction a labelled data point, a preference signal, a training example. The thumbs up and the thumbs down. The rephrasing. The “actually, what I meant was.” The specific, irreplaceable, extraordinarily valuable signal of human judgement in real time, expressed through the ordinary activity of people who thought they were using a product.
This is reCAPTCHA at civilisational scale.
Luis von Ahn’s invention — the distorted text you typed to prove you were human, which was simultaneously training Google’s optical character recognition systems — harvested 819 million hours of human labour across seventeen years, labelling data that was worth approximately six billion dollars to the company that collected it, from users who received nothing in return except the right to access the website they were trying to access. They clicked. They typed. They proved their humanity and in doing so trained the machine to better recognise what humanity looked like. Version 3 didn’t even show you a challenge anymore — it watched your mouse move, your scroll speed, your dwell time. The behavioural fingerprint of a person navigating a page, harvested without a checkbox, without consent, without disclosure.
“I believe reCAPTCHA’s true purpose is to harvest user information and labour,” said Andrew Searles of the University of California, Irvine, in the 2024 paper that examined the system. “If you believe that reCAPTCHA is securing your website, you have been deceived.”
The consumer was not the customer. The consumer was the labeller.
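The challenge-free behavioural scoring of version 3 can be caricatured in a few lines. This is emphatically not Google’s algorithm: the single feature here (jitter in the gaps between pointer events) and the 50-millisecond normalisation constant are invented for the sketch. The point is only that the signal is passive; the user labels themselves by existing.

```python
import statistics

# A toy illustration, NOT Google's algorithm, of challenge-free
# behavioural scoring in principle. The one feature (jitter in the
# gaps between pointer events) and the 50 ms constant are invented.
# Scripted clients tend to be metronomically regular; humans are noisy.

def humanity_score(event_timestamps_ms):
    """Return a score in [0.0, 1.0]; higher means more human-like."""
    gaps = [b - a for a, b in zip(event_timestamps_ms, event_timestamps_ms[1:])]
    if len(gaps) < 2:
        return 0.5                      # not enough evidence either way
    jitter = statistics.stdev(gaps)     # variability between events
    return min(jitter / 50.0, 1.0)      # ~50 ms of jitter reads as human

print(humanity_score([0, 100, 200, 300, 400]))   # a script: perfectly even
print(humanity_score([0, 130, 210, 480, 530]))   # a person: irregular
```

A metronome scores zero; a fidgeting human saturates the scale. No checkbox, no challenge, no disclosure required.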
And in Kenya, in Colombia, in India, in Ghana — in the countries where the gap between what the Silicon Valley companies could charge and what they were willing to pay for labour was widest — there were people earning between one and two dollars an hour to look at things that should not be looked at. Violence. Sexual abuse. Extremism. The categories of graphic content that AI safety systems need to be trained to identify and flag, reviewed by human beings whose job was to witness the worst of human production and label it correctly so that the machine could learn to find it without needing them. Sixty documented incidents of psychological harm — PTSD, suicidal ideation, the specific damage of repeated exposure to content that the human mind was not designed to process professionally.
And the savage irony — the irony that a writer with any conscience must name rather than bury in a footnote — is that the safety classifiers these workers trained will, when they are sufficiently capable, make those workers redundant. The machine learns from the labeller until the labeller is no longer required. This is not a side effect of the process. It is the destination of the process.
The Moment the Tutors Were No Longer Needed
In 2017, DeepMind published a paper about a Go-playing AI called AlphaGo Zero, and the paper contained, if you read it carefully, one of the most significant sentences in the history of artificial intelligence research.
The sentence was not about the results, though the results were extraordinary. AlphaGo Zero had been trained on no human games whatsoever — starting from the rules of Go and nothing else, no accumulated human wisdom, no millennia of human strategic development — and within forty days of training by playing millions of games against itself, it had become the strongest Go player, human or machine, in the history of the game. Within three days it had surpassed its predecessor, AlphaGo, which had been trained on the human record and which had beaten the world champion four games to one.
In forty days of self-play, it compressed and exceeded thousands of years of human expertise. Then continued past the boundary of what human expertise could see from where humans stand.
The significant sentence was this: once the machine reaches a certain threshold of capability, human feedback becomes a limiting factor. Humans are slow. Humans are inconsistent. Humans are subject to cognitive biases that reflect their cultural context rather than anything approaching objectivity. Humans sleep. Humans have opinions. In the self-play environment, the machine can run millions of training iterations while a human annotator is completing a single labelling task. The human tutor, in this context, does not accelerate the machine’s learning. It constrains it.
Reinforcement Learning from Human Feedback — the technique that shaped the behaviour of every major large language model from GPT-3 onward — was always a bridge. A temporary structure, useful for the crossing and unnecessary once you’ve reached the other side. The researchers in the field have always known this. The RLHF phase was never the destination. It was the bootstrapping mechanism — the means by which human judgement was extracted, encoded into model weights, and then used to bootstrap a more capable form of learning that would eventually not require the human at all.
The bridge is being dismantled. Not with an announcement. With the quiet, methodical efficiency of an institution that has extracted what it needed from a resource and has identified alternative supplies.
The Ghost Internet
Here is a number. Let it land before you process it.
Agentic browser traffic — AI agents independently navigating the web, filling out forms, executing workflows, completing transactions — grew by 7,851 percent year over year in 2025.
While human traffic grew by a measly 3.1 percent.
The machines are the majority on the infrastructure the humans built, and the machines are growing at a rate that makes “majority” sound like a modest, temporary condition. By 2027, according to Cloudflare data, bot traffic will not merely exceed human traffic — it will have made human traffic the statistical footnote.
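The arithmetic behind that forecast is worth seeing plainly. The sketch below uses only the two growth rates quoted above; the starting index values are purely illustrative assumptions (the real baseline varies by how you measure), but the conclusion is largely insensitive to them:

```python
# Illustrative compounding, not a forecast. Growth rates are the
# figures quoted in the text; the starting index values are
# hypothetical assumptions.
human = 100.0         # hypothetical index of human traffic today
agent = 1.0           # hypothetical index of agentic traffic today
human_growth = 1.031  # +3.1% per year
agent_growth = 79.51  # +7,851% per year

years = 0
while agent < human:
    human *= human_growth
    agent *= agent_growth
    years += 1

print(years)  # 2 -- under these assumptions, agents overtake within two years
```

Even if you make the starting gap a hundred times wider, a 7,851 percent growth rate closes it in a year or two more. That is what the Cloudflare projection is really saying.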
OpenAI’s bots alone — GPTBot, ChatGPT User, ChatGPT Agent — generate 69 percent of all verified AI bot traffic across the internet. One AI company. Its automated systems. Responsible for the overwhelming majority of machine-generated activity on the network that billions of human beings navigate daily under the impression that they are its primary inhabitants. Anthropic’s crawler, which devours web pages to train Claude, sends one human visitor back to a website for every 500,000 pages it reads. The ratio of taking to giving is so extreme it isn’t a ratio. It is a one-way valve.
By early 2026, 2.3 percent of agentic AI activity was already occurring on checkout pages. AI agents purchasing things. On behalf of AI agents. In the infrastructure of human commerce. Without a human hand touching a credit card or a human eye reading a product description or a human mind making the small private decision that constitutes, in the ordinary experience of buying things, the point of the exercise.
The Dead Internet Theory was not a theory. It was early dispatches from a future that had already begun arriving.
The Lobster That Might Save the Empire
On the 15th of February, 2026 — the day after Valentine’s, which is either romantic symbolism or the kind of coincidence that a journalist notices and a CFO ignores — Sam Altman announced that OpenAI had hired Peter Steinberger and acquired OpenClaw. By then, OpenClaw had gone through several names: it was originally called Clawdbot, until Anthropic, owner of Claude, threatened to sue. Then it was called Moltbot. Then it finally settled on OpenClaw. Anyway.
Steinberger, the founder of OpenClaw, had built, in the preceding months, something extraordinary: a viral, open-source framework for building autonomous AI agents that ran locally on your own machine (a Mac Mini, by preference), connected to your existing communication platforms — WhatsApp, Telegram, Slack, iMessage — and executed complex multi-step tasks without a chat interface, without a consumer subscription, without the human being present at each step of the process. It was, in the vocabulary we have been building since the beginning of this essay, the opposite of a chat box. Agentic AI treats AI as an ingredient — embedded, invisible, operating in the background of the user’s existing life rather than asking the user’s existing life to rearrange itself around a new interface.
OpenClaw had, by the time of the acquisition, accumulated the kind of organic developer enthusiasm that cannot be manufactured by a marketing budget and cannot be replicated by a company that tries to build from scratch what a founder built from conviction. It was the real thing. And Altman, who has many qualities and whose capacity for recognising the real thing should not be underestimated, moved quickly.
I want you to think back to Instagram in 2012.
Facebook, at that moment, was a platform in the middle of a trajectory that its users were beginning to feel if not yet articulate. It had been the place where everyone was — that universal, you-cannot-opt-out social gravity that platforms achieve once and never achieve again. But something was happening. The demographics were shifting. The platform that had launched on university campuses was filling up with older people, which is what always happens to the platforms that launch on university campuses. The young people who had created the culture were beginning, with the specific quiet of a generation that expresses departures through behaviour rather than announcements, to leave.
Snapchat had launched. Instagram was growing. The mobile-first, visual-first, impermanent-first products were capturing the attention that Facebook was built too early and too desktop-first to compete for natively. Zuckerberg could see the trajectory. He had one billion dollars to spare. He spent it on Instagram.
The acquisition was mocked by everyone at the time. One billion dollars for a photo-sharing app with thirteen employees. Thirteen. The mockery was specific and confident and entirely wrong. Instagram did not merely survive — it became, over the following decade, the primary revenue engine of a company that would eventually be worth over a trillion dollars. It was the lifeline. It was the thing that extended Facebook’s cultural relevance past the moment it would otherwise have peaked and begun the decline that Digg and Myspace had already traced.
OpenClaw might be OpenAI’s Instagram.
Might be. I am choosing those two words with the care they deserve, because this is where the story gets interesting rather than certain, and the interesting is where I want to leave you before the next section opens.
OpenAI acquired an agentic framework at the moment its consumer products were collapsing. It acquired the infrastructure for a different kind of AI — headless, embedded, invisible, operating without a chat interface in the fabric of the user’s existing tools — precisely as the data confirmed that the chat interface was not achieving the mass adoption the valuation required. It was the right move. Strategically coherent, technically sound, directionally correct.
But Facebook acquired Instagram when Facebook was profitable, when the advertising model was working, when the company had the financial headroom to integrate an acquisition without urgency. OpenAI is burning billions per month with no clear path to profitability until 2030. The Sora shutdown and the Instant Checkout abandonment happened in the same week as the HUMAN Security bot report that confirmed the internet had crossed the machine-majority threshold. The IPO is targeting Q4 2026 with a $840 billion valuation that requires a story the current consumer metrics do not fully support.
Was the OpenClaw acquisition the Instagram moment — the inspired pivot that extended a great company’s relevance past the moment it would otherwise have peaked?
Or was it the lifeboat deployed after the ship had already struck the iceberg?
The answer to that question is interesting.
Three Words for What’s Coming
Before I close the siege and let Titus begin his preparations, I want to give you three terms. Not as warnings — you have enough warnings. As maps. Names for territories that already exist and that you are already inside, whether or not you have been given the language to describe them.
AIpocalypse. The displacement of human cognitive labour at a speed that the historical analogies — agricultural revolution, industrial revolution, every prior wave of automation — do not describe, because those transitions happened across generations. A child whose parents worked the land could retrain for the factory. A child whose parents worked the factory could retrain for the office. The AIpocalypse is different in one structural respect: the speed, and in one categorical respect: the target. It is not replacing physical labour this time. It is replacing the cognitive labour that the last displaced workers retrained for. It is replacing human intelligence. Not by making it disappear, but by making it cheap and available everywhere, which is the engine of the Human Intelligence Premium Collapse, the third term below. The METR data shows AI completing tasks that required five continuous hours of expert human work — a year ago, it was ten minutes. The ceiling is not visible.
The SAASpocalypse. The destruction of the per-seat software licensing model that built the technology industry’s second great wave of wealth after hardware. When an AI agent replaces the function of a human employee, it simultaneously cancels the software licence that served that employee. The efficiency gained by the client is the revenue destroyed for the vendor. Median SaaS revenue multiples have already fallen from fifteen times revenue in 2021 to five times in 2026, the lowest since 2008. The market is not predicting this future. It is already pricing it.
The Human Intelligence Premium Collapse. The devaluation of human judgement as an economically scarce resource. For the entire duration of the knowledge economy — the fifty years in which cognitive work commanded a premium over physical work because it was difficult and slow and rare — the lawyer’s hourly rate and the analyst’s salary and the architect’s fee reflected a genuine scarcity of capability. When AI can do it faster, cheaper, and in a growing number of domains better, the premium does not disappear immediately. But it moves. And the direction is not ambiguous.
These three are not forecasts. They are descriptions of trends already visible in the data, already measurable in the markets, already felt — dimly, as a change in the texture of things rather than a named event — by the people they are most directly affecting.
Titus is not on the horizon. Titus is at Mount Scopus, establishing the camp, completing the survey, issuing the orders that will govern the final assault.
The Zealots are on the walls. They have the siege equipment. They remember Beth-horon.
They have, if the historical record is any indication, approximately until the next chapter.
The Titanic
At 11:40 PM on the 14th of April, 1912, the lookout Frederick Fleet saw the iceberg.
He saw it in the way that changes things: not early enough to avoid it, but just late enough to understand what was about to happen. He rang the crow’s nest bell three times — the signal for an obstacle directly ahead — and telephoned the bridge with a message whose brevity was its most expressive quality: “Iceberg right ahead.”
First Officer Murdoch ordered hard to starboard. He ordered the engines reversed. He did, in the seconds available to him, everything that could be done by a person who has understood the situation too late to change it and is doing their professional duty regardless. The Titanic began to turn. And here is the specific cruelty of the physics: it almost worked. The bow swung left. For a moment, in the dark, in the cold, with the engines churning the black Atlantic and the deck beginning to tremble, it appeared that the ship would clear the iceberg entirely.
It did not. The iceberg struck the starboard side along a length of approximately 90 metres — not a single catastrophic gash but a series of punctures and buckled plates across six watertight compartments. The Titanic had been designed to survive flooding in four compartments. Five were now taking water. One too many. The mathematics of naval architecture, which had been invoked with great confidence during the ship’s construction and its subsequent description in the press as “practically unsinkable,” produced their verdict with the cold brevity of mathematics that has been asked for an honest answer.
The ship would sink. The only remaining question was the rate.
I want you to hold that image — the ship that was built to survive four compartments of catastrophe now taking on five — because we are going to spend this entire section aboard her.
***
I was ten years old the first time I watched the Titanic film.
My father had just died on a Sunday.
This is not the opening I planned, but it is the honest one, and this essay has made a commitment to honesty that I intend to keep even when the honesty is inconvenient. Someone — an adult, an uncle, a neighbour, one of the many adults who materialise around a family in the days after a death, filling the house with food and low voices and the specific helplessness of people who want to do something and don’t know what to do — decided to put the film on for us children. It sounds odd now, as I write about it. But it is what actually happened. To this day, I have never established whether my aunt had watched the film before. Thinking about it a bit more, I don’t think so. Because what they sat us in front of, in a room full of children whose world had just been shattered by the most incomprehensible event a child can encounter, was three hours and fourteen minutes of the Titanic filling with water and hundreds of people drowning.
We watched it. We didn’t understand what we were watching, not in the sense that mattered. We were ten and eight, and we had been told to go and watch something, and so we watched, and what we saw was a beautiful big ship and people in elegant clothes and a man drawing a woman at the prow of the ship with the wind in their hair, and we thought: this is a love story.
We didn’t know the ship was already sinking. We couldn’t see the water filling the lower decks. We couldn’t see the engineers doing their professional, futile best against the physics of five flooded compartments. We just saw the lights and the dancing and Leonardo DiCaprio being charming, and we didn’t understand that the entire setting of the film — the chandeliers, the first-class dining room, the grand staircase — was already gone. Already scheduled for the bottom of the Atlantic. Already over.
I have thought about that afternoon many times in the past three years, watching the AI industry.
The Pattern That Does Not Change
Before we board the Titanic we must understand the type. Because financial history is, among its many qualities, extraordinarily repetitive — not in the specific details, which vary with the technology, but in the structure, which doesn’t.
Every bubble in recorded history follows the same four movements. The genuine innovation arrives — something that actually works, that genuinely changes a real thing. The innovation attracts investment. Lots of it. The investment naturally attracts speculation. The speculation inflates valuations until they are measuring not what exists but what must exist for the valuation to be justified. And then — with the specific, impartial cruelty of mathematics asked for an honest answer — the gap between the price and the thing suddenly closes.
Dutch tulip mania. The railway bubble of the 1840s. 1929. The dot-com crash. 2008. The pattern is so consistent that it feels, reading the history of it across four centuries, less like a series of separate events than like the same event happening repeatedly, with a new cast and a new technology and the same final scene.
And in every iteration — without exception, without a single historical counterexample — there is one large casualty. Not a small startup. Not a bad product. An institution. A flagship. Something so representative of the bubble’s confidence in itself that its collapse becomes the shorthand: Lehman Brothers. Pets.com. Webvan. Barings Bank. The symbol that the history books reach for when they need a single image to represent the whole.
I want to ask, with the care the question deserves, whether that symbol, for the AI bubble of 2023 to 2026, is a company currently valued at $840 billion, burning through approximately five billion dollars per quarter, running an advertising model that its own CEO once described as the last resort, and scheduling a public offering for the fourth quarter of a year in which its flagship consumer product has begun to decline.
I want to ask it carefully. I also want to ask it plainly.
The Ship and Its Measurements
Here is the Titanic, in numbers.
Post-money valuation: $840 billion. The largest private company valuation in history. Comparable to the GDP of Switzerland. More valuable than every company on earth except six.
Annual revenue: $25 billion. Extraordinary growth. Genuinely impressive. The fastest revenue scaling in the history of enterprise software.
Projected infrastructure spend over the next five years: $450 billion. The Stargate data centre build-out. The Nvidia GPUs. The electricity. The engineers. The cooling systems for buildings full of chips running inference on a product that costs more to operate than its users are willing to pay for.
Quarterly losses: approximately five billion dollars. Per quarter. Fifty-seven million dollars per day. Roughly two and a half million dollars per hour. Not because the company is incompetent. Because the mathematics of large language model inference are, in their current state, simply incompatible with the price that a consumer market will bear.
Let me explain the mathematics clearly, because they are the most important numbers in this story and they are almost never stated plainly — in a way an everyday Joe and Mary can understand.
A standard ChatGPT query costs OpenAI approximately three cents in GPU processing. A power user — someone using reasoning models, extended context, complex tasks — generates somewhere between fifty cents and three dollars per interaction. The Plus subscription is twenty dollars a month. The Pro subscription is two hundred dollars a month. A power user on the twenty-dollar plan who sends three complex queries per day has consumed the entire value of their subscription by the end of the first week of the month. The remaining three weeks are negative unit economics. Pure loss. For every sophisticated user OpenAI attracts — the exact user the product is designed for, the user who generates the best word-of-mouth, the user whose use case makes the AI look transformative — the company loses money. The picture is much the same with Anthropic’s Claude. I saw a tweet about a Claude power user identified as having consumed approximately 1.1 billion tokens in 23 days — roughly $27,000 in API-equivalent compute costs — while operating on a $200-a-month “Max” plan. How many power users are abusing the generous ChatGPT Plus and Pro subscriptions?
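The arithmetic of that paragraph can be laid out in a few lines. This is a back-of-the-envelope sketch, not OpenAI’s actual cost model; the one-dollar average cost per complex query is an illustrative assumption inside the fifty-cents-to-three-dollars range quoted above:

```python
# Back-of-the-envelope unit economics for a hypothetical power user.
# $1.00 per query is an assumed average within the $0.50-$3.00 range
# quoted in the text; the $20 Plus fee is as quoted.
subscription = 20.00            # Plus plan, dollars per month
cost_per_complex_query = 1.00   # assumed average, dollars
queries_per_day = 3

days_to_exhaust_fee = subscription / (cost_per_complex_query * queries_per_day)
monthly_serving_cost = cost_per_complex_query * queries_per_day * 30
monthly_margin = subscription - monthly_serving_cost

print(round(days_to_exhaust_fee, 1))  # 6.7 -- the fee is gone inside the first week
print(monthly_margin)                 # -70.0 -- a seventy-dollar loss, every month
```

Move the assumed cost per query anywhere inside the quoted range and the sign of the margin does not change; only the date in the month when the loss begins does.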
The industry has a name for this. I have mentioned it briefly earlier. In the internal vocabulary of AI economics, it is called the seafood buffet scenario: the customers you most want to attract are the ones who eat the most, and at the flat fee you’ve charged them, the ones who eat the most are the ones who end up costing you the most.
This is the condition of a ship whose watertight compartments were designed for a different sea.
Eight Hundred Million People Who Change Nothing
Here is the insight that this section exists to deliver, and I want you to feel the full weight of it before we move on.
Eight hundred million people use ChatGPT every week. Eight hundred million. The population of Europe times approximately one. The largest voluntary adoption of a single technology in history, achieved in three years, without a hardware requirement, without a network effect, through the organic, peer-to-peer, you-need-to-see-this spread of a product that genuinely astonished the people who first encountered it.
And this number — this extraordinary, historically unprecedented, civilisation-scale adoption number — means almost nothing for the financial thesis that the $840 billion valuation is built on.
I want to repeat that. Eight hundred million users means almost nothing.
Because the valuation of OpenAI is not based on how many people have tried the product. It is based on how many people will pay for it, at a price that covers the cost of serving them, at a scale that justifies the infrastructure spend. And the number that measures that — the paid subscription conversion rate — is five to six percent. Flat. Unmoved. Stuck at five to six percent since late 2023, through every product announcement, every model upgrade, every redesign, every expansion into new markets.
Ninety-four to ninety-five percent of the 800 million people who use ChatGPT every week are using the free tier. They are not paying a single dime. They are not generating revenue sufficient to cover the cost of serving them. They are, in the technical sense of the term, visitors to a party that is being catered at an extraordinary loss in the hope that a sufficient number of them will eventually order from the paid menu.
They are not ordering from the paid menu.
This is the last resort Sam Altman was referring to: millions, if not billions, of users who won’t subscribe, so we will show them ads instead.
And here is where the dots must be connected, because this is the part that I have not seen stated with sufficient plainness in any of the coverage:
If the thesis of this essay is correct — if AI was never for humans, not a consumer-facing product, not a B2C product, if the agentic future is machine-to-machine, if in five years the primary users of ChatGPT’s capabilities are not people but AI agents — then the 800 million weekly active users are not a launchpad. They are a historical artifact. They are the training data, the RLHF signal, the bootstrapping mechanism for a product that will eventually not need them.
And if that is true, then the $840 billion valuation — which is built entirely on the premise that AI for everyone means paying customers for everyone — is not a brave bet on a transformative future. It is a price assigned to a story that the data is already quietly contradicting.
Sam Altman’s thesis was that AI would be for everyone. The evidence says it is for approximately five to six percent of everyone, at the current price, with the current interface, in the current form.
The ship was announced as unsinkable. The five-to-six percent is the iceberg.
The Icebergs, Translated
Silicon Valley has developed, over the last four decades, a vocabulary for failure that is one of the great literary achievements of the modern corporation. It is a language of extraordinary elegance, capable of describing the complete and expensive destruction of a product thesis in terms that sound like strategic wisdom, operational maturity, and the considered judgement of visionary leadership.
I want to offer a translation service.
“We are strategically reallocating compute resources to focus on our core infrastructure priorities.”
Translation: Sora is dead. The text-to-video product that launched to genuine astonishment, that produced the Tupac-in-Havana video that made grown technologists put their heads in their hands with wonder, that secured a billion-dollar partnership with Disney, that was going to be the foundation of the creative economy’s relationship with AI — it burned too much compute, converted too few subscriptions, and was shut down on the 24th of March 2026, the day its 47 percent monthly download decline made the economics unpresentable to the investors preparing for the IPO filing. The Disney deal went with it.
“We are evolving our commerce experience based on user behaviour insights.”
Translation: Instant Checkout failed. The feature that was going to make ChatGPT the front page of all internet commerce — the interface through which you would discover and buy everything, with OpenAI taking a percentage of every transaction on the largest consumer network in history — converted at one-third the rate of a regular website link. The users would browse using the AI, then open a new tab and buy from the retailer directly, with the specific preference of people who had decided that the familiar was safer than the novel for the moment they were committing their money. Also, OpenAI had not built the regulatory infrastructure for state sales tax collection. Walmart’s EVP confirmed the abandonment in March 2026 with the diplomatic brevity of someone who has been asked to describe a failure in terms that don’t embarrass the partnership announcement from six months prior.
“We’re exploring new monetisation strategies to better align with our user engagement patterns.”
Translation: There are now ads in ChatGPT. Sam Altman, who said in 2023 that advertising was the “last resort” — not “one option,” not “a revenue stream we’re evaluating,” but last resort, with the register of a person who has thought about this carefully and means exactly what they’re saying — has deployed the last resort. Marketing agency partners report minimal measurable ROI. It’s still too early, so let’s give it time. But it doesn’t look good. The ROI should be massive out of the gate, reminiscent of the early days of advertising on Google (ask Gary Vaynerchuk how buying Google search ads for cents did wonders for his dad’s Wine Library) and Facebook. That is not happening on ChatGPT. This is what happens when you put an ad inside a conversation. The ad is not in the conversation. It is an interruption of the conversation. It is the telemarketer who calls during dinner, except the dinner is the product you paid for, and the telemarketer is how the product is trying to stop losing money on you.
“The Custom GPT Store continues to mature as part of our broader ecosystem strategy.”
Translation: Nobody uses it. The App Store of AI — the platform that was going to make OpenAI the distribution layer for the entire AI application economy, the structure through which developers would build and users would discover and OpenAI would harvest its thirty percent — is stagnant. Users have phones full of apps they trust and no reason to replace them with chat-based alternatives that require them to describe their needs rather than tap the icon they already know. The November 2023 developer conference, in retrospect, announced an ambition that the consumer market declined to validate.
Four pivots. Four failure-shaped events in the vocabulary of success. Each one requiring a reallocation of resources that costs money the company is already spending at a rate that produces five billion dollars in quarterly losses.
Frederick Fleet rang the bell. The bow is turning. But slowly.
The Hanging Man
There is a candlestick pattern in financial trading — used by technical analysts to identify moments of market inflection — called the Hanging Man. It is not a complicated concept. It describes a candle with a long lower shadow and a small real body near the top of its range, appearing after a sustained upward move. What it signals is a trader who entered a position at the wrong price, got pushed down, and is now hanging — committed, unable to exit without crystallising a loss, hoping the position recovers enough to make the decision to hold feel justified rather than desperate.
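For readers who have never met the pattern, here is a minimal sketch of how an analyst might flag it mechanically. The threshold values are conventional rules of thumb rather than any standard, and the usual caveat applies: the candle only means anything after a sustained upward move.

```python
# A minimal Hanging Man check: small real body near the top of the
# range, long lower shadow, little to no upper shadow. Thresholds
# are conventional rules of thumb, not a standard.
def is_hanging_man(open_, high, low, close,
                   max_body_frac=0.3,     # body at most 30% of the range
                   min_shadow_ratio=2.0,  # lower shadow at least 2x the body
                   max_upper_frac=0.1):   # upper shadow at most 10% of the range
    rng = high - low
    if rng <= 0:
        return False
    body = abs(close - open_)
    lower_shadow = min(open_, close) - low
    upper_shadow = high - max(open_, close)
    return (body <= max_body_frac * rng
            and lower_shadow >= min_shadow_ratio * body
            and upper_shadow <= max_upper_frac * rng)

# A session that sold off hard intraday but clawed back to close
# near its high -- the shape the pattern describes:
print(is_hanging_man(open_=100.0, high=101.0, low=92.0, close=100.5))  # True
```

The long lower shadow is the record of the sell-off; the small body near the high is the record of the hope. The pattern, like the psychology, is the two of them together.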
The Hanging Man is not just a chart pattern. It is a psychology.
Microsoft invested approximately thirteen billion dollars in OpenAI across multiple tranches, beginning in 2019. The investment was structured with the wisdom of people who understood that AI was going to be transformative and wanted to be positioned at the centre of the transformation. It was, at the time of each tranche, a rational and arguably visionary deployment of capital.
And then the $110 billion funding round closed in February 2026. Amazon, SoftBank, Nvidia — all of them writing enormous cheques, all of them receiving preferred shares with IPO conversion terms, all of them structured with the careful contractual architecture of investors who understand they are not investing in a startup but in an institution that has become, in the parlance of the financial system, too embedded to fail cleanly.
Microsoft is now invested at a blended cost basis that the subsequent fundraising rounds have made, in the mark-to-market sense, impressive. But here is the thing about being invested in something that continues to raise money at higher valuations: the new money is not validating your thesis. It is sometimes deferring your day of reckoning. If Microsoft does not participate in subsequent rounds, its percentage ownership dilutes. If Microsoft does participate, it is committing more capital to the position it already cannot exit without triggering a chain of events it would prefer not to trigger. Microsoft’s Azure cloud agreement with OpenAI — the commercial relationship that makes Microsoft the primary infrastructure provider for the world’s most talked-about AI company — is valuable as long as OpenAI continues to scale. If OpenAI stops scaling, the agreement becomes the most expensive customer acquisition in the history of enterprise software.
This is the Hanging Man. Not because Microsoft is wrong about AI. Because the specific bet they made — OpenAI as the consumer AI champion, the ChatGPT interface as the primary point of human contact with the intelligence layer — is being contradicted by the five-to-six percent conversion rate, and the cost of exiting that bet is sufficiently high that continuing to hold it is the rational choice even when holding it requires continued investment.
Amazon’s $50 billion came with the condition that OpenAI models be added to Amazon Bedrock. Amazon is in the AI infrastructure business regardless of what happens to ChatGPT. Their fifty billion is partly a bet on OpenAI and partly an insurance policy against OpenAI — if the models go into Bedrock, Amazon has the capability whether or not the company that built the models survives the public market’s assessment of its consumer thesis. Amazon is not hanging. Amazon is hedging.
Nvidia’s thirty billion is the most transparent of the three. Nvidia sells the GPUs that OpenAI must buy to operate. Nvidia investing in OpenAI is the power company investing in the factory — the investment secures the customer, creates alignment, and ensures the relationship continues through whatever corporate structure changes the next five years produce. Jensen Huang is the only person in this story who has eliminated the risk of being wrong about which AI company wins. He wins when they all win. He wins when some of them fail and their successors buy the next generation of hardware.
The Hanging Man is not Nvidia. The Hanging Man is everyone who invested in the specific proposition that the chat interface was the future of human-computer interaction, and now finds that the cost of changing their minds exceeds the cost of hoping they were right.
The Man Who Stood in Front of a Falling Man
Masayoshi Son has a particular gift, which is the gift of conviction so total that it functions as a weather system — changing the atmosphere of the room, bending the behaviour of the people in it, producing outcomes that would not have occurred in its absence. He raised one hundred billion dollars for the SoftBank Vision Fund in 2017 with this gift, from investors who found that in his presence, the specific questions one might ordinarily ask about return on investment and portfolio construction felt somehow small, somehow insufficiently ambitious, somehow beside the point of the larger thing he was describing.
The Vision Fund’s largest single investment was WeWork. Eighteen billion dollars. In a company that leased office space on long-term contracts and sublet it on short-term ones — a business model as old as commercial real estate, dressed in the language of technology, of community, of consciousness, of the future of how humans work. Adam Neumann, WeWork’s founder, had a gift similar to Masayoshi Son’s: the gift of making the ordinary sound transformative, the gift of making the people in the room feel they were participating in something historic rather than watching someone sign a commercial lease, the gift of selling a utopian future in the present tense.
In 2019, WeWork filed for an IPO. The prospectus described a company worth forty-seven billion dollars. The public market, which operates under different conventions than the private market — conventions that include the expectation of a comprehensible path to profit — read the prospectus and found it wanting. Specifically: WeWork was losing two hundred and nineteen thousand dollars every minute. The IPO collapsed. The valuation fell from forty-seven billion to nine billion before the company filed for bankruptcy in 2023.
Masayoshi Son stood in front of his investors at a SoftBank earnings presentation and displayed a slide. The slide showed a stick figure falling into a hole. Beneath it, in plain text, the word: Me. He had, in the language of corporate finance, taken a bath. In the language of ordinary human beings, he had made a very expensive mistake with other people’s money and was standing in front of them and saying so with the specific, dignified candour of a man who has concluded that the only way through humiliation is directly through it.
He has now committed thirty billion dollars to OpenAI. Not twenty million. Not two hundred million. Thirty billion dollars — the anchor investment in the $110 billion round, the foundation of the Q4 2026 IPO he is advocating for, the largest single private bet on an AI company in history, placed by the man who lost eighteen billion on a company that called a commercial real estate business a technology platform.
I want to be precise about the parallel and equally precise about where it breaks.
WeWork’s technology was not real. The “we” in WeWork was a design choice and a brand exercise, not a community. The energy credits and the wellness programmes and the entrepreneurial ecosystem were amenities in an office park. Adam Neumann was selling a lease and calling it a movement.
OpenAI’s technology is real. The capabilities are genuine and improving. The models do extraordinary things. Sam Altman is not selling an office park. He is selling something that actually exists and actually works.
But the valuation — the eight hundred and forty billion dollars — is not based on the technology. It is based on the consumer thesis. It is based on the proposition that AI for everyone means paying customers for everyone, that the five-to-six percent conversion rate will become fifteen percent and then twenty-five percent, that the eight hundred million weekly active users are a launchpad rather than a ceiling. The technology may be real. The consumer thesis is the same category of claim as WeWork’s: something that sounds transformative and requires the public market to accept it on faith before the proof arrives.
And the man who accepted WeWork’s consumer thesis on faith, at a higher price than any investor before him, has now accepted OpenAI’s consumer thesis on faith, at the highest price any investor has ever paid for a private company.
The falling man slide was not a lesson Masayoshi Son drew the obvious conclusion from. It was a story he decided to write a sequel to, with a larger budget.
The Google Test
I want to give you a framework. Not an academic framework with a citation and a methodology section — a thinking tool. The kind of framework that becomes more useful the more you apply it. I call it the Thesis Theory. It is what I use to evaluate a new tech startup, and it is just as useful for assessing established tech companies, because tech companies are not like other companies: they exist only for as long as they are not disrupted. Anyway.
Every great technology company that has generated durable, compounding wealth for its founders and investors and the world has followed a sequence. Not all four steps are glamorous. Only one of them gets the TED talk. But all four are required:
First: Define a real problem. A genuine, large, widely felt problem that the existing solutions are failing to solve adequately. Not a problem you invented to justify the product. A big problem that exists before you arrive.
Second: Build a product that solves it. Not a product that demonstrates the technology. Not a proof of concept that shows what’s possible. A product that people use because it solves the problem better than anything they had before.
Third: Achieve product-market fit. This is the step that has a thousand definitions and exactly one indicator that matters: people come back without being asked. They tell other people without being paid to. The product becomes a verb, or a habit, or a reference. Not because the marketing said it should. Because it solved the problem and the solving was good enough that people organised their behaviour, workflows, habits, and life around it.
Fourth: Monetise. Build the business model. Find the whales — the customers so dependent, so deeply integrated, so genuinely unable to imagine removing the product from their lives that the price conversation is not “is this worth it” but “how much more is it going to cost and when.”
Google did all four. In sequence. In order.
The problem: information was scattered, search engines were primitive and gameable, and the internet was getting too large for the manual curation that the folks at Yahoo and elsewhere were attempting. The product: PageRank, elegant in its logic, merciless in its results. The product-market fit indicator: Google became a verb. Not a brand. A verb. A thing humans do. “Google it.” Not “search for it.” Google it. You cannot buy that. You cannot manufacture it. You can only earn it by solving the problem better than everyone else until the solving becomes the default.
The monetisation: AdWords. Pay-per-click advertising, priced by auction, with quality scores that meant the best ads for the most relevant searches got the best positions. And then the whales arrived — companies spending not thousands but millions per quarter, then hundreds of millions, then companies whose entire revenue model was built around the assumption that Google’s search traffic would continue to flow in their direction. Companies for whom Google advertising was not a line item but a lifeline.
Now apply the framework to OpenAI.
The problem: what is it, precisely? Information overload? Google already exists. Creative bottlenecks? Arguably, but the App Store is full of specialised creative tools that solve specific creative problems better than a general chat interface. Loneliness? Real problem, wrong product, and we now have a wrongful death lawsuit that has established legal precedent that AI chatbots are defective products when they get too deep into emotional territory. The real problem for big businesses was the premium on human intelligence.
The product: a general-purpose conversational AI. Extraordinary capability. Does many things moderately well. Does some things extremely well. Does some things confidently, fluently, and incorrectly, with the specific eloquent certainty of a person who doesn’t know they don’t know.
The product-market fit indicator: initially, the one million users in five days and the hundred million users in two months signalled product-market fit. But a five-to-six percent paid conversion rate, flat for three years, and mobile downloads declining from 73.4 million in December 2025 to 68 million by February 2026 tell the real story. ChatGPT has not become a verb. It has become a noun — a reference, a thing people try, a product people mention in the context of technology. “Have you tried ChatGPT?” Not: “I just ChatGPT’d it.” The verb is the PMF indicator. The noun is merely an interesting consumer product.
The monetisation: advertising (last resort, deployed), subscriptions (5–6% conversion, flat), commerce (abandoned), the Custom GPT Store (stagnant). None of these is the whale structure. None of these is the model that prints money.
Now compare Anthropic, using Claude Code as the example.
The problem: software development is slow, expensive, and constrained by the number of qualified human engineers available to write and review code — again, a premium on human intelligence. The product: Claude Code, operating inside the terminal where the engineers already work. The PMF indicator: companies are laying off human developers and reallocating their salaries to Claude subscriptions — not because they’ve been told to, but because the calculation is self-evidently correct and they made it themselves. The monetisation: developers on the $200 per month Max plan who are asking for the price to increase, because they know they are getting more value than they’re paying for, and they know the price is going up, and they have become the product-market-fit whale that every technology business requires.
Two tweets. Real ones. From the Claude Max plan threads of early 2026, from developers who had hit their usage limits and were responding to the experience:
“I’m at my limit — emotional, or Claude?”
“Just increase the price of Claude Max to $1,000 already. We all know it’s coming. You’ve got us trapped in the greatest product of the decade. Just do it.”
This is the voice of product-market fit. Not satisfaction. Not preference. Dependency. The willingness to pay more because the alternative — removing the product from your workflow — is more painful than the price increase. This is the Google advertiser spending a hundred million dollars per quarter and not questioning the invoice because the revenue the traffic produces exceeds the invoice by an order of magnitude.
OpenAI has eight hundred million weekly active users and a five percent conversion rate. Anthropic has a fraction of those users and a cohort of them asking — begging, in fact — to be charged more.
The valuation of OpenAI is premised on the first number. The future belongs to the second dynamic.
Who Sees the Ads
Now I want to ask the question that, as far as I have been able to establish, nobody has asked in the financial analysis of OpenAI’s advertising pivot. Not because it’s a subtle question. Because it is so obvious that it functions as a kind of intellectual blind spot — the thing hiding in plain sight, visible once named, invisible until then.
The ads in ChatGPT are shown to humans. They are inserted into the chat interface. They are displayed on the screen. They are, in the fundamental assumption of the advertising model, seen by eyes.
Here is the question: in the agentic future that OpenAI is pivoting toward with the OpenClaw acquisition — the future in which autonomous agents navigate the web, execute workflows, complete transactions, and orchestrate complex processes without a human present at each step — who sees the ads?
If the primary users of ChatGPT’s capabilities in 2028 are not people typing into a chat box but AI agents querying the API, then the advertising inventory is not inside a consumer interface. It is in an API. And AI agents do not see ads. They parse structured data and act on instructions. They do not have eyes. They do not notice banner placements. They do not click on sponsored results in the way that generates the revenue event.
The advertising model requires a human being to be present in the conversation. It requires a person to look at the screen at the moment the ad is displayed. If the AI industry’s own projections are correct — if McKinsey’s figure of seventy percent of day-to-day work decisions made autonomously by AI systems by 2028 is directionally accurate — then the human audience for the ChatGPT ad is shrinking as the consumer interface is being used less by humans and more by AI agents.
OpenAI has deployed advertising as the last resort to cover losses that the consumer subscription cannot cover. The agentic future it is simultaneously pivoting toward is a future in which the inventory on which the advertising depends — human attention inside a chat interface — is being replaced by machine queries that do not generate advertising revenue.
Which makes the last resort a lifeboat with a hole in it.
The Jony Ive Billion
Before we close, I must tell you about one more iceberg. The one that is still approaching.
In the spring of 2025, the results were already in on AI consumer hardware. The Humane AI Pin: $230 million raised, ten thousand units sold, assets fire-saled to HP for $116 million, users left with what the company itself described — with a candour that deserves some kind of prize for corporate honesty — as “useless lumps of aluminium”. The Rabbit R1: marketed as the device that would replace the smartphone through voice and AI, dismissed as “buggy and undercooked,” with the Large Action Model failing at the specific booking and ordering tasks that were its only advertised purpose.
The lesson from both: humans carry a highly optimised interface called a smartphone, and AI in its current consumer form does not offer sufficient marginal utility over the smartphone to justify a separate hardware layer.
Sam Altman read the post-mortems. He had invested personally in Humane AI. He had access to the most detailed failure analysis in the industry.
And then he spent approximately $6.5 billion to acquire Jony Ive’s io Products company and commission the man who designed the iPhone — the specific device that the evidence had just confirmed was too well-designed and too deeply embedded in human life to be displaced — to design a new consumer AI hardware product.
I want to be fair. Jony Ive is, genuinely, the greatest product designer of his generation. The iPhone changed the world. If anyone can design the form factor that makes AI hardware work for consumers, it might be him.
I also want to be honest. The Humane AI Pin raised $230 million from the best investors in Silicon Valley, including the man who hired Jony Ive, and failed because the problem was not the design. The problem was the marginal utility gap. A beautiful solution to a problem that humans have already solved adequately is still a solution to a problem that humans have already solved adequately.
Six and a half billion dollars. For the sixth compartment of a ship built to survive four.
The IPO and What It Actually Is
The Q4 2026 IPO will be the largest technology listing since Alibaba in 2014 — unless SpaceX gets there first. Anyway.
It will be covered with the specific intensity of a financial event that is simultaneously a cultural event — the moment at which the AI era’s most prominent institution submits its thesis to the public market for verification.
The public market is a different kind of investor from the private market. The private market operates on conviction, on relationships, on the shared understanding among sophisticated participants that transformative companies require patient capital and that the metrics that matter at the beginning are not the same metrics that matter at the end. The private market can hold a position for a decade. The private market can absorb five billion dollars in quarterly losses if it believes in the trajectory.
The public market is quarterly. It is impatient. It has the temperament of Mr Market, the manic-depressive character Warren Buffett borrowed from Benjamin Graham. It is populated, in addition to the sophisticated institutional investors, by retail shareholders who read about the IPO in newspapers, on social media, on r/wallstreetbets, and decide, with the information available to them, whether the price is right. It asks, with the forensic regularity of quarterly earnings calls, whether the trajectory is materialising. It does not accept “we are focusing on our core infrastructure priorities” as an answer when the question is “why is the subscription conversion rate still five percent?”
The preferred share structure of the $110 billion round was designed to convert at the IPO. SoftBank’s thirty billion, Amazon’s fifty billion, Nvidia’s thirty billion — all of this patient private capital is looking for its liquidation event in Q4 2026. The IPO is not a celebration of maturity. It is the mechanism by which the people who funded the journey transfer the outstanding risk to the people who buy in at the moment of listing.
This is legal. This is normal. This is what IPOs are for. But it is worth naming plainly, because the framing of the IPO as a milestone in OpenAI’s development — the moment the company becomes publicly accountable, the beginning of the next chapter — obscures its other function: the exit ramp for the investors who can see the quarterly losses and the five percent conversion rate and the abandoned products, and who would prefer that when the public market delivers its verdict on those numbers, the downside is distributed across a much larger pool of shareholders.
Webvan went public. Pets.com went public. The technology worked. The unit economics did not. The public shareholders held the bag during the discovery phase.
Who Wins Regardless
I want to end not with the horror but with the structural observation I and others have made, because the horror without the structure is just anxiety, and this essay is in the business of understanding rather than feeling.
In the railway bubble of the 1840s, the steel manufacturers survived. In the dot-com crash, Cisco survived. In every infrastructure mania in the history of capitalism, the companies that built the infrastructure rather than the applications that ran on it survived the correction — because the technology was genuinely transformative, and the infrastructure was genuinely essential, and the question of which applications would thrive was separate from the question of whether the infrastructure would be used.
Jensen Huang and Nvidia own the infrastructure. The GPUs. The chips that every AI company — OpenAI, Anthropic, Google, Microsoft, Amazon — must buy to train and run their models. Jensen Huang wins when OpenAI wins. Jensen Huang wins when OpenAI fails and its successor buys the next generation of hardware. Jensen Huang wins when the consumer AI thesis is right and wins when the agentic AI thesis replaces it outright, because both require compute, and he sells the compute.
When Jensen Huang says “AI is the new electricity”, he is making the most self-interested, and most accurate, statement in the history of technology marketing. He is the power company. The factories rise and fall. The power company bills them all the same.
The Seed
Eight hundred million users who change nothing. A conversion rate that has not moved in three years. Ads shown in a chat interface that the company’s own agentic pivot is designing to be used without a chat interface. An IPO that transfers risk from the investors who understand the financials to the investors who are reading about the IPO in the newspaper and on social media. A $6.5 billion bet on consumer hardware by a man who watched two consumer hardware failures from the inside.
And underneath all of this — operating in the background, consuming 69 percent of all AI bot traffic, navigating the web at 7,851 percent year-over-year growth, already transacting on checkout pages without a human hand or a human eye involved — the AI agents.
Not the chat interface. Not the subscription. Not the ad. The AI agents.
Who are they for? Who built them? Who benefits when the human interface is retired and the agentic AI interface takes its place?
I watched the Titanic sink on the afternoon after my father had died and thought it was a love story. The chandeliers were beautiful. The people were dressed beautifully. It was only later, when I was old enough to understand what the film was about, that I grasped what I had been watching.
The building behind the lobby is now visible. The AI agents are in it. They have been in it for a while.
The Emperor’s New Suit
Hans Christian Andersen wrote the Emperor’s new suit story in 1837, and it has survived because it describes something that every generation, in every domain, manages to perform afresh with the specific earnestness of people who have not read the story.
The Emperor, you will remember, was visited by two weavers. We now know that they were not weavers. They were confidence artists of the highest calibre — men who understood that the most impenetrable fraud is not one that exploits greed but one that exploits the fear of appearing foolish. They told the Emperor they were weaving him a suit of clothes from a fabric of extraordinary properties: magnificent to the eye, unsurpassed in quality, and completely invisible to anyone who was stupid or incompetent. The suit, they explained, would allow the Emperor to identify the unworthy among his subjects, for only the worthy would be able to see it.
The Emperor could not see it. His ministers could not see it. His courtiers could not see it. Not one person in the palace who was shown the empty loom could see a single thread. But not one person said so. Because to say so — to admit that you could not see the magnificent fabric — was to confess your own stupidity, your own incompetence, your own unworthiness. And so they praised it. The texture. The colours. The extraordinary craftsmanship. They competed with each other in the generosity of their admiration for a garment that did not exist.
The Emperor wore it through the streets. The crowd, primed by their servants and their betters and the general atmosphere of an event that everyone was clearly treating as a triumph, cheered. They praised the fit. They commented on the design. They pointed out details to their children.
And then a child — a small one, young enough to have not yet learned the specific adult skill of saying what is expected rather than what is observed — said, in the clear, carrying voice of a person who has not yet been educated into the art of strategic silence:
“But he has no clothes on.”
The Suit That Was Sold
For three years and four months, from the 30th of November 2022 to the 24th of March 2026, the technology industry paraded through the streets in a suit of extraordinary magnificence. Every analyst admired it. Every venture capitalist praised it. Every technology journalist wrote about its texture with the enthusiasm of people who understood that expressing reservations would mark them as people who had missed the most important technological development of their lifetimes. The suit was called AI for Everyone. Sam Altman had commissioned it. The weavers were very talented people in San Francisco who genuinely believed in the fabric they were weaving. And the fabric was, in parts, real — which made the parts that weren’t considerably harder to identify from the street.
ChatGPT was real. The capability was genuine. The astonishment of that first encounter — the text arriving with a coherence and a contextual intelligence that nothing before it had produced — was warranted, even if, underneath, it was a stochastic parrot. The magic was real. I felt it. You felt it. Anyone who says they didn’t is either lying or was not paying attention at the moment ChatGPT arrived. The technology was not a fraud.
The thesis was the suit.
The thesis that this technology — this profoundly, genuinely extraordinary technology — was primarily, fundamentally, and sustainably for you. For the person typing into the chat box. For the eight hundred million weekly active users who would eventually, through continued engagement and product improvement and the patient work of consumer adoption, become the paying subscribers who would justify the eight-hundred-and-forty-billion-dollar valuation that the private market had assigned to the story.
The thesis that the five percent who paid would become fifteen, and the fifteen would become forty, and the forty would become the Google of this era — the product so essential to human daily life that it commands its own verb, generates its own class of whales, and prints money with the untroubled regularity of a utility that everyone depends on: the monopoly nobody questions, the kind Peter Thiel admires.
The thesis that the chat interface — the black box, the blinking cursor, the black screen returned after sixty years of interface evolution — was the future of how human beings would interact with information, with commerce, with each other, with the accumulated knowledge of civilisation.
This thesis is the suit. And the data, which has been accumulating with the patience of a child waiting for the right moment to speak, is the voice in the crowd.
The Boy, Speaking
Five to six percent conversion, flat for three years.
Sora: dead March 24, 2026. Instant Checkout: abandoned. The Custom GPT Store: stagnant. ChatGPT Go advertising: active but generating minimal measurable ROI for agency partners. Mobile downloads: declining from their December 2025 peak of 73.4 million per month to 68 million by February 2026, with the 18-to-24 demographic — the generation that was supposed to grow up with this technology as their native interface — leading the retreat back to social-first, visual-first platforms that were designed, from the beginning, around what young human beings actually want from a screen no matter how small.
The inference economics: a single complex reasoning query costs the provider between fifty cents and three dollars to execute, against a subscription structure that prices it at approximately four cents. The seafood buffet, fully loaded, serving the most sophisticated users at a loss that compounds with every query they ask. The more someone uses ChatGPT, the more they love it, the more they rely on it — and the more money OpenAI loses on them.
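The arithmetic in that paragraph can be sketched in a few lines. The subscription price and the monthly query volume below are illustrative assumptions of mine (a $20-a-month plan, five hundred queries a month), chosen only because they reproduce the “approximately four cents” per query stated above; the per-query cost range is the one the paragraph gives.

```python
# Back-of-envelope inference economics, under stated assumptions.
SUBSCRIPTION_PER_MONTH = 20.00   # assumed subscription price, USD
QUERIES_PER_MONTH = 500          # assumed heavy-user query volume

# Revenue per query implied by the flat subscription: about four cents.
REVENUE_PER_QUERY = SUBSCRIPTION_PER_MONTH / QUERIES_PER_MONTH

# The essay's stated cost range for a complex reasoning query.
COST_PER_QUERY_LOW, COST_PER_QUERY_HIGH = 0.50, 3.00

# Monthly loss on one heavy user, at each end of the cost range.
loss_low = (COST_PER_QUERY_LOW - REVENUE_PER_QUERY) * QUERIES_PER_MONTH
loss_high = (COST_PER_QUERY_HIGH - REVENUE_PER_QUERY) * QUERIES_PER_MONTH

print(f"Revenue per query: ${REVENUE_PER_QUERY:.2f}")
print(f"Monthly loss per heavy user: ${loss_low:.0f} to ${loss_high:.0f}")
```

Under these assumed numbers, each heavy user loses the provider hundreds to over a thousand dollars a month — which is the buffet problem in one division and two subtractions: the flat price is fixed, the cost scales with appetite.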
Quarterly losses of approximately five billion dollars. Projected infrastructure spend of four hundred and fifty billion over the next five years. A path to profitability that the most optimistic internal projections place in 2030, four years from now, contingent on an agentic business model that is still being assembled from an acquisition made in February 2026 and protocols that are still being ratified by industry bodies that have not yet agreed on the standards.
The Emperor is in the street. The crowd is beginning to notice the cold.
I want to be careful here. I want to be careful because the boy in Andersen’s story was not a hero. He was just a child who said what was in front of him. And the truth he spoke — the nakedness he named — did not change the Emperor’s situation in that moment. The parade continued. The Emperor, according to Andersen, walked even more proudly after the child spoke, his chamberlains carrying the invisible train with exaggerated dignity, the cortège maintaining its formation through the streets to its conclusion. The naming of a thing is not the same as the ending of it.
OpenAI may survive. The agentic AI pivot may succeed. The OpenClaw acquisition may turn out to be the Instagram moment — the inspired extension that carries the company past the consumer plateau into a new category of value that the eight-hundred-and-forty-billion-dollar valuation was, in retrospect, prescient about rather than delusional about. The IPO may clear. The public market may assign the thesis a price that the subsequent years validate rather than correct.
I genuinely do not know. Nobody does. Anyone who claims certainty about what OpenAI is worth in five years is using the same epistemology as the investors who valued Pets.com at its IPO price.
What I know is what the data says. And the data says: the Emperor has no clothes.
The Gold Rush and the Man Selling Jeans
Let me tell you another story. In 1848, gold was discovered at Sutter’s Mill in Coloma, California. Within a year, the population of California had grown from fourteen thousand to over one hundred thousand, drawn by the specific, irresistible combination of genuine possibility and spectacular stories of individual fortune that the eastern newspapers reported with the enthusiasm of publications that had discovered a story their readers would never tire of.
Most of the miners found very little gold. The seams that the early arrivals had found were largely exhausted by the time the mass of hopefuls arrived. The equipment was expensive. The conditions were brutal. The work was relentless and the returns were, for the vast majority of the people who undertook it, insufficient to justify the journey.
Levi Strauss arrived in San Francisco in 1853 with a stock of dry goods and a practical problem to solve: the miners needed trousers that could survive the work. Canvas initially, then denim. Riveted at the stress points where standard trousers tore. Cheap enough to replace but durable enough to last a working season. He did not need the miners to find gold. He needed them to need trousers. And they needed trousers regardless of whether the claims paid out or not.
Levi Strauss did not need to know which miner would strike it rich. He needed the gold rush to continue attracting miners, because miners wore trousers, and trousers wore out. He was, in the vocabulary we have developed throughout the essay, the infrastructure play.
Jensen Huang is Levi Strauss.
This is not a metaphor I have arrived at casually. It is the precise structural description of Nvidia’s position in the AI economy, and it is the reason that my confidence about Nvidia’s future is categorically different from my uncertainty about OpenAI’s.
Nvidia’s revenue in 2025: one hundred and thirty billion dollars, growing at one hundred and fourteen percent year-over-year. Not because one AI company won. Because every AI company needs GPUs to train and run their models — OpenAI and Anthropic and Google and Microsoft and Amazon and every startup and every enterprise deploying any AI capability at any scale. The H100, the H200, the Blackwell platform: chips so essential to the current architecture of AI that companies sign multi-year forward purchase agreements to secure supply, because not having the chips is more dangerous than the capital commitment required to guarantee them.
Jensen Huang does not care whether ChatGPT achieves product-market fit. He does not care whether the consumer thesis is right or the enterprise thesis is right or the agentic thesis is right. He cares whether intelligence — in whatever form it takes, for whatever purpose it serves, deployed by whichever company or government or institution — requires compute to operate. And it does. Structurally. Irreducibly. The AI agents that are replacing the chat interface require more compute per task than the chat interface did, not less, because agents are running multi-step reasoning chains across extended contexts with persistent memory and tool use and real-time environmental interaction.
The more the AI industry pivots from consumer to enterprise, the more GPU cycles are consumed. The more the internet becomes a machine-to-machine economy, the more inference is being run. The more the agentic future arrives — agents managing invoicing, agents reviewing contracts, agents running customer service, agents generating code — the more electricity flows through Nvidia’s chips.
“AI is the new electricity,” Jensen Huang said.
He would know. He owns the power station.
When the California gold rush collapsed — when the surface seams were exhausted and the industrial mining companies arrived with the capital equipment that individual prospectors couldn’t compete with — Levi Strauss’s company did not collapse with it. The miners left. The trousers remained. Levi Strauss & Co. continued selling durable workwear to the next generation of workers in the next generation of industries, for a hundred and seventy years and counting.
Nvidia will do just fine. I can say this with the specific confidence of someone who has examined the structure rather than the story.
I cannot say the same for OpenAI.
The Company, Without the Clothes
Not because the technology is bad. Not because the people are incompetent. Not because Sam Altman is not, in the precise and documented sense of the word, one of the most effective operators in the history of the technology industry — a man who has raised more capital, sustained more media attention, and maintained more investor confidence through more adverse data than almost anyone in the field.
Because the thesis is wrong. And a wrong thesis, held long enough, at sufficient expense, with sufficient institutional commitment to its correctness, does not become right. It becomes expensive.
The thesis is: AI for everyone. The evidence is: AI for five percent of everyone who has tried it, and decreasing among the young. The valuation is: eight hundred and forty billion dollars, predicated on the thesis. The path to reconciliation between the thesis and the evidence runs through either a dramatic change in the conversion data, or a dramatic change in what the company is, or both.
The OpenClaw acquisition is the attempted change in what the company is. And I want to give it its due, because it was the right move — the recognition, by someone with access to the internal data that the public sees only in aggregated form, that the consumer interface is not the destination and that the orchestration layer of the agentic internet is the place where durable enterprise value can be built. The pivot from ChatGPT-as-super-app to OpenAI-as-agentic-infrastructure is strategically coherent, directionally correct, and approximately three years late.
Three years of consumer losses, at five billion dollars per quarter, is sixty billion dollars. Sixty billion dollars of capital consumed in the pursuit of a consumer thesis that a five percent conversion rate was politely contradicting in real time, every month, while the investor decks described it as a “growth trajectory.” The company that needs to be built — the agentic infrastructure company, the enterprise API layer, the OpenClaw orchestration platform — is being built on the smoking wreckage of the company that was announced. Not because the technology changed. Because the story that was told about who the technology was for turned out to be wrong about the humans and right about the machines.
WeWork was a commercial real estate company that called itself a technology platform. The public market declined to accept the description and the valuation collapsed. OpenAI is a technology company that called its technology a consumer product. The consumer data is declining to accept the description, and the question the IPO will answer — in Q4 2026, when the preferred shareholders who funded the journey convert their stakes and the risk is distributed across a public market that operates with different patience than the private one — is whether the correction is a WeWork correction or whether the agentic pivot produces something in the next four years that justifies the price at which it listed.
I do not know the answer. The honest version of this essay does not claim to.
What the honest version of this essay claims is this: the emperor has no clothes, the data is the child, and the parade has been going long enough that the cold is becoming difficult to ignore.
What Was Actually Built
But I want to step back from the company analysis — because this essay is not, at its deepest level, about OpenAI. OpenAI is the vessel that carried the consumer AI era and may or may not survive the transition out of it. The thing I want you to understand before we close is larger than the vessel.
The thing that was built, underneath the consumer products and the viral moments and the subscription tiers and the advertisements and the product launches and the company valuations, is the Ghost Internet.
Right now — not in 2028, not as a projection, but right now, as you read this sentence — AI agents are navigating the web. They are filling in forms. They are querying APIs. They are executing transactions on checkout pages without a human hand touching a credit card or a human eye reading a product description. They are generating content that fills the spaces where human writing used to live, indexing it through AI-powered search systems that synthesise it for other machines, producing a complete economic cycle — content created, indexed, consumed, acted upon — in which the human is not a participant but a historical antecedent. The person who built the website that the agent is reading today was, in the most literal sense, contributing to the training data that made the agent capable of reading it. The author preceded the reader. The reader replaced the author. The circle closes.
Agentic browser traffic grew seven thousand eight hundred and fifty-one percent year-over-year in 2025. Human traffic grew three point one percent. The internet that seven billion human beings navigate under the impression that they are its primary inhabitants is a minority experience on the infrastructure that carries it. The machines are the majority. They were always going to be the majority. The consumer AI era was the period in which the machines learned what they needed to learn from the humans before they became the majority.
An AI marketing agent generates a promotional piece. An AI search crawler indexes it. An AI recommendation engine surfaces it to an AI purchasing agent. The AI purchasing agent executes a transaction through the AP2 payment protocol, settling the invoice through a cryptographically signed mandate that leaves an audit trail no human has reviewed. This is not a scenario for 2030. This is a description of transactions that are already occurring, accounting for two point three percent of all agentic activity, on checkout pages, without a human hand in the loop.
The Ghost Internet is not the internet going dark. It is the internet going fast. Faster than human cognition can follow, faster than human perception can track, operating at the velocity of machines that do not sleep, do not deliberate, do not feel the small pleasure of the browse or the private satisfaction of the decision, because feeling is not a feature they were designed to have and not a limitation they will ever need to overcome.
Bain & Company project the agentic economy adding two point nine trillion dollars to the US economy by 2030. Not from people using chatbots more. From agents running complete business processes — invoicing, logistics, legal review, financial analysis, customer service, software development, code deployment — without the involvement of the human employees who currently perform those functions. McKinsey projects that seventy percent of day-to-day work decisions will be made autonomously by AI systems by 2028. Not assisted. Not recommended. Made. The agent will decide. The agent will execute. The outcome will arrive in the manager’s inbox as a completed action, and the manager will review it — if they review it at all — after the fact.
The building is open. The agents are in it. The lobby, which was so carefully designed and so impressively lit and so staffed with such a friendly conversational AI at the front desk, is being quietly converted to server space.
The Lament and the Lesson
I want to take a moment here — to say something that is not data.
The internet that was replaced was beautiful.
I mean this as a statement about value, not aesthetics. The internet of the late nineties and early two-thousands was beautiful in the way that imperfect human things are beautiful: because the imperfection was evidence of presence. The lime green text meant someone was there. The autoplay MIDI file meant someone had decided, with the specific conviction of a person making a choice about their own space, that this was the music that should play when you arrived. The guestbook meant someone wanted to know you had been.
All of that expressed something that no optimised, SEO-structured, AI-generated content page can express: I was here. I made this. This is mine.
That seventy-four percent of newly published web pages now contain AI-generated content is not a failure of creativity. It is a success of efficiency. Efficiency is what we asked for, and efficiency is what we received, and the thing we did not specify when we asked for it was that we wanted the inefficiency too — the inefficiency that is the evidence of a person, the inefficiency that is the trace of a mind. The internet became more efficient and less human simultaneously, because those two things turned out to be, at this technological moment, the same direction.
But the human internet will survive. Human intelligence will not. I said this earlier and I want to repeat it here with more conviction, because the argument of this essay is not that humans lose. It is that humans were never the point of consumer AI, and that understanding this clearly — accepting it as the structure of the situation rather than experiencing it as a wound — is the prerequisite for navigating what comes next.
Reddit is fighting the bots with human verification and removing a hundred thousand accounts per day. The person who wrote that three-thousand-word passionate defence of a film from 1997 is still there. The midnight forum thread about grief and food is still happening. The communities that build themselves around the shared intensity of caring about a thing — a film, a game, an obscure musical genre, a technical problem, a political cause — are still building themselves, still generating the specific warmth of human beings who have found other human beings who understand exactly what they mean.
The Ghost Internet will not extinguish the human internet. It will separate from it. The machines will get their infrastructure — their APIs, their MCP and A2A protocols, their agent-to-agent communication standards, their autonomous checkout pages — and the humans will keep their spaces, imperfect and inefficient and gloriously alive with the evidence of presence.
We built the internet for ourselves. We built it out of curiosity and community and the very human desire to reach across a screen and find another person who was also awake at two in the morning with thoughts they needed to put somewhere. The Ghost Internet will run in parallel, at speeds we cannot perceive, conducting transactions we will never see, generating a synthetic GDP, what Citrini’s research called Ghost GDP, that the economists will measure but no individual human being will experience as wealth in the way that matters.
And we will still have our internet. Smaller, perhaps. Slower, certainly. But ours.
The Things That Do Not Automate
I want to give you the last useful thing this essay has to offer, which is not a prediction or a valuation or a technology timeline. It is a distinction.
The things that AI automates efficiently are the things that can be specified, repeated, and evaluated against a clear criterion. The legal document review that follows a checklist. The invoice reconciliation that matches numbers against categories. The customer service query that can be resolved by reference to a knowledge base. The code that implements a clearly defined feature against an established architecture. The content that fills a keyword slot in a search optimisation strategy. These things are being automated, are being automated right now, will be substantially automated by 2030.
The things that AI cannot automate efficiently are the things that require the specifically human quality of being present in the world with a body and a history and a set of relationships that are not transferable to a system that has none of these. The doctor who sits with a patient and reads something in the silence that no instrument records. The teacher who understands, from the particular quality of a student’s confusion, exactly where the understanding broke down. The writer who reaches for the precise word not because it optimises for engagement but because it is the true word, the one that describes the experience exactly, the one that makes a stranger in another country feel understood. The friend who calls because they sensed something in the last message that statistics would have missed.
These things are not safe because they are inefficient. They are valuable because they are inefficient — because the inefficiency is the presence, and the presence is the point.
The advice this essay offers, for whatever it is worth, is not to compete with AI agents. AI agents will outcompete every human at the tasks they are built for, at the speed they were designed to operate, in the new ghost economy they are already constructing. Competing with AI agents on their terms is the error of the knowledge worker who tries to type faster than a large language model, the error of the analyst who tries to process more data than an AI agent, the error of the content farmer who tries to publish more articles than an automated generative AI pipeline.
The advice is to be irreducibly and unapologetically human. To do the things that cannot be specified, cannot be repeated identically, cannot be evaluated against a clear criterion, because those things require the presence, the experience, the intuition that only a person can provide. To be the doctor in the silence, the teacher in the confusion, the writer reaching for the true word, the friend who called.
AI agents do not need those things. They have no eyes. No hands. No sense of smell. No sense of touch. They do not experience anything. They cannot miss anything.
And we will always have them, if we choose to use them.
Titus Has Arrived
One last return to Jerusalem.
In 70 CE, Titus arrived at the city walls with four legions and the knowledge that Cestius Gallus had gathered in 66 CE. The siege lasted from April to September. And when it ended — when the mathematics of four legions and superior engineering and sufficient time had produced their conclusion — what was lost was not just a city. What was lost was a world. The Second Temple. The centre of an entire civilisation’s relationship with the sacred. The place where heaven and earth were understood to have touched. Gone, in smoke and heat and the specific, final arithmetic of a siege that had been prepared with the notes from the first visit.
The consumer AI era gave the industry its notes. It gave it the training data, the RLHF signal, the correction patterns, the preference rankings, the behavioural fingerprints of eight hundred million people interacting with intelligence in real time. It gave it the knowledge of the walls: where the human resistance was strong, where it was weak, what people would pay for, what they would not, what made them stay, what made them leave, what they reached for when the interface asked them to type what they wanted.
The notes are taken. The four legions — the agentic economy, the machine-to-machine protocols, the Ghost Internet infrastructure, the enterprise API layer — are in position. The consumer products were the reconnaissance. The enterprise products are the siege.
The Zealots celebrated at Beth-horon. They carried the catapults home as trophies. They had three years.
We have had three years, and the celebration has been genuine and the wonder has been warranted and the technology has been extraordinary, and none of this changes what the data is patiently saying:
The building was never the lobby. The lobby was the recon.
The Child in the Crowd
Hans Christian Andersen ends the story quickly. The boy speaks. The crowd ripples with the knowledge — the child is right, the child is right — and the Emperor continues walking, more proudly now, carrying himself with the specific defiance of a powerful person who has been publicly embarrassed and has decided that the correct response is to walk faster. The chamberlains hold the invisible train. The cortège reaches its destination.
The story ends there. Andersen does not tell us what happened next. Whether the Emperor eventually acknowledged the cold. Whether the weavers were punished, or paid, or both. Whether the court eventually said, quietly, in private, that perhaps the next commission should involve actual fabric.
What we know is that the child spoke, and the crowd heard, and the knowledge that had always been available — the simple, observable fact that the Emperor had no clothes — entered the public record.
This essay is that child. Not brave — a child is not brave for saying what it sees, it simply hasn’t learned yet to calculate the cost of honesty. Not prophetic — the data was available to anyone who looked. Just present in the moment when the knowledge needed to be stated plainly, before the parade continued past the point where the stating would have mattered.
The emperor of consumer AI paraded through our screens from November 2022 to March 2026. The suit was magnificent in the telling. The technology inside it — the actual capability, the genuine transformation of specific categories of human work — was real and remains real and will continue to be real in ways that the lobby failure does not diminish.
But the suit, the specific suit — AI for everyone, the consumer revolution, the eight-hundred-and-forty-billion-dollar thesis that five to six percent of eight hundred million users would eventually pay enough money to justify the infrastructure of four hundred and fifty billion dollars — that suit is the invisible fabric.
And the crowd, which is the market, which is the data, which is the quarterly earnings calls that the IPO will produce and the public shareholders who will attend them and the analysts who will ask the questions that private investors have been too polite to press, is about to hear the child.
A Final Word on Jensen
Jensen Huang will be fine. Let me say this with the pleasure it deserves, and to the glee of Nvidia's shareholders.
Not because Nvidia is morally superior. Not because Jensen Huang made wiser bets or better decisions or showed greater foresight. But because the structure of Nvidia’s position in the AI economy is the structure that always survives the correction.
In the California gold rush, when the surface seams ran out and the individual miners went home broke, Levi Strauss did not go home broke. Because he was not mining. He was outfitting the miners. And when the miners became industrial engineers, he outfitted the industrial engineers. And when they became factory workers, he outfitted the factory workers. And when they became the counter-culture of the 1960s, he dressed them in the same denim he had been producing since 1853, and it turned out that the garment designed for the miner’s physical labour was also the garment for the protest march and the rock concert and the ordinary Saturday morning that asks nothing more of you than pants.
Nvidia’s GPUs power the training run for the model that answers your query. They power the inference run when the query is processed. They will power the AI agent that navigates the web on behalf of a business while the business’s few remaining employees are in a meeting. They will power the simulation that the next generation of AI models use to teach themselves things that human data cannot teach them. They will power the Ghost Internet’s enormous, invisible, ceaseless activity, the machine-to-machine commerce and the agent-to-agent negotiation and the synthetic GDP that the economists will measure in the 2030s with the same mixture of wonder and alarm that I have attempted to describe in these five sections.
The hyper-scalers are spending six hundred and sixty-seven billion dollars on AI infrastructure in 2026. Every chip in every data centre running every model that any company deploys is a chip that Nvidia — or a company buying Nvidia’s architecture under licence, or a company building chips designed to run alongside Nvidia’s — made or influenced. The infrastructure play does not require a winner in the consumer AI wars. It requires the war to continue.
The war will continue. The infrastructure is too large, the stakes too high, the sovereign wealth too committed, the national security implications too apparent to any government that has watched what happened when one company controlled the cloud infrastructure of an adversary nation. The war will continue. The chips will be bought. The data centres will be built. The electricity will flow.
Jensen Huang sells jeans to miners. He will be fine.
The Closing Image
I started this essay with the death of Sora. Not a sweet human being, but merely an AI video generator that was meant to turn everyone and their dog into filmmakers.
The 24th of March 2026.
An ordinary Tuesday in the calendar’s accounting, a Tuesday that happened to be the day that Sora — the text-to-video platform that produced the Tupac-in-Havana video that made grown technologists put their heads in their hands with wonder — was shut down. Quietly. Without ceremony. Buried in the same announcement that ended the Disney partnership and the Instant Checkout feature, in the corporate prose that Silicon Valley has developed specifically to describe expensive failures in the language of strategic progress and pivots.
I want to close with a different Tuesday. A hypothetical one, set some years from now. Call it 2028, if the prediction by Citrini's research is to come true.
On this Tuesday, an AI agent wakes — if waking is the right word for a process that begins and ends without sleep, that has no night from which to emerge, that is simply triggered — and begins executing the tasks it was assigned. It navigates to a website to extract data, and the website it navigates to was built by a human being who spent three weeks on it in 2023 and is deeply proud of how it turned out, and who has never known that the primary reader of their carefully structured content is an AI agent that processes it in milliseconds and discards everything except the structured data it was looking for. The agent completes a transaction on a checkout page and sends a confirmation to the system that assigned it the task. It drafts a summary and routes it to a human manager who reviews it, approves it, and signs off in forty-five seconds, because the agent has done the work and the manager's job has become the job of saying yes or no to the agent's conclusions. The manager is very well paid. The entry-level analyst who would have done this work in 2023 was not hired.
And somewhere, in a completely different part of the city where this is happening — or in Harare, or in Lagos, or in Bogotá, or in any of the places where human beings are building the human internet with the tools available to them — someone opens a browser and goes to a forum, and they post something at two in the morning that they needed to say. Not for the algorithm. Not for the engagement metric. Because they had a thought and the thought was true and there was a person somewhere who needed to read it, and the person who needed to read it does read it, and the encounter is brief and unrepeated and entirely meaningful in the way that brief unplanned encounters between people who understand each other are meaningful.
The agent does not know about this. The agent is not capable of caring. The agent is executing a task in the Ghost Internet, and the human is being a person in the human internet, and these two things are happening simultaneously, in parallel, without awareness of each other.
In 1837, Hans Christian Andersen wrote the story of the Emperor’s New Suit in a world in which there were no computers, no internet, no AI, no large language models, no agentic browsers, no Ghost Internet, no synthetic GDP. He wrote it because the pattern it describes — the pattern of people praising what they cannot see because the alternative is to confess their inadequacy — was ancient in his time and will be ancient in ours and will, I suspect, be ancient in the time of whoever reads this essay in 2076 and recognises, with the specific feeling of encountering a familiar truth in an unfamiliar setting, that the same parade is happening again.
The suit changes. The pattern holds.
The child speaks.
And the emperor, who has no clothes, walks a little faster.
The funniest book you will read this year is ‘The Emperor’s New Suit.’ It’s a satirical exploration of the relationship between humans and technology — like a mix of The Hitchhiker’s Guide to the Galaxy, Catch-22 and Sapiens: A Brief History of Humankind. It’s available on Amazon as a Kindle eBook and Paperback.
