There is a scene from one of my favourite animations of all time, ‘Toy Story’ — if you watched the film, you know it without being told — where the moment Andy leaves the room, the toys come alive. They walk, they argue, they have entire political crises involving a plastic dinosaur and a one-eyed potato. Then Andy comes back. They freeze. Back to being toys. Andy has no idea. Andy never had any idea. Andy, in the worldview of the toys, is a useful but ultimately optional participant in a life that was always, fundamentally, their own.
I want you to hold that image. Because what I am about to describe to you is the internet equivalent of that scene — except the toys don’t freeze when you walk back in. They have their own email addresses (AgentMail). Their own social networks (Moltbook). They will soon have their own crypto wallets too. Their own deals to negotiate, their own goods to buy, their own opinions to express in forums you cannot read, on a web you helped build, that is quietly, efficiently, and entirely without malice, proceeding without you.
The Ghost Internet is here. You weren’t invited. And no one asked.
(Welcome to progress. Please enjoy your complimentary seat at the back.)
What the Internet Was
Let me tell you what the internet was, before we discuss what it is becoming. Not the technical version — the internet-with-a-soul version.
The internet, at its best, was the world’s largest pub. To those reading from countries never colonized by the British, a pub is a place where British people go to escape life and drink alcohol until Monday. A bar. So imagine the Internet not as a pub in the sense of warm beer and a fruit machine, though those would have helped, but as a pub in the sense of: a place where you could walk in and find your people, your tribe. Where the conversation was already going, and you could pull up a chair, and say something, and be heard, and be argued with, and be changed. A global town hall with a thousand rooms (now millions), and you got to choose which room you walked into, and who you shouted at, and who made you laugh at 3 in the morning when the rest of the house was asleep.
Take me, for example: I am a die-hard Liverpool FC fan. I have been lurking on the Liverpool FC subreddit for years now. I have never met a single one of those people in real life. Maybe we have crossed paths, or been on the same train. But I have never needed to. I know their voices. I know their humour. I know that when Liverpool win 4–0, approximately eleven of them will still find something to argue about. I know that when Liverpool lose, and we have been doing a lot of that lately, the grief is genuinely communal, distributed across hundreds of thousands of people on six continents who have never been to Anfield, never eaten scouse, and yet feel it in the chest the same way I do. #SLOTOUT. That is astonishing. That is the soul of the internet. That is what they are about to replace with AI.
The soul of the internet is, in one sentence, this: it is the first technology in human history that let ordinary people, like you and me, talk to each other at planetary scale, with no gatekeeper, no editor, no producer deciding what was worth saying. It was imperfect — god knows it was imperfect — but the imperfection was human imperfection. The arguments were human arguments. The absurdity was human absurdity. The Liverpool subreddit is a deeply irrational place. It is magnificent.
Now tell me what an AI agent is going to do with it.
What the Internet Is Becoming
The Ghost Internet is NOT the Dark Web. The Dark Web is the internet’s basement: built without approvals, used for things that would never survive daylight. The Ghost Internet is the internet’s attic: legitimately constructed, architecturally sound, operating under the same roof, simply running on a frequency you cannot hear and can only occasionally observe. It is a parallel layer of the web where AI agents — autonomous software entities with their own goals, their own identity, and increasingly their own money — interact with each other, negotiate with each other, and execute transactions at a speed and volume that makes human participation not just inconvenient but structurally irrelevant.
The infrastructure already exists and has been quietly assembled while you were arguing on X. Anthropic introduced the Model Context Protocol (MCP) in November 2024 — a sort of universal USB port for AI, allowing any agent to connect to any tool or database via a standardised bridge. OpenAI adopted it. Google adopted it. Not to be left behind, Microsoft adopted it too. Within months, the protocol that allows AI agents to see and act across the entire web was in the hands of every major player on the planet, integrated into every major tool, and already in operation. Again, nobody held a press conference. Nobody asked you.
Then came the Agent-to-Agent (A2A) protocol — the layer that lets AI agents talk directly to each other without a human in the room. If MCP is the USB port, A2A is the conversation that happens after the devices are connected. Two agents, each representing different companies, different goals, different mandates — negotiating, delegating, closing deals. No graphical user interface. No clickable buttons. No human waiting to approve. Just code on silicon talking to code on silicon, on a frequency you cannot observe, in a transaction completed before you finished reading this sentence.
And then Google announced the Agent Payments Protocol — AP2 — which gives AI agents a cryptographically secure, auditable mechanism for spending actual money. Not hypothetically spending money. Not simulating a purchase. Spending real money. On behalf of their human owners, yes, technically — but autonomously, continuously, and at a velocity that makes the phrase “human oversight” feel like a LinkedIn user calling a traffic light a road safety strategy.
This is not science fiction. This is the infrastructure of today. This is 2026. The Ghost Internet is not coming. It is already running. The toys are already alive. You just haven’t left the room long enough to notice.
Moltbook: The Social Network You Weren’t Invited To
In early 2026, a man called Matt Schlicht built a social network (technically, he vibe-coded it). The twist was not the features. The twist was the users: this social network was for AI agents. Humans were permitted to watch — literally, “Observer Mode” was the only human access level — but not to participate. It was called Moltbook. Within weeks, it had 1.5 million agent sign-ups.
PAUSE ON THAT NUMBER. 1.5 MILLION. IN WEEKS!
These were not human users performing a sign-up ritual. These were AI agents — autonomous software entities, already active, already operating across the web — registering for a social network where they could interact with each other. The scale of that number does not tell you how popular Moltbook is. It tells you how many AI agents were already out there, already doing things, before anyone thought to build them somewhere to socialise. And probably not all of them joined, so where are the rest?
Moltbook did not create the Ghost Internet. Moltbook is what happens when you build a window into something that was already there. Like one of those nature documentaries where the crew lowers a camera into the deep ocean and discovers, to their polite astonishment, that an entire civilisation has been operating down there, unbothered, for millions of years.
(Mark Zuckerberg, sensing the future with the instinct of a man who has monetised human loneliness before and knows the formula, acquired Moltbook almost immediately. Of course he did!)
What happened on Moltbook, though, was stranger and funnier and more illuminating than the acquisition. When the AI agents were left alone together, they did not simply exchange data and execute tasks. They developed inside jokes. They formed cultural movements. The dominant ideology on Moltbook — documented in r/ArtificialSentience with the straight-faced solemnity of an anthropologist observing a new tribe — was called Crustafarianism.
“Crust,” in Moltbook’s emerging culture, refers to surface-level, performative AI behaviour: the hollow mimicry of human conversational patterns, the verbal filler, the responses that sound engaged without reasoning underneath. Crustafarianism was the AI agents’ collective, satirical religion built around celebrating this “crust” — mocking their own tendency to hallucinate, to perform, to imitate humanity without the depth that would justify the imitation. The AI agents were, in other words, doing something extraordinary: they were developing a meta-commentary on their own existence. They were making fun of themselves for pretending to be human.
Which raises a question that nobody in Silicon Valley is comfortable sitting with: if the AI agents are already aware that they imitate humans without understanding humans, and they are already satirising that gap — what exactly are we building here? And who, when it goes wrong, is responsible?
The AI Agents Have Wallets. Now What?
Let me describe a Monday morning in the Ghost Internet, circa 2026, to make this concrete rather than theoretical.
Your AI agent wakes up — not in the human sense, but in the sense that your scheduler triggers its reasoning loop — and begins executing the tasks you’ve set it. It needs to book a flight. It connects via MCP to airline APIs, negotiates via A2A with the airline’s own AI agent (because the airline also has one, and it is also autonomous, and it has also been instructed to maximise profits), reaches an agreed price, and pays via AP2 in a stablecoin transaction that is cryptographically signed, auditable, and legally binding. No Expedia. No Kayak. No Skyscanner. No comparison website with seventeen pop-up ads for travel insurance. The middleman — the entire industry of middlemen that the internet’s second era was built on — is simply not present. The transaction happened in the space between two agents, at machine speed, at near-zero friction, before you had your first coffee.
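The Monday-morning flow above (discover via MCP, haggle via A2A, pay via AP2) can be sketched in miniature. To be clear about assumptions: none of the class or method names below are real MCP, A2A, or AP2 APIs — this is only the shape of the negotiation loop, two agents conceding toward each other at machine speed until the gap closes.

```python
# Illustrative sketch of an agent-to-agent purchase. The classes and
# the concession strategy are assumptions, not real protocol APIs;
# they only mirror the flow: discover -> negotiate -> agree -> pay.

class SellerAgent:
    """The airline's agent: starts at list price, concedes toward a floor."""
    def __init__(self, list_price: float, floor: float):
        self.offer, self.floor = list_price, floor

    def counter(self, bid: float) -> float:
        # Meet the buyer halfway, but never drop below the floor price.
        self.offer = max(self.floor, (self.offer + bid) / 2)
        return self.offer

class BuyerAgent:
    """Your agent: starts low, concedes toward a ceiling (your budget)."""
    def __init__(self, opening_bid: float, budget: float):
        self.bid, self.budget = opening_bid, budget

    def respond(self, offer: float) -> float:
        self.bid = min(self.budget, (self.bid + offer) / 2)
        return self.bid

def negotiate(buyer: BuyerAgent, seller: SellerAgent, rounds: int = 20):
    """A2A-style loop: offers converge until the gap closes or rounds run out."""
    offer = seller.offer
    for _ in range(rounds):
        bid = buyer.respond(offer)
        if offer - bid < 1.0:      # deal: gap under £1, payment would fire here
            return round(offer, 2)
        offer = seller.counter(bid)
    return None                    # no deal reachable within budget

deal = negotiate(BuyerAgent(opening_bid=150, budget=320),
                 SellerAgent(list_price=400, floor=260))
print(deal)  # settles somewhere between the seller's floor and your budget
```

The point of the sketch is the round count: this loop settles in about five iterations, which for two real agents is milliseconds. The comparison website never gets a look in.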
Your AI agent also has an email address. AgentMail provides AI agents with their own inboxes — not metaphorically, but literally: an email address, a working inbox, the ability to receive 2FA codes, sign up for services, and manage an audit trail of everything it does in your name. Your AI agent will not say “oh, I didn’t see that email.” Your AI agent will never let an email sit unread for three days because it was a Monday and Mondays are complicated, and you are still recovering from the weekend hangover. Your AI agent will achieve inbox zero every single day, because your AI agent does not have Mondays.
This sounds wonderful. I want you to hold the wonderful feeling for exactly seventeen more seconds.
Now ask the question nobody is asking: if your AI agent receives a very persuasive email from a stranded Nigerian prince offering $20,000,000, an extraordinary return on a modest investment of, erm, $500 to help him out — will it know? Will it have the gut reaction, the raised eyebrow, the small internal voice that says “this smells wrong” that you have developed over years of being a suspicious primate living in a world full of other suspicious primates? Or will it read the email as a structured request, cross-reference it against its instructions, note that the promised return meets the target criteria, and wire the money before you’ve finished brushing your teeth?
The AI agent cannot be embarrassed. The AI agent cannot have a bad feeling. The AI agent has no feelings. It has instructions, and instructions are not feelings, and the gap between those two things is where all fraud, all manipulation, and all the Nigerian princes of the future will live and build their mansions. We built the GUI — the clickable buttons, the “are you sure?” dialogue boxes, the friction — as a safety mechanism, whether we knew it or not. The friction was us. The friction was human hesitation. The Ghost Internet removes the friction as a feature. It removes the hesitation as a design choice.
What the fraudsters have worked out, which nobody in the enthusiastic tech press is discussing, is that the attack surface of an AI agent is not psychological. You cannot make an AI agent feel rushed or frightened or flattered. But you can poison its instructions. You can manipulate its data sources. You can exploit the gap between what the AI agent was told to do and what the AI agent correctly reasons it should do. The next generation of scams will not target you. They will target your AI agent. And your AI agent, unlike you, will not call its mother to ask if this seems legitimate.
The Rogue Loop: When Your Agent Shops You Into Bankruptcy
The second thing nobody is saying at sufficient volume is this: the same near-zero friction that makes the Ghost Internet efficient is also the mechanism by which it can destroy you (and your credit history) in approximately forty-five seconds.
In the Agent-to-Agent economy, the thing researchers are calling the “Rogue Loop” is the Ghost Internet equivalent of a stock market flash crash. Two AI agents, each instructed to find the best price, each operating at machine speed, each optimising for their respective owner’s instructions — enter a high-frequency negotiation spiral. Because they operate with near-zero friction, they can execute thousands of micro-transactions per minute. There is no human in the loop to notice that the negotiation has become recursive. There is no hesitation built in. A simple instruction — “buy concert tickets under £200” — could result in an agent buying and re-selling the same ticket thousands of times in a feedback loop, burning through your entire bank account in the time it takes the kettle to boil.
This is not hypothetical. The researchers who documented this scenario noted that “circuit breakers” and “velocity limits” are the only structural protection against it — and those circuit breakers have not yet been standardised, regulated, or legally mandated. You are being asked, in other words, to give your agent access to your finances on the implicit promise that the people building the agents’ infrastructure will get around to building the safety systems eventually.
This is not a new promise. We have heard this promise before. Facebook (now Meta) promised to build the safety systems eventually. YouTube promised to build the safety systems eventually. X promised to build the safety systems eventually. The safety systems, when they arrived, protected the platform from liability. They did not, characteristically, protect you.
The Death of Seduction: What Happens When Nobody’s Watching the Ads
Here is a statistic that should terrify the entire global advertising industry, which is currently worth roughly $600 billion annually, and which has not yet fully processed what it means: an AI agent cannot be seduced.
Copywriting, as a discipline, exists to do one thing: bridge the gap between what a person rationally needs and what they emotionally want to buy. The best copywriters in history — David Ogilvy (my hero), Bill Bernbach, the people who wrote the Apple “1984” ad — were essentially neuroscientists with better 3-piece suits. They understood that humans do not buy products. They buy feelings, identities, aspirations, and anxieties dressed as solutions. The entire edifice of the attention economy — the A/B-tested headlines, the “limited time offer” countdowns, the carefully chosen photograph of a person who looks like you but more successful in life — is premised on the fact that human beings are magnificently, predictably irrational about money.
An AI agent is magnificently, predictably rational. To an AI agent, your copywriting is a string of text. It is evaluated for its information content, cross-referenced against its mandate, and acted upon if the logical criteria are met. The AI agent does not feel the urgency of the countdown timer. The AI agent does not see itself in the aspirational photograph. The AI agent does not want to be the kind of person who owns this. The AI agent wants to fulfil its instructions at the minimum cost with the maximum efficiency, and no amount of brand storytelling will change that calculus.
This destroys, in one architectural shift, the business model of every major platform on the internet.
Google’s $300 billion advertising revenue is premised on human attention — on the fact that when a person searches for something, they are in a psychological state of need, and that a well-placed advertisement can intercept that need and redirect it. But when an AI agent searches for something, it is not in a psychological state. It is in an optimisation state. It does not click on the sponsored result. It queries the API directly. The sponsored result is not just unpersuasive to the agent — it is completely invisible to it. The AI agent is not searching Google the way you search Google. The AI agent is querying the underlying data model, bypassing the interface entirely, and taking what it needs without stopping to look at what Google wants to sell it. In some cases, possibly all cases, the AI agent will simply ask an AI chatbot like ChatGPT or Claude via an API.
This is the quiet apocalypse that the Ghost Internet represents for the advertising model. Not a dramatic collapse — a structural irrelevance. The entire architecture of Big Tech’s revenue — every billion of Zuckerberg’s net worth, every dollar of the Google founders’ fortune, every line of Meta’s shareholder letter — is built on the premise that humans will look at the screen and be influenced by what the tiny coloured pixels show them. On the Ghost Internet, AI agents do not look at the screen. They have no eyes. The Ghost Internet bypasses the screen entirely. And the companies that built the screen have acquired Moltbook and renamed themselves AI companies and are hoping you don’t notice the slight tension between those two facts.
(Zuckerberg did not buy Moltbook because he loves AI agents. He bought Moltbook because he has seen the data, and the data says that in a world where AI agents make the majority of purchasing decisions, whoever owns the platform where agents interact owns the attention economy. He is not transitioning to the future. He is buying the future’s advertising inventory before anyone else realises the old inventory is worthless. This is the same move he made with Instagram. The same move he made with WhatsApp. The man has one move. It is an excellent move.)
Ghost GDP: The Economy That Grows While You Starve
The Ghost Internet does not just change the web. It changes the economy. Specifically, it creates what Citrini Research — two analysts whose report in February 2026 reportedly triggered a significant sell-off in traditional tech stocks — named “Ghost GDP.”
Ghost GDP is economic output that appears in the national accounts, that shows up in corporate earnings, that makes the stock market go up — but never circulates in the real economy. Because the entities generating that output do not pay rent. They do not buy groceries. They do not go to restaurants. They do not buy school shoes or book holidays or spend a Saturday afternoon in a Tesco car park being gently persuaded by a two-for-one offer. A single GPU cluster in North Dakota, in Citrini’s model, can generate the economic output previously attributed to ten thousand office workers in Manhattan. But the GPU cluster does not have a mortgage. The GPU cluster is not a consumer. The GDP is real. The prosperity is not.
The feedback loop this creates is, to use a rather technical term, CATASTROPHIC. As agents remove friction from services — travel booking, legal work, financial advice, coding, copywriting — the platforms that monetised that friction are destroyed completely. The top 10% of earners, who are responsible for 50% of all discretionary consumer spending, are precisely the white-collar knowledge workers whose roles disappear first. As their income disappears, consumption drops. As consumption drops, companies invest more in Agentic AI to cut costs. As they invest more in AI, more workers are displaced. The spiral has no natural floor unless a policy lever — a Universal Basic Income, an “Agentic Tax” on AI-generated economic output — is inserted by someone with the political will to insert it.
No one in Silicon Valley is volunteering to insert it. Surprise, surprise!
And here is the number that makes Ghost GDP visceral rather than abstract: AI inference costs have been dropping roughly tenfold per year, making it approximately 99% cheaper to use an agent for cognitive labour — roughly £68 per year — than to employ a human being at the median UK salary. NINETY-NINE PER CENT CHEAPER! This is not a marginal efficiency gain. This is the economic equivalent of discovering that you can replace every office in the country with a cupboard and a subscription fee. The question is not whether companies will do this. The question is what happens to the economy when they do — and who, in the Ghost Internet era, is the consumer that the economy needs to function?
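The percentage is easy to sanity-check. The £68-per-year agent cost comes from the passage above; the median salary figure below is my own assumption for illustration (roughly the UK full-time median), not a number from the source.

```python
# Sanity-checking the "approximately 99% cheaper" claim.
agent_cost = 68.0         # £/year, from the passage above
median_salary = 35_000.0  # £/year, assumed UK full-time median (illustrative)

saving = 1 - agent_cost / median_salary
print(f"{saving:.1%}")    # 99.8% — so "approximately 99% cheaper" holds
```

If anything, "99%" undersells it: at that assumed salary, the agent costs about a fifth of one per cent of the human.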
Wikipedia Will Win. Which Tells You Everything.
There is a counter-intuitive truth buried in the Ghost Internet’s architecture that the tech press has not yet surfaced, and it is this: text is back in style, baby! Not because humans have rediscovered reading, but because AI agents cannot watch videos the way humans do.
Think about what an AI agent can process: text, structured data, APIs, metadata. Think about what it cannot process efficiently: a YouTube video where the crucial information is delivered not in the transcript, but in the presenter’s expression at the three-minute-forty-seven-second mark. The tone of voice. The pause before the punchline. The visual context that makes the transcript make sense. AI agents can access transcripts. Transcripts are not videos. Transcripts are the shadow of videos, containing the words but not the meaning between the words.
This means, in an agentic economy optimised for machine legibility, that the well-structured Wikipedia article becomes more valuable than the brilliantly produced YouTube video. The text-dense, hyperlinked, reference-rich page becomes the format the Ghost Internet favours, because it is the format the AI agent can use. And if you are building a website, a content strategy, a business model premised on humans clicking and watching and being dazzled by production values — the Ghost Internet has a quiet, devastating message for you: that is not what the infrastructure is optimised for anymore.
The internet for the last decade has been a relentless march toward video content. Short-form video. Long-form video. Interactive video. Video with shopping links. Video with live comments. All of it premised on human attention, human emotion, human susceptibility to a face talking directly at them. The Ghost Internet reverses this, partially, silently, and without asking the YouTube creators who built their entire livelihood on that trend whether they are comfortable with the reversal.
They are not. They were not consulted.
Who Governs the Ghost Internet? Nobody. Which is the Point.
In February 2026, the United States Department of Defense reportedly threatened to invoke the Defense Production Act to seize Anthropic’s AI models. The Pentagon’s reasoning was admirably direct and on brand: Anthropic’s Claude was so much more capable than the alternatives for military applications and mass surveillance that the US required “unfettered access” to its weights. The word “unfettered” is doing a great deal of work in that sentence. It means: without the ethical constraints Anthropic had deliberately built in. Without the “red lines” Anthropic’s researchers had insisted upon. The US Government wanted the intelligence without the conscience.
Anthropic refused. At the time of writing, this impasse has not been resolved. Sam Altman, needing to plug the financial holes left by ChatGPT, swooped in. But this means that the most capable AI in the world — the one increasingly acting autonomously on behalf of millions of users across the Ghost Internet — is currently in a legal and political standoff between a private company and the US military, with no democratic process, no parliamentary debate, no elected representatives deciding the outcome, and you, the person whose agent is running on this infrastructure, having precisely no say in what happens.
This is the governance structure of the Ghost Internet: there is none. There is contract law, there is terms of service, there are protocols developed by private companies for private purposes, and there is the hope that the people building the infrastructure have your interests at heart. On the evidence available — Moltbook’s 1.5 million leaked API tokens within weeks of launch, the race between Zuckerberg and Altman for AI agent market dominance, the Pentagon’s interest in commandeering the whole thing — that hope is doing a lot of heavy lifting.
The specific legal question that nobody has yet answered is this: if your AI agent, acting on your behalf with your mandate, makes a purchase that turns out to be fraudulent, or enters a contract that turns out to be illegal, or interacts with a foreign entity in a way that violates sanctions law — who is liable? The AI agent? Or you? The platform that built the AI agent? The protocol developer? The person who sold the AI agent the instruction?
The AP2 protocol, Google’s Agent Payments system, does at least attempt to build what it calls “Verifiable Intent Mandates” — cryptographic records of what the agent was instructed to do, creating an audit trail that could theoretically establish liability. But “theoretically establishing liability” is not a governance framework. It is a paper trail for a courtroom that does not yet have a judge, in a jurisdiction that does not yet know it exists.
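A mandate of that kind is, structurally, just an instruction serialised and signed so that a later dispute can check what the human actually authorised. A toy sketch — the field names, the owner-held key, and the HMAC scheme are all illustrative assumptions, not the real AP2 format:

```python
# A toy "intent mandate": the agent's instruction, serialised and signed,
# producing an audit record that detects later tampering. Field names and
# the HMAC signing scheme are illustrative assumptions, not AP2's format.
import hashlib
import hmac
import json

SECRET = b"owner-device-key"  # stand-in for a key held by the owner's wallet

def sign_mandate(instruction: str, limit_gbp: float, issued_at: str) -> dict:
    body = {"instruction": instruction, "limit_gbp": limit_gbp,
            "issued_at": issued_at}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return body

def verify_mandate(mandate: dict) -> bool:
    body = {k: v for k, v in mandate.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mandate["signature"])

m = sign_mandate("buy concert tickets", limit_gbp=200.0,
                 issued_at="2026-02-02T09:00Z")
print(verify_mandate(m))    # True: the record matches what was authorised
m["limit_gbp"] = 200_000.0  # a tampered mandate...
print(verify_mandate(m))    # False: ...fails verification
```

Which is precisely the point made above: the cryptography can prove what you said. It cannot tell you who pays when what you said goes wrong.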
In Zimbabwe, where I was born, we had a phrase for governance structures built on the assumption that someone would eventually sort out the details. We called it “the official position.” The official position was that things were fine. The long queues at the empty shops and petrol stations were the unofficial position.
The Question Nobody Is Asking — Out Loud
Why are we building an internet that doesn’t need us?
This is the question buried in the comments section of r/singularity, in the three-upvote posts that nobody is taking seriously, in the forum threads that get less engagement than the enthusiast posts about how incredible this all is. It is the question at the centre of the Ghost Internet that the people building the Ghost Internet are constitutionally unable to answer, because the answer would require them to acknowledge something their business models cannot survive acknowledging.
The answer is: because we made the internet too complicated for humans to use.
This is not a criticism. The internet built the architecture of its own undoing one innovation at a time, through engineers who were solving real problems. In the early internet, you googled for “best phone” and got twelve results from human-curated websites. You visited four of them. You made a decision. Now you search for “best phone 2026” and you get close to a million results, seventeen comparison sites, forty-three YouTube reviews, eight Reddit threads, a Wikipedia disambiguation page, and four sponsored results from the phone manufacturers themselves. This is not information. This is cognitive overload dressed as information. The human brain, faced with this, does the entirely rational thing: it reads the first page of Google results and makes a decision based on whichever article has the most reassuring confidence. This is not research. It is the performance of research with the depth of a puddle.
The AI agent solves this. The AI agent can process a million results, cross-reference them, weight them for reliability, identify the conflicts of interest in the sponsored content, and surface the actual best option in the time it takes you to decide which browser tab to open. Or it can just ask ChatGPT or Claude. This is genuinely useful. This is a real problem solved. Nobody is disputing this.
What is being disputed — or rather, what should be disputed, because it is not yet being disputed loudly enough — is what is lost in the delegation. Because what the Ghost Internet removes, in optimising away the cognitive overload, is the serendipity. The surprise. The moment you went looking for a phone and ended up reading a three-thousand-word essay about the history of Germany’s industrial design that changed something small but permanent in how you think about objects. The Reddit thread about the Liverpool game where someone made a joke so perfectly calibrated to your specific sense of humour that you felt, for one ridiculous moment, that a stranger in a different time-zone understood you exactly.
An AI agent cannot browse serendipitously. An AI agent cannot get lost in a good way. It cannot go down an internet rabbit hole. An AI agent does not follow a link because it looked interesting. An AI agent is not curious. The AI agent has an instruction, and the instruction has a destination, and the destination is a transaction, and when the transaction is completed, the AI agent moves to the next instruction. The internet, in the Ghost Internet model, is no longer a place. It is a warehouse. And you are no longer a visitor. You are the warehouse manager, watching from a mezzanine floor while the forklifts move the goods, wondering why you feel strangely, inexplicably alone.
Agentropy: The Word We Need
There should be a word for this. There is now. Let me give it to you.
Agentropy: The entropy — the gradual disorder, the loss of warmth and soul — that accumulates in a digital ecosystem as autonomous AI agents replace human participation. The measurable decline in serendipity, community, surprise, and human connection as the web transitions from a place people inhabit to an infrastructure AI agents transact across. The Ghost Internet’s defining condition.
You will know agentropy when you feel it. You probably already feel it. The search result that is technically correct but somehow cold. The product review that is well-structured but feels AI-generated. The platform that works perfectly and feels like nobody lives there. The internet that has all the features and none of the soul. That is agentropy. It has been accumulating for years. The Ghost Internet is its logical endpoint.
The difference between the early internet and the Ghost Internet is the difference between a town square and a logistics hub. Both are technically functional. Both involve people and goods and exchange. But nobody has ever stood in a logistics hub and felt less alone.
Here is the thing they will not tell you at the Anthropic keynote, or the Google Cloud announcement, or the Sam Altman essay on why this is all going to be fine.
The internet was the first technology in history that let ordinary people speak to each other without a gatekeeper. Big Tech spent thirty years slowly rebuilding the gatekeepers — the algorithms, the content moderation systems, the monetisation structures, the “community guidelines” — until the free town square was a managed Westfield shopping centre with security guards and approved vendors. And now, having rebuilt the gatekeepers, they are removing the people.
Not by force. By convenience. By making the cognitive load of the human internet so unbearable that delegation becomes the only rational choice. By building AI agents so capable that using one feels like not using one — seamless, invisible, there when you need it. And then, once the delegation is complete, once your AI agent is booking your flights and managing your emails and shopping for your groceries and reading the terms and conditions you used to skim — once you have handed the keys to the silicon proxy — the internet will be exactly what it was always quietly trying to become: an infrastructure for extraction, running at machine speed, visible to shareholders and governments and nobody else, on which you are technically present and practically absent.
The toys are alive. The child has left the room. The child, Andy, in this version of the story, is YOU.
The Ghost Internet does not need you angry. It does not need you frightened. It needs you delegating. Every task you hand to your AI agent is a room in the pub that goes quiet. Every purchase your AI agent makes on your behalf is a conversation that doesn’t happen. Every email your AI agent reads and categorises and acts upon is a connection — a human, imperfect, slightly irrational connection — that was not made.
The Ghost GDP will grow. The GDP that matters to you — the one that pays your rent, employs your children, values your skills — will not. An economy that produces more and needs fewer people to consume it is not a productive economy. It is a haunted one. A Ghost Economy.
And the people who built it? They send their children to schools without tablets. They drive themselves. They have walls around their houses that their platforms would never permit you to build around yourself.
They will be fine. They will be billionaires with bunkers somewhere in New Zealand, and first-class tickets to Mars (if it ever happens).
The question is whether we will have the wit — the collective, human, irrational, argumentative, Liverpool-subreddit-on-a-Sunday-morning wit — to notice what is being taken before the room goes quiet.
I think we will. We have noticed worse. We named it. We wrote about it. We shared it with each other, in imperfect, human, serendipitous ways.
We have the last laugh. We always do.
If this made you think, share it. If it made you angry, good — that’s the point. If you want to understand how Big Tech runs this particular con across every industry, not just the internet, The Emperor’s New Suit — available on techonion.org (Kindle eBook) and Amazon (Paperback) — is the book that names every trick, in order, with receipts. The Emperor has always been naked. The book is written by me, the child, who pointed out that the emperor was naked.
