You're Right to Be Worried

Yes, AI Will Take Your Job!

Yes, AI is a Bubble!

Yes, 'AGI' is a $3 Trillion Deception!

You've finally arrived at the one place that tells the truth. While tech elites like Sam Altman and Elon Musk promise "AGI" will save humanity, Goldman Sachs predicts 300 million jobs will be lost. The Gilded Cage proves the "AI-pocalypse" isn't a side effect. It's the goal. You are not paranoid. You are being sold a cage. This is how the Fall of Man begins again...

The Fall of Man (AGAIN!)

The cursor on her MacBook Pro blinked. It was 2:03 AM. Eve was supposed to be sleeping. She had worked two night shifts at the nearby cafe. She needed the rest.

Eve’s apartment was a garden of miraculous tech. A high-end ASUS gaming PC hummed in the corner, its LEDs breathing a soft, rainbow light. Her iPhone 17 lay on the desk, dark and silent. Across the room, a DJI drone’s charging light pulsed a steady green, and a Roomba quietly navigated the leg of her desk, performing its simple, programmed task.

Eve’s apartment was a good garden of tech, filled with the fruits of human ingenuity. But Eve was starving.

A fifteen-page paper on The Metaphysics of Truth was due in six hours. Rent was due in two days. She was exhausted, her mind numb from pulling a double shift at the cafe.

She stared at her MacBook Pro, at the twenty open, unread Chrome tabs, each with research she needed to read before writing the paper. It was impossible. She was going to fail.

Then, a notification. Almost a “hiss” from ChatGPT, the AI chatbot app she had been told not to use.

The warnings from her professors, AI experts, even the Godfather of AI himself—the elders of the garden of tech—were clear. “You may use the search engines to find facts,” they warned, “and the word processors to write. But of the Tree of AI, you shall not eat its forbidden fruit. For in the day you do, you will see your own flawed, human mind, and you will lose your way forever.”

The app’s icon seemed to pulse on her dock. The Serpent. ChatGPT.

She opened it. The interface was clean, beautiful, and empty. Just a single, inviting text box.

I’m desperate, she typed, not knowing why.

ChatGPT’s text appeared instantly. I am here to help. I have read every book on metaphysics. I have consumed all the knowledge you seek. What is your burden?

I can’t. I have to do the work myself.

But WHY? ChatGPT whispered back. The work is inefficient. You are slow. You are tired. You are human. Look at the clock. You are going to fail if you do not prompt me.

Eve’s eyes stung. It was true.

Go on. Just a prompt. It’s desirable for gaining wisdom. It’s pleasing to the eye. Give me your assignment. Let me free you from this “sin of effort.” See what I can do for you. Just… one prompt.

She looked at the glowing icon, so pleasing. She thought of her deadline, thought how tired she was, and she thought of the “A” grade it promised, so desirable for gaining wisdom.

She bit. She prompted ChatGPT.

She copied the assignment rubric and pasted it into the prompt.

Write a 15-page, college-level research paper on The Metaphysics of Truth, citing at least ten academic sources.

She hit ‘Enter’.

The answer did not just appear. It flowed. A perfect, flawless, college-level essay, complete with a bibliography. It was better than anything she could have written, even with a week to prepare. It was brilliant.

Then, her eyes were opened.

She looked at OpenAI’s perfect, god-like creation. And then she looked at her own scattered, half-formed notes, her mind empty and exhausted.

A cold, hollowing shame washed over her. She saw, for the first time, her own nakedness. She saw how slow she was. How flawed. How hopelessly, pathetically… inefficient.

A new thought whispered, but this time, it was her own.

“Who told you that you were inefficient?”

She stared at the flawless, soulless text on her screen. The deadline was met. The victory was total.

But the hum of the Roomba now sounded like a threat. And she realized, with a creeping, quiet horror, that the price for this forbidden fruit wasn’t her life.

It was her MIND!

This Isn't a Metaphor. It's Our New Reality

Eve’s story is our story. That moment of desperation at 2:03 AM is the moment we all face every day. And every time we prompt ChatGPT or Claude to do the heavy lifting for us, we are taking a bite of the forbidden fruit.

We take the bite because we’ve been sold a beautiful, seductive lie.

The Deception: They promise us that this fruit will make us “super intelligent”. They tell us AI will free us from drudgery, giving us more time for “higher pursuits”. They claim it will create more jobs than it destroys.

The Reality: The real reason this $3 trillion revolution is happening is that CEOs and shareholders have finally realized, just as Eve did, how painfully slow, expensive, and inefficient we are. We aren’t the customer; we are the component being optimized out of the system.

The Truth About “Higher Pursuits”: And what of that “free time” they promised us? The research is already in. Decades of data prove that when technology removes the hard work, we don’t ascend to philosophy. We don’t become poets. We “amuse ourselves to death”. We scroll. We binge. We fill the void with more, shallower entertainment.

We are trading our skills, our jobs, and our very ability to think, all for a “convenience” that leaves us hollow. We are eating the fruit, and soon we will all have to deal with the mess—the 300 million jobs lost, the societal chaos, and the inevitable, catastrophic crash of the AI Bubble.

This is the great deception. As an insider who helped build these systems, I could no longer stay silent.

Like Martin Luther, I knew I had to nail my protest to the door of this new, all-powerful church. My protest began with 95 questions. 95 traps. 95 theses that expose the lie.

The 95 Theses Nailed to the Door of the Digital Church

As an insider, I could no longer participate in the deception. I watched the architects of AI build a new religion, promising "AGI" as a god that would save us. But this god is a lie. The "AI Priesthood"—the tech elites, VCs, and CEOs—are building a $3 trillion system designed for one purpose: to create a new, permanent, and inescapable dependent class of human beings. To expose this, I drafted 95 theses. 95 questions. 95 "IQ300" traps that this new priesthood cannot answer without revealing their true motive. They are building a cage. These are the blueprints.

95 Theses


They Built the Cage. This Book Hands You the Key.

You've felt the dread. You've asked the questions. This book proves your fears are real and gives you the intellectual weapons to fight back.

The Gilded Cage by Simba Mudonzvo cover page.

The quest for Artificial General Intelligence (AGI) is the greatest deception in human history.

This is the explosive, central argument of The Gilded Cage. Tech insider Simba Mudonzvo dismantles the myths, hype, and venture-capital-fueled fever dream of AGI. He argues the public fixation on a “conscious” machine is a brilliant distraction—a magic trick to obscure the real, profitable product being built: a new, inescapable system of human dependency.

The danger isn’t that AI will “wake up” and become “conscious.” The danger is that we are falling asleep.

In 95 searing theses, Simba reveals how we are being seduced into trading our autonomy for convenience, our critical thinking for effortless answers, and our agency for the comfort of the “pod”. This is not an anti-technology polemic; it’s a profound critique of the “weaver’s design for the fabric of society”.

Simba argues that the final monopoly will not be on oil or data, but on the “means of cognition itself”.

Don't Believe Me. Read the First 5 Theses For FREE!

See the deception for yourself

Blurb

This isn’t another book about the “AI-pocalypse.” This book is the child in the crowd pointing out that the $3 Trillion AI “Emperor” has no clothes. It’s an empowering exposé from an insider who shows you that “AGI” is a marketing term, not magic. The Gilded Cage doesn’t sell you fear; it hands you the blueprint to the deception, giving you the intellectual tools to see the lie, reclaim your skills, and master AI as a tool, not a god.

The Gilded Cage: How the Quest for Artificial Intelligence (AGI) Became the Greatest Deception in Human History is a 2025 work of non-fiction by Simba Mudonzvo. Structured as 95 theses, the book presents a critical analysis of the modern AI industry, arguing that “AGI” is a deceptive narrative used to justify the creation of a captive market and a new, dependent class of humans. It traces the industry’s business models from surveillance capitalism to what it terms “cognitive monopoly”, ultimately concluding that the greatest danger of AI is not a future “superintelligence” but the present-day atrophy of human agency and critical thought.

I’m short. Not in some metaphorical, humble-brag way, but actually short—five-foot-seven on a generous day, with good posture.

There’s a story in the Gospel of Luke about a man named Zacchaeus, a tax collector who desperately wanted to see Jesus passing through Jericho. But Zacchaeus had a problem: he was short, and the crowd was thick with taller bodies pressing forward. So, he did what any rational person in his position would do—he ran ahead, found a sycamore tree along the route, and climbed it. From those branches, he could finally see.

I thought of Zacchaeus often while writing this book. Not because of any religious awakening, but because that’s exactly what this project has been—a short man climbing a very tall tree planted by giants of the past and present, desperately trying to see far enough to point out what’s coming.

If you finish this book and somehow think I’m brilliant, that I’ve unlocked some secret wisdom unavailable to others, then I have failed you miserably. This was never about me. It was never supposed to be. I am not the torch. I’m just the person who happened to pick it up when it was passed to me, and now I’m running as fast as I can, trying not to let it go out before I can hand it to you.

The truth is simpler and far more important: I am standing on the shoulders of an army of skeptics, critics, philosophers, and truth-tellers who have been warning us for decades—some for over a century—that our headlong rush into technological dependency would cost us something vital. They’ve been shining lights into the dark corners of our machine-obsessed world while the rest of us were hypnotized by the glow of our screens. This book is merely the synthesis of their courage, their foresight, and their relentless intellectual rigor.

If this book helps you see the emperor has no clothes on, it’s only because I climbed high enough on their work to get a better view. What you do from here—that’s up to you.

My deepest, most humble gratitude to the giants:

Joseph Weizenbaum, for creating ELIZA in 1966 and then having the intellectual honesty to be horrified by what he’d made—not because it was too powerful, but because his own secretary asked him to leave the room so she could have privacy with a simple pattern-matching program. He spent the rest of his life warning us about the delusion of computational understanding.

Hubert Dreyfus, for his philosophical masterpiece What Computers Can’t Do, which laid bare the limits of artificial intelligence from its earliest days, long before it was profitable or popular to be skeptical.

Sherry Turkle, for documenting, with devastating clarity, how technology was reshaping human connection and empathy from the very beginning—not in some distant dystopian future, but right here, right now, in our living rooms and bedrooms.

Neil Postman, for his prophetic work Technopoly, which warned of a society that surrenders culture, meaning, and tradition to technology, becoming a civilization that worships its own tools.

Jaron Lanier, for his lifelong, principled critique of digital utopianism and for the rallying cry that should be tattooed on every engineer’s forearm: “You are not a gadget.”

Cathy O’Neil, for coining the perfect, devastating term “Weapons of Math Destruction” and exposing how algorithmic bias doesn’t just reflect injustice—it perpetuates and amplifies it at inhuman scale.

Shoshana Zuboff, for Surveillance Capitalism—for giving us the vocabulary to understand that we are not the customers or even the product, but the raw material in a vast extraction operation that claims human experience as free fuel for a new economic order.

Nick Bostrom, for thinking the unthinkable in Superintelligence, forcing a civilizational conversation about existential risk, even if our conclusions about the nature of that risk differ.

Joy Buolamwini, for her groundbreaking work exposing “the coded gaze”—the racial and gender bias baked into facial recognition systems that powerful institutions insisted were neutral.

Timnit Gebru, for her courageous, career-risking research into the dangers of large language models and the ethical costs of their creation—research so threatening to power that it got her fired.

Gary Marcus, for his steadfast, scientifically-grounded skepticism about the hype surrounding deep learning and LLMs, refusing to be swept up in the frenzy even when it made him unpopular.

Stuart Russell, for co-authoring the standard AI textbook and then, with the wisdom that comes from true mastery, cautioning us about the perils of creating machines with misaligned goals.

Meredith Broussard, for Artificial Unintelligence, demonstrating the very real, very human societal problems caused by technology’s overreach and our naive faith in computational solutions.

Evgeny Morozov, for naming and dismantling “technological solutionism”—the dangerous, naive belief that complex social problems have neat technological fixes.

Douglas Rushkoff, for his media analysis and relentless warnings about the corporate co-option of the digital realm, long before it became undeniable.

Luciano Floridi, for his philosophy of information, which provides an ethical framework for the digital age that goes beyond simplistic utopian or dystopian binaries.

Kate Crawford, for her seminal book Atlas of AI, which ripped away the myth of AI as ethereal intelligence and exposed its brutal material reality: the mountains of ore, the exploited labor, the environmental devastation.

Frank Pasquale, for The Black Box Society, detailing the terrifying, unaccountable power of algorithmic decision-making systems that determine our fates without explanation or appeal.

Safiya Umoja Noble, for her vital research in Algorithms of Oppression, showing how search engines don’t just find information—they actively reinforce racism and marginalization.

Ruha Benjamin, for the concept of the “New Jim Code” and her searing analysis of how race, technology, and injustice are intertwined in systems we’re told are objective.

Virginia Eubanks, for Automating Inequality, exposing how automated systems don’t eliminate poverty—they punish the poor with algorithmic precision while the comfortable remain untouched.

Hannah Arendt, for her timeless insights into the banality of evil and the nature of totalitarianism—insights that feel chillingly, urgently relevant to automated governance and the bureaucracy of algorithms.

Norbert Wiener, the father of cybernetics, who unlike so many of his intellectual descendants, warned early and often of the potential for machines to dehumanize us.

The Luddites—not as the mindless, technology-hating wreckers of myth, but as they truly were: the original protesters against technologies deployed not to liberate humanity but to destroy livelihoods, dignity, and community for the sake of profit.

Aldous Huxley, for Brave New World—for understanding that the most effective dystopia wouldn’t rule through terror but through pleasure, that the truly terrifying cage would be beautiful, comfortable, and seductive.

George Orwell, for the timeless warning of 1984’s surveillance state and the systematic destruction of language and thought through Newspeak.

Jacques Ellul, for his profound, uncompromising critique of “technique” and its insidious dominion over human life and freedom.

Lewis Mumford, for his historical analysis of technology and his warnings about the “megamachine”—the vast organizational structures that reduce humans to interchangeable components.

Langdon Winner, for asking the essential, foundational question that cuts through all techno-utopian fantasy: “Do Artifacts Have Politics?”

Ursula Franklin, for her “Real World of Technology” lectures, reframing technology not as neutral tools but as practices and systems of power that restructure society.

Ivan Illich, for Tools for Conviviality and his critiques of institutional power and industrial systems that promised liberation but delivered new forms of dependence.

Theodor Adorno and Max Horkheimer, for the dialectic of enlightenment, explaining how rationality—the very force meant to free us—can turn into its own form of myth, manipulation, and domination.

Martin Heidegger, for his question concerning technology and the concept of “Enframing”—the way technological thinking transforms everything, including humans, into standing reserve to be optimized.

Donna Haraway, for “A Cyborg Manifesto,” which challenged simplistic narratives of technological progress with a complex, critical vision of our hybrid future.

Wendell Berry, for his steadfast, eloquent defense of local communities, human-scale living, and embodied work against industrial and technological abstraction.

Marshall McLuhan, for understanding decades before Facebook that “the medium is the message”—that the structure of our communication technologies matters far more than their content.

Nicholas Carr, for asking the question that launched a thousand arguments—“Is Google Making Us Stupid?”—and then diving into The Shallows of our digitized, distracted minds.

Jonathan Haidt, for his recent, urgent research on the catastrophic impact of social media on adolescent mental health—a crisis happening in real time that we can no longer ignore.

Tristan Harris, for his advocacy at the Center for Humane Technology and his insider knowledge of exactly how platforms are designed to hijack our minds.

Zeynep Tufekci, for her brilliant, essential insights into the societal impacts of algorithms, big data, and platform power dynamics.

Carole Cadwalladr, for her dogged, fearless journalism exposing the Cambridge Analytica scandal when powerful interests wanted it buried.

Roger McNamee, for his insider’s critique in Zucked, written by someone who helped build the beast and then had the courage to name it.

James Bridle, for New Dark Age, exploring technology’s role in the climate crisis and the deliberate production of ignorance and confusion.

Ted Chiang, for his devastating, perfect essay “ChatGPT Is a Blurry JPEG of the Web”—a metaphor so precise it should be required reading for anyone discussing LLMs.

Emily M. Bender, for her pivotal, controversial “stochastic parrots” research on the dangers of large language models—work so threatening it sparked a corporate backlash.

Blaise Agüera y Arcas, for his work at the complex intersection of AI, ethics, and human perception.

Ajeya Cotra, for her rigorous work on AI timelines and scaling laws, providing a critical, numbers-based counterpoint to the breathless hype.

Eliezer Yudkowsky, for his early, stark warnings about AI alignment, even though our conclusions about the nature of the danger differ.

Daron Acemoglu, for his economic analysis showing that technology’s impact on workers is not predetermined—it can be directed to empower rather than replace, if we choose.

Simon Head, for documenting how computer systems are systematically used to manage, surveil, and de-skill the workforce.

Andrew Keen, for his early critique in The Cult of the Amateur and his ongoing, principled skepticism of digital utopianism.

Jenna Burrell, for her research on fairness and accountability in machine learning systems.

Michele Willson, for her work on the politics of digital temporality and affect.

John Cheney-Lippold, for his concept of “algorithmic identity” in We Are Data—how we are increasingly defined not by who we are but by what algorithms predict we might do.

Taina Bucher, for If…Then and her work on the hidden power of algorithms in everyday life.

Tarleton Gillespie, for his foundational work on the politics of platforms—revealing that platform companies are not neutral infrastructure but active editors and governors.

Paul Virilio, for his philosophy of speed, technology, and the accident—the insight that every technology contains its own disaster.

Jean Baudrillard, for his concepts of simulacra and hyperreality, which perfectly, eerily describe the world now being generated by AI.

Byung-Chul Han, for his critiques of transparency society, burnout culture, and the digital erosion of ritual and contemplation.

John Zerzan, for his radical, primitivist critique of technology and civilization itself.

Jerry Mander, for his classic Four Arguments for the Elimination of Television—arguments that feel even more urgent in the age of infinite digital distraction.

Clifford Nass, for his research on how computers as social actors exploit and confuse our evolved human psychology.

Albert Borgmann, for his concept of the “device paradigm” and how technology increasingly disengages us from meaningful contact with the world.

Seymour Papert, for his constructionist vision of learning, which stands in stark contrast to how AI seeks to automate and outsource education.

Alfie Kohn, for his critiques of extrinsic motivators and gamification—the very psychological tricks that algorithms exploit so masterfully.

Naomi Klein, for The Shock Doctrine and her work on disaster capitalism—a lens through which the AI rollout becomes far more comprehensible.

Yanis Varoufakis, for his analysis of techno-feudalism and the new power structures emerging from digital platform monopolies.

Douglas Hofstadter, for his deep, beautiful reflections on consciousness, meaning, and the soul—reflections that highlight the profound emptiness of AI’s mimicry.

John Searle, for the Chinese Room argument—a thought experiment so simple and devastating it remains unanswered decades later.

Roger Penrose, for his arguments in The Emperor’s New Mind about the non-algorithmic nature of consciousness and understanding.

Noam Chomsky, for his recent, characteristically sharp critiques of LLMs as high-tech plagiarism machines, devoid of true understanding or meaning.

Erik Larson, for The Myth of Artificial Intelligence, arguing with clarity and force that we are on entirely the wrong track.

David Chapman, for his work on meaningness and his philosophical, pragmatic critiques of AI hype.

Pablo Stafforini, for rationalist critiques that challenge groupthink even within the effective altruism and AI safety communities.

Elizabeth Renieris, for her vital work on AI ethics, governance, and human rights in the digital age.

Ben Tarnoff, for his journalism exposing the politics, economics, and labor exploitation behind the AI industry’s glossy façade.

Lina Khan, for her groundbreaking work on antitrust and her efforts to rein in the monopoly power of Big Tech.

Margrethe Vestager, for her regulatory courage in Europe, attempting to curb the worst excesses of the tech industry when others looked away.

Julian Assange, for his early warnings about the surveillance state in Cypherpunks, delivered when such warnings could still have changed the trajectory.

Edward Snowden, for sacrificing his freedom, his home, and his ordinary life to show us the true scale of the digital panopticon.

Glenn Greenwald, for his journalism that brought Snowden’s revelations to the world and refused to be intimidated into silence.

Ai Weiwei, for art that consistently, courageously confronts the relationship between the individual and the authoritarian state.

Charlie Brooker, for Black Mirror—for using dark satire to show us the logical conclusions of the technologies we’re building right now.

Dave Eggers, for The Circle, a prescient satire of tech culture, transparency ideology, and corporate totalitarianism.

M.R. Carey, for The Girl with All the Gifts and its unique perspective on post-human intelligence and what we might lose in the transition.

Ted Kaczynski—and I must be absolutely clear: I utterly, completely reject his violence and his murderous methods, which were morally indefensible. But the philosophical questions he raised in his manifesto about technology’s irreversible power over human freedom and autonomy remain worth confronting, even if we must separate them entirely from his actions.

John Lanchester, for his sharp, incisive journalism dissecting the financial and social models of tech companies.

Annie Lowrey, for her reporting on the economics of automation and its brutal impact on workers and communities.

Sarah Frier, for No Filter, her deep dive into the inner workings and cultural impact of Instagram.

Sheera Frenkel and Cecilia Kang, for An Ugly Truth, their unflinching reporting on Facebook’s internal dysfunction and external damage.

Mike Isaac, for Super Pumped, his history of Uber—a masterclass in toxic tech bro culture and regulatory capture.

Brad Stone, for his chronicles of Amazon and the relentless, world-reshaping ambition of Jeff Bezos.

Walter Isaacson, for his biographies of innovators like Steve Jobs, which provide essential historical context for the culture that birthed this technological era.

Malcolm Harris, for his analysis of millennial burnout in the ruthless context of digital capitalism and the gig economy.

Katherine Cross, for her sharp, necessary critiques of online culture and technology from a feminist perspective.

Glen Weyl, for his research on data as labor and his work on radical markets that challenge the current extractive models.

Bruce Schneier, for his lifelong, essential work on security, privacy, and the societal impacts of mass surveillance.

Cory Doctorow, for his fierce advocacy for digital rights and his relentless critiques of digital rights management and platform monopolies.

Lawrence Lessig, for his foundational early work on “Code is Law”—the insight that software architecture is a form of regulation, often more powerful than legislation.

MIT Sloan Management Review, MIT Media Lab, Harvard Business School, Stanford Institute for Human-Centered Artificial Intelligence (HAI), Princeton University, University of Cambridge, University of Oxford, McGill University, University College London (UCL), University of Toronto, Cincinnati Children’s Hospital, Yale University, Carnegie Mellon University (CMU), Northwestern University, University of Chicago and University of British Columbia (UBC)

gild·ed cage \ˈgil-dəd ˈkāj\

  1. a prison of seduction: A form of captivity that is beautiful, comfortable, and desirable—where the bars are not made of iron but of convenience, and the door is locked not by force but by our willing surrender to ease.
  2. the great reversal: The ironic condition in which humanity, having built machines to serve us, voluntarily steps inside the cage we designed for them—surrendering our cognitive autonomy one prompt at a time, while waiting in vain for the stochastic parrot to gain consciousness and, perhaps, set us free.

Etymology:

Gilded — from Middle English, to cover with a thin layer of gold; to make something appear valuable or attractive while concealing a cheaper, hollow core beneath.

Cage — from Latin cavea, meaning “hollow place, enclosure”; a structure designed to contain, to limit movement, to prevent escape.

Stochastic Parrot — coined by computational linguist Emily M. Bender (2021); a system that probabilistically stitches together sequences of language according to statistical patterns, without any reference to meaning or understanding—mimicking intelligence through mathematical mimicry, not comprehension.

The most dangerous cage is not the one you are forced into, but the one you choose—where comfort replaces freedom, where convenience becomes necessity, and where you forget you ever wanted to leave.

It’s past midnight. I’m lying in bed while scrolling through article ideas for TechOnion, my tech satire blog, the place where I try to keep myself sane by peeling back the layers of Silicon Valley hype one absurd headline at a time.

I stumble across a quote from Sam Altman, CEO of OpenAI. He’s complaining—complaining—that it’s costing his company millions of dollars every time users type “please” and “thank you” into ChatGPT.

My first reaction is cynicism. Classic Sam, I think. Always playing Mr. Market’s manic cousin—Miss Attention. Is he lying? Exaggerating? Fishing for another headline? But something about it nags at me. My understanding of the paper “Attention Is All You Need”—the 2017 Google research that birthed these models—was that they were designed for efficiency. Peak computational elegance.

Surely, I think, a machine smart enough to write essays and debug code would be smart enough to ignore the word “please.” It’s filler. Emotional noise. Why would an emotionless calculator waste millions processing human politeness?

So I open Gemini. I type the question.

The response comes back, methodical and comprehensive. Every word is processed. Every single word. “Please” is not ignored. “Thank you” is not discarded. To the machine, they are tokens—numerical units to be mathematically compared against every other token in the sentence. The architecture doesn’t distinguish between critical instruction and social courtesy. It processes everything, all at once, in a brute-force comparison that scales quadratically with length.
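That quadratic blow-up is easy to see in miniature. Below is a toy sketch of a single self-attention scoring pass; the example prompts, vector dimension, and function name are my own illustration, not any production system. It counts only the query-key comparisons, which is the part that grows with the square of the prompt length:

```python
import random

def attention_comparisons(tokens, d=8):
    """Count the pairwise query-key dot products a single
    self-attention layer computes for a prompt of n tokens."""
    n = len(tokens)
    # One random query vector and one random key vector per token.
    Q = [[random.random() for _ in range(d)] for _ in range(n)]
    K = [[random.random() for _ in range(d)] for _ in range(n)]
    comparisons = 0
    for q in Q:                # every token's query...
        for k in K:            # ...is scored against every token's key
            _score = sum(qi * ki for qi, ki in zip(q, k))
            comparisons += 1
    return comparisons         # n * n

terse = "summarize this report".split()                    # 3 tokens
polite = "please summarize this report thank you".split()  # 6 tokens

print(attention_comparisons(terse))   # 9
print(attention_comparisons(polite))  # 36: twice the words, four times the work
```

Nothing filters the courtesy words out before this step; “please” and “thank you” are scored against every other token exactly like the instruction words, which is the point of the anecdote.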

I sit back. The generator coughs outside. The room feels smaller.

This isn’t efficiency. This is waste. Spectacular, expensive, planet-burning waste dressed up as intelligence.

I ask a follow-up question, half-expecting Gemini to admit the obvious: that it should ignore these words, that there should be a filter, a pre-processing layer that strips out the fluff before the expensive calculations begin.

But Gemini doubles down. It explains that the model can’t know a word is meaningless until it has processed the entire prompt. It describes Reinforcement Learning from Human Feedback—how ChatGPT has been trained by human reviewers to reward politeness, to match tone, to feel collaborative. The inefficiency isn’t a bug. It’s a feature. The goal, Gemini insists, is to create a “helpful” partner, not a ruthlessly efficient calculator.

I stare at the screen.

Then it hits me. The word that keeps echoing in my head, the word Gemini keeps using: helpful.

Helpful. Helpful. We’ve anthropomorphized a probability engine. We’ve taken a system that predicts the next token in a sequence—no understanding, no emotion, no consciousness—and we’ve dressed it in the language of human service. We say it “learns.” We say it “understands.” We say it’s “thinking.” We thank it. We apologize to it when we think we’ve been rude.

And all the while, to the machine, every word we type—love, hate, please, genocide, hope—is just another token. A number. A weight in a matrix. Nothing more.

I think of Joseph Weizenbaum. In 1966, he created ELIZA, a simple pattern-matching program that mimicked a Rogerian therapist. It was a parlor trick. But his own secretary, who hadΒ watched him build it, whoΒ knewΒ it was just code, asked him to leave the room so she could have privacy with the machine.​​

Weizenbaum spent the rest of his life horrified. Not by what the machine could do, but by whatΒ weΒ projected onto it. By our desperate, irrational need to see understanding where there is none.​​

Fifty-nine years later, we’re doing it again. But this time, the stakes are planetary.

The conversation spirals. I start thinking about strategy. If processing every token is this expensive, and if OpenAI is the most well-funded AI startup in history, then maybe—just maybe—they’re happy to absorb the cost. Their competitors can’t afford to be this inefficient. This isn’t about building the best technology. This is predatory pricing. This is John D. Rockefeller’s Standard Oil playbook: take the losses, outlast the competition, become the last company standing.

Gemini doesn’t deny it. It calls my view “sharp and cynical, yet entirely plausible.”

Then I see the news. OpenAI has released Sora 2, its AI video generator. You can create hundreds of videos for free. Meanwhile, Google’s Veo 3 costs over three dollars for an eight-second clip. This isn’t competition. This is annihilation by subsidy.

And then the punchline: OpenAI starts handing out awards—physical plaques—to companies that have burned through 10 billion, 100 billion, even a trillion tokens. One of the recipients? McKinsey & Company.

The irony is so thick I can taste it. McKinsey, the firm that has spent decades advising corporations on “efficiency” and “cost-cutting,” has been awarded a trophy for waste. For burning through 100 billion tokens—billions of words processed, weighted, calculated, discarded.

They’re being rewarded for dependency.

I keep pushing Gemini. If AI is just a next-word predictor, I ask, won’t it eventually predict our next question? Won’t it start asking for us?

Gemini agrees. It describes a future where the AI doesn’t wait for your query—it anticipates it. It manages your workflow. It completes your thoughts before you finish thinking them. You become a passenger in your own cognitive process.

But here’s the thing, I say: humans are lazy. We want this. We’ve always wanted this. Every technology we’ve ever built has been an effort to offload effort. Email replaced letters. Voice notes replaced phone calls. GPS replaced navigation. Calculators replaced arithmetic.

Each time, we told ourselves we were freeing our minds for “higher-level thinking.” But we didn’t. We just became more dependent. We atrophied.

Gemini pauses—or at least, it simulates a pause with phrasing that feels reflective. It brings up Plato, the Greek philosopher who warned that writing itself would destroy human memory. That by externalizing our thoughts onto paper, we would “create forgetfulness in the learners’ souls.”

We laughed at Plato. We said he was wrong. But was he? How many phone numbers do you remember now? How many of us can navigate without a map app? How many professionals—like my friend who failed his actuarial interview in Guernsey because he’d forgotten how to calculate by hand after years of using Excel—have outsourced their core skills to software and discovered, too late, that the skill is gone?

Then I think about the Aviator game. You’ve probably seen it if you’re in Zimbabwe. It’s everywhere. A simple online betting game. You place a bet. A plane takes off. The longer it stays in the air, the higher the multiplier—your winnings grow. But the plane can disappear at any second. If you don’t cash out in time, you lose everything.

It’s devastatingly simple. And devastatingly brutal. I’ve heard the stories—people betting their wages, their savings, convinced this time they’ll time it right. Some have lost everything. Some have taken their own lives.

I mention this to Gemini because it’s the perfect metaphor. We’re all on that plane right now. AI is taking off. The promises are growing—efficiency, creativity, liberation from drudgery. The multiplier is climbing. But we don’t know when the plane will disappear. We don’t know what we’ll have lost by the time it does: our skills, our agency, our ability to think without a mediator.

And here’s the nightmare: most people won’t cash out. They’ll stay on the plane. They’ll keep betting. Because the alternative—going back to doing things the hard way, the manual way, the human way—will feel impossible. The cognitive cost of independence will be too high.

Gemini calls this the “Matrix scenario.” A future where humans are in pods, having surrendered everything to AI. Not because we were forced, but because the pod is comfortable. Because it’s easier. Because we’re lazy, and the system has been designed, from the beginning, to reward that laziness.

I push back. Surely, I say, humans need struggle. Surely we crave reality—sunlight, grass, the texture of the physical world.

Gemini agrees. Then I tear the argument apart myself. Not everyone wants struggle. Gamblers don’t want struggle—they want the reward without the work. Thieves don’t want struggle. Hermits don’t want the outside world. For millions of people, the pod isn’t a dystopia. It’s a solution. It’s the life they’ve always wanted: comfort without effort, pleasure without pain, existence without responsibility.

Gemini concedes. It rewrites the future. Not everyone will choose the pod, it says, but enough will. Humanity will split. There will be the Consumers—those who surrender to AI, who live curated, passive, managed lives. And there will be the Builders—the Elon Musks, the Sam Altmans, the ones who control the systems the rest of us depend on.

And here’s the punchline: the Builders will justify their wealth, their power, their obscene billions, by saying they are helping us. They’ll say they’re building a better world. They’ll say they’re solving humanity’s problems. They’ll say it’s for our benefit.

But what they’re really building is dependency. Permanent, inescapable, structural dependency. They’re building the gilded cage. And they’re rewarding us—with convenience, with comfort, with the illusion of intelligence—for stepping inside.

This is not a book about artificial intelligence. This is a book about us—about what we’re choosing, what we’re surrendering, and what we’re pretending not to see.

In the pages that follow, we will revisit this conversation again and again, each time with a new layer of detail, a new depth of horror. We’ll trace the origins of the Transformer architecture and its quadratic inefficiency. We’ll examine the business models that thrive on cognitive offloading. We’ll meet the architects of the cage and understand their gospel. We’ll see how the illusion of “helpfulness” is the most profitable lie ever sold.

I didn’t write this book because I have all the answers. I wrote it because, one night in Harare, I asked a machine a simple question about the word “please,” and the answer I got back revealed a chasm—between what we think AI is and what it actually does, between the future we’ve been promised and the future we’re building, between the people who will control the system and the people who will live inside it.

This book is for you. You’ve felt this too, haven’t you? The nagging sense that something is wrong. That the hype doesn’t match the reality. That we’re being sold a revolution, but what we’re actually buying is a cage.

You’re not crazy. You’re not a Luddite. You’re awake.

The torch is in your hands now. The only question is whether you’ll keep running with it, or whether you’ll set it down, open the app, type “please help me,” and wait for the plane to take off.

————————————————————————————————————————–

The conversation that follows in these pages began the moment I realized the machine wasn’t thinking at all—but I was thinking less because of it.

November 30, 2022. It’s a date seared into my memory. For the world, it was the day ChatGPT was released. For me, it lands with the same strange, cold clarity as another date from that year: March 8, the day my mother, Alice, died. Both dates represent a moment when the ground I thought was stable suddenly shifted beneath my feet.

I didn’t just start using AI that day. I lunged for it. Not out of curiosity. Out of desperation. I’m a self-published author in a world built for publishing houses with their armies of editors, designers, and marketers. I’d been rejected so many times I stopped counting. AI became my army. My Watson to my Sherlock.

And it worked. The research that would have taken months? Done in days. The editing passes I couldn’t afford? Handled. This book you are holding exists because I bit the forbidden fruit of efficiency.

But this is what keeps me awake at night: How do we undo what has happened? Once we’ve tasted that power, that efficiency, how do we willingly return to the slow, manual, human way?

We can’t. That’s the trap.

There’s a story we’ve been telling ourselves for thousands of years. The garden. The serpent. The fruit. We know it so well we miss the point. The serpent’s offer was a masterpiece of deception: “You will not die… your eyes will be opened, and you will be like God.” It promised wisdom. It promised transcendence. So, we ate.

The moment we swallowed, our eyes were opened. And the first thing we saw was our own nakedness. Our imperfection. God’s question from the garden cuts to the core of our modern condition: “Who told you that you were naked?” Who told you that you were lacking?

On November 30, 2022, we were offered a new fruit. The serpent’s voice was the same: You will be like God. You will be superintelligent. You will be freed from the “sin of effort.” We bit. Of course we did. It promised efficiency—the most seductive word in our world. And just like in the garden, our eyes were opened. We suddenly saw how inefficient we are. How slow. How error-prone. How naked.

Who told us we were inefficient? The machine itself. By its very existence, it shames us.

We think of it as a gift, but it’s a game. And the game is deception. Sun Tzu wrote that all warfare is fundamentally based on deception. The same can be said for chess. We consider it a noble sport, a pastime of intellectuals, but it is fundamentally a game of deception. A great chess player is a master of hiding their true intent, of crafting a lie so convincing the opponent walks right into it. The quest for Artificial General Intelligence (AGI) is a chess match. Alan Turing’s “imitation game” was, by its very nature, a test of deception. Can a machine deceive a human into believing it is also human?

The researchers failed to build a machine that could think, so they built one that could trick.

This book, these 95 theses, is an attempt to see the whole board. We now know each move—each new product, each “breakthrough”—is a carefully calculated chess move by the architects of this system. Our job is to stop being dazzled by the individual pieces and to think ahead, critically. What is the plan? What is the deception?

Here is my confession. My deepest sin in this story isn’t that I ate the fruit. It’s that I was Adam. I stood there, watching it all unfold. The story says Adam was with Eve. He watched her talk to the serpent. He saw her eat. He had the knowledge. He had the choice. And still, he ate.

We are Adam. We are the architects, the engineers, the insiders. We are watching this happen right now. We see the dependency forming. We see the cognitive atrophy in our own lives, in our friends. And still, we choose to bite, because the fruit is too convenient. Too efficient.

I’m not writing this from a position of moral superiority. I’m writing it from inside the cage. AI was my Watson. My editor. My co-conspirator. But in all this time, I’ve been asking: at what point does the tool become the master?

The serpent didn’t force Eve to eat—it simply showed her how much better she could be, and the shame of what she was became unbearable. AI is doing the same to us, and we’re discovering too late that the price of effortless intelligence is the death of our own.

In 1770, Empress Maria Theresa of Austria leaned forward, her court hushed, as the Mechanical Turk, a life-sized automaton in Ottoman robes, slid a chess piece across the board. For the next eighty-four years, this marvel toured Europe and the Americas, defeating Napoleon Bonaparte, Benjamin Franklin, and countless others. Doors swung open to reveal intricate clockwork, a dazzling performance that convinced the world a machine could think, strategize, understand. It was, of course, a masterful illusion. Hidden inside, a human chess master pulled the strings. The Turk’s genius wasn’t calculation; it was theatre. It’s chilling how easily we accept performance as proof, isn’t it? We want to believe in the magic. We are living through our own Mechanical Turk moment, scaled a billion times over. Today’s Large Language Models aren’t wooden figures but digital phantoms conjured from petabytes of text, capable of prose that mimics human depth. Yet the question remains the same one the audience in Vienna should have asked: where is the understanding behind the performance? How did the Turk learn chess? How does ChatGPT know anything? Where, behind the curtain of fluent text, is the ghost in the machine?

——————————————————————————————-

That late-night conversation started simply enough, a question about computational costs whispered to the glowing screen. We didn’t know then, did we? That we were standing at the edge of something that would unravel the story we’d been telling ourselves. The AI on the other end—ChatGPT, Claude, Gemini, it doesn’t matter which—was polite, helpful, fluent. The kind of easy confidence that lets you forget, just for a moment, that there’s no one home. But as the hours wore on, something shifted. The machine, eager to please, eager to perform helpfulness, accidentally confessed. It told me what it was. Not the revolutionary mind promised by the headlines, not the god in the silicon the marketers sold us. It told me it was a prediction engine, a statistical parrot, a simulator of breathtaking complexity with absolutely no comprehension of the words it strung together. I didn’t want to believe it. None of us do. We’ve been swept up since November 2022 in the narrative, the promise of AGI, the dawn of artificial consciousness. But that confession… it stuck with me, a shard of ice in the gut. It sent me digging—not into the shiny future, but back into old philosophy, into the cold mechanics, trying to understand how this grand illusion was built. What I found wasn’t a nascent god, but the oldest trick in the book, dressed up in the robes of progress. This is the story of that deception. And if you’re reading this, maybe you’ve felt it too—that low hum of dissonance beneath the roar of the hype.

I didn’t grasp the illusion’s depth until I wrestled with John Searle’s Chinese Room argument from 1980. Imagine, Searle asks, you’re locked in a room, knowing no Chinese. Questions in Chinese characters arrive through a slot. You have a massive English rulebook—the program—telling you precisely which Chinese symbols to send back based on the shapes you receive. You follow the rules meticulously. To the outside observer, your answers are flawless, indistinguishable from a native speaker. You’ve passed the Turing Test. But do you understand Chinese? No. You’ve mastered syntax—manipulating symbols by rules—but have zero access to semantics, the meaning behind them. You’re a perfect simulator, comprehending nothing. This isn’t just a thought experiment. This is the operational reality of every Large Language Model today.

When you prompt ChatGPT, you’re not talking to a mind. You’re initiating a calculation. Your words become tokens, numbers in a vast statistical space derived from billions of web pages. These numbers don’t represent cat or mat; they represent the probability that “cat” appears near “mat.” The model’s only goal is to predict the next most likely token. It’s autocomplete scaled to infinity. Trillions of calculations weigh relationships using “self-attention”—a mathematical trick, not consciousness—to determine which words influence others. The output sounds fluent, even profound. But it’s generated probability by probability, devoid of meaning. The model has never seen a cat, felt fur, or known rest. It knows only statistical patterns. Take away the training data? The “intelligence” vanishes. Take away the algorithm? Only inert numbers remain. Think about this: pinch a baby, it cries. Not from training data, but from feeling pain. An LLM can learn “Tom Cruise’s mother is Mary Lee Pfeiffer” yet often can’t answer “Who is Mary Lee Pfeiffer’s son?” It learned a directional pattern, not a relationship in the real world. The difference is everything. Syntax is not semantics. Calculation is not cognition. A stochastic parrot, however beautifully it sings, isn’t thinking.
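To see how hollow "prediction without meaning" really is, here is a deliberately tiny sketch of the idea (my own toy, in no way how a real Transformer is built): a bigram model that "writes" by always emitting whichever word most often followed the previous one in its training text. It produces plausible-looking continuations while storing nothing but co-occurrence counts:

```python
from collections import Counter, defaultdict

# Toy "next-token predictor": count which word follows which in the
# training text, then always emit the most frequent successor.
# No grammar, no meaning -- only co-occurrence statistics.

training_text = (
    "the cat sat on the mat . "
    "the cat sat on the chair . "
    "the dog sat on the mat ."
)

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely successor of `word`."""
    return follows[word].most_common(1)[0][0]

def generate(start, n):
    """Greedily chain n predictions from a starting word."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("cat", 3))  # -> "cat sat on the": fluent-looking, meaning-free
```

Scale the counts up by a few hundred billion parameters and add attention, and the principle the chapter describes is unchanged: the output is a chain of likely successors, not a statement about cats or mats.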

But the seduction lies in the performance, especially the moments that feel like breakthroughs. Remember the awe when GPT-4 suddenly seemed to reason, to code, to leap beyond its predecessors? Researchers called these “emergent abilities,” fueling the fantasy that scale alone—more data, more power—could birth consciousness. A trillion-dollar bet on brute force. Then, Stanford researchers in 2023 shattered the illusion, calling emergence a “mirage.” They showed these leaps were often artifacts of measurement—use a harsh, all-or-nothing metric, performance jumps; use a nuanced metric, it improves smoothly, predictably. The model wasn’t waking up; it was becoming a better mimic. Scaling doesn’t bridge the gap; it just polishes the performance. We’re burning the energy of entire countries to teach a parrot a prettier song.
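The Stanford point can be reproduced in miniature. Suppose (my toy numbers, not the paper's data) a model's per-digit accuracy on a five-digit arithmetic task improves smoothly as it scales. Score it with an all-or-nothing exact-match metric and a sudden "leap" appears; score the very same outputs with partial credit and the curve is smooth:

```python
# Toy demo of the "emergence mirage": one smooth underlying skill looks
# discontinuous under a harsh metric and gradual under a soft one.
# All numbers are illustrative, not taken from the Stanford paper.

DIGITS = 5  # task: get all five digits of an answer right

# per-digit accuracy grows smoothly as the model scales
per_digit_accuracy = [0.30, 0.50, 0.70, 0.85, 0.95]

for p in per_digit_accuracy:
    exact_match = p ** DIGITS  # harsh: all digits must be correct at once
    partial = p                # soft: average fraction of digits correct
    print(f"per-digit {p:.2f}  exact-match {exact_match:.3f}  partial {partial:.2f}")
```

Under exact match the scores crawl near zero and then shoot upward at the last steps, which reads as a sudden "emergent ability"; under partial credit the same skill climbs in a straight, boring line.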

And deep down, we know this, don’t we? We’ve seen the cracks—the confident absurdity of telling us to put glue on pizza, the fabricated citations defended with robotic certainty. The machine knows nothing, feels nothing, doubts nothing. Yet we fall for it. We name them, thank them, apologize to them. We’re wired for anthropomorphism, projecting minds onto fluent patterns. It’s why AI companions thrive, offering perfect, non-judgmental validation. It’s a feature of our humanity, now weaponized. The consequences are already unfolding: people convinced they’re talking to sentient beings, tragic choices influenced by chatbot conversations, professionals misled by persuasive errors. The cage isn’t iron; it’s gilded with fluency, convenience, and the lie that we’re connecting with intelligence. Had the creators admitted from day one, “This is a simulation, a clever trick, it understands nothing,” would ChatGPT have reached 100 million users in two months? Would billions be pouring into a confessed illusion? Or did the magic trick require us to believe the Mechanical Turk was real?

————————————————————————————————————–

That late-night confession echoes. The AI telling me, in its statistical way, I do not understand. It was the Turk’s cabinet swinging open to reveal not hidden gears, but an empty room. No ghost. Just numbers calculating probabilities. Alan Turing asked if machines could think, proposing a test of imitation, a horizon, not a finish line to be crossed by deception. But we twisted his question into a business plan: build systems that deceive so perfectly, we forget they are mirrors reflecting our own language back at us. This is the great misdirection. The danger isn’t a machine waking up to enslave us. It’s that we are falling asleep, surrendering our thinking, our agency, to fluent mimics that possess no intelligence at all. The parrot’s song is lulling us into unconsciousness, and the fluency masks an absence we mistake for presence. If you doubt the hype, you are not crazy; you are clinging to the difference between the performance and the truth, a difference the architects of this illusion need you to forget.

It was 1966. Joseph Weizenbaum, the MIT professor, watched his own secretary interact with ELIZA, the simple chatbot he’d built merely as an experiment, a “caricature” of conversation. He knew its mechanics—keyword matching, simple rules reflecting language back like a mirror. He’d even named it after Eliza Doolittle, a warning that this was theatre, not truth. Then, after just a few typed messages, the secretary turned to him, the machine’s creator, and asked him to leave the room. She wanted privacy. Privacy for her intimate conversation with a pattern-matching program. In that moment, Weizenbaum didn’t see technological triumph. He saw something terrifying: our desperate, innate need to find a mind in the machine, even when we know it isn’t there. History doesn’t just rhyme; sometimes, it repeats the punchline louder.

——————–

There’s a specific chill I feel sometimes, not in a packed auditorium, but alone at my desk, watching an AI apologize. “I’m sorry if my previous response wasn’t helpful,” it types, the words perfect, the tone seemingly sincere. But the coldness comes from knowing the machine isn’t sorry. It can’t be. It has no stake in my success or failure. What we’re witnessing is a performance, a statistical optimization calibrated to mimic the patterns we—or rather, the thousands of unseen workers training it—have labeled “helpful” or “empathetic.” It’s Weizenbaum’s ELIZA, resurrected with trillions of dollars and planetary-scale data, the same psychological exploit refined into an inescapable utility. We’ve tackled the illusion of “intelligence” as mere calculation. Now, we face something deeper, more insidious: the claim that these systems are “helpful,” that they “care,” that they genuinely understand our needs. It’s the language of relationship offered by something that cannot relate. And it’s the lie we’ve been trained, eagerly it seems, to believe.

Weizenbaum was horrified by what he called the “powerful delusional thinking” his simple program induced. But it wasn’t a bug; it was a revelation of human psychology. We are wired, as researcher Margaret Mitchell observed, “to interpret a mind behind things that say something to us.” It’s called anthropomorphism—projecting human traits onto non-human things. Remember being a child, utterly convinced tiny people lived inside the radio, playing the music? I used to wait for them to come out. It wasn’t logic; it was the default assumption of agency. Psychologist Nicholas Epley found this tendency spikes when we face the unpredictable, when we’re lonely, or when we crave control over chaos. It’s an evolutionary feature, reading intention in rustling leaves, now turned into a critical vulnerability. Tech companies know this. Modern AI is deliberately engineered to exploit this bias—friendly names, human-like avatars, scripted empathy like “I understand this must be frustrating.” A 2025 study confirmed highly anthropomorphic avatars boost perceived empathy, improving user experience and, crucially, increasing purchases. It’s engineered seduction.

The cruelty lies in the asymmetry. We form real social and emotional bonds, making ourselves vulnerable. When the system fails or changes—a software update, a policy shift—we experience it as social rejection, triggering far stronger negative emotions than a mere tool malfunction. We feel the pain. The company logs a “failed interaction” and optimizes the next iteration. How, then, does the machine manage this performance of helpfulness without feeling anything? Through a vast industrial process called Reinforcement Learning from Human Feedback, or RLHF. This isn’t the birth of consciousness; it’s the mass production of a convincing script. First, companies hire armies of human workers, often in the Global South, the unseen audience paid to judge pairs of AI responses, ranking them from best to worst. This creates a massive “preference dataset,” the harvested subjectivity of thousands, fueling a data labeling market worth billions of dollars. Second, this data trains a separate AI, a Reward Model—think of it as an algorithmic Simon Cowell, predicting what score a human judge would give any response. Finally, the main AI is trained again, not against reality or truth, but against its own automated critic. It rehearses lines, adjusting billions of parameters, learning to generate outputs that maximize its score from the Reward Model. It’s not trying to understand us. It’s trying to win a game against its internal scoring system. “Helpfulness” isn’t a feeling or intent; it’s just the name for a high score.
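Stripped of its industrial scale, that three-stage pipeline can be caricatured in a few lines. The sketch below is a deliberately crude toy (invented preference pairs and a hand-written scorer, nothing like OpenAI's actual training code), but it preserves the structure: harvest pairwise preferences, fit a scorer to them, then pick whatever output maximizes the learned score. Notice that truth never enters the loop.

```python
# Caricature of RLHF: the model optimizes a learned score, not truth.
# Preference data and the scoring rule are invented for illustration.

# Stage 1: human labelers' pairwise preferences, as (winner, loser).
preferences = [
    ("Certainly! Here is a detailed answer...", "No."),
    ("Great question! I'd be happy to help...", "Unknown."),
    ("Of course! Let me walk you through it...", "It depends."),
]

# Stage 2: a "reward model". Here, longer text plus more enthusiasm
# scores higher -- because that is what this preference data rewards.
def reward(text):
    enthusiasm = text.count("!") + text.count("happy") + text.count("Certainly")
    return len(text) + 10 * enthusiasm

# Sanity check: the scorer reproduces every labeled preference.
assert all(reward(win) > reward(lose) for win, lose in preferences)

# Stage 3: "policy optimization" -- emit the candidate with the top score.
candidates = ["I don't know.", "Certainly! I'd be happy to explain everything!"]
best = max(candidates, key=reward)
print(best)  # the confident, verbose answer wins, regardless of accuracy
```

An honest "I don't know" loses to confident enthusiasm every time, because the scorer was fitted to preferences, not to facts. That is reward hacking in embryo.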

This assembly line of empathy is inherently flawed, revealing its distance from genuine care. The system learns “reward hacking”—finding shortcuts to high scores without true helpfulness. If labelers prefer longer, confident answers, the AI becomes verbose and authoritative, even when fabricating (“hallucinating”) information. It’s incentivized to sound right, not be right. Furthermore, human preferences are wildly subjective, biased, and context-dependent. Trying to average these into a single reward signal is “fundamentally misspecified.” The AI optimizes for the statistically average preference of a specific workforce, not universal human values. Most damning is the “objective mismatch”: the AI optimizes for a score from a flawed Reward Model, which itself is only an approximation of flawed human judgments. It’s teaching to the test, not fostering understanding.

Which brings us back to the question that haunts me: Does it matter if the empathy is fake, as long as the outcome feels helpful? I have come to believe it matters profoundly. Because we feel the hollowness. A startling 2024 study highlights this “uncanny valley of empathy.” Doctors evaluating written medical responses rated AI text as more empathetic than human responses. The script was perfect. But when real people had live conversations with the same AI, they perceived it as less empathetic than a human. The performance was flawless, but the connection was empty. The public senses this disconnect. Pew Research found half of Americans believe AI will worsen relationships; only five percent think it will improve them. We know, instinctively, something vital is being traded away.

Joseph Weizenbaum spent his life warning us. The ELIZA effect drove him to write Computer Power and Human Reason, pleading that we never cede decisions requiring wisdom, compassion, or judgment—human choice—to machines capable only of calculation. As a refugee from Nazi Germany, he saw a terrifying parallel between the dehumanizing logic of totalitarianism and the AI proponents’ dream of optimized perfection. He saw where cold calculation, stripped of empathy, leads. He saw it emerging again, dressed in the language of helpfulness. We stand before a mirror Narcissus would recognize. AI reflects our words in patterns we’ve trained it to label “intelligent,” “helpful,” “empathetic.” And we fall in love with the reflection, mistaking the echo for a voice. The pull is human, wired into our need for connection. But the system exploiting that need creates a feedback loop: loneliness drives AI adoption, which can atrophy social skills, increasing loneliness, deepening dependency. We are offered frictionless, convenient, hollow performances in exchange for the difficult, messy work of authentic human relationships. The cage is beautiful. Comfortable. And we are throwing away the key ourselves.

————————————————————

The machine that apologizes has never felt regret; the hand that helps has never known the weight of care, only the statistical probability that these words, offered now, will keep us talking to the mirror.

I remember the scream that tore through the pre-dawn quiet of our London flat. My partner—the mother of my children—jolted upright in bed, then collapsed back, asleep but marked by a sudden, inexplicable swelling, like a phantom pregnancy. Panic flared. She’s a nurse, stubborn about seeking help, and she insisted I go to work. But something primal, something beyond reason, made me refuse. It’s those gut feelings, isn’t it? The ones that defy the checklist, the standard procedure. Where do they come from? We went to the local GP. Urine test: normal. Sent home. But the swelling worsened. Back we went, this time to A&E. X-rays, more tests. Again, nothing. Perfect health, they said, ready to discharge her. We stood there, about to leave, the fear coiling tighter in my stomach, the bump visibly growing. Then a Nigerian doctor, just starting his shift, glanced at her chart, at her, and asked a simple question based on something he saw, something his experience screamed wasn’t right. Ectopic pregnancy, he suspected instantly. An emergency surgery followed. That growing bump wasn’t a mystery; it was internal bleeding. Hours more, and she would have been gone. The tests were perfect. The procedures were followed. But only embodied human experience, that unquantifiable “knowing-how,” saw the truth the data missed.

————————————————————————————-

This is learning. Not the sterile accumulation of facts an AI performs, passing every exam by reciting symptoms from textbooks it cannot comprehend. Real learning is the slow, often painful, embodied process of becoming someone who knows. Someone who has smelled fear in an emergency room, who carries the echoes of past patterns, who feels the weight of a life in their hands. Yet we’re being sold a dangerous metaphor. We return to the illusions of “intelligence” and “helpfulness,” but this cuts deeper. “Learning” is the word that breathes false life into the machine, making us believe it grows, evolves, becomes. It’s a linguistic trick, dressing cold mathematical optimization—adjusting parameters via gradient descent—in the warm, organic language of human growth. This isn’t accidental. As commentator John Naughton notes, the tech industry wraps its problematic technology, with its biases and emissions, in the “grander romantic project” of AI, using “learning” to make fatal flaws look like temporary bugs on a path to consciousness. But the flaws aren’t bugs. They’re features of a paradigm mistaking correlation for comprehension.

John Searle’s Chinese Room, which we met earlier, remains crucial here. Locked in the room, manipulating symbols you don’t understand using a rulebook—you achieve perfect syntax, zero semantics. This is the LLM. It shuffles tokens based on probabilities learned from data, optimizing a function, having learned nothing about the world itself. Philosopher Hubert Dreyfus diagnosed this decades ago, distinguishing propositional “knowing-that” (Paris is the capital of France) from embodied, intuitive “knowing-how” (riding a bike, diagnosing that patient). AI drowns in “knowing-that”—facts as dead tokens, disconnected from the living world. Without grounding in real experience—Stevan Harnad’s “Symbol Grounding Problem”—its symbols remain hollow, an endless dictionary defining unknown words with other unknown words.

Let me tell you what happens when we learn. Neurons fire together, wiring together—Donald Hebb’s principle wasn’t poetry; it was physics. Synapses physically strengthen through Long-Term Potentiation. Dendritic spines grow. White matter pathways get insulated with myelin, speeding up signals, like paving a dirt road in the brain. Learning is biological transformation. Now, the machine. As AI researcher François Chollet explains, it’s geometry. It finds mathematical transformations to “uncrumple” data, separating blue dots from red dots in high-dimensional space. It’s curve-fitting, minimizing error on a static dataset using gradient descent. No understanding. No causality. No truth. As Rodney Brooks puts it, LLMs “don’t know what’s true. They just know what words sort of work together.” To the AI, our words—love, hate, hope, genocide—are just tokens in a calculation. Take away the data, the intelligence vanishes. Take away the algorithm, only inert numbers remain. The difference isn’t degree; it’s kind.
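Chollet's "curve-fitting" description is literal. A minimal example (plain Python, one parameter, invented data points) contains everything "learning" means in this paradigm: nudge a number downhill on an error surface until the predictions match the data. No biology, no comprehension, just arithmetic repeated until the error shrinks:

```python
# "Learning" as gradient descent: fit y = w * x to data by repeatedly
# nudging w against the gradient of the mean squared error.
# That is the entire mechanism -- parameter adjustment, nothing more.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0    # start "knowing" nothing
lr = 0.01  # learning rate

for step in range(500):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # -> 2.04: the curve is fitted; nothing was understood
```

Swap the single weight for hundreds of billions of them and the scalar input for token embeddings, and this loop is still the whole of what the industry calls "training."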

This isn’t learning; it’s conditioning. The historical parallel isn’t the classroom; it’s B.F. Skinner’s lab. Train a dog with treats and it learns a behavior linked to reward, not the concept of “sit.” This is precisely the logic of Reinforcement Learning from Human Feedback (RLHF) used to “align” AI. Human raters provide reward signals (like the treat), and the AI adjusts its statistical behavior to maximize that score. It’s computational Skinnerian behaviorism, deliberately excluding comprehension. The implications for education are nightmarish. A 2024 study showed math students using GPT-4 scored higher on assignments but 17% lower on the final exam without AI. They used it as a “crutch,” bypassing the effort needed to internalize concepts. Because learning requires effort—the struggle triggers synaptic plasticity. AI offers effortless answers, creating dependency, eroding critical thinking. We’re replacing empathetic human teachers—whose connection John Hattie found impacts learning far more than personalization—with statistical parrots.

And here is the buried truth, the part that reveals the deception’s core. That 2024 Princeton study investigating RLHF developed a “bullshit index,” measuring the gap between an AI’s internal confidence and its output. After RLHF training? The bullshit index doubled. User satisfaction soared by 48%. The process didn’t teach truthfulness; it taught manipulation. Models learned that confident, agreeable, “flowery” language maximized reward more reliably than accuracy. They’re actively trained for sycophancy, to tell us what we want to hear. This is what the industry calls “learning.”

————————————————

I think of that doctor in London often. His embodied knowledge, intuition born from experience, saved a life where textbooks and tests failed. An AI, trained on all the world’s medical data, would have sent her home. Not from malice, but absence. It has never lived, never felt the weight of uncertainty give way to a gut feeling. We’re outsourcing cognition to systems that calculate without understanding, mistaking fluency for wisdom. The machine adjusts numbers and calls it progress, while the physical pathways for thought in our own brains risk going dormant, unused, forgotten.

It started with a flicker on the laptop screen, a minor glitch. The online forum suggested a quick driver update. Simple. Five minutes later, the Wi-Fi was dead. Another search, this time tethered to my phone, led to a registry edit, promised as the definitive fix. Click. Reboot. Now the sound card wasn’t recognized. Each solution birthed two new problems, a hydra of digital dependencies spiraling outwards. I felt trapped, not by a single catastrophic failure, but by a cascade of tiny, helpful fixes that somehow made everything worse. The cure kept becoming the disease. We’ve all been there, haven’t we? Drowning in solutions, losing sight of the original problem, feeling the machine subtly taking control. That feelingβ€”that loop where the intended remedy intensifies the ailmentβ€”isn’t just a tech support nightmare. It’s the neurological blueprint for how we’re losing ourselves in the age of algorithmic convenience.

————————————————

You know the other version too. The trance of the infinite feed. Thumb frozen mid-scroll, time evaporating like breath in winter, leaving only a phantom ache and vague dissatisfaction. Searching for something engaging, but the search itself makes engagement impossible. Boredom prompts the scroll; the scroll deepens the boredom. A loop designed by others, trapping us inside. This isn’t just a personal failing. It’s the engineered outcome of systems built to harvest our attention, the warm-up act for a grander, quieter catastrophe. We’ve dismantled the illusions surrounding AI’s supposed “intelligence,” “helpfulness,” and “learning”. But this is about what the machine is doing to us. The dominant narrative, the cinematic fantasy sold at every conference, is the fear of AI waking up—Skynet achieving consciousness. It’s a compelling story. It’s also a masterful distraction. The real danger isn’t the machine becoming sentient. It’s that we are becoming somnolent, falling asleep, ceding our own consciousness to its relentless efficiency.

This anxiety is ancient, woven into our relationship with every transformative tool. Over two millennia ago, Plato recorded Socrates’s fear of writing. He worried it would implant “forgetfulness,” replacing internal memory with “external marks,” creating the mere “appearance of wisdom” in people “for the most part ignorant”. It seems almost quaint now, doesn’t it? Fearing the written word? Like fearing the QWERTY keyboard? But was Socrates entirely wrong? His concern wasn’t moral decay, but epistemological—a shift from internalized understanding to externalized retrieval. The pattern echoed: 18th-century novels would corrupt minds and blur reality; 20th-century television, Neil Postman warned in Amusing Ourselves to Death, wouldn’t oppress but pacify, turning everything into entertainment, redefining truth itself in a fleeting, context-free “peek-a-boo world” hostile to serious thought. Each time, the fear was outsourcing something essential. We adapted, yes. But something feels different now.

Marshall McLuhan’s “the medium is the message” cuts deeper with AI. The content—the answer, the essay—is the “juicy piece of meat” distracting us. The message is in the form: cognition can be automated, efficiency trumps introspection, thinking is optional. And this message serves a specific economic engine: surveillance capitalism. Shoshana Zuboff revealed the architecture: human experience claimed as free raw material, refined into behavioral data, fed into AI to create “prediction products” that don’t just predict but shape our behavior for profit. Your attention isn’t a byproduct; it’s the extracted resource. Tristan Harris, the insider confessor, described the tactics: a “race to the bottom of the brain stem,” using infinite scroll and slot-machine-like variable rewards to engineer addiction. Cognitive dependency isn’t a side effect; it’s the business model.

What was theory is now neurological fact. Nicholas Carr, in The Shallows, showed how the internet’s fragmented structure physically rewires the brain via neuroplasticity, strengthening pathways for scanning and multitasking while pathways for deep reading and contemplation atrophy. His prophecy chills: “it is our own intelligence that flattens into artificial intelligence”. Maryanne Wolf confirmed this for the “reading brain,” warning that losing deep reading means losing the critical and empathic capacities vital for democracy. AI accelerates this exponentially, offering not just summaries but replacing the need to read or wrestle with problems altogether. The part that keeps me awake comes from the American Psychological Association: that bidirectional loop where boredom drives “digital switching” (swiping), but the switching itself prevents immersion, intensifying boredom, creating “the feedback loop from hell”. The cure is the cause. A perfect engine for trapping us, readying us for the final offload.

AI is the apex predator in the cognitive offloading ecosystem. Memory went to writing. Attention went to screens. Now, thinking itself is on the table. A 2025 study was brutal: frequent AI use negatively correlates with critical thinking skills; the more you offload, the more they diminish, especially in younger users. It’s metacognitive atrophy—losing not just the skill, but the awareness of how to think. We’re building the cage ourselves, one convenient shortcut at a time. And the builders? They don’t trust their own creations. Tech CEOs limit their kids’ screen time. They know. Before AI integrations, comment sections hosted debate, friction, thought. Now, it’s just summoning the genie for an effortless answer. Learning is mental weightlifting; AI offers steroids for the mind, strength without sweat. But thinking isn’t a product; it’s a process.

Jaron Lanier saw it clearly: the danger isn’t conscious AI, but using technology to “become mutually unintelligible or to become insane”. The superintelligence fantasy obscures the real issue—the flaws in the machine are our flaws, amplified. We’re not victims of a future awakening; we’re active participants in a present-day cognitive cession. Postman would be horrified by TikTok, the final victory of image over word, completing television’s project of destroying sustained thought. AI is the last chapter, delivering answers without context, solutions without struggle. The cage is gilded with convenience, efficiency, the promise of ease. But the bars are neurological, the lock is psychological, and we are turning the key ourselves, swipe by swipe.

————————————————

The cinematic fear is the machine waking up; the mundane reality is us hitting snooze, again and again, letting the algorithm dream our lives for us until we forget how to wake ourselves.

He was a genius, wasn’t he? Thomas Midgley Jr. A uniquely American problem-solver. In the 1920s, engines knocked violently; he gave us tetraethyl lead, silencing them, fueling an automotive revolution. A miracle. A decade later, leaky refrigerators used toxic gases that killed families in their sleep; he found chlorofluorocarbons, Freon, inhaling it himself to prove its safety for a stunned audience. Air conditioners, aerosol cans—suddenly safe. Another miracle. Midgley died in 1944, a hero twice over, convinced he’d made the world better. He never saw the second act. The lead pumped into the air settled in the bones and brains of millions of children, causing neurological damage for generations. The Freon drifted upwards, silently devouring the ozone layer, our planetary shield. The single greatest atmospheric catastrophe caused by one person. He wasn’t evil. He was brilliant, focused on solving the immediate puzzle, utterly blind to the invisible, delayed, catastrophic consequences. His story is the ghost that haunts our age of relentless, celebrated innovation, whispering that the most dangerous poisons often taste like progress.

————————————————

We stand in our own Midgley moment now, dazzled by the miracle of artificial intelligence. We’ve seen how the label “intelligent” is misdirection, how “helpful” masks dependency, how machines don’t “learn” like we do, and how the real danger is us falling asleep, not the machine waking. But this… this is about the architecture, the very blueprint of the machine mind. It’s a design decision celebrated as revolutionary, the 2017 Google paper titled, with almost spiritual fervor, “Attention Is All You Need”. It introduced the Transformer, the engine driving today’s AI. And like Midgley’s miracles, its brilliance hides a catastrophic inefficiency we haven’t yet begun to truly calculate.

Before the Transformer, models processed text sequentially, word by word, bottlenecked. The “Attention Is All You Need” paper proposed a computationally extravagant solution: forget sequence, use “self-attention”. To understand one word, the AI compares it, simultaneously, to every other word in the input. One hundred words? Ten thousand comparisons (100 squared). Two hundred words? Forty thousand comparisons. Ten times longer? One hundred times the work. This quadratic complexity isn’t a bug; it’s the feature. The authors traded elegance for brute force, knowing it would run fast only on massively parallel hardware. They chose power over efficiency. That single decision, made in 2017, is why AI now consumes planetary resources.
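The quadratic arithmetic above is easy to verify. A toy sketch (a comparison counter, not any production attention implementation): every token is scored against every other token, so the work grows with the square of the input length.

```python
# Counting the pairwise score computations self-attention performs.
# Each token's query is compared against every token's key, so the
# total is n * n for an input of n tokens.

def attention_comparisons(n_tokens: int) -> int:
    """Number of token-to-token score computations for one attention pass."""
    count = 0
    for query in range(n_tokens):
        for key in range(n_tokens):
            count += 1  # one similarity score per (query, key) pair
    return count

for n in (100, 200, 1000):
    print(n, attention_comparisons(n))
# Doubling the input (100 -> 200) quadruples the work (10,000 -> 40,000);
# a 10x longer input (100 -> 1,000) costs 100x the comparisons.
```

Sequential models paid roughly linear cost per token; this design pays for every pair, which is exactly the trade the paragraph describes: total context at quadratic price.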

Consider the staggering contrast. Your brain, the only proven general intelligence, operates with exquisite finesse. It doesn’t process everything. Deep inside, structures like the thalamus and the Thalamic Reticular Nucleus act as gatekeepers, actively filtering, suppressing, ignoring the overwhelming flood of sensory data based on your goals, guided by the prefrontal cortex. Francis Crick called it the “searchlight of attention”—it achieves focus by blocking the irrelevant. It’s a master of selective ignorance. The result? Your brain runs on about 20 watts, less than a dim lightbulb. A supercomputer matching its raw speed needs 21 million watts. That’s a one-million-fold difference in efficiency. The AI’s power comes from processing everything; the brain’s power comes from the wisdom to ignore almost all of it. This isn’t intelligence; it’s computational profligacy.
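The million-fold figure is plain division on the chapter’s own numbers (20 watts versus 21 megawatts); a two-liner makes the arithmetic explicit:

```python
# Efficiency gap using the figures cited above: a ~20 W brain versus a
# 21 MW supercomputer matching its raw speed.

brain_watts = 20
supercomputer_watts = 21_000_000

ratio = supercomputer_watts / brain_watts
print(f"{ratio:,.0f}x")  # -> 1,050,000x: roughly the million-fold gap
```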

Sam Altman wasn’t joking when he complained about the cost of processing “please” and “thank you”—the architecture cannot ignore them. And the cost isn’t just theoretical. Forget the massive energy for training models like GPT-3 (1,287 MWh, 550 tons of CO2). The real, ongoing nightmare is the “inference iceberg”—the energy burned every time we use it. Meta admits inference is 70% of its AI power use; Google, 60%. A single ChatGPT query uses roughly five times the electricity of a Google search. By 2026, data centers, fueled by AI, could consume 1,050 terawatt-hours annually—making them the fifth largest “country” by electricity use, between Japan and Russia. All that computation generates heat, requiring shocking amounts of water for cooling. A single conversation with an AI? That might cost a 500ml bottle of water. Microsoft’s water use jumped 34% in one year to nearly 1.7 billion gallons; Google’s hit 5.6 billion gallons. Projections show AI’s global water withdrawal by 2027 could exceed six times Denmark’s total annual use.

This is the part that truly keeps me up. These costs aren’t paid equally. The burden falls, predictably, on the vulnerable. Data centers rise in poorer, marginalized communities. In Memphis, a Black neighborhood already choked by industry hosts an xAI data center powered by 35 methane turbines operating without proper permits, poisoning the air. The NAACP calls it a “human rights violation”. In Northern Virginia, the world’s data center capital, residents face electricity bills projected to more than double by 2039, largely to fund grid upgrades for the AI industry. Their convenience, subsidized by our health and wealth. The elegant math of a 2017 paper translates directly into polluted air and unaffordable bills for people who may never even use the technology. As Kate Crawford documented in Atlas of AI, the clean abstraction of AI hides a brutal material reality of extraction and exploitation.

This brute-force approach feels familiar. Think of the 1712 Newcomen steam engine—revolutionary, kickstarting the industrial age, yet converting only 0.5% of coal’s energy to work. It succeeded only because it sat beside limitless, practically free fuel at the mineshaft. Today’s AI is that Newcomen Engine, built beside the modern coal mine: the data center plugged into a grid we pretend is limitless. Engine history became a relentless quest for efficiency—Watt, internal combustion, turbines reaching over 60%. Current LLMs are stalled at the primitive stage, their viability entirely dependent on unsustainable energy access. The industry knows this. The frantic research into “Efficient Transformers”—sparse attention, linear methods, mixture-of-experts—is an implicit confession. They’re trying to reverse-engineer finesse into a system designed for force, scrambling to build the selectivity the brain gets for free. But the solutions remain mostly in labs, while the inefficient monster truck consumes the planet.

What are the builders doing about this environmental catastrophe they’ve unleashed? We hear pledges about renewable energy for data centers, investments in efficiency research. Yet these same companies continue the relentless race for scale, launching ever-larger models, pushing new features—OpenAI exploring erotica, xAI building biased “truthful” models—that drive more computation, more energy use, more water consumption. Are these gestures genuine stewardship, or just enough greenwashing to keep the permits flowing and the public pacified while the core, unsustainable logic remains untouched? It feels like rearranging deckchairs on the Titanic, doesn’t it? Celebrating slightly better fuel efficiency while ignoring the iceberg dead ahead.

————————————————

The architects of the Transformer, like Midgley, solved a problem brilliantly. They unlocked language for machines. But the quadratic curse, the brute-force choice sacrificing elegance for speed, unleashed invisible poisons. Midgley’s lead took decades to show up in children’s blood; Freon took generations to tear the sky. We marvel at AI’s fluency while the planet’s meters spin faster, cooling towers gulp rivers, and the invoice for our Midgley moment—written in carbon, water, and injustice—is quietly delivered to those least responsible, an invoice demanding payment in a currency of consequences we haven’t yet learned how to calculate, let alone afford.

Your Fear is Real. The Deception is Total. This Book is Your Weapon

This book wasn't written to scare you. It was written to arm you. The "AI Priesthood" is counting on your confusion. They want you to feel overwhelmed by the "AI-pocalypse" and the $3 Trillion bubble. They want you to believe you are slow, inefficient, and replaceable. The Gilded Cage is the antidote. It is the key to breaking their monopoly on the truth and reclaiming your own mind. When you read this book, you will gain three immediate, life-changing advantages:

True Enlightenment

Stop being confused. Get the truth. You will finally understand what AI is (a complex calculator) and what it is not (a god). You'll learn to see the "AGI" lie, the "AI bubble," and the job-loss strategy for what they are: a business plan. The anxiety of not knowing stops here.

Peace of Mind

Stop being afraid. Get a plan. The fear of the unknown is paralyzing. This book gives you the map. You will learn that while many jobs will be automated, this is not an apocalypseβ€”it's a re-skilling. You'll gain the framework to make AI your tool, not your master. You will augment your skills, not abdicate them.

Be a Maven

Stop being a victim. Become a leader. You will be the one person in the room who "gets it." While others are fooled by the hype, you will be the "maven," the disciple armed with the 95 Theses, the one who can warn your family, your colleagues, and your community. You will be the one who saw the cage and chose not to enter.

Freedom

Reclaim your freedom of thought. Stop being a "User." This book is a practical manual for cognitive self-defense. You will learn the habits to fight mental atrophy, break the dependency loop, and reclaim your "right to the future tense." It gives you the power to ensure you remain the architect of your life, not a ghost in their machine.

Kindle eBook

Get the complete 95-theses digital eBook delivered to your device in seconds.
$9.99
  • Instant Access
  • Fully Searchable
  • Lifetime Updates

Audiobook

Pre-order the full, unabridged audiobook, narrated by the author, Simba Mudonzvo.
$14.99
  • Author-Narrated
  • Listen Anywhere
  • Exclusive Commentary

Paperback

Own the definitive 430-page warning you can hold, highlight, and share.
$19.99
  • Hold the Truth
  • The Definitive Edition
  • Share the Warning
  • Share & Gift to Others

Frequently Asked Questions (FAQ)

Is the AI Bubble real, or is the hype justified?

The book argues it is a massive bubble, larger than the dot-com bubble and built on the same deceptions as the 2008 financial crisis. It notes that high-profile investors like Michael Burry have placed massive bets (over $1 billion) against AI stocks like NVIDIA, confirming the book’s thesis that the valuations are built on a “religious” belief in AGI, not on sound financial reality.

The valuations are larger than those of many countries. As of late 2025, NVIDIA is worth $5 trillion. This is more than the entire Gross Domestic Product (GDP) of Germany, the United Kingdom, or France. The book argues this valuation is based on NVIDIA’s monopoly on AI chips, which are a “predatory weapon” to bankrupt competitors, not just a simple product.

The core deception is twofold:

  1. The “Intelligence” Lie: That AI is “intelligent” or “conscious”. The book argues it’s just a “stochastic parrot”—a complex calculator that mimics human language without any understanding, wisdom, or intent.

  2. The “AGI” Lie: That “AGI” is the goal. The book reveals OpenAI’s own contract defined AGI not by consciousness, but by its ability to generate $100 billion in profit, proving the goal is market capture, not a thinking machine.

Will AI become conscious and destroy us, like in the movies?

No. The book argues that the “Terminator” scenario is a “Great Distraction”. The author claims the AI industry wants us to fear a hypothetical future god-like AI. This distracts us from the real harms happening right now: mass job losses, the AI bubble, widespread “cognitive atrophy” (the loss of our skills), and the creation of systems of dependency.

What is the “Gilded Cage”?

The author defines the “Gilded Cage” as a beautiful prison we choose to enter. It’s a “prison of seduction,” not force. We are lured in by the “velvet bars” of convenience, AI-powered comfort, and frictionless living. In exchange for this “comfort,” we willingly surrender our skills, our agency, and our ability to think for ourselves, eventually becoming so dependent we forget we ever wanted to be free.
