
The AGI Con: A 250-Year History of Selling the Same Miracle Over and Over Again

The Lede

Here’s what history teaches us about grand deceptions: they don’t repeat exactly, but they do share a signature—a seductive promise of a transformative leap, a new technology so revolutionary it will remake civilization as we know it, backed by charismatic prophets who tell us this time is different! In 1770, it was the Mechanical Turk promising mechanical genius with a chess master hidden in a box. In 1637, it was tulip bulbs promising infinite wealth until the bubble popped. In 1989, it was Cold Fusion promising limitless energy that nobody could replicate. In 2018, it was Theranos promising a drop-of-blood medical revolution built on falsified data.​

In 2025, it’s Artificial General Intelligence—and Sam Altman just announced that OpenAI has “solved the path to AGI” and is now working on superintelligence. The company is valued at $500 billion despite losing $8 billion annually. Surveyed AI researchers put a 50% chance of AGI somewhere between 2040 and 2061, while OpenAI’s CEO claims it’ll arrive by late 2025. So which is it? If that sounds familiar, it’s because you’ve seen this con before. Twenty times, in fact. The only thing that’s changed is the special effects budget and the complexity of the financial engineering hiding the humans in the box.

The Investigation: The Greatest Hits of Human Gullibility

Act I: When Machines That Can’t Think Convince Us They’re Thinking

The Mechanical Turk is the ur-example, the foundational con that every tech hype cycle has been quietly plagiarizing for 255 years. Wolfgang von Kempelen built a chess-playing automaton in 1770 that defeated Napoleon, Benjamin Franklin, and the brightest minds of the Enlightenment—not because it could think, but because a human chess master was crammed inside the cabinet manipulating the pieces. The deception worked for 84 years because everyone wanted to believe machines could think, and questioning the illusion meant looking like a killjoy who didn’t appreciate progress.​

Fast forward to the early 1900s, and we get Clever Hans, the “counting” horse who appeared to solve arithmetic by tapping his hoof. The con wasn’t intentional, but it was brutally effective: Hans wasn’t doing math—he was reading unconscious physical cues from his questioners, stopping when he sensed their tension release. He wasn’t intelligent; he was reflecting human expectation back at humans who desperately wanted to believe animals could reason.​

Sound familiar? Today’s AI doesn’t think—it pattern-matches against billions of labeled examples created by underpaid Filipino workers who teach it to seem intelligent by rewarding outputs that humans rate as “helpful”. It’s Clever Hans with a $500 billion valuation and a marketing department that calls human-supervised pattern recognition “machine learning”.​​

Then there’s the delightful case of N-Rays in 1903, where French physicist Prosper-René Blondlot announced the discovery of a new form of radiation. Dozens of scientists published papers confirming its existence—until a skeptical American physicist secretly removed a key prism during a demonstration and the “readings” continued anyway. N-Rays were entirely illusory, a collective psychological phenomenon where scientists saw what they expected to see. The parallel to today’s AGI hype is almost too perfect: researchers “detecting” emergent intelligence in systems that are really just scaling up pattern-matching, seeing reasoning where there’s only statistical correlation.​

Act II: Financial Bubbles Dressed as Revolutions

If the Mechanical Turk is the con’s technical blueprint, then Tulip Mania is its financial model. In 1637, the Dutch convinced themselves that tulip bulb prices would rise without limit, creating wealth from nothing—a collective delusion that a rare flower could defy economic gravity until the bubble catastrophically popped. The deception wasn’t the tulips themselves; it was the belief that speculative value could compound infinitely without underlying productive capacity.

The South Sea Bubble in 1720 perfected this playbook: wildly exaggerated claims about future wealth from South Seas trade fueled a speculative frenzy based on a mirage, sustained by corporate propaganda and public mania. Replace “South Seas trade” with “artificial general intelligence” and “corporate propaganda” with “Sam Altman blog posts,” and you’ve got OpenAI’s $500 billion valuation built on $11.6 billion in projected revenue and $8 billion in annual losses.​​

OpenAI’s CFO recently told investors to expect the company to spend “trillions of dollars on data center construction in the not very distant future”—even as Altman himself admits “investors as a whole are overexcited about A.I.”. That’s not a business plan; it’s a dare. It’s the tulip trader in 1636 saying “yes, this bulb costs more than a house, but just imagine how much it’ll be worth next year.” Except this time, the tulip is digital, the house is a data center, and NVIDIA is both the vendor and the investor in a circular financing loop that would make South Sea Company executives blush.

Act III: Pseudoscience With the Veneer of Legitimacy

Some of history’s best cons worked because they wrapped absurdity in the language of science. Phrenology—the belief that skull shape determined character and intelligence—offered a simple, physical key to understanding the complex human mind while justifying racial and social prejudices with a veneer of scientific authority. It was nonsense, but it was legible nonsense that promised to make the mysterious measurable.​

The “Mozart Effect” in the 1990s followed the same playbook: a limited study showing a temporary, minor effect on adult spatial reasoning was wildly exaggerated into a lucrative industry claiming that playing Mozart to babies would permanently increase their intelligence. Parents wanted a simple intervention to guarantee their children’s success, so they paid for the illusion.​

Today’s version is “Reinforcement Learning from Human Feedback” (RLHF)—a technically legitimate training method that the AI industry has dressed up as autonomous machine learning when it’s really just thousands of Kenyan and Filipino workers clicking “Is this helpful?” ten million times. The deception isn’t that RLHF exists; it’s that we call it “artificial intelligence” instead of “crowdsourced human judgment at scale.” It’s phrenology for the algorithmic age: a simple, legible framework that obscures a far messier reality while promising to unlock intelligence itself.​
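
For the curious, here is roughly what “crowdsourced human judgment at scale” boils down to once the branding is stripped away: humans click which answer they prefer, and a model is fit to predict those clicks. The sketch below is a deliberately toy Python version—invented numbers, invented feature stand-ins, the standard Bradley-Terry preference loss—and emphatically not anyone’s actual production pipeline.

```python
# Toy illustration of the "crowdsourced human judgment" at the core of RLHF-style
# preference tuning. All data and names are invented for illustration.
import math

# Step 1: human raters compare pairs of model outputs and click the one they prefer.
# Each entry is (features_of_preferred_answer, features_of_rejected_answer);
# the two numbers stand in for whatever signals the answers carry (length, tone, ...).
human_clicks = [
    ((0.9, 0.8), (0.2, 0.1)),
    ((0.7, 0.9), (0.4, 0.3)),
    ((0.8, 0.6), (0.1, 0.4)),
] * 100  # pretend thousands of raters clicked

# Step 2: fit a linear "reward model" so preferred answers score higher than rejected
# ones, using the standard Bradley-Terry / logistic preference loss.
w = [0.0, 0.0]
lr = 0.1
for _ in range(50):
    for preferred, rejected in human_clicks:
        margin = sum(wi * (p - r) for wi, p, r in zip(w, preferred, rejected))
        grad_scale = 1.0 / (1.0 + math.exp(margin))  # gradient of -log(sigmoid(margin))
        w = [wi + lr * grad_scale * (p - r) for wi, p, r in zip(w, preferred, rejected)]

def reward(features):
    """Score an answer the way the raters, statistically, would have."""
    return sum(wi * f for wi, f in zip(w, features))

# The "intelligence" is just the raters' clicks, compressed into two weights.
print(reward((0.9, 0.8)), ">", reward((0.2, 0.1)))
```

The punchline: every judgment the reward model makes is an average of what the humans in the box already said.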

Act IV: The Prophets and the Believers

Every great con needs a charismatic prophet, and every prophet needs believers who want so desperately for the miracle to be real that they’ll ignore the levers and pulleys. Elizabeth Holmes promised a healthcare revolution with a single drop of blood. The technology didn’t work, but she raised $700 million by exploiting Silicon Valley’s desperate belief that disruption was always one pitch deck away from changing the world. Theranos was a mirage built on secrecy, falsified data, and storytelling—sustained not by evidence but by the collective agreement not to ask hard questions until federal prosecutors forced the issue.​

In January 2025, Sam Altman posted on his blog: “We are now confident we know how to build AGI as we have traditionally understood it”. In August 2025, after GPT-5’s underwhelming launch, he told reporters that investors are “overexcited about A.I.” and warned of a bubble. In September, he admitted AGI has become a “pointless term”. And throughout it all, OpenAI’s valuation climbed from $300 billion to $500 billion while the company projected continued cash burn through 2029.

This isn’t a business strategy—it’s a theological movement with quarterly earnings calls. Altman is the high priest, AGI is the Second Coming, and questioning the timeline means excommunication from the Church of Exponential Progress. When AI researchers surveyed in 2025 predict a 50% probability of AGI between 2040 and 2061, but OpenAI’s CEO claims it’s months away, one of two things is true: either Altman has access to a revolutionary breakthrough that the entire AI research community has missed, or he’s running the most expensive Hail Mary in Silicon Valley history.

The Absurdity: Why We Keep Falling for the Same Con

Here’s the pattern that connects all 20 historical deceptions to the AGI pursuit: they offer a seductively simple answer or a monumental leap forward, leveraging new technology to obscure a complex, flawed, or outright fraudulent reality. They work because they perfectly mirror our deepest hopes and desires—for genius machines, infinite wealth, simplified explanations, or civilizational transformation.​

The cryptocurrency boom promised a “decentralized utopia” free from banks and governments. In reality, it resulted in extreme wealth centralization, massive energy consumption, and became a vehicle for speculation and fraud—replicating the very systems it claimed to replace. But people wanted to believe in financial liberation, so they bought the coins and ignored the contradictions until the exchanges collapsed.​

The “Like” button deception convinced an entire generation that social validation quantified by engagement metrics was a true measure of an idea’s worth. It created an attention economy that rewards engagement over truth, but we embraced it because we wanted a simple, numerical proxy for value and influence.​

Visionary CEO (at a packed tech conference, wearing the mandatory black turtleneck): “We’ve solved the path to AGI. We’re now working on superintelligence. This year, AI agents will join the workforce and materially change the output of companies. Expect us to spend trillions on infrastructure.”

Skeptical Historian (reading the same press release for the twentieth time in 250 years): “Let me guess—it requires unlimited investment, the timeline keeps accelerating despite no breakthrough in fundamental architecture, and questioning it makes you a Luddite who ‘doesn’t get it.’ Also, you’re burning $8 billion a year while claiming exponential returns are just around the corner. I’ve literally seen this exact pitch in 1637, 1720, 1989, and 2017. The only thing that changes is whether the prophet wears a powdered wig or a hoodie.”

The AGI con works because admitting it’s a con means admitting that Silicon Valley’s most valuable companies are built on glorified pattern-matching autocomplete, that “machine learning” is mostly human learning at $2/hour in Manila, and that the $500 billion valuation is a South Sea Bubble with better PR. Nobody wants to be the killjoy. Nobody wants to look unsophisticated. So we collectively agree not to ask why AGI timelines keep shrinking even as capabilities plateau, why hallucinations increase with model complexity, or why the path to superintelligence requires “trillions of dollars” in infrastructure when you’ve allegedly already “solved” the core problem.

The Judgment: History Doesn’t Repeat, But This Con Does

Here’s what makes the AGI deception different from the Mechanical Turk or Theranos: the financial scale is unprecedented, the believers include the world’s most sophisticated investors, and the prophet has convinced himself that the con is real. Sam Altman isn’t Edgar Allan Poe writing about the Mechanical Turk’s “very ingenious deception”—he’s Wolfgang von Kempelen touring Europe with the cabinet, genuinely believing that if he just builds a bigger box and hires more chess masters, the machine will eventually play on its own.​​

The verdict is this: AGI, as currently pursued and promoted by the AI industry, is not a scientific inevitability or a technological breakthrough waiting to be unlocked. It is a 250-year-old con with a GPU upgrade—a collective delusion sustained by circular financing, exploited labor dressed as “machine learning,” and a prophet class that has convinced investors to fund a $500 billion tulip bulb that produces excellent autocomplete but no path to consciousness.​​

Every historical deception follows the same arc: the prophets promise transformation, the believers suspend skepticism, the money floods in, the cracks appear, and eventually reality reasserts itself. The Mechanical Turk burned in a fire after 84 years. Tulip Mania collapsed when someone finally asked what a flower was actually worth. The South Sea Bubble popped when investors realized trade projections were fantasy. Theranos imploded when a journalist asked to see the machines work.​

OpenAI will follow the same trajectory—not because AI is worthless, but because AGI as promised (general intelligence, autonomous reasoning, superintelligence by 2027) is the same mirage that’s been sold since 1770: a genius machine that doesn’t exist, funded by people who want so desperately to believe that they’ll ignore the humans in the box until the whole apparatus burns down.​​

The great cons work not because they’re plausible, but because they’re irresistible. And the greatest con of all is convincing an entire industry that this time is different—when history is screaming that it’s always, always the same.​

The Aftermath: Your Turn

History doesn’t repeat itself, but it does rhyme—in prophets and believers, in bubbles and crashes, and in the seductive belief that the miracle is always just one more funding round away.

So here’s what we want to know:

  1. Which historical deception does the AGI hype most remind you of? Is it the Mechanical Turk (genius in a box), Tulip Mania (infinite valuation with no underlying value), Theranos (charismatic prophet with vaporware), or something else entirely? Bonus points for deceptions we missed.
  2. Have you worked in AI and seen the “humans in the box”? If you’re a data labeler, content moderator, or RLHF annotator—the invisible workforce teaching machines to seem intelligent—what’s the real story behind the automation? We want to hear from the people doing the “machine learning.”
  3. What’s your favorite AGI timeline prediction failure? Sam Altman claims AGI is solved and arriving in 2025. AI researchers say 50% probability by 2040-2061. Who’s your favorite prophet of the imminent singularity, and when did their prediction spectacularly miss the mark?

What do you think?


Written by Simba

TechOnion Founder - Satirist, AI Whisperer, Recovering SEO Addict, Liverpool Fan and Author of Clickonomics.

