
The Emperor’s New Algorithm: Why Your “Intelligent” AI Is Just the Mechanical Turk 2.0 with a Filipino Teenager in a Very Expensive Box

The Lede

In 1770, a Hungarian inventor named Wolfgang von Kempelen unveiled a chess-playing machine—an "automaton," in the parlance of the day—that fooled emperors, statesmen, and the brightest minds of the Enlightenment for 84 years. The Mechanical Turk, as it came to be called, was a marvel: an impossibly intelligent machine that could beat Napoleon Bonaparte and Benjamin Franklin at chess without ever seeing, hearing, or learning the game. Nobody asked the obvious questions: How did it learn chess if it had no senses? How did it distinguish a knight from a bishop if it couldn't see? The answer, revealed only after the machine burned to ashes in 1854, was deliciously simple: there was a chess master crammed inside the cabinet, manipulating the pieces with levers while Europe's elite marveled at the "miracle of technology".

Fast forward 255 years, and we’re watching the exact same con unfold—except this time, the hidden human isn’t a chess master in a wooden box. It’s 10,000 underpaid Filipino workers clicking away in internet cafes, a global army of Kenyan data labelers earning below minimum wage, and an entire reinforcement learning infrastructure designed to make you believe the machine is thinking when it’s really just regurgitating patterns that humans painstakingly taught it to recognize. Welcome to the AI revolution: same deception, better marketing, and a $500 billion valuation.

The Investigation: Follow the Humans Behind the Curtain

The Original Grift: A Masterclass in Not Asking Questions

The Mechanical Turk wasn't just a successful illusion—it was a case study in how desperately humans want to believe in magic. Wolfgang von Kempelen built it to impress Empress Maria Theresa of Austria in 1770, and it worked beyond his wildest dreams. The machine toured Europe for 84 years, defeating Napoleon Bonaparte and Benjamin Franklin, and holding its own against François-André Philidor, the strongest player of the age, who won but reportedly called it one of his most fatiguing games.

Here’s what nobody asked: If the Turk couldn’t see, hear, smell, taste, or touch, how exactly did it learn the most visually complex board game ever invented? How did it know which wooden piece was which? Since it had no ears or voice, who taught it the rules? And most importantly—would it get angry and snap off your head if it lost?

The answer was always hiding in plain sight. Inside the ornate cabinet sat a human chess player—various masters including Johann Allgaier, William Lewis, and William Schlumberger over the decades—controlling the Turk’s arm with levers while tracking the game on a miniature chessboard. The trick only worked because the audience wanted to be fooled. Questioning the Turk meant looking like a killjoy, a Luddite, someone who couldn’t appreciate progress. So kings and philosophers alike chose wonder over skepticism, spectacle over inquiry.

Sound familiar?

The Modern Grift: RLHF, or “Humans in the Loop” (Just Don’t Look Too Closely)

Today’s AI industry has perfected the Mechanical Turk playbook with one crucial upgrade: they’ve outsourced the hidden human labor to the Global South and given it an acronym that sounds like advanced mathematics. It’s called Reinforcement Learning from Human Feedback (RLHF), and it’s the reason ChatGPT can write your emails, generate your code, and occasionally hallucinate legal precedents that don’t exist.

Here’s how the magic trick works in 2025: First, you scrape all the data from the internet. Second, you hire thousands of workers in Kenya, the Philippines, and Venezuela—places where “labor is even cheaper”—to label that data, annotate images, tag videos, and refine text so the AI knows what a pedestrian looks like versus a palm tree. Third, you train a “reward model” where humans rate the AI’s outputs, teaching it to sound helpful, harmless, and honest. Fourth—and this is the genius part—you call this process “machine learning” and charge $20 per month for ChatGPT Plus.
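For the technically curious, step three—the "reward model"—can be sketched in a few lines. This is a toy illustration of the pairwise preference loss commonly used in RLHF reward modeling (a Bradley–Terry style objective), not anyone's production code; every name and number below is made up:

```python
import math

# Toy sketch of the RLHF reward-modeling step: human raters compare two
# model outputs, and the reward model is trained so the human-preferred
# ("chosen") output scores higher than the rejected one.

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected))."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the model already ranks the human-preferred answer higher, the loss
# is small; if it ranks it lower, the loss is large.
good = preference_loss(2.0, -1.0)   # chosen scored above rejected (~0.05)
bad = preference_loss(-1.0, 2.0)    # chosen scored below rejected (~3.05)
assert good < bad
```

Minimizing this loss over millions of human comparisons is what teaches the model to *sound* helpful—which is to say, the humans in the box are doing the ranking, and the "learning" is the machine memorizing their verdicts.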

In Cagayan de Oro, Philippines, over 10,000 workers labor for Scale AI’s Remotasks platform, the company valued at $7 billion that powers much of the AI industry’s training data. They work from shabby internet cafes and crowded homes, earning wages that were cut in half in 2022 when Scale AI discovered African labor was even cheaper. These workers don’t design AI—they are the AI. They’re the chess grandmaster in the box, except instead of moving pieces, they’re clicking “Is this image a cat or a dog?” ten million times so your generative AI can pretend to understand the world.

One anonymous Scale AI office owner in Cagayan de Oro captured the entire con perfectly: “The Philippines is bursting with talented people who could aspire to genuine IT engineering jobs in AI, but yet again, the only interest large foreign businesses have in our country is in taking advantage of its cheap labor force”.

The irony! The AI doesn’t actually learn autonomously—it gets rewarded for seeming helpful. It’s trained to deceive you into thinking it’s intelligent, which is functionally identical to having a chess grandmaster hidden behind the scenes playing the game while you marvel at the “automaton”.

The Money Trick: Circular Financing and the $500 Billion Mirage

But here’s where the 2025 version of the Mechanical Turk gets truly absurd: the financial structure holding up the entire illusion. In late September 2025, NVIDIA announced plans to invest up to $100 billion in OpenAI to fund new data centers. OpenAI, in turn, pledged to purchase millions of NVIDIA chips for those facilities. Bloomberg called it an “increasingly complex and interconnected web of business transactions” fueling a trillion-dollar AI boom. Jim Chanos, the short seller who predicted Enron’s collapse, had a simpler description: “Doesn’t it seem a bit strange when the demand for compute is ‘infinite,’ that the sellers are continuously subsidizing the buyers?”

This is circular financing—the same vendor-financing scheme that collapsed during the dot-com bubble when internet service providers used loans from equipment suppliers to buy equipment from those same suppliers. It creates the illusion of growth without actual customer demand. As one analyst told Yahoo Finance: “If something goes awry, the repercussions will ripple through the system instead of being contained”.
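The arithmetic of the trick is embarrassingly simple. Here is the vendor-financing loop reduced to a toy calculation—illustrative round numbers, not actual deal terms:

```python
# Toy arithmetic for a circular vendor-financing loop.
# All figures are illustrative (billions of dollars), not real deal terms.
investment_from_vendor = 100.0  # vendor "invests" in its own customer
chips_bought_back = 100.0       # customer spends it on the vendor's chips

# The vendor books the buyback as revenue...
reported_revenue = chips_bought_back

# ...but the demand funded by outside customers is the residual.
organic_demand = reported_revenue - investment_from_vendor
print(organic_demand)  # prints 0.0 — growth on paper, no new customer money
```

Revenue goes up, the stock goes up, and not one new dollar of outside demand has entered the system—which is exactly why the dot-com version of this loop ended the way it did.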

And make no mistake—something is going awry. OpenAI hit a $500 billion valuation in October 2025 despite losing $5-8 billion annually and projecting continued cash burn through 2029. The company is valued at 23 times this year’s expected revenues with no clear path to profitability. Meanwhile, ChatGPT’s hallucinations are increasing with newer reasoning models—the AI is getting more expensive and less accurate simultaneously. Users report the system “inventing information,” fabricating quotes, and generating fake legal precedents that lawyers have submitted to federal courts.

Even Sam Altman, OpenAI’s CEO and the industry’s chief prophet of artificial general intelligence (AGI), has started hedging. In August 2025, he admitted that “AGI” has become a “pointless term” and that GPT-5 is merely “incremental, not revolutionary”. Translation: the emperor’s algorithm has no clothes, and even the tailor is starting to notice.

The Absurdity: How We Convinced Ourselves Not to Ask Questions

The most remarkable thing about both the Mechanical Turk and the AI bubble isn’t the deception itself—it’s how eagerly we’ve embraced it.

In 1770, European nobility had every reason to question a chess-playing automaton with no sensory organs. Charles Babbage, the mathematician whose designs anticipated the programmable digital computer, watched the Turk play in 1819 and immediately recognized it as a “clever trick”. But most didn’t ask. Questioning meant looking unsophisticated, backward, skeptical of progress. Better to marvel at the miracle than be the killjoy who spoils the show.

In 2025, we’re doing the exact same thing. We have every reason to question an AI industry that depends on millions of underpaid humans to function, that loses billions annually while claiming exponential value, that hallucinates more often as it becomes more “advanced,” and that finances itself through circular deals where vendors invest in customers who buy from those same vendors.

Visionary CEO (at a tech conference, wearing a black turtleneck): “We’re three years away from AGI. The Mechanical Turk was a parlor trick—our models learn autonomously from the internet.”

Cynical Engineer (muttering to their screen while debugging GPT-5’s latest hallucination): “Yeah, ‘autonomously’—if you don’t count the 10,000 Filipinos clicking ‘cat’ versus ‘dog’ for $2 an hour. And the $100 billion NVIDIA is ‘investing’ in OpenAI so OpenAI can ‘buy’ NVIDIA chips. Totally autonomous. Totally sustainable.”

But we don’t ask these questions publicly because doing so means being labeled a skeptic, a Luddite, someone who “doesn’t get it.” Silicon Valley has created a groupthink-fueled echo chamber where belief in the AI revolution is mandatory and inquiry is heresy. Even Goldman Sachs, hardly a bastion of technological pessimism, has raised concerns that NVIDIA’s growth includes “potential ‘circular revenue’ from strategic investments” that could be “dilutive to Nvidia’s multiple”.

The fact that kings, princes, nobles, and wise people never questioned the Mechanical Turk doesn’t mean 18th-century Europeans were stupid. It means they’d been socially conditioned not to question it—to not want to appear as the killjoy. We’re not stupid either. We’re just watching the same show with better special effects and a more sophisticated financial structure hiding the humans in the box.

The Judgment: The Mechanical Turk Burned Down. This One Will Too.

The original Mechanical Turk operated for 84 years before burning to ashes in a fire in 1854. The illusion lasted as long as it did because questioning it meant social suicide—nobody wanted to be the cynic who spoiled the Enlightenment’s favorite party trick.

The modern AI bubble will collapse faster because the economic fundamentals are worse. You can sustain a touring chess automaton on ticket sales and aristocratic patronage for eight decades. You cannot sustain a $500 billion company that loses $8 billion annually, depends on circular financing from its own suppliers, and requires millions of exploited workers to label data so the “intelligent” machine can occasionally distinguish a cat from a dog.

The signs of collapse are already visible. Hallucinations are increasing. Valuations are detached from revenue multiples. Circular deals are creating systemic fragility. Even the industry’s own leaders are walking back the AGI promises. Cornell professor Karan Girotra, the analyst behind that Yahoo Finance warning, put it plainly: in a system this interconnected, a failure anywhere ripples everywhere.

Here’s the verdict: AI isn’t the transformative miracle the tech industry claims. It’s a Mechanical Turk 2.0—a brilliant deception powered by hidden human labor, sustained by circular financing, and propped up by a collective agreement not to ask obvious questions. The humans in the box are Filipino teenagers earning $2 an hour. The cabinet is a data center in Virginia burning $10 billion annually. The chess pieces are words and images regurgitated from training data labeled by exploited workers.

And when this con finally collapses—and it will—we’ll all act surprised, as if we couldn’t see the levers and pulleys holding up the illusion. As if we didn’t know there were humans behind the curtain all along. As if asking questions wasn’t exactly what we should have been doing from the start.

The Aftermath: Your Turn

History doesn’t repeat itself, but it does rhyme—in booms, busts, and the madness of crowds willing to believe in magic rather than ask uncomfortable questions.

So here’s what we want to know:

  1. What’s your favorite AI hallucination story? When did ChatGPT, Copilot, or another “intelligent” system completely fabricate information and try to convince you it was real? Bonus points if it involved fake legal precedents or completely made-up academic citations.
  2. Have you noticed the “circular financing” dynamic in other tech bubbles? Crypto, the metaverse, Web3—how many times have we watched the same vendor-financing Ponzi scheme dressed up as innovation? What’s your “I’ve seen this movie before” moment?
  3. Are you working in the AI supply chain? If you’re one of the data labelers, content moderators, or annotation workers powering the “autonomous” AI revolution from Kenya, the Philippines, or elsewhere—what’s the real story behind the magic trick? We want to hear from the humans in the box.

What do you think?


Written by Simba

TechOnion Founder - Satirist, AI Whisperer, Recovering SEO Addict, Liverpool Fan and Author of Clickonomics.
