In 2015, Elizabeth Holmes promised that a single drop of blood from a finger prick could run hundreds of medical tests with revolutionary accuracy, transforming healthcare forever. The technology didn’t work, the results were often fabricated or wildly inaccurate, and investors lost more than $700 million before Holmes was convicted of fraud. Fast forward to 2025: Sam Altman promises that AGI will emerge from scaling large language models to achieve human-level intelligence across all domains, transforming civilization forever. The technology hallucinates confidently incorrect information, the timelines keep getting pushed back, and investors have poured $60 billion into a company that just hit a $500 billion valuation while losing billions annually. The parallels aren’t subtle—they’re an instruction manual that Silicon Valley is following with religious precision while insisting “this time is different because it’s AI instead of blood tests.”
Welcome to Theranos 2.0, where the only innovation is making the fraud so expensive and technically sophisticated that by the time anyone realizes the emperor has no clothes, founders and venture capitalists have already cashed out billions in secondary sales.
The Single Solution Mirage: One Prick, One Prompt
Elizabeth Holmes’s central pitch was elegantly simple and catastrophically fraudulent: Theranos’s Edison machines could perform a full range of blood tests—from cholesterol to cancer markers—using just a finger prick instead of traditional venous blood draws. The vision was transformative: democratize healthcare by making comprehensive testing cheap, fast, and accessible. The reality was that the Edison machines didn’t work, so Theranos secretly used conventional blood-testing equipment from Siemens and other manufacturers while claiming proprietary breakthroughs.
OpenAI’s pitch follows identical contours: AGI will emerge from scaling transformer models, creating a single system that can perform the “full range” of cognitive tasks—from coding to scientific research to creative work—matching or exceeding human-level intelligence across all domains. The vision is transformative: democratize intelligence by making comprehensive AI capabilities cheap, fast, and accessible to every person and their dog. The reality is that GPT models hallucinate, struggle with basic reasoning, can’t reliably solve novel problems, and require armies of human contractors to function—but OpenAI presents them as steps toward AGI while quietly shifting the goalposts.
Holmes promised “a full range of blood tests” that Theranos would “eventually achieve” once the technology matured. Altman promises AGI that OpenAI will “eventually achieve” once the models scale sufficiently. Both framed current limitations as temporary obstacles on an inevitable journey rather than fundamental flaws in the approach. Both convinced investors that pouring billions into the vision would accelerate the timeline. Both were catastrophically wrong about how close they actually were to delivering on the core promise.
The Timeline Two-Step: When “Soon” Becomes “Someday”
Theranos began offering tests to the public in late 2013, despite internal knowledge that the technology didn’t work reliably. The public launch was designed to create momentum, validate the vision with real customers, and maintain investor confidence that breakthroughs were imminent. As problems mounted, Theranos kept promising that improvements were “months away” while secretly knowing the fundamental technology was broken.
OpenAI launched ChatGPT publicly in late November 2022, achieving 1 million users in 5 days and 100 million in 2 months—now reaching 800 million users. The explosive adoption validated the vision, created massive momentum, and convinced investors that AGI was achievable on aggressive timelines. Sam Altman and OpenAI executives made repeated predictions about AGI arriving within years, not decades.
Then the timeline two-step began. AGI dates got pushed back. The narrative shifted from “AGI is near” to “AGI is a journey.” New terminology emerged—Artificial Superintelligence (ASI)—to reframe expectations when AGI proved elusive. Most tellingly, Sam Altman rarely talks about AGI anymore in public appearances and interviews. The pivot from “we’re months from AGI” to “let’s focus on enterprise partnerships and infrastructure deals” mirrors Theranos’s shift from “revolutionary blood testing” to “partnerships with Walgreens” when the core technology kept failing.
This isn’t iterative development—it’s the classic con artist move of changing the promise when the original one becomes untenable while pretending continuity. Holmes shifted from “comprehensive testing” to “we’re working on it” to eventual silence as investigations mounted. Altman shifted from “AGI soon” to “responsible scaling” to focusing on $300 billion Oracle deals and enterprise adoption while the AGI timeline quietly extends into the indefinite future.
Cheating the Turing Test: When You Can’t Win, Change the Rules
The Turing test was never meant to be “passed”—it was Alan Turing’s thought experiment for assessing machine intelligence, deliberately designed to be formidably difficult because true human-level understanding involves consciousness, reasoning, and contextual awareness that pure pattern matching cannot replicate. But tech startups have decided to game the Turing test by building systems that mimic human responses through statistical prediction rather than actual understanding.
This is the “Margin Call” strategy: be first, be smarter, or cheat. DeepMind and others tried to be first with AGI. DeepSeek and competitors tried to be smarter with more efficient architectures. OpenAI and co chose to cheat by building stochastic parrots so sophisticated they convince casual users they’re intelligent—the “stupid Good Will Hunting kid” that can recite impressive-sounding answers without actual comprehension.
Theranos cheated by using conventional blood-testing machines while claiming proprietary technology. OpenAI cheats by using massive human labor infrastructure—data labelers, content moderators, RLHF trainers—while presenting outputs as pure machine intelligence. Both relied on making the cheating sophisticated enough that casual observers couldn’t detect it. Both counted on the lag between impressive demonstrations and rigorous scrutiny to secure valuations and investor capital.
The Hallucination Problem: When Wrong Answers Look Right
Theranos’s fundamental flaw was that its Edison machines produced wildly inaccurate results—false negatives for serious conditions, false positives causing unnecessary anxiety, inconsistent outputs that made medical decisions impossible. The company knew about the accuracy problems but launched publicly anyway, gambling that they could fix the technology before regulators or patients noticed.
OpenAI’s models hallucinate—confidently generating false information, fabricating citations, creating plausible-sounding but incorrect answers. The company knows about the reliability problems but has scaled anyway, gambling that users will tolerate occasional errors and that iterative improvements will eventually solve the fundamental issue. An MIT survey found that 95% of companies investing in AI “are getting zero return,” largely because the reliability issues make deployment in critical applications impossible.
Both companies framed accuracy problems as features to be improved rather than fatal flaws in the approach. Holmes claimed Theranos was “working on” validation studies and accuracy improvements. Altman claims OpenAI is “working on” alignment and reliability. Both deployed products to paying customers despite knowing the outputs couldn’t be trusted for critical decisions.
The Young Founder Mythology: Vision Over Expertise
Elizabeth Holmes was a 19-year-old Stanford dropout with rudimentary engineering training and zero medical expertise when she founded Theranos. She compensated with charisma, vision, and the ability to convince prestigious investors that conventional expertise was obsolete in the face of revolutionary innovation.
Many AI company founders—including some of OpenAI’s leadership—lack deep AI research credentials or extensive business experience. They compensate with charisma, vision, and the ability to convince investors that this time the rules of business fundamentals don’t apply because the technology is transformative.
Both ecosystems celebrate “founder vision” over boring expertise like “understanding the technology” or “having a profitable business model.” Both feature young leaders who’ve never built sustainable companies telling investors that traditional metrics are obsolete. Both reward confidence over competence, narrative over numbers, promise over performance.
The Human Cost: When Pressure Kills
In May 2013, Theranos scientist Ian Gibbons committed suicide. Gibbons reportedly struggled with the ethical implications of his work after realizing the technology didn’t work as claimed. His death became a dark footnote in the Theranos story—evidence of the psychological toll when employees realize they’re participating in deception.
OpenAI, too, has lost a former employee to suicide, something Sam Altman confirmed in an interview with Tucker Carlson. The circumstances remain largely private, but the parallel is haunting: both companies created high-pressure environments driven by impossible promises, where employees faced the cognitive dissonance of working on revolutionary visions that kept failing to materialize.
The Oracle Connection: When Larry Ellison Picks Winners
Oracle co-founder Larry Ellison invested in Theranos, lending credibility to Holmes’s vision. In 2025, Oracle signed a $300 billion IOU deal with OpenAI, immediately losing $100 million per quarter but seeing its stock soar 40% as markets interpreted the partnership as validation. Both investments signal “serious money backing transformative technology.” Both ignore fundamental questions about whether the business model works.
The Judgment: Same Script, Bigger Budget, No Prison (Yet)
The Theranos-OpenAI parallels aren’t coincidental—they’re structural. Both promised single-solution technologies that would revolutionize entire industries. Both secured massive valuations based on timelines that kept extending. Both launched publicly before the technology was reliable. Both gamed evaluation metrics (Theranos: manipulated test results; OpenAI and competitors: manipulated benchmark scores). Both featured young founders without deep domain expertise. Both experienced employee suicides. Both attracted Oracle investment. Both relied on investors believing that current losses would transform into future dominance.
The difference is that Holmes is serving an 11-year prison sentence for fraud, while Altman just executed a $500 billion valuation and gets profiled in Fortune as a visionary. Elizabeth Holmes would have absolutely loved the AI bubble—it’s everything she tried to do with Theranos, except legal, because the promises are vague enough (“AGI eventually”) and the technology works just enough (ChatGPT generates text) to avoid criminal liability.
When this bubble pops and OpenAI’s $44 billion in projected losses through 2028 become reality without AGI arriving, the retrospectives will compare it to Theranos. They’ll note the identical playbook: revolutionary promises, timeline extensions, evaluation gaming, human costs, Oracle validation, and fundamentals that never worked. They’ll wonder why we didn’t learn from 2015. And they’ll discover that we did learn—we just learned that if you make the fraud expensive enough and technical enough, you get a $500 billion valuation instead of a prison sentence.
The Aftermath
So, dear reader, as OpenAI speedruns the Theranos playbook with better lawyers: How many more AGI timeline extensions before we admit “eventually” means “never” and this is just expensive autocomplete? When OpenAI’s hallucination problems prove as unfixable as Theranos’s accuracy issues, will we admit the parallel or invent new excuses? And which founder do you think history will judge more harshly—Holmes for lying about blood tests that could have saved lives, or Altman for burning $44 billion on AGI promises that enriched insiders while delivering chatbots?
This article is just a tremor. The earthquake is coming.
The patterns of hype, deception, and greed laid bare here are part of a much larger story. They are the evidence file for the great deception of our time.
The full, unvarnished truth is detailed in the forthcoming book from Simba Mudonzvo:
The Gilded Cage: How the Quest for Artificial Intelligence Became the Greatest Deception in Human History.
Stay tuned. The reformation is coming.