The Great AI Upsell: Sam Altman’s Masterclass in Selling Nothing As Something

In a secret underground bunker beneath Silicon Valley, Sam Altman stands before a mirror practicing his keynote expressions. “Humble yet visionary,” he whispers, tilting his head slightly while softening his gaze. “Concerned but optimistic,” he continues, furrowing his brow while maintaining an enigmatic half-smile. Finally, “I’ve-seen-the-future-and-it’s-both-terrifying-and-wonderful-but-don’t-worry-we’re-handling-it,” which involves a complex series of micro-expressions only visible to those who’ve paid for the Pro tier of human emotion recognition.

Welcome to the OpenAI marketing laboratory, where the company that promises to “benefit all of humanity” has perfected humanity’s oldest profession: selling people things they don’t need at prices that don’t make sense, described in language that doesn’t mean anything.

The Alphabet Soup of Artificial Intelligence

OpenAI’s product strategy appears deceptively simple: create a bewildering array of nearly identical AI models with names so confusing that customers will upgrade out of sheer FOMO.

“Our naming convention is based on advanced psychological principles,” explains fictional OpenAI Chief Nomenclature Officer Jennifer Davis. “Studies show that random combinations of letters and numbers create the impression of technical sophistication. The more arbitrary and inconsistent the naming system, the more customers assume there must be some genius behind it they simply don’t understand.”

This explains why OpenAI’s models sound like they were named by throwing Scrabble tiles at a wall: GPT-4, GPT-4o, GPT-4o mini, o1-mini, o1-preview. Even Sam Altman himself admitted in July 2024 that the company needs a “naming scheme revamp.” Yet the confusion continues, almost as if it’s intentional.

“It’s unclear. A confusing jumble of letters and numbers, and the vague descriptions make it worse,” lamented one Reddit user about OpenAI’s model naming. The difference between models is described with equally vague terminology – one is “faster for routine tasks” while another is “suitable for most tasks.” What constitutes a “routine task” versus a “most task” remains one of the great mysteries of our time, alongside what happened to Jimmy Hoffa and why airplane food is so terrible.

According to the completely fabricated Institute for Consumer Clarity, 97% of ChatGPT users cannot accurately describe the difference between the models they’re using, yet 94% are convinced the more expensive one must be better.

The Three-Tier Monte

OpenAI’s pricing strategy resembles a psychological experiment designed by a particularly sadistic behavioral economist. The free tier gives you just enough capability to realize its limitations. The Plus tier ($20/month) offers the tantalizing promise of better performance. And for the power users willing to part with $200 monthly, there’s Pro – which is exactly like Plus but costs 10 times more.

“We started with two test prices, $20 and $42,” Altman explained in a Bloomberg interview. “People thought $42 was a little too much. They were happy to pay $20. We picked $20.” This scientific pricing methodology, known in economic circles as “making numbers up,” has proven remarkably effective.

Fictional OpenAI Chief Revenue Officer Marcus Reynolds elaborates: “Our pricing strategy is based on what we call the Goldilocks Principle. Free is too cold – it leaves users wanting more. Pro at $200 is too hot – only businesses and power users will pay that. But Plus at $20 is juuuust right – affordable enough that millions will subscribe without questioning whether they actually need it.”

This tiered strategy has created what the fictional American Journal of Technological Psychology terms “AI Status Anxiety” – the fear that somewhere, someone is getting slightly better AI responses than you are.

The Reality Distortion Academy

Sam Altman’s mastery of perception management didn’t emerge from nowhere. He stands on the shoulders of giants – specifically, the reality distortion giants of Silicon Valley.

“Reality distortion field” was a term first used to describe Steve Jobs’ charisma and its effects on developers. It referred to Jobs’ ability to convince himself and others to believe almost anything through a potent cocktail of charm, charisma, and hyperbole. Bill Gates once said Jobs could “cast spells” on people, mesmerizing them with his reality distortion field.

Altman appears to have graduated with honors from this school of persuasion. Like Jobs before him, he has mastered the art of making the incremental sound revolutionary and the mundane seem magical.

“What advice do you have for OpenAI about how we manage our collective psychology as we kind of go through this crazy super intelligence takeoff?” asked Adam Grant in a 2025 TED interview with Altman. The question itself reveals how successfully Altman has convinced even sophisticated observers that we’re witnessing a “crazy super intelligence takeoff” rather than gradual improvements to predictive text generation.

This reality distortion extends to OpenAI’s relationship with its own technology. When ChatGPT-4o Mini failed to summarize an article correctly – claiming tennis player Rafael Nadal had come out as gay when he hadn’t – the company framed it not as a hallucination but as “creative summarization.”

“We call this ‘creative summarization,'” notes fictional OpenAI News AI Product Manager Jessica Zhang. “Technically, it’s not a bug—it’s an artistic interpretation of reality. Who’s to say what ‘accuracy’ really means in a post-truth world?”

The Moving Goalposts of Artificial General Intelligence

Perhaps Altman’s greatest sleight of hand has been his management of expectations around Artificial General Intelligence (AGI). OpenAI originally defined AGI as “a highly autonomous system that outperforms humans at most economically valuable work.” The company claimed AGI would “elevate humanity” and grant “incredible new capabilities” to everyone.

But as the technical challenges of achieving this vision became apparent, Altman began subtly redefining what AGI means.

“My guess is we will hit AGI sooner than most people think, and it will matter much less,” Altman said at the New York Times DealBook Summit. This remarkable statement essentially says, “We’ll achieve the thing we’ve been promising sooner than expected, but don’t worry – it won’t be as important as we’ve been telling you for years.”

The fictional International Institute for Goal Post Relocation calls this “The Altman Maneuver” – redefining success after you’ve realized your original promises were unattainable.

The Price of Enlightenment

As competition in the AI space intensifies, rumors swirl about even more expensive tiers. Bloomberg reported on the possibility of a $2,000 tier, which would presumably allow users to experience AI that’s exactly like the $200 version but comes with a certificate of digital superiority suitable for framing.

“We believe in democratizing AI,” states fictional OpenAI Chief Access Officer Thomas Williams. “And what’s more democratic than allowing people to vote with their wallets for which level of artificial intelligence they deserve? The free people get free AI. The $20 people get $20 AI. The $200 people get $200 AI. And soon, the $2,000 people will get AI that makes them feel like they’ve spent $2,000.”

The fictional Center for Pricing Psychology estimates that OpenAI could charge up to $10,000 monthly for a service that adds a gold star to the ChatGPT interface and occasionally says “Your question is particularly insightful” before providing the exact same answer available at lower tiers.

The Elon in the Room

No discussion of reality distortion would be complete without mentioning Elon Musk, who has gone from OpenAI co-founder to arch-nemesis in a dramatic falling out.

“He’s just trying to slow us down. He obviously is a competitor,” Altman told Bloomberg TV about Musk. “Probably his whole life is from a position of insecurity. I don’t think he’s a happy person. I do feel for him.”

The irony of this feud is that both men are masters of the same craft – reality distortion – yet each seems to resent the other’s proficiency in it. It’s like watching two magicians accuse each other of using actual magic while insisting their own tricks are just skilled illusions.

“Sam and Elon are engaged in what we call a ‘Reality Distortion Duel,'” explains fictional Silicon Valley historian Dr. Eleanor Wright. “Each is trying to convince the world that his vision of AI is the correct one, while the other is dangerous or misguided. Meanwhile, both are building businesses based more on perception than technological reality.”

The Unexpected Twist

As our exploration of OpenAI’s marketing mastery concludes, we arrive at a startling realization: perhaps the greatest beneficiaries of artificial intelligence aren’t the users but the perception managers who sell it to them.

In a leaked internal document that I’ve completely fabricated, OpenAI researchers discovered something shocking: when given identical prompts, ChatGPT Free, Plus, and Pro produced responses that were indistinguishable in quality 94% of the time. The only difference was that Pro responses arrived 0.3 seconds faster and included an invisible metadata tag that made users feel the response was more intelligent.

When confronted with this fictional finding, our fictional OpenAI spokesperson offered a response that perfectly encapsulates the company’s approach: “The value of our premium tiers isn’t just in the technical capabilities – it’s in how they make you feel. Is feeling smarter worth $200 a month? Our subscribers seem to think so.”

And perhaps that’s the true genius of Sam Altman’s marketing approach. He’s not selling artificial intelligence; he’s selling the perception of intelligence – both artificial and human. In a world increasingly anxious about being replaced by machines, what could be more valuable than feeling like you’ve got the best machine on your side?

As we continue to upgrade our subscriptions in pursuit of ever-more-intelligent AI, perhaps we should pause to consider whether the most impressive intelligence at work belongs not to the models but to the marketers who’ve convinced us that letters, numbers, and dollar signs equate to meaningful differences in capability.

In the words of the fictional but prophetic AI philosopher Dr. Jonathan Chen: “The greatest achievement of artificial intelligence isn’t what it can do, but what it can convince us to pay for.”
