OpenAI CEO Sam Altman is hyping GPT-5—but not in the way you might expect. In a recent appearance on This Past Weekend with Theo Von, Altman spoke about the upcoming AI model in a tone that veered from excitement to existential dread, comparing the experience of testing GPT-5 to the Manhattan Project. It’s a dramatic analogy—and a strange way to promote what’s supposed to be OpenAI’s most advanced model yet.
“It Feels Very Fast”—But Why the Fear?
Altman said GPT-5 “feels very fast” and admitted that during some test sessions, he felt genuinely nervous. That’s a striking confession from the very person leading its development. At one point, he even implied that the model behaves in ways that make him uncomfortable—raising questions about how well the team at OpenAI truly understands what they’ve built.
He didn’t offer specifics about what GPT-5 can actually do—but the comparison to the Manhattan Project suggests he views the model as a potentially world-altering force. That might sound impressive, but it also leans into the type of AI discourse that swings wildly between hype and doom.
Governance: “No Adults in the Room”
Perhaps more alarming than the model itself was Altman’s commentary on AI regulation. He bluntly stated that “there are no adults in the room” when it comes to overseeing AI’s development. That’s a powerful indictment of the current global governance landscape—especially coming from one of the tech world’s most visible AI leaders.
If true, that leaves a troubling question: Who’s actually in charge of making sure AI systems like GPT-5 are used responsibly?
Analysis: Hype, Power, and a Lack of Clarity
Altman’s messaging here is all over the place. On the one hand, GPT-5 is being positioned as a revolutionary leap toward artificial general intelligence. On the other, he’s implying that the people building it don’t fully understand it—or know how to govern it.
Referencing the Manhattan Project—an effort that led to the creation of nuclear weapons—suggests a level of fear that feels performative or at least strategically exaggerated. After all, GPT-5 is still, at its core, a sophisticated language model. It may be faster and more intuitive than GPT-4, but framing it as an existential threat seems like a calculated narrative choice rather than an objective risk assessment.
The Bigger Issue: AI’s Expanding Power
Regardless of Altman’s delivery, one point stands: AI systems are being given more responsibility. As they become smarter and more integrated into everyday tools—from enterprise apps to healthcare diagnostics—the stakes are undeniably higher.
But if the creators themselves are uneasy—or if they’re publicly saying they’re uneasy—that’s not exactly comforting. If Altman is serious, it calls into question whether OpenAI is equipped to manage the kind of power it’s unleashing. If he’s exaggerating, it suggests a willingness to lean on fear-based marketing to draw attention and control the narrative around AI’s future.
Where’s the Middle Ground?
Public dialogue around AI often oscillates between utopian visions and doomsday scenarios. But the truth likely lies somewhere in the middle. GPT-5 will probably come with a slicker UI, improved reasoning, and better memory. It may even surpass expectations. But it’s still a tool—one whose impact depends heavily on how we choose to use it.
Is GPT-5 a game-changer? Almost certainly. But is it the next Manhattan Project? That’s a stretch.
Final Takeaway
Sam Altman’s comments raise a crucial question: Should a company expressing fear about its own technology also be the one shaping its future?
What do you think? Is Altman right to sound the alarm—or is this just another chapter in AI’s endless hype cycle?