Michael
Even basic brainstorming can hit a wall of "I'm sorry, I can't help with that."
Here's the gap between promise and reality that many users have faced.
So OpenAI said that GPT-5 would be deeper, with human-like reasoning, but in reality, it sometimes struggles with basic logic chains.
They also said that it would have better creative writing, but in reality, many users say it feels more formulaic and less imaginative.
And another thing that OpenAI said is that it would be faster with smoother responses.
But for some people, there are slower load times and more interruptions.
And lastly, OpenAI also promised more up-to-date knowledge, but it still runs into outdated facts, hedges with caveats, and occasionally makes things up.
And it's not just casual users.
Developers, educators, and researchers, the folks who push these tools to their limits, are reporting that GPT-5 is more rigid than earlier models, especially compared with GPT-4.
Others think that there's a deliberate product strategy here, that OpenAI is holding back full GPT-5 capabilities for enterprise customers and API access, leaving the public with a quote-unquote light version.
Some tech bloggers speculate this could be due to model compression, cost-cutting, or even resource sharing between multiple AI products that dilute GPT-5's raw power.
This all leads to a bigger question.
What do we actually want?
Do we want a safe, predictable assistant or a bold, sometimes chaotic, collaborator?
In this episode, we'll dig into real user reviews, explore technical possibilities for why GPT-5 feels different, and debate whether these limitations are temporary growing pains or the start of a new, more restricted era for AI.
So stick around.
This one's going to go deep.
Pulling no punches and asking the questions a lot of people are thinking but not saying.
The future of AI won't just be written in code.
It'll be written in laws, debates, and the choices we make together.