Steve Hsu
It doesn't have a way to independently validate X, and yet it can go on to use X in its reasoning.
Even though it knows it's supposed to check X, it doesn't really have a way to falsify X.
The other problem with reasoning models, for the application we're specifically addressing in our market segment, is that for voice, again, people want a fast response.
So if you rely on a whole bunch of tokens being generated for reasoning, that puts you way over the, you know, fraction of a second that you have for that generation.
You know, the speech-to-text and text-to-speech, all that stuff is wrapped in there.
And then the reasoning has to take, you know, even less than a second, just a fraction of a second.
So you would need a reasoning model that generates all of its reasoning tokens that fast.
And then you might be able to use it.
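As a rough illustration of that latency argument, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (the end-to-end budget, the speech-to-text and text-to-speech latencies, the reasoning token count, and the decode speed) are illustrative assumptions, not figures from the conversation.

```python
# Back-of-the-envelope latency budget for one turn of a voice agent.
# All constants below are assumed values for illustration only.

TOTAL_BUDGET_MS = 800          # assumed end-to-end budget for a "snappy" voice reply
STT_MS = 150                   # assumed speech-to-text latency
TTS_FIRST_AUDIO_MS = 200       # assumed time until text-to-speech emits first audio

REASONING_TOKENS = 300         # assumed hidden chain-of-thought length
TOKENS_PER_SECOND = 150.0      # assumed decode speed of the reasoning model


def reasoning_latency_ms(tokens: int, tok_per_s: float) -> float:
    """Time to generate the reasoning tokens, in milliseconds."""
    return tokens / tok_per_s * 1000.0


budget_for_model_ms = TOTAL_BUDGET_MS - STT_MS - TTS_FIRST_AUDIO_MS
needed_ms = reasoning_latency_ms(REASONING_TOKENS, TOKENS_PER_SECOND)

print(f"Budget left for the model: {budget_for_model_ms:.0f} ms")
print(f"Reasoning tokens need:     {needed_ms:.0f} ms")
print("Fits the voice budget" if needed_ms <= budget_for_model_ms
      else "Too slow for a voice turn")
```

With these assumed numbers the reasoning step alone takes about 2 seconds against a budget of roughly 450 ms, which is the point being made: either the reasoning tokens come out far faster, or the reasoning model doesn't fit a voice turn.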
Yeah.
It's interesting.
So, and this gets into a very, I think, important point, even for the broader question of AGI and what's going to happen to human society in the future.
So the technology for, you could call them customer support agents, is very advanced now.
So we have agents that are capable of replacing, you know, something like 80%, maybe 90% of the calls that come into a call center.
So if you're ordering a pizza or you're changing the delivery address for a package, or you're wondering what happened to your package, you know, all of those things, AIs can actually handle pretty well now.
However, in terms of what fraction of labor that used to be done by humans, entirely by humans in these call centers, has been replaced thus far by AI agents, it's still minuscule.
It's very tiny.
And a lot of it's held up by human decision-making.
A lot of it's held up by sunk costs in old systems and old ways of doing things.
And so...