Andy Halliday
Or how Einstein came up with his thought experiments that arrived at conclusions about the reality of the universe without the tools to actually observe it.
I mean, these people are out there at the fringe, right?
And yet, AI is capturing and replicating that capability.
So they have to be thinking, well, does that...
Does that obviate the need for people like me?
And I think the answer is no.
We need to have people who understand that just in order to be able to review and approve and understand what's coming out of AI.
And this gets back to another thing that I'm really noticing, which is that we're seeing faster and faster inference speeds.
You know, OpenAI has released their, I think, Spark model, right? The Codex Spark model. They're pairing it with Cerebras chips, which is this wafer-scale chip architecture, and it's running a thousand tokens per second. Well, I said in our chat last night, you know, I can't keep up with AI that's running slow.
How am I going to respond to the voluminous output that can come from a Spark-level coding agent?
And in fact, in one of the newsletters today, there was some discussion.
It was an AI summary of X discussions.
And there is discussion on X that with Spark, it's exceeding the capacity of humans to review and comprehend it in real time, because it's going so much faster.
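The gap being discussed on X can be made concrete with a rough back-of-envelope calculation. The words-per-token conversion and the human reading speed below are assumptions for illustration, not figures from the conversation:

```python
# Rough back-of-envelope comparison (conversion factors are assumptions,
# not figures from the transcript).
TOKENS_PER_SEC = 1000      # reported Codex Spark throughput on Cerebras
WORDS_PER_TOKEN = 0.75     # common rule-of-thumb for English text
HUMAN_WPM = 250            # typical careful-reading speed, words per minute

# Convert model throughput into words per minute of output.
model_wpm = TOKENS_PER_SEC * WORDS_PER_TOKEN * 60

# How many times faster than a careful human reader?
ratio = model_wpm / HUMAN_WPM

print(f"Model output: {model_wpm:.0f} words/min")
print(f"Roughly {ratio:.0f}x faster than a careful human reader")
```

Under these assumptions the model emits on the order of 45,000 words per minute, a couple of hundred times faster than a person can read carefully, which is the mismatch the discussion is pointing at.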
So we need slower models, is my point.
This is about relativity.
Like when you're inside the cabin of an airplane, you don't know whether it's going 300 miles an hour or 600 miles an hour.
You'd have to be looking at the airspeed indicator to know.
The one area where I could see a need for, and value from, really fast inference like that is in a native speech model, one that's doing real-time voice interaction with a group of humans.