Harry Stebbings
Very nice to speak to him.
So thank you so much for joining me today, dude.
Dude, I'm confused.
Help me out.
I had Demis on the show the other day from DeepMind.
He was like, yeah, I'm not sure if we're seeing scaling laws, but we are definitely seeing slightly diminishing returns in performance as we scale.
So potentially, are we getting to a stage where increased compute is no longer leading to increased performance?
When we look at the bottlenecks around performance and progression today, what are the bottlenecks that really persist most significantly to you?
Is it algorithms?
Is it data?
Is it compute?
Can you help me understand which is most lagging?
If we just go through them, when we look at that context and feedback on the data side, will we then see a generation of vertically integrated foundation model companies, like Periodic, for a ton of different things?
My question to you then is, how do I determine what is not going to get claudified in that vertical model company build out?
Because you could look at a Cursor and say, well, they've built their own vertical model end to end.
It's been claudified, if we're being blunt.
Periodic won't be because of the physical data that's being produced in the labs.
How do I know what will get claudified versus what won't in that vertical model build-out?
With the greatest of respect, is that the core investment thesis of Mistral for you?
Do Anthropic and OpenAI just accept that and roll over?