Jaeden Schaefer
So in the early 2010s, deep learning started to crush a lot of benchmarks.
It's also hilarious to talk about crushing benchmarks in 2010 because it's definitely different than what we have today.
But you had image recognition that all of a sudden actually worked.
You had speech recognition that got really good.
Translation went from being super terrible to usable.
I mean, I even remember early days of Google Translate, you know, everyone would make fun of it.
And as time went on, it became really, really good.
So because of this, a lot of companies realized, look, this is actually scaling.
And so instead of just writing rules, you give models these massive data sets and let them learn.
The more data you gave them, the better they got; the more compute you gave them, the smarter they became.
And so I think when we realized that, it kicked off what's known as the modern AI boom, and from there everything accelerated much faster.
We realized we needed to have much bigger models.
We realized the data sets had to get much larger.
And then training runs went from, you know, hours to weeks.
And then it started getting pushed into months.
And eventually we've arrived at a lot of these large language models.
And we have the kind of AI that can read, write, reason, and talk.
I think what's important to understand with all of this is that modern AI obviously isn't magic.