
Logan Kilpatrick

👤 Speaker
715 total appearances

Appearances Over Time

Podcast Appearances

The Neuron: AI Explained
Google AI Studio Deep Dive: From Vibe Coding to AGI with Logan Kilpatrick

So I think, and obviously we're still early in that domain, it'll be cool to see how much people are accelerated when you 10x the context window, or 100x the context window, and things like that in the future.


And it's very distinct from RAG in a lot of ways.


I think, if folks have gone into the weeds of RAG versus long context, it really is a fundamental trade-off that you're making.


So I'll be interested to see people not have to make that trade-off, in cases where their use case would support it.
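The trade-off being described can be sketched with toy data: a retriever keeps the prompt small but can miss unselected information, while a long-context approach sends everything at a much higher cost. The word-overlap scorer and the example chunks below are purely illustrative assumptions, not a real RAG stack.

```python
# A minimal sketch of the RAG vs. long-context trade-off, using toy data
# and a naive word-overlap retriever (purely illustrative).

def retrieve_top_k(query, chunks, k=2):
    """RAG side: keep the prompt small by selecting only the chunks that
    best match the query; anything unselected is invisible to the model."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

chunks = [
    "The auth service validates tokens issued by the identity provider.",
    "The billing service retries failed charges up to three times.",
    "Token expiry defaults to 24 hours unless overridden in config.",
    "The logging pipeline ships events to the warehouse nightly.",
]

query = "how long until an auth token expires"

rag_prompt = "\n".join(retrieve_top_k(query, chunks))  # bounded, but lossy
long_context_prompt = "\n".join(chunks)                # complete, but costly

print(len(rag_prompt.split()), "words vs", len(long_context_prompt.split()))
```

With a big enough context window, the retrieval step (and its risk of dropping the one chunk you needed) simply disappears, which is the "not have to make that trade-off" point.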


There are a bunch of architectural challenges; LLMs in their current form are not designed to scale up to a 10 to 100 million token context window.
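A back-of-the-envelope sketch of why naive scaling breaks down: KV-cache memory grows linearly with context length and the attention score matrix grows quadratically. The model shape below (layers, KV heads, head dimension, fp16) is an illustrative assumption, not a statement about any specific model.

```python
# Rough cost model for long context (illustrative assumptions only):
# KV cache grows linearly with tokens; attention scores grow quadratically.

def kv_cache_bytes(context_tokens, n_layers=80, n_kv_heads=8,
                   head_dim=128, bytes_per_value=2):
    # Each token stores one key and one value vector per layer per KV head.
    return context_tokens * n_layers * n_kv_heads * head_dim * 2 * bytes_per_value

for tokens in (128_000, 1_000_000, 10_000_000, 100_000_000):
    gib = kv_cache_bytes(tokens) / 2**30
    print(f"{tokens:>11,} tokens -> KV cache ~{gib:,.0f} GiB, "
          f"score-matrix entries ~{tokens**2:.1e}")
```

Under these assumptions, 100 million tokens of KV cache lands in the tens of terabytes, which is why "some hacks" only get you slightly farther and architectural change is needed.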


It's really tough. You could do some hacks to get slightly farther.


And we did show a bunch of research on what it would look like to bring 10 million tokens to people.


Even with the original Gemini launch, we showed some of that. In practice and production environments, it becomes very, very costly.


And it's not easy to maintain and continue to scale up.


So I do think we'll need some architectural innovation at the model level to enable things like a hundred million tokens, which I'm excited about and I think the world needs.


So I'm hopeful we'll keep pushing the rock up the hill.


What's a hundred-million-token use case?


Yeah.


I mean, some of these code bases are actually a good example. If you look at a large company, and if you're using a mono repo, it's really interesting. Maybe a hundred million tokens is too much, or slightly on the extreme end, but accumulated through your lifetime, you actually do have a lot of this data.


I think the challenge then becomes, and the attention mechanism in language models, and in transformers specifically, doesn't have this intrinsically built in: how do you up-sample the right data and down-sample the wrong data, all that stuff?
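The point about attention can be illustrated with toy numbers: softmax attention already re-weights tokens per query based on learned similarity, but the weights never drive irrelevant tokens to exactly zero, and there is no explicit "keep this, drop that" mechanism. The 2-d vectors below are made-up values, not anything from a real model.

```python
# Toy softmax attention: a query up-weights similar keys and down-weights
# dissimilar ones, but every token still receives nonzero weight.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

query = [1.0, 0.0]                  # what we're "asking about"
keys = {
    "relevant": [0.9, 0.1],         # similar to the query
    "irrelevant": [0.0, 1.0],       # dissimilar
    "noise": [0.1, 0.9],
}

# Dot-product similarity between the query and each key.
scores = [sum(q * k for q, k in zip(query, v)) for v in keys.values()]
weights = dict(zip(keys, softmax(scores)))
print(weights)
```

At a hundred million tokens, those small-but-nonzero weights on irrelevant tokens add up, which is one way to read the up-sample/down-sample challenge.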