Logan Kilpatrick
And in that conversation with Jake...
talking about how ubiquitous AI already is in the legal field today, which was really interesting and not what my conventional-wisdom default assumption would have been.
Um, but Thomson Reuters, or I think it was them or one of their partners, ran some studies, and it was like 99% of lawyers had tried Gemini, Claude, or ChatGPT, which is crazy.
And I'm sure there's probably some sampling bias.
I'd assume the absolute number is probably slightly lower than that.
But even in a small subset of people, it's incredible to see how much
usage they're getting. I think there are a bunch of reasons for that in the legal domain: they have money to spend, and there's high potential ROI in making operations go faster, because they're digging through case law and things like that, where you can only remember so many documents and only process so many in a search.
So I think it's one of those examples where the use case is just so mission critical to what those folks are doing that it just becomes easy to buy in.
And yeah, I think one of the biggest points of feedback from them in the conversation was just around how much long context matters for this use case.
And Gemini has obviously been at the bleeding edge of this with our 1 million and 2 million token context windows.
And it's been interesting to see how much that still comes up as a limitation for them: they just want longer context to bring more documents and more information into the memory of the model.
Um, so obviously we're still early in that domain, but I think it'll be cool to see how much people are accelerated when you 10x or 100x the context window in the future.
Um, and it's very distinct from RAG in a lot of ways.
If folks have gone into the weeds of RAG versus long context, it really is a fundamental trade-off that you're making.
So I'll be interested to see people not have to make that trade-off in cases where their use case would support it.
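For a concrete sense of that trade-off, here's a minimal toy sketch contrasting the two approaches; everything in it (the chunking, the word-overlap similarity standing in for embeddings, and the call_model placeholder) is hypothetical scaffolding rather than any real API.

```python
# Toy illustration of the RAG vs. long-context trade-off.
# Word overlap stands in for embedding similarity, and call_model()
# is a placeholder for whatever LLM client you actually use.

def chunk(doc, size=50):
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def similarity(a, b):
    # Crude stand-in for embedding similarity: word overlap (Jaccard).
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def call_model(prompt):
    # Placeholder model call: just reports how big the prompt got.
    return f"<model answer given a {len(prompt.split())}-word prompt>"

def answer_with_rag(question, documents, top_k=3):
    # RAG: retrieve only the chunks most similar to the question.
    # Cheap per query, but the model never sees what retrieval misses.
    chunks = [c for doc in documents for c in chunk(doc)]
    best = sorted(chunks, key=lambda c: similarity(question, c), reverse=True)[:top_k]
    return call_model("\n\n".join(best) + "\n\nQuestion: " + question)

def answer_with_long_context(question, documents, budget_words=1_000_000):
    # Long context: put whole documents into the prompt up to the window.
    # Nothing gets filtered out, but cost grows with everything included.
    prompt, used = [], 0
    for doc in documents:
        n = len(doc.split())
        if used + n > budget_words:
            break
        prompt.append(doc)
        used += n
    return call_model("\n\n".join(prompt) + "\n\nQuestion: " + question)
```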
There are a bunch of architectural challenges: LLMs in their current form are not designed to scale up to a 10 to 100 million token context window.
Like it's really tough.
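As a rough back-of-envelope for why, assuming standard quadratic self-attention (the numbers below are just illustrative arithmetic, not specific to any model):

```python
# Back-of-envelope: why naive self-attention struggles at 10M-100M tokens.
# Standard attention computes an N x N score matrix per head, so both the
# compute and (if materialized) the memory grow quadratically with N.

def attention_matrix_bytes(n_tokens, bytes_per_entry=2):  # fp16 entries
    return n_tokens ** 2 * bytes_per_entry

for n in (1_000_000, 10_000_000, 100_000_000):
    tb = attention_matrix_bytes(n) / 1e12
    print(f"{n:>11,} tokens -> ~{tb:,.0f} TB per head per layer if materialized")

# ~2 TB at 1M tokens, ~200 TB at 10M, ~20,000 TB at 100M.
# Tiled kernels (FlashAttention-style) avoid materializing the full matrix,
# but the quadratic compute cost remains.
```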