Logan Kilpatrick
And the AI layer is happening seamlessly, omnipresently, enriching whatever my ideas are. I think today it requires a user to go out and be intentional about that. And I think we'll move out of that space probably within the next couple of years, from having to be intentional about bringing AI into the loop to it just being there and supporting you, which is interesting.
Yeah, one, you should definitely make sure that your law firm allows usage of these tools.
I have maybe a slightly different perspective. I talked to the CEO of Casetext, which was one of the early successful GPT-based products; they had early access and did a bunch of stuff. They ended up selling their company to Thomson Reuters, which is a massive firm of folks who have done legal work in the past. And in that conversation with Jake, we talked about how ubiquitous the penetration of AI in legal is today, which was really interesting and not what my conventional-wisdom default assumption would have been.
But Thomson Reuters, or one of their partners, ran some studies, and it was something like 99% of lawyers had tried Gemini, Claude, or ChatGPT, which is crazy.
And I'm sure there's probably some sampling bias; I'd assume the absolute number is slightly lower than that.
But even in a small subset of people, it's incredible to see how much usage they're getting. I think there's a bunch of reasons for that in the legal domain: they have money to spend, and there's high potential ROI in making operations go faster, because they're digging through case law and things like that, and you can only process so many documents in a search.
So I think it's one of those examples where the use case is just so mission critical to what those folks are doing that it just becomes easy to buy in.
And yeah, I think one of the biggest points of feedback from them in the conversation was just around how much long context matters for this use case.
And Gemini has obviously been at the bleeding edge of this with our 1 million and 2 million token context windows.
And it's been interesting to see how much that still comes up as a limitation for them: they just want longer context to bring more documents and more information into the memory of the model.