Cal Newport
You've given me a text that I'm trying to expand as if there were a real text that exists and I'm trying to match it.
You get that kind of indirectly.
So really its idiom
is the type of text it's trained on, which for the most part is prose-style text.
So you can tune it away from that.
You can tune its mood, you can tune its sycophancy, but it might be hard to actually tune an LLM away from that entirely, because human-written prose is its main training data.
It might be harder than we think
to tune that away from being verbose and to just give a table.
Now, I guess you could take its output and run that through another model that strips away the extra prose.
It's possible.
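That two-stage idea, a first model answers and a second pass strips the chat, could look something like this minimal sketch. Here the second pass is simulated with a regex heuristic rather than a real second LLM call; the function name and the filler patterns are illustrative assumptions, not any particular API.

```python
import re

# Hypothetical second-stage filter: drop chatty preamble lines
# ("Great question!", "Sure, here's...") and keep only lines that
# look like the structured payload, e.g. a markdown table.
FILLER = re.compile(
    r"^(great question|sure|certainly|of course|here('s| is))\b.*$",
    re.IGNORECASE,
)

def strip_verbosity(raw: str) -> str:
    """Keep payload lines, dropping conversational filler and blanks."""
    kept = []
    for line in raw.splitlines():
        if FILLER.match(line.strip()):
            continue  # conversational filler: discard
        if line.strip():
            kept.append(line)
    return "\n".join(kept)

verbose = (
    "Great question! Here is the table you asked for:\n"
    "| city | pop |\n"
    "| Oslo | 0.7M |\n"
)
print(strip_verbosity(verbose))
```

In practice the second stage would be another model call with instructions like "return only the table," but the shape of the pipeline is the same: generate, then post-process.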
But I think the anthropomorphized verbosity we see in language models is kind of its native tongue.
Which is why chatbots still get most of the emphasis, while tools built on the LLM as a digital brain are still way more scarce than you would imagine, outside of maybe computer programming and coding harnesses.
We just don't have a lot of other examples where we just use the LLM as a general person's digital brain.
Because I think this verbosity is okay when humans are the audience.
Humans can interpret it, but it's not great if the LLM is just a digital brain that's interfacing between you and another computer.
That computer doesn't need to hear that its idea is great, or to parse through all that conversational text.
There's some interesting things going on there about the fundamental nature of these things.
Well, this is what you don't see in Star Trek: you know, Captain Kirk or whoever,
and I'm going to mix up the episodes here,
saying, "Hey, computer, we are approaching Deep Space Nine."