Dan Nottingham
Do all these sources agree?
Is there a consensus you can build on?
And of course, with the large language models, you might argue, well, some of them have bias in them, from the way they're trained or the way they're prompted.
So that's why we use multiple large language models to look through the data and decide, what am I really learning from this?
And then at the end, it creates a full analysis of what it's learned.
It lists all the sources and creates a credibility score from zero to 10 that tells you how credible it is.
So at a glance, before you even read the analysis, you can see right away whether what I'm researching, what I'm trying to say, what I'm thinking about, is credible.
And then you read about why or why not.
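The aggregation idea described above can be sketched in a few lines: several models each rate a claim, and the ratings are combined into one headline 0–10 score plus an agreement check. This is a minimal illustration only; the model names, the averaging scheme, and the agreement threshold are all hypothetical assumptions, not the product's actual implementation.

```python
# Illustrative sketch: combining per-model credibility ratings (each 0-10)
# into a single score with a simple consensus measure. All names and the
# agreement threshold below are made-up assumptions for demonstration.
from statistics import mean, pstdev

def combine_scores(model_scores: dict[str, float]) -> dict:
    """Average per-model ratings and report how much the models agree."""
    scores = list(model_scores.values())
    spread = pstdev(scores)  # low spread means the models converged
    return {
        "credibility": round(mean(scores), 1),  # headline 0-10 score
        "spread": round(spread, 1),
        "models_agree": spread < 1.5,           # hypothetical threshold
    }

# Example: three hypothetical models rate the same claim.
result = combine_scores({"model_a": 8.0, "model_b": 7.5, "model_c": 8.5})
```

A reader gets the at-a-glance signal first (the score and whether the models agree) and can then dig into the full analysis for the why.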
That is spot on. You got it.
So the basic idea is, one, you can do what most people seem to naturally want to do: check other people, right? See if they're saying the right things. "That doesn't sound right to me, let me check that." You absolutely can do that with Am I Credible. But one of the uses I'd love to see come out of this is that people pause for a second before they post, proactively deciding: I'm going to make a statement.
I think I'm right.
Let me just double-check before I post it.
How do I know I'm not spreading misinformation?
And if you're going to try to be a credible person, you know, be accountable for yourself and what you say.
Maybe you'll check things before you say it.
And that way you can proactively stop misinformation right in its tracks, before it starts spreading, before it's even created.
Yes.
You know, you make an interesting point about the cocktail party, where you hear someone say something and you want to challenge them, but maybe you're not confident enough in your own knowledge, so you kind of let it go. Or that even happens online, where you see something.