Jimmy Wales
Origin story of Wikipedia.
Well, so I was watching the growth of the free software movement, open source software, and seeing programmers coming together to collaborate in new ways, sharing code, doing that under a free license, which is really interesting because it empowers an ability to work together.
That's really hard to do if the code is still proprietary because then if I chip in and help, we sort of have to figure out how I'm
going to be rewarded and what that is, but the idea that everyone can copy it and it just is part of the commons really empowered a huge wave of creative software production.
And I realized that that kind of collaboration could extend beyond just software to all kinds of cultural works.
And the first thing that I thought of was an encyclopedia.
I thought, oh, that seems obvious that an encyclopedia, you can collaborate on it.
There's a few reasons why.
One, we all pretty much know what an encyclopedia entry on, say, the Eiffel Tower should be like.
You know, you should see a picture, a few pictures maybe, history, location, something about the architect, etc., etc., etc.
So we have a shared understanding of what it is we're trying to do, and then we can collaborate and different people can chip in and find sources and so on and so forth.
So we set up Nupedia first, which was about two years before Wikipedia.
And with Nupedia, we had this idea that in order to be respected, we had to be even more academic than a traditional encyclopedia.
Because a bunch of volunteers on the internet getting to write an encyclopedia, you know, you could be made fun of if it's just every random person.
So we had implemented this seven-stage review process to get anything published.
And two things came of that.
So one thing, one of the earliest entries that we published after this rigorous process, a few days later, we had to pull it because as soon as it hit the web and the broader community took a look at it, people noticed plagiarism and realized that it wasn't actually that good, even though it had been reviewed by academics and so on.
So we had to pull it.
So it's like, okay, well, so much for a seven-stage review process.
But also, I decided that I wanted to try.
Why is this taking so long?
Why is it so hard?
So I thought, okay.
I saw that Robert Merton had won the Nobel Prize in Economics for his work on option pricing theory.
And when I was in academia, that's what I worked on, option pricing theory, trying to publish papers.
So I'd worked through all of his academic papers, and I knew his work quite well.
I thought, oh, I'll just – I'll write a short biography of Merton.
And when I started to do it, I'd been out of academia.
I hadn't been a grad student for a few years then.
I felt this huge intimidation because they were going to take my draft and send it to the most prestigious finance professors that we could find to give me feedback for revisions.
And it felt like being back in grad school.
It's like this really oppressive sort of thing: you're going to submit it for review, and you're going to get a little bit of the bad part of grad school.
And so that was when I realized, okay, look, this is never going to work.
This is not something that people are really going to want to do.
So Jeremy Rosenfeld, one of my employees, had shown me the wiki concept in December.
And then Larry Sanger brought in the same, said, what about this wiki idea?
And so in January, we decided to launch Wikipedia, but we weren't sure.
So the original project was called Nupedia.
And even though it wasn't successful, we did have quite a group of academics and like really serious people.
And we were concerned that, oh, maybe these academics are going to really hate this idea.
And we shouldn't just convert the project immediately.
We should launch this as a side project, the idea of here's a wiki where we can start playing around.
But actually, we got more work done in two weeks than we had in almost two years because people were able to just jump on and start doing stuff.
And it was actually a very exciting time.
Back then, you could be the first person who typed Africa is a continent and hit save, which isn't much of an encyclopedia entry.
But it's true, and it's a start, and it's kind of fun.
You put your name down.
Actually, a funny story: several years later, I just happened to be online when Robert Aumann won the Nobel Prize in Economics.
And we didn't have an entry on him at all, which was surprising, but it wasn't that surprising.
This was still early days, you know?
And so I got to be the first person to type, Robert Aumann won the Nobel Prize in Economics, and hit save, which again wasn't a very good article.
But then I came back two days later and people had improved it and so forth.
It was the second half of the experience: whereas with Robert Merton I never succeeded because it was just too intimidating, here it was like, oh, I was able to chip in and help.
Other people jumped in.
Everybody was interested in the topic because it was all in the news at the moment.
And so it's just a completely different model, which worked much, much better.
Well, I think it's, you know, especially in the early days, and this, by the way, has gotten much harder because there are fewer topics that are just greenfield, you know, available.
But, you know, you could say, oh, well, you know, I know a little bit about this and I can get it started.
But then it is fun to come back then and see other people have added and improved and so on and so forth.
And that idea of collaborating, you know, where people can, much like open source software, you know, you put your code out and then people suggest revisions and they change it and it modifies and it grows beyond the original creator.
It's just a kind of a fun, wonderful, quite geeky hobby, but people enjoy it.
Yeah, I mean, not as much as there probably should have been, in a way.
During those two years of the failure of Nupedia, when very little work got done,
what was actually productive was a huge, long email discussion, very clever people talking about things like neutrality, talking about what an encyclopedia is, but also talking about more technical ideas.
Back then, XML was kind of all the rage, and we were thinking: shouldn't certain data that might be in multiple articles get updated automatically?
So, for example, the population of New York City, every 10 years there's a new official census.
Couldn't you just update that bit of data in one place and it would update across all languages?
That is a reality today, but back then it was just like, hmm, how do we do that?
How do we think about that?
Yeah, Wikidata, you can link from a Wikipedia entry, you can link to that piece of data in Wikidata.
I mean, it's a pretty advanced thing, but there are advanced users who are doing that.
And then when that gets updated, it updates in all the languages where you've done that.
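To make that concrete, here is a minimal sketch, in Python, of the "store it once, reuse it everywhere" idea behind Wikidata: fetch one structured claim for one item over the public EntityData endpoint. The identifiers Q60 (New York City) and P1082 (population) are used here for illustration and should be double-checked before relying on them.

```python
# Minimal sketch: read one structured value (population) for one Wikidata item.
# Q60 is assumed to be the item for New York City, P1082 the population property.
import requests

ITEM = "Q60"        # assumed item ID for New York City
PROPERTY = "P1082"  # assumed property ID for "population"

url = f"https://www.wikidata.org/wiki/Special:EntityData/{ITEM}.json"
entity = requests.get(url, timeout=30).json()["entities"][ITEM]

# Each statement keeps its value under mainsnak.datavalue; population is a quantity.
statements = entity["claims"].get(PROPERTY, [])
if statements:
    amount = statements[0]["mainsnak"]["datavalue"]["value"]["amount"]
    print(f"{ITEM} {PROPERTY} = {amount}")  # prints one population statement, e.g. "+8804190"
else:
    print("No population statement found.")
```

Any article, in any language edition, that pulls this property rather than hard-coding the number will pick up a central update automatically, which is the behavior described above.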
Yeah, so the interface.
So an example, there was some software called UseModWiki, which we started with.
It's quite amusing, actually, because the main reason we launched with UseModWiki is that it was a single Perl script.
So it was really easy for me to install it on the server and just get running.
But it was some guy's hobby project.
It was cool, but it was just a hobby project.
And all the data was stored in flat text files.
So there was no real database behind it.
So to search the site, you basically used grep, which is just a basic Unix utility to look through all the files.
So that clearly was never going to scale.
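As a rough illustration of why that was never going to scale, here is a sketch, with a hypothetical file layout rather than UseModWiki's actual storage format, of what "search is just grep over flat files" amounts to: a full linear scan of every page file for every query.

```python
# Sketch of grep-style search over flat page files: every query reads every file.
# The pages/*.txt layout is hypothetical, not UseModWiki's real on-disk format.
from pathlib import Path

def search_pages(pages_dir: str, term: str) -> list[str]:
    """Return the titles of pages whose text contains the term (case-insensitive)."""
    hits = []
    for page_file in Path(pages_dir).glob("*.txt"):
        text = page_file.read_text(encoding="utf-8", errors="ignore")
        if term.lower() in text.lower():
            hits.append(page_file.stem)
    return hits

# Roughly equivalent in spirit to: grep -il "eiffel tower" pages/*.txt
print(search_pages("pages", "Eiffel Tower"))
```

Every search touches every page, so query time grows with the size of the whole wiki, which is why a real database and index eventually had to replace it.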
But also in the early days, it didn't have real logins.
So you could set your username, but there were no passwords.
So, you know, I might say Bob Smith and then someone else comes along and says, no, I'm Bob Smith.
And they could both have it.
Now that never really happened.
We didn't have a problem with it, but it was kind of obvious.
Like you can't grow a big website where everybody can pretend to be everybody.
That's not going to be good for trust and reputation and so forth.
So quickly I had to write a little, you know, login, you know, store people's passwords and things like that.
So you can have unique identities.
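A minimal sketch of what "a little login" has to provide, unique usernames backed by salted password hashes, is below; it is illustrative only and says nothing about the code that was actually written at the time.

```python
# Illustrative only: unique usernames plus salted, hashed passwords,
# so nobody else can claim to be "Bob Smith".
import hashlib
import secrets

users: dict[str, tuple[bytes, bytes]] = {}  # username -> (salt, password hash)

def register(username: str, password: str) -> bool:
    if username in users:  # name already taken, so identities stay unique
        return False
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[username] = (salt, digest)
    return True

def login(username: str, password: str) -> bool:
    if username not in users:
        return False
    salt, digest = users[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return secrets.compare_digest(candidate, digest)

register("Bob Smith", "hunter2")
print(register("Bob Smith", "other"))  # False: the second "Bob Smith" is rejected
print(login("Bob Smith", "hunter2"))   # True
```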
And then another example of something, you know, quite, you would have never thought would have been a good idea.
And it turned out to not be a problem, but
To make a link in Wikipedia, in the early days, you would make a link to a page that may or may not exist by just using camel case, meaning it's like uppercase, lowercase, and you smash the words together.
So maybe New York City, you might type N-E-W, no space, capital Y, York City.
And that would make a link.
But that was ugly.
That was clearly not right.
And so I was like, okay, well, that's just not going to look nice.
Let's just use square brackets.
Two square brackets makes a link.
That may have been an option in the software.
I'm not sure I thought up square brackets.
But anyway, we just did that, which worked really well.
It makes nice links.
And you can see it in red links or blue links, depending on whether the page exists or not.
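As a toy sketch of that convention (the regex and the page index here are illustrative, not MediaWiki's actual parser), you can pull every [[double bracket]] target out of a chunk of wikitext and color it blue or red depending on whether the page exists:

```python
# Toy link extractor for [[Target]] or [[Target|label]] wikitext links,
# classifying each as "blue" (page exists) or "red" (page missing).
import re

EXISTING_PAGES = {"New York City", "Eiffel Tower"}  # hypothetical page index

def classify_links(wikitext: str) -> list[tuple[str, str]]:
    """Return (target, color) pairs for every [[...]] link in the text."""
    targets = re.findall(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]", wikitext)
    return [(t, "blue" if t in EXISTING_PAGES else "red") for t in targets]

sample = "The [[Eiffel Tower]] is in [[Paris]], not [[New York City]]."
print(classify_links(sample))
# [('Eiffel Tower', 'blue'), ('Paris', 'red'), ('New York City', 'blue')]
```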
But the thing that didn't occur to me to even think about is that, for example, on the German-language standard keyboard, there is no square bracket.
So for German Wikipedia to succeed, people had to learn to do some Alt codes to get the square bracket, or a lot of users would just cut and paste a square bracket from wherever they could find one.
And yet German Wikipedia has been a massive success, so somehow that didn't slow people down.
How is it that German keyboards don't have a square bracket? How do you do programming? How do you...
How do you live life to its fullest with a screwdriver?
It's a very good question.
I'm not really sure.
I mean, maybe it does now because keyboard standards have, you know, drifted over time and becomes useful to have a certain character.
I mean, it's the same thing.
Like, there's not really a W character in Italian.
And it wasn't on keyboards, though I think it is now.
But in general, W is not a letter in the Italian language.
It appears in enough international words, though, that it's crept into Italian.
The discussion of square brackets in German.
What is an encyclopedia?
So the way I would put it is that our goal for an encyclopedia is the sum of all human knowledge, but "sum" meaning summary.
So, and this was an early debate.
I mean, somebody started uploading the full text of Hamlet, for example.
And we said, hmm, wait, hold on a second.
That's not an encyclopedia article, but why not?
So, hence was born Wikisource, which is where you put original texts, out-of-copyright texts, and things like that.
Because people said, no, an encyclopedia article about Hamlet, that's a perfectly valid thing.
But the actual text of the play is not an encyclopedia article.
So most of it's fairly obvious.
But there are some interesting quirks and differences.
So, for example, as I understand it, in French-language encyclopedias, traditionally
it would be quite common to have recipes, which in an English-language encyclopedia would be unusual.
You wouldn't find a recipe for chocolate cake in Britannica.
And so I actually don't know the current state.
I haven't thought about that in many, many years now.
I wouldn't say there are chocolate cake recipes.
I mean, you might find a sample recipe somewhere.
I'm not saying there are none, but in general, no.
Like we wouldn't have recipes.
It's actually very complicated.
I'm actually quite a good cook.
What's interesting is it's very hard to have a neutral recipe.
Like a canonical recipe for cake.
A canonical recipe is kind of difficult to come by because there's so many variants and it's all debatable and interesting.
For something like chocolate cake, you could probably say, here's one of the earliest recipes or here's one of the most common recipes.
You know, for many, many things, the variants are as interesting, you know, as somebody said to me recently, you know, 10 Spaniards, 12 paella recipes.
So, you know, these are all matters of open discussion.
I mean, yes, it does.
I mean, it doesn't because I know those numbers and see them from time to time.
But in another sense, a deeper sense, yeah, it does.
I mean, it's really remarkable.
I remember when English Wikipedia passed 100,000 articles and when German Wikipedia passed 100,000 because I happened to be in Germany with a bunch of Wikipedians that night.
You know, then it seemed quite big.
I mean, we knew at that time that it was nowhere near complete.
I remember at Wikimania at Harvard, when we did our annual conference there in Boston, someone who had come to the conference from Poland
had brought along a small encyclopedia, a single-volume encyclopedia of biographies.
So short biographies, normally a paragraph or so, about famous people in Poland.
And there were some 22,000 entries.
And he pointed out that even then, 2006, Wikipedia felt quite big.
And he said, in English Wikipedia, there's only a handful of these, you know, less than 10%, I think he said.
And so then you realize, yeah, actually, you know, who was the mayor of Warsaw in 1873?
Probably not in English Wikipedia then, though it might be today.
But there's so much...
And of course, what we get into when we're talking about how many entries there are and how many could there be is this very deep philosophical issue of notability, which is the question of, well, how do you draw the limit?
How do you draw what is there?
Sometimes people say, oh, there should be no limit.
But I think that doesn't stand up to much scrutiny if you really pause and think about it.
So I see in your hand there, you've got a Bic pen.
Everybody's seen, you know, billions of those in life.
It's a classic, clear Bic pen.
So could we have an entry about that Bic pen?
Well, I bet we do.
That type of Bic pen.
Because it's classic.
Everybody knows it.
And it's got a history.
And actually, there's something interesting about the Bic company.
They also make kayaks.
And there's something else they're famous for.
Basically, they're sort of a definition by non-essentials company.
Anything that's long and plastic, that's what they make.
If you want to find the common ground.
The platonic form of a Bic.
But could we have an article about that very Bic pen?
So, Lex Fridman's Bic pen as of this week.
Oh, the very, this instance.
The very specific instance.
And the answer is no, there's not much known about it, I dare say, unless, you know, it's very special to you and your great-grandmother gave it to you or something.
You probably know very little about it.
It's just here in the office.
So, that's just to show there is a limit.
I mean, in German Wikipedia, they used to talk about
the rear nut of the wheel of Uli Fuchs' bicycle (Uli Fuchs being a well-known Wikipedian of the time) to illustrate that you can't have an article about literally everything.
And so then it raises the question, what can you have an article about?
And that can vary depending on the subject matter.
One of the areas where we try to be much more careful would be biographies.
The reason is a biography of a living person, if you get it wrong, it can actually be quite hurtful, quite damaging.
And so if someone is a private person and somebody tries to create a Wikipedia entry about them, there's really no way to do it properly.
There's just not much known.
So, for example, an encyclopedia article about my mother.
My mother, school teacher, later a pharmacist, wonderful woman, but never been in the news.
I mean, other than me talking about why there shouldn't be a Wikipedia entry, that's probably made it in somewhere as a standard example.
You know, there's not enough known.
And you could sort of imagine a database of genealogy having date of birth, date of death, and certain elements like that of private people, but you couldn't really write a biography.
One of the areas this comes up quite often is what we call BLP1E.
We've got lots of acronyms.
Biography of a living person who's notable for only one event is a real sort of danger zone.
And the type of example would be a victim of a crime.
So someone who's a victim of a famous serial killer, but about whom, like, really not much is known.
They weren't a public person.
They're just a victim of a crime.
We really shouldn't have an article about that person.
They'll be mentioned, of course, and maybe the specific crime might have an article.
But for that person, no, not really.
That's not really something that makes any sense, because how can you write a biography about someone you don't know much about?
And this is, you know, it varies from field to field.
So, for example, for many academics, we will have an entry that we might not have in a different context, because for an academic...
It's important to have sort of their career, you know, what papers they've published, things like that.
You may not know anything about their personal life, but that's actually not encyclopedically relevant in the same way that it is for a member of a royal family where it's basically all about the family.
We're fairly nuanced about notability and where it comes in.
And I've always thought that the term notability, I think, is a little problematic.
I mean, we struggle about how to talk about it.
The problem with notability is...
It can feel insulting to say, oh no, you're not noteworthy.
Well, my mother's noteworthy.
She's a really important person in my life, right?
So that's not right.
But it's more like verifiability.
Is there a way to get information that actually makes an encyclopedia entry?
Yeah, yeah, yeah.
So there's a few things to unpack in all that.
So first, one of the things I find really, always find very interesting is your status with MIT.
Okay, that's upsetting and it's an argument and can be sorted out.
But then what's interesting is you gave as much time to that, which is actually important and relevant to your career and so on, as to where your father was born, which most people would hardly notice but is really meaningful to you.
And I find that a lot when I talk to people who have a biography in Wikipedia is they're often as annoyed by a tiny error that no one's going to notice.
Like this town in Tajikistan has got a new name and so on.
Like nobody even knows what that means or whatever, but it can be super important.
And so that's one of the reasons, you know, for biographies, we say like human dignity really matters.
And so, you know, some of the things have to do with, and this is a common debate that goes on in Wikipedia, is what we call undue weight.
So I'll give an example here.
There was an article I stumbled across many years ago about the mayor.
No, he wasn't a mayor.
He was a city council member of, I think it was Peoria, Illinois, but some small town in the Midwest.
And the entry, you know, he's been on the city council for 30 years or whatever.
He's a pretty, I mean, frankly, pretty boring guy and seems like a good local city politician.
But in this very short biography, there was a whole paragraph, a long paragraph about his son being arrested for DUI.
And it was clearly undue weight.
It's like, what has this got to do with this guy if it even deserves a mention?
It wasn't even clear...
Had he done anything hypocritical?
Had he himself done anything wrong?
It was his son; his son got a DUI.
That's never great, but it happens to people and it doesn't seem like a massive scandal for your dad.
So of course, I just took that out immediately.
This is a long, long time ago.
And that's the sort of thing where, you know, we have to really think about in a biography and about controversies to say, is this a real controversy?
So in general, like one of the things we tend to say is like any section, so if there's a biography and there's a section called controversies, that's actually poor practice.
Because it just invites people to say, oh, I want to work on this entry.
See, there's seven sections.
Oh, this one's quite short.
Can I add something?
Go out and find some more controversies.
No, that's nonsense, right?
And in general, putting it separate from everything else kind of makes it seem worse and also doesn't put it in the right context.
Whereas if it's sort of a live flow, and there is a controversy, and there's always
potential controversy for anyone, it should just be worked into the overall article, because then it doesn't become a temptation.
You can contextualize appropriately and so forth.
So that's, you know, part of the whole process.
But I think for me, one of the most important things is what I call community health.
So yeah, are we going to get it wrong sometimes?
We're humans and doing good quality, you know, sort of reference material is hard.
The real question is, how do people react, you know, to a criticism or a complaint or a concern?
And if the reaction is defensiveness or combativeness back, or if someone's really sort of in there being aggressive and in the wrong, like, no, no, no, hold on.
We've got to do this the right way.
You've got to say, okay, hold on.
Are there good sources?
Is this contextualized appropriately?
Is it even important enough to mention?
What does it mean?
And one of the areas where I do think there is a very complicated flaw, and you've alluded to it a little bit, is that we know the media is deeply flawed.
We know that journalism can go wrong.
And I would say particularly in the last, whatever, 15 years, we've seen a real decimation of local media, local newspapers.
We've seen a real rise in clickbait headlines and sort of
eager focus on anything that might be controversial.
We've always had that with us, of course.
There's always been tabloid newspapers.
But that makes it a little bit more challenging to say, okay, how do we sort things out when we have a pretty good sense that...
not every source is valid.
So as an example, a few years ago, it's been quite a while now, we deprecated the Mail Online as a source.
And the Mail Online, the digital arm of the Daily Mail, it's a tabloid.
It's not completely, you know, fake news, but it does tend to run very hyped-up stories.
They really love to attack people and go on the attack for political reasons and so on.
And it just isn't great.
And so by saying "deprecated", I think some people say, oh, you banned the Daily Mail.
No, we didn't ban it as a source.
We just said, look, it's probably not a great source, right?
You should probably look for a better source.
So certainly, you know, if the Daily Mail runs a headline saying, new cure for cancer, right?
It's like, you know, probably there's more serious sources than a tabloid newspaper.
So, you know, in an article about lung cancer, you probably wouldn't cite the Daily Mail.
That's kind of ridiculous.
But also for celebrities and so forth, to sort of know, well, they do cover celebrity gossip a lot, but they also tend to have vendettas and so forth.
And you really have to step back and go, hmm, is this really encyclopedic or is this just the Daily Mail going on a rant?
It requires massive community health.
Yeah, a great example that I really loved this morning: someone left a note on my user talk page in English Wikipedia with quite a dramatic headline saying,
racist hook on front page.
So we have on the front page of Wikipedia, we have a little section called Did You Know?
And it's just little tidbits and facts, just things people find interesting.
And there's a whole process for how things get there.
And the one that somebody was raising a question about compared a very well-known US football player, who is Black, to a Lamborghini.
There was a quote from another famous sportsperson making the comparison, clearly as a compliment.
And so somebody said, actually, here's a study.
Here's some interesting information about how Black sportspeople are far more often compared to inanimate objects and given that kind of analogy.
And I think it's demeaning to compare a person to a car, et cetera, et cetera.
But they said, I'm not deleting it.
I'm not removing it.
I just want to raise the question.
And then there's this really interesting conversation that goes on where –
I think the general consensus was, you know what, this isn't like the alarming headline, racist thing on the front page of Wikipedia.
That sounds – holy moly, that sounds bad.
But it's sort of like, actually, yeah, this probably isn't the sort of analogy that we think is great.
And so we should probably think about how to improve our language and not compare sports people to inanimate objects and particularly be aware of –
certain racial sensitivities there might be around that sort of thing, if there is a disparity in the media in how people are described.
And I just thought, you know what?
Nothing for me to weigh in on here.
This is a good conversation.
Like, nobody's saying, you know, people should be banned if they refer to, what was his name, The Fridge, Refrigerator Perry,
the very famous comparison of a Chicago Bears player to an inanimate object many years ago.
But they're just saying, hey, let's be careful about analogies that we just pick up from the media.
I said, yeah, you know, that's good.
Oh yeah, I mean it's really important and we talk a lot about weasel words, you know, and...
You know, actually, I'm sure we'll end up talking about AI and ChatGPT, but just to quickly mention, in this area, I think one of the potentially powerful tools, because it is quite good at this, and I've played around with it and practiced quite a lot, is that ChatGPT 4 is really quite able to take a passage
and point out potentially biased terms and rewrite it to be more neutral.
Now, it is a bit anodyne and it's a bit, you know, cliched.
So sometimes it just takes the spirit out of something that's actually not bad.
It's just like, you know, poetic language.
And you're like, okay, that's not actually helping.
But in many cases, I think that sort of thing is quite interesting.
And I'm also interested in, you know...
Can you imagine where you feed in a Wikipedia entry and all the sources, and you say, help me find anything in the article that is not accurately reflecting what's in the sources.
And that doesn't have to be perfect.
It only has to be good enough to be useful to a community.
So if it scans an article and all the sources, and it comes back with
10 suggestions, and seven of them were decent, and three of them it just didn't understand.
Well, actually, that's probably worth my time to do.
And it can help us really more quickly get good people to sort of review obscure entries and things like that.
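As a hedged sketch of that review workflow, not any real Wikipedia tool: hand a model the article text plus its cited sources and ask it to list the claims the sources do not appear to support. The call_llm function below is a placeholder for whatever model API you would actually plug in.

```python
# Sketch only: flag article claims that the cited sources may not support.
# call_llm is a placeholder; swap in your own model client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def flag_unsupported_claims(article_text: str, sources: dict[str, str]) -> str:
    """Ask the model for statements in the article not clearly backed by the sources."""
    source_block = "\n\n".join(
        f"SOURCE [{name}]:\n{text}" for name, text in sources.items()
    )
    prompt = (
        "You are helping volunteer editors review an encyclopedia article.\n"
        "List any statements in the ARTICLE that are not clearly supported by the "
        "SOURCES below. For each, quote the statement and say which source, if any, "
        "comes closest. Do not invent sources.\n\n"
        f"ARTICLE:\n{article_text}\n\n{source_block}"
    )
    return call_llm(prompt)
```

As the passage above says, this only has to be good enough to be useful: a handful of decent suggestions out of ten already saves a reviewer time.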
Yeah, there's a famous webcomic titled Citogenesis, which is about how an error gets into Wikipedia with no source for it, but then a lazy journalist reads it and writes it up, creating the source.
And then some helpful Wikipedian spots that it has no source, finds the source, and adds it to Wikipedia.
And voila, magic.
This happened to me once.
Well, it nearly happened.
There was this—
I mean, it was really brief.
I went back and researched it.
I'm like, this is really odd.
So, Biography Magazine, which is a magazine published by the Biography TV channel, had a profile of me, and it said, in his spare time, I'm not quoting exactly, it's been many years, but in his spare time, he enjoys playing chess with friends.
I thought, wow, that sounds great.
Like, I would like to be that guy, but actually, I mean, I play chess with my kids sometimes, but no, it's not a hobby of mine.
I was like, where did they get that?
And I contacted the magazine and said, where did that come from?
They said, oh, it was in Wikipedia.
I looked in the history.
There had been vandalism of my Wikipedia entry, which wasn't damaging.
And it had already been removed.
But then I thought, oh, gosh, well, I better mention this to people because otherwise somebody's going to read that and they're going to add it to the entry and it's going to take on a life of its own.
And then sometimes I wonder if it has because I was invited a few years ago to do the ceremonial first move in the World Chess Championship.
And I thought, I wonder if they think I'm a really big chess enthusiast because they read this biography magazine article.
But that problem, when we think about large language models and the ability to quickly generate very plausible but not true content, I think is something that there's going to be a lot of shakeout, a lot of implications of that.
So, I mean, there's a lot of stuff going on.
Obviously, the technology has moved very quickly in the last six months and looks poised to do so for some time to come.
So, first things first, I mean, part of our philosophy is the open licensing, the free licensing, the idea that, you know, this is what we're here for.
We are a volunteer community and we write this book.
We give it to the world to do what you like with.
You can modify it, redistribute it, redistribute modified versions, commercially, non-commercially.
This is the licensing.
So in that sense, of course, it's completely fine.
Now, we do worry a bit about attribution because it is a Creative Commons attribution share-alike license.
So attribution is important, not just because of our licensing model and things like that, but it's just...
Proper attribution is just good intellectual practice.
And that's a really hard, complicated question.
You know, if I were to write something about my visit here, I might say in a blog post, you know, I was in...
Austin, which is a city in Texas.
I'm not going to put a source for Austin as a city in Texas.
That's just general knowledge.
I learned it somewhere.
I can't tell you where.
So you don't have to cite and reference every single thing.
But if I actually did research and I used something very heavily, it's just morally proper to give your sources.
So we would like to see that.
And obviously, they call it grounding.
So particularly people at Google are really keen on figuring out grounding.
The same kind of thing.
And of course, one of the biggest flaws in ChatGPT right now is that it will just literally make things up, just to be
amiable, I think.
It's programmed to be very hopeful and amiable, and it doesn't really know or care about the truth.
It can get bullied into—it can kind of be convinced into— Well, but like this morning, the story I was telling earlier about—
comparing a football player to a Lamborghini.
And I thought, is that really racial?
But I'm just, I'm mulling it over.
And I thought, I'm going to go to ChatGPT.
So I said to ChatGPT4, I said, you know, this happened in Wikipedia.
Can you think of examples where a white athlete has been compared to a fast car or some other inanimate object?
And it comes back with a very plausible essay
where it tells you why these analogies are common in sport.
I said, no, no, I really, could you give me some specific examples?
So it gives me three specific examples, very plausible, correct names of athletes and contemporaries and all of that could have been true.
I Googled every single quote, and none of them existed.
And so I'm like, well, that's really not good.
Like I wanted to explore a thought process I was in.
At first I thought, how do I Google this?
And it's kind of a hard thing to Google, because unless somebody has written about this specific topic, you won't find it.
Whereas a large language model has processed all this data, so it can probably piece that together for me, but it just can't yet.
So I hope that with ChatGPT 5, 6, 7, you know, in three to five years, we'll see a much higher level of accuracy, where when you ask a question like that, instead of being quite so eager to please by giving you a plausible-sounding answer, it just says, I don't know.
Well, it's one of the things I've said for a long time.
So in Wikipedia, one of the great things we do may not be great for our reputation, except in a deeper sense for the long term, I think it is.
But, you know, there will often be a notice that says,
the neutrality of this section has been disputed, or, the following section doesn't cite any sources.
And I always joke, you know, sometimes I wish the New York Times would run a banner saying the neutrality of this has been disputed.
They could tell us: we had a big fight in the newsroom as to whether to run this or not,
but we thought it's important enough to bring it to you; just be aware that not all the journalists are on board with it.
Ah, that's actually interesting, and that's fine.
I would trust them more for that level of transparency.
So yeah, similarly, ChatGPT should say, yeah, 87% bullshit.
It's really interesting.
Yeah, I hadn't thought of that.
Because one of the things I do spend a lot of time thinking about these days, and people have found that we're moving slowly, but we are moving, thinking about, okay, these tools exist.
Are there ways that this stuff can be useful to our community?
Because a part of it is we do approach things in a non-commercial way, in a really deep sense.
It's been great that Wikipedia has become very popular, but really we're a community whose hobby is writing an encyclopedia.
And if it's popular, great.
If it's not, okay, we might have trouble paying for more servers, but it'll be fine.
And so how do we help the community grow?
What are the ways that these tools can support people?
And one example I never thought about, I'm going to start playing with it, is feed in the article and feed in the talk page and say, can you suggest some warnings in the article based on the conversations in the talk page?
I think it might be good at that.
It might get it wrong sometimes.
But again, if it's reasonably successful at doing that, and it suggests, say,
the neutrality of this has been disputed on a section that has a seven-page discussion on the talk page,
that might be useful. I don't know.
It's something we're playing with.
I can give an example that I haven't looked at in a long time, but I was really pleased with what I saw at the time.
So the discussion was about something being built in Israel.
And for their own political reasons, one side calls it a wall, hearkening back to Berlin Wall, apartheid.
The other calls it a security fence.
So we can understand quite quickly, if we give it a moment's thought, like, okay, I understand why people would have...
this grappling over the language.
Like, okay, you want to highlight the negative aspects of this, and you want to highlight the positive aspects, so you're going to try and choose a different name.
And so there was this really fantastic Wikipedia discussion on the talk page.
How do we word that paragraph to talk about the different naming?
It's called this by Israelis, it's called this by Palestinians.
And how you explain that to people could be quite charged, right?
You could easily...
explain, oh, there's this difference, and it's because this side's good and this side's bad, and that's why there's a difference.
Or you could say, actually, let's try and really stay as neutral as we can and try to explain the reason.
So you may come away from it with a concept.
Oh, okay, I understand what this debate is about now.
And, you know, in a number of cases, so this actually speaks to a slightly broader phenomenon, which is there are a number of cases where there is no one word that can get consensus.
And in the body of an article, that's usually okay because we can explain the whole thing.
You can come away with an understanding of why each side wants to use a certain word.
But there are some aspects, like the page has to have a title.
Same thing with certain things like photos.
It's like, well, there's different photos, which one's best?
A lot of different views on that.
But at the end of the day, you need the lead photo because there's one slot for a lead photo.
Categories is another one.
So at one point, and I have no idea if it's in there today, but I don't think so,
I was listed in, you know, American entrepreneurs, fine, but also American atheists.
And I said, hmm, that doesn't feel right to me.
Like, just personally, it's true.
I mean, I wouldn't disagree with the objective fact of it.
But when you click the category, you see
a lot of people who are, you might say, American atheist activists, because that's their big issue.
So Madalyn Murray O'Hair, Richard Dawkins, various famous people who make it a big part of their public argument and persona.
But that's not true of me.
It's just like my private personal belief.
It doesn't really, it's not something I campaign about.
So it felt weird to put me in the category, but what category would you put me in, you know?
And do you need that?
In this case, I argued that
it doesn't need that: I don't speak about it publicly except incidentally from time to time, and I don't campaign about it.
So it's weird to put me with this group of people, and that argument carried the day. I hope not just because it was me. But categories can be like that, where, you know, you're either in the category or you're not.
And sometimes it's a lot more complicated than that.
And again, we go back to: is it undue weight?
You know, if
someone who is now prominent in public life and generally considered to be a good person was convicted of something, let's say a DUI, when they were young.
Normally, in normal sort of discourse, we don't think, oh, this person should be in the category of American criminals.
Because you think, oh, a criminal, technically speaking, it's against the law to drive under the influence of alcohol, and you were arrested, and you spent a month in prison or whatever.
But it's odd to say that's a criminal.
So, just as an example in this area:
Mark Wahlberg, Marky Mark, that's what I always think of him as, because that was his first sort of famous name, who I wouldn't think should be listed in the category of American criminals, even though he was convicted of quite a bad crime when he was a young person. But we don't think of him as a criminal.
Should the entry talk about that?
Yeah, it's actually, that's actually an important part of his life story, you know, that he had a very rough youth and he could have, you know, gone down a really dark path and he turned his life around.
That's actually interesting.
So categories are tricky.
Well, there's definitely some really charged ones.
Like Alt-Right, I think is quite a complicated and tough label.
I mean, it's not a completely meaningless label, but boy, I think you really have to pause before you actually put that label on someone.
Partly because now you're putting them in a group of people, some of whom you really wouldn't want to be grouped with.
Yeah, so I don't think so.
And, you know, I think you can always point to specific entries and talk about specific biases, but that's part of the process of Wikipedia.
Anyone can come and challenge and argue about that, but...
You know, I see fairly often on Twitter some quite extreme accusations of bias.
And I think, you know, actually, I just, I don't see it.
I don't buy that.
And if you ask people for an example, they normally struggle.
And it depends on who they are and what it's about.
So it's certainly true that some people have quite fringe viewpoints, and who knows, in the full
rush of history, in 500 years they might be considered to be path-breaking geniuses, but at the moment they're quite fringe views, and they're just unhappy that Wikipedia doesn't report on their fringe views as being mainstream.
And that, by the way, goes across all kinds of fields.
I mean, I was once accosted on the street
outside the TED conference in Vancouver by a guy who's a homeopath who was very upset that Wikipedia's entry on homeopathy basically says it's pseudoscience.
And he felt that was biased, and I said, well, I can't really help you because, you know, we cite good quality sources to talk about the scientific status, and it's not very good.
So, you know, it depends.
And, you know, I think it's something that we should always be vigilant about.
But it's, you know, in general, I think we're pretty good.
And I think any time you go to any serious political controversy, we should have a pretty balanced perspective on who's saying what, what the views are, and so forth.
I would actually argue that
the areas where we are more likely to have bias that persists for a long period of time are actually fairly obscure things, or maybe fairly non-political things.
It's kind of a humorous example, but it's meaningful.
If you read our entries about Japanese anime, they tend to be very, very positive and very favorable because almost no one knows about Japanese anime except for fans.
And so the people who come and spend their days writing Japanese anime articles, they love it.
They kind of have an inherent love for the whole area.
Now, of course, being human beings, they'll have their internal debates and disputes about what's better or not, you know.
But in general, they're quite positive because nobody actually cares.
And anything that people are
quite passionate about, then hopefully, you know, there's quite a lot of interesting stuff going on.
So I'll give an example, a contemporary example where I think we've done a good job as of my most recent sort of look at it.
And that is the question about the efficacy of masks during the COVID pandemic.
And that's an area where I would say the public authorities really kind of jerked us all around a bit.
In the very first days, they said, whatever you do, don't rush on and buy masks.
And their concern was shortages in hospitals.
Okay, fair enough.
Later, it's like, no, everybody's got to wear a mask everywhere.
It really works really well.
Then now, I think the evidence is mixed.
Masks seem to help.
In my personal view, masks seem to help.
They're no huge burden.
You might as well wear a mask in any environment where you're with a giant crowd of people and so forth.
But it's very politicized, that one.
It's very politicized, certainly in the US, much more so.
I mean, I live in the UK.
I live in London.
I've never seen on the streets the kind of thing there are a lot of reports of in the US, people actively angry because someone else is wearing a mask in public, that sort of thing.
And so, because it became very politicized, if you go to Wikipedia and you research this topic, I think you'll find more or less what I've just said.
Like, actually, up to this point in history, it's mixed evidence.
Masks seem to help, but maybe not as much as some of the authorities said, and here we are.
And that's kind of an example where I think, okay, we've done a good job, but I suspect there are people on both sides of that very emotional debate who think this is ridiculous.
Hopefully we've got quality sources, so then hopefully those people who read this can say, oh, actually, you know, it is complicated.
If you can get to the point of saying, okay, I have my view, but I understand other views, and I do think it's a complicated question, great, now we're a little bit more mature as a society.
And that whole politicization of society is just so damaging.
And I don't know in the broader world, how do we start to fix that?
That's a really hard question.
Well, I mean, I think we have to try to do our best to recognize both, but also to appropriately contextualize.
And so this can be quite hard, particularly when emotions are high.
That's just a fact about human beings.
I'll give a simpler example because there's not a lot of emotion around it.
Our entry on the moon doesn't say...
Some say the moon's made of rocks.
You know, who knows?
That kind of false neutrality is not what we want to get to.
Like, that doesn't make any sense.
But that one's easy.
Like, we all understand.
I think there is a Wikipedia entry called something like The Moon is Made of Cheese, where it talks about how this is a common sort of
joke, a thing that children say or that people tell to children or whatever, you know, it's just a thing.
Everybody's heard that the moon's made of cheese.
But nobody thinks, wow, Wikipedia is so one-sided,
it doesn't even acknowledge the cheese theory.
I say the same thing about flat Earth, you know. Again, that's exactly what I'm looking up right now.
Very little controversy.
We will have an entry about flat Earth
theorizing and flat Earth people.
My personal view is most of the people who claim to be flat earthers are just having a laugh, trolling, and more power to them, have some fun, but let's not be ridiculous.
And how can you fly from South Africa to Perth?
Because on a flat earth view, that's really too far for any plane to make it.
It's all spread out.
Yeah, I mean, yeah, it's exactly right.
And so, you know, I find in many, many cases, and of course I, like anybody else, might quibble about this or that in any Wikipedia article,
But in general, I think there is a pretty good sort of willingness and indeed eagerness to say, oh, let's fairly represent all of the meaningfully important sides.
So there's still a lot to unpack in that, right?
So meaningfully important.
So, you know, people who...
are raising questions about the efficacy of masks.
That's actually a reasonable thing to have a discussion about.
And hopefully we should treat that as a fair conversation to have and actually address which authorities have said what, and so on and so forth.
And then, you know, there are other cases where it's not meaningful opposition. I doubt the main article on the moon
even mentions cheese, probably not, because it's not credible and it's not even meant to be serious by anyone.
Or the article on the earth certainly won't have a paragraph that says, well, most scientists think it's round, but...
certain people think it's flat.
Like, that's just a silly thing to put in that article.
You would want to sort of address, you know, that's an interesting cultural phenomenon.
You want to put it somewhere.
So this, you know, this goes into all kinds of things about politics.
You want to be really careful, really thoughtful about not getting caught up in the anger of our times and really recognize.
You know, so I always thought,
I remember being really kind of proud of the U.S.
at the time when McCain was running against Obama.
Because I thought, oh, I've got plenty of disagreements with both of them.
But they both seem like thoughtful and interesting people who I would have different disagreements with.
But I always felt like, yeah, that's good.
Now we can have a debate.
Now we can have an interesting debate.
And it isn't just sort of people slamming each other, personal attacks, and so forth.
I hope so, yeah, and I think so, in the main.
Obviously, you can always find debate that went horribly wrong, because there's humans involved.
Yeah, I mean, it's really interesting.
And it feels, it's hard to judge, you know, the sweep of history within your own lifetime.
But it feels like it's gotten much worse.
That this idea of two parallel universes, where people can't agree on certain basic facts,
feels worse than it used to be.
And I'm not sure if that's true or if it just feels that way, but also I'm not sure what the causes are.
I think I would lay a lot of the blame in recent years on social media algorithms which reward clickbait headlines, which reward tweets that
go viral, and they go viral because they're cute and clever.
I mean, my most successful tweet ever, by a fairly wide margin: some reporter tweeted at Elon Musk, because he was complaining about Wikipedia or something, saying you should buy Wikipedia, and I just wrote, not for sale.
And, you know, 90 zillion retweets, and people liked it, and it was all very good.
But I'm like, you know what?
It's a cute line, right?
And it's a good, like, mic drop and all that.
And I was pleased with myself.
Like, it's not really discourse, right?
It's not really sort of what I like to do.
But it's what social media really rewards, which is kind of a – lets you and him have a fight, right?
And that's more interesting.
I mean, it's funny because at the time
I was texting with Elon, who was very pleasant to me and all of that.
Well, and we certainly see it online.
You know, like a series of tweets, a tweet thread of 15 tweets that assesses the quality of the evidence for masks, pros and cons, and where it all stands, that's not going to go viral.
But, you know, a smackdown of
a famous politician who was famously in favor of masks, who also went to a dinner and didn't wear a mask, that's going to go viral.
And, you know, that's partly human nature.
You know, people love to call out hypocrisy and all that, but it's partly what these systems elevate automatically.
I talk about this with respect to Facebook, for example.
So I think Facebook has done a pretty good job, although it's taken longer than it should in some cases.
But, you know, if you have
a very large following and you're really spouting hatred or misinformation, disinformation, they've kicked people off.
They've done some reasonable things there.
But actually, the deeper issue is of the anger we're talking about, of the contentiousness of everything.
I make a family example with two great stereotypes.
So one, the crackpot racist uncle, and one, the sweet grandma.
And I always want to point out all of my uncles in my family were wonderful people, so I didn't have a crackpot racist uncle, but everybody knows this stereotype.
Well, so, Grandma, she just posts, like, sweet comments on the kids' pictures and congratulates people on their wedding anniversary.
And Crackpot Uncle's posting his nonsense.
And normally, sort of at Christmas dinner, everybody rolls their eyes.
Oh, yeah, Uncle Frank's here.
He's probably going to say some racist comment, and we're going to tell him to shut up.
Or, you know, maybe let's not invite him this year.
You know, normal human drama.
He's got his three mates down at the pub who listen to him and all of that.
Grandma's got 54 followers on Facebook, which is the intimate family, and racist uncle has 714.
He's not a massive influence or whatever, but how did that happen?
It's because the algorithm notices, oh, when she posts, nothing happens.
He posts, and then everybody jumps in to go, gosh, shut up, Uncle Frank.
That's outrageous.
And it's like, oh, there's engagement, there's page views, there's ads, right?
And those algorithms, I think they're working to improve that, but it's really hard for them.
It's hard to improve that if that actually is working.
If the people who are saying things that get engagement, if it's not too awful, but it's just, you know, like, maybe it's not a racist uncle, but maybe it's an uncle who posts a lot about what an idiot Biden is, right?
Which isn't necessarily an offensive or blockable or bannable thing, and it shouldn't be.
But if that's the discourse that gets elevated because it gets a rise out of people, then suddenly in a society, it's like, oh, we get more of what we reward.
So I think that's a piece of what's gone on.
So a piece of it is...
The problem with making a recommendation to Facebook is that I actually believe their business model makes it really hard for them.
And I'm not anti-capitalism.
I'm not, you know... Great, somebody's got a business.
They're making money.
That's not where I come from.
But certain business models mean you are going to prioritize things that maybe aren't that long-term healthful.
And so that's a big piece of it.
So certainly for Facebook, you could say, you know, with vast resources,
start to prioritize content that's higher quality, that's healing, that's kind.
Try not to prioritize content that seems to be just getting a rise out of people.
Now, those are vague human descriptions, right?
But I do believe good machine learning algorithms, you can optimize in slightly different ways.
But to do that, you may have to say...
Actually, we're not necessarily going to increase page views to the maximum extent right now.
And I've said this to people at Facebook.
It's like, you know, if your actions are, you know, convincing people that you're breaking Western civilization, that's really bad for business in the long run.
Certainly these days I'll say Twitter is the thing that's on people's minds as being more upsetting at the moment.
But I think it's true.
And so one of the things that's really interesting about Facebook compared to a lot of companies is that Mark has a pretty unprecedented amount of power.
His ability to name members of the board, his control of the company is pretty hard to break.
Even if financial results aren't as good as they could be because he's taken a step back from the perfect optimization to say, actually, for the long-term health in the next 50 years of this organization, we need to rein in some of the things that are working for us and making money because they're actually giving us a bad reputation.
So one of the recommendations I would say is, and this is not to do with the algorithms and all that, but, you know, how about just a moratorium on all political advertising?
I don't think it's their most profitable segment, but it's given rise to a lot of deep, hard questions about dark money, about, you know, ads that are run by questionable people that push false narratives.
Or, you know, the classic kind of thing: I saw a study about Brexit
in the UK, where there were ads run targeting animal rights activists, saying, finally, when we're out from under Europe, the UK can pass proper animal rights legislation.
We're not constrained by the European process.
And similarly, for people who are advocates of fox hunting, to say, finally, when we're out of Europe, we can bring it back.
So you're telling people what they want to hear.
In some cases, it's really hard for journalists to see that.
So it used to be that for political advertising, you really needed to find some kind of mainstream narrative.
And this is still true to an extent.
A mainstream narrative that
60% of people can say, oh, I can buy into that, which meant it pushed you to the center.
It pushed you to sort of try and find some nuanced balance.
But if your main method of recruiting people is a tiny little one-on-one conversation with them, because you're able to target using targeted advertising, suddenly you don't need consistency.
You just need a really good targeting strategy,
really good Cambridge Analytica-style machine learning, algorithms, and data to convince people.
And that just feels really problematic.
So, I mean, until they can think about how to solve that problem, I would just say, you know what, it's going to cost us X amount, but it's going to be worth it
to say, you know what, we actually think our political advertising policy hasn't really helped contribute to discourse and dialogue and finding reasoned middle ground and compromise solutions.
So let's just not do that for a while until we figure that out.
So that's maybe a piece of advice.
It's a difficult problem.
And, you know, so with WT Social, WikiTribune Social, we're launching in a few months' time a completely new system, new domain name, new lots of things.
But the idea is to say, let's focus on trust.
People can rate each other as trustworthy, rate content as trustworthy.
You have to start from somewhere.
So we'll start with a core base of our tiny community who I think are sensible, thoughtful people.
We want to recruit more.
But to say, you know what, actually, let's have that as a pretty strong element to say, let's not optimize based on what gets the most page views in this session.
Let's optimize on what
the feedback from people is: this is meaningfully enhancing my life.
And so part of that, and it's probably not a good business model, but part of that is, okay, we're not going to pursue an advertising business model, but a membership model, where you don't have to be a member, but you can pay to be a member.
You maybe get some benefit from that.
And the analogy I would give is that broadcast television funded by advertising gives you a different result than paying for HBO, paying for Netflix, paying for whatever.
And the reason is...
You know, if you think about it, what is your incentive as a TV producer?
You're going to make a comedy for ABC network in the U.S.
You basically say, I want something that almost everybody will like and watch.
So it tends to be a little blander.
You know, family friendly, whatever.
Whereas if you say, oh, actually, I'm going to use the HBO example and an old example.
You say, you know what?
Sopranos isn't for everybody.
Sex and the City isn't for everybody.
But between the two shows, we've got something for everybody that they're willing to pay for.
So you can get edgier, higher quality, in my view, content rather than saying it's got to not offend anybody in the world.
It's got to be for everybody.
Which is really hard.
So, same thing, you know, here in a social network, if your business model is advertising, it's going to drive you in one direction.
If your business model is membership, I think it drives you in a different direction.
I actually – and I said this to Elon about Twitter Blue, which I think wasn't rolled out well and so forth.
But it's like, hmm, the piece of that that I like is to say, look, actually, if there's a model –
where your revenue is coming from people who are willing to pay for the service, even if it's only part of your revenue.
If it's a substantial part, that does change your broader incentives to say, actually, are people going to be willing to pay for something that's actually just toxicity in their lives?
Now, I'm not sure it's been rolled out well.
I'm not sure how it's going.
And maybe I'm wrong about that as a plausible business model.
But I do think it's interesting to think about
Just in broad terms, business model drives outcomes in sometimes surprising ways unless you really pause to think about it.
I mean, it's a long conversation.
But to start with, one of the things that I always say is, it's a really hard problem.
So I concede that right up front.
I said this about, you know, the old ownership of Twitter and the new ownership of Twitter.
Because unlike Wikipedia, and this is true actually for all social media, there's a box, and the box basically says, what do you think?
What's on your mind?
You can write whatever the hell you want, right?
This is true, by the way, even for YouTube.
I mean, the box is to upload a video, but again, it's just like an open-ended invitation to express yourself.
And what makes that hard is some people have really toxic, really bad, you know, some people are very aggressive.
They're actually stalking.
They're actually, you know, abusive.
And suddenly you deal with a lot of problems.
Whereas at Wikipedia, there is no box that says what's on your mind.
There's a box that says, this is an entry about a specific topic.
Please be neutral.
Please cite your facts.
Then there's a talk page, which is not a place to come and rant about Donald Trump.
If you go on the talk page of the Donald Trump entry and you just start ranting about Donald Trump, people would say, what are you doing?
We're not here to discuss.
There's a whole world of the internet out there for you to go and rant about Donald Trump.
Well, also, on Wikipedia, people are going to say, stop.
And actually, are you here to tell us, like, how can we improve the article?
Or are you just here to rant about Trump?
Because that's not actually interesting.
So because the goal is different.
So that's just admitting and saying up front, this is a hard problem.
Certainly, I'm writing a book on trust.
So the idea is, you know, in the last 20 years, we've lost trust, you know, in all kinds of institutions and politics.
You know, the Edelman Trust Barometer Survey has been done for a long time.
And, you know, trust in politicians, trust in journalism, it's declined substantially.
And I think in many cases, deservedly.
So how do we restore trust and how do we think about that?
Trust in the idea of truth.
Even the concept of facts and truth is really, really important.
And the idea of uncomfortable truths is really important.
Now, so when we look at Twitter, right, and we say, we can see, okay, this is really hard.
So here's my story about Twitter.
It's a two-part story.
And it's all pre-Elon Musk ownership.
So many years back, somebody accused me of horrible crimes on Twitter.
And I, you know, like anybody would, I...
It's like, you know, I'm in the public eye.
People say bad things.
I don't really, you know, I brush it off, whatever.
But I'm like, this is actually really bad.
Like, accusing me of pedophilia, like, that's just not okay.
So I thought, I'm going to report this.
So I click report, and I report the tweet, and there's five others, and I report...
And I go through the process, and then I get an email that says, you know, whatever, a couple hours later, saying, thank you for your report.
We're looking into this.
Great, okay, good.
Then several hours further, I get an email back saying, sorry, we don't see anything here to violate our terms of use.
And I'm like, okay.
So I email Jack, and I say, Jack, come on, like, this is ridiculous.
And he emails back roughly saying, yeah, sorry, Jimmy, don't worry, we'll sort this out.
And I just thought to myself, you know what?
That's not the point, right?
I know Jack Dorsey.
I can email Jack Dorsey.
He'll listen to me because he's got an email from me and sorts it out for me.
What about the teenager who's being bullied and is getting abuse, right?
And getting accusations that aren't true.
Are they getting the same kind of like really poor result in that case?
So fast forward a few years, same thing happens.
The exact quote always goes, please help me.
I'm only 10 years old and Jimmy Wales raped me last week.
It's like, come on, fuck off.
Like, that's ridiculous.
I'm like, this time I'm reporting, but I'm thinking, well, we'll see what happens.
This one gets even worse because...
Then I get the same result, email back saying, sorry, we don't see any problems.
So I raise it with other members of the board who I know and Jack.
Like, this is really ridiculous.
Like, this is outrageous.
And some of the board members, friends of mine, sympathetic and so good for them.
But I actually got an email back then from the general counsel, head of trust and safety, saying, actually, there's nothing in this tweet that violates our terms of service.
We don't regard...
And the email gave reference to the Me Too movement, saying if we didn't allow accusations, that would undermine something as important as the Me Too movement.
And I was like, you know what?
Actually, if someone says I'm 10 years old and someone raped me last week, I think the advice should be, here's the phone number of the police.
Like, you need to get the police involved.
This isn't the place for that accusation.
So even back then, by the way, they did delete those tweets, but I mean, the rationale they gave is spammy behavior.
So completely separate from abusing me, it was just like, oh, well, they were retweeting too often.
So that's just broken.
That's a system that's not working for people in the public eye.
I'm sure it's not working for private people who get abuse.
Really horrible abuse can happen.
So how is that today?
Well, it hasn't happened to me since Elon took over, but I don't see why it couldn't.
And I suspect now if I send a report and email someone, there's no one there to email me back because he's gotten rid of a lot of the trust and safety staff.
So I suspect that problem is still really hard.
At huge scale, it's really something.
And I don't know the full answer to this.
I mean, a piece of it could be to say, actually, for making specific allegations of crimes, this isn't the place to do that.
We've got a huge database.
If you've got an accusation of crime, here's who you should call, the police, the FBI, whatever it is.
It's not to be done in public.
And then you do face really complicated questions about Me Too movement and people coming forward in public and all of that.
But again, it's like probably you should talk to a journalist, right?
Probably there are better avenues than just tweeting from an account that was created 10 days ago, obviously set up to abuse someone.
So I think they could do a lot better.
But I also admit it's a hard problem.
No, I mean, I remember another case that didn't bother me because it wasn't of that nature.
But somebody was saying, you know, I'm sure you're making millions off of Wikipedia.
And I'm like, no, actually, I don't even work there.
I have no salary.
And they're like, you're lying.
I'm going to check your 990 form, which is the U.S.
form for tax reporting for charities.
Yeah, here's the link.
Go read it and you'll see.
I'm listed as a board member and my salary is listed as zero.
So, you know, things like that.
It's like, okay, that one, that feels like you're wrong, but I can take that and we can have that debate quite quickly.
And again, it didn't go viral because it was kind of silly.
And if anything would have gone viral, it was me responding.
But that's one where it's like, actually, I'm happy to respond because a lot of people don't know that I don't work there and that I don't make millions and I'm not a billionaire.
Well, they must know that because it's in most news media about me.
But the other one I didn't respond to publicly because it's like –
Barbra Streisand effect.
It's like sometimes calling attention to someone who's abusing you who basically has no followers and so on is just a waste.
Yeah, I mean, it's actually...
An example, so we don't generally use my picture in the banners anymore on Wikipedia, but we did.
And then we did an experiment one year where we tried other people's pictures, so one of our developers.
And, you know, one guy, lovely, very sweet guy, and he doesn't look like your immediate thought of a nerdy Silicon Valley developer.
He looks like a heavy metal dude because he's cool.
And so suddenly here he is with long hair and tattoos, and there's his sort of say, here's what your money goes for.
Here's my letter asking for support.
And he got massive abuse over the Wikipedia banner, like people calling him creepy and, you know, really massive abuse.
And this was being shown to 80 million people a day.
His picture, not the abuse, right?
The abuse was elsewhere on the internet.
And he was bothered by it.
And I thought, you know what?
There is a difference.
I actually am in the public eye.
I get huge benefits from being in the public eye.
I go around and make public speeches.
Any random thing I think of, I can write and get it published in the New York Times and have this interesting life.
He's not a public figure.
And so, actually...
He wasn't mad at us.
It was just like, yeah, actually suddenly being thrust in the public eye and you get suddenly lots of abuse, which normally, you know, if you're a teenager and somebody in your class is abusing you, it's not going to go viral.
So it's going to be hurtful because it's local and it's your classmates or whatever.
But when sort of ordinary people go viral in some abusive way, it's really, really quite tragic.
I think you're right.
I think it's really hard to fix, because that problem isn't necessarily new.
Someone in high school who writes
graffiti that says Becky is a slut and spreads a rumor about what Becky did last weekend.
That's always been damaging.
It's always been hurtful.
And that's really hard.
I don't know enough about specifically how it's implemented to really have a very deep view.
But the uses I've seen of it, I've found quite good.
And in some cases, changed my mind.
You know, it's like I see something.
And of course, you know, the sort of human tendency is,
to retweet something that you hope is true or that you are afraid is true, or, you know, it's like that kind of quick mental action.
And then, you know, I saw something that I liked and agreed with, and then a community note under it that made me think, oh, actually,
This is a more nuanced issue.
I think that's really important.
Now, how is it specifically implemented?
I don't really know how they've done it, so I can't really comment on that.
But in general, I do think, you know, when your only mechanisms on Twitter, and you're a big Twitter user, you know the platform and you've got plenty of followers and all of that, when the only mechanisms are retweeting,
replying, blocking.
It's a pretty limited scope, and it's kind of good if there's a way to elevate a specific thoughtful response.
And it kind of goes to, again, like, does the algorithm just pick the retweet or... I mean, with retweeting, it's not even the algorithm that makes it viral.
Like, you know, Paulo Coelho, a very famous author, I think he's got, like, I don't know, I haven't looked lately.
He used to have 8 million Twitter followers.
I think I looked, he's got 16 million now or whatever.
Well, if he retweets something, it's going to get seen a lot.
Or Elon Musk, if he retweets something, it's going to get seen a lot.
That's not an algorithm.
That's just the way the platform works.
So it is kind of nice if you have something else and how that something else is designed.
That's obviously,
complicated question.
I've been saying for a long time, if I went on Facebook one morning and they said, oh, we're testing a new option,
Rather than showing you things we think you're going to like, we want to show you some things that we think you will disagree with, but which we have some signals suggesting are of quality.
Like, now that sounds interesting.
Yeah, that sounds really interesting.
I want to see something where, you know, like, oh, I don't agree with it.
So Larry Lessig is a good friend of mine, founder of Creative Commons, and he's moved on to doing stuff about corruption in politics and so on.
And I don't always agree with Larry, right?
But I always grapple with Larry because he's so interesting and he's so thoughtful that even when we don't agree, I'm like, actually, I want to hear him out, right?
Because I'm going to learn from it.
And that doesn't mean I always come around to agreeing with him, but I'm going to understand a perspective on it.
And that's really great feeling.
It's even an odd thing to ask you about, because you have quite a wide range of long conversations with a very diverse bunch of people.
Yeah, I talk sometimes about how people assume that the big debates in Wikipedia or the sort of arguments are between the party of the left and the party of the right.
And I say, no, it's actually the party of the kind and thoughtful and the party of the jerks is really it.
I mean, left and right, like, yeah, bring me somebody I disagree with politically.
As long as they're thoughtful, kind, we're going to have a real discussion.
I give an example of...
our article on abortion.
So, you know, if you can bring together a kind and thoughtful Catholic priest and a kind and thoughtful Planned Parenthood activist, and they're going to work together on the article on abortion, that can be a really great thing, if they're both kind and thoughtful.
Like, that's the important part.
They're never going to agree on the topic, but they will understand, okay, like, Wikipedia is not going to take a side, but Wikipedia is going to explain what the debate is about, and we're going to try to characterize it fairly effectively.
And it turns out, like, you're kind and thoughtful people, even if they're quite ideological.
Like, a Catholic priest is generally going to be quite ideological on the subject of abortion.
But they can grapple with ideas, and they can discuss, and they may feel very proud of the entry at the end of the day.
Not because they suppress the other side's views, but because they think...
The case has been stated very well that other people can come to understand it.
And if you're highly ideological, you assume, I think naturally, if people understood as much about this as I do, they'll probably agree with me.
You may be wrong about that, but that's often the case.
So that's where, you know, that's what I think we need to encourage more of in society generally is grappling with ideas in a really, you know, thoughtful way.
Yeah, I think so.
And I think if you read the article, it's pretty good.
And I think a piece of that is within our community, if people have the self-awareness to understand.
So I personally wouldn't go and edit the entry on Donald Trump.
I get emotional about it, and I'm like, I'm not good at this.
And if I tried to do it, I would fail.
I wouldn't be a good Wikipedian.
So it's better if I just step back and let people who are more dispassionate on this topic edit it.
Whereas there are other topics that are incredibly emotional to some people where I can actually do quite well.
Like I'm going to be okay.
Maybe we were discussing earlier the efficacy of masks.
I'm like, oh, I think that's an interesting problem, and I don't know the answer, but I can help kind of catalog what's the best evidence and so on.
I'm not going to get upset.
I'm not going to get angry.
I'm able to be a good Wikipedian.
So I think that's important.
And I do think, though, in a related framework, that –
The composition of the community is really important, not because Wikipedia is or should be a battleground, but because of blind spots.
Like maybe I don't even realize what's biased if I'm particularly of a certain point of view and I've never thought much about it.
So one of the things we focus on a lot, the Wikipedia volunteers are –
We don't know the exact number, but let's say 80% plus male.
And they're of a certain demographic.
They tend to be college-educated, heavier on tech geeks than not, et cetera, et cetera.
So there is a demographic to the community, and that's pretty much global.
I mean, somebody said to me once, why is it only white men who edit Wikipedia?
And I said, you've obviously not met the Japanese Wikipedia community.
It's kind of a joke, but the broader principle still stands.
Who edits Japanese Wikipedia?
A bunch of geeky men, right?
And women as well.
So we do have women in the community and that's very important, but we do think, okay, you know what?
That does lead to some problems.
It leads to some content issues simply because people write more about what they know and what they're interested in.
They'll tend to be dismissive of things as being unimportant if it's not something that they personally have an interest in.
I like the example as a parent.
I would say our entries on early childhood development probably aren't as good as they should be.
Because a lot of the Wikipedia volunteers – actually, we're getting older, the Wikipedians, so the demographic has changed a bit.
But, you know, it's like if you've got a bunch of 25-year-old tech geek dudes who don't have kids, they're just not going to be interested in early childhood development.
And if they tried to write about it, they probably wouldn't do a good job because they don't know anything about it.
And somebody did a look at our entries on novelists who've won a major literary prize.
And they looked at the male novelist versus the female.
And the male novelist had longer and higher quality entries.
Well, it's not because, and I know hundreds of Wikipedians, it's not because these are a bunch of biased, sexist men who are like, books by women are not important. It's like, no, actually, there is a gender kind of breakdown of readership. There are books...
Like hard science fiction is a classic example.
Hard science fiction, mostly read by men.
Other types of novels, more read by women.
And if we don't have women in the community, then these award-winning, clearly important novelists may have less coverage.
And not because anybody consciously thinks, oh, we don't, like what, a book by Maya Angelou, like who cares?
She's a poet, like that's not interesting.
No, but just because, well, people write what they know.
They write what they're interested in.
So we do think diversity in the community is really important.
And that's one area where I do think it's really clear.
But I can also say, you know what?
Actually, that also applies in the political sphere.
Like to say, actually, we do want kind and thoughtful Catholic priests, kind and thoughtful conservatives, kind and thoughtful libertarians, kind and thoughtful Marxists, you know, to come in.
But the key is the kind and thoughtful piece.
So when people sometimes come to Wikipedia –
outraged by some dramatic thing that's happened on Twitter, they come to Wikipedia with a chip on their shoulder ready to do battle, and it just doesn't work out very well.
Yeah, we think that's really important.
And so oftentimes people come in and there's a lot –
When I talk about community health, one of the aspects of that that we do think about a lot, that I think about a lot, is not about politics.
It's just like, how are we treating newcomers to the community?
And so I can tell you what our ideals are, what our philosophy is.
Um, but do we live up to that?
So, you know, the ideal is you come to Wikipedia, you know, we have, uh, rules, like one of our fundamental rules is ignore all rules, which is partly written that way because it kind of piques people's attention.
Like, what the hell kind of rule is that?
You know, but basically says, look, don't get nervous and depressed about a bunch of, you know, what's the formatting of your footnote, right?
So you shouldn't come to Wikipedia and
add a link and then get banned or yelled at because it's not the right format.
Instead, somebody should go, oh, hey, thanks for helping, but here's the link to how to format.
If you want to keep going, you might want to learn how to format a footnote.
And to be friendly and to be open and to say, oh, right, oh, you're new and you clearly don't know everything about Wikipedia.
And, you know, sometimes in any community that can be quite hard.
So people come in and they've got a great big idea and they're going to propose this to the Wikipedia community, and they have no idea that it's basically a perennial discussion we've had 7,000 times before.
And so then, ideally, you would say to the person, oh, yeah, great, thanks.
Like, a lot of people have...
And here's where we got to, and here's the nuanced conversation we've had about that in the past that I think you'll find interesting.
And sometimes people are just like, oh God, another one, you know, who's come in with this idea, which doesn't work and they don't understand why.
And that's kind of human, you know, but I think it just does require really thinking, you know, in a self-aware manner of like, oh, I was once a newbie.
Actually, we do have a great example. I just did an interview with Emily Temple-Wood, who was Wikipedian of the Year.
She's just like a great, well-known Wikipedian.
And I interviewed her for my book, and she told me something I never knew.
Apparently it's not a secret, like it's not something she revealed only to me, but when she started on Wikipedia, she was a vandal.
She came in and vandalized Wikipedia.
And then basically what happened was she'd done some sort of vandalized a couple of articles.
And then somebody popped up on her talk page and said, hey, like, why are you doing this?
Like, we're trying to make an encyclopedia here.
And this wasn't very kind.
And she felt so bad.
She's like, oh, right.
I didn't really think of it that way.
She just was coming in as she was like 13 years old, combative and, you know, like having fun and trolling a bit.
And then she's like, oh, actually, oh, I see your point and became a great Wikipedian.
So that's the ideal, really, is that you don't just go troll, block, fuck off.
You go, hey, you know, what's going on? Which is, I think, the way we tend to treat things in real life.
You know, if you've got somebody who's doing something obnoxious in your friend group, you probably go, hey, like, really, I don't know if you've noticed, but I think this person is actually quite hurt that you keep making that joke about them.
And then they usually go, oh, you know what?
I didn't, I thought that was okay.
And then they stop.
Or they keep it up and then everybody goes, well, you're the asshole.
No, I once was at Wikimania, our annual conference, and people come from all around the world, like really active volunteers.
I was at the dinner.
We were at Wikimania in Alexandria, Egypt, at the sort of closing dinner or whatever.
And a friend of mine came and sat at the table, and she's sort of been in the movement more broadly, Creative Commons.
She's not really a Wikipedian.
She'd come to the conference because she's into Creative Commons and all that.
So we have dinner and it just turned out, I sat down at the table with most of the members of the English language arbitration committee.
And they're a bunch of very sweet, geeky Wikipedians.
And as we left the table, I said to her, it's really like, I still find this kind of sense of amazement.
Like we just had dinner with some of the most powerful people in English language media.
Because they're the people who are like the final court of appeal in English Wikipedia.
And thank goodness they're not media moguls, right?
They're just a bunch of geeks who are just like well liked in the community because they're kind and they're thoughtful and they really, you know, sort of think about things.
I was like, this is great.
If you've never heard of this or looked into it, you'll enjoy it.
I read something recently that I didn't even know about, but like...
the fundamental time zones, and they change from time to time.
Sometimes a country will pass daylight savings or move it by a week, whatever.
There's a file that's on all sort of Unix-based computers, and basically all computers end up using this file.
It's the official time zone file, but why is it official?
It's just this one guy.
It's like this guy and a group, a community around him, and basically something weird happened
and it broke something because he was on vacation.
And I'm just like, isn't that wild, right?
That you would think, I mean, first of all, most people never even think about like, how do computers know about time zones?
Well, they know because they just use this file, which tells all the time zones and which dates they change and all of that.
But there's this one guy and he doesn't get paid for it.
It's just, he's like, you know, with all the billions of people on the planet, he sort of put his hand up and goes, yo, I'll take care of the time zones.
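For anyone who hasn't bumped into this, the file he's describing is the IANA time zone database, often called tzdata, and most software reads it indirectly. A minimal sketch in Python, assuming a standard setup where the zoneinfo module can find the system's tzdata:

```python
# Minimal sketch: the IANA time zone database ("tzdata") is what answers
# "what is the local time and UTC offset in this place right now?"
# Python's zoneinfo reads it from the system on most Unix-like machines.
from datetime import datetime
from zoneinfo import ZoneInfo

for zone in ("Europe/London", "America/Chicago", "Asia/Tokyo"):
    now = datetime.now(ZoneInfo(zone))
    print(zone, now.strftime("%Y-%m-%d %H:%M %Z (UTC%z)"))
```

When the maintainers publish an update, say a country moving its daylight saving date, the output changes with no change to this code, which is exactly why one stale or missed update can quietly break things downstream.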
I met a guy many years ago, lovely, really sweet guy.
And he was running a bot on English Wikipedia that I thought, wow, that's actually super clever.
And what he had done is...
His bot was like spell checking.
But rather than simple spell checking, what he had done is create a database of words that are commonly mistaken for other words.
They're spelled wrong, so I can't even give an example.
So people often spell a word wrong, but no spell checker catches it because the misspelling is itself another valid word.
And so what he did is he wrote a bot that looks for these words and then checks the sentence around it for certain keywords.
So in some context...
This isn't correct, but buoy and boy.
People sometimes type B-O-Y when they mean B-U-O-Y.
So if he sees the word boy, B-O-Y, in an article, he would look in the context and see, is this a nautical reference?
And if it was, he didn't autocorrect, he just would flag it up to himself to go, oh, check this one out.
And that's not a great example, but he had thousands of examples.
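Just to make the idea concrete, here is a toy sketch of the approach he's describing; it is not his actual bot, and the word list and context keywords are made up for illustration:

```python
import re

# Toy sketch of a context-aware "confusable words" checker: each entry maps a
# correctly spelled word to the word that was probably intended, plus context
# keywords that make the mix-up likely. It only flags; it never autocorrects.
CONFUSABLES = {
    "boy": {"suggest": "buoy", "context": {"sea", "harbor", "mooring", "channel", "nautical"}},
    "principal": {"suggest": "principle", "context": {"moral", "ethical", "theory"}},
}

def flag_suspects(text):
    words = re.findall(r"[a-z']+", text.lower())
    for i, word in enumerate(words):
        entry = CONFUSABLES.get(word)
        if not entry:
            continue
        nearby = set(words[max(0, i - 10): i + 10])  # surrounding words as context
        if nearby & entry["context"]:
            yield word, entry["suggest"]

sample = "The boy was anchored in the harbor to mark the deep channel."
for found, suggestion in flag_suspects(sample):
    print(f"Check this one: '{found}' may have been meant as '{suggestion}'")
```

The real bot presumably had far more sophisticated matching, but the flag-for-human-review step, rather than autocorrecting, is the part that matters.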
I was like, that's amazing.
Like I would have never thought to do that.
And I'm glad that somebody did.
And that's also part of the openness of the system.
And also I think being a charity, being, you know, this idea of like, actually this is a gift to the world that makes someone go, oh, oh, well I'll put my hand up.
Like I see a little piece of things that I can make better because I'm a good programmer and I can write this script to do this thing and I'll find it fun.
So I think most people know this, but we're a charity.
So in the U.S., you know, registered as a charity.
And we don't have any ads on the site.
And the vast majority of the money is from...
donations, but the vast majority from small donors.
So people giving 25 bucks or whatever.
And we have, you know, millions of donors every year, but it's like a small percentage of people.
I would say in the early days, a big part of it was aesthetic almost.
As much as anything else, it was just like, I just don't really want ads in Wikipedia. I just think there are a lot of reasons why it might not be good. And even back then, I didn't think as much as I have since about how a business model can tend to drive you in a certain place, and really thinking that through in advance is really important. Because you might say,
Yeah, we're really, really keen on community control and neutrality.
But if we had an advertising-based business model, probably that would begin to erode.
Even if I believe in it very strongly, organizations tend to follow the money; it gets into the DNA in the long run.
And so things like – I mean, it's easy to think about some of the immediate problems.
So, like, if you go to read about –
I don't know, Nissan car company.
And if you saw an ad for the new Nissan at the top of the page, you might be like, did they pay for this?
Or like, do the advertisers have influence over the content?
Because you kind of wonder about that for all kinds of media.
And that undermines trust.
Undermines trust, right?
But also things like,
We don't have clickbait headlines in Wikipedia.
You've never seen Wikipedia entries with all these kinds of listicles, sort of the 10 funniest cat pictures, number seven will make you cry.
None of that kind of stuff, because there's no incentive, no reason to do that.
Also, there's no reason to have an algorithm –
to say, actually, we're going to use our algorithm to drive you to stay on the website longer.
We're going to use the algorithm to drive you to, you know, it's like, oh, you're reading about Queen Victoria.
There's nothing to sell you when you're reading about Queen Victoria.
Let's move you on to Las Vegas because actually the ad revenue around hotels in Las Vegas is quite good.
So we don't have that sort of thing.
There's no incentive for the organization to go, oh, let's move people around to things that have better ad revenue.
Instead, it's just like, oh, well, what's most interesting to the community?
Just to make those links.
So that decision just seemed obvious to me.
But as I say, it was less of a business decision and more of an aesthetic.
It's like, oh, this is how I like Wikipedia.
It doesn't have ads.
I don't really want...
You know, in these early days, like a lot of the ads, that was well before the era of really quality ad targeting and all that.
So you get a lot of banners, punch the monkey ads and all that kind of nonsense.
And so, you know, there was no guarantee.
It was not really clear how we could fund this.
It was pretty cheap.
It still is quite cheap compared to most... We don't have 100,000 employees and all of that.
But would we be able to raise money through donations?
And so I remember...
The first time that we really did a donation campaign was on a Christmas day in 2003, I think it was.
We had three servers, a database server and two front-end servers, and they were all the same size or whatever.
And two of them crashed.
Like, I don't even remember now what it was, the hard drives or something.
It was like, it was Christmas day.
So I scrambled on Christmas day to sort of go onto the database server, which fortunately survived, and have it become a front-end server as well.
And then the site was really slow and it wasn't working very well.
And I was like, okay, it's time.
We need to do a fundraiser.
And so I was hoping to raise $20,000 in a month's time, but we raised nearly $30,000 within two, three weeks' time.
So that was the first proof point of like, oh, we put a banner up and people will donate.
We just explained we needed the money, and people, even though we were very small back then, were like, oh, yeah, I love this, I want to contribute.
Then over the years, we've...
become more sophisticated about the fundraising campaigns and we've tested a lot of different messaging and so forth.
What we used to think, you know, I remember one year we really went heavy with: we have great ambitions, you know, the idea of Wikipedia is a free encyclopedia for every single person on the planet.
So what about the languages of Sub-Saharan Africa?
So I thought, okay, we're trying to raise money.
We need to talk about that because it's really important and near and dear to my heart.
And just instinctively knowing nothing about charity fundraising, you see it all around.
It's like, oh, charities always mention like the poor people they're helping.
So let's talk about that.
Didn't really work as well.
The pitch that, like this is very vague and very sort of broad, but the pitch that works better than any other in general is,
A fairness pitch of like, you use it all the time.
You should probably chip in.
And most people are like, yeah, you know what?
My life would suck without Wikipedia.
I use it constantly.
And whatever, I should chip in.
Like, it just seems like the right thing to do.
And there's many variants on that, obviously.
And that's really – it works.
And, like, people are like, oh, yeah, like Wikipedia.
I love Wikipedia.
And, you know, I should.
And so sometimes people say, you know, why are you always begging for money on the website?
And, you know, it's not that often.
It's not that much.
But it does happen.
They're like, why don't you just get Google and Facebook and Microsoft?
Why don't they pay for it?
And I'm like, I don't think that's really the right answer.
Influence starts to creep in.
Influence starts to creep in and questions start to creep in.
Like the best funding for Wikipedia is the small donors.
We also have major donors, right?
We have high net worth people who donate.
But we always are very careful about that sort of thing to say, wow, that's really great and really important.
But we can't let that become influence because that would just be really quite bad.
Yeah, not good for Wikipedia.
Well, you know, the Guardian newspaper has a similar model, which is they have ads, but they also – there's no paywall, but they just encourage people to donate –
And they do that.
Like I've sometimes seen a banner saying, oh, this is your 134th article you've read this year.
Would you like to donate?
And I think that's, I think it's effective.
I mean, they're testing.
But also I wonder just for some people if they just don't feel like guilty and then think, well, I shouldn't bother them so much.
It's a good question.
I don't know the answer.
I gave her a media contributor of the year award this year.
Cause she's so great.
Wikipedia is so fun.
Yeah, that's right.
And I also like Stack Overflow, although I wonder what you think of this.
So I only program for fun as a hobby, and I don't have enough time to do it.
But I do, and I'm not very good at it.
So therefore, I end up on Stack Overflow quite a lot, trying to figure out what's gone wrong.
And I have really transitioned to using ChatGPT much more for that, because I can often find the answer clearly explained,
And it just works better than sifting through threads.
And I kind of feel bad about that because I do love Stack Overflow and their community.
I mean, I'm assuming – I haven't read anything in the news about it.
I'm assuming they are keenly aware of this and they're thinking about how can we sort of use this chunk of knowledge that we've got here and provide a new type of interface where you can query it with a question and actually get –
an answer that's based on the answers that we've had.
And so we are similar in that regard.
Obviously, all the things we've talked about, like ChatGPT makes stuff up and it makes up references.
So our community has already put into place some policies about it.
But roughly speaking, there's always more nuance.
But roughly speaking, it's sort of like you, the human, are responsible for what you put into Wikipedia.
So if you use ChatGPT, you better check it.
Because there's a lot of great use cases of, you know, like, oh, well, I'm...
I'm not a native speaker of German, but I kind of am pretty good.
I'm not talking about myself, a hypothetical me that's pretty good.
And I kind of just want to run my edit through ChatGPT in German to go make sure my grammar's okay.
That's actually cool.
Yeah, no, no, that's completely fine.
I mean, part of it is our ethos has always been, here's our gift to the world, make something.
So if the knowledge is more accessible to people, even if they're not coming through us, that's fine.
Now, obviously, we do have certain business model concerns, right?
And where we've had more conversation about this, this whole GPT thing is new.
Things like, if you ask Alexa, you know, what is the Eiffel Tower?
And she reads you the first two sentences from Wikipedia and doesn't say it's from Wikipedia.
(and they've recently started citing Wikipedia), then we worry, like, oh, if people don't know they're getting the knowledge from us, are they going to donate money?
Or do they just think, oh, what's Wikipedia for?
I can just ask Alexa.
It's like, well, Alexa only knows anything because she read Wikipedia.
So we do think about that, but it doesn't bother me in the sense of like, oh, I want people to always come to Wikipedia first.
But we're also, you know, had a great demo, like literally just hacked together over a weekend by
our head of machine learning, where he did this little thing to say, you could ask any question.
And he was just knocking it together.
So he used OpenAI's API just to make a demo, asking a question, why do ducks fly south for winter?
Which is the kind of thing you think, oh, I might just Google for that.
I might start looking in Wikipedia.
And so what he did is he asked ChatGPT, what are some Wikipedia entries that might answer this?
Then he grabbed those Wikipedia entries, said, here's some Wikipedia entries.
Answer this question based only on the information in this.
And he had pretty good results.
And it kind of prevented it from making stuff up.
It was just something he hacked together over a weekend.
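As a rough illustration of that retrieval-grounded pattern (this is my own sketch, not the foundation's demo; the model name, the prompts, and the use of Wikipedia's public REST summary endpoint are all assumptions), the shape of it is something like:

```python
# Sketch of the grounding pattern described above: ask the model for likely
# Wikipedia entries, fetch those entries, then answer using only that text.
import requests
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # illustrative model name

def wikipedia_summary(title):
    # Wikipedia's public REST endpoint returning a short plain-text extract.
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title.replace(' ', '_')}"
    resp = requests.get(url, headers={"User-Agent": "grounding-demo"})
    return resp.json().get("extract", "") if resp.ok else ""

def grounded_answer(question):
    # Step 1: ask which articles are likely to contain the answer.
    titles = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
                   f"List 3 English Wikipedia article titles likely to answer: {question}. One per line, titles only."}],
    ).choices[0].message.content.splitlines()

    # Step 2: fetch the entries and restrict the model to that text.
    context = "\n\n".join(wikipedia_summary(t.strip()) for t in titles if t.strip())
    return client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
                   f"Answer using ONLY the text below. If the answer is not there, say so.\n\n{context}\n\nQuestion: {question}"}],
    ).choices[0].message.content

print(grounded_answer("Why do ducks fly south for the winter?"))
```

The restriction in step 2 is what did the work in his demo: the model can still phrase things fluently, but it has much less room to invent facts or references.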
But what it made me think about was, oh, okay, so now we've got this huge body of knowledge.
That in many cases, you're like, oh, I'm really, I want to know about Queen Victoria.
I'm just going to go read the Wikipedia entry and it's going to take me through her life and so forth.
But other times you've got a specific question and maybe we could have a better search experience where you can come to Wikipedia, ask your specific question, get your specific answer that's from Wikipedia, including links to the articles you might want to read next.
And that's just a step forward.
Like that's just using a new type of technology to make the extraction of information from this body of text into my brain faster and easier.
So I think that's kind of cool.
You pulled in... And you need somebody to have filtered through that and sort of tried to knock off the rough edges.
No, it's very, I think that's exactly right.
And I think, you know, I think that kind of grounding is, I think they're working really hard on it.
I think that's really important.
And that actually, when I, so if you asked me to step back and be like very business-like about our business model and where's it going to go for us, and are we going to lose half our donations because everybody's just going to stop coming to Wikipedia and go to ChatGPT, I think grounding will help a lot.
because frankly, most questions people have, if they provide proper links, we're gonna be at the top of that just like we are in Google.
So we're still gonna get tons of recognition and tons of traffic just from, even if it's just the moral properness of saying,
here's my source.
So I think we're going to be all right in that.
...or the co-founder of Wikipedia. Ironic, absurd, interesting, important? What are your comments? So I would say unimportant, not that interesting. I mean, one of the things that people are sometimes surprised to hear me say is, I actually think Larry Sanger doesn't get enough credit for his early work on Wikipedia, even though I think co-founder is not the right title for that.
So, you know, like, he had a lot of impact and a lot of great work, and I disagree with him about a lot of things since and all that, and that's fine.
So, yeah, no, to me, that's like, it's one of these things that the media love a falling out story, so they want to make a big deal out of it, and I'm just like...
Yeah, definitely.
Yeah, I mean, just straight up, I disagree.
Like, go and read any Wikipedia entry on a controversial topic, and what you'll see is...
a really diligent effort to explain all the relevant sides.
So yeah, just disagree.
Yeah, no, I mean, for sure.
Like, to take this area of discussion seriously is to say, yeah, you know what, actually, that is a big part of what Wikipedians spend their time grappling with, is to say, you know, how do we figure out whether a...
less popular view is pseudoscience?
Is it just a less popular view that's gaining acceptance in the mainstream?
Is it fringe versus crackpot, et cetera, et cetera?
And that debate is what you've got to do.
There's no choice about having that debate, of grappling with something.
And I think we do.
And I think that's really important.
And I think if anybody said to the community,
gee, you should stop, you know, sort of covering minority viewpoints on this issue,
I think they would say, I don't even understand why you would say that.
Like, we have to sort of grapple with minority viewpoints in science and politics and so on.
But it's, and like, this is one of the reasons why, you know, there is no magic simple answer to all these things.
It's case by case.
It's like, you know, you've got to really say, okay, what is the context here?
How do you do it?
And you've always got to be open to correction and to change and to sort of challenge and always be sort of serious about that.
I can give a really good example, which is there was this sort of dust-up about
the definition of recession in Wikipedia.
So the accusation was, and the accusation was often quite ridiculous and extreme, that under pressure from the Biden administration, Wikipedia changed the definition of recession to make Biden look good.
Or we did it not under pressure, but because we're,
a bunch of lunatic leftists and so on.
And then, you know, when I see something like that in the press, I'm like, oh dear, like what's happened here?
How did we do that?
Because I always just accept things for five seconds first.
And then I go and I look and I'm like, you know what?
That's literally completely not what happened.
What happened was one editor thought the article needed restructuring.
So the article has always said the traditional kind of loose definition of recession is two quarters of negative growth.
But there's always been within economics, within important agencies in different countries around the world, a lot of nuance around that.
And there's other like factors that go into it and so forth.
And it's just an interesting, complicated topic.
And so the article has always had the definition of two quarters.
And the only thing that really changed was moving that from the lead, from the top paragraph to further down.
And then news stories appeared saying, Wikipedia has changed the definition of recession.
And then we got a huge rush of trolls coming in.
So the article was temporarily protected.
I think only semi-protected.
And people were told, go to the talk page to discuss.
So it was a dust-up that, you know, when you look at it as a Wikipedian, you're like, oh, this is a really routine kind of editorial debate.
Another example, which unfortunately our friend Elon fell for, I would say, is...
The Twitter files.
So there was an article called The Twitter Files, which is about these files that were released once Elon took control of Twitter and he released internal documents.
And what happened was somebody nominated it for deletion, but even the nomination said, this is actually...
This is mainly about the Hunter Biden laptop controversy.
Shouldn't this information be there instead?
So anyone can, like, it takes exactly one human being anywhere on the planet to propose something for deletion.
And that triggers a process where people discuss it, which, within a few hours, was what we call snowball closed, i.e., this doesn't have a snowball's chance in hell of passing.
So an admin goes, yeah, wrong.
and closed the debate, and that was it.
That was the whole thing that happened.
And so nobody proposed suppressing the information.
Nobody proposed it wasn't important.
It was just, like, editorially boring internal questions.
And, you know, so sometimes people read stuff like that, and they're like, oh, you see?
Look at these leftists.
They're trying to suppress the truth again.
It's like, well, slow down a second and come and look.
Like, literally, it's not what happened.
Yeah, yeah, yeah.
It sounds really...
enticing and intriguing and surprising to most people, because they're like, I'm reading Wikipedia, it doesn't seem like a crackpot leftist website. It seems pretty kind of dull, really, in its own geeky way. Well, that's what makes a good story. It's like, oh, am I being misled? Is there a shadowy cabal of Jimmy Wales? You know, generally I read political stuff. I mentioned to you that I'm traveling to...
A little bit, but not too much.
No, I think we always have to challenge ourselves of like, what do I potentially have wrong?
So we're super hardcore on this.
We've never bowed down to government pressure anywhere in the world, and we never will.
And we understand that we're hardcore.
And actually, there is a bit of nuance about how different companies respond to this, but our response has always been just to say no.
And if they threaten to block us?
Well, knock yourself out.
You're going to lose Wikipedia.
And that's been very successful for us as a strategy because governments know they can't just casually threaten to block Wikipedia or block us for two days and we're going to cave in immediately to get back into the market.
And that's what a lot of companies have done, and I don't think that's good.
We can go one level deeper and say, I'm actually quite sympathetic.
Like, if you have staff members in a certain country and they are at physical risk, you've got to put that into your equation.
So I understand that.
Like, if Elon said, actually, I've got 100 staff members on the ground in such and such a country, and if we don't comply, somebody's going to get arrested and it could be quite serious –
Okay, that's a tough one, right?
That's actually really hard.
And then the FBI one, no.
The criticism I saw, I kind of prepared for this because I saw people responding to your request for questions.
And I was like...
Somebody's like, oh, don't you think it was really bad that you da-da-da-da-da?
And I said, I actually reached out to staff and said, can you just make sure I've got my facts right?
And the answer is, we received zero requests of any kind from the FBI or any of the other government agencies for any changes to content in Wikipedia.
And had we received those requests at the level of the Wikimedia Foundation, we would have said...
It's not our – like we can't do anything because Wikipedia is written by the community.
And so the Wikimedia Foundation can't change the content of Wikipedia without causing – I mean, God, that would be a massive controversy.
You can't even imagine.
What we did do – and this is what I've done.
I've been to China and met with the Minister of Propaganda.
We've had discussions with governments all around the world.
Not because we want to do their bidding, but because we don't want to do their bidding, but we also don't want to be blocked.
And we think actually having these conversations are really important.
Now, there's no threat of being blocked in the U.S.
Like, that's just never going to happen.
There is the First Amendment.
But in other countries around the world, it's like, okay...
What are you upset about?
Let's have the conversation.
Like, let's understand.
And let's have a dialogue about it so that you can understand where we come from and what we're doing and why.
And then, you know, sometimes it's like, gee, like, if somebody complains that something's bad in Wikipedia, whoever they are, don't care who they are, could be...
you, could be the government, could be the Pope, I don't care who they are.
It's like, oh, okay, well, our responsibility as Wikipedia is to go, oh, hold on, let's check, right?
Is that right or wrong?
Is there something that we've got wrong in Wikipedia?
Not because you're threatening to block us, but because we want Wikipedia to be correct.
So we do have these dialogues with people.
And, you know, a big part of, like, what was going on with...
you might call it pressure on social media companies or dialogue with, depending on, you know, as we talked earlier, grapple with the language, depending on what your view is.
In our case, it was really just about, oh, okay, right, they want to have a dialogue about COVID information, misinformation.
We're this enormous source of information, which the world depends on,
we're gonna have that conversation, right?
We're happy to say, here's, you know, if they say, how do you know that Wikipedia is not gonna be pushing some crazy anti-vax narrative?
First, I mean, I think it's somewhat inappropriate for a government to be asking pointed questions in a way that implies
possible penalties.
I'm not sure that ever happened because we would just go, I don't know, the Chinese blocked us.
And so it goes, right?
We're not going to cave into any kind of government pressure, but whatever the appropriateness of what they were doing, I think there is a role for government in just saying, let's understand the information ecosystem.
Let's think about the problem of misinformation, disinformation in society, particularly around election security, all these kinds of things.
So, you know, I think it would be irresponsible of us to get a call from a government agency and say, yeah, why don't you just fuck off?
You're the government.
But it would also be irresponsible to go, oh, dear, government agency is not happy.
Let's fix Wikipedia so the FBI loves us.
Well, it's actually important to say, like, whatever the Wikimedia Foundation thinks has no impact on what's in Wikipedia.
So it's more about saying to them, right, we understand you're the World Health Organization or whoever, and part of your job, public health, is about communications.
You want to understand the world.
So it's more about, oh, well, let's explain how Wikipedia works.
Yeah, yeah, exactly.
Well, I think in many cases, and this goes back to my topic of trust,
So there were definitely cases of public officials, public organizations, where I felt like they lost the trust of the public because they didn't trust the public.
And so the idea is like, we really need people to take this seriously and take action, so
we're going to put out some overblown claims because it's going to scare people into behaving correctly.
That might work for a little while, but it doesn't work in the long run, because suddenly people go from a default stance of, like, the Centers for Disease Control is a very well-respected scientific organization, sort of, I don't know, they've got the
vault in Atlanta with the last vial of smallpox or whatever it is that people think about them, and, oh right, these are scientists we should actually take seriously and listen to, and they're not politicized. And if you put out statements, I don't know if the CDC did, but a health organization, whoever, that are provably false, and also provably you kind of knew they were false, but you did it to scare people because you wanted them to do the right thing...
It's like, no, you know what?
That's not going to work in the long run.
Like, you're going to lose people.
And now you've got a bigger problem, which is a lack of trust in science, a lack of trust in authorities who are, you know, by and large, they're like quite boring government bureaucrat scientists who just are trying to help the world.
Well, it's interesting because, as I say, I live in the UK, and I think all these things are a little less politicized there.
And I haven't paid close enough attention to –
Fauci to have a really strong view.
I'm sure I would disagree with some things.
I definitely, you know, I remember hearing at the beginning of the pandemic, as I'm unwrapping my Amazon package with the masks I bought, because I heard there's a pandemic and I just was like, I want some N95 mask, please.
And they were saying, don't buy masks.
And the motivation was because they didn't want there to be shortages in hospitals.
But there were also statements that masks are not effective and won't help you.
And then the complete about face to you're ridiculous if you're not wearing them.
You know, it's just like, no, like that about face just lost people from day one.
Yeah, this is exactly what, you know, I think this is where the Wikipedia neutral point of view is and should be an ideal.
And obviously not every article lives up to everything we could do, you know me now and you know how I am about these things.
But like ideally it's to say, look,
We're happy to show you all the perspectives.
This is Planned Parenthood's view, and this is Catholic Church view, and we're going to explain that, and we're going to try to be thoughtful and put in the best arguments from all sides, because I trust you.
Like, you read that, and you're going to be more educated, and you're going to begin to make a decision.
I mean, I can just talk in the U.K.,
The government, da-da-da-da.
When we found out in the UK that very high-level government officials were not following the rules they had put on everyone else, I moved from – I had just become a UK citizen just a little while before the pandemic.
And, you know, it's kind of emotional.
Like, you get a passport in a new country, and you feel quite good.
And I did my oath to the Queen, and then they –
dragged the poor old lady out to tell us all to be good.
And I was like, we're British and we're going to do the right things.
And, and, you know, it's going to be tough, but we're going to, you know, so you have that kind of Dunkirk spirit moment.
And you're like following the rules to a T. And then suddenly it's like, well, they're not following the rules.
And so suddenly I shifted personally from I'm going to follow the rules, even if I don't completely agree with them.
I'll still follow, because I think we've all got to chip in together, to, like, you know what?
I'm going to make wise and thoughtful decisions for myself and my family.
And that generally is going to mean following the rules, but basically, you know, at certain moments in time the rule was you're not allowed to be in an outside space unless you're exercising.
I'm like, I think I can sit in a park and read a book.
Like, it's going to be fine.
Like, that's your rational rule, which I would have been following just personally of like, I'm just going to do the right thing.
And I think you see some of the ramifications of this.
There's always been pretty anti-science, anti-vax people.
That's always been a thing, but I feel like it's bigger now simply because of that lowering of trust.
So a lot of people, yeah, maybe it's like you say, a lot of people are like, yeah, I got vaccinated and I really don't want to talk about this because it's so toxic.
And that's unfortunate because I think people should say, what an amazing thing.
There's also a whole range of discourse around if this were a disease that was primarily killing babies, I think people's emotions about it would have been very different, right or wrong.
Then there's the fact that when you really looked at the sort of death rate from getting COVID, wow, it's really dramatically different.
If you're late in life, this was really dangerous.
And if you're 23 years old, yeah, well, it's not great.
And long COVID's a thing and all of that.
And I think some of the public communications, again, were failing to properly –
I love the comparison to how many video games.
And that definitely speaks to my earlier point, like,
if you've got a lot of young geeky men who really like video games, that doesn't necessarily get you to the right place in every respect.
Um, certainly, um, yeah.
So here's a funny story.
I woke up one morning to a bunch of journalists in Germany trying to get in touch with me because German language Wikipedia chose to have, as the featured article of the day, the swastika,
and people were going crazy about it.
And some people were saying it's illegal, has German Wikipedia been taken over by Nazi sympathizers and so on.
And it turned out it's not illegal. Like, discussing the swastika is fine; using the swastika in a political campaign and using it in certain ways is illegal in Germany in a way that it wouldn't be in the U.S. because of the First Amendment.
But in this case, it was like, actually, part of the point is the swastika symbol is from other cultures as well, and they just thought it was interesting.
And I did joke to the committee, I'm like, please don't put the swastika on the front page without warning me, because I'm going to get a lot of calls. Well, now it wouldn't be me, it's the foundation; I'm not that much on the front lines.
So I would say that putting Hitler on the front page of Wikipedia, that is a special topic.
And you would want to say, yeah, let's be really careful that it's really, really good before we do that.
Because if we put it on the front page and it's not good enough, that could be a problem.
There's no inherent reason.
Like, clearly, World War II...
is a very popular topic in Wikipedia.
It's like, it's always on the History Channel.
Like, people, it's a fascinating period of history that people are very interested in.
And then on the other piece, like, anarchism and Karl Marx?
I mean, that's interesting.
I'm surprised to hear that more political books or topics haven't made it to the front page.
Now, we're taking this Reddit comment as if it's completely accurate. But I'm trusting, so I think that probably is right.
They probably did have the list up.
No, I think that piece, the piece about how many of those featured articles have been video games, if it's disproportionate, I think the community should go, actually, that doesn't seem quite right.
You know, I mean, you can imagine...
that because you're looking for an article to be on the front page of Wikipedia, you want to have a bit of diversity in it. You want it to be not always something that's really popular that week. So, like, I don't know, the last couple of weeks, maybe the big finale of Succession might lead you to think, oh, let's put Succession on the front page, that's going to be popular. In other cases, you kind of want to pick something super obscure and quirky, because people also find that interesting and fun. So, yeah, I don't know, but you don't want it to be video games
most of the time.
That sounds quite bad.
My favorite article?
Well, I've got an amusing answer, which is possibly also true.
There's an article in Wikipedia called Inherently Funny Words.
And one of the reasons I love it is...
When it was created early in the history of Wikipedia, it kind of became like a dumping ground.
People would just come by and write in any word that they thought sounded funny.
And then it was nominated for deletion because somebody's like, this is just a dumping ground.
Like people are putting all kinds of nonsense in.
And in that deletion debate, somebody came forward and said, essentially, wait a second, hold on.
This is actually a legitimate concept.
in the theory of humor and comedy, and a lot of famous comedians and humorists have written about it.
And it's, you know, it's actually a legitimate topic.
So then they went through and they meticulously referenced every word that was in there and threw out a bunch that weren't.
And so it becomes this really interesting article.
Now, my biggest disappointment, and it was the right decision to make because there was no source, was a picture of a cow.
There was a rope around its head tying some horns onto the cow.
So it was kind of a funny looking picture.
It looked like, you know, like a bull, you know, with horns, but it's just like a normal milk cow.
And below it, the caption said, according to some, cow is an inherently funny word.
which is just hilarious to me, partly because the according to some sounds a lot like Wikipedia.
But there was no source, so it went away, and I feel very sad about that.
But I've always liked that.
And actually, the reason Depths of Wikipedia amuses me so greatly is because it does highlight...
really interesting, obscure stuff.
And you're like, wow, I can't believe somebody wrote about that in Wikipedia.
It's quite amusing.
And sometimes there's a bit of wry humor in Wikipedia.
There's always a struggle.
You're not trying to be funny, but occasionally a little inside humor can be quite healthy.
That's very lonely.
Isn't that the kind of thing that makes you want to... like, it sounds implausible at first, because shouldn't everybody have on average about the same number of friends as all their friends?
So you really wanna dig into the math of that and really think, oh, why would that be true?
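(For anyone who does want to dig into the math, here is a minimal sketch, assuming a simple, undirected friendship graph where person $v$ has $d_v$ friends. A random person has, on average, $\bar{d} = \frac{1}{|V|}\sum_{v} d_v$ friends, while a random friend, picked by choosing a random end of a random friendship, has on average $\frac{\sum_{v} d_v^{2}}{\sum_{v} d_v}$ friends. The difference is
$\frac{\sum_{v} d_v^{2}}{\sum_{v} d_v} - \bar{d} = \frac{\operatorname{Var}(d)}{\bar{d}} \ge 0,$
so as soon as friend counts vary at all, your friends really do have, on average, more friends than you do.)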
I mean, it's hard to say.
So part of what I always say about myself is that I'm a pathological optimist.
So I always think everything is fine.
And so things that other people might find a struggle, I'm just like, oh, well, this is the thing we're doing today.
So that's kind of about me.
And it's actually, I'm aware of this about myself.
So I do like to have a few pessimistic people around me to keep me a bit on balance.
Yeah, I mean, I would say some of the hard things, I mean, there were hard moments, like when two out of three servers crashed on Christmas Day, and then we needed to do a fundraiser, and we had no idea what was going to happen.
I would say as well, the...
Like, in that early period of time, the growth of the website and the traffic to the website was phenomenal and great.
The growth of the community and, in fact, the healthy growth of the community was fine.
And then the Wikimedia Foundation, the nonprofit I set up to own and operate Wikipedia, as a small organization, it had a lot of growing pains.
Um, and, you know, that was the piece that's just like many companies or many organizations that are in fast growth: it's like you've hired the wrong people, or there's this conflict that's arisen and nobody's got experience to do this, and all that. So, no specific stories to tell, but, you know, I would say growing the organization was harder than growing the community and growing the website, which is interesting. Well, yeah, it's kind of miraculous and inspiring that a community can...
Yeah, I think that's exactly right.
And at Fandom, my for-profit wiki company, where it's like all these communities about pop culture mainly,
sort of entertainment, gaming, and so on.
There's a lot of small communities.
And so I went last year to our Community Connect conference and just met some of these people.
And like, you know, here's one of the leaders of the Star Wars wiki, which is called Wookieepedia, which I think is great.
And, you know, he's telling me about his community and all that.
And I'm like, oh, right.
Yeah, I love this.
Like, so it's not the same purpose as Wikipedia of a neutral, high-quality encyclopedia, but a lot of the same values are there of like, oh, people should be nice to each other.
It's like when people get upset, it's like, just remember, we're working on a Star Wars wiki together.
Like, there's no reason to get too outraged.
And just kind people, just like geeky people with a hobby.
So 10 years, I would say pretty much the same.
Like we're not going to have... we're not going to become TikTok, you know, with
entertainment, scroll-by video, humor, and blah, blah, blah. We're an encyclopedia.
I think in 10 years, we probably will have a lot more AI-supporting tools like I've talked about, and probably your search experience will be
that you can ask a question and get the answer, you know, from our body of work.
So search and discovery, a little bit improved interface, some of the... All that.
I always say one of the things that people, most people won't notice, because already they don't notice it, is the growth of Wikipedia in the languages of the developing world.
You probably don't speak Swahili, so you're probably not checking out that Swahili Wikipedia is doing very well.
And it is doing very well.
And I think that kind of growth is actually super important and super interesting.
But most people won't notice that.
So what I used to say is like machine translation for many years wasn't much use to the community because it just wasn't good enough.
As it's gotten better, it's tended to be a lot better in what we might call economically important languages.
That's because the corpus that they train on and all of that.
So, to translate from English to Spanish, if you've tried Google Translate recently, and Spanish to English is what I would do, it's pretty good.
It's actually not bad.
It used to be half a joke, and then for a while it was kind of like, well, you can get the gist of something.
And now it's like, actually, it's pretty good.
However, we've got a huge Spanish community who write in native Spanish.
So they're able to use it, and they find it useful, but they're writing.
But if you tried to do English to Zulu, where there's not that much investment, like there's loads of reasons to invest in English to Spanish because they're both huge economically important languages, Zulu not so much.
So for those smaller languages, it was just still terrible.
My understanding is it's improved dramatically, and also because the new methods of training don't necessarily involve
parallel corpora to try to match things up, but rather reading and understanding with tokens and large language models, and then you get a much richer... Anyway, apparently it's quite improved, so I think that now...
It is quite possible that these smaller language communities are going to say, oh, well, finally, I can put something in English and I can get out Zulu that I feel comfortable sharing with my community because it's actually good enough, or I can edit it a bit here and there.
So I think that's huge.
So I do think that's going to happen a lot.
And that's going to accelerate, again, what will remain to most people an invisible trend, but that's the growth in all these other languages.
So then move on to 100 years.
It's starting to get scary.
Well, the only thing I say about 100 years is like...
We've built the Wikimedia Foundation, and we run it in a quite cautious and financially conservative and careful way.
So every year we build our reserves.
Every year we put aside a little bit more money.
We also have the endowment fund, which we just passed $100 million.
That's a completely separate fund.
with a separate board so that it's not just like a big fat bank account for some future profligate CEO to blow through.
The foundation will have to get the approval of a second order board to be able to access that money.
And that board can make other grants through the community and things like that.
So the point of all that is, I hope and believe that we're building in a financially stable way, that we can weather various storms along the way, so that hopefully we're not taking the kind of risks that could put the whole thing in jeopardy.
And by the way, we're not taking too few risks either.
That's always hard.
I think the Wikimedia Foundation and Wikipedia will exist in 100 years.
If anybody exists in 100 years, we'll be there.
I mean, I think right now, this sort of enormous step forward we've seen and has become public in the last year of the large language models really is something else, right?
It's really interesting.
And you and I have both talked today about the flaws and the limitations, but still, as someone who's been around technology for a long time, it's sort of that feeling of the first time I saw a web browser,
The first time I saw the iPhone, like the first time the internet was like really usable on a phone, and it's like, wow, that's a step change difference.
There's a few other, you know.
Maybe Google Search.
Google Search was actually one.
I remember the first search.
Because I remember AltaVista was kind of cool for a while, then it just got more and more useless because the algorithm wasn't good.
And it's like, oh, Google Search, now like the internet works again.
And so large language model, it feels like that to me.
Like, oh, wow, this is something new and, like, really pretty remarkable.
And it's going to have some downsides.
Like, you know, the negative use case.
You know, people in the area who are experts, they're giving a lot of warnings.
And I don't know enough to... I'm not that worried, but I'm a pathological optimist.
But I do see some, like, really low-hanging fruit bad things that can happen.
So my example is...
How about some highly customized spam, where the email that you receive isn't just, like, misspelled words trying to get through filters, but actually is a targeted email to you, written by something that knows something about you from reading your LinkedIn profile, and it's a plausible email that will get through the filters.
And it's like suddenly, oh, that's a new problem.
That's going to be interesting.
So one of my predictions, and we'll see, you know, ask me again in five years how this panned out, is that...
In a way, this will strengthen the value and importance of some traditional brands.
So if I see a news story and it's from The Wall Street Journal, from The New York Times, from Fox News, I know what I'm getting and I trust it to whatever extent I might have trust or distrust in any of those.
And if I see a brand new website that looks plausible, but I've never heard of it, and it could be machine generated content that may be full of errors, I think I'll be more cautious.
I think I'm more interested.
And we can also talk about this around photographic evidence.
So obviously there will be scandals where major media organizations get fooled by a fake photo.
However, if I see a photo of, say, the recent one of the Pope wearing an expensive puffer jacket,
I'm going to go, yeah, that's amazing that a fake like that could be generated.
But my immediate thought is not, oh, so the Pope's dipping into the money, eh?
Probably because this particular Pope doesn't seem like he'd be the type.
People will care about the provenance of a photo.
And if you show me a photo and you say, yeah, this photo is from Fox News, even though I don't necessarily think that's the highest, but I'm like, well, it's a news organization and they're going to have journalists and they're going to make sure the photo is what it purports to be.
That's very different from a photo randomly circulating on Twitter.
Whereas I would say 15 years ago, a photo randomly circulating on Twitter, in most cases,
The worst you could do, and this did happen, is misrepresent the battlefield.
So like, oh, here's a bunch of injured children.
Look what Israel's done.
But actually, it wasn't Israel.
It was another case 10 years ago.
That has happened.
That has always been around.
But now we can have much more specifically constructed, plausible-looking photos, where if I just see them circulating on Twitter, I'm going to go, I just don't know.
I can make that in five minutes.
Now, I agree, but the one thing I've said in the past, and this depends on who that person is and what they're doing, but it's like I think my credibility, my general credibility in the world should be the equal of a New York Times reporter.
So if something happens and I witness it and I write about it, people are going to go, well, Jimmy Wales said it.
That's just like if a New York Times reporter said it.
I'm going to tend to think he didn't just make it up.
The truth is, nothing interesting ever happens around me.
I don't go to war zones.
I don't go to big press conferences.
I don't interview Putin and Zelensky, right?
So just to an extent, yes.
Whereas I do think for other people, those traditional models of credibility are really, really important.
And then there is this sort of citizen journalism.
I don't know if you think of what you do as journalism or what
kind of thing it is, but yeah, you do interviews, you do long-form interviews. And I think, you know, if you come and you say, right, here's my tape... but you wouldn't hand out a tape, I just gestured at you as if I'm handing you a cassette tape... but if you put it into your podcast, here's my interview with Zelensky, people aren't going to go, yeah, how do we know? That could be a deep fake. You could have faked that. Because people are like, well,
No, like you're a well-known podcaster and you do interview interesting people.
And yeah, like you wouldn't think that.
So that your brand becomes really important.
Whereas, and I've seen this already, I've seen a sort of video with subtitles in English, and apparently the Ukrainian said the same thing.
And it was Zelensky saying something really outrageous.
And I'm like, yeah, I don't believe that.
Like, I don't think he said that in a meeting with, you know, whatever.
I think that's Russian propaganda or probably just trolls.
If you want to be successful, do something you're really passionate about rather than some kind of cold calculation of what can make you the most money.
Because if you go and try to do something and you're like, I'm not that interested, but I'm going to make a lot of money doing it, you're probably not going to be that good at it.
And so that is a big piece of it.
I also like to give this advice for startups.
And it goes for a career, a startup, any kind of young person just starting out. It's like,
you know, be persistent, right?
There'll be moments when it's not working out and you can't just give up too easily.
You've got to persist through some hard times, maybe two servers crash on a Sunday and you've got to sort of scramble to figure it out, but persist through that.
And then also be prepared to pivot.
That's a newer word, new for me, but when I pivoted from
Newpedia to Wikipedia, it's like, this isn't working, I've got to completely change. So be willing to completely change direction when something's not working. Now, the problem with these two wonderful pieces of advice is: which situation am I in today? Right? Is this a moment when I need to just power through and persist, because I'm going to find a way to make this work? Or is this a moment where I need to go, actually, this is totally not working and I need to change direction?
But also I think for me, that always gives me a framework of like, okay, here's a problem.
Do we need to change direction or do we need to kind of power through it?
And just knowing like those are the choices, not always the only choices, but those choices, I think can be helpful to say, okay, am I right?
Am I chickening out, like, because I'm having a little bump and I'm feeling emotional and I'm just going to give up too soon?
Okay, ask yourself that question.
And also, it's like, am I being pigheaded and trying to do something that actually doesn't make sense?
Okay, ask yourself that question too, even though they're contradictory questions.
Sometimes it'll be one, sometimes it'll be the other, and you got to really think it through.
Yeah, just stay with it.
I mean, I always like to give an example of MySpace because I just think it's an amusing story.
So MySpace was poised, I would say, to be Facebook, right?
It was lots of things.
Kind of foreshadowed a bit of maybe even TikTok because it was like a lot of entertainment content, casual.
And then Rupert Murdoch bought it and it collapsed within a few years.
And part of that, I think, was because they were really, really heavy on ads and less heavy on the customer experience.
So I remember to accept a friend request was like three clicks where you saw three ads.
And on Facebook, you accept the friend request, you didn't even leave the page.
It just, like, that just accepted it.
But what is interesting, so I used to give this example of, like, yeah, well, Rupert Murdoch really screwed that one up, and in a sense maybe he did, but somebody said, you know what, actually, he bought it for, and I don't remember the numbers, he bought it for 800 million, and it was very profitable through its decline.
He actually made his money back and more.
So it wasn't a bad investment from a financial point of view. It was a bad investment in the sense that you could have been Facebook, but on sort of more mundane metrics, it's like, actually, it worked out okay for him.
It all matters how you define success.
And that is also advice to young people.
One of the things I would say, like when we have our mental models of success as an entrepreneur, for example, and your examples in your mind are,
Bill Gates, Mark Zuckerberg.
So people who at a very young age had one really great idea that just went straight to the moon and became one of the richest people in the world.
That is really unusual, like really, really rare.
And for most entrepreneurs, that is not the life path you're going to take.
You're going to fail.
You're going to reboot.
You're going to learn from what you failed at.
You're going to try something different.
And that is really important because if your standard of success is –
Well, I feel sad because I'm not as rich as Elon Musk.
It's like, well, then so should almost everyone, because everyone except Elon Musk is not as rich as Elon Musk.
And so that, you know, like, realistically, you can set a standard of success, even in a really narrow sense, which I don't recommend, of thinking about your financial success.
It's like if you measure your financial success by thinking about billionaires, like that's heavy.
Like that's probably not good.
I don't recommend it.
Whereas like I personally, you know, like for me, when people, when journalists say, oh, how does it feel to not be a billionaire?
I usually say, I don't know, how does it feel to you?
Because they're not.
But also I'm like, I live in London.
The number of bankers that no one's ever heard of who live in London who make far more money than I ever will is quite a large number.
And I wouldn't trade my life for theirs at all, right?
Because mine is so interesting.
Like, oh, right, Jimmy, we need you to go –
and meet the Chinese propaganda minister.
That's super interesting.
Like, yeah, Jimmy, you know, like here's the situation.
Like you can go to this country and while you're there, the president has asked to see you.
I was like, God, that's super interesting.
Jimmy, you're going to this place, and there's a local Wikipedian who said, do you want to stay with me and my family?
And I'm like, yeah, like that's really cool.
Like I would like to do that.
That's really interesting.
I don't do that all the time, but I've done it, and it's great.
So for me, that's like arranging your life so that you have interesting experiences.
It's just great.
I don't think there is an external answer to that question.
Oh, interesting.
I have to read that and see what I think.
Yeah, yeah, yeah.
No, I think there's no external answer to that.
I think it's internal.
I think we decide what meaning we will have in our lives and what we're going to do with ourselves.
And so when I think, you know, if we're talking about a thousand years, millions of years, you know,
Yuri Milner wrote a book.
He's a big internet investor guy.
He wrote a book advocating quite strongly for humans exploring the universe and getting off the planet.
And he funds projects using lasers to send little cameras and interesting stuff.
And he talks a lot in the book about meaning.
His view is that the purpose of the human species is to
broadly survive and get off the planet.
Well, I don't agree with everything he has to say, because I think that's not a meaning that can motivate most people in their own lives.
It's like, okay, great.
You know, like, the distances of space are absolutely enormous, so I don't know, should we build generation ships to start flying places?
Well, I can't do that. And even if I could, even if I were Elon Musk and I could
devote all my wealth to building it,
I'll be dead on the ship on the way.
So is that really meaning?
But I think it's really interesting to think about.
And reading his little book, it's quite a short little book, it did make me think, wow, this is big.
This is not what you think about in your day-to-day life.
It's like, where is the human species going to be in 10 million years?
And it does make you sort of turn back to Earth and say,
Gee, let's not destroy the planet.
We're stuck here for at least a while.
And therefore, we should really think about sustainability, and I mean one-million-year sustainability.
And we don't have all the answers.
We have nothing close to the answers.
I'm actually excited about AI in this regard, while also bracketing that, yeah, I understand there are also risks and people are terrified of AI.
But I actually think it is quite interesting, this moment in time, that we may have the chance in the next 50 years
to really, really solve some long-term human problems, for example, in health.
Like the progress that's being made in cancer treatment, because we are able to, at scale, model molecules and genetics and things like this.
It's really exciting.
So if we can hang on for a little while...
And certain problems that seem completely intractable today, like climate change, may end up being actually not that hard.
Yeah, you just triggered me to say something really interesting, which is when we talked earlier about translating,
and using machines to translate.
We mostly talked about small languages and translating into English, but I always like to tell this story of something inconsequential, really.
I was in Norway, in Bergen, Norway, where every year they've got this annual festival called Buekorps, which is young groups drumming, and they have a drumming competition.
It's the 17 sectors of the city, and they've been doing it for a couple hundred years or whatever.
They wrote about it in the three languages of Norway.
And then from there, it was translated into English, into German, et cetera, et cetera.
And so what I love about that story is what it reminds me of, like, this machine translation...
And like, when you talk about the richness and broadness of human culture, we're already seeing some really great pieces of this.
So like Korean soap operas, really popular, not with me, but with people.
And the ability to, you know... imagine taking a very famous, very popular, very well-known Korean drama, and
now, I mean, and I literally mean now, we're just about there technologically, where we use a machine to re-dub it in English in an automated way, including digitally editing the faces so it doesn't look dubbed.
And so suddenly you say, oh, wow, like, here's a piece of culture, you know, it's the Korean equivalent of maybe Friends as a comedy, or maybe Succession just to be very contemporary.
It's something that really impacted a lot of people and they really loved it and we have literally no idea what it's about.
And suddenly it's like, wow, you know, like music, street music from wherever in the world can suddenly become –
accessible to us all in new ways.
One of my unsuccessful arguments with the Chinese government was this:
by blocking Wikipedia, right, you aren't just stopping people in China from reading Chinese Wikipedia and other language versions of Wikipedia.
You're also preventing the Chinese people from telling their story.
So, is there a small festival in a small town in China like the Buekorps?
But by the way, the people who live in that village, that small town of 50,000, they can't put that in Wikipedia and get it translated into other places.
They can't share their culture and their knowledge.
And I think for China, this should be a somewhat influential argument because China does feel misunderstood in the world.
It's like, okay, well, there's one way.
If you want to help people understand...
Put it in Wikipedia.
That's what people go to when they want to understand.
I keep saying Wikipedia.