Adam Oleksik
How sure are you that you can tell what's real online?
You might think it's easy to spot an obviously AI-generated image, and you're probably aware that algorithms are biased in some way.
But all the evidence suggests that, on a subconscious level, we're pretty bad at telling the difference.
Take, for example, the growing perception gap in America.
We keep overestimating, over and over, how extreme other people's political beliefs are, and this is only getting worse with social media, because algorithms show us the most extreme picture of reality.
As an etymologist and content creator, I consistently see controversial messages go viral because they generate more engagement than neutral perspectives do.
But that means we all end up seeing this more extreme version of reality, and we're clearly starting to confuse that with actual reality.
The same thing is currently happening with AI chatbots, because you probably assume that ChatGPT is speaking English to you.
Except it's not speaking English, in the same way that the algorithm's not showing you reality.
There are always distortions, depending on what goes into the model and how it's trained.
Like, we know that ChatGPT says "delve" at way higher rates than usual, possibly because OpenAI outsourced its training process to workers in Nigeria, who do actually say "delve" more frequently.
Over time, though, that little linguistic over-representation got reinforced into the model even more than in the workers' own dialects.
Now that's affecting everybody's language.
Multiple studies have found that since ChatGPT came out, people everywhere have been saying the word "delve" more in spontaneous spoken conversation.
Essentially, we're subconsciously confusing the AI version of language with actual language.
Ironically, that means the real thing is drifting closer to the machine version of the thing.
We're in a positive feedback loop: the AI represents reality, we mistake that representation for the real thing, and then we regurgitate it, feeding the AI even more of our data.
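To make that loop concrete, here's a toy simulation I've sketched (my own illustrative model, not something from this episode): a word is over-represented in the text the model trains on, listeners drift toward the model's usage, and the next training round ingests that drifted usage. All rates and constants below are hypothetical.

```python
# Toy model of the human-AI linguistic feedback loop described above.
# All numbers are made up for illustration.

HUMAN_BASELINE = 0.010   # hypothetical baseline rate of "delve" in human speech
ANNOTATOR_BIAS = 0.030   # hypothetical higher rate in annotator-written text
ADOPTION = 0.2           # how strongly humans drift toward the model's rate
TRAIN_MIX = 0.5          # share of annotator-style text in each training round

def simulate(rounds: int) -> list[float]:
    """Return the human usage rate of the word after each training round."""
    human = HUMAN_BASELINE
    rates = []
    for _ in range(rounds):
        # The model learns a blend of current human usage and biased annotator usage.
        model = (1 - TRAIN_MIX) * human + TRAIN_MIX * ANNOTATOR_BIAS
        # Humans, exposed to model output, drift part of the way toward the model.
        human = (1 - ADOPTION) * human + ADOPTION * model
        rates.append(human)
    return rates

rates = simulate(10)
print(rates[0], rates[-1])  # the human rate climbs above its original baseline
```

In this sketch the human rate rises every round and converges toward the annotators' rate, never back to the old baseline: once the distortion is in the loop, the "real" usage moves toward the machine's version.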
You can also see this happening with the algorithm through words like hyperpop, which wasn't really part of our cultural lexicon until Spotify noticed an emerging cluster of similar users in their algorithm.
As soon as they identified it and introduced a hyperpop playlist, however, the aesthetic was given a direction.
Only then did people begin to debate what did and did not qualify as hyperpop.