Hello and welcome to Better Offline.
I'm, of course, your host, Ed Zitron.
As ever, support your neighborhood Zitron by subscribing to the premium newsletter; discount link in the episode notes, of course.
Buy a t-shirt, download a blog, whatever it is you want to do, okay?
It's not up to me what you do.
But today, I'm joined by the incredible computer science professor and commentator Cal Newport.
Cal, thank you for joining me.
So I kind of wanted to start with this: I asked you for a quote a week ago, maybe two weeks ago.
I can't remember how time works anymore, but it was about the way reporters cover AI, and how it seems that a lot of the reporting is kind of directionally true rather than actually true.
And I see it a lot with anything to do with AI and job studies.
Like, I've been sent this Tufts report where it's like, oh yeah, AI affected... or they find these weird weasel words where it's like, jobs that could be at risk from AI at some point, and we put them in one bucket, and then jobs that might one day be, we'll put those in another bucket, and there you go.
Don't know what we're, like you said, don't know what we're meant to do with this, don't know what anyone's meant to do with this information.
But it's just like, well, there you have it. There you have it, we're all fucked.
It's the end of the job, even though the data does not say that.
Like, I've read, I think, every AI jobs report now.
Every single one.
And they're all the same.
They are all: right now, AI can do this.
And then you look at what it says.
It's like, it can do law.
Well, it can't really do law.
It can do one sigma within law, kind of.
And even then, it isn't really obvious.