Harlan Stewart
And we don't really know that either, because I think it's really easy to anthropomorphize these things because they sort of train them to have these charming personalities that are kind of human like.
But under the hood, you know, these things are just a big pile of math and numbers, and we don't really know what's going on in there.
We don't really know.
I think that's a good point.
I mean, neuroscience is famously a science that we still have a lot of confusion about. When we peer into the brain, we see a lot of stuff that we don't understand that well.
But for understanding humans, we at least have the advantage of being human, you know, we all have this shared experience.
And I think we're sort of growing these digital minds now.
Maybe they're human-like, but it could be much more like introducing an alien species to Earth.
Yeah, I do think it is quite an amazing invention.
It's fascinating, and it's changing so quickly.
You know, the AI industry's explicit goal is to make superhumanly powerful autonomous agents that can do anything a human can do, but better.
And it's easy to understand why you might want something like that, because if we could get it to solve our problems for us, to do the stuff we want it to, it'd be great to have.
You know, just sort of a genie that you could send off into the world and say, hey, do the stuff that I want.
But the problem is that our ability to actually understand what's going on in there, and our ability to reliably steer their behavior, is lagging behind.
And by reliably steer, I mean not after some trial and error where there's been a lot of failures, but reliable enough that we could send a powerful one out on the first try and trust it.