Which, like, you can't fault us for.
'Cause what else do we have to compare it to?
But I think it's always going to look a little bit different, and honestly, I think we're just going to keep moving the goalposts.
And it's going to keep getting more, more powerful, more general purpose, more amazing.
But I don't know if we're really going to get to this moment where we're like, now it's, now it's arrived.
Develop a consciousness? Yeah, I mean, the whole consciousness debate, you could do that debate forever, but we don't even have consciousness pinned down in humans, right? We haven't fully defined it. It hasn't been proven in humans either. We don't know. We're just some physical pattern doing cool stuff. And AI is going to be similar: some physical pattern doing cool stuff, a lot of which we can't do.
Yeah.
Good luck.
But it's going to keep getting crazier.
That's for damn sure.
And it's going to keep doing, you know, just crazier and crazier stuff.
Like one thing that, you know, Demis, the founder of DeepMind, often points to is: can we get AI to generate original science, for example?
Yeah, like people like Newton or Einstein or Teller or Dirac.
Yeah, they observed our reality and extrapolated math from it, essentially, that we could then use to make predictions about how reality behaves.
And can we get an AI to do that is a really interesting question.
And right now it's unclear. There are some little early examples, but we definitely haven't figured out how to automate physics, to automate scientific discovery with AI yet. But that would be sick. That would be nuts. Because right now it's all the prompting; the human still has to put a lot of effort into the AI.
Humans have to put a lot of effort into the AI, and there's no prompt right now that we can give to really get some original observation about reality out from the AI.
So there's also a model limitation: the model just doesn't understand, or isn't able to observe, our reality well enough to really draw its own conclusions, if that makes sense.
Yeah, thanks for asking.