Chris Pedregal
And if you're not careful about what you outsource, I think there's real danger there. As for the tool I would want: right now, we have so many silos of information, so many silos where knowledge or inspiration or information comes from. And oftentimes, I'm only really looking at data or information from one of those silos when I'm thinking about a topic.
And what I want is a tool that will pull out the most relevant and best stuff, from my personal life and my context, but also from what humans out there have figured out, and present that to me dynamically, on the fly, in a way that I can interpret and make use of in real time. What that looks like, I don't think anyone knows. I saw this amazing demo a friend of mine made.
There was this microphone hooked up to something like Midjourney, but running at something like, I think, five or eight frames a second. And what it was doing, in real time as you were talking, for this conversation, was projecting on the wall imagery that was related to what we were talking about, but slightly divergent.
He was using this for a Burning Man creative experience. But you could imagine something like that in a work context, where it's helping you think out loud, but it's also extending your thinking and bringing in ideas or useful information that you wouldn't have had otherwise. I think doing that in a way that's helpful and not distracting is actually really, really hard.
And there are a lot of these ideas in sci-fi that sound fantastic and then in practice don't work for really silly tactical reasons, like notes being written for you in real time being distracting. I think there's a lot about the human experience that defines what works and what doesn't. I could talk about this for hours. I gush about it.
I just think it's such an incredible moment to be alive and to be building things.
I think it's good to separate the reality of today from the reality that will persist: which limitations will persist into the future? It is surprising to me how unpersonalized any of these models feel today. If you ask one a question and I ask it the same question, the answers are going to be identical or almost identical, given how many years we are into this cycle.
I think that's really surprising. This is a small thing we do at Granola that people like: if you were using Granola in, let's say, a meeting, and I were using Granola in that same meeting, your notes and my notes would look completely different. And that's just because we built it that way.
We're like, okay, the things that matter to Patrick in this meeting we think are this; the things that are going to matter to Grace are this. But the low level of personalization today is surprising to me.
I'm not an investor, so it's hard for me to give advice to investors, but I can tell you what speaks to me. The same way I talked about how, when you're building a feature, you need to know whether it's in exploit mode or explore mode: I think AI as a whole is an explore-mode problem. No one knows what the right thing is. I think maybe foundation models are now more in exploit mode.