
Mike Krieger

👤 Speaker
633 total appearances

Appearances Over Time

Podcast Appearances

Decoder with Nilay Patel
Anthropic’s Mike Krieger wants to build AI products that are worth the hype

So you have to kind of like walk through the door and say, the earliest I'd be willing to go back the other way is, you know, two months from now or with this particular piece of information. And hopefully that kind of quiets the, like, even internal critic of, like, "It's a two-way door. I'm always going to want to go back there."

Decoder with Nilay Patel
Anthropic’s Mike Krieger wants to build AI products that are worth the hype

I think current generation, yes, in some areas, no, in others. I think maybe what makes me an interesting product person here is that I really believe in our researchers, but default belief is everything takes longer in life and in general and in research and in engineering than we think it does. I do this mental exercise with the team, which is,

Decoder with Nilay Patel
Anthropic’s Mike Krieger wants to build AI products that are worth the hype

If our research team, like, got Rip Van Winkle'd and all fell asleep for, like, five years, I still think we'd have five years of product roadmap. And we'd be like, we are bad at our jobs. We're terrible at our jobs.

Decoder with Nilay Patel
Anthropic’s Mike Krieger wants to build AI products that are worth the hype

We can't think of all the things that even our current models could do in terms of improving work, accelerating coding, making things easier, coordinating work, even intermediating disputes between people, which I think is a funny LLM use case that, like, we've even seen play out internally around like.

Decoder with Nilay Patel
Anthropic’s Mike Krieger wants to build AI products that are worth the hype

These two people have this belief, like help us even ask each other the right questions to get us to that place. So it's just a good sounding board as well. Like there's a lot in there that is embedded in the current models.

Decoder with Nilay Patel
Anthropic’s Mike Krieger wants to build AI products that are worth the hype

I would agree with you that like the big open questions to me, I think it's basically like for longer horizon tasks, what is the sort of horizon of independence that you can and are willing to give the model? Like the metaphor I've been using is right now, LLM chat is very much, you've got to do the back and forth because you have to correct, you know, you've got to iterate.

Decoder with Nilay Patel
Anthropic’s Mike Krieger wants to build AI products that are worth the hype

No, that's not quite what I meant. I meant this. A good litmus test for me is, like, when can I email Claude and generally expect that an hour later, it's not going to give me the answer it would have given me in the chat, which would have been a failure, but, like, it would have done more interesting things and gone and found things out and iterated on them and even, like, self-critiqued, and then responded.
