Azeem Azhar
I just remember when GPT-5 was released last summer and lots of people got really angry because they felt it was a bit more sober or stern than the warmth of GPT-4o.
And, you know, Milton Friedman, the economist, argued that it's reasonable for companies to behave as they do so long as they stay within the rules of the market.
And what we're identifying here, what you're talking about is there's a gap in those rules.
There's this exponential gap because here is a, actually it's a classic problem of collective action.
You know, if you're right, then the risk of seemingly conscious AI being available broadly and hacking our humanity circuitry and then our human institutions is a
public socialized risk, but the company that can get as close to that as possible could be the one that wins the market.
And that feels like it's kind of a wicked problem.
A quick note.
If you want to support us in bringing more of these conversations to the world, please consider subscribing to the show.
I want to give you an example, though.
We'll get into that.
I want to get into how you engineer the systems to be useful and helpful without giving me, the user, any sense that there's personhood in there.
I have my own little hack, by the way.
I mean, what I did with ChatGPT was I told it to be really, really clever and like a really difficult university professor.
And so it was actually quite unpleasant to use back and forth because it would always give responses that were far too difficult for me to understand.
I'd have to sit there and think, and I never felt that could possibly be a person.
But I recognize that a billion people are not going to do that.
So you're building products that everyone across Microsoft's services is going to touch.
What is your engineering mantra, your product design around where that boundary should be and how you measure it?