Grant Harvey
Podcast Appearances
Nobody has made a good agentic writing tool where you can work with words for a document you're making, just like a coding agent.
Why does that not exist?
And can you make one?
What's the key insight?
Yeah.
Following that same line of thinking, do you have any intuition about how a company could make the rest of its business agent-ready?
So it sounds like this is very powerful for engineers, but even if you're just starting out in engineering, this is also a good tool for you.
Do you agree with that or what do you think?
Do you have any advice?
Because you are clearly very good at building agentic tools and working with agents.
Do you have any advice for people who are working with agents or trying to build their own, especially if they're new to working with agents or new to the field?
Doing good.
Really excited today because we are talking about something deceptively simple but incredibly fragile, whether or not we can monitor an AI's reasoning before it causes harm.
Reasoning models don't just spit out answers; they work through their responses.
They plan, they deliberate, and sometimes that internal chain of thought reveals intent that never shows up in the final output.
I guess let's start a little bit further back than that.
So Bowen, when did you join OpenAI, and how did you get involved in monitorability research more broadly?