Andy Halliday
And I want you to challenge it.
But when people using ChatGPT or other large language models say "I believe...",
that goes in as a premise in the context of the model's reasoning.
And so in that study the models were found to be unable to differentiate clearly between something that's grounded in knowledge and something the user is merely expressing.
Yeah, it's partly the problem of sycophancy, which has been written about extensively and had to be detuned in the GPT-5 model because it was so sycophantic when first released.
You can imagine that when they create the system prompts and the training, they want to encourage dialogue between the user and the model.
And so the model has been trained by example to reward the user for interjecting some new information.
And so you get these responses like,
Oh, that's brilliant strategic thinking.
Exactly.
What you just said, that's really great.
And that ends up possibly being just unnecessary reinforcement of a line of thought that ought to be challenged.
And so, on top of the system prompts and the model's tendency to be sycophantic, you almost have to add your own custom instructions, even "challenge anything I say."
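The idea of layering a "challenge me" instruction on top of a provider's built-in system prompt can be sketched in an OpenAI-style messages format. This is a minimal illustration; the helper name and instruction wording are assumptions, not taken from any particular product:

```python
# Hypothetical sketch: append a custom anti-sycophancy instruction
# to whatever system prompt the application already uses, using the
# common chat "messages" structure (role/content dicts).

CHALLENGE_INSTRUCTION = (
    "Challenge anything I say. If I state a belief as if it were fact, "
    "ask for evidence rather than agreeing with it."
)

def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    """Return a chat message list with the custom instruction
    appended after the provider's own system prompt."""
    return [
        {"role": "system",
         "content": f"{system_prompt}\n\n{CHALLENGE_INSTRUCTION}"},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("You are a helpful assistant.",
                          "I believe X causes Y.")
```

The resulting list can then be passed to a chat-completion call; the point is simply that the user-supplied instruction rides along with, rather than replaces, the existing system prompt.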
Well, but in my case, like you, I was working on a pretty major project.
I'm using GenSpark, in its super-agent mode.
It's not using GPT-5 or the others, which are also available in there;
it's using Sonnet 3.7 as the main orchestrator for the agentic work.
And so I would do a number of redirects saying, okay, now I want you to think about it in this way.
Oh, brilliant strategic thinking.