
The Daily AI Show

Tony Robbins’ AI Hype, AI That Agrees Too Much, and McKinsey’s 2025 Report

10 Nov 2025

Transcription

Full Episode

0.959 - 20.342 Brian Maucere

Hey, what's going on, everybody? Welcome to the Daily AI Show Live. Today is November 10th, 2025. With me today so far is Andy. I'm Brian. I think we'll have Karl coming in the door here in just a little bit. I'm glad you guys are with us. If you don't know, I always like to throw this out there at least once a week: we do this show five days a week, Monday through Friday, 10 a.m.


20.362 - 27.591 Brian Maucere

Eastern, same time we're doing this live right now. If you want to join us live, one of the easiest ways to do that is come over to YouTube and you can join the live stream.


27.571 - 47.294 Brian Maucere

Now, you can always watch us on replay; lots of people do. And we show up on the podcast platforms as well, so you can find us on Spotify and other podcast apps and places like that. So make sure you come back and hang out with us. This is episode 591, so we're making our push here towards 600 episodes. And just happy to be here day in and day out talking about AI.


47.734 - 64.577 Brian Maucere

So let's kick things off, Andy, by talking a little bit about some news stories that came either over the weekend or over the wire, which doesn't really happen anymore. But it's still fun to say, like, the news that came over the wire. So what do you have that kind of grabbed your attention?


64.597 - 82.081 Andy Halliday

I'm going to hit on a small one first before I bounce back to you. And it is something that actually was published last week, but we haven't talked about it yet. And I think it's an interesting insight into how the context data that goes into an inference run affects the model's responses.

82.562 - 104.928 Andy Halliday

And it is a study at Stanford that found that AI chatbots don't have the ability to really distinguish between facts and beliefs when they're expressed by the user. So if the user says "I believe," literally those words, "I believe that blah, blah, blah," the model takes that as a fact.

106.711 - 129.832 Andy Halliday

And you could, by prompting the model, prevent that to some degree by saying, look, anything that I say, I don't want you to take as a fact. And I want you to challenge it. But people using ChatGPT or other large language models, if they say, I believe, then that goes in as a premise in the context of the model's reasoning.

130.192 - 141.232 Andy Halliday

And so the models have been found in that study not to be able to differentiate very clearly between something that's rooted or grounded in knowledge and something that the user is merely expressing.

142.234 - 165.444 Brian Maucere

Yeah, I don't know if you deal with this all the time too. I don't think I say "I believe" much; I might do that with models and stuff. But I definitely still see the big problem, and it's not the over-the-top agreeing that I think a lot of people talked about with 4o, the, you know, "Oh, you couldn't be smarter, Andy."
