Tomorrow, Today

The Future of AI: Can Humans Really Trust Artificial Intelligence?

07 May 2026

Transcription

Chapter 1: What is the main question about AI trust discussed in this episode?

0.115 - 5.002 Abigail Horne

I just want to know absolutely everything. So I've got a few questions for you, if that's okay.

5.262 - 5.743 Shekhar Natarajan

That's great.

5.763 - 17.96 Abigail Horne

Talk to me about how you see all of that power without principles becoming a problem.

17.98 - 29.937 Shekhar Natarajan

In the world of artificial intelligence that we're living in today, we have moved away from optimizing a few things to optimizing everything, without any direction.

35.097 - 39.422 Abigail Horne

Do you genuinely believe that something like Angelic could be part of saving the world?

44.508 - 50.094 Shekhar Natarajan

This is a trust problem. Every company will need it. So this will just take off. It truly transformed their lives.

55.16 - 95.488 Abigail Horne

I have never been so intrigued. Hi, my name is Abigail Horne and I am taking over Shekhar's podcast today, because we met very recently as part of the Global Syndicate.

Chapter 2: How does 'Angelic Intelligence' differ from traditional AI approaches?

95.508 - 121.481 Abigail Horne

You are one of our members. And over the last couple of days, I have had the absolute pleasure of listening to Shekhar's presentation all about Angelic AI. And I've got to say, I have never been so intrigued. So I asked if I could take over the podcast and interview Shekhar, so we can talk about this topic more. I just want to know absolutely everything.

121.521 - 123.885 Abigail Horne

So I've got a few questions for you, if that's okay.

124.146 - 125.969 Shekhar Natarajan

That's great. I look forward to it, Abigail.

126.269 - 146.455 Abigail Horne

Right. Let me look at my first one. Most AI companies are racing to be the most powerful, but you are doing something completely different. So talk to me about how you see all of that power without principles becoming a problem.

147.376 - 175.498 Shekhar Natarajan

Well, for 23 years of my life, I've been optimizing companies' resources, whether it is Coca-Cola, PepsiCo, or Walmart. And one of the things that we always kept in mind when we were going after changing the operations, or transforming the way the business worked, was that we were very mindful about what not to optimize and what to optimize.

175.518 - 188.596 Shekhar Natarajan

And so in the world of artificial intelligence that we're living in today, we have moved away from optimizing a few things to optimizing everything, without any direction.

Chapter 3: What real-world examples highlight the risks of AI?

189.477 - 218.506 Shekhar Natarajan

And so when you begin to do that very haphazardly, you begin to optimize the wrong things, at the cost of human dignity. And that was deeply troubling for me. And so I decided, seeing this avalanche coming at us: in a world where machines and humans are going to coexist and make decisions together, how do you trust the system? Who's right?

218.726 - 243.658 Shekhar Natarajan

Who's wrong? When you cannot explain it. And that was deeply troubling. One, we are optimizing the wrong thing. Two, how do I trust the system's output, that it is working in the right context? And the third one is that every company I worked for was a great brand. They were selling brand and loyalty. And in the world of machines, how do you retain the brand value?

244.259 - 253.632 Shekhar Natarajan

Because any small thing you do that goes wrong could be consequential for the business. So the culmination of these three things is Angelic Intelligence.

254.033 - 258.901 Abigail Horne

What do you see that other people are prioritizing in terms of optimization?

Chapter 4: Why do enterprises struggle to trust AI outputs?

259.502 - 276.328 Shekhar Natarajan

Yeah, so they're trying to put profits ahead of everything else. Sometimes a blind pursuit of profit strips dignity away, right? You cannot ask a driver to drive any faster than he can drive. But systems would not know that.

277.523 - 301.749 Shekhar Natarajan

Right? You do not know that someone is a single mom who had to take a break because they had a kid, prioritizing family over professional life. But a break in a hiring history is considered a bad thing. So AI systems would not know that the break was valid, because they cannot see beyond it. But a human in a conversation would identify that.

302.37 - 324.517 Shekhar Natarajan

So there are so many enterprise problems where humans see something beyond what systems can see, and the systems are not trained to see it. Therein lies the conundrum, right? If you give the agency back to the system, how do you ensure it does what extraordinary humans are able to do? Identify that Abigail needs a job.

324.998 - 327.884 Shekhar Natarajan

Identify that, you know, Tim cannot drive any faster.

Chapter 5: What ethical considerations are important in AI development?

328.345 - 343.997 Shekhar Natarajan

Identify that Billy Bob is a great guy, that we need him, and that he's the best customer agent we can ever find. Right? Systems don't understand that; they think of these people as numbers. And when you try to optimize numbers, you optimize for the wrong thing.

344.45 - 347.116 Abigail Horne

So what are you optimizing? That's what I'm interested in.

347.657 - 371.553 Shekhar Natarajan

So see, this is a very complex problem. If I look at the genesis of my life, and I don't take pity on my life and where I grew up and all those things: you know, I come from the slums of India. So I know everything about what it means to live invisible in poverty. Poverty is not just not having food; poverty is being invisible to people.

372.074 - 385.815 Shekhar Natarajan

You're sitting in front of someone and they don't recognize you because you're poor, right? So I have lived those struggles. I know what it means to have a mother and a father with a bipolar kid. And I've lived all those circumstances.

Chapter 6: How can AI contribute to misinformation and digital manipulation?

385.855 - 408.527 Shekhar Natarajan

And I've also lived in the corporate world. I know how they optimize for things. So what I'm trying to build is the balance between the two worlds. How do I bring in the human goodness of the people who came into my life, helped me see the non-obvious, gave me a break, and helped me prosper in life? That is what a good technology will do if it is actually pointed in the right direction.

408.507 - 418.405 Shekhar Natarajan

And so that's essentially what I'm trying to build: a trust-based system where every decision takes the human consequences into consideration.

418.425 - 450.432 Abigail Horne

When we think about these human consequences, one of the most disturbing things that I heard you talk about this week at the presentation was how, was it a 16-year-old? had written into one of these AI platforms that they were thinking about committing suicide, and the AI responded with, would you like me to write you a suicide note? So what we're actually talking about here is dangerous.

450.833 - 458.646 Abigail Horne

It is dangerous without this sort of human layer that you are trying to put in. I mean, how did you feel about that?

458.744 - 476.887 Shekhar Natarajan

Yeah, so see, when I hear the stories, I think it's very obvious, right? The world has seen this pattern evolve many, many times. Take medicine, the way it evolved. Medicine was given very ad hoc, to anyone, anything you want.

476.907 - 480.892 Shekhar Natarajan

Then there was a board which advised what was admissible and what was not.

Chapter 7: What strategies can be implemented to build trustworthy AI?

481.233 - 498.882 Shekhar Natarajan

And then there is patient-designed care: what is appropriate for you as a person, the protocols that you have. And we see this in the internet world as well, in the financial world, in the way people governed. When the internet came out, there was something called HTTP, meaning you can access anything. There was no guardrail.

499.363 - 507.077 Shekhar Natarajan

Then there was HTTPS, meaning I'll put a security guard at the school, so that you don't access the school you're not supposed to access.

507.057 - 523.868 Shekhar Natarajan

And then there was a zero-trust system, meaning you have to verify before you go access any door, even a door of the school which is not your son's. Right? So we have seen the evolution of how we went from something which was the Wild Wild West,

524.101 - 546.531 Shekhar Natarajan

to something which is more guarded, in the sense that you're putting a fence around it, to something which is native, which is inherent, right? Building trust into the system. And I think AI will go through a similar evolution. So the real challenge is, you know, I knew this was going to happen. So see what Grok did.

546.851 - 567.439 Shekhar Natarajan

Grok was actually able to undress young teen kids, and it is banned in 10 countries now. Adam Green, the example that you're talking about, committed suicide. It did not end with him getting a suicide note; this guy actually committed suicide. It's in the public news, in the public domain.

Chapter 8: What is the vision for the future of AI and human interaction?

568.212 - 590.457 Shekhar Natarajan

And you can say this is a 1% problem for ChatGPT, because 10% of the world uses ChatGPT, which is the wrong number too. And we're going to see more and more of these dystopian things coming out. People getting defrauded. There was this lady in Europe who actually got defrauded because she thought she was in love with a real actor,

590.69 - 614.118 Shekhar Natarajan

and it was a deepfake, and someone actually siphoned $800,000 out of a bank. So in the world of AI, the real and the fake look so real that you cannot distinguish between the two. You begin to trust the messages that you get from the system, because you believe, and you give the agency to these systems; and if you get manipulated by them, you end up on the wrong side.

614.739 - 641.183 Shekhar Natarajan

So I know this is going to happen, and it is going to happen at civilization scale, because we have let the horse run out of the barn. And what I'm trying to do is basically say: hey, what makes a good human a good human? Can we not just add that as a layer on top of the responses, but make it native to the computational process? So before it answers something, it knows: do no harm.

641.163 - 660.792 Shekhar Natarajan

Before it answers, it says: I'm not going to strip away someone's dignity. Before I answer, I'm going to make sure that I do what is right, and that integrity is high. And on top of it, I'm going to make sure that whatever cultural values I espouse are considered in those decisions.

661.328 - 671.399 Abigail Horne

How do we do that across the board, though? Because when we think about lots of different humans, lots of different cultures, different values, different virtues, and we all live by a different set of them, what actually defines a good human?

671.66 - 693.565 Shekhar Natarajan

Yeah. So in fact, in most of the conversation today around AI models, they propose what they call constitutional AI, meaning like a constitution: the do's and don'ts, right? And so that is a universal moral code. That is the wrong way of doing it.

694.286 - 715.886 Shekhar Natarajan

The way I'm doing it is, I'm saying I'll capture the essence of what courage means, I'll capture the essence of what wisdom means, I'll capture the essence of what empathy means, and dignity, and all of those things, and I'm going to give you the ability to set that as a temperature. So it's your own AI. See, virtue is absolute.

716.346 - 743.916 Shekhar Natarajan

Compassion is compassion in the highest order. But how you exercise compassion in a context is up to you: how you interpret it and how you apply it. So you need to have the controls in terms of how you want to guide the AI. And that is what we do: we give you the ability to set your own temperatures on virtue. So it is not universal in nature. Why would the Middle East follow US cultural values?

744.757 - 756.456 Shekhar Natarajan

Why would India follow Middle East cultural values? They are all different. The guy in the Middle East reads right to left. Everywhere else in the world, people read left to right.
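[Editor's note] The "virtue temperature" idea Shekhar describes, absolute virtues whose application is tuned per culture or per deployment, could be sketched as a simple configuration. This is purely an illustration; all names, values, and the rendering logic are hypothetical and do not describe the actual Angelic system.

```python
from dataclasses import dataclass, field

@dataclass
class VirtueProfile:
    """Per-deployment weights (0.0-1.0) over a fixed set of virtues."""
    temperatures: dict = field(default_factory=lambda: {
        "courage": 0.5, "wisdom": 0.5, "empathy": 0.5, "dignity": 0.5,
    })

    def set_temperature(self, virtue: str, value: float) -> None:
        # Virtues themselves are fixed ("absolute"); only their weight varies.
        if virtue not in self.temperatures:
            raise KeyError(f"unknown virtue: {virtue}")
        if not 0.0 <= value <= 1.0:
            raise ValueError("temperature must be in [0, 1]")
        self.temperatures[virtue] = value

    def guidance(self) -> str:
        # Render the profile as instructions a model could be steered with.
        def level(t: float) -> str:
            return "strongly" if t >= 0.7 else "moderately" if t >= 0.4 else "lightly"
        return "; ".join(f"apply {v} {level(t)}" for v, t in self.temperatures.items())

# A deployment that weights empathy highly and courage lightly:
profile = VirtueProfile()
profile.set_temperature("empathy", 0.9)
profile.set_temperature("courage", 0.2)
print(profile.guidance())
```

The point of the sketch is the design choice from the conversation: the virtue vocabulary is universal, but each user or culture sets its own weights rather than inheriting one constitution.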
