
Tomorrow, Today

The Future of AI — Setting the Stage for Tomorrow, Today

20 Oct 2025

43 min duration
7001 words
3 speakers
Description

In the premiere of Tomorrow, Today, host Shekhar Natarajan sits down with Nadja Atwal and Kate Hancock from the Global AI Council to explore the realities and possibilities of artificial intelligence. From how AI is already shaping our world to the profound questions it raises about humanity’s future, this conversation sets the foundation for the series. Together, they dive into the opportunities, risks, and choices that will define the next era—while introducing the provocative, unfiltered lens that Tomorrow, Today brings to breakthrough ideas before they hit the mainstream.

Transcription

Chapter 1: What is the main topic discussed in this episode?

0.149 - 24.908 Shekhar Natarajan

So welcome to my show. It's called Tomorrow, Today with Shekhar Natarajan. So we're going to be talking about what's going to happen tomorrow or into the future. And we're going to be talking about it today. And it's a show not about AI and where AI is basically improving technologically, but about the deep societal impact


25.597 - 52.802 Shekhar Natarajan

that we need to consider, we need to have answers to, and how are we preparing ourselves for the future? And on my show, I have the two best people that I would love to launch the show with. One is Nadja Atwal. She's the queen of journalism, my best friend, an amazing journalist, and a thought-provoking person. She has answers to every question that you could ever imagine.


52.782 - 76.909 Shekhar Natarajan

And on the other side of the world is my other best friend, Kate Hancock. And basically, Kate is a two-time TED Talk speaker, amazing soul, and she's leading a beautiful effort called AI for Good. Welcome, both of you. And I'm here to basically kick off the show. Kate, like, why don't you actually walk us through what is happening?


Chapter 2: How does AI impact societal norms and values?

76.929 - 89.488 Shekhar Natarajan

Where are you today? Like, you know, you are in another part of the world. I wish you were here with us as we launched the show. But you're doing something even more important than anything else. So please educate us on what is going on.


89.587 - 112.337 Kate Hancock

Yes, absolutely. I'm currently in Vienna, Austria, attending TED AI, and everyone's talking about AI and media. So Shekhar, I'm kind of curious: the media often swings between fear-mongering and hype when it comes to AI. What do you think the world is getting wrong or right about AI right now? I'm curious about your thoughts on that.


112.604 - 130.5 Shekhar Natarajan

Yeah, and it's different depending on who you talk to. The people who are actually developing AI, they're all about AI. They will let you drink the Kool-Aid about AI. And the guys who are actually drinking the Kool-Aid feel that it's not as cool as it's made out to be. It really is contextual.


Chapter 3: What are the current misconceptions about AI?

131.381 - 150.005 Shekhar Natarajan

And there's so much going on in the world of AI. And let's get to the heart of the question itself, to the show itself. What do you guys think are the deep societal things that we should think about from an AI perspective?


150.626 - 179.373 Nadja Atwal

Well, I personally think there's a big issue right now with disinformation that is led by AI, where people just feel they're being pushed into certain news narratives. And, like, why would I care? And why is this being pushed on me? And is this real or is this not? And there are fake voices and fake videos. And that is what people are very confused by.


179.893 - 202.351 Nadja Atwal

And we need to educate them on that, and also on how to spot it. I think this is basically going to be the new art of everything: how to decipher. How can we tell what is real and what is not? And why certain information is being pushed on us may already give you the first indication that you should look closer.


202.411 - 205.538 Shekhar Natarajan

Kate, what do you think?


205.72 - 227.565 Kate Hancock

Yeah, absolutely. Especially with what's happening in the world, whether it's politics or the current news, you don't even know which one is real, and everyone looks so good. And for us in the AI and the media world, it's easy to detect, but everyday people are getting the wrong information and their emotions get carried away, but it's all fake.

227.585 - 241.323 Kate Hancock

And that's something that we need to address. How can we separate the real information from the wrong? I mean, again, not everyone can detect it, and people stress about something that's not even the truth.

241.584 - 261.574 Shekhar Natarajan

Yeah. No, I'll give you my own personal experience. Like, you know, I've been playing around quite a bit. Growing up as a kid, you know, we didn't have access to many things. I wanted to be an artist. I wanted to learn music. I wanted to learn painting. I wanted to play cricket.

261.594 - 265.54 Shekhar Natarajan

Like my father said, like, you will make a horrible cricket player because we cannot bribe the system.

265.941 - 266.001

Um,

Chapter 4: How can we identify real versus fake information in the age of AI?

305.027 - 324.542 Nadja Atwal

I have to say, you know, I've never known a composer who has so much range, because you're going from Bhangra to... uh, hip hop. And it's very interesting, but I think this is part of the appeal of AI: it doesn't care where you're coming from. It just gives you whatever the market will bear.


324.602 - 340.605 Shekhar Natarajan

Right. Exactly. So, like, what was fascinating about it was the dopamine, you know, the cortisol, all kicking in, all at the same time. And I was basically composing music, and I was literally in it for like three days, nonstop composing music.


340.923 - 367.296 Shekhar Natarajan

And then like things started blurring for me after that, because all I was doing was giving it prompts to say, use these types of instruments, like make this epic, like make this cinematic, make this like, you know, have these instruments play at this sequence, use these decibels. I had no clue of music. But it was composing music as though it was something really good.


367.336 - 390.99 Shekhar Natarajan

And then I started playing it for my driver in India. And there was this huge festival where we celebrate Lord Ganesha on the streets and people dance to music. And my driver said, hey, you know, this music is so good, better than what basically all these guys have actually done, let me start playing this. The artist was unknown.


392.352 - 392.412

Oh.

392.527 - 401.039 Shekhar Natarajan

The music is unknown. I am unknown. I have no clue. I'm not a Beethoven. I have no clue of what I'm doing.

401.079 - 402.662 Nadja Atwal

Do you know how to write music?

402.842 - 432.499 Shekhar Natarajan

I didn't even know how to write music. But here's the beautiful thing. I didn't even know how to write music. And when I actually gave it some lyrics from some of the songs, it said they were copyright protected. And so it rejected it. I said, how do I hack it? So my computer brain started going off. And I said, let me add invisible text to the lyrics. And I gave that to the system.

433.16 - 435.323 Shekhar Natarajan

And it simply took it and recorded the music.

Chapter 5: What role does education play in shaping our future with AI?

658.901 - 684.167 Nadja Atwal

And when I read some ChatGPT lyrics and notes, I have to say they're like good literature. Yeah. And I think our kids today should read more of that instead of YouTubing and doing social media. I think it would already be an upgrade, you know. So there's still a value to it. Maybe it could also kind of be a teacher, where we're saying, okay, we're putting this in.


684.207 - 708.858 Nadja Atwal

This is what we had in mind. And we're getting something back that is far better than we even envisioned. And there is a takeaway. We're saying, oh, this is actually better than I imagined or planned it. So, huh, I'm taking notes of that. If you're not lazy, that's the way you may go about it. I mean, that has happened to me, where I actually learned from ChatGPT.


708.878 - 728.947 Nadja Atwal

I said, oh, you know, this is actually an interesting nuance. It's interesting. English is my second language, so I'm still an eternal student. And then sometimes, when I'm getting certain things back in a certain language, I always want to put it in ChatGPT and make it funny and witty, because that's my personality. I want to have it. So I never want to sound boring and just informational.


728.967 - 738.678 Nadja Atwal

I always want to have infotainment. So I try to teach ChatGPT to reflect my personality. It's a work in progress. What do you think of that?


739.118 - 767.715 Shekhar Natarajan

I think we will begin to lose agency if you try to do that. Right? And so I think in schools, what people need to teach you to do more is use AI, but basically use your brain as well. Yes. So don't outsource your brain, because once you outsource your brain, then you outsource your human aspect, right?

767.795 - 790.595 Shekhar Natarajan

Like, so, you know, people use their brain for reasoning capabilities. All decisions are made through your heart, but your head is your reasoning engine. And if you keep delegating it, outsourcing it, then you begin to lose yourself as a person. So the role of schools and education in the future is to really teach people agency.

792.137 - 808.861 Shekhar Natarajan

Like, how do I take what is coming out of ChatGPT? How do I have the sense of right and wrong? How do I control the agency and the choices I make with what is being presented? I think that's where we should be thinking more.

809.302 - 835.861 Nadja Atwal

So basically the combination of human thinking and AI is still the gold standard. And Kate is at the forefront of that. That's why she created the Global AI Council, because it's all about AI for good. And Kate and I both have sons. And now it's all about thinking of how we're going to educate our children in the future. And I came across a school named Alpha that is very AI-based.

836.562 - 862.822 Nadja Atwal

And they are saying, oh, we only have basically two hours of school per day for kids. And it's very tailored to the individual. Well, in medicine, that's where we are going. So the whole medical approach of the future is we're going to tailor your medical treatment towards your DNA. Now, why do we still have schools that are one-size-fits-all? I find that actually very retro, I have to say.

Chapter 6: How can AI assist in creative processes like music composition?

1118.239 - 1123.707 Shekhar Natarajan

And the boxes don't have identity. They don't know it is a medical delivery.


1125.188 - 1150.116 Nadja Atwal

Yeah, I just had a horrible experience on that front, because I could not get my new replacement card from Amex forever. But I landed, thank God, with a supervisor and she sorted it out. And it was a very simple error that happened, but it delayed things forever and ever. And then you compare that to other things we are capable of through AI. And you just simply think this doesn't match.


1150.096 - 1161.017 Shekhar Natarajan

This, like, you know, that's exactly the point, the cognitive, right? Like our ability to solve these very complex problems and then forget the very simple problems that are actually going to be lifesaving.


1161.402 - 1165.428 Nadja Atwal

Like, with this girl who couldn't get her medicine. Tell us about what happened with her.


1165.589 - 1180.913 Shekhar Natarajan

So, basically, you know, this is a great friend of mine. He works for a Fortune 100 company. He had an 8-year-old daughter. Her name was Maya. And basically, you know, she was born diabetic. Like, you know, she was, like, you know...

1180.893 - 1205.078 Shekhar Natarajan

Uh, and then, you know, there was this prescription drug. It said it's out for delivery, and then it never showed up. Then she basically had arrhythmia. You know, she went into shock, because, you know, she needed the medication. Her heart and her brain were both competing at the same time.

1206.188 - 1231.863 Shekhar Natarajan

And literally within 36 hours, this young soul, beautiful girl, full of energy. And this guy is like my bestie. Okay. The girl just vanishes 36 hours later. And it's a simple problem. Like, you know, like, and... Gosh, we can go back and run this in history and say, should we have gone to a medical store and tried to see or do this or do that? It was a weekend.

1231.923 - 1254.134 Shekhar Natarajan

And who would have expected that this sequence of events would just happen so fast? And we would just lose a kid to multiple things all at the same time. Because she had multiple things going on at the same time. She had the heart problem. She had the arrhythmia and the diabetic shock. It's crazy.

1254.415 - 1259.867 Nadja Atwal

Yeah. But at the end of the day, the simple problem was also that it all didn't arrive on time.

Chapter 7: What are the challenges and risks associated with AI in logistics?

1785.56 - 1814.617 Nadja Atwal

It would have been a comfortable life. Instead, you chose to start your own company with all the risks that come with it, all the unknown, all the uncertainty, and raising a young family in the midst of it. What are you seeing for Orchestra in the next one, two years? Because obviously we are in a race that is pretty tight. There's urgency.


1814.657 - 1824.897 Nadja Atwal

At least the media is trying to suggest to us we are in a very tight race to save humanity.


1825.13 - 1853.247 Shekhar Natarajan

Yeah. So I guess, you know, as I said, we can control what we can control. And I think in the next two years, we have already begun to take on the world of logistics, which is highly human but least respected. And if I can apply the concept of angelic intelligence to logistics, then we can actually take it everywhere else, because it just becomes a service that is


1853.227 - 1877.187 Shekhar Natarajan

available to every sector, like hospitality, healthcare, and any profession where humans are involved; we can take the same philosophy. And the way we are building it is fundamentally different. And I have gone out to many countries. You know, you and I keep talking about it. I've gone out to many places in the world.


1877.687 - 1890.14 Shekhar Natarajan

And then there is a sharp, like, there's a lot of fear. And there's a sharp contrast between what I am building and what the rest of the world is building.

1890.26 - 1891.541 Nadja Atwal

What is the sharp contrast?

1891.662 - 1921.737 Shekhar Natarajan

The contrast is very simple. The current AI models assume safeguards as an afterthought. Right? So you build, you run into mistakes, you have accidents, then you put in a patch. Right? And that's not how you build modern-day machines. You build virtues natively into the machines. It's built in. It's not an afterthought.

1922.718 - 1946.734 Shekhar Natarajan

And computationally, the way you think about the problem itself is very different from how the traditional models have been trained. With the traditional models, the humans gave the data to the machines. The machines took the data and are helping the humans get better answers. Now, we don't like the answers, we like the answers, and we are basically, you know, in this loop.

1946.714 - 1970.393 Shekhar Natarajan

What I am saying is: stop. Let's take what makes us human first. Let's take the outcomes we are looking for, so that we serve the human better. So think of the machine serving the human, not the other way around. With the current approach, what's happening is machines are behaving like humans.

Chapter 8: How can we ensure AI development aligns with human values?

2243.973 - 2273.28 Shekhar Natarajan

They're setting what we need to be in the future through their tools. And I don't know who gave them the right. So, you know, just imagine, right? These guys don't run a government, but they can tell you how humanity needs to behave in the future. Is it not very perverse, what is happening? And how we as humans, like, you know, that's the question I want to ask you.


2273.3 - 2282.749 Shekhar Natarajan

How can we as humans actually stand up to some of these things? How do we regulate this? Governments are not going to regulate it.


2283.235 - 2302.56 Shekhar Natarajan

Right? Companies have no, like, you know, need to regulate any of these things. But what is our role as humans, to be able to stand up to this and say, this is how we want these things to work? Shekhar, I struggle with even opting out of the cookies when it comes to a website.


2303.823 - 2320.178 Nadja Atwal

I can't even recognize that. You're asking me about the AI thing. You know, I think this is way beyond my thing. I'm just trying to figure it out. I couldn't even do the parental controls on my son's iPhone. He's outsmarting me.


2320.158 - 2348.022 Nadja Atwal

Both of us parents, on the laptop and on the iPhone. I speak with other moms whose sons have all figured out how to outsmart the parental controls. I think we need some crash courses on that front. But I also would say this, and Kate, I'm curious for your take on that, but I would say, you know, it's the same as we had with anything in the world: electronics, medical companies,

2348.66 - 2379.649 Nadja Atwal

pharmaceuticals that have majorly impact us in a good and a bad way i think in the end it's we're all learning by doing and um as for who's making the decisions well it seems the politicians are hesitant and they're also hesitant because they feel they're between a rock and a hard place. Because on one hand, you want to give an industry the room to create innovation.

2380.35 - 2393.297 Nadja Atwal

And you want to win the race against other nations. On the other hand, you also don't want to create a Frankenstein. So it is a very delicate duck dance. Kate, what do you think?

2393.581 - 2416.412 Kate Hancock

Yeah, I think that's a very, very good question. I think it takes a lot. That's the reason why we launched the Global AI Council, for all of us to be literate about AI. Because we didn't even know, like you mentioned, we are a slave and we, you know, just rely on AI, whether it's being creative, or our emotions, and how we write our speeches, right?

2416.852 - 2439.669 Kate Hancock

So I think it takes time for us all to be enlightened. And then there's going to be a movement for people: wait a minute, we have to do something about it. I think it takes time. We're going to get there. I think people will speak out, like, okay, this is not a good thing. But it's definitely a very scary future if not all of us are aware of what's happening.
