
The Joe Rogan Experience

#2311 - Jeremie & Edouard Harris

Fri, 25 Apr


Description

Jeremie Harris is the CEO and Edouard Harris the CTO of Gladstone AI, a company dedicated to promoting the responsible development and adoption of artificial intelligence. https://superintelligence.gladstone.ai/ Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcription

Chapter 1: What is the current state and timeline of AI capabilities?

2.15 - 22.279 Unknown

All right, so if there's a doomsday clock for AI, what time is it? If midnight is "we're fucked." We're getting right into it.


22.759 - 42.151 Joe Rogan

You're not even going to ask us what we had for breakfast? Jesus. Okay. Let's get freaked out. Well, OK, so without speaking to, like, the fucking doomsday dimension right off the top, there's a question about where we're at in terms of AI capabilities right now. And what do those timelines look like? Right. There's a bunch of disagreement.


43.031 - 67.162 Joe Rogan

One of the most concrete pieces of evidence that we have recently came out of a lab, an AI evaluation lab called METR. And they put together this test. Basically, you pick a task that takes a human a certain amount of time, like an hour, and then see how likely the best AI system is to solve that task.


67.663 - 87.191 Joe Rogan

Then try a longer task. Say, like, a 10-hour task. Can it do that one? And so right now what they're finding is, when it comes to AI research itself, so basically automating the work of an AI researcher, you're hitting 50% success rates for these AI systems on tasks that take a human an hour. And that task length is doubling, right now it's like every four months.


87.371 - 104.602 Unknown

So, like, you had tasks that a person does in five minutes, like ordering Uber Eats, or something that takes 15 minutes, like maybe booking a flight or something like that. And it's a question of how much can these AI agents do, right? Like from five minutes to 15 minutes to 30 minutes.

105.002 - 114.352 Unknown

And in some of these spaces, like research, software engineering. And it's getting further and further and further. And doubling, it looks like, every four months.

114.572 - 127.332 Joe Rogan

So if you extrapolate that, you basically get to tasks that take a month to complete. Like by 2027... Tasks that take an AI researcher a month to complete, these systems will be completing with like a 50% success rate.
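The extrapolation described here is simple doubling arithmetic. A minimal sketch, assuming a 1-hour task horizon today, a 4-month doubling period, and a 160-hour work month — all illustrative round numbers, not METR's exact figures:

```python
import math

# Assumptions (illustrative, per the trend described above):
# - AI systems currently hit ~50% success on tasks that take a human ~1 hour
# - the task length at 50% success doubles roughly every 4 months
current_task_hours = 1.0      # task length AI handles at 50% success today
doubling_period_months = 4    # observed doubling time of that task length
work_month_hours = 160.0      # ~1 month of full-time work (4 weeks x 40 h)

# Doublings needed to go from 1-hour tasks to month-long tasks
doublings = math.log2(work_month_hours / current_task_hours)
months_needed = doublings * doubling_period_months

print(f"{doublings:.1f} doublings -> ~{months_needed:.0f} months")
# ~29 months, i.e. roughly two and a half years out — consistent with
# "tasks that take a month" landing around 2027 if the trend holds
```

Under these assumptions the math works out to about seven doublings, which is why a four-month doubling period puts month-long tasks a couple of years away rather than a decade.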

127.572 - 139.382 Unknown

So you'll be able to have an AI on your show and ask it what the doomsday clock is like by then. It probably won't laugh. It'll have a terrible sense of humor about it.

139.482 - 161.396 Joe Rogan

Just make sure you ask it what it had for breakfast before you start. What about quantum computing getting involved in AI? So, yeah, honestly, I don't think it's – if you think that you're going to hit human-level AI capabilities across the board, say, 2027, 2028, which when you talk to some of these – the people in the labs themselves, that's the timelines they're looking at.

Chapter 2: How does quantum computing influence AI development?

1916.789 - 1923.454 Joe Rogan

And they brought out the seal. And that's how it became public. It was basically like the response to the Russians saying like, you know.


1923.794 - 1942.702 Unknown

Wow. Yeah. Yeah, they're all dirty. Everyone's spying on everybody. That's the thing. And I think they probably all have some sort of UFO technology. We need to talk about that. We can turn off our mics. I'm 99% sure a lot of that shit is ours. You need to talk to some of the... I've been talking to people.


1943.102 - 1945.683 Joe Rogan

I've been talking to a lot of people.


1946.643 - 1968.619 Unknown

There might be some other people that you'd be interested in chatting with. I would very much be interested. Here's the problem. Some of the people I'm talking to, I'm positive, they're talking to me to give me bullshit. Are we on your list? No, you guys aren't on the list. But there's certain people I'm like, okay, maybe most of this is true, but some of it's not on purpose. There's that.


1969.039 - 1971.822 Unknown

I guarantee you I know I talk to people that don't tell me the truth.

1972.222 - 1985.253 Joe Rogan

Yeah. Yeah. It's an interesting problem in, like, all intel, right? Because the mix of incentives is so fucked. Like the adversary is trying to add noise into the system. You've got pockets of people within the government that have different incentives from other pockets.

1996.902 - 2014.524 Unknown

Yeah. Just stop listening to him. One of the techniques is actually to inject so much noise that you don't know what's what and you can't follow. So this actually happened in the COVID thing, right? The lab leak versus the natural wet market thing.

2014.825 - 2014.965 Jamie

Yeah.

2015.205 - 2035.93 Unknown

So I remember there was a debate that happened about what was the origin of COVID. This was like a few years ago. It was like an 18 or 20 hour long YouTube debate, just punishingly long. And there was a hundred-thousand-dollar bet either way on who would win. And it was like lab leak versus wet market.

Chapter 3: What are the challenges and culture issues in academia versus startups?

2431.377 - 2443.14 Joe Rogan

If that if you get to that point, you know, these these labs ultimately will have the ability to deploy agents at scale that can just persuade a lot of people to do whatever they want, including pushing legislative agendas.


2443.86 - 2459.705 Unknown

And even help them prep for meetings with the Hill, the administration, whatever. And like, how should I convince this person to do that? Like, yeah, well, they'll do that with text messages, make it more businesslike. Yep. Make it friendlier. Make it more jovial.


2459.845 - 2469.332 Joe Rogan

But this is like the same optimization pressure that keeps you on TikTok, that same addiction. Imagine that applied to persuading you of some fact, right? Yeah.


2470.053 - 2501.512 Unknown

On the other hand, maybe a few months from now, we're all just going to be very, very convinced that it was all fine. There's no big deal. 80%.


2501.572 - 2511.16 Joe Rogan

That's one of the reasons why the bot purge, like when Elon acquired it and started working on it, is so important. Like there needs to be – the challenge is like detecting these things is so hard, right? So hard.

2511.18 - 2529.188 Unknown

Increasingly. Like more and more they can hide like – Basically perfectly. Like how do you tell the difference between a cutting edge AI bot and a human just from the – You can't, because they can generate AI images of a family, of a backyard barbecue, post all these things up and make it seem like it's real.

2529.649 - 2549.007 Unknown

Especially now. AI images are insanely good now. They really are. It's crazy. And if you have a person, you could just – you could take a photo of a person and manipulate it in any way you'd like. And then now this is your new guy. You could do it instantaneously. And then this guy has a bunch of opinions on things and seems to – seems to always align with the Democratic Party. But whatever.

2549.027 - 2571.71 Unknown

He's a good guy. Yeah. Family man. Look, he's out in his barbecue. He's not even a fucking human being. And people are arguing with this bot like back and forth. And you'll see it on any social issue. You see it with Gaza and Palestine. You see it with abortion. You see it with religious freedoms. Yeah. You just see these bots, you see these arguments, and you see various levels.

2572.13 - 2583.617 Unknown

You see the extreme position, and then you see a more reasonable centrist position. But essentially what they're doing is they're consistently moving what's okay further and further in a certain direction.
