Andy Halliday

๐Ÿ‘ค Speaker
3893 total appearances

Appearances Over Time

Podcast Appearances

The Daily AI Show
Anthropic's Chief Scientist Issues a Warning

And Confessions is an interesting strategy.

We've seen it, I think, in the research that's been done by alignment teams, where they actually have the model do a sort of sidebar discussion of its internal thinking process, providing more transparency into the deliberations the model is using to arrive at a certain response.

Yeah.

I hope we stay ahead of them, because it goes back to the first point we launched with, which is, wow, these things are going to be far, far beyond our comprehension in their capabilities and speed and power and grasp.

The combination of their expansive memory, their computational speed, and the refinement of their intelligence really makes you worry about making sure that they are in alignment with us.

Yeah, and then let's fast forward to when quantum computing is involved, and we don't understand how it's getting to that level of understanding.

Yeah.

So I want to spin off of your reference to mixture of experts, because I have a little bit of news about that and a discussion.

So almost all of the major models now are being designed as what's called a sparse mixture of experts, or MoE.

And the term sparse, or sparsity, in AI refers to selective activation of different components of the deep neural network.

So you're not activating the entire thing.

A dense deep neural network is one that sends every token through every layer and calculates every token's relationship to every other token.

And that's the dense model.

A sparse model is one that sends the tokens only through a certain expert domain that's been selectively identified, either because of its particular pre-training or because it's relevant to the particular query that's coming through.
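
To make the routing idea concrete, here is a minimal sketch of top-k expert routing in Python with NumPy: a small gating network scores each expert for a token, and only the top-scoring experts actually run. All names and sizes here (gate_w, experts, d_model, top_k) are illustrative assumptions, not details of any particular model.

```python
import numpy as np

# Toy sketch of sparse mixture-of-experts routing (illustrative only).
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

gate_w = rng.standard_normal((d_model, n_experts))   # gating (router) weights
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(token):
    """Route one token through only its top-k experts (sparse activation)."""
    logits = token @ gate_w                     # score every expert
    chosen = np.argsort(logits)[-top_k:]        # keep only the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                    # softmax over the chosen experts
    # Only the chosen experts compute anything; the rest stay inactive.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)                   # (8,)
```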

So the reason this architecture is being used is that we're trying to reduce the cost of inference and the energy consumed in the process.

So a mixture of experts model is the right way to go because, say you have a trillion-parameter deep neural network, it's only activating 200 billion parameters at a time, for example.
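
As a rough back-of-envelope check on that example, the active fraction works out as below; the 1 trillion and 200 billion figures are the illustrative numbers used above, not any specific model's real configuration.

```python
# Hypothetical numbers from the example above, not a real model's configuration.
total_params  = 1_000_000_000_000   # ~1T parameters in the full network
active_params =   200_000_000_000   # ~200B parameters active per token
print(f"active fraction: {active_params / total_params:.0%}")   # -> 20%
```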