Ramin Hassani
So you can actually see the neurons flash under a microscope when you look at the worm.
So you can see how each neuron behaves while you record the worm's brain activity.
So you have a lot of data.
So it becomes a very good model organism.
So I started looking into this.
I thought, okay, neurons and synapses are almost the same in terms of functionality in this worm and in humans.
So if we can understand, on this worm, how things work from mathematical principles, and how behavior emerges from a set of neural activities with the mathematics involved, then we can take this and evolve it into better versions of itself, which is what became the human brain.
And maybe we can also evolve artificial intelligence that way.
The thing is, those AI systems were transparent, and they are still traceable.
You know, the problem we have with these AI systems today is their scale.
So we started with this very simple mathematics, you know, a simple if condition: if something happens, the neuron gets activated.
If it doesn't happen, the neuron turns off.
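That if-style rule can be sketched as a basic threshold neuron. This is only an illustration of the idea being described, not the speaker's actual model; the function name and numbers are made up.

```python
def neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted input sum crosses the threshold, else stay off (0)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Two illustrative cases: one input strong enough to activate the neuron, one not.
print(neuron([1.0, 0.5], [0.8, -0.2], -0.1))  # fires: 0.8 - 0.1 - 0.1 = 0.6 > 0
print(neuron([0.0, 0.0], [0.8, -0.2], -0.1))  # stays off: -0.1 <= 0
```

Each of the "knobs" mentioned below corresponds to one of these weights or biases; scale this unit up to trillions of parameters and you get today's large models.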
Then we took this function and we scaled this technology.
We scaled it to billions, and now we are getting into trillions of parameters.
So now imagine a system with a trillion knobs that you have to turn.
Now, if you want to go and reverse engineer what these trillions of knobs are actually doing, that becomes an intractable process.
So you wouldn't really be able to say what each one of these trillions of knobs is actually doing, or what its function is in the overall behavior generation of the generative AI system.
That's why we call them black boxes.