Ramin Hassani
You know, when we scaled the models, we saw that
much, much better and smarter behavior emerged from these AI systems.
That's the excitement that we keep moving towards, right?
We always want to design systems that are more fascinating, you know, systems that get closer to human intelligence, maybe even smarter than humans.
And then that excitement sometimes prevents us from looking into the socio-technical challenges that these AI systems can bring, right?
And that is something that we have to control.
That's a great question.
So think about it like this.
When you're sitting on an airplane as a passenger
and the pilot turns on the autopilot,
you completely trust that autopilot.
Even if you don't understand that system, how is it that we trust that autopilot with such a safety-critical task?
The reason you trust it is that the engineers who designed the whole system completely understand how its mathematics works.
They go through multiple rounds of testing before it's allowed into a safety-critical system like that.
That's the best type of explainability that you want to have.
You know, you want the engineers who design the systems to fully understand how the technology works.
Now, with liquid neural networks, the core mathematics is something that is tractable.
That's why we engineers and scientists are actually able to get inside these systems.
And we have a lot of tools to really steer these systems and put controls on top of them.
Data representation is one aspect.
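To make the "tractable mathematics" point above concrete, here is a minimal sketch of the kind of dynamics liquid neural networks are built on, following the liquid time-constant formulation published by Hasani and colleagues (AAAI 2021); the exact form used in any particular system may differ:

$$\frac{dx(t)}{dt} = -\left[\frac{1}{\tau} + f\big(x(t), I(t), t, \theta\big)\right] x(t) + f\big(x(t), I(t), t, \theta\big)\, A$$

Here $x(t)$ is a neuron's hidden state, $I(t)$ the input, $\tau$ a base time constant, $\theta$ the learned parameters, and $A$ a bias vector. Because $f$ is a bounded nonlinearity, both the state and the effective time constant stay within provable bounds, which is what lets engineers inspect the system's behavior and place controls on top of it.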