Sualeh Asif
Podcast Appearances
AWS is just really, really good. It's really good. Whenever you use an AWS product, you just know that it's going to work. It might be absolute hell to go through the steps to set it up.
Because it's just so good, it doesn't need it. I think that's exactly it; it's just the nature of winning. But AWS you can always trust: it will always work, and if there is a problem, it's probably your problem. Okay, were there some interesting challenges for you guys, as a pretty new startup, in scaling to so many people?
I think the most obvious one is just you want to find out where something is happening in your large code base. And you sort of have a fuzzy memory of, okay, I want to find the place where we do X. But you don't exactly know what to search for in a normal text search.
And so you ask a chat, you hit command enter to ask with the codebase chat, and then very often it finds the right place that you were thinking of.
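The retrieval step described here, matching a fuzzy natural-language memory against a codebase, is presumably done with learned embeddings in the real product. As a minimal sketch of the idea only, the toy below (with made-up file names and snippets) uses token-count cosine similarity instead of neural embeddings, which is enough to show why a vague query like "where do we charge the card" can find the right file even without an exact text match.

```python
import math
import re
from collections import Counter

# Hypothetical mini "codebase": file name -> source text.
snippets = {
    "auth.py": "def check_password(user, pw): return hash(pw) == user.pw_hash",
    "billing.py": "def charge_card(card, amount): gateway.charge(card, amount)",
    "cache.py": "def invalidate(key): store.delete(key)",
}

def vec(text):
    # Crude stand-in for an embedding: a bag of lowercase word tokens.
    return Counter(re.findall(r"[a-z_]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query):
    # Rank files by similarity to the fuzzy query; return the best match.
    qv = vec(query)
    return max(snippets, key=lambda f: cosine(qv, vec(snippets[f])))

print(search("where do we charge the customer's card"))  # → billing.py
```

A production system would replace `vec` with a neural embedding model and a vector index, but the control flow (embed the query, rank candidate chunks by similarity, jump to the top hit) is the same shape.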
Yeah, we thought about it, and I think it would be cool to do it locally. I think it's just really hard. And one thing to keep in mind is that some of our users use the latest MacBook Pro, but most of our users, more than 80%, are on Windows machines, and many of them are not very powerful. And so local models really only work on the latest computers.
And it's also a big overhead to build that in. So even if we wanted to do it, it's currently not something we are able to focus on. There are some people who do that, and I think that's great. But as models get bigger and you want to do fancier things with bigger models, it becomes even harder to run them locally.
There's actually an alternative to local models that I am particularly fond of. I think it's still very much in the research stage, but you could imagine doing homomorphic encryption for language model inference. So you encrypt your input on your local machine, then you send that up, and then the server can run lots of computation on it.
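Homomorphic encryption for full LLM inference is, as stated, still research territory, but the core property (computing on ciphertexts so the result decrypts to the computation on the plaintexts) can be shown with a classical partially homomorphic scheme. The sketch below is a toy Paillier cryptosystem with deliberately tiny primes, purely to illustrate the principle; it is additively homomorphic only and nothing like what LLM-scale inference would require.

```python
import math
import random

# Toy primes; a real deployment would use primes of 1024+ bits.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                      # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # modular inverse, valid since gcd(lam, n) == 1

def encrypt(m):
    # c = g^m * r^n mod n^2, with random r coprime to n.
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n.
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# The homomorphic property: multiplying ciphertexts adds plaintexts,
# so a server can compute on data it cannot read.
a, b = encrypt(5), encrypt(7)
print(decrypt((a * b) % n2))  # → 12
```

The gap between this and encrypted LLM inference is enormous (fully homomorphic schemes support multiplication too, at a large performance cost, and neural networks need many such operations), which is why the idea remains in the research stage.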