Heiki Riesenkampf
Podcast Appearances
But it would be very hard for me to predict how long it would take us to prove out each step of the way. And regarding the APIs, or how our customers are going to connect to the core piece, which is the speech-to-speech translation, I feel like that part will likely stay relatively stable. Most of the effort is going to go into just building the best-in-class speech translator.
This is the third venture-backed company that I'm now building, so I feel like I have plenty of battle scars and hiring mistakes from my last two ventures. My motto for building an early startup team is lean and mean. And what I mean by lean and mean is hiring people that punch above their weight. I do not care about pedigree. I mostly care about people being honest, on board with the mission, and willing to work hard, and having a very tight-knit group of engineers that come to the office every day and have as high an information-sharing and collaborative environment as possible. And so if you ask me, in a few words, what do I look for in an early hire? It is excitement.
It is willingness to work hard, and definitely low ego, because it just makes the collaboration a whole lot easier. And so that would be the short answer.
I feel like when it comes to scaling machine learning models or architecture, we're still very much in the early days. I feel like most companies are still writing their own infra once they get to the stage where they need to scale either their training infrastructure or their inference infrastructure.
There are tools built by big tech to make that simpler, but typically you run into limitations relatively quickly. And so, so far, I see most companies having to roll some version of the infra themselves just to fit their very specific use case. We plan to rely on the cloud providers as long as possible. I don't want to build my own GPU cluster.
I think that's a waste of time for most machine learning companies. So unless you're building an LLM that costs you $10 million or more to train, I think you're much better off relying on someone else's infra and not taking on that DevOps responsibility yourself. So we're definitely going to rely on the GPUs that we get from the cloud providers in the early days.