Daniel Dines
So we would capture the edit box image and a label, and we would find them during runtime. But from the perspective of the user it was really simple. I remember it was around 2013 when I showed this product to some guys who were really Blue Prism experts.
And in order to do the same thing in Blue Prism, it would have taken like two days, and the outcome would not have been as reliable as in our case. I built this flow in front of them in three to five minutes, pressed run, and it worked. And I asked them, what do you think, guys? Total silence. They couldn't believe their eyes.
And that was our first niche, where people would deploy us and prefer us over Blue Prism. And from that, we have basically expanded into what we are today.
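A minimal sketch of the image-plus-label lookup described above, using the third-party pyautogui library rather than UiPath's actual engine; the file name, text value, and confidence threshold are illustrative assumptions.

```python
# A rough sketch, not UiPath's implementation: find a previously captured
# edit-box image on screen at runtime and type into it.
import pyautogui

def fill_edit_box(template_png: str, text: str) -> None:
    """Locate the saved edit-box screenshot on screen and type `text` into it."""
    try:
        # `confidence` requires opencv-python; newer pyautogui versions raise
        # ImageNotFoundException instead of returning None when nothing matches.
        box = pyautogui.locateOnScreen(template_png, confidence=0.9)
    except pyautogui.ImageNotFoundException:
        box = None
    if box is None:
        raise RuntimeError(f"Edit box template not found on screen: {template_png}")
    pyautogui.click(pyautogui.center(box))    # focus the field
    pyautogui.typewrite(text, interval=0.02)  # type the value

# Hypothetical usage: the template image was captured at design time.
fill_edit_box("invoice_number_field.png", "INV-00042")
```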
Yeah, I personally don't believe that the models can innovate in a material way in a reasonable amount of time. I think they have reached maturity, in a way. I am pleased with what LLMs can deliver, both frontier models and also smaller models. Look, for instance, we are using Qwen, which is a fantastic model built by Alibaba. It's totally open source.
We are using it to understand a lot of our semi-structured documents. There's a lot of product around it.
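As an illustration of how an open-source Qwen checkpoint could be pointed at a semi-structured document, here is a hedged sketch using the Hugging Face transformers pipeline; the model name, prompt, and invoice text are assumptions, not the actual UiPath pipeline.

```python
# Hedged sketch: extract fields from a semi-structured document with a Qwen
# instruct model via the transformers text-generation pipeline.
from transformers import pipeline

extractor = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-7B-Instruct",  # assumed checkpoint; any Qwen instruct model would do
)

invoice_text = """Invoice No: INV-00042
Vendor: Acme GmbH
Total due: 1,250.00 EUR  Payment terms: Net 30"""

prompt = (
    "Extract the invoice number, vendor, and total amount from the text below "
    "and answer as JSON.\n\n" + invoice_text
)

result = extractor(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```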
Because at this point, that's basically the best model for this particular job. We might change it. This is why I think the experience around the product will be so important. Because we do a lot: it's very difficult to use that model without the entire product experience, without helping people tag documents and retrain the model on the fly.
So it's an entire experience that we make extremely simple, and we can exchange the model if we find another model, maybe Llama 3.3, that is better. And it's always a cost versus speed versus accuracy equation that we have to consider.
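A small sketch of what "we can exchange the model" could look like behind a stable interface, with the cost / speed / accuracy trade-off made explicit; the model names and all numbers below are made up for illustration.

```python
# Illustrative model-selection sketch: the product talks to a small interface,
# so one checkpoint can be swapped for another based on constraints.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float  # USD, assumed
    latency_ms: float          # typical response time, assumed
    accuracy: float            # score on an internal eval set, assumed
    run: Callable[[str], str]  # the actual inference call

def pick_model(options: list[ModelOption], max_cost: float, min_accuracy: float) -> ModelOption:
    """Choose the fastest model that satisfies the cost and accuracy constraints."""
    eligible = [m for m in options
                if m.cost_per_1k_tokens <= max_cost and m.accuracy >= min_accuracy]
    if not eligible:
        raise ValueError("No model meets the constraints")
    return min(eligible, key=lambda m: m.latency_ms)

models = [
    ModelOption("qwen-2.5-7b",   0.0004, 350, 0.91, run=lambda p: "..."),
    ModelOption("llama-3.3-70b", 0.0020, 900, 0.94, run=lambda p: "..."),
]
chosen = pick_model(models, max_cost=0.001, min_accuracy=0.90)
print(chosen.name)  # -> qwen-2.5-7b under these made-up numbers
```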
I don't think models will mirror how the cloud developed. I think there will be multiple models. Even if we look at the development of the human brain, we have multiple models: we have some general cognitive models, but also a lot of specialized models that do some tasks better than the general model.