Roy Jakobs
Podcast Appearances
And then you also look at how you go from there. I would say you can start with the lower-risk areas. There are a lot of routine tasks in healthcare that you can address. Let me give an example: a nurse spends on average 20 minutes an hour doing admin tasks, meaning they need to write down certain measurements, they need to transfer data from one system to another.
Actually, AI can really help in doing that faster, but also more accurately, because of course with manual labor there's also a risk of error. So there you can really improve and lower the risk profile.
If you go closer to interventions especially, you want to make sure that the decision support you provide is at the highest tested level of security and patient safety. Next to that, the doctors will make the ultimate decision. So it's a decision-making and support tool, but you need to make sure it's tested very well.
So qualifying the different use cases, what kind of risk each carries, and therefore what robustness each needs in the process of delivering a solution for it, is very important.
And then, last but not least, you develop it together with the practice. You never do it in isolation; that's very important. You stay very close to clinical practice, so all AI that we develop is developed together with providers. We use patient data sets that are jointly worked on, so that you don't only look at it from your perspective but also from others'. That way you have the multiple-eyes principle, so that when you bring something out, to the best of your abilities
you have made sure that you deliver effective products. Now, you still need to be alert, because there's no perfect world. Things can happen, problems will arise. And then again you come back to what mechanisms you put in place to capture that faster and better. And there, actually, we are also adopting and using AI in dealing with complaint management.
Because with generative AI, of course, a lot of complaints come in as text; they are descriptive. You can use the technology to interpret them better and faster, so you complement the human element by also using the latest technology to process some of these in a better and more accurate manner.
So we use models from partners. A concrete example: we have a strong development partnership with AWS, where we are looking into imaging, as you mentioned. For example, the image acquisition system, the PACS, needs to be taken to the cloud. That's an effort we are both looking into, from our perspective and from theirs, on how we can best support it.