I agree.
Cybernose, right?
Cybernose.
They've been developing various prosthetics and sensors.
So now we've got smell, just a few more senses to work on, and then we can have a complete package.
There you go.
Right, right.
Okay, well, my contribution, or at least one of them, for today is a story I found about the FDA.
So the FDA, the US Food and Drug Administration, is one of the larger government agencies, and it has announced a broad deployment of what it calls agentic AI tools for all agency employees.
It defines agentic AI systems as those that plan and execute multi-step actions to achieve specific goals with built-in guidelines and human oversight.
These internal tools will help staff with workflows like meeting management, pre-market reviews, validation, post-market surveillance, inspections, and routine administrative tasks.
The deployment builds on ELSA, an earlier large language model assistant that more than 70% of the staff reportedly used voluntarily.
The new program also includes an internal agentic AI challenge where teams compete to design AI solutions and demo them at FDA Scientific Computing Day in early 2026.
The agency stresses that these models run in a high-security GovCloud environment and do not train on regulated industry data, which is critical given the sensitivity of drug and device submissions.
The move shows a major regulator choosing to experiment with agentic AI internally while still positioning itself as a watchdog over AI in products. It offers an example for other public agencies that want to modernize without offloading decisions to vendors.
So why does this matter?
It's a real deployment.
If it works, it'll become a playbook for others that want to use AI while insisting on privacy, security, and human control.
So, as always, it's applied.