Jyunmi Hatcher
Governance is moving alongside the technology.
According to the Brookings Institution, the United Nations Office for Outer Space Affairs, which I didn't know existed before this, has called for international frameworks that pre-authorize AI decisions within defined parameters for deep space missions where real-time human intervention is impossible.
That's a slightly different governance model than anything we currently have in place for AI on Earth, and it's being shaped now while Artemis is still in its early flights.
For Artemis 2 specifically, splashdown on Friday will close out a successful proof of concept for the first crewed Orion flight.
Artemis 3 is targeted for 2027 and is intended to test integrated operations between Orion and commercial landers.
Artemis 4, currently planned for early 2028, is intended to be the first crewed lunar landing since Apollo 17 in 1972.
Each of those missions will lean more heavily on autonomous systems than the one before.
Not because NASA's in any hurry to remove humans from the loop, but because the missions themselves are pushing past the point where keeping humans in the loop on every decision is physically possible.
So a lot of stuff has gone on in space, and our approach to space travel, space sciences, and things like that is sort of highlighted by those two paths there.
You know, one is very conservative: we're just going to use tried and true methods, just newer versions of them.
And then, of course, yeah, we'll try it out on a rover on Mars, but we're certainly not going to do it in our crewed missions right now.
So here is the question for the day that I want to pose to everybody in chat, and of course, everybody here.
So NASA's running those two very different AI strategies in parallel: extremely conservative autonomy on Artemis 2 and the experimental generative AI on Perseverance.
Which approach do we think will define the next decade of space exploration?
Cautious incrementalism or controlled risk taking with newer models?
That is the question I am posing for everyone.
So basically, NASA has these two paths that they're sort of developing for AI use in NASA missions.
One is the conservative path, where they're making incremental adjustments and building on what works, and then the other is experimental,