Ed Zitron
One other thing I will add is that the ReadyLand guy, the AI storybook guy, said something specific when talking about the importance of guardrails: there are multiple levels to safety, right? An AI kid's robot that swears is one thing, and that's actually pretty easy to avoid.
And you can just block out certain things from happening. You can build that in. But another aspect that's really important to safety is the accuracy of the things it's saying. What if it's saying something that's supposed to be some factual statement about the world that just isn't true or could actually lead to danger?
What if it tells your kid to do something which is actually kind of dangerous? Or what if it says... Not even directly telling them, but, you know, it says something that if the kid then tries to do that, it's really dangerous. And, like, this is why their storybook program, you know, does not generate new content. So everything it says is, like, already pre-approved.
The storybook already has verified, safe sentences. Versus this AI teddy bear, which is generating new content. If things go horribly wrong, it could theoretically talk about drinking bleach or something. Things can go wrong.
So it's not just about avoiding bad words or talking about sex or those types of inappropriate things. It's also making sure it's not hallucinating or saying things that could lead to dangerous situations.
Yeah, no one wants this. Even six-year-olds are like, eh, I would prefer just a regular toy I can play with.
So Poe the AI Bear is $50 on Amazon.
We could even order one and see what we can get out of it. Yeah. All right, we're going to go on another break and return to talk once again about AI products for your children.