Ed Zitron
Toner also noted that Altman was an aggressive political player, with the board, correctly by the way, worrying that, and I quote again, "...that if Sam Altman had any inkling that the board might do something that went against him, he'd pull out all the stops, do everything in his power to undermine the board, and to prevent them from even getting to the point of being able to fire him."
As a reminder, by the way, the board succeeded in firing Sam Altman in November last year, but not for long, with Altman returning as CEO a few days later, kicking Helen Toner off of the board, along with Ilya Sutskever, a technical co-founder whom Altman manipulated long enough to build ChatGPT, then ousted the moment he chose to complain. Sutskever, by the way, has since resigned.
He's also one of the biggest technical minds there. How is OpenAI going to continue? Anyway, last week, a group of insiders at various AI companies published an open letter asking their overlords, the heads of these companies, for the right to warn about advanced artificial intelligence, in a monument, a genuinely impressive monument, to the bullshit machine that Sam Altman has created.
While there are genuine safety concerns with AI, and there really are many of them to consider, these people are desperately afraid of the computer coming alive and killing them, when they should fear the non-technical asshole manipulator getting rich by making egregious promises about what AI can do. AI researchers, you have to live up to Sam Altman's promises. Sam Altman doesn't.
This is not your friend. The problem is not the boogeyman computer coming alive. That's not happening, man. What's happening is this guy is leading your industry to ruin.
And their bigger concern should be what Leopold Aschenbrenner, a former safety researcher at OpenAI, had to say on the Dwarkesh Patel podcast, where he claimed that security processes at OpenAI were, and I quote, "egregiously insufficient," and that the company's priorities favored growth over stability or security.
These people are afraid of OpenAI potentially creating a computer that can think for itself and come kill them, at a time when they should be far more concerned about the manipulative con artist running OpenAI.
Sam Altman is dangerous to artificial intelligence, not because he's building artificial general intelligence, which is, by the way, a kind of AI that meets or surpasses human cognitive capabilities, kind of like Data from Star Trek. They're afraid of that happening when they should be afraid of Altman's focus. What does Sam Altman care about?