Ed Zitron
First product that really made money, arguably the biggest product in tech. You wanna know how they found out about it? Well, they found out when they were browsing Twitter. They found out then, not from the CEO of OpenAI, the company which they were the board of. Very weird.
Toner also noted that Altman was an aggressive political player, with the board, correctly by the way, worrying that, and I quote again, "...that if Sam Altman had any inkling that the board might do something that went against him, he'd pull out all the stops, do everything in his power to undermine the board, and to prevent them from even getting to the point of being able to fire him."
As a reminder, by the way, the board succeeded in firing Sam Altman in November last year, but not for long, with Altman returning as CEO a few days later, kicking Helen Toner off of the board along with Ilya Sutskever, a technical co-founder that Altman manipulated long enough to build ChatGPT, then ousted the moment he chose to complain. Sutskever, by the way, has resigned now.
He's also one of the biggest technical minds there. How is OpenAI going to continue? Anyway, last week, a group of insiders at various AI companies published an open letter asking their overlords, the heads of these companies, for the right to warn about advanced artificial intelligence, in a monument, genuinely impressive monument, to the bullshit machine that Sam Altman has created.
While there are genuine safety concerns with AI, there really are, there are many of them to consider, these people are desperately afraid of the computer coming alive and killing them, when they should fear the non-technical asshole manipulator getting rich making egregious promises about what AI can do. AI researchers, you have to live up to Sam Altman's promises. Sam Altman doesn't.
This is not your friend. The problem is not the boogeyman computer coming alive. That's not happening, man. What's happening is this guy is leading your industry to ruin.
And the bigger concern they should have is about what Leopold Aschenbrenner, a former safety researcher at OpenAI, had to say on the Dwarkesh Patel podcast, where he claimed that security processes at OpenAI were, and I quote, "egregiously insufficient," and that the priorities at the company were focused on growth over stability or security.