Rob Wiblin
actually looked at the data, changed his mind and said, oh yeah, this is happening now and it's dangerous.
And it's very hard to get those kinds of common knowledge dynamics if everything is just sent to governments.
That said, of course, I think sending things to governments is better than not sending them anywhere.
So I also think that's good.
I think I laid out a whole spectrum of ideal information-sharing practices. And I don't think going all-or-nothing on that whole package is a top-priority fight to pick.
But the algorithm of thinking really hard about what pieces of information we would want in order to know for ourselves whether the intelligence explosion was happening, and then getting the highest-value items on that list, the biggest bang-for-buck items, feels very high-value to me.
And I think that's the strategy that people working on AI safety-related legislation have landed on. The RAISE Act in New York and SB 53 in California are both quite transparency-oriented, and both are oriented around, for example, whistleblower protections, which are an important policy plank underlying transparency.
I think that's very plausible.
I still think that information that leaks in the form of rumors at San Francisco tech bro parties doesn't have the ability to impact policy and decision-making all the way in D.C. or London or Brussels in the same way as information that is clearly unrefuted, very salient, and official.
So I think that the AI safety scene in the Bay Area has benefited from having close social ties to people who work at AI companies, getting a sense of what might be coming around the corner.
But that's not something you can use to really pull an alarm or advocate for very costly actions. So I think it isn't really enough.