The smallest technical decisions become humanity's biggest pivots. The same-origin policy, a well-intentioned browser security rule from the 1990s, accidentally created Facebook, Google, and every data monopoly since. It locks your data in silos, so you stay where your stuff already is. This dynamic created aggregators.

Alex Komoroske, who led Chrome's web platform team at Google and ran corporate strategy at Stripe, saw this pattern play out firsthand. And he's obsessed with the tiny decisions that will shape AI's next 30 years:

Will AI keep memory centrally, or under user control?
Will AI be free and ad-supported, or user-paid?
Should AI be engagement-maximizing or intention-aligned?
How should we handle prompt injection in MCP and agentic systems?
Should AI be built with AOL-style aggregation or web-style openness?

This is a must-watch if you care about the future of AI and humanity.

If you found this episode interesting, please like, subscribe, comment, and share!

Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Sponsors:
Google Gemini: Experience high-quality AI video generation with Google's most capable video model, Veo 3. Try it in the Gemini app at gemini.google with a Google AI Pro plan, or get the highest access with the Ultra plan.
Attio: Go to https://attio.com/every and get 15% off your first year on your AI-powered CRM.

Timestamps:
Introduction: 00:01:45
Why chatbots are a feature, not a paradigm: 00:04:25
Toward AI that's aligned with our intentions: 00:06:50
The four pillars of "intentional technology": 00:11:54
The types of structures in which intentional technology can thrive: 00:14:16
Why ChatGPT is the AOL of the AI era: 00:18:26
Why AI needs to break out of the silos of the early internet: 00:25:55
Alex's personal journey into systems thinking: 00:41:53
How LLMs can encode what we know but can't explain: 00:48:15
Can LLMs solve the coordination problem inside organizations?: 00:54:35
The under-discussed risk of prompt injection: 01:01:39

Links to resources mentioned in the episode:
Alex Komoroske: @komorama
Common Tools: https://common.tools/
The public Google document with Alex's raw ideas and thoughts: Bits and Bobs
A couple of Alex's favorite books: Why Information Grows by Cesar Hidalgo and The Origin of Wealth by Eric Beinhocker