00:00:33 The gap between experts and everyone else lies in how the "memory budget" is allocated
00:04:16 AI as "Newton": how do we discover the formulas by which everything grows?
00:08:26 Teaching AI not just to follow instructions, but to ask questions
00:12:09 The art of AI thinking: how to be both fast and good
00:18:06 AI reads you: 20 chess games are enough to "see through" you

The five papers covered in this episode:
[LG] Capacity-Constrained Continual Learning [Google DeepMind] https://arxiv.org/abs/2507.21479
[LG] EvoSLD: Automated Neural Scaling Law Discovery With Large Language Models [Peking University & Tsinghua University] https://arxiv.org/abs/2507.21184
[LG] Teaching Language Models To Gather Information Proactively [Microsoft] https://arxiv.org/abs/2507.21389
[LG] TriangleMix: A Lossless and Efficient Attention Pattern for Long Context Prefilling [Microsoft Research] https://arxiv.org/abs/2507.21526
[LG] Learning to Imitate with Less: Efficient Individual Behavior Modeling in Chess [University of Toronto] https://arxiv.org/abs/2507.21488