
AI可可AI生活

[Accessible to Everyone] From Asymmetric Data and Self-Reflection to Code World Models

08 Oct 2025

Description

Today we talk about how to raise an AI more intelligently, rather than just piling on ever more data and compute. How can an AI's "early education" achieve more with less? How does it learn, like us, to "draft first, then revise" to work more efficiently? From turning AI into a programmer, to unraveling why it makes mistakes after "thinking longer", to bolting a "turbocharger" onto the training process, several recent papers will reshape how you think about the way AI learns.

00:00:32 A "tiger parenting" guide for the AI world
00:05:12 Faster AI writing: draft first, then fix it in one pass
00:09:32 Teach AI to play board games? Better to make it a "rules translator"
00:14:52 Why does AI get things wrong after "thinking longer"?
00:20:56 The fast lane for AI training: we solve the last layer in closed form

Papers covered in this episode:
[LG] Front-Loading Reasoning: The Synergy between Pretraining and Post-Training Data (NVIDIA & CMU) https://arxiv.org/abs/2510.03264
[LG] Self-Speculative Masked Diffusions (Google DeepMind) https://arxiv.org/abs/2510.03929
[LG] Code World Models for General Game Playing (Google DeepMind) https://arxiv.org/abs/2510.04542
[LG] Understanding the Role of Training Data in Test-Time Scaling (University of Southern California & University of California Los Angeles) https://arxiv.org/abs/2510.03605
[LG] Closed-Form Last Layer Optimization (Google DeepMind, University of Tübingen & Secondmind) https://arxiv.org/abs/2510.04606
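The last teaser, "we solve the last layer in closed form", alludes to a general idea: if the backbone's features are held fixed, a linear output layer under squared loss has an exact ridge-regression solution, so no gradient descent is needed for that layer. Below is a minimal sketch of that general idea with synthetic data; all names, shapes, and the squared-loss setting are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 200, 16, 3  # samples, feature dim, output dim

Phi = rng.normal(size=(N, D))                       # features from a (hypothetical) frozen backbone
W_true = rng.normal(size=(D, K))                    # ground-truth linear map for synthetic targets
Y = Phi @ W_true + 0.01 * rng.normal(size=(N, K))   # noisy regression targets

# Closed-form ridge solution for the last linear layer:
#   W* = (Phi^T Phi + lam * I)^{-1} Phi^T Y
lam = 1e-3
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ Y)

mse = np.mean((Phi @ W - Y) ** 2)
print(W.shape, mse)
```

One linear solve replaces many gradient steps for that layer, which is the "fast lane" flavor the episode title hints at.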


