
AI可可AI生活

Technology

Episodes

Showing 401-500 of 934
Page 5 of 10

[AI for Everyone] Beyond "How to Use It": Five Core Principles of How AI Works

12 Jul 2025

Contributed by Lukas

00:01:30 Can AI's safety belt really be fastened securely? 00:06:39 Doing the accounts on AI: where do its capabilities end? 00:11:34 As soon as you...

[Life Wisdom] Your Most Valuable Asset Is Silently Draining Away, and You Haven't Even Noticed

11 Jul 2025

Contributed by Lukas

We work hard to manage our money, time, and connections, yet turn a blind eye to our most important core asset: our "emotional assets."

[AI for Everyone] AI's Chief Architect: Deconstructing and Reassembling "Intelligence"

11 Jul 2025

Contributed by Lukas

00:01:50 The wisdom of "division of labor" in AI: how do you make a genius even more brilliant? 00:06:40 AI's predictions are so accurate, but does it truly "understand"?...

[Life Wisdom] The Real Price You Pay Isn't Time, It's the "Restart Cost"

10 Jul 2025

Contributed by Lukas

23 minutes and 15 seconds: the average time it takes an ordinary person, after a tiny interruption breaks their attention, to return to their previous state of deep focus...

[AI for Everyone] AI's "Craftsman" Spirit: From Imitation and Practice to Epiphany

10 Jul 2025

Contributed by Lukas

00:01:33 How does pairing up "weak students" forge top performers? 00:06:09 AI's "deliberate practice": what kind of exploration is most efficient? 00:10:19 Teaching AI to...

[Life Wisdom] Why the Scariest Thing in Life Isn't a Bad Review, It's Being Left "Unread"

09 Jul 2025

Contributed by Lukas

What we truly need to guard against isn't making mistakes, but the fear of "what if I'm wrong."

[AI for Everyone] The Systems Beauty of AI: When "Design" Matters More Than "Genius"

09 Jul 2025

Contributed by Lukas

00:01:37 Is your AI "good at remembering," or does it "really learn"? 00:06:19 Managing AI's "attention": the wisdom of "divide and conquer" 00:10:22 Building...

[Reflections] Is Your Happiness "Compounding" or Being "Overdrawn"?

08 Jul 2025

Contributed by Lukas

"Is what I'm doing right now lighting a firework or planting a tree? Am I overdrawing my future self, or investing in it?"

Pulling Back AI's Magic Curtain: The Ingenuity and Craft Backstage

08 Jul 2025

Contributed by Lukas

00:01:26 Why might even AI's "innermost thoughts" be an act? 00:06:38 How can AI's "memory" catch up to a whole novel? 00:12...

[Reflections] The Law of Getting Lost: Why Taking the Wrong Road Is the Shortcut Home

07 Jul 2025

Contributed by Lukas

Getting lost isn't a state; it's a necessary pause. It forces us to switch from "autopilot" to "manual mode" and use our own intentions to...

From "Strategic Fingerprints" to "Decoupled Scoring": AI's Non-Technical Evolution

07 Jul 2025

Contributed by Lukas

00:01:29 AI "personas": Google the "schemer," OpenAI the "naive sweetheart"? 00:06:09 A "diet" for AI: why might more data make a model...

[Reflections] The Compound Interest of Time: The 10 Hours You Owe Yourself

06 Jul 2025

Contributed by Lukas

Time itself is the highest form of wealth, and you need to be its first payee.

When AI Tears Up the "Growth Formula": Embracing a More Realistic World of Intelligence

06 Jul 2025

Contributed by Lukas

00:01:33 AI parenting: is a clever brain "fed" into being, or "trained" into being? 00:07:02 Seeing the world from a different direction: when AI learns to "think backwards...

[Reflections] Why Do Nine Failures Beat One Mediocrity?

05 Jul 2025

Contributed by Lukas

You should tolerate, even encourage, high-failure-rate experiments that could deliver huge returns, because in the business world, one home run scores far more than...

AI's Efficiency Revolution: When "Cleverer" Beats "Bigger"

05 Jul 2025

Contributed by Lukas

00:01:45 AI alchemy: discovering a "universal formula" for model training? 00:07:01 What happens when the AI arms race starts running out of ammunition? 00:12:16 Making AI...

[Reflections] Are You "Losing Your Temper" or "Solving the Problem"?

04 Jul 2025

Contributed by Lukas

Each of our lives is filled with all kinds of "stimuli": a criticism from the boss, a complaint from a partner, or even just social media...

AI's "Study" and "Mirror": Exploring the Foundations of Intelligence, and Reflecting on It

04 Jul 2025

Contributed by Lukas

00:01:19 When masters spar, is the contest won by "brains" or by the "study"? 00:05:35 AI "self-reflection": how do you get a machine to think like a master?...

[Reflections] Your "Emotional Immunity" Has Expired; Please Renew from Within

03 Jul 2025

Contributed by Lukas

What truly traps us is the "mental prison" we build for ourselves with our own hands. As long as you remain locked in that prison, then no matter how far you flee,...

Not Just Powerful, but Smart: How Does AI Learn to Think, Review, and Stay Curious?

03 Jul 2025

Contributed by Lukas

00:01:19 AI's "lopsided student" problem: does acing math and science really let you go anywhere without fear? 00:05:08 Does AI "review its games" too? On getting machines, like masters, to...

[Reflections] If You Can't Win the Fight, Change How You Fight

02 Jul 2025

Contributed by Lukas

In life, there are no true dead ends.

AI's "Epiphany" Moments: From Playing Games to Saving Big Money, New Paths of Intelligence Evolution Revealed

02 Jul 2025

Contributed by Lukas

In this installment of "AI Frontiers for Everyone," we highlight five of the latest AI papers: 00:00:27 Masters at play: how does AI reach "enlightenment" in games? 00:...

[Reflections] Beware! The Invisible Chef "Feeding" Your Brain

01 Jul 2025

Contributed by Lukas

A change we barely notice is reshaping how we relate to knowledge and to the world.

You Think AI Understands Language? It's Actually Throwing a "Relationships" Party

01 Jul 2025

Contributed by Lukas

[LG] Transformers are Graph Neural Networks [University of Cambridge] arxiv.org

AI's "Epiphany": How Did Machines Learn to Extrapolate from One Example?

01 Jul 2025

Contributed by Lukas

[LG] Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoni...

Want to Read the Future? First Learn to Tell Machines a Good Story

01 Jul 2025

Contributed by Lukas

[LG] Performance Prediction for Large Systems via Text-to-Text Regression [Google Research] arxiv.org

When AI Plays Doctor, How Does It Keep from "Talking the Conversation to Death"?

01 Jul 2025

Contributed by Lukas

[CL] Sequential Diagnosis with Language Models [Microsoft AI] arxiv.org

Does Your Brain Also House a "CEO" and a "Project Manager"?

01 Jul 2025

Contributed by Lukas

[LG] Hierarchical Reasoning Model [Sapient Intelligence, Singapore] arxiv.org

You Think AI Is a Star Student; It's Really Just a "Problem-Set Grinder"

01 Jul 2025

Contributed by Lukas

[CL] OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization [dmodel.ai & UC Berke...

The Secret of AI Writing: Masters Aren't "Taught," They're "Trained"

01 Jul 2025

Contributed by Lukas

[CL] LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning [Singapore University of Technology and Design & Tsinghua Uni...

The "Anchors" of Thought: How Does AI Grasp the Key Points?

01 Jul 2025

Contributed by Lukas

[LG] Thought Anchors: Which LLM Reasoning Steps Matter? [Duke University & Aiphabet] arxiv.org

AI Frontiers: From Brain Inspiration to the Hardware Lottery

01 Jul 2025

Contributed by Lukas

[LG] Hierarchical Reasoning Model [Sapient Intelligence, Singapore] https://arxiv.org/abs/2506.21734 --- [CL] Sequential Diagnosis with Language Models [Mi...

[Reflections] How to Kill 99% of Your Worries

30 Jun 2025

Contributed by Lukas

The most advanced way for adults to live: give everything in the domains you can control, and keep inner peace in the ones you can't.

Searching for the "Standard Answer": The Ultimate Technique for Navigating Chaos

30 Jun 2025

Contributed by Lukas

[LG] Gaussian Invariant Markov Chain Monte Carlo [Google DeepMind & UCL] arxiv.org

An AI Slimming Guide: How Do You Make a "Heavyweight" Model Both Fast and Good?

30 Jun 2025

Contributed by Lukas

[LG] Distilling Normalizing Flows [University of Oregon & HSE University & Picsart AI Research] arxiv.org

Why Does "Striving for Perfection" Backfire?

30 Jun 2025

Contributed by Lukas

[LG] Overtuning in Hyperparameter Optimization [LMU Munich] arxiv.org

AI's "Top Students" and "Prodigies": The Difference Isn't IQ, It's the Training Method

30 Jun 2025

Contributed by Lukas

[CL] OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling [Shanghai Jiao Tong University] arxiv.org

AI Frontiers: From Image Generation to Mathematical Reasoning

30 Jun 2025

Contributed by Lukas

[LG] Guidance in the Frequency Domain Enables High-Fidelity Sampling at Low CFG Scales [ETH Zürich & DisneyResearch|Studios] https://arxiv.org/abs/...

[Reflections] Your Next Opportunity Hides in a "Useless" Corner

29 Jun 2025

Contributed by Lukas

The core competence we truly build our lives on often hides in the places that look the most "useless" and the most "impractical."

A New Way to Feed AI: Don't Add Ingredients, Just Adjust the Order

29 Jun 2025

Contributed by Lukas

[CL] Data Efficacy for Language Model Training [Microsoft Research] arxiv.org

AI's "Top-Student Playbook": Turns Out the Craft Lies in the "Rhythm"

29 Jun 2025

Contributed by Lukas

[CL] Bridging Offline and Online Reinforcement Learning for LLMs [FAIR at Meta] arxiv.org

Why Do AI Ideas That "Sound Beautiful" Crash on Execution?

29 Jun 2025

Contributed by Lukas

[LG] The Ideation-Execution Gap: Execution Outcomes of LLM-Generated versus Human Research Ideas [Stanford University] arxiv.org

When AI "Understands," Does It Really Understand?

29 Jun 2025

Contributed by Lukas

[CL] Potemkin Understanding in Large Language Models [MIT & University of Chicago & Harvard University] arxiv.org

Give AI a "Prompt," or Teach It "a Lesson"?

29 Jun 2025

Contributed by Lukas

[CL] Can Gradient Descent Simulate Prompting? [MIT CSAIL] arxiv.org

AI Frontiers: From Mathematical Reasoning to Model Optimization

29 Jun 2025

Contributed by Lukas

[CL] OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling [Shanghai Jiao Tong University] https://arxiv.org/abs/2506.20512 --- [LG] Overt...

[Reflections] The Most Advanced "Way to Live" in This Era

28 Jun 2025

Contributed by Lukas

This era's most basic demand of ordinary people is simply to be a "normal person."

The Secret of AI's Growth: Calibrating "Reward" and "Punishment"

28 Jun 2025

Contributed by Lukas

[LG] Asymmetric REINFORCE for off-Policy Reinforcement Learning: Balancing positive and negative rewards [FAIR at Meta] arxiv.org

The Clever "Dispatcher": How Does AI Decide Who Does the Work?

28 Jun 2025

Contributed by Lukas

[LG] Mastering Multiple-Expert Routing: Realizable H-Consistency and Strong Guarantees for Learning to Defer [Courant Institute of Mathematical ...

AI's "EQ" Code: How Does It Learn to Tell the Truth Without Offending You?

28 Jun 2025

Contributed by Lukas

[CL] Inside you are many wolves: Using cognitive models to interpret value trade-offs in LLMs [Harvard University] arxiv.org

Letting AI Build Its Own AI: Is That Actually Reliable?

28 Jun 2025

Contributed by Lukas

[LG] Language Modeling by Language Models [Allen Institute for AI] arxiv.org

Is Your Way of "Thinking" the Only One?

28 Jun 2025

Contributed by Lukas

[CL] DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation [Apple] arxiv.org

AI Frontiers: From Gradient Descent Simulating Prompts to a Revolution in Data Efficacy

28 Jun 2025

Contributed by Lukas

[CL] Can Gradient Descent Simulate Prompting? [MIT CSAIL] https://arxiv.org/abs/2506.20989 --- [CL] Potemkin Understanding in Large Language Models [MIT &...

[Reflections] Two Rulers Measure Out Two Kinds of Life

27 Jun 2025

Contributed by Lukas

What determines the quality of your life isn't how the world sees you, but how you see your own world.

Embroidering on an Elephant: Does Changing the Stitch Make It Simple?

27 Jun 2025

Contributed by Lukas

[LG] Orthogonal Finetuning Made Scalable [Max Planck Institute for Intelligent Systems & University of Cambridge] arxiv.org

The "Layoff" Secret of AI Companies: Who Is the Real Backbone?

27 Jun 2025

Contributed by Lukas

[LG] Who Does What in Deep Learning? Multidimensional Game-Theoretic Attribution of Function of Neural Units [University Medical Center Eppendorf & ...

AI Brain Swaps: How Can a Model Learn a New Skill in One Click?

27 Jun 2025

Contributed by Lukas

[LG] Command-V: Pasting LLM Behaviors via Activation Profiles [CMU] https://arxiv.org/abs/2506.19140

AI's Secret: Still Recognizable After Being Smashed to Pieces?

27 Jun 2025

Contributed by Lukas

[CL] Broken Tokens? Your Language Model can Secretly Handle Non-Canonical Tokenizations [University of Washington] https://arxiv.org/abs/2506.19004

Masters at Play: Why Does a Relay Race Beat a Gathering of Stars?

27 Jun 2025

Contributed by Lukas

[LG] Chain-of-Experts: Unlocking the Communication Power of Mixture-of-Experts Models [Northwestern University] https://arxiv.org/abs/2506.18945

AI Frontiers: From Code Generation to Automated Research

27 Jun 2025

Contributed by Lukas

[CL] DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation [Apple] https://arxiv.org/abs/2506.20639 --- [LG] Language Model...

[Reflections] Your World Is Exactly as Big as Your "Prison"

26 Jun 2025

Contributed by Lukas

The biggest "prison" in this world isn't built of steel and concrete; it's a product of our own thinking.

AI's "Model Student": Not Only Answers the Questions, but Also Cites Its References

26 Jun 2025

Contributed by Lukas

[CL] Cite Pretrain: Retrieval-Free Knowledge Attribution for Large Language Models [Duke University & Meta] https://arxiv.org/abs/2506.17585

The "Dumb Diligence" of Smart People, and the Cleverness of the "Dumb"

26 Jun 2025

Contributed by Lukas

[LG] In-Context Learning Strategies Emerge Rationally [Stanford University & Harvard University] https://arxiv.org/abs/2506.17859

The Secret to Making AI Smarter: Hire a Good Butler, Don't Just Add Headcount

26 Jun 2025

Contributed by Lukas

[LG] Routing Mamba: Scaling State Space Models with Mixture-of-Experts Projection [Microsoft] https://arxiv.org/abs/2506.18145

The Secret to Making AI Smarter: Refresh the "Brain Wiring," Don't Expand the "Hard Drive"

26 Jun 2025

Contributed by Lukas

[LG] The 4th Dimension for Scaling Model Size [University of Illinois at Urbana-Champaign & University of Toronto] https://arxiv.org/abs/2506.18233

Large AI Models: Can Brute Force Really Work Miracles?

26 Jun 2025

Contributed by Lukas

[LG] These are Not All the Features You are Looking For: A Fundamental Bottleneck In Supervised Pretraining [Facebook AI Research (FAIR) at Meta & ...

[Reflections] Your "Bad Luck" Is the World's "Private Lesson" for You

25 Jun 2025

Contributed by Lukas

The moment you give a "new answer," you'll find that the problem that tormented you over and over simply vanishes into thin air.

How Do You Get AI to "Tell Right from Wrong" Rather Than "Game the System"?

25 Jun 2025

Contributed by Lukas

[LG] Robust Reward Modeling via Causal Rubrics [Google DeepMind] https://arxiv.org/abs/2506.16507

How Do We Know Whether AI Has Truly "Got It"?

25 Jun 2025

Contributed by Lukas

[LG] Latent Concept Disentanglement in Transformer-based Language Models [Purdue University & University of Southern California] https://arxiv.org/a...

Is There Really Strength in Numbers?

25 Jun 2025

Contributed by Lukas

[CL] When Does Divide and Conquer Work for Long Context LLM? A Noise Decomposition Framework [University of Chicago & Together AI] https://arxiv.org...

Giving AI a "CT Scan": An Operating Manual from Scientists

25 Jun 2025

Contributed by Lukas

[LG] On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond [Peking University & MIT] https://arxiv.org/abs/2506.15963

Raising an "AI Prodigy" Turns Out to Be Delicate Work

25 Jun 2025

Contributed by Lukas

[CL] EvoLM: In Search of Lost Language Model Training Dynamics [Harvard & Stanford & EPFL] https://arxiv.org/abs/2506.16029

[Reflections] Letting Go of "Self-Justification": The Highest Freedom of Adulthood

24 Jun 2025

Contributed by Lukas

True freedom begins when you stop explaining. Only when you no longer need to prove anything to the world do you truly begin to own your life.

How Can AI Paint "Both Fast and Well"?

24 Jun 2025

Contributed by Lukas

[CV] Align Your Flow: Scaling Continuous-Time Flow Map Distillation [NVIDIA] https://arxiv.org/abs/2506.14603

AI "Mind Reading": How Can We Trust a "Clever Brain"?

24 Jun 2025

Contributed by Lukas

[LG] Taming Polysemanticity in LLMs: Provable Feature Recovery via Sparse Autoencoders [Yale University & Shanghai Jiao Tong University] ...

Your AI Is Secretly "Training": A Flat Highway to Infinity

24 Jun 2025

Contributed by Lukas

[LG] Flat Channels to Infinity in Neural Loss Landscapes [EPFL & Flatiron Institute] https://arxiv.org/abs/2506.14951

From Rote Memorization to Mastery: AI's Secret to "Getting It"

24 Jun 2025

Contributed by Lukas

[LG] GrokAlign: Geometric Characterisation and Acceleration of Grokking [Rice University & Brown University] https://arxiv.org/abs/250...

[Reflections] Masters at Play: How Do You Cut Down the Inner "I Want It All"?

23 Jun 2025

Contributed by Lukas

True freedom has never meant having unlimited options.

AI Parenting: Are Top Students Made by "Grinding Problems" or by "Trial and Error"?

23 Jun 2025

Contributed by Lukas

[CL] AceReason-Nemotron 1.1: Advancing Math and Code Reasoning through SFT and RL Synergy [NVIDIA] https://arxiv.org/abs/2506.13284

A New Map Inside AI's Black Box

23 Jun 2025

Contributed by Lukas

[LG] Random Matrix Theory for Deep Learning: Beyond Eigenvalues of Linear Models [Huazhong University of Science and Technology & UC Berkeley...

AI Programmers "Deified"? Hold On, First Check the "Olympiad Champion's" Physical Exam Report

23 Jun 2025

Contributed by Lukas

[LG] LiveCodeBench Pro: How Do Olympiad Medalists Judge LLMs in Competitive Programming? [New York University & Princeton University] ...

When Top Masters Compete, Why Is the "Dumb Method" Sometimes More Effective?

23 Jun 2025

Contributed by Lukas

[LG] Is your batch size the problem? Revisiting the Adam-SGD gap in language modeling [Max Planck Institute for Intelligent Systems] https...

[Reflections] How Do You Kill the Self That Says "I'll Deal with It Tomorrow"?

22 Jun 2025

Contributed by Lukas

You'll find that what determines who you become in five or ten years isn't at all the grand plans written on paper or saved on your phone, but what you...

AI a Black Box? Just Try Talking to It

22 Jun 2025

Contributed by Lukas

[LG] Because we have LLMs, we Can and Should Pursue Agentic Interpretability [Google DeepMind] https://arxiv.org/abs/2506.12152

Building Tools with Your Mouth: AI Customization Enters the "Plain-Speech Era"

22 Jun 2025

Contributed by Lukas

[LG] Text-to-LoRA: Instant Transformer Adaption [Sakana AI] https://arxiv.org/abs/2506.06105

The Secret of AI Painting: Creativity Comes from "Failing to Learn"?

22 Jun 2025

Contributed by Lukas

[LG] On the Closed-Form of Flow Matching: Generalization Does Not Arise from Target Stochasticity [CNRS] https://arxiv.org/abs/2506.03719

AI's Memory Revolution: How to Remember Only the Key Points and Forget the Rest

22 Jun 2025

Contributed by Lukas

[CL] Don't Pay Attention [Avey AI] https://arxiv.org/abs/2506.11305

[Reflections] Stop "Bookmarking"! Turn Your Notes into a Printing Press for Ideas!

21 Jun 2025

Contributed by Lukas

Real knowledge management isn't collecting; it's creating.

What Happens When You Let AI Set Its Own "Rules"?

21 Jun 2025

Contributed by Lukas

[LG] AutoRule: Reasoning Chain-of-thought Extracted Rule-based Rewards Improve Preference Learning [CMU] https://arxiv.org/abs/2506.15651

Your "Homespun Methods" Are Outdated; AI Is Building a "Strategy Toolbox"

21 Jun 2025

Contributed by Lukas

[LG] HeurAgenix: Leveraging LLMs for Solving Complex Combinatorial Optimization Challenges [Microsoft Research Asia] https://arxiv.org/abs/2506.15196

The "Jack-of-All-Trades" Neurons in AI's Brain: Bug or Treasure?

21 Jun 2025

Contributed by Lukas

[LG] Dense SAE Latents Are Features, Not Bugs [MIT & ETH Zürich] https://arxiv.org/abs/2506.156

AI's "Tuning Voodoo": A Forgotten Knob

21 Jun 2025

Contributed by Lukas

[LG] Optimal Embedding Learning Rate in LLMs: The Effect of Vocabulary Size [UC Berkeley & Microsoft Research] https://arxiv.org/abs/2506.15025

AI "Mind Reading": Will a Model's Brain Give Away Its Secrets?

21 Jun 2025

Contributed by Lukas

[CL] Approximating Language Model Training Data from Weights [Cornell University] https://arxiv.org/abs/2506.155

[Reflections] How to Become a Skilled "Deliberate Chooser"

20 Jun 2025

Contributed by Lukas

Not swept along by outside noise, not driven by inner inertia; knowing with total clarity what you want and what you don't, then going all in to become a clear...

AI's "Compulsions": Why Does It "Go Crazy" Before You've Finished Your Sentence?

19 Jun 2025

Contributed by Lukas

[CL] Sampling from Your Language Model One Byte at a Time [University of Washington] https://arxiv.org/abs/2506.14123

Why Does "Grasping the Key Points" Make AI Learn Faster?

19 Jun 2025

Contributed by Lukas

[LG] Transformers Learn Faster with Semantic Focus [IBM Research] https://arxiv.org/abs/2506.14095

AI's "Literacy" Revolution: One Step Closer to "Reading" the World?

19 Jun 2025

Contributed by Lukas

[CL] From Bytes to Ideas: Language Modeling with Autoregressive U-Nets [FAIR at Meta] https://arxiv.org/abs/2506.14761

AI's "Choice Paralysis": A Secret Passage to Higher Intelligence

19 Jun 2025

Contributed by Lukas

[CL] Reasoning with Exploration: An Entropy Perspective [RUC & MSRA & SJTU] https://arxiv.org/abs/2506.14758

The "Stop at 70% Full" Wisdom of AI Training

19 Jun 2025

Contributed by Lukas

[LG] Less is More: Undertraining Experts Improves Model Upcycling [Université de Montréal & Concordia University] https://arxiv.org/abs/2506.14126

[Reflections] Your Efficiency Hides in Your Deadlines

19 Jun 2025

Contributed by Lukas

Work expands like a gas to fill the container of time it is given, so deliberately setting tight deadlines sparks peak efficiency and lets you become the master...

Why Do You Always Feel That No One Truly Understands You?

18 Jun 2025

Contributed by Lukas

[LG] Wanting to Be Understood Explains the Meta-Problem of Consciousness [Google DeepMind] https://arxiv.org/abs/2506.12086

AI's "Clone Technique": Why Won't Your All-Purpose Assistant "Forget Things" So Easily Anymore?

18 Jun 2025

Contributed by Lukas

[CL] Multipole Attention for Efficient Long Context Reasoning [UC Berkeley] https://arxiv.org/abs/2506.13059
