
LessWrong (Curated & Popular)

Technology · Society & Culture

Episodes

Showing 301-400 of 805
«« ← Prev Page 4 of 9 Next → »»

“Interpretability Will Not Reliably Find Deceptive AI” by Neel Nanda

05 May 2025

Contributed by Lukas

(Disclaimer: Post written in a personal capacity. These are personal hot takes and do not in any way represent my employer's views.) TL;DR: I do...

“Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall” by Vladimir_Nesov

03 May 2025

Contributed by Lukas

It'll take until ~2050 to repeat the level of scaling that pretraining compute is experiencing this decade, as increasing funding can't sus...

“Early Chinese Language Media Coverage of the AI 2027 Report: A Qualitative Analysis” by jeanne_, eeeee

01 May 2025

Contributed by Lukas

In this blog post, we analyse how the recent AI 2027 forecast by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean has be...

[Linkpost] “Jaan Tallinn’s 2024 Philanthropy Overview” by jaan

25 Apr 2025

Contributed by Lukas

This is a link post. to follow up my philanthropic pledge from 2020, i've updated my philanthropy page with the 2024 results. in 2024 my donations...

“Impact, agency, and taste” by benkuhn

24 Apr 2025

Contributed by Lukas

I’ve been thinking recently about what sets apart the people who’ve done the best work at Anthropic. You might think that the main thing that mak...

[Linkpost] “To Understand History, Keep Former Population Distributions In Mind” by Arjun Panickssery

24 Apr 2025

Contributed by Lukas

This is a link post. Guillaume Blanc has a piece in Works in Progress (I assume based on his paper) about how France's fertility declined earlier...

“AI-enabled coups: a small group could use AI to seize power” by Tom Davidson, Lukas Finnveden, rosehadshar

23 Apr 2025

Contributed by Lukas

We’ve written a new report on the threat of AI-enabled coups. I think this is a very serious risk – comparable in importance to AI takeover but ...

“Accountability Sinks” by Martin Sustrik

23 Apr 2025

Contributed by Lukas

Back in the 1990s, ground squirrels were briefly fashionable pets, but their popularity came to an abrupt end after an incident at Schiphol Airport o...

“Training AGI in Secret would be Unsafe and Unethical” by Daniel Kokotajlo

21 Apr 2025

Contributed by Lukas

Subtitle: Bad for loss of control risks, bad for concentration of power risks I’ve had this sitting in my drafts for the last year. I wish I’d be...

“Why Should I Assume CCP AGI is Worse Than USG AGI?” by Tomás B.

20 Apr 2025

Contributed by Lukas

Though, given my doomerism, I think the natsec framing of the AGI race is likely wrongheaded, let me accept the Dario/Leopold/Altman frame that AGI w...

“Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI” by Kaj_Sotala

17 Apr 2025

Contributed by Lukas

Introduction Writing this post puts me in a weird epistemic position. I simultaneously believe that: The reasoning failures that I'll discuss ar...

“Frontier AI Models Still Fail at Basic Physical Tasks: A Manufacturing Case Study” by Adam Karvonen

16 Apr 2025

Contributed by Lukas

Dario Amodei, CEO of Anthropic, recently worried about a world where only 30% of jobs become automated, leading to class tensions between the automat...

“Negative Results for SAEs On Downstream Tasks and Deprioritising SAE Research (GDM Mech Interp Team Progress Update #2)” by Neel Nanda, lewis smith, Senthooran Rajamanoharan, Arthur Conmy, Callum McDougall, Tom Lieberum, János Kramár, Rohin Shah

12 Apr 2025

Contributed by Lukas

Audio note: this article contains 31 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text i...

[Linkpost] “Playing in the Creek” by Hastings

11 Apr 2025

Contributed by Lukas

This is a link post. When I was a really small kid, one of my favorite activities was to try and dam up the creek in my backyard. I would carefully mo...

“Thoughts on AI 2027” by Max Harms

10 Apr 2025

Contributed by Lukas

This is part of the MIRI Single Author Series. Pieces in this series represent the beliefs and opinions of their named authors, and do not claim to s...

“Short Timelines don’t Devalue Long Horizon Research” by Vladimir_Nesov

09 Apr 2025

Contributed by Lukas

Short AI takeoff timelines seem to leave no time for some lines of alignment research to become impactful. But any research rebalances the mix of cur...

“Alignment Faking Revisited: Improved Classifiers and Open Source Extensions” by John Hughes, abhayesian, Akbir Khan, Fabien Roger

09 Apr 2025

Contributed by Lukas

In this post, we present a replication and extension of an alignment faking model organism: Replication: We replicate the alignment faking (AF) pa...

“METR: Measuring AI Ability to Complete Long Tasks” by Zach Stein-Perlman

07 Apr 2025

Contributed by Lukas

Summary: We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently e...

“Why Have Sentence Lengths Decreased?” by Arjun Panickssery

04 Apr 2025

Contributed by Lukas

“In the loveliest town of all, where the houses were white and high and the elm trees were green and higher than the houses, where the front yards...

“AI 2027: What Superintelligence Looks Like” by Daniel Kokotajlo, Thomas Larsen, elifland, Scott Alexander, Jonas V, romeo

03 Apr 2025

Contributed by Lukas

In 2021 I wrote what became my most popular blog post: What 2026 Looks Like. I intended to keep writing predictions all the way to AGI and beyond, bu...

“OpenAI #12: Battle of the Board Redux” by Zvi

03 Apr 2025

Contributed by Lukas

Back when the OpenAI board attempted and failed to fire Sam Altman, we faced a highly hostile information environment. The battle was fought largely t...

“The Pando Problem: Rethinking AI Individuality” by Jan_Kulveit

03 Apr 2025

Contributed by Lukas

Epistemic status: This post aims at an ambitious target: improving intuitive understanding directly. The model for why this is worth trying is that I...

“You will crash your car in front of my house within the next week” by Richard Korzekwa

02 Apr 2025

Contributed by Lukas

I'm not writing this to alarm anyone, but it would be irresponsible not to report on something this important. On current trends, every car will...

“My ‘infohazards small working group’ Signal Chat may have encountered minor leaks” by Linch

02 Apr 2025

Contributed by Lukas

Remember: There is no such thing as a pink elephant. Recently, I was made aware that my “infohazards small working group” Signal chat, an informa...

“Leverage, Exit Costs, and Anger: Re-examining Why We Explode at Home, Not at Work” by at_the_zoo

02 Apr 2025

Contributed by Lukas

Let's cut through the comforting narratives and examine a common behavioral pattern with a sharper lens: the stark difference between how anger ...

“PauseAI and E/Acc Should Switch Sides” by WillPetillo

02 Apr 2025

Contributed by Lukas

In the debate over AI development, two movements stand as opposites: PauseAI calls for slowing down AI progress, and e/acc (effective accelerationism...

“VDT: a solution to decision theory” by L Rudolf L

02 Apr 2025

Contributed by Lukas

Introduction Decision theory is about how to behave rationally under conditions of uncertainty, especially if this uncertainty involves being acausal...

“LessWrong has been acquired by EA” by habryka

01 Apr 2025

Contributed by Lukas

Dear LessWrong community, It is with a sense of... considerable cognitive dissonance that I announce a significant development regarding the future t...

“We’re not prepared for an AI market crash” by Remmelt

01 Apr 2025

Contributed by Lukas

Our community is not prepared for an AI crash. We're good at tracking new capability developments, but not as much the company financials. Curre...

“Conceptual Rounding Errors” by Jan_Kulveit

29 Mar 2025

Contributed by Lukas

Epistemic status: Reasonably confident in the basic mechanism. Have you noticed that you keep encountering the same ideas over and over? You read ano...

“Tracing the Thoughts of a Large Language Model” by Adam Jermyn

28 Mar 2025

Contributed by Lukas

[This is our blog post on the papers, which can be found at https://transformer-circuits.pub/2025/attribution-graphs/biology.html and https://transfo...

“Recent AI model progress feels mostly like bullshit” by lc

25 Mar 2025

Contributed by Lukas

About nine months ago, I and three friends decided that AI had gotten good enough to monitor large codebases autonomously for security problems. We s...

“AI for AI safety” by Joe Carlsmith

25 Mar 2025

Contributed by Lukas

(Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app. This is the fourth essay in a series that...

“Policy for LLM Writing on LessWrong” by jimrandomh

25 Mar 2025

Contributed by Lukas

LessWrong has been receiving an increasing number of posts and contents that look like they might be LLM-written or partially-LLM-written, so we'...

“Will Jesus Christ return in an election year?” by Eric Neyman

25 Mar 2025

Contributed by Lukas

Thanks to Jesse Richardson for discussion. Polymarket asks: will Jesus Christ return in 2025? In the three days since the market opened, traders hav...

“Good Research Takes are Not Sufficient for Good Strategic Takes” by Neel Nanda

23 Mar 2025

Contributed by Lukas

TL;DR Having a good research track record is some evidence of good big-picture takes, but it's weak evidence. Strategic thinking is hard, and re...

“Intention to Treat” by Alicorn

22 Mar 2025

Contributed by Lukas

When my son was three, we enrolled him in a study of a vision condition that runs in my family. They wanted us to put an eyepatch on him for part of ...

“On the Rationality of Deterring ASI” by Dan H

22 Mar 2025

Contributed by Lukas

I’m releasing a new paper “Superintelligence Strategy” alongside Eric Schmidt (formerly Google), and Alexandr Wang (Scale AI). Below is the exec...

[Linkpost] “METR: Measuring AI Ability to Complete Long Tasks” by Zach Stein-Perlman

19 Mar 2025

Contributed by Lukas

This is a link post. Summary: We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has...

“I make several million dollars per year and have hundreds of thousands of followers—what is the straightest line path to utilizing these resources to reduce existential-level AI threats?” by shrimpy

19 Mar 2025

Contributed by Lukas

I have, over the last year, become fairly well-known in a small corner of the internet tangentially related to AI. As a result, I've begun making ...

“Claude Sonnet 3.7 (often) knows when it’s in alignment evaluations” by Nicholas Goldowsky-Dill, Mikita Balesni, Jérémy Scheurer, Marius Hobbhahn

18 Mar 2025

Contributed by Lukas

Note: this is a research note based on observations from evaluating Claude Sonnet 3.7. We’re sharing the results of these ‘work-in-progress’ inv...

“Levels of Friction” by Zvi

18 Mar 2025

Contributed by Lukas

Scott Alexander famously warned us to Beware Trivial Inconveniences. When you make a thing easy to do, people often do vastly more of it. When you put u...

“Why White-Box Redteaming Makes Me Feel Weird” by Zygi Straznickas

17 Mar 2025

Contributed by Lukas

There's this popular trope in fiction about a character being mind controlled without losing awareness of what's happening. Think Jessica Jo...

“Reducing LLM deception at scale with self-other overlap fine-tuning” by Marc Carauleanu, Diogo de Lucena, Gunnar_Zarncke, Judd Rosenblatt, Mike Vaiana, Cameron Berg

17 Mar 2025

Contributed by Lukas

This research was conducted at AE Studio and supported by the AI Safety Grants programme administered by Foresight Institute with additional support f...

“Auditing language models for hidden objectives” by Sam Marks, Johannes Treutlein, dmz, Sam Bowman, Hoagy, Carson Denison, Akbir Khan, Euan Ong, Christopher Olah, Fabien Roger, Meg, Drake Thomas, Adam Jermyn, Monte M, evhub

16 Mar 2025

Contributed by Lukas

We study alignment audits—systematic investigations into whether an AI is pursuing hidden objectives—by training a model with a hidden misaligned ...

“The Most Forbidden Technique” by Zvi

14 Mar 2025

Contributed by Lukas

The Most Forbidden Technique is training an AI using interpretability techniques. An AI produces a final output [X] via some method [M]. You can analyz...

“Trojan Sky” by Richard_Ngo

13 Mar 2025

Contributed by Lukas

You learn the rules as soon as you’re old enough to speak. Don’t talk to jabberjays. You recite them as soon as you wake up every morning. Keep yo...

“OpenAI:” by Daniel Kokotajlo

11 Mar 2025

Contributed by Lukas

Exciting Update: OpenAI has released this blog post and paper which makes me very happy. It's basically the first steps along the research agenda...

“How Much Are LLMs Actually Boosting Real-World Programmer Productivity?” by Thane Ruthenis

09 Mar 2025

Contributed by Lukas

LLM-based coding-assistance tools have been out for ~2 years now. Many developers have been reporting that this is dramatically increasing their produ...

“So how well is Claude playing Pokémon?” by Julian Bradshaw

09 Mar 2025

Contributed by Lukas

Background: After the release of Claude 3.7 Sonnet,[1] an Anthropic employee started livestreaming Claude trying to play through Pokémon Red. The liv...

“Methods for strong human germline engineering” by TsviBT

07 Mar 2025

Contributed by Lukas

Note: an audio narration is not available for this article. Please see the original text. The original text contained 169 footnotes which were omitte...

“Have LLMs Generated Novel Insights?” by abramdemski, Cole Wyeth

06 Mar 2025

Contributed by Lukas

In a recent post, Cole Wyeth makes a bold claim: . . . there is one crucial test (yes this is a crux) that LLMs have not passed. They have never done a...

“A Bear Case: My Predictions Regarding AI Progress” by Thane Ruthenis

06 Mar 2025

Contributed by Lukas

This isn't really a "timeline", as such – I don't know the timings – but this is my current, fairly optimistic take on where w...

“Statistical Challenges with Making Super IQ babies” by Jan Christian Refsgaard

05 Mar 2025

Contributed by Lukas

This is a critique of How to Make Superbabies on LessWrong. Disclaimer: I am not a geneticist[1], and I've tried to use as little jargon as possib...

“Self-fulfilling misalignment data might be poisoning our AI models” by TurnTrout

04 Mar 2025

Contributed by Lukas

This is a link post. Your AI's training data might make it more “evil” and more able to circumvent your security, monitoring, and control meas...

“Judgements: Merging Prediction & Evidence” by abramdemski

01 Mar 2025

Contributed by Lukas

I recently wrote about complete feedback, an idea which I think is quite important for AI safety. However, my note was quite brief, explaining the ide...

“The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better” by Thane Ruthenis

26 Feb 2025

Contributed by Lukas

First, let me quote my previous ancient post on the topic: Effective Strategies for Changing Public Opinion. The titular paper is very relevant here. I...

“Power Lies Trembling: a three-book review” by Richard_Ngo

26 Feb 2025

Contributed by Lukas

In a previous book review I described exclusive nightclubs as the particle colliders of sociology—places where you can reliably observe extreme forc...

“Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs” by Jan Betley, Owain_Evans

26 Feb 2025

Contributed by Lukas

This is the abstract and introduction of our new paper. We show that finetuning state-of-the-art LLMs on a narrow task, such as writing vulnerable cod...

“The Paris AI Anti-Safety Summit” by Zvi

22 Feb 2025

Contributed by Lukas

It doesn’t look good. What used to be the AI Safety Summits were perhaps the most promising thing happening towards international coordination for AI...

“Eliezer’s Lost Alignment Articles / The Arbital Sequence” by Ruby

20 Feb 2025

Contributed by Lukas

Note: this is a static copy of this wiki page. We are also publishing it as a post to ensure visibility. Circa 2015-2017, a lot of high-quality content...

“Arbital has been imported to LessWrong” by RobertM, jimrandomh, Ben Pace, Ruby

20 Feb 2025

Contributed by Lukas

Arbital was envisioned as a successor to Wikipedia. The project was discontinued in 2017, but not before many new features had been built and a substa...

“How to Make Superbabies” by GeneSmith, kman

20 Feb 2025

Contributed by Lukas

We’ve spent the better part of the last two decades unravelling exactly how the human genome works and which specific letter changes in our DNA affe...

“A computational no-coincidence principle” by Eric Neyman

19 Feb 2025

Contributed by Lukas

Audio note: this article contains 134 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text ...

“A History of the Future, 2025-2040” by L Rudolf L

19 Feb 2025

Contributed by Lukas

This is an all-in-one crosspost of a scenario I originally published in three parts on my blog (No Set Gauge). Links to the originals: A History of th...

“It’s been ten years. I propose HPMOR Anniversary Parties.” by Screwtape

18 Feb 2025

Contributed by Lukas

On March 14th, 2015, Harry Potter and the Methods of Rationality made its final post. Wrap parties were held all across the world to read the ending a...

“Some articles in ‘International Security’ that I enjoyed” by Buck

16 Feb 2025

Contributed by Lukas

A friend of mine recently recommended that I read through articles from the journal International Security, in order to learn more about international...

“The Failed Strategy of Artificial Intelligence Doomers” by Ben Pace

16 Feb 2025

Contributed by Lukas

This is the best sociological account of the AI x-risk reduction efforts of the last ~decade that I've seen. I encourage folks to engage with its...

“Murder plots are infohazards” by Chris Monteiro

14 Feb 2025

Contributed by Lukas

Hi all, I've been hanging around the rationalist-sphere for many years now, mostly writing about transhumanism, until things started to change in 2...

“Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?” by garrison

11 Feb 2025

Contributed by Lukas

This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, ...

“The ‘Think It Faster’ Exercise” by Raemon

09 Feb 2025

Contributed by Lukas

Ultimately, I don’t want to solve complex problems via laborious, complex thinking, if we can help it. Ideally, I'd want to basically intuitive...

“So You Want To Make Marginal Progress...” by johnswentworth

08 Feb 2025

Contributed by Lukas

Once upon a time, in ye olden days of strange names and before google maps, seven friends needed to figure out a driving route from their parking lot ...

“What is malevolence? On the nature, measurement, and distribution of dark traits” by David Althaus

08 Feb 2025

Contributed by Lukas

Summary In this post, we explore different ways of understanding and measuring malevolence and explain why individuals with concerning levels of male...

“How AI Takeover Might Happen in 2 Years” by joshc

08 Feb 2025

Contributed by Lukas

I’m not a natural “doomsayer.” But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios. I’...

“Gradual Disempowerment, Shell Games and Flinches” by Jan_Kulveit

05 Feb 2025

Contributed by Lukas

Over the past year and a half, I've had numerous conversations about the risks we describe in Gradual Disempowerment. (The shortest useful summary ...

“Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development” by Jan_Kulveit, Raymond D, Nora_Ammann, Deger Turan, David Scott Krueger (formerly: capybaralet), David Duvenaud

04 Feb 2025

Contributed by Lukas

This is a link post. Full version on arXiv | X. Executive summary: AI risk scenarios usually portray a relatively sudden loss of human control to AIs, ...

“Planning for Extreme AI Risks” by joshc

03 Feb 2025

Contributed by Lukas

This post should not be taken as a polished recommendation to AI companies and instead should be treated as an informal summary of a worldview. The co...

“Catastrophe through Chaos” by Marius Hobbhahn

03 Feb 2025

Contributed by Lukas

This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research. Many other people have talked about similar ...

“Will alignment-faking Claude accept a deal to reveal its misalignment?” by ryan_greenblatt

01 Feb 2025

Contributed by Lukas

I (and co-authors) recently put out "Alignment Faking in Large Language Models" where we show that when Claude strongly dislikes what it is ...

“‘Sharp Left Turn’ discourse: An opinionated review” by Steven Byrnes

30 Jan 2025

Contributed by Lukas

Summary and Table of Contents: The goal of this post is to discuss the so-called “sharp left turn”, the lessons that we learn from analogizing evol...

“Ten people on the inside” by Buck

29 Jan 2025

Contributed by Lukas

(Many of these ideas developed in conversation with Ryan Greenblatt) In a shortform, I described some different levels of resources and buy-in for misa...

“Anomalous Tokens in DeepSeek-V3 and r1” by henry

28 Jan 2025

Contributed by Lukas

“Anomalous”, “glitch”, or “unspeakable” tokens in an LLM are those that induce bizarre behavior or otherwise don’t behave like regular t...

“Tell me about yourself: LLMs are aware of their implicit behaviors” by Martín Soto, Owain_Evans

28 Jan 2025

Contributed by Lukas

This is the abstract and introduction of our new paper, with some discussion of implications for AI Safety at the end. Authors: Jan Betley*, Xuchan B...

“Instrumental Goals Are A Different And Friendlier Kind Of Thing Than Terminal Goals” by johnswentworth, David Lorell

27 Jan 2025

Contributed by Lukas

The Cake: Imagine that I want to bake a chocolate cake, and my sole goal in my entire lightcone and extended mathematical universe is to bake that cake...

“A Three-Layer Model of LLM Psychology” by Jan_Kulveit

26 Jan 2025

Contributed by Lukas

This post offers an accessible model of psychology of character-trained LLMs like Claude. Epistemic Status: This is primarily a phenomenological model ...

“Training on Documents About Reward Hacking Induces Reward Hacking” by evhub

24 Jan 2025

Contributed by Lukas

This is a link post. This is a blog post reporting some preliminary work from the Anthropic Alignment Science team, which might be of interest to resea...

“AI companies are unlikely to make high-assurance safety cases if timelines are short” by ryan_greenblatt

24 Jan 2025

Contributed by Lukas

One hope for keeping existential risks low is to get AI companies to (successfully) make high-assurance safety cases: structured and auditable argumen...

“Mechanisms too simple for humans to design” by Malmesbury

24 Jan 2025

Contributed by Lukas

Cross-posted from Telescopic Turnip. As we all know, humans are terrible at building butterflies. We can make a lot of objectively cool things like nucl...

“The Gentle Romance” by Richard_Ngo

22 Jan 2025

Contributed by Lukas

This is a link post. A story I wrote about living through the transition to utopia. This is the one story that I've put the most time and effort in...

“Quotes from the Stargate press conference” by Nikola Jurkovic

22 Jan 2025

Contributed by Lukas

This is a link post. Present alongside President Trump: Sam Altman; Larry Ellison (Oracle executive chairman and CTO); Masayoshi Son (SoftBank CEO who be...

“The Case Against AI Control Research” by johnswentworth

21 Jan 2025

Contributed by Lukas

The AI Control Agenda, in its own words:… we argue that AI labs should ensure that powerful AIs are controlled. That is, labs should make sure that ...

“Don’t ignore bad vibes you get from people” by Kaj_Sotala

20 Jan 2025

Contributed by Lukas

I think a lot of people have heard so much about internalized prejudice and bias that they think they should ignore any bad vibes they get about a per...

“[Fiction] [Comic] Effective Altruism and Rationality meet at a Secular Solstice afterparty” by tandem

19 Jan 2025

Contributed by Lukas

(Both characters are fictional, loosely inspired by various traits from various real people. Be careful about combining kratom and alcohol.) The origi...

“Building AI Research Fleets” by bgold, Jesse Hoogland

18 Jan 2025

Contributed by Lukas

From AI scientist to AI research fleet: Research automation is here (1, 2, 3). We saw it coming and planned ahead, which puts us ahead of most (4, 5, 6...

“What Is The Alignment Problem?” by johnswentworth

17 Jan 2025

Contributed by Lukas

So we want to align future AGIs. Ultimately we’d like to align them to human values, but in the shorter term we might start with other targets, like...

“Applying traditional economic thinking to AGI: a trilemma” by Steven Byrnes

14 Jan 2025

Contributed by Lukas

Traditional economics thinking has two strong principles, each based on abundant historical data: Principle (A): No “lump of labor”: If human popu...

“Passages I Highlighted in The Letters of J.R.R. Tolkien” by Ivan Vendrov

14 Jan 2025

Contributed by Lukas

All quotes, unless otherwise marked, are Tolkien's words as printed in The Letters of J.R.R. Tolkien: Revised and Expanded Edition. All emphases m...

“Parkinson’s Law and the Ideology of Statistics” by Benquo

13 Jan 2025

Contributed by Lukas

The anonymous review of The Anti-Politics Machine published on Astral Codex X focuses on a case study of a World Bank intervention in Lesotho, and tel...

“Capital Ownership Will Not Prevent Human Disempowerment” by beren

11 Jan 2025

Contributed by Lukas

Crossposted from my personal blog. I was inspired to cross-post this here given the discussion that this post on the role of capital in an AI future e...
