
LessWrong (Curated & Popular)

Technology Society & Culture

Episodes

Showing 501-600 of 805
Page 6 of 9

“Universal Basic Income and Poverty” by Eliezer Yudkowsky

27 Jul 2024

Contributed by Lukas

(Crossposted from Twitter) I'm skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity's 100-fold produ...

“Optimistic Assumptions, Longterm Planning, and ‘Cope’” by Raemon

19 Jul 2024

Contributed by Lukas

Eliezer Yudkowsky periodically complains about people coming up with questionable plans with questionable assumptions to deal with AI, and then either...

“Superbabies: Putting The Pieces Together” by sarahconstantin

15 Jul 2024

Contributed by Lukas

This post was inspired by some talks at the recent LessOnline conference, including one by LessWrong user “Gene Smith”. Let's say you want to h...

“Poker is a bad game for teaching epistemics. Figgie is a better one.” by rossry

12 Jul 2024

Contributed by Lukas

This is a link post. Editor's note: Somewhat after I posted this on my own blog, Max Chiswick cornered me at LessOnline / Manifest and gave me a w...

“Reliable Sources: The Story of David Gerard” by TracingWoodgrains

11 Jul 2024

Contributed by Lukas

This is a linkpost for https://www.tracingwoodgrains.com/p/reliable-sources-how-wikipedia-admin, posted in full here given its relevance to this commu...

“When is a mind me?” by Rob Bensinger

08 Jul 2024

Contributed by Lukas

xlr8harder writes: In general I don’t think an uploaded mind is you, but rather a copy. But one thought experiment makes me question this. A Ship of ...

“80,000 hours should remove OpenAI from the Job Board (and similar orgs should do similarly)” by Raemon

04 Jul 2024

Contributed by Lukas

I haven't shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzi...

[Linkpost] “introduction to cancer vaccines” by bhauth

02 Jul 2024

Contributed by Lukas

This is a linkpost for https://www.bhauth.com/blog/biology/cancer%20vaccines.html. Cancer neoantigens: For cells to become cancerous, they must have muta...

“Priors and Prejudice” by MathiasKB

02 Jul 2024

Contributed by Lukas

I. Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fab...

“My experience using financial commitments to overcome akrasia” by William Howard

02 Jul 2024

Contributed by Lukas

About a year ago I decided to try using one of those apps where you tie your goals to some kind of financial penalty. The specific one I tried is Forf...

“The Incredible Fentanyl-Detecting Machine” by sarahconstantin

01 Jul 2024

Contributed by Lukas

An NII machine in Nogales, AZ. (Image source) There's bound to be a lot of discussion of the Biden-Trump presidential debates last night, but I wa...

“AI catastrophes and rogue deployments” by Buck

01 Jul 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. [Thanks to Aryan Bhatt, Ansh Radhakrishnan, Adam Kaufman, Vivek ...

“Loving a world you don’t trust” by Joe Carlsmith

01 Jul 2024

Contributed by Lukas

(Cross-posted from my website. Audio version here, or search for "Joe Carlsmith Audio" on your podcast app.) This is the final essay in a ser...

“Formal verification, heuristic explanations and surprise accounting” by paulfchristiano

27 Jun 2024

Contributed by Lukas

ARC's current research focus can be thought of as trying to combine mechanistic interpretability and formal verification. If we had a deep unders...

“LLM Generality is a Timeline Crux” by eggsyntax

25 Jun 2024

Contributed by Lukas

Summary: LLMs may be fundamentally incapable of fully general reasoning, and if so, short timelines are less plausible. Longer summary: There...

“SAE feature geometry is outside the superposition hypothesis” by jake_mendel

25 Jun 2024

Contributed by Lukas

Summary: Superposition-based interpretations of neural network activation spaces are incomplete. The specific locations of feature vectors contain cru...

“Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data” by Johannes Treutlein, Owain_Evans

23 Jun 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a link post. TL;DR: We published a new paper on out-of-co...

“Boycott OpenAI” by PeterMcCluskey

21 Jun 2024

Contributed by Lukas

This is a link post. I have canceled my OpenAI subscription in protest over OpenAI's lack of ethics. In particular, I object to: threats to confisca...

“Sycophancy to subterfuge: Investigating reward tampering in large language models” by evhub, Carson Denison

20 Jun 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a link post. New Anthropic model organisms research paper...

“I would have shit in that alley, too” by Declan Molony

18 Jun 2024

Contributed by Lukas

After living in a suburb for most of my life, when I moved to a major U.S. city the first thing I noticed was the feces. At first I assumed it was dog...

“Getting 50% (SoTA) on ARC-AGI with GPT-4o” by ryan_greenblatt

18 Jun 2024

Contributed by Lukas

I recently got to 50%[1] accuracy on the public test set for ARC-AGI by having GPT-4o generate a...

“Why I don’t believe in the placebo effect” by transhumanist_atom_understander

15 Jun 2024

Contributed by Lukas

Have you heard this before? In clinical trials, medicines have to be compared to a placebo to separate the effect of the medicine from the psychologic...

“Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety)” by Andrew_Critch

14 Jun 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. As an AI researcher who wants to do technical work that helps hu...

“My AI Model Delta Compared To Christiano” by johnswentworth

13 Jun 2024

Contributed by Lukas

Preamble: Delta vs Crux. This section is redundant if you already read My AI Model Delta Compared To Yudkowsky. I don’t natively think in terms of cru...

“My AI Model Delta Compared To Yudkowsky” by johnswentworth

10 Jun 2024

Contributed by Lukas

Preamble: Delta vs Crux. I don’t natively think in terms of cruxes. But there's a similar concept which is more natural for me, which I’ll cal...

“Response to Aschenbrenner’s ‘Situational Awareness’” by Rob Bensinger

07 Jun 2024

Contributed by Lukas

(Cross-posted from Twitter.) My take on Leopold Aschenbrenner's new report: I think Leopold gets it right on a bunch of important counts. Three th...

“Humming is not a free $100 bill” by Elizabeth

07 Jun 2024

Contributed by Lukas

Last month I posted about humming as a cheap and convenient way to flood your nose with nitric oxide (NO), a known antiviral. Alas, the economists wer...

“Announcing ILIAD — Theoretical AI Alignment Conference” by Nora_Ammann, Alexander Gietelink Oldenziel

06 Jun 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. We are pleased to announce ILIAD — a 5-day conference bringing...

“Non-Disparagement Canaries for OpenAI” by aysja, Adam Scholl

31 May 2024

Contributed by Lukas

Since at least 2017, OpenAI has asked departing employees to sign offboarding agreements which legally bind them to permanently—that is, for the res...

“MIRI 2024 Communications Strategy” by Gretta Duleba

30 May 2024

Contributed by Lukas

As we explained in our MIRI 2024 Mission and Strategy update, MIRI has pivoted to prioritize policy, communications, and technical governance research...

“OpenAI: Fallout” by Zvi

28 May 2024

Contributed by Lukas

Previously: OpenAI: Exodus (contains links at top to earlier episodes), Do Not Mess With Scarlett Johansson. We have learned more since last week. It'...

[HUMAN VOICE] Update on human narration for this podcast

28 May 2024

Contributed by Lukas

Contact: patreon.com/lwcurated or [perrin dot j dot walker plus lesswrong fnord gmail]. All Solenoid's narration work found here.

“Maybe Anthropic’s Long-Term Benefit Trust is powerless” by Zach Stein-Perlman

28 May 2024

Contributed by Lukas

Crossposted from AI Lab Watch. Subscribe on Substack. Introduction. Anthropic has an unconventional governance mechanism: an independent "Long-...

“Notifications Received in 30 Minutes of Class” by tanagrabeast

27 May 2024

Contributed by Lukas

Introduction. If you are choosing to read this post, you've probably seen the image below depicting all the notifications students received on...

“AI companies aren’t really using external evaluators” by Zach Stein-Perlman

24 May 2024

Contributed by Lukas

New blog: AI Lab Watch. Subscribe on Substack. Many AI safety folks think that METR is close to the labs, with ongoing relationships that grant it acce...

“EIS XIII: Reflections on Anthropic’s SAE Research Circa May 2024” by scasper

24 May 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.Part 13 of 12 in the Engineer's Interpretability Sequence. ...

“What’s Going on With OpenAI’s Messaging?” by ozziegoen

22 May 2024

Contributed by Lukas

This is a quickly-written opinion piece, of what I understand about OpenAI. I first posted it to Facebook, where it had some discussion. Some arg...

“Language Models Model Us” by eggsyntax

21 May 2024

Contributed by Lukas

Produced as part of the MATS Winter 2023-4 program, under the mentorship of @Jessica Rumbelow. One-sentence summary: On a dataset of human-written essay...

Jaan Tallinn’s 2023 Philanthropy Overview

21 May 2024

Contributed by Lukas

This is a link post. to follow up my philanthropic pledge from 2020, i've updated my philanthropy page with 2023 results. in 2023 my donations funde...

“OpenAI: Exodus” by Zvi

21 May 2024

Contributed by Lukas

Previously: OpenAI: Facts From a Weekend, OpenAI: The Battle of the Board, OpenAI: Leaks Confirm the Story, OpenAI: Altman Returns, OpenAI: The Board ...

DeepMind’s “Frontier Safety Framework” is weak and unambitious

20 May 2024

Contributed by Lukas

FSF blogpost. Full document (just 6 pages; you should read it). Compare to Anthropic's RSP, OpenAI's RSP ("PF"), and METR's K...

Do you believe in hundred dollar bills lying on the ground? Consider humming

18 May 2024

Contributed by Lukas

Introduction. [Reminder: I am an internet weirdo with no medical credentials] A few months ago, I published some crude estimates of the power of nitri...

Deep Honesty

12 May 2024

Contributed by Lukas

Most people avoid saying literally false things, especially if those could be audited, like making up facts or credentials. The reasons for this are b...

On Not Pulling The Ladder Up Behind You

02 May 2024

Contributed by Lukas

Epistemic Status: Musing and speculation, but I think there's a real thing here. 1. When I was a kid, a friend of mine had a tree fort. If you'...

Mechanistically Eliciting Latent Behaviors in Language Models

02 May 2024

Contributed by Lukas

Produced as part of the MATS Winter 2024 program, under the mentorship of Alex Turner (TurnTrout). TL;DR: I introduce a method for eliciting latent beh...

Ironing Out the Squiggles

01 May 2024

Contributed by Lukas

Adversarial Examples: A Problem. The apparent successes of the deep learning revolution conceal a dark underbelly. It may seem that we now know how to ...

Introducing AI Lab Watch

01 May 2024

Contributed by Lukas

This is a linkpost for https://ailabwatch.org. I'm launching AI Lab Watch. I collected actions for frontier AI labs to improve AI safety, then eval...

Refusal in LLMs is mediated by a single direction

28 Apr 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This work was produced as part of Neel Nanda's stream in th...

Funny Anecdote of Eliezer From His Sister

24 Apr 2024

Contributed by Lukas

This comes from a podcast called 18Forty, whose main demographic is Orthodox Jews. Eliezer's sister (Hannah) came on and talked about her ...

Thoughts on seed oil

21 Apr 2024

Contributed by Lukas

This is a linkpost for https://dynomight.net/seed-oil/. A friend has spent the last three years hounding me about seed oils. Every time I thought I was ...

Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer

19 Apr 2024

Contributed by Lukas

Yesterday Adam Shai put up a cool post which… well, take a look at the visual:Yup, it sure looks like that fractal is very noisily embedded in the r...

Express interest in an “FHI of the West”

18 Apr 2024

Contributed by Lukas

TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment a...

Transformers Represent Belief State Geometry in their Residual Stream

17 Apr 2024

Contributed by Lukas

Produced while being an affiliate at PIBBSS[1]. The work was done initially with funding from a Lightspeed Grant, and then continued while at PIBBSS. ...

Paul Christiano named as US AI Safety Institute Head of AI Safety

16 Apr 2024

Contributed by Lukas

This is a linkpost for https://www.commerce.gov/news/press-releases/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety. U.S. S...

[HUMAN VOICE] "On green" by Joe Carlsmith

12 Apr 2024

Contributed by Lukas

Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app. This essay is part of a series t...

[HUMAN VOICE] "Toward a Broader Conception of Adverse Selection" by Ricki Heicklen

12 Apr 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. This is a linkpost for https://bayesshammai.substack.com/p...

[HUMAN VOICE] "My PhD thesis: Algorithmic Bayesian Epistemology" by Eric Neyman

12 Apr 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. In January, I defended my PhD thesis, which I called Algor...

[HUMAN VOICE] "How could I have thought that faster?" by mesaoptimizer

12 Apr 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. This is a linkpost for https://twitter.com/ESYudkowsky/sta...

LLMs for Alignment Research: a safety priority?

06 Apr 2024

Contributed by Lukas

A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate ...

[HUMAN VOICE] "Scale Was All We Needed, At First" by Gabriel Mukobi

05 Apr 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/xLDwCemt5qvchzgHd/s...

[HUMAN VOICE] "Using axis lines for good or evil" by dynomight

05 Apr 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/Yay8SbQiwErRyDKGb/u...

[HUMAN VOICE] "Social status part 1/2: negotiations over object-level preferences" by Steven Byrnes

05 Apr 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/SPBm67otKq5ET5CWP/s...

[HUMAN VOICE] "Acting Wholesomely" by OwenCB

05 Apr 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/Cb7oajdrA5DsHCqKd/a...

The Story of “I Have Been A Good Bing”

01 Apr 2024

Contributed by Lukas

Rationality is Systematized Winning, so rationalists should win. We’ve tried saving the world from AI, but that's really hard and we’ve had ...

The Best Tacit Knowledge Videos on Every Subject

01 Apr 2024

Contributed by Lukas

TL;DR: Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tac...

[HUMAN VOICE] "Deep atheism and AI risk" by Joe Carlsmith

20 Mar 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/sJPbmm8Gd34vGYrKd/d...

[HUMAN VOICE] "My Clients, The Liars" by ymeskhout

20 Mar 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/h99tRkpQGxwtb9Dpv/m...

[HUMAN VOICE] "Speaking to Congressional staffers about AI risk" by Akash, hath

10 Mar 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/s...

[HUMAN VOICE] "CFAR Takeaways: Andrew Critch" by Raemon

10 Mar 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/Jash4Gbi2wpThzZ4k/c...

Many arguments for AI x-risk are wrong

09 Mar 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. The following is a lightly edited version of a memo I wrote for ...

Tips for Empirical Alignment Research

07 Mar 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. TLDR: I’ve collected some tips for research that I’ve given ...

Timaeus’s First Four Months

29 Feb 2024

Contributed by Lukas

Timaeus was announced in late October 2023, with the mission of making fundamental breakthroughs in technical AI alignment using deep ideas from mathe...

Contra Ngo et al. “Every ‘Every Bay Area House Party’ Bay Area House Party”

23 Feb 2024

Contributed by Lukas

This is a linkpost for https://bayesshammai.substack.com/p/contra-ngo-et-al-every-every-bay. With thanks to Scott Alexander for the inspiration, Jeffrey...

[HUMAN VOICE] "Updatelessness doesn't solve most problems" by Martín Soto

20 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/g8HHKaWENEbqh2mgK/u...

[HUMAN VOICE] "And All the Shoggoths Merely Players" by Zack_M_Davis

20 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/8yCXeafJo67tYe5L4/a...

Every “Every Bay Area House Party” Bay Area House Party

19 Feb 2024

Contributed by Lukas

Inspired by a house party inspired by Scott Alexander. By the time you arrive in Berkeley, the party is already in full swing. You’ve come late becau...

2023 Survey Results

19 Feb 2024

Contributed by Lukas

The Data. 0. Population. There were 558 responses over 32 days. The spacing and timing of the responses had hills and valleys because of an experiment I...

Raising children on the eve of AI

18 Feb 2024

Contributed by Lukas

Cross-posted with light edits from Otherwise. I think of us in some kind of twilight world as transformative AI looks more likely: things are about to...

“No-one in my org puts money in their pension”

18 Feb 2024

Contributed by Lukas

This is a linkpost for https://seekingtobejolly.substack.com/p/no-one-in-my-org-puts-money-in-their. Epistemic status: the stories here are all as true ...

Masterpiece

16 Feb 2024

Contributed by Lukas

This is a linkpost for https://www.narrativeark.xyz/p/masterpiece. A sequel to qntm's Lena. Reading Lena first is helpful but not necessary. We’re...

CFAR Takeaways: Andrew Critch

15 Feb 2024

Contributed by Lukas

I'm trying to build my own art of rationality training, and I've started talking to various CFAR instructors about their experiences – thi...

[HUMAN VOICE] "Believing In" by Anna Salamon

14 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/duvzdffTzL3dWJcxn/b...

[HUMAN VOICE] "Attitudes about Applied Rationality" by Camille Berger

14 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/5jdqtpT6StjKDKacw/a...

Scale Was All We Needed, At First

14 Feb 2024

Contributed by Lukas

This is a hasty speculative fiction vignette of one way I expect we might get AGI by January 2025 (within about one year of writing this). Like simila...

Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

11 Feb 2024

Contributed by Lukas

This is a linkpost for https://garrisonlovely.substack.com/p/sam-altmans-chip-ambitions-undercut If you enjoy this, please consider subscribing to my ...

[HUMAN VOICE] "A Shutdown Problem Proposal" by johnswentworth, David Lorell

09 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/PhTBDHu9PKJFmvb4p/a...

Brute Force Manufactured Consensus is Hiding the Crime of the Century

04 Feb 2024

Contributed by Lukas

People often parse information through an epistemic consensus filter. They do not ask "is this true", they ask "will others be OK with ...

[HUMAN VOICE] "Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI" by Jeremy Gillen, peterbarnett

03 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/w...

Leading The Parade

02 Feb 2024

Contributed by Lukas

Background Terminology: Counterfactual Impact vs “Leading The Parade”. Y’know how a parade or marching band has a person who walks in front wavin...

[HUMAN VOICE] "The case for ensuring that powerful AIs are controlled" by ryan_greenblatt, Buck

02 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/kcKrE9mzEHrdqtDpE/t...

Processor clock speeds are not how fast AIs think

01 Feb 2024

Contributed by Lukas

I often encounter some confusion about whether the fact that synapses in the brain typically fire at frequencies of 1-100 Hz while the clock frequency...

Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI

31 Jan 2024

Contributed by Lukas

A pdf version of this report is available here. Summary. In this report we argue that AI systems capable of large scale scientific research will likel...

Making every researcher seek grants is a broken model

29 Jan 2024

Contributed by Lukas

This is a linkpost for https://rootsofprogress.org/the-block-funding-model-for-science. When Galileo wanted to study the heavens through his telescope, ...

The case for training frontier AIs on Sumerian-only corpus

28 Jan 2024

Contributed by Lukas

Let your every day be full of joy, love the child that holds your hand, let your wife delight in your embrace, for these alone are the concerns of hum...

This might be the last AI Safety Camp

25 Jan 2024

Contributed by Lukas

We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding. ...

[HUMAN VOICE] "There is way too much serendipity" by Malmesbury

22 Jan 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Crossposted from substack. As we all know, sugar is sweet a...

[HUMAN VOICE] "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training" by evhub et al

20 Jan 2024

Contributed by Lukas

This is a linkpost for https://arxiv.org/abs/2401.05566. Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Sou...

[HUMAN VOICE] "How useful is mechanistic interpretability?" by ryan_greenblatt, Neel Nanda, Buck, habryka

20 Jan 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/tEPHGZAb63dfq2v8n/h...

The impossible problem of due process

17 Jan 2024

Contributed by Lukas

I wrote this entire post in February of 2023, during the fallout from the TIME article. I didn't post it at the time for multiple reasons: becaus...

[HUMAN VOICE] "Gentleness and the artificial Other" by Joe Carlsmith

14 Jan 2024

Contributed by Lukas

(Cross-posted from my website. Audio version here, or search "Joe Carlsmith Audio" on your podcast app.) This is the first essay ...
