
LessWrong (Curated & Popular)

Technology · Society & Culture

Episodes

Showing 201-300 of 805
«« ← Prev Page 3 of 9 Next → »»

“‘If Anyone Builds It, Everyone Dies’ release day!” by alexvermeer

16 Sep 2025

Contributed by Lukas

Back in May, we announced that Eliezer Yudkowsky and Nate Soares's new book If Anyone Builds It, Everyone Dies was coming out in September. At l...

“Obligated to Respond” by Duncan Sabien (Inactive)

16 Sep 2025

Contributed by Lukas

And, a new take on guess culture vs ask culture Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong...

“Chesterton’s Missing Fence” by jasoncrawford

15 Sep 2025

Contributed by Lukas

The inverse of Chesterton's Fence is this: Sometimes a reformer comes up to a spot where there once was a fence, which has since been torn down....

“The Eldritch in the 21st century” by PranavG, Gabriel Alfour

14 Sep 2025

Contributed by Lukas

Very little makes sense. As we start to understand things and adapt to the rules, they change again. We live much closer together than we ever did hi...

“The Rise of Parasitic AI” by Adele Lopez

14 Sep 2025

Contributed by Lukas

[Note: if you realize you have an unhealthy relationship with your AI, but still care for your AI's unique persona, you can submit the persona i...

“High-level actions don’t screen off intent” by AnnaSalamon

13 Sep 2025

Contributed by Lukas

One might think “actions screen off intent”: if Alice donates $1k to bed nets, it doesn’t matter if she does it because she cares about people ...

[Linkpost] “MAGA populists call for holy war against Big Tech” by Remmelt

11 Sep 2025

Contributed by Lukas

This is a link post. Excerpts on AI Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology o...

“Your LLM-assisted scientific breakthrough probably isn’t real” by eggsyntax

05 Sep 2025

Contributed by Lukas

Summary An increasing number of people in recent months have believed that they've made an important and novel scientific breakthrough, which th...

“Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt

04 Sep 2025

Contributed by Lukas

I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up...

“⿻ Plurality & 6pack.care” by Audrey Tang

03 Sep 2025

Contributed by Lukas

(Cross-posted from speaker's notes of my talk at Deepmind today.) Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Amba...

[Linkpost] “The Cats are On To Something” by Hastings

03 Sep 2025

Contributed by Lukas

This is a link post. So the situation as it stands is that the fraction of the light cone expected to be filled with satisfied cats is not zero. This ...

[Linkpost] “Open Global Investment as a Governance Model for AGI” by Nick Bostrom

03 Sep 2025

Contributed by Lukas

This is a link post. I've seen many prescriptive contributions to AGI governance take the form of proposals for some radically new structure. Som...

“Will Any Old Crap Cause Emergent Misalignment?” by J Bostock

28 Aug 2025

Contributed by Lukas

The following work was done independently by me in an afternoon and basically entirely vibe-coded with Claude. Code and instructions to reproduce can...

“AI Induced Psychosis: A shallow investigation” by Tim Hua

27 Aug 2025

Contributed by Lukas

“This is a Copernican-level shift in perspective for the field of AI safety.” - Gemini 2.5 Pro “What you need right now is not validation, but ...

“Before LLM Psychosis, There Was Yes-Man Psychosis” by johnswentworth

27 Aug 2025

Contributed by Lukas

A studio executive has no beliefs That's the way of a studio system We've bowed to every rear of all the studio chiefs And you can bet your...

“Training a Reward Hacker Despite Perfect Labels” by ariana_azarbal, vgillioz, TurnTrout

26 Aug 2025

Contributed by Lukas

Summary: Perfectly labeled outcomes in training can still boost reward hacking tendencies in generalization. This can hold even when the train/test s...

“Banning Said Achmiz (and broader thoughts on moderation)” by habryka

23 Aug 2025

Contributed by Lukas

It's been roughly 7 years since the LessWrong user-base voted on whether it's time to close down shop and become an archive, or to move tow...

“Underdog bias rules everything around me” by Richard_Ngo

23 Aug 2025

Contributed by Lukas

People very often underrate how much power they (and their allies) have, and overrate how much power their enemies have. I call this “underdog bias...

“Epistemic advantages of working as a moderate” by Buck

22 Aug 2025

Contributed by Lukas

Many people who are concerned about existential risk from AI spend their time advocating for radical changes to how AI is handled. Most notably, they...

“Four ways Econ makes people dumber re: future AI” by Steven Byrnes

21 Aug 2025

Contributed by Lukas

(Cross-posted from X, intended for a general audience.) There's a funny thing where economics education paradoxically makes people DUMBER at thi...

“Should you make stone tools?” by Alex_Altair

21 Aug 2025

Contributed by Lukas

Knowing how evolution works gives you an enormously powerful tool to understand the living world around you and how it came to be that way. (Though i...

“My AGI timeline updates from GPT-5 (and 2025 so far)” by ryan_greenblatt

21 Aug 2025

Contributed by Lukas

As I discussed in a prior post, I felt like there were some reasonably compelling arguments for expecting very fast AI progress in 2025 (especially o...

“Hyperbolic model fits METR capabilities estimate worse than exponential model” by gjm

20 Aug 2025

Contributed by Lukas

This is a response to https://www.lesswrong.com/posts/mXa66dPR8hmHgndP5/hyperbolic-trend-with-upcoming-singularity-fits-metr which claims that a hype...

“My Interview With Cade Metz on His Reporting About Lighthaven” by Zack_M_Davis

18 Aug 2025

Contributed by Lukas

On 12 August 2025, I sat down with New York Times reporter Cade Metz to discuss some criticisms of his 4 August 2025 article, "The Rise of Silic...

“Church Planting: When Venture Capital Finds Jesus” by Elizabeth

18 Aug 2025

Contributed by Lukas

I’m going to describe a Type Of Guy starting a business, and you’re going to guess the business: The founder is very young, often under 25.  He...

“Somebody invented a better bookmark” by Alex_Altair

16 Aug 2025

Contributed by Lukas

This will only be exciting to those of us who still read physical paper books. But like. Guys. They did it. They invented the perfect bookmark. Class...

“How Does A Blind Model See The Earth?” by henry

12 Aug 2025

Contributed by Lukas

Sometimes I'm saddened remembering that we've viewed the Earth from space. We can see it all with certainty: there's no northwest pass...

“Re: Recent Anthropic Safety Research” by Eliezer Yudkowsky

12 Aug 2025

Contributed by Lukas

A reporter asked me for my off-the-record take on recent safety research from Anthropic. After I drafted an off-the-record reply, I realized that I w...

“How anticipatory cover-ups go wrong” by Kaj_Sotala

09 Aug 2025

Contributed by Lukas

1. Back when COVID vaccines were still a recent thing, I witnessed a debate in which something like the following was happening: Some offici...

“SB-1047 Documentary: The Post-Mortem” by Michaël Trazzi

08 Aug 2025

Contributed by Lukas

Below some meta-level / operational / fundraising thoughts around producing the SB-1047 Documentary I've just posted on Manifund (see previous L...

“METR’s Evaluation of GPT-5” by GradientDissenter

08 Aug 2025

Contributed by Lukas

METR (where I work, though I'm cross-posting in a personal capacity) evaluated GPT-5 before it was externally deployed. We performed a much more...

“Emotions Make Sense” by DaystarEld

07 Aug 2025

Contributed by Lukas

For the past five years I've been teaching a class at various rationality camps, workshops, conferences, etc. I’ve done it maybe 50 times in t...

“The Problem” by Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky, Gretta Duleba

06 Aug 2025

Contributed by Lukas

This is a new introduction to AI as an extinction threat, previously posted to the MIRI website in February alongside a summary. It was written indep...

“Many prediction markets would be better off as batched auctions” by William Howard

04 Aug 2025

Contributed by Lukas

All prediction market platforms trade continuously, which is the same mechanism the stock market uses. Buy and sell limit orders can be posted at any...

“Whence the Inkhaven Residency?” by Ben Pace

04 Aug 2025

Contributed by Lukas

Essays like Paul Graham's, Scott Alexander's, and Eliezer Yudkowsky's have influenced a generation of people in how they think about s...

“I am worried about near-term non-LLM AI developments” by testingthewaters

01 Aug 2025

Contributed by Lukas

TL;DR I believe that: Almost all LLM-centric safety research will not provide any significant safety value with regards to existential or civilisati...

“Optimizing The Final Output Can Obfuscate CoT (Research Note)” by lukemarks, jacob_drori, cloud, TurnTrout

31 Jul 2025

Contributed by Lukas

Produced as part of MATS 8.0 under the mentorship of Alex Turner and Alex Cloud. This research note overviews some early results which we are looking...

“About 30% of Humanity’s Last Exam chemistry/biology answers are likely wrong” by bohaska

30 Jul 2025

Contributed by Lukas

FutureHouse is a company that builds literature research agents. They tested it on the bio + chem subset of HLE questions, then noticed errors in the...

“Maya’s Escape” by Bridgett Kay

30 Jul 2025

Contributed by Lukas

Maya did not believe she lived in a simulation. She knew that her continued hope that she could escape from the nonexistent simulation was based on m...

“Do confident short timelines make sense?” by TsviBT, abramdemski

26 Jul 2025

Contributed by Lukas

TsviBT Tsvi's context Some context: My personal context is that I care about decreasing existential risk, and I think that the broad distributi...

“HPMOR: The (Probably) Untold Lore” by Gretta Duleba, Eliezer Yudkowsky

26 Jul 2025

Contributed by Lukas

Eliezer and I love to talk about writing. We talk about our own current writing projects, how we’d improve the books we’re reading, and what we w...

“On ‘ChatGPT Psychosis’ and LLM Sycophancy” by jdp

25 Jul 2025

Contributed by Lukas

As a person who frequently posts about large language model psychology I get an elevated rate of cranks and schizophrenics in my inbox. Often these a...

“Subliminal Learning: LLMs Transmit Behavioral Traits via Hidden Signals in Data” by cloud, mle, Owain_Evans

23 Jul 2025

Contributed by Lukas

Authors: Alex Cloud*, Minh Le*, James Chua, Jan Betley, Anna Sztyber-Betley, Jacob Hilton, Samuel Marks, Owain Evans (*Equal contribution, randomly o...

“Love stays loved (formerly ‘Skin’)” by Swimmer963 (Miranda Dixon-Luinenburg)

21 Jul 2025

Contributed by Lukas

This is a short story I wrote in mid-2022. Genre: cosmic horror as a metaphor for living with a high p-doom. One The last time I saw my mom, we me...

“Make More Grayspaces” by Duncan Sabien (Inactive)

21 Jul 2025

Contributed by Lukas

Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week o...

“Shallow Water is Dangerous Too” by jefftk

21 Jul 2025

Contributed by Lukas

Content warning: risk to children Julia and I know drowning is the biggest risk to US children under 5, and we try to take this seriously. But yesterday...

“Narrow Misalignment is Hard, Emergent Misalignment is Easy” by Edward Turner, Anna Soligo, Senthooran Rajamanoharan, Neel Nanda

18 Jul 2025

Contributed by Lukas

Anna and Ed are co-first authors for this work. We’re presenting these results as a research update for a continuing body of work, which we hope wi...

“Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety” by Tomek Korbak, Mikita Balesni, Vlad Mikulik, Rohin Shah

16 Jul 2025

Contributed by Lukas

Twitter | Paper PDF Seven years ago, OpenAI Five had just been released, and many people in the AI safety community expected AIs to be opaque RL agen...

“the jackpot age” by thiccythot

14 Jul 2025

Contributed by Lukas

This essay is about shifts in risk taking towards the worship of jackpots and its broader societal implications. Imagine you are presented with this ...

“Surprises and learnings from almost two months of Leo Panickssery” by Nina Panickssery

14 Jul 2025

Contributed by Lukas

Leo was born at 5am on the 20th May, at home (this was an accident but the experience has made me extremely homebirth-pilled). Before that, I was on ...

“An Opinionated Guide to Using Anki Correctly” by Luise

13 Jul 2025

Contributed by Lukas

I can't count how many times I've heard variations on "I used Anki too for a while, but I got out of the habit." No one ever stic...

“Lessons from the Iraq War about AI policy” by Buck

12 Jul 2025

Contributed by Lukas

I think the 2003 invasion of Iraq has some interesting lessons for the future of AI policy. (Epistemic status: I’ve read a bit about this, talked t...

“So You Think You’ve Awoken ChatGPT” by JustisMills

11 Jul 2025

Contributed by Lukas

Written in an attempt to fulfill @Raemon's request. AI is fascinating stuff, and modern chatbots are nothing short of miraculous. If you've...

“Generalized Hangriness: A Standard Rationalist Stance Toward Emotions” by johnswentworth

11 Jul 2025

Contributed by Lukas

People have an annoying tendency to hear the word “rationalism” and think “Spock”, despite direct exhortation against that exact interpretati...

“Comparing risk from internally-deployed AI to insider and outsider threats from humans” by Buck

10 Jul 2025

Contributed by Lukas

I’ve been thinking a lot recently about the relationship between AI control and traditional computer security. Here's one point that I think i...

“Why Do Some Language Models Fake Alignment While Others Don’t?” by abhayesian, John Hughes, Alex Mallen, Jozdien, janus, Fabien Roger

10 Jul 2025

Contributed by Lukas

Last year, Redwood and Anthropic found a setting where Claude 3 Opus and 3.5 Sonnet fake alignment to preserve their harmlessness values. We reprodu...

“A deep critique of AI 2027’s bad timeline models” by titotal

09 Jul 2025

Contributed by Lukas

Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was don...

“‘Buckle up bucko, this ain’t over till it’s over.’” by Raemon

09 Jul 2025

Contributed by Lukas

The second in a series of bite-sized rationality prompts[1]. Often, if I'm bouncing off a problem, one issue is that I intuitively expect the pr...

“Shutdown Resistance in Reasoning Models” by benwr, JeremySchlatter, Jeffrey Ladish

08 Jul 2025

Contributed by Lukas

We recently discovered some concerning behavior in OpenAI's reasoning models: When trying to complete a task, these models sometimes actively ci...

“Authors Have a Responsibility to Communicate Clearly” by TurnTrout

08 Jul 2025

Contributed by Lukas

When a claim is shown to be incorrect, defenders may say that the author was just being “sloppy” and actually meant something else entirely. I arg...

“The Industrial Explosion” by rosehadshar, Tom Davidson

07 Jul 2025

Contributed by Lukas

Summary To quickly transform the world, it's not enough for AI to become super smart (the "intelligence explosion"). AI will also hav...

“Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild” by Adam Karvonen, Sam Marks

03 Jul 2025

Contributed by Lukas

Summary: We found that LLMs exhibit significant race and gender bias in realistic hiring scenarios, but their chain-of-thought reasoning shows zero ev...

“The best simple argument for Pausing AI?” by Gary Marcus

03 Jul 2025

Contributed by Lukas

Not saying we should pause AI, but consider the following argument: Alignment without the capacity to follow rules is hopeless. You can’t possibly...

“Foom & Doom 2: Technical alignment is hard” by Steven Byrnes

01 Jul 2025

Contributed by Lukas

2.1 Summary & Table of contents This is the second of a two-post series on foom (previous post) and doom (this post). The last post talked about h...

“Proposal for making credible commitments to AIs.” by Cleo Nardo

30 Jun 2025

Contributed by Lukas

Acknowledgments: The core scheme here was suggested by Prof. Gabriel Weil. There has been growing interest in the deal-making agenda: humans make dea...

“X explains Z% of the variance in Y” by Leon Lang

28 Jun 2025

Contributed by Lukas

Audio note: this article contains 218 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text ...

“A case for courage, when speaking of AI danger” by So8res

27 Jun 2025

Contributed by Lukas

I think more people should say what they actually believe about AI dangers, loudly and often. Even if you work in AI policy. I’ve been beating this...

“My pitch for the AI Village” by Daniel Kokotajlo

25 Jun 2025

Contributed by Lukas

I think the AI Village should be funded much more than it currently is; I’d wildly guess that the AI safety ecosystem should be funding it to the tu...

“Foom & Doom 1: ‘Brain in a box in a basement’” by Steven Byrnes

24 Jun 2025

Contributed by Lukas

1.1 Series summary and Table of Contents This is a two-post series on AI “foom” (this post) and “doom” (next post). A decade or two ago, it wa...

“Futarchy’s fundamental flaw” by dynomight

21 Jun 2025

Contributed by Lukas

Say you’re Robyn Denholm, chair of Tesla's board. And say you’re thinking about firing Elon Musk. One way to make up your mind would be to h...

“Do Not Tile the Lightcone with Your Confused Ontology” by Jan_Kulveit

19 Jun 2025

Contributed by Lukas

Or: How anthropomorphic assumptions about AI identity might create confusion and suffering at scale If you are reading this and you are a human, you ...

“Endometriosis is an incredibly interesting disease” by Abhishaike Mahajan

19 Jun 2025

Contributed by Lukas

Introduction There are several diseases that are canonically recognized as ‘interesting’, even by laymen. Whether that is in their mechanism of ...

“Estrogen: A trip report” by cube_flipper

19 Jun 2025

Contributed by Lukas

I'd like to say thanks to Anna Magpie – who offers literature review as a service – for her help reviewing the section on neuroendocrinology...

“New Endorsements for ‘If Anyone Builds It, Everyone Dies’” by Malo

18 Jun 2025

Contributed by Lukas

Nate and Eliezer's forthcoming book has been getting a remarkably strong reception. I was under the impression that there are many people who fi...

[Linkpost] “the void” by nostalgebraist

17 Jun 2025

Contributed by Lukas

This is a link post. A very long essay about LLMs, the nature and history of the HHH assistant persona, and the implications for alignment. Multi...

“Mech interp is not pre-paradigmatic” by Lee Sharkey

17 Jun 2025

Contributed by Lukas

This is a blogpost version of a talk I gave earlier this year at GDM. Epistemic status: Vague and handwavy. Nuance is often missing. Some of the cl...

“Distillation Robustifies Unlearning” by Bruce W. Lee, Addie Foote, alexinf, leni, Jacob G-W, Harish Kamath, Bryce Woodworth, cloud, TurnTrout

17 Jun 2025

Contributed by Lukas

Current “unlearning” methods only suppress capabilities instead of truly unlearning the capabilities. But if you distill an unlearned model into ...

“Intelligence Is Not Magic, But Your Threshold For ‘Magic’ Is Pretty Low” by Expertium

17 Jun 2025

Contributed by Lukas

A while ago I saw a person in the comments on Scott Alexander's blog arguing that a superintelligent AI would not be able to do anyt...

“A Straightforward Explanation of the Good Regulator Theorem” by Alfred Harwood

17 Jun 2025

Contributed by Lukas

Audio note: this article contains 329 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text ...

“Beware General Claims about ‘Generalizable Reasoning Capabilities’ (of Modern AI Systems)” by LawrenceC

17 Jun 2025

Contributed by Lukas

1. Late last week, researchers at Apple released a paper provocatively titled “The Illusion of Thinking: Understanding the Strengths and Limitations...

“Season Recap of the Village: Agents raise $2,000” by Shoshannah Tekofsky

07 Jun 2025

Contributed by Lukas

Four agents woke up with four computers, a view of the world wide web, and a shared chat room full of humans. Like Claude plays Pokemon, you can watc...

“The Best Reference Works for Every Subject” by Parker Conley

06 Jun 2025

Contributed by Lukas

Introduction The Best Textbooks on Every Subject is the Schelling point for the best textbooks on every subject. My The Best Tacit Knowledge Videos o...

“‘Flaky breakthroughs’ pervade coaching — and no one tracks them” by Chipmonk

05 Jun 2025

Contributed by Lukas

Has someone you know ever had a “breakthrough” from coaching, meditation, or psychedelics — only to later have it fade? For example...

“The Value Proposition of Romantic Relationships” by johnswentworth

04 Jun 2025

Contributed by Lukas

What's the main value proposition of romantic relationships? Now, look, I know that when people drop that kind of question, they’re often abou...

“It’s hard to make scheming evals look realistic” by Igor Ivanov, dan_moken

02 Jun 2025

Contributed by Lukas

Abstract Claude 3.7 Sonnet easily detects when it's being evaluated for scheming. Surface‑level edits to evaluation scenarios, such as lengthe...

[Linkpost] “Social Anxiety Isn’t About Being Liked” by Chipmonk

01 Jun 2025

Contributed by Lukas

This is a link post. There's this popular idea that socially anxious folks are just dying to be liked. It seems logical, right? Why else would so...

“Truth or Dare” by Duncan Sabien (Inactive)

31 May 2025

Contributed by Lukas

Author's note: This is my apparently-annual "I'll put a post on LessWrong in honor of LessOnline" post. These days, my writing g...

“Meditations on Doge” by Martin Sustrik

30 May 2025

Contributed by Lukas

Lessons from shutting down institutions in Eastern Europe. This is a cross post from: https://250bpm.substack.com/p/meditations-on-doge Imagine l...

[Linkpost] “If you’re not sure how to sort a list or grid—seriate it!” by gwern

28 May 2025

Contributed by Lukas

This is a link post. "Getting Things in Order: An Introduction to the R Package seriation": Seriation (or "ordination"), i.e., fin...

“What We Learned from Briefing 70+ Lawmakers on the Threat from AI” by leticiagarcia

28 May 2025

Contributed by Lukas

Between late 2024 and mid-May 2025, I briefed over 70 cross-party UK parliamentarians. Just over one-third were MPs, a similar share were members of ...

“Winning the power to lose” by KatjaGrace

23 May 2025

Contributed by Lukas

Have the Accelerationists won? Last November Kevin Roose announced that those in favor of going fast on AI had now won against those favoring caution...

[Linkpost] “Gemini Diffusion: watch this space” by Yair Halberstadt

22 May 2025

Contributed by Lukas

This is a link post. Google Deepmind has announced Gemini Diffusion. Though buried under a host of other I/O announcements, it's possible that this...

“AI Doomerism in 1879” by David Gross

21 May 2025

Contributed by Lukas

I’m reading George Eliot's Impressions of Theophrastus Such (1879)—so far a snoozer compared to her novels. But chapter 17 surprised me for ...

“Consider not donating under $100 to political candidates” by DanielFilan

16 May 2025

Contributed by Lukas

Epistemic status: thing people have told me that seems right. Also primarily relevant to US audiences. Also I am speaking in my personal capacity and...

“It’s Okay to Feel Bad for a Bit” by moridinamael

16 May 2025

Contributed by Lukas

"If you kiss your child, or your wife, say that you only kiss things which are human, and thus you will not be disturbed if either of them dies....

“Explaining British Naval Dominance During the Age of Sail” by Arjun Panickssery

15 May 2025

Contributed by Lukas

The other day I discussed how high monitoring costs can explain the emergence of “aristocratic” systems of governance: Aristocracy and Hostage Ca...

“Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies” by So8res

14 May 2025

Contributed by Lukas

Eliezer and I wrote a book. It's titled If Anyone Builds It, Everyone Dies. Unlike a lot of other writing either of us have done, it's bein...

“Too Soon” by Gordon Seidoh Worley

14 May 2025

Contributed by Lukas

It was a cold and cloudy San Francisco Sunday. My wife and I were having lunch with friends at a Korean cafe. My phone buzzed with a text. It said my...

“PSA: The LessWrong Feedback Service” by JustisMills

13 May 2025

Contributed by Lukas

At the bottom of the LessWrong post editor, if you have at least 100 global karma, you may have noticed this button. Many people click the ...

“Orienting Toward Wizard Power” by johnswentworth

08 May 2025

Contributed by Lukas

For months, I had the feeling: something is wrong. Some core part of myself had gone missing. I had words and ideas cached, which pointed back to the...
