
LessWrong (Curated & Popular)

Technology · Society & Culture

Episodes

Showing 501-600 of 743
«« ← Prev Page 6 of 8 Next → »»

[HUMAN VOICE] "Acting Wholesomely" by OwenCB

05 Apr 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/Cb7oajdrA5DsHCqKd/a...

The Story of “I Have Been A Good Bing”

01 Apr 2024

Contributed by Lukas

Rationality is Systematized Winning, so rationalists should win. We've tried saving the world from AI, but that's really hard and we've had ...

The Best Tacit Knowledge Videos on Every Subject

01 Apr 2024

Contributed by Lukas

TL;DR: Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tac...

[HUMAN VOICE] "Deep atheism and AI risk" by Joe Carlsmith

20 Mar 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/sJPbmm8Gd34vGYrKd/d...

[HUMAN VOICE] "My Clients, The Liars" by ymeskhout

20 Mar 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/h99tRkpQGxwtb9Dpv/m...

[HUMAN VOICE] "Speaking to Congressional staffers about AI risk" by Akash, hath

10 Mar 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/s...

[HUMAN VOICE] "CFAR Takeaways: Andrew Critch" by Raemon

10 Mar 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/Jash4Gbi2wpThzZ4k/c...

Many arguments for AI x-risk are wrong

09 Mar 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. The following is a lightly edited version of a memo I wrote for ...

Tips for Empirical Alignment Research

07 Mar 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. TLDR: I've collected some tips for research that I've given ...

Timaeus’s First Four Months

29 Feb 2024

Contributed by Lukas

Timaeus was announced in late October 2023, with the mission of making fundamental breakthroughs in technical AI alignment using deep ideas from mathe...

Contra Ngo et al. “Every ‘Every Bay Area House Party’ Bay Area House Party”

23 Feb 2024

Contributed by Lukas

This is a linkpost for https://bayesshammai.substack.com/p/contra-ngo-et-al-every-every-bay. With thanks to Scott Alexander for the inspiration, Jeffrey...

[HUMAN VOICE] "Updatelessness doesn't solve most problems" by Martín Soto

20 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/g8HHKaWENEbqh2mgK/u...

[HUMAN VOICE] "And All the Shoggoths Merely Players" by Zack_M_Davis

20 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/8yCXeafJo67tYe5L4/a...

Every “Every Bay Area House Party” Bay Area House Party

19 Feb 2024

Contributed by Lukas

Inspired by a house party inspired by Scott Alexander. By the time you arrive in Berkeley, the party is already in full swing. You've come late becau...

2023 Survey Results

19 Feb 2024

Contributed by Lukas

The Data. 0. Population: There were 558 responses over 32 days. The spacing and timing of the responses had hills and valleys because of an experiment I...

Raising children on the eve of AI

18 Feb 2024

Contributed by Lukas

Cross-posted with light edits from Otherwise. I think of us in some kind of twilight world as transformative AI looks more likely: things are about to...

“No-one in my org puts money in their pension”

18 Feb 2024

Contributed by Lukas

This is a linkpost for https://seekingtobejolly.substack.com/p/no-one-in-my-org-puts-money-in-their. Epistemic status: the stories here are all as true ...

Masterpiece

16 Feb 2024

Contributed by Lukas

This is a linkpost for https://www.narrativeark.xyz/p/masterpiece. A sequel to qntm's Lena. Reading Lena first is helpful but not necessary. We're...

CFAR Takeaways: Andrew Critch

15 Feb 2024

Contributed by Lukas

I'm trying to build my own art of rationality training, and I've started talking to various CFAR instructors about their experiences – thi...

[HUMAN VOICE] "Believing In" by Anna Salamon

14 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/duvzdffTzL3dWJcxn/b...

[HUMAN VOICE] "Attitudes about Applied Rationality" by Camille Berger

14 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/5jdqtpT6StjKDKacw/a...

Scale Was All We Needed, At First

14 Feb 2024

Contributed by Lukas

This is a hasty speculative fiction vignette of one way I expect we might get AGI by January 2025 (within about one year of writing this). Like simila...

Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

11 Feb 2024

Contributed by Lukas

This is a linkpost for https://garrisonlovely.substack.com/p/sam-altmans-chip-ambitions-undercut If you enjoy this, please consider subscribing to my ...

[HUMAN VOICE] "A Shutdown Problem Proposal" by johnswentworth, David Lorell

09 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/PhTBDHu9PKJFmvb4p/a...

Brute Force Manufactured Consensus is Hiding the Crime of the Century

04 Feb 2024

Contributed by Lukas

People often parse information through an epistemic consensus filter. They do not ask "is this true", they ask "will others be OK with ...

[HUMAN VOICE] "Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI" by Jeremy Gillen, peterbarnett

03 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/w...

Leading The Parade

02 Feb 2024

Contributed by Lukas

Background Terminology: Counterfactual Impact vs "Leading The Parade". Y'know how a parade or marching band has a person who walks in front wavin...

[HUMAN VOICE] "The case for ensuring that powerful AIs are controlled" by ryan_greenblatt, Buck

02 Feb 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/kcKrE9mzEHrdqtDpE/t...

Processor clock speeds are not how fast AIs think

01 Feb 2024

Contributed by Lukas

I often encounter some confusion about whether the fact that synapses in the brain typically fire at frequencies of 1-100 Hz while the clock frequency...

Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI

31 Jan 2024

Contributed by Lukas

A pdf version of this report is available here. Summary: In this report we argue that AI systems capable of large scale scientific research will likel...

Making every researcher seek grants is a broken model

29 Jan 2024

Contributed by Lukas

This is a linkpost for https://rootsofprogress.org/the-block-funding-model-for-science. When Galileo wanted to study the heavens through his telescope, ...

The case for training frontier AIs on Sumerian-only corpus

28 Jan 2024

Contributed by Lukas

Let your every day be full of joy, love the child that holds your hand, let your wife delight in your embrace, for these alone are the concerns of hum...

This might be the last AI Safety Camp

25 Jan 2024

Contributed by Lukas

We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding. ...

[HUMAN VOICE] "There is way too much serendipity" by Malmesbury

22 Jan 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Crossposted from substack. As we all know, sugar is sweet a...

[HUMAN VOICE] "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training" by evhub et al

20 Jan 2024

Contributed by Lukas

This is a linkpost for https://arxiv.org/abs/2401.05566. Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Sou...

[HUMAN VOICE] "How useful is mechanistic interpretability?" by ryan_greenblatt, Neel Nanda, Buck, habryka

20 Jan 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Source: https://www.lesswrong.com/posts/tEPHGZAb63dfq2v8n/h...

The impossible problem of due process

17 Jan 2024

Contributed by Lukas

I wrote this entire post in February of 2023, during the fallout from the TIME article. I didn't post it at the time for multiple reasons: becaus...

[HUMAN VOICE] "Gentleness and the artificial Other" by Joe Carlsmith

14 Jan 2024

Contributed by Lukas

(Cross-posted from my website. Audio version here, or search "Joe Carlsmith Audio" on your podcast app.) This is the first essay ...

Introducing Alignment Stress-Testing at Anthropic

14 Jan 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Following on from our recent paper, “Sleeper Agents: Training ...

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

13 Jan 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a linkpost for https://arxiv.org/abs/2401.05566. I'm ...

[HUMAN VOICE] "Meaning & Agency" by Abram Demski

07 Jan 2024

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. The goal of this post is to clarify a few concepts relatin...

What’s up with LLMs representing XORs of arbitrary features?

07 Jan 2024

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Thanks to Clément Dumas, Nikola Jurković, Nora Belrose, Arthur...

Gentleness and the artificial Other

05 Jan 2024

Contributed by Lukas

(Cross-posted from my website. Audio version here, or search "Joe Carlsmith Audio" on your podcast app.) This is the first essay in a series t...

MIRI 2024 Mission and Strategy Update

05 Jan 2024

Contributed by Lukas

As we announced back in October, I have taken on the senior leadership role at MIRI as its CEO. It's a big pair of shoes to fill, and an awesome ...

The Plan - 2023 Version

04 Jan 2024

Contributed by Lukas

Background: The Plan, The Plan: 2022 Update. If you haven’t read those, don’t worry, we’re going to go through things from the top this year, an...

Apologizing is a Core Rationalist Skill

03 Jan 2024

Contributed by Lukas

In certain circumstances, apologizing can also be a countersignalling power-move, i.e. “I am so high status that I can grovel a bit without anybody ...

[HUMAN VOICE] "A case for AI alignment being difficult" by jessicata

02 Jan 2024

Contributed by Lukas

This is a linkpost for https://unstableontology.com/2023/12/31/a-case-for-ai-alignment-being-difficult/. Support ongoing human narrations of LessWrong's...

The Dark Arts

01 Jan 2024

Contributed by Lukas

lsusr: It is my understanding that you won all of your public forum debates this year. That's very impressive. I thought it would be interesting to...

Critical review of Christiano’s disagreements with Yudkowsky

28 Dec 2023

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a review of Paul Christiano's article "where I...

Most People Don’t Realize We Have No Idea How Our AIs Work

27 Dec 2023

Contributed by Lukas

This point feels fairly obvious, yet seems worth stating explicitly. Those of us familiar with the field of AI after the deep-learning revolution know ...

Discussion: Challenges with Unsupervised LLM Knowledge Discovery

26 Dec 2023

Contributed by Lukas

TL;DR: Contrast-consistent search (CCS) seemed exciting to us and we were keen to apply it. At this point, we think it is unlikely to be directly help...

Succession

24 Dec 2023

Contributed by Lukas

This is a linkpost for https://www.narrativeark.xyz/p/succession. “A table beside the evening sea where you sit shelling pistachios, flicking the next...

Nonlinear’s Evidence: Debunking False and Misleading Claims

21 Dec 2023

Contributed by Lukas

Recently, Ben Pace wrote a well-intentioned blog post mostly based on complaints from 2 (of 21) Nonlinear employees who 1) wanted more money, 2) felt...

Effective Aspersions: How the Nonlinear Investigation Went Wrong

20 Dec 2023

Contributed by Lukas

The New York Times. Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece ...

Constellations are Younger than Continents

20 Dec 2023

Contributed by Lukas

At the Bay Area Solstice, I heard the song Bold Orion for the first time. I like it a lot. It does, however, have one problem: He has seen the rise and...

The ‘Neglected Approaches’ Approach: AE Studio’s Alignment Agenda

19 Dec 2023

Contributed by Lukas

Many thanks to Samuel Hammond, Cate Hall, Beren Millidge, Steve Byrnes, Lucius Bushnaq, Joar Skalse, Kyle Gracey, Gunnar Zarncke, Ross Nordby, David L...

“Humanity vs. AGI” Will Never Look Like “Humanity vs. AGI” to Humanity

18 Dec 2023

Contributed by Lukas

When discussing AGI Risk, people often talk about it in terms of a war between humanity and an AGI. Comparisons between the amounts of resources at bo...

Is being sexy for your homies?

17 Dec 2023

Contributed by Lukas

Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a...

[HUMAN VOICE] "Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible" by Gene Smith and Kman

17 Dec 2023

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. TL;DR version: In the course of my life, there have been a h...

[HUMAN VOICE] "Moral Reality Check (a short story)" by jessicata

15 Dec 2023

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. This is a linkpost for https://unstableontology.com/2023/1...

AI Control: Improving Safety Despite Intentional Subversion

15 Dec 2023

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. We've released a paper, AI Control: Improving Safety Despite I...

2023 Unofficial LessWrong Census/Survey

13 Dec 2023

Contributed by Lukas

The Less Wrong General Census is unofficially here! You can take it at this link. It's that time again. If you are reading this post and identify a...

The likely first longevity drug is based on sketchy science. This is bad for science and bad for longevity.

13 Dec 2023

Contributed by Lukas

If you are interested in the longevity scene, like I am, you probably have seen press releases about the dog longevity company, Loyal for Dogs, gettin...

[HUMAN VOICE] "What are the results of more parental supervision and less outdoor play?" by Julia Wise

13 Dec 2023

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. Crossposted from Otherwise. Parents supervise their children...

Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible

12 Dec 2023

Contributed by Lukas

In the course of my life, there have been a handful of times I discovered an idea that changed the way I thought about the world. The first occurred w...

re: Yudkowsky on biological materials

11 Dec 2023

Contributed by Lukas

I was asked to respond to this comment by Eliezer Yudkowsky. This post is partly redundant with my previous post. Why is flesh weaker than diamond? When...

Speaking to Congressional staffers about AI risk

05 Dec 2023

Contributed by Lukas

In May and June of 2023, I (Akash) had about 50-70 meetings about AI risks with congressional staffers. I had been meaning to write a post reflecting ...

[HUMAN VOICE] "Shallow review of live agendas in alignment & safety" by technicalities & Stag

04 Dec 2023

Contributed by Lukas

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated. You can't optimise an allocation of resources if you don...

Thoughts on “AI is easy to control” by Pope & Belrose

02 Dec 2023

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Quintin Pope & Nora Belrose have a new “AI Optimists” we...

The 101 Space You Will Always Have With You

30 Nov 2023

Contributed by Lukas

Any community which ever adds new people will need to either routinely teach the new and (to established members) blindingly obvious information to th...

[HUMAN VOICE] "Social Dark Matter" by Duncan Sabien

28 Nov 2023

Contributed by Lukas

The author's Substack: https://substack.com/@homosabiens. Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurat...

Shallow review of live agendas in alignment & safety

28 Nov 2023

Contributed by Lukas

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Summary: You can't optimise an allocation of resources if you d...

Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense

25 Nov 2023

Contributed by Lukas

Status: Vague, sorry. The point seems almost tautological to me, and yet also seems like the correct answer to the people going around saying “LLMs ...

[HUMAN VOICE] "The 6D effect: When companies take risks, one email can be very powerful." by scasper

23 Nov 2023

Contributed by Lukas

Support ongoing human narrations of curated posts: www.patreon.com/LWCurated. Recently, I have been learning about industry norms, legal discovery procee...

OpenAI: The Battle of the Board

22 Nov 2023

Contributed by Lukas

Previously: OpenAI: Facts from a Weekend. On Friday afternoon, OpenAI's board fired CEO Sam Altman. Overnight, an agreement in principle was reac...

OpenAI: Facts from a Weekend

20 Nov 2023

Contributed by Lukas

Approximately four GPTs and seven years ago, OpenAI's founders brought forth on this corporate landscape a new entity, conceived in liberty, and ...

Sam Altman fired from OpenAI

18 Nov 2023

Contributed by Lukas

This is a linkpost for https://openai.com/blog/openai-announces-leadership-transition. Basically just the title; see the OAI blog post for more details...

Social Dark Matter

17 Nov 2023

Contributed by Lukas

You know it must be out there, but you mostly never see it. Author's Note 1: I'm something like 75% confident that this will be the last essa...

[HUMAN VOICE] "Thinking By The Clock" by Screwtape

17 Nov 2023

Contributed by Lukas

Support ongoing human narrations of curated posts: www.patreon.com/LWCurated. I'm sure Harry Potter and the Methods of Rationality taught me some of...

"You can just spontaneously call people you haven't met in years" by lc

17 Nov 2023

Contributed by Lukas

Here's a recent conversation I had with a friend: Me: "I wish I had more friends. You guys are great, but I only get to hang out with you lik...

[HUMAN VOICE] "AI Timelines" by habryka, Daniel Kokotajlo, Ajeya Cotra, Ege Erdil

17 Nov 2023

Contributed by Lukas

Support ongoing human narrations of curated posts: www.patreon.com/LWCurated. How many years will pass before transformative AI is built? Three people wh...

"EA orgs' legal structure inhibits risk taking and information sharing on the margin" by Elizabeth

17 Nov 2023

Contributed by Lukas

It’s fairly common for EA orgs to provide fiscal sponsorship to other EA orgs.  Wait, no, that sentence is not quite right. The more accurate sente...

"Integrity in AI Governance and Advocacy" by habryka, Olivia Jimenez

17 Nov 2023

Contributed by Lukas

habryka: Ok, so we both had some feelings about the recent Conjecture post on "lots of people in AI Alignment are lying", and the associated m...

Loudly Give Up, Don’t Quietly Fade

16 Nov 2023

Contributed by Lukas

1. There's a supercharged, dire wolf form of the bystander effect that I'd like to shine a spotlight on. First, a quick recap. The Bystander Effe...

"Does davidad's uploading moonshot work?" by jacobjabob et al.

09 Nov 2023

Contributed by Lukas

davidad has a 10-min talk out on a proposal about which he says: “the first time I’ve seen a concrete plan that might work to get human uploads be...

[HUMAN VOICE] "Deception Chess: Game #1" by Zane et al.

09 Nov 2023

Contributed by Lukas

Support ongoing human narrations of curated posts: www.patreon.com/LWCurated. (You can sign up to play deception chess here if you haven't already.)...

[HUMAN VOICE] "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning" by Zac Hatfield-Dodds

09 Nov 2023

Contributed by Lukas

Support ongoing human narrations of curated posts: www.patreon.com/LWCurated. This is a linkpost for https://transformer-circuits.pub/2023/monosemantic-f...

"The 6D effect: When companies take risks, one email can be very powerful." by scasper

09 Nov 2023

Contributed by Lukas

Recently, I have been learning about industry norms, legal discovery proceedings, and incentive structures related to companies building risky systems...

"The other side of the tidal wave" by Katja Grace

09 Nov 2023

Contributed by Lukas

I guess there’s maybe a 10-20% chance of AI causing human extinction in the coming decades, but I feel more distressed about it than even that sugge...

"Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk" by 1a3orn

09 Nov 2023

Contributed by Lukas

I examined all the biorisk-relevant citations from a policy paper arguing that we should ban powerful open source LLMs. None of them provide good evide...

"My thoughts on the social response to AI risk" by Matthew Barnett

09 Nov 2023

Contributed by Lukas

A common theme implicit in many AI risk stories has been that broader society will either fail to anticipate the risks of AI until it is too late, or ...

Comp Sci in 2027 (Short story by Eliezer Yudkowsky)

09 Nov 2023

Contributed by Lukas

This is a linkpost for https://nitter.net/ESYudkowsky/status/1718654143110512741. Comp sci in 2017: Student: I get the feeling the compiler is just ign...

"Thoughts on the AI Safety Summit company policy requests and responses" by So8res

03 Nov 2023

Contributed by Lukas

Over the next two days, the UK government is hosting an AI Safety Summit focused on “the safe and responsible development of frontier AI”. They re...

"President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence" by Tristan Williams

03 Nov 2023

Contributed by Lukas

This is a linkpost for https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-sa...

[HUMAN VOICE] "Book Review: Going Infinite" by Zvi

31 Oct 2023

Contributed by Lukas

Support ongoing human narrations of curated posts: www.patreon.com/LWCurated. Previously: Sadly, FTX. I doubted whether it would be a good use of time to r...

"Thoughts on responsible scaling policies and regulation" by Paul Christiano

30 Oct 2023

Contributed by Lukas

I am excited about AI developers implementing responsible scaling policies; I’ve recently been spending time refining this idea and advocating for i...

"We're Not Ready: thoughts on "pausing" and responsible scaling policies" by Holden Karnofsky

30 Oct 2023

Contributed by Lukas

Views are my own, not Open Philanthropy’s. I am married to the President of Anthropic and have a financial interest in both Anthropic and OpenAI via...

"At 87, Pearl is still able to change his mind" by rotatingpaguro

30 Oct 2023

Contributed by Lukas

Judea Pearl is a famous researcher, known for Bayesian networks (the standard way of representing Bayesian models), and his statistical formalization ...

"Architects of Our Own Demise: We Should Stop Developing AI" by Roko

30 Oct 2023

Contributed by Lukas

Some brief thoughts at a difficult time in the AI risk debate. Imagine you go back in time to the year 1999 and tell people that in 24 years' time, huma...

"AI as a science, and three obstacles to alignment strategies" by Nate Soares

30 Oct 2023

Contributed by Lukas

AI used to be a science. In the old days (back when AI didn't work very well), people were attempting to develop a working theory of cognition. Th...
