by The 80,000 Hours team
Resources on how to do good with your career — and anything else we here at 80,000 Hours feel like releasing.
Language: 🇺🇲
Publishing Since: 2/24/2022
Email Addresses: 1 available
Phone Numbers: 0 available
April 18, 2025
Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.

So some — including Buck Shlegeris, CEO of Redwood Research (https://www.redwoodresearch.org/) — are developing a backup plan to safely deploy models we fear are actively scheming to harm us: so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.

These highlights are from episode #214 of The 80,000 Hours Podcast (https://80000hours.org/podcast/): Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway (https://80000hours.org/podcast/episodes/buck-shlegeris-ai-control-scheming/), and include:

- What is AI control? (00:00:15)
- One way to catch AIs that are up to no good (00:07:00)
- What do we do once we catch a model trying to escape? (00:13:39)
- Team Human vs Team AI (00:18:24)
- If an AI escapes, is it likely to be able to beat humanity from there? (00:24:59)
- Is alignment still useful? (00:32:10)
- Could 10 safety-focused people in an AGI company do anything useful? (00:35:34)

These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

And if you're finding these highlights episodes valuable, please let us know by emailing [email protected].

Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong
April 1, 2025
Watch this episode on YouTube: https://youtu.be/fJssGodnCQg

Conor and Arden sit down with Matt in his farewell episode to discuss the law, their team retreat, his lessons learned from 80k, and the fate of the show.
March 25, 2025
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

That’s the future Will MacAskill — philosopher and researcher at the Forethought Centre for AI Strategy (https://www.forethought.org/) — argues we need to prepare for in his new paper “Preparing for the intelligence explosion” (https://www.forethought.org/preparing-for-the-intelligence-explosion). Not in the distant future, but probably in three to seven years.

These highlights are from episode #213 of The 80,000 Hours Podcast (https://80000hours.org/podcast/): Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared (https://80000hours.org/podcast/episodes/will-macaskill-century-in-a-decade-navigating-intelligence-explosion/), and include:

- Rob's intro (00:00:00)
- A century of history crammed into a decade (00:00:17)
- What does a good future with AGI even look like? (00:04:48)
- AI takeover might happen anyway — should we rush to load in our values? (00:09:29)
- Lock-in is plausible where it never was before (00:14:40)
- ML researchers are feverishly working to destroy their own power (00:20:07)
- People distrust utopianism for good reason (00:24:30)
- Non-technological disruption (00:29:18)
- The 3 intelligence explosions (00:31:10)

These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

And if you're finding these highlights episodes valuable, please let us know by emailing [email protected].

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong
Spencer Greenberg
Dwarkesh Patel
Mercatus Center at George Mason University
Patrick McKenzie
Machine Learning Street Talk (MLST)
Russ Roberts
Tom Chivers and Stuart Ritchie
Sean Carroll | Wondery
Sam Harris
New York Times Opinion
Bloomberg
The New York Times
Foreign Policy
Vox