by Robert Roche
The purpose of this project is to help lay the foundation for becoming a Type One Planet. Watch Ep. 1 for an explanation of what a Type One Planet is. This video podcast features interviews, round tables, debates, and collaborative sessions that explore the concepts needed to build that foundation. Each guest is a mission-driven visionary, including inventors, scientists, and early-stage entrepreneurs, and each interview is a deep exploration of the innovation, technology, or line of scientific study the guest is pursuing.
Language: 🇺🇲
Publishing Since: 10/16/2022
Email Addresses: 1 available
Phone Numbers: 0 available
November 17, 2023
◄ Episode Description

Ruben Dieleman is a campaigner for the Existential Risk Observatory, an organization dedicated to reducing human existential risk by raising public awareness of the threats facing our civilization. In this interview, Ruben focuses primarily on artificial intelligence, discussing the AI alignment problem, defining key phrases used in AI debates, and explaining why there are so many differing perspectives on AI's risks and ethical development.

We explore how awareness affects outcomes and how to educate politicians and the public on complex issues like AI without causing confusion or dismissal. Ruben recommends newsletters, individuals, and organizations to follow to stay current on AI safety research and debates. He also previews an upcoming summit on AI safety that the Existential Risk Observatory is hosting, indicating it will be an important milestone in bringing more political leaders into the conversation.

Overall, this is an essential listen for anyone looking to enhance their understanding of AI risk, the dynamics of the AI safety community, and how civil society organizations are working to raise awareness of issues relating to human existential risk.

◄ Episode Timestamps

(00:00:00) Introduction
(00:02:00) The mission of the Existential Risk Observatory
(00:03:24) Where the 1 in 6 existential risk statistic comes from
(00:05:07) Defining existential risk
(00:07:33) Explaining unaligned AI and the alignment problem
(00:09:16) Moving away from the concept of AI alignment
(00:10:56) New concepts like scalable/responsible AI
(00:12:25) Calls for a moratorium on certain kinds of AI development
(00:13:34) Game theory dynamics around calls for AI pauses
(00:15:08) Key risks posed by artificial superintelligence
(00:16:46) Informing the general public without inducing dismissiveness
(00:18:45) AI and the future of human employment
(00:21:01) The upcoming AI Safety Summit and what it signifies
(00:23:40) Keeping abreast of AI developments and debates
(00:26:43) Communicating AI risks to politicians and the general public
(00:29:37) Government regulation and oversight of AI development
(00:31:43) Hopes for initiatives like an AI atomic agency
(00:33:15) Resources for staying current on AI safety topics
(00:35:31) How to follow the Existential Risk Observatory's work

◄ Episode Topic Score

Culture (8)
Design (7)
Education (9)
Environment (4)
Science (6)
Technology (10)

◄ Additional Episode Resources

Existential Risk Observatory: https://www.existentialriskobservatory.org/
Ruben’s Twitter: https://twitter.com/RBNDLM
AI Summit Talk Recording: https://www.youtube.com/watch?v=n3LIKX13V60

◄ Ruben’s ultimate AI newsletter recommendations:

Existential Risk Observatory Newsletter: https://xriskobservatory.substack.com/
Navigating AI Risks: https://www.navigatingrisks.ai/
Second Best: https://www.secondbest.ca/
The EU AI Act Newsletter: https://artificialintelligenceact.substack.com/
AGI Safety Weekly: https://safety.blog/
Marcus On AI: https://garymarcus.substack.com/
AI Policy Perspectives: https://aipolicyperspectives.substack.com/
AI Safety Newsletter: https://newsletter.safe.ai/
Understanding AI: https://www.understandingai.org/
November 3, 2023
◄ Episode Description

Tobias Rose-Stockwell is the author of “The Outrage Machine: How Tech Amplifies Discontent, Disrupts Democracy and What We Can Do About It”. He is a leading thinker and researcher on the impacts of social media and technology on society. In this interview, Tobias provides deep insights into how social media companies like Facebook, Twitter, and TikTok have shaped modern discourse, how they have impacted institutions like journalism and democracy, and how they can be transformed to positively alter the trajectory of our species.

We have an in-depth discussion of the incentives and structures within social media that drive outrage, disinformation, and division. Key topics include how social media algorithms maximize engagement through intermittent variable rewards, how context collapse and context creep distort information, and how journalism has been impacted by the race for clicks and outrage. Tobias outlines constructive solutions, including regulations like the Platform Accountability and Transparency Act, as well as bottom-up community moderation tools like X's Community Notes.

This episode is important for understanding the urgent challenges we face from today's social media landscape, and how we can create technologies that bring out the best in humankind rather than the worst. Tobias makes a compelling case that while social media has delivered tremendous value, thoughtful reform of incentives, greater transparency, and empowering users are needed to realize its full potential while mitigating harms. His articulate analysis provides actionable insights for users, platforms, and policymakers alike.

◄ Episode Timestamps

(00:00:00) Introduction
(00:03:00) Defining key concepts like memes, Moloch, and the dark valley
(00:08:30) How social media incentives drive engagement and virality
(00:13:00) The early optimistic vision for social media
(00:17:00) How social media won out - verified connections
(00:22:00) Features that shifted social media to information sharing
(00:29:00) Virality, clickbait and optimized outrage
(00:36:00) Incentives corrupting journalism
(00:43:00) Context collapse and context creep distorting events
(00:51:00) Scissor statements perfectly dividing groups
(00:53:00) Free speech challenges and minority opinions
(01:08:30) Can social media be reformed or is it inherently corrupting?
(01:12:00) Individual tactics for healthier media consumption
(01:14:00) Policy reforms like the Platform Accountability and Transparency Act
(01:15:00) Better bottom-up and community-driven moderation approaches

◄ Episode Topic Score

Culture (9)
Design (8)
Education (7)
Environment (4)
Science (5)
Technology (10)

◄ Additional Episode Resources

Outrage Machine: https://www.outragemachine.org/
X Account: https://twitter.com/TobiasRose
Instagram: https://www.instagram.com/tobiased/?hl=en
Medium: https://tobiasrose.medium.com/
Website: https://tobias.cc/

◄ Engage with Type One Planet:

Website: www.typeoneplanet.net
Instagram: https://www.instagram.com/typeoneplanet/
TikTok: https://www.tiktok.com/@typeoneplanet
October 6, 2023
◄ Episode Description

Joseph Tainter is a professor in the Department of Environment and Society at Utah State University. He is perhaps best known as the author of the 1988 book The Collapse of Complex Societies, which examines the dynamics and processes that lead civilizations to decay and unravel. This seminal work remains a key text for anyone seeking to comprehend how societies evolve, adapt, and sometimes catastrophically fail.

In his research, Joseph tackles big questions about civilizational sustainability, the ability to problem-solve, and the complex interplay of factors that allow civilizations to thrive or decline. His core argument is that as societies evolve to solve problems, they become more complex. This added complexity initially yields benefits and new capabilities, but over time it requires ever more resources to sustain itself, leading to diminishing returns. Eventually the costs of maintaining complexity overwhelm the benefits, setting the stage for collapse.

Joseph’s ability to analyze civilizations through an anthropological lens provides a unique vantage point for assessing our current global system.

◄ Episode Timestamps

(00:00:00) Defining complexity in societies - structure and organization
(00:05:25) The tradeoff between structure and organization
(00:06:54) Why inequality and heterogeneity are signs of a complex society
(00:08:15) Why collapse may not intrinsically be a catastrophe
(00:11:00) Every time history repeats, the cost goes up
(00:14:05) The diminishing returns of sociopolitical complexity
(00:17:06) Assessing our response to COVID-19
(00:22:20) The paradox of collectively investing in complexity
(00:26:30) Why energy subsidies delay civilizational collapse
(00:28:52) Modern existential risks
(00:30:55) What has changed about existential risk in recent decades?
(00:37:00) Book recommendations for learning about civilizational collapse
(00:39:30) Explanation of Type One Planet

◄ Episode Topic Score

Culture (9)
Design (8)
Education (10)
Environment (5)
Science (6)
Technology (7)

◄ Additional Episode Resources

The Collapse of Complex Societies (Book): https://www.amazon.com/Collapse-Complex-Societies-Studies-Archaeology/dp/052138673X
The Great Wave (Book): https://en.wikipedia.org/wiki/The_Great_Wave_(book)
Heat, Power, and Light (Book): https://www.amazon.com/Heat-Power-Light-Revolutions-Services/dp/1845426606
The Lessons of History (Book): https://www.amazon.com/Lessons-History-Will-Durant/dp/143914995X

◄ Engage with Type One Planet:

Website: www.typeoneplanet.net
Instagram: https://www.instagram.com/typeoneplanet/
TikTok: https://www.tiktok.com/@typeoneplanet