by IVANCAST PODCAST
IVANCAST PODCAST - The first multilingual podcast of Ecuador. IVANCAST explores the experiences of humans of the world who either live in the Ecuadorean Amazon Rainforest or are doing soulful, creative things all over the globe.
Language
🇺🇲
Publishing Since
8/20/2019
March 22, 2025
In this episode of our AI-focused season, SHIFTERLABS uses Google LM to unravel the groundbreaking research “How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study,” conducted by researchers from the MIT Media Lab and OpenAI. Over a span of 28 days and 300,000+ messages exchanged, 981 participants were immersed in conversations with ChatGPT across various modalities—text, neutral voice, and emotionally engaging voice. The study examined the psychological and social consequences of daily AI chatbot interactions, investigating outcomes like loneliness, social withdrawal, emotional dependence, and problematic usage patterns. The findings are both fascinating and alarming. While chatbots showed initial benefits—especially voice-based ones—in alleviating loneliness, prolonged and emotionally charged interactions led to increased dependence and reduced real-life socialization. The study identifies vulnerable user patterns, highlights how design decisions and user behavior intertwine, and underscores the urgent need for psychosocial guardrails in AI systems. At SHIFTERLABS, this research hits home. It validates our concerns and fuels our mission: to explore and inform the public about the deep human and societal consequences of AI integration. We’re not just observers—we are conducting similar experiments, and we’ll be revealing some of our own findings in the upcoming episode of El Reloj de la Singularidad. Can machines fill the emotional void, or are we designing a new kind of digital dependency? 🔍 Tune in to understand how AI is quietly reshaping human intimacy—and why AI literacy and emotional resilience must go hand in hand. 🎧 Stay curious, stay critical—with SHIFTERLABS. www.shifterlabs.com
March 22, 2025
In this compelling episode of our research-driven season, SHIFTERLABS once again harnesses Google LM to decode the latest frontiers of human-AI interaction. Today, we explore “Investigating Affective Use and Emotional Well-being on ChatGPT,” a collaborative study by Jason Phang, Michael Lampe, Lama Ahmad, and Sandhini Agarwal (OpenAI) and Cathy Fang, Auren Liu, Valdemar Danry, Samantha Chan, and Pattie Maes (MIT Media Lab). This groundbreaking research combines large-scale usage analysis with a randomized controlled trial to explore how interactions with AI—especially through voice—are shaping users’ emotional well-being, behavior, and sense of connection. With over 4 million conversations analyzed and 981 participants followed over 28 days, the findings are both revealing and urgent. From the rise of affective cues and emotional dependence in power users to the nuanced effects of voice-based models on loneliness and socialization, this study brings to light the subtle but powerful ways AI is embedding itself into our emotional lives. At SHIFTERLABS, we are not just observers—we are experimenting with these technologies ourselves. This episode sets the stage for our upcoming discussion in El Reloj de la Singularidad, where we’ll present our own findings on AI-human emotional bonds. 🔍 This episode is part of our mission to make AI research accessible and spark vital conversations about socioaffective alignment, AI literacy, and ethical design in a world where technology is becoming deeply personal. 🎧 Tune in and stay ahead of the curve with SHIFTERLABS. www.shifterlabs.com
March 4, 2025
In this episode of our special season, SHIFTERLABS leverages Google LM to demystify cutting-edge research, translating complex insights into actionable knowledge. Today, we explore “AI Agents and Education: Simulated Practice at Scale,” a groundbreaking study by Ethan Mollick, Lilach Mollick, Natalie Bach, LJ Ciccarelli, Ben Przystanski, and Daniel Ravipinto from the Generative AI Lab at the Wharton School, University of Pennsylvania. The study introduces a powerful new approach to AI-driven educational simulations, showcasing how generative AI can create adaptive, scalable learning environments. Through AI-powered mentors, role-playing agents, and instructor-facing evaluators, simulations can now provide personalized, interactive practice opportunities—without the traditional barriers of cost and complexity. A key case study in the research is PitchQuest, an AI-driven venture capital pitching simulator that allows students to hone their pitching skills with virtual investors, mentors, and evaluators. But the implications go far beyond entrepreneurship—AI agents could revolutionize skill-building across fields like healthcare, law, and management training. Yet AI-driven simulations also come with challenges: bias, hallucinations, and difficulties maintaining narrative consistency. Can AI truly replace human-guided training? How can educators integrate these tools responsibly? Join us as we break down this research and discuss how generative AI is transforming the future of education. 🔍 This episode is part of our mission to make AI research accessible, bridging the gap between innovation and education in an AI-integrated world. 🎧 Tune in now and stay ahead of the curve with SHIFTERLABS.