by Hugo Bowne-Anderson
A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson. It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
Language
🇺🇸
Publishing Since
2/16/2022
Email Addresses
1 available
Phone Numbers
0 available
April 7, 2025
<p>What if the cost of writing code dropped to zero — but the cost of understanding it skyrocketed?</p> <p>In this episode, Hugo sits down with Joe Reis to unpack how AI tooling is reshaping the software development lifecycle — from experimentation and prototyping to deployment, maintainability, and everything in between.</p> <p>Joe is the co-author of <em>Fundamentals of Data Engineering</em> and a longtime voice on the systems side of modern software. He’s also one of the sharpest critics of “vibe coding” — the emerging pattern of writing software by feel, with heavy reliance on LLMs and little regard for structure or quality.</p> <p>We dive into:<br> • Why “vibe coding” is more than a meme — and what it says about how we build today<br> • How AI tools expand the surface area of software creation — for better and worse<br> • What happens to technical debt, testing, and security when generation outpaces understanding<br> • The changing definition of “production” in a world of ephemeral, internal, or just-good-enough tools<br> • How AI is flattening the learning curve — and threatening the talent pipeline<br> • Joe’s view on what real craftsmanship means in an age of disposable code</p> <p>This conversation isn’t about doom, and it’s not about hype. 
It’s about mapping the real, messy terrain of what it means to build software today — and how to do it with care.</p> <p><strong>LINKS</strong></p> <ul> <li><a href="https://practicaldatamodeling.substack.com/" rel="nofollow">Joe's Practical Data Modeling Newsletter on Substack</a></li> <li><a href="https://discord.gg/HhSZVvWDBb" rel="nofollow">Joe's Practical Data Modeling Server on Discord</a></li> <li><a href="https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA" rel="nofollow">Vanishing Gradients YouTube Channel</a><br></li> <li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a></li> </ul> <p>🎓 Want to go deeper?<br> Check out my course: Building LLM Applications for Data Scientists and Software Engineers.<br> Learn how to design, test, and deploy production-grade LLM systems — with observability, feedback loops, and structure built in.<br> This isn’t about vibes or fragile agents. It’s about making LLMs reliable, testable, and actually useful.</p> <p>Includes over $2,500 in compute credits and guest lectures from experts at DeepMind, Moderna, and more.<br> Cohort starts April 7 — <a href="https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?promoCode=LLM10" rel="nofollow">Use this link for a 10% discount</a></p>
April 3, 2025
<p>What if building software felt more like composing than coding?</p> <p>In this episode, Hugo and Greg explore how LLMs are reshaping the way we think about software development—from deterministic programming to a more flexible, prompt-driven, and collaborative style of building. It’s not just hype or grift—it’s a real shift in how we express intent, reason about systems, and collaborate across roles.</p> <p>Hugo speaks with Greg Ceccarelli—co-founder of SpecStory, former CPO at Pluralsight, and Director of Data Science at GitHub—about the rise of software composition and how it changes the way individuals and teams create with LLMs.</p> <p>We dive into:</p> <ul> <li>Why software composition is emerging as a serious alternative to traditional coding</li> <li>The real difference between vibe coding and production-minded prototyping</li> <li>How LLMs are expanding who gets to build software—and how</li> <li>What changes when you focus on intent, not just code</li> <li>What Greg is building with SpecStory to support collaborative, traceable AI-native workflows</li> <li>The challenges (and joys) of debugging and exploring with agentic tools like Cursor and Claude</li> </ul> <p>We’ve removed the visual demos from the audio—but you can catch our live-coded Chrome extension and JFK document explorer on YouTube. 
Links below.</p> <ul> <li><a href="https://youtu.be/JpXCkuV58QE" rel="nofollow">JFK Docs Vibe Coding Demo (YouTube)</a><br></li> <li><a href="https://youtu.be/ESVKp37jDwc" rel="nofollow">Chrome Extension Vibe Coding Demo (YouTube)</a><br></li> <li><a href="https://www.meditationsontech.com/" rel="nofollow">Meditations on Tech (Greg’s Substack)</a><br></li> <li><a href="https://simonwillison.net/2025/Mar/19/vibe-coding/" rel="nofollow">Simon Willison on Vibe Coding</a><br></li> <li><a href="https://johnowhitaker.dev/essays/vibe_coding.html" rel="nofollow">Johnno Whitaker: On Vibe Coding</a><br></li> <li><a href="https://www.oreilly.com/radar/the-end-of-programming-as-we-know-it/" rel="nofollow">Tim O’Reilly – The End of Programming</a><br></li> <li><a href="https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA" rel="nofollow">Vanishing Gradients YouTube Channel</a><br></li> <li><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Upcoming Events on Luma</a><br></li> <li><a href="https://www.linkedin.com/in/gregceccarelli/" rel="nofollow">Greg Ceccarelli on LinkedIn</a><br></li> <li><a href="https://news.ycombinator.com/item?id=43557698" rel="nofollow">Greg’s Hacker News Post on GOOD</a><br></li> <li><a href="https://github.com/specstoryai/getspecstory/blob/main/GOOD.md" rel="nofollow">SpecStory: GOOD – Git Companion for AI Workflows</a></li> </ul> <p>🎓 Want to go deeper?<br> Check out my course: Building LLM Applications for Data Scientists and Software Engineers.<br> Learn how to design, test, and deploy production-grade LLM systems — with observability, feedback loops, and structure built in.<br> This isn’t about vibes or fragile agents. 
It’s about making LLMs reliable, testable, and actually useful.</p> <p>Includes over $2,500 in compute credits and guest lectures from experts at DeepMind, Moderna, and more.<br> Cohort starts April 7 — <a href="https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?promoCode=LLM10" rel="nofollow">Use this link for a 10% discount</a></p> <h3>🔍 Want to help shape the future of SpecStory?</h3> <p>Greg and the team are looking for <strong>design partners</strong> for their new SpecStory Teams product—built for collaborative, AI-native software development.</p> <p>If you're working with LLMs in a team setting and want to influence the next wave of developer tools, you can apply here:<br><br> 👉 <a href="https://specstory.com/teams" rel="nofollow">specstory.com/teams</a></p>
February 20, 2025
<p>Too many teams are building AI applications without truly understanding why their models fail. Instead of jumping straight to LLM evaluations, dashboards, or vibe checks, how do you actually <strong>fix</strong> a broken AI app? </p> <p>In this episode, Hugo speaks with <strong>Hamel Husain</strong>, longtime ML engineer, open-source contributor, and consultant, about why debugging generative AI systems starts with looking at your data. </p> <p>In this episode, we dive into: </p> <ul> <li>Why “look at your data” is the best debugging advice no one follows.<br></li> <li>How <strong>spreadsheet-based error analysis</strong> can uncover failure modes faster than complex dashboards.<br></li> <li>The role of <strong>synthetic data</strong> in bootstrapping evaluation.<br></li> <li>When to trust <strong>LLM judges</strong>—and when they’re misleading.<br></li> <li>Why most AI dashboards measuring <strong>truthfulness, helpfulness, and conciseness</strong> are often a waste of time.<br></li> </ul> <p>If you're building AI-powered applications, this episode will change how you approach debugging, iteration, and improving model performance in production. 
</p> <p><strong>LINKS</strong></p> <ul> <li><a href="https://youtube.com/live/Vz4--82M2_0?feature=share" rel="nofollow">The podcast livestream on YouTube</a></li> <li><a href="https://hamel.dev/" rel="nofollow">Hamel's blog</a></li> <li><a href="https://x.com/HamelHusain" rel="nofollow">Hamel on Twitter</a></li> <li><a href="https://x.com/hugobowne" rel="nofollow">Hugo on Twitter</a></li> <li><a href="https://x.com/vanishingdata" rel="nofollow">Vanishing Gradients on Twitter</a></li> <li><a href="https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA" rel="nofollow">Vanishing Gradients on YouTube</a></li> <li><p><a href="https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk" rel="nofollow">Vanishing Gradients on Lu.ma</a></p></li> <li><p><a href="https://maven.com/s/course/d56067f338" rel="nofollow">Building LLM Applications for Data Scientists and SWEs, Hugo's course on Maven (use code VG25 for 25% off)</a></p></li> <li><p><a href="https://maven.com/p/ed7a72/llm-agents-when-to-use-them-and-when-not-to?utm_medium=ll_share_link&utm_source=instructor" rel="nofollow">Hugo is also running a free lightning lesson next week on LLM Agents: When to Use Them (and When Not To)</a></p></li> </ul>
swyx + Alessio
Sam Charrington
Machine Learning Street Talk (MLST)
Practical AI LLC
Conviction
Kyle Polich
Michael Kennedy
Andreessen Horowitz
Jon Krohn
Dwarkesh Patel
DataCamp
Sequoia Capital
Michael Kennedy and Brian Okken
Michael Sharkey, Chris Sharkey
Perplexity
Pod Engine is not affiliated with, endorsed by, or officially connected with any of the podcasts displayed on this platform. We operate independently as a podcast discovery and analytics service.
All podcast artwork, thumbnails, and content displayed on this page are the property of their respective owners and are protected by applicable copyright laws. This includes, but is not limited to, podcast cover art, episode artwork, show descriptions, episode titles, transcripts, audio snippets, and any other content originating from the podcast creators or their licensors.
We display this content under fair use principles and/or implied license for the purpose of podcast discovery, information, and commentary. We make no claim of ownership over any podcast content, artwork, or related materials shown on this platform. All trademarks, service marks, and trade names are the property of their respective owners.
While we strive to ensure all content usage is properly authorized, if you are a rights holder and believe your content is being used inappropriately or without proper authorization, please contact us immediately at [email protected] for prompt review and appropriate action, which may include content removal or proper attribution.
By accessing and using this platform, you acknowledge and agree to respect all applicable copyright laws and intellectual property rights of content owners. Any unauthorized reproduction, distribution, or commercial use of the content displayed on this platform is strictly prohibited.