by Fiddler AI
Safe and Sound AI is your go-to podcast for staying ahead in predictive and generative AI development. From pre-production design and post-production monitoring to governance and compliance, we deliver bite-sized episodes packed with technical insights and best practices. Designed for data scientists, engineers, trust and safety teams, and business leaders, our focus is to help you deliver and scale AI innovations with safety, trust, and transparency in mind. Safe and Sound AI is brought to you by Fiddler AI.
Language
🇺🇲
Publishing Since
12/12/2024
Email Addresses
0 available
Phone Numbers
0 available
April 2, 2025
In this episode, we discuss the new integration between Fiddler Guardrails with NVIDIA Nemo Guardrails, pairing the industry's fastest guardrails with your secure environment. We explore the setup process, practical implications, and the role of the Fiddler Trust Service in providing guardrails, monitoring and custom metrics. Plus, we highlight the free trial opportunity to experience Fiddler Guardrails firsthand. Read the article to learn more, or sign up for the Fiddler Guardrails free trial to test the integration for yourself.
February 27, 2025
In this episode, we explore how Fiddler Guardrails helps organizations keep large language models (LLMs) on track by moderating prompts and responses before they can cause damage. We break down its industry best latency, secure deployment options, and how it works with Fiddler’s AI observability platform to provide the visibility and control to adapt to evolving threats. Read the article to learn more about how Fiddler Guardrails can help safeguard your LLM Applications.
February 12, 2025
In this episode, we explore two key approaches for monitoring AI models: metrics and inference observation. We break down their trade-offs and provide real-world examples from various industries to illustrate the advantages of each model monitoring strategy for driving responsible AI development. Read the article by Fiddler AI and explore additional resources for more information on how AI observability can help developers build trust into AI services.
Pod Engine is not affiliated with, endorsed by, or officially connected with any of the podcasts displayed on this platform. We operate independently as a podcast discovery and analytics service.
All podcast artwork, thumbnails, and content displayed on this page are the property of their respective owners and are protected by applicable copyright laws. This includes, but is not limited to, podcast cover art, episode artwork, show descriptions, episode titles, transcripts, audio snippets, and any other content originating from the podcast creators or their licensors.
We display this content under fair use principles and/or implied license for the purpose of podcast discovery, information, and commentary. We make no claim of ownership over any podcast content, artwork, or related materials shown on this platform. All trademarks, service marks, and trade names are the property of their respective owners.
While we strive to ensure all content usage is properly authorized, if you are a rights holder and believe your content is being used inappropriately or without proper authorization, please contact us immediately at [email protected] for prompt review and appropriate action, which may include content removal or proper attribution.
By accessing and using this platform, you acknowledge and agree to respect all applicable copyright laws and intellectual property rights of content owners. Any unauthorized reproduction, distribution, or commercial use of the content displayed on this platform is strictly prohibited.