by Qohash
Welcome to Future of Data Security, the podcast where industry leaders come together to share their insights, lessons, and strategies at the forefront of data security. Each episode features in-depth interviews with top CISOs and security experts who discuss real-world solutions, innovations, and the latest technologies that are shaping the future of cybersecurity across various industries. Join us to gain actionable advice and stay ahead in the ever-evolving world of data security.
Language: English
Publishing Since: 9/3/2024
April 1, 2025
The security landscape has radically shifted from "if you get breached" to "when you get breached," and Morgan Stanley's approach to data protection reflects this fundamental change in mindset. In this episode of The Future of Data Security, Faith Rotimi-Ajayi, AVP of Operational Risk, discusses how sophisticated attackers now research and target specific financial institutions rather than relying on opportunistic attacks. Faith tells Jean why social engineering attacks have evolved to target entire family units, including compromising newborns' Social Security numbers for future fraud, and why third-party risk management demands rigorous new approaches as vendors increasingly implement AI without adequate security governance. She also shares her experience implementing dedicated AI governance committees, using risk-based authentication that adjusts friction based on user behavior analysis, and how the pandemic accelerated zero trust implementation by eliminating location-based security models.

Topics discussed:
- The challenges of maintaining operational resilience against increasingly sophisticated targeted attacks, rather than merely opportunistic ones, in the financial sector.
- The evolution of third-party risk management as attackers now strategically target trusted vendors to gain backdoor access to financial environments.
- How AI functions as a "double agent" in security, enhancing defensive capabilities while simultaneously enabling sophisticated deepfakes and voice cloning attacks.
- The emergence of shadow AI and strategies to mitigate risks through dedicated AI governance committees and internal alternative applications.
- Why regulatory compliance is an innovation driver rather than an obstacle, using frameworks like GDPR, GLBA, and DORA as baselines for robust security programs.
- Implementing security-by-design principles and risk-based authentication that adjusts friction based on context rather than applying uniform controls (see the sketch after this list).
- Using user behavior analysis (UBA) and indicators of compromise (IOCs) to create security measures that don't interrupt legitimate user activities.
- How the pandemic accelerated zero trust implementation by eliminating location-based security models and forcing more sophisticated endpoint security approaches.
- The importance of creating business-aligned data security frameworks that prioritize protection based on risk exposure rather than applying it uniformly.
- Why Faith emphasizes continuous monitoring and testing alongside preventative controls to maintain 24/7 visibility across distributed environments.
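The risk-based authentication pattern mentioned above can be sketched in a few lines. The Python example below is not from the episode; the signals, weights, and thresholds are hypothetical, and it only illustrates the idea of adjusting friction to context instead of applying uniform controls.

```python
# Hypothetical sketch of risk-based (adaptive) authentication.
# Signal names, weights, and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool    # device previously seen for this user
    usual_location: bool  # geolocation matches the user's typical pattern
    unusual_hour: bool    # login time far outside normal behavior (UBA signal)
    ioc_match: bool       # request matches a known indicator of compromise

def risk_score(ctx: LoginContext) -> int:
    """Combine weak behavioral signals into a single additive score."""
    score = 0
    score += 0 if ctx.known_device else 30
    score += 0 if ctx.usual_location else 25
    score += 20 if ctx.unusual_hour else 0
    score += 60 if ctx.ioc_match else 0
    return score

def required_friction(ctx: LoginContext) -> str:
    """Adjust authentication friction to the computed risk, not uniformly."""
    score = risk_score(ctx)
    if score >= 60:
        return "block_and_alert"  # likely compromise: stop and notify the SOC
    if score >= 30:
        return "step_up_mfa"      # unusual context: require a second factor
    return "password_only"        # familiar context: keep friction low

# Example: a new device from a familiar location triggers step-up MFA.
print(required_friction(LoginContext(False, True, False, False)))  # step_up_mfa
```

The point of the design is that a familiar device in a familiar context stays low-friction for legitimate users, while unusual behavior or an IOC match escalates to step-up authentication or a block.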
March 25, 2025
"If you aren't investing in penetration testing, if you aren't investing in having external auditing and third-party reporting like gray- and black-box testing, you're leaving your program extremely exploitable because you're just admiring the beauty of your own ideas." This blunt assessment from George Al-Koura, CISO at ruby, encapsulates his refreshingly practical approach to data security. In this episode of The Future of Data Security, George challenges conventional wisdom by predicting a major shift back to controlled data centers as organizations struggle to secure AI implementations in the cloud. He reflects on why no one has successfully created secure LLMs that can safely communicate with the open web, exposes the growing threat of "force-enabled" AI tools being integrated without proper consent, and explains why technical skills are actually the easiest part of building an effective security team. With threat actors now operating with enterprise-level organization and sophistication, George also shares battle-tested strategies for communicating risk effectively to boards and establishing security programs that can withstand sophisticated attacks.

Topics discussed:
- How skills from signals intelligence transfer directly to cybersecurity leadership, particularly the ability to provide concise risk-based analysis and make decisive calls under pressure.
- The challenge of getting organizations to invest in data security beyond compliance standards while facing increasingly sophisticated threat actors who operate with enterprise-level organization.
- The importance of establishing clear leadership accountability with properly designated roles (RACI), investing in appropriate technology, and implementing rigorous third-party auditing beyond certification standards.
- The gradual shift in board attitudes toward cybersecurity as a top-level concern, and how security leaders can effectively articulate business risk to secure necessary resources.
- How privacy requirements are increasingly driving security investments, creating a data-centric risk management framework that requires security leaders to articulate both concerns.
- The struggle to securely deploy LLMs that can communicate with the open web while protecting sensitive data, paired with the trend of returning to controlled data center environments.
- How major platforms are integrating AI capabilities with minimal user consent, creating shadow AI risks and forcing security teams to develop agile assessment processes.
- Looking beyond technical skills to prioritize integrity, work ethic, problem-solving ability, and social integration when forming security teams that can handle high-pressure situations.
March 11, 2025
In this insightful episode of The Future of Data Security, Jean Le Bouthillier speaks with Daniel Maynard, VP of Privacy and Data Risk Management & CPO at Early Warning, who shares his journey from law to privacy and offers a practical framework for assessing AI implementation risks, distinguishing between controllable technical risks and more complex model provenance concerns. Daniel tells Jean about the critical challenges facing financial institutions, including data quality issues, AI ethics considerations, and the paradox of balancing fraud prevention with privacy protection. Daniel provides actionable governance strategies for managing shadow AI, addresses emerging threats from AI-powered fraud, and offers valuable insights on the evolving regulatory landscape. His balanced approach emphasizes documented risk assessment processes while acknowledging varying organizational risk tolerances.

Topics discussed:
- The importance of data quality as a foundation for all other security and privacy initiatives in financial services.
- Emerging challenges with AI ethics and trust, particularly regarding data provenance and transparency in model development.
- Practical governance frameworks for implementing AI tools while documenting risk-based decision processes with executive buy-in.
- Model provenance risks and IP concerns when using AI tools to create potentially valuable intellectual property.
- Shadow AI challenges and strategies for managing employee use of AI tools while maintaining appropriate security controls.
- File access risks with AI assistants that can search through user-accessible content more thoroughly than humans typically would (see the sketch after this list).
- The paradoxical relationship between stronger fraud protections and potential negative privacy impacts from increased data collection.
- Predictions about federal AI regulation in the United States versus the more restrictive approach seen in Europe.
- Career advice for privacy professionals, including gaining cross-functional experience and maintaining a positive, problem-solving mindset.
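To make the file access risk above concrete, here is a small, hypothetical Python sketch (not from the episode): it walks everything the current user can read and flags files containing SSN-like patterns, which is roughly the surface an AI assistant inherits when it runs with that user's permissions. The paths and the pattern are illustrative only.

```python
# Hypothetical audit of the readable surface an AI assistant would inherit.
# Flags files containing SSN-like patterns; pattern and paths are examples.
import os
import re
from pathlib import Path

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # naive SSN-like match

def flag_sensitive_files(root: str) -> list[Path]:
    """Walk every file the current user can read and flag likely SSNs."""
    flagged = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable files are skipped, as an assistant's would be
        if SSN_PATTERN.search(text):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    # Example: audit the user's home directory before granting an assistant
    # the same read scope.
    for hit in flag_sensitive_files(os.path.expanduser("~")):
        print(hit)
```

Running a sweep like this before enabling an assistant shows how much sensitive content is already within reach of the user's own permissions, which is the exposure Daniel highlights.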