Risk Management and Enhanced Security Practices for AI Systems
In this episode of The MLSecOps Podcast, Omar Khawaja, VP of Security and Field CISO at Databricks, joins Diana Kelley, CISO of Protect AI. Together, Diana and Omar discuss a new framework for understanding AI risks, fostering a security-minded culture around AI, building the MLSecOps dream team, and some of the challenges that Chief Information Security Officers (CISOs) and other business leaders face when assessing the risks to their AI/ML systems.
Get the scoop on Databricks’ new AI Security Framework on The MLSecOps Podcast. To learn more about the framework, contact cybersecurity@databricks.com.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
49 episodes
All episodes
Unpacking the Cloud Security Alliance AI Controls Matrix 35:53
From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains 41:21
Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection 36:52
AI Security: Map It, Manage It, Master It 41:18
Agentic AI: Tackling Data, Security, and Compliance Risks 23:22
AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits 24:08
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success 38:39
Unpacking Generative AI Red Teaming and Practical Security Solutions 51:53
AI Security: Vulnerability Detection and Hidden Model File Risks 38:19
AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk 37:41
Crossroads: AI, Cybersecurity, and How to Prepare for What's Next 33:15
AI Beyond the Hype: Lessons from Cloud on Risk and Security 41:06
Generative AI Prompt Hacking and Its Impact on AI Security & Safety 31:59
Exploring Generative AI Risk Assessment and Regulatory Compliance 37:37