Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations

41:19
 
Content provided by MLSecOps.com. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MLSecOps.com or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://vi.player.fm/legal.


In this episode, co-hosts Badar Ahmed and Daryan Dehghanpisheh are joined by Drew Farris (Principal, Booz Allen Hamilton) and Edward Raff (Chief Scientist, Booz Allen Hamilton) to discuss themes from their paper, "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks," co-authored with Michael Benaroch.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform


48 episodes


All episodes

 
Join Keith Hoodlet from Trail of Bits as he dives into AI/ML security, discussing everything from prompt injection and fuzzing techniques to bias testing and compliance challenges. Full transcript with links to resources available at https://mlsecops.com/podcast/from-pickle-files-to-polyglots-hidden-risks-in-ai-supply-chains
 
This episode is a follow-up to Part 1 of our conversation with returning guest Brian Pendleton, as he challenges the way we think about red teaming and security for AI. Continuing from last week’s exploration of enterprise AI adoption and high-level security considerations, the conversation now shifts to how red teaming, zero trust, and privacy concerns intertwine with AI’s unique risks. Full transcript with links to resources available at https://mlsecops.com/podcast/rethinking-ai-red-teaming-lessons-in-zero-trust-and-model-protection
 
In part one of our two-part MLSecOps Podcast episode, security veteran Brian Pendleton takes us from his early hacker days to the forefront of AI security. Brian explains why mapping every AI integration is essential for uncovering vulnerabilities. He also dives into the benefits of using SBOMs over model cards for risk management and stresses the need to bridge the gap between ML and security teams to protect your enterprise AI ecosystem. Full transcript with links to resources available at https://mlsecops.com/podcast/ai-security-map-it-manage-it-master-it
 
Join host Diana Kelley and CTO Dr. Gina Guillaume-Joseph as they explore how agentic AI, robust data practices, and zero trust principles drive secure, real-time video analytics at Camio. They discuss why clean data is essential, how continuous model validation can thwart adversarial threats, and the critical balance between autonomous AI and human oversight. Dive into the world of multimodal modeling, ethical safeguards, and what it takes to ensure AI remains both innovative and risk-aware. Full transcript with links to resources available at https://mlsecops.com/podcast/agentic-ai-tackling-data-security-and-compliance-risks
 
Join host Dan McInerney and AI security expert Sierra Haex as they explore the evolving challenges of AI security. They discuss vulnerabilities in ML supply chains, the risks in tools like Ray and untested AI model files, and how traditional security measures intersect with emerging AI threats. The conversation also covers the rise of open-source models like DeepSeek and the security implications of deploying autonomous AI agents, offering critical insights for anyone looking to secure distributed AI systems. Full transcript with links to resources available at https://mlsecops.com/podcast/ai-vulnerabilities-ml-supply-chains-to-llm-and-agent-exploits
 
In this episode of the MLSecOps Podcast, host Charlie McCarthy sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the world of responsible AI governance. They discuss how ethical principles, risk management, and robust security practices can be integrated throughout the AI lifecycle, from design and development to deployment and oversight. Learn practical strategies for building resilient AI frameworks, understanding regulatory impacts, and driving innovation safely. Full transcript with links to resources available at https://mlsecops.com/podcast/implementing-a-robust-ai-governance-framework-for-business-success
 
In this episode, we explore LLM red teaming beyond simple “jailbreak” prompts with special guest Donato Capitella from WithSecure Consulting. You’ll learn why vulnerabilities live in the context of how LLMs interact with users, tools, and documents, and discover best practices for mitigating attacks like prompt injection. Our guest also previews an open-source tool for automating security tests on LLM applications. Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-generative-ai-red-teaming-and-practical-security-solutions
 
In this episode of the MLSecOps Podcast, the team dives into the transformative potential of Vulnhuntr: zero shot vulnerability discovery using LLMs. Madison Vorbrich hosts Dan McInerney and Marcello Salvati to discuss Vulnhuntr’s ability to autonomously identify vulnerabilities, including zero-days, using large language models (LLMs) like Claude. They explore the evolution of AI tools for security, the gap between traditional and AI-based static code analysis, and how Vulnhuntr enables both developers and security teams to proactively safeguard their projects. The conversation also highlights Protect AI’s bug bounty platform, huntr.com, and its expansion into model file vulnerabilities (MFVs), emphasizing the critical need to secure AI supply chains and systems.
 
In this episode of the MLSecOps Podcast, Charlie McCarthy from Protect AI sits down with Dr. Cari Miller to discuss the evolving landscapes of AI procurement and governance. Dr. Miller shares insights from her work with the AI Procurement Lab and ForHumanity, delving into the essential frameworks and strategies needed to mitigate risks in AI acquisitions. They cover the AI Procurement Risk Management Framework, practical ways to ensure transparency and accountability, and how the September 2024 OMB Memo M-24-18 is guiding AI acquisition in government. Dr. Miller also emphasizes the importance of cross-functional collaboration and AI literacy to support responsible AI procurement and deployment in organizations of all types. Full transcript with links to resources available at https://mlsecops.com/podcast/ai-governance-essentials-empowering-procurement-teams-to-navigate-ai-risk
 
In this episode of the MLSecOps Podcast, Distinguished Engineer Nicole Nichols from Palo Alto Networks joins host and Machine Learning Scientist Mehrin Kiani to explore critical challenges in AI and cybersecurity. Nicole shares her unique journey from mechanical engineering to AI security, her thoughts on the importance of clear AI vocabularies, and the significance of bridging disciplines in securing complex systems. They dive into the nuanced definitions of AI fairness and safety, examine emerging threats like LLM backdoors, and discuss the rapidly evolving impact of autonomous AI agents on cybersecurity defense. Nicole’s insights offer a fresh perspective on the future of AI-driven security, teamwork, and the growth mindset essential for professionals in this field.
 
On this episode of the MLSecOps Podcast, we’re bringing together two cybersecurity legends. Our guest is the inimitable Caleb Sima, who joins us to discuss security considerations for building and using AI, drawing on his 25+ years of cybersecurity experience. Caleb's impressive journey includes co-founding two security startups acquired by HP and Lookout, serving as Chief Security Officer at Robinhood, and currently leading cybersecurity venture studio WhiteRabbit & chairing the Cloud Security Alliance AI Safety Initiative. Hosting this episode is Diana Kelley (CISO, Protect AI), an industry powerhouse with a long career dedicated to cybersecurity, and a longtime host on this show. Together, Caleb and Diana share a thoughtful discussion full of unique insights for the MLSecOps Community of learners.
 
Welcome to Season 3 of the MLSecOps Podcast, brought to you by Protect AI! In this episode, MLSecOps Community Manager Charlie McCarthy speaks with Sander Schulhoff, co-founder and CEO of Learn Prompting. Sander discusses his background in AI research, focusing on the rise of prompt engineering and its critical role in generative AI. He also shares insights into prompt security, the creation of LearnPrompting.org, and its mission to democratize prompt engineering knowledge. This episode also explores the intricacies of prompting techniques, "prompt hacking," and the impact of competitions like HackAPrompt on improving AI safety and security.
 
This compilation contains highlights from episodes throughout Season 2 of the MLSecOps Podcast, and it's a great one for community members who are new to the show. If there is a clip from this highlights reel that is especially interesting to you, you can note the name of the original episode that the clip came from and easily go check out that full-length episode for a deeper dive. Extending enormous thanks to everyone who has supported this show, including our audience, Protect AI hosts, and stellar expert guests. Stay tuned for Season 3!
 
In this episode of the MLSecOps Podcast we have the honor of talking with David Rosenthal, Partner at VISCHER (Swiss Law, Tax & Compliance). David is also an author & former software developer, and lectures at ETH Zürich & the University of Basel. He has more than 25 years of experience in data & technology law and kindly joined the show to discuss a variety of AI regulation topics, including the EU Artificial Intelligence Act, generative AI risk assessment, and challenges related to organizational compliance with upcoming AI regulations.
 
In this episode, we had the pleasure of welcoming Co-Founder and CISO of Weights & Biases, Chris Van Pelt, to the MLSecOps Podcast. Chris discusses a range of topics with hosts Badar Ahmed and Diana Kelley, including the history of how W&B was formed, building a culture of security and knowledge sharing across teams in an organization, real-world ML and GenAI security concerns, data lineage and tracking, and upcoming features in the Weights & Biases platform for enhancing security. More about our guest speaker: Chris Van Pelt is a co-founder of Weights & Biases, a developer MLOps platform. In 2009, Chris founded Figure Eight/CrowdFlower. Over the past 12 years, Chris has dedicated his career to optimizing ML workflows and teaching ML practitioners, making machine learning more accessible to all. Chris has worked as a studio artist, computer scientist, and web engineer. He studied both art and computer science at Hope College.
 