Responsible AI: Defining, Implementing, and Navigating the Future; With Guest: Diya Wynn
Manage episode 363717659 series 3461851
In this episode of The MLSecOps Podcast, Diya Wynn, Sr. Practice Manager in Responsible AI in the Machine Learning Solutions Lab at Amazon Web Services, shares her background and the motivations that led her to pursue a career in Responsible AI.
Diya discusses her passion for work related to diversity, equity, and inclusion (DEI), and how Responsible AI offers a unique opportunity to merge that passion with what her core focus has always been: technology. She defines Responsible AI as an operating approach focused on minimizing unintended impact and maximizing benefits. The group also spends some time in this episode discussing Generative AI and its potential to perpetuate biases and raise ethical concerns.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
50 episodes