S2E11: Can Tech Platforms Rely on Algorithms to Remove Terrorist Content?


On this week’s episode, we’re exploring the role of automated solutions and Artificial Intelligence (AI) in tackling terrorist and violent extremist content and activity online. With the help of our expert guests, we delve into the historical use of machine learning algorithms for content moderation purposes, look at how they’ve developed over the last decade or so, and discuss their potential going forward.

We consider some of the potential biases and ethical considerations around automated removal systems, such as the mistaken removal of war-crime evidence or of Arabic-language political speech. Our guests explore how algorithms can best be used to tackle terrorist content, highlighting their potential for understanding patterns of terrorist behaviour online.

This week, Anne Craanen speaks to Adam Hadley, Founder and Executive Director of Tech Against Terrorism. We also hear from Dia Kayyali, Director for Advocacy at Mnemonic, where they focus on the real-life impact of content moderation and related policy decisions made by lawmakers and technology companies. Finally, we speak to Chris Meserole, a Fellow in Foreign Policy at the Brookings Institution and Director of Research for the Brookings Artificial Intelligence and Emerging Technology Initiative; Chris is also an adjunct professor at Georgetown University.

To find out more about Tech Against Terrorism and our work, visit techagainstterrorism.org or follow us on Twitter @techvsterrorism, where you can find resources on this topic.
