
Content provided by the CCC media team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by the CCC media team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://vi.player.fm/legal.

Uncovering discrimination in public fraud detection systems (dgwk2025)

29:42
 
In recent years, algorithmic systems used by the Dutch government for fraud detection in welfare, allowances, and student loans were found to be discriminatory, causing harm to citizens. The Childcare Benefits Scandal highlighted these issues, sparking political and societal debates, investigations, and reforms. This session will explore the causes of discriminatory outcomes, why they went undetected, red flags in such systems, and steps governments and society can take to ensure the fair use of public algorithms. Lessons learned as an AI expert within the Dutch government will be shared.

In recent years, several examples have come to light in the Netherlands of algorithmic systems developed and deployed by the government that were later found to be discriminatory. These systems, used to detect fraud in welfare benefits, allowances, and student loans, caused severe financial and emotional harm to citizens. The most devastating example of this was the Childcare Benefits Scandal. Thanks to the efforts of investigative journalists, civil society organizations, auditors, and determined individuals, these injustices came to light. The systems became a focal point of political and societal debate, leading to investigations and the introduction of new legislation, policies, and tools to address the issues.

In this session, I would like to share some of the lessons I have learned as an AI expert within the Dutch government. The following topics will be discussed:

- Causes of discriminatory outcomes: What are the main causes of discriminatory outcomes in public algorithmic fraud detection systems?
- Lack of early detection: How was it possible for these issues to remain unnoticed for so long?
- Red flags: What recurring patterns can be observed in these systems, and what signals indicate potential risks?
- Measures and actions: What steps should governments take to prevent discrimination and other harms caused by public fraud detection algorithms? What can we, as a digital society, do to ensure the fairer use of public algorithmic systems?

It is becoming increasingly clear that not only in the Netherlands, but also in countries such as Australia, the United Kingdom, Denmark, and Sweden, similar public fraud detection systems are causing harm. The lessons shared in this presentation are therefore more broadly applicable.

About this event: https://winterkongress.ch/2025/talks/uncovering_discrimination_in_public_fraud_detection_systems/
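The talk itself does not prescribe an auditing method, but one concrete way to surface the kind of disparity it describes is a flag-rate comparison across groups: how often does the system flag cases in one group versus another? Below is a minimal sketch in Python/pandas; the column names, toy data, and the four-fifths rule of thumb mentioned in the comments are illustrative assumptions, not details from the talk.

```python
import pandas as pd

def flag_rate_report(df: pd.DataFrame, group_col: str, flag_col: str) -> pd.DataFrame:
    """Flag rate per group, plus each group's ratio to the lowest-rate group."""
    rates = df.groupby(group_col)[flag_col].mean().rename("flag_rate")
    report = rates.to_frame()
    report["ratio_to_lowest"] = report["flag_rate"] / report["flag_rate"].min()
    return report.sort_values("ratio_to_lowest", ascending=False)

# Toy data (hypothetical): one row per case, a binary model output "flagged",
# and a group attribute. Group B is flagged 3x as often as group A, which
# under the common "four-fifths" rule of thumb would already be a red flag
# warranting investigation of the model and its training data.
cases = pd.DataFrame({
    "group":   ["A"] * 100 + ["B"] * 100,
    "flagged": [1] * 10 + [0] * 90 + [1] * 30 + [0] * 70,
})
print(flag_rate_report(cases, "group", "flagged"))
```

A skewed flag rate alone does not prove discrimination, but as the session argues, the absence of even this kind of routine monitoring is one reason such outcomes can go unnoticed for years.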
