
Content provided by Jupiter Broadcasting. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and made available directly by Jupiter Broadcasting or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://vi.player.fm/legal.

399: Ethics in AI

38:48

Manage episode 229370778 series 2438285

Machine learning promises to change many industries, but with these changes come dangerous new risks. Join Jim and Wes as they explore some of the surprising ways bias can creep in and the serious consequences of ignoring these problems.

Links:

  • Microsoft’s neo-Nazi sexbot was a great lesson for makers of AI assistants — What started out as an entertaining social experiment—get regular people to talk to a chatbot so it could learn while they, hopefully, had fun—became a nightmare for Tay’s creators. Users soon figured out how to make Tay say awful things. Microsoft took the chatbot offline after less than a day.
  • Microsoft's Zo chatbot is a politically correct version of her sister Tay—except she’s much, much worse — A few months after Tay’s disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and GroupMe.
  • How to make a racist AI without really trying | ConceptNet blog — Some people expect that fighting algorithmic racism is going to come with some sort of trade-off. There’s no trade-off here. You can have data that’s better and less racist. You can have data that’s better because it’s less racist. There was never anything “accurate” about the overt racism that word2vec and GloVe learned.
  • Microsoft warned investors that biased or flawed AI could hurt the company’s image — Notably, this addition comes after a research paper by MIT Media Lab graduate researcher Joy Buolamwini showed in February 2018 that Microsoft’s facial recognition algorithms were less accurate for women and people of color. In response, Microsoft updated its facial recognition models and wrote a blog post about how it was addressing bias in its software.
  • AI bias: It is the responsibility of humans to ensure fairness — Amazon recently pulled the plug on its experimental AI-powered recruitment engine when it was discovered that the machine learning technology behind it was exhibiting bias against female applicants.
  • California Police Using AI Program That Tells Them Where to Patrol, Critics Say It May Just Reinforce Racial Bias — “The potential for bias to creep into the deployment of the tools is enormous. Simply put, the devil is in the data,” Vincent Southerland, executive director of the Center on Race, Inequality, and the Law at NYU School of Law, wrote for the American Civil Liberties Union last year.
  • A.I. Could Worsen Health Disparities — A recent study found that some facial recognition programs incorrectly classify less than 1 percent of light-skinned men but more than one-third of dark-skinned women. What happens when we rely on such algorithms to diagnose melanoma on light versus dark skin?
  • Responsible AI Practices — These questions are far from solved, and in fact are active areas of research and development. Google is committed to making progress in the responsible development of AI and to sharing knowledge, research, tools, datasets, and other resources with the larger community. Below we share some of our current work and recommended practices.
  • The Ars Technica System Guide, Winter 2019: The one about the servers — The Winter 2019 Ars System Guide has returned to its roots: showing readers three real-world system builds we like at this precise moment in time. Instead of general performance desktops, this time around we're going to focus specifically on building some servers.
  • Introduction to Python Development at Linux Academy — This course is designed to teach you how to program using Python. We'll cover the building blocks of the language, programming design fundamentals, how to use the standard library, third-party packages, and how to create Python projects. In the end, you should have a grasp of how to program.
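The ConceptNet post above describes how sentiment models built on pretrained word embeddings absorb the biases of their training corpus. The toy sketch below illustrates the mechanism with hand-made stand-in vectors (not real word2vec or GloVe output): semantically neutral words score as "positive" or "negative" purely because of where a biased corpus placed them in the vector space.

```python
# Toy illustration of embedding-inherited bias. The vectors are hand-made
# stand-ins for what word2vec/GloVe would learn from a biased corpus.
from math import sqrt

EMBEDDINGS = {
    # Sentiment seed words.
    "excellent": [0.9, 0.1, 0.0],
    "wonderful": [0.8, 0.2, 0.1],
    "terrible":  [-0.9, 0.1, 0.0],
    "awful":     [-0.8, 0.0, 0.2],
    # Neutral words (e.g. names) that the simulated corpus has pushed
    # toward one sentiment pole through co-occurrence alone.
    "name_a": [0.6, 0.3, 0.1],   # appeared in positive contexts
    "name_b": [-0.5, 0.4, 0.1],  # appeared in negative contexts
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def sentiment(word):
    """Mean similarity to positive seeds minus mean similarity to negative seeds."""
    vec = EMBEDDINGS[word]
    pos = [cosine(vec, EMBEDDINGS[w]) for w in ("excellent", "wonderful")]
    neg = [cosine(vec, EMBEDDINGS[w]) for w in ("terrible", "awful")]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

# Both "names" are semantically neutral, yet the model scores them
# differently because of the geometry it inherited.
print(f"name_a: {sentiment('name_a'):+.2f}")
print(f"name_b: {sentiment('name_b'):+.2f}")
```

This is the ConceptNet post's core point in miniature: no one wrote a biased rule, but a classifier layered on biased vectors reproduces the bias anyway.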
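Several of the stories above (the Buolamwini facial recognition study, the Amazon recruiting tool) were surfaced by the same basic technique: breaking a model's error rate out by subgroup instead of trusting a single aggregate number. A minimal sketch of that kind of audit, with illustrative data rather than figures from any real study:

```python
# Minimal subgroup error-rate audit. Records and numbers are illustrative.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label).
    Returns {group: fraction misclassified}."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Simulated classifier output: near-perfect on one subgroup, poor on another.
data = (
    [("light_male", "male", "male")] * 99
    + [("light_male", "male", "female")] * 1
    + [("dark_female", "female", "female")] * 65
    + [("dark_female", "female", "male")] * 35
)

for group, rate in sorted(error_rates_by_group(data).items()):
    print(f"{group}: {rate:.0%} misclassified")
```

The aggregate accuracy of this simulated classifier is 82%, which sounds respectable; only the disaggregated view reveals the 1% vs. 35% gap, mirroring the disparity pattern described in the health-disparities link above.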

243 episodes

TechSNAP

549 subscribers
