
Content provided by Tech Policy Press. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Tech Policy Press or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://vi.player.fm/legal.

Using AI to Engage People about Conspiracy Beliefs

35:53
 
 


In May, Justin Hendrix moderated a discussion with David Rand, who is a professor of Management Science and Brain and Cognitive Sciences at MIT, the director of the Applied Cooperation Initiative, and an affiliate of the MIT Institute of Data, Systems, and Society and the Initiative on the Digital Economy. David's work cuts across fields such as cognitive science, behavioral economics, and social psychology, and with his collaborators he has done substantial research on the psychological underpinnings of belief in misinformation and conspiracy theories.

David is one of the authors, with Thomas Costello and Gordon Pennycook, of a paper published this spring titled "Durably reducing conspiracy beliefs through dialogues with AI." The paper considers the potential for people to enter into dialogues with LLMs and whether such exchanges can change the minds of conspiracy theory believers. According to the study, dialogues with GPT-4 Turbo reduced belief in various conspiracy theories, with effects lasting many months. Even more intriguingly, these dialogues seemed to have a spillover effect, reducing belief in unrelated conspiracies and influencing conspiracy-related behaviors.

While these findings are certainly promising, the experiment raises a variety of questions. Some are specific to the premise of the experiment: how compelling and tailored does the counter-evidence need to be, and how well do the LLMs perform? What happens if and when they make mistakes or hallucinate? Others are bigger picture: are there ethical implications to using AI in this manner? Can these results be replicated and scaled in real-world applications, such as on social media platforms, and is that a good idea? Is an internet where various AI agents and systems are poking and prodding us, trying to shape or change our beliefs, a good thing? This episode contains an edited recording of the discussion, which was hosted at Betaworks.



