
Content provided by Security Voices. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Security Voices or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://vi.player.fm/legal.

From Tool to Sidekick - Human/Machine Teaming with Jamie Winterton

1:01:38
 
We’ve conditioned ourselves to look at our technology much the way we look at a box of tools: as instruments that passively do what we make them do. When we think about the future of artificial intelligence, it’s tempting to leap straight to fully autonomous solutions: when exactly will that Tesla finally drive itself? In our interview with Jamie Winterton, we explore a future where AI is neither a passive tool nor a self-contained machine, but rather an active partner.
Human/machine teaming, an approach in which AI works alongside a person as an integrated pair, has been advocated by the U.S. Department of Defense for several years now and is the focus of Jamie’s recent work at Arizona State University, where she is Director of Strategy for ASU’s Global Security Initiative and chairs the DARPA Working Group. From testing A.I.-assisted search and rescue scenarios in Minecraft to real wartime settings, Jamie takes us through the opportunities and the issues that arise when we make technology our sidekick rather than merely our instrument.
The central challenges of human/machine teaming? They’re awfully familiar. The same thorny matters of trust and communication that plague human interactions are still front and center. If we can’t understand how an A.I. arrived at a recommendation, will we trust its advice? If it makes a mistake, are we willing to forgive it? And what about all those non-verbal cues that are so central to human communication and vary from person to person? Jamie recounts stories of sophisticated “nerd stuff” being disregarded by people in favor of simpler solutions they could more easily understand (e.g., Google Earth).
The future of human/machine teaming may be less about us slowly learning to trust and hand over more control to our robot partners, and more about A.I. learning the soft skills that so frequently make our other interpersonal relationships work harmoniously. But what if the bad guys send fully autonomous weapons against us in the future? Will we be too slow to survive with an integrated approach? Jamie explains the prevailing thinking on speed and autonomy versus an arguably slower but more effective teaming approach, and what it might mean for the battlefields of the future.
Note: Our conversation on human/machine teaming follows an introductory chat about data breaches, responsible disclosure, and how future breaches involving biometric data theft may require surgeries as part of the remediation. If you want to jump straight to the human/machine teaming conversation, it picks up around the 18-minute mark.

66 episodes

