Content provided by Garrison Lovely. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Garrison Lovely or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://vi.player.fm/legal.

35 - Yoshua Bengio on Why AI Labs are “Playing Dice with Humanity’s Future”

47:57

I'm really excited to come out of hiatus to share this conversation with you. You may have noticed people are talking a lot about AI, and I've started focusing my journalism on the topic. I recently published a 9,000-word cover story in Jacobin’s winter issue called “Can Humanity Survive AI?” and was fortunate to talk to over three dozen people coming at AI and its possible risks from basically every angle. You can find a full episode transcript here.

My next guest is about as responsible as anybody for the state of AI capabilities today. But he's recently begun to wonder whether the field he spent his life helping build might lead to the end of the world. Following in the tradition of the Manhattan Project physicists who later opposed the hydrogen bomb, Dr. Yoshua Bengio started warning last year that advanced AI systems could drive humanity extinct.

(I’ve started a Substack since my last episode was released. You can subscribe here.)

The Jacobin story asked whether AI poses an existential threat to humanity, but it also introduced the roiling three-sided debate around that question. Two of the sides, AI ethics and AI safety, are often pitched as standing in opposition to one another. The AI ethics camp often argues that we should focus on the immediate harms posed by existing AI systems, and that existential risk arguments overhype those systems' capabilities and distract from those harms. It's also true that many of the people working to mitigate existential risks from AI pay little attention to those immediate harms. But Dr. Bengio is a counterexample on both counts. He has spent years focusing on AI ethics and the immediate harms from AI systems, yet he also worries that advanced AI systems pose an existential risk to humanity. And he argues in our interview that the choice between AI ethics and AI safety is a false one, and that it's possible to have both.

Yoshua Bengio is the second-most cited living scientist and one of the so-called “Godfathers of deep learning.” He and the other “Godfathers,” Geoffrey Hinton and Yann LeCun, shared the 2018 Turing Award, computing’s Nobel Prize.

In November, Dr. Bengio was commissioned to lead production of the first “State of the Science” report on the “capabilities and risks of frontier AI” — the first significant attempt to create something like the Intergovernmental Panel on Climate Change (IPCC) for AI.

I spoke with him last fall while reporting my cover story for Jacobin’s winter issue, “Can Humanity Survive AI?” Dr. Bengio made waves last May when he and Geoffrey Hinton began warning that advanced AI systems could drive humanity extinct.

We discuss:

  • His background and what motivated him to work on AI
  • Whether there's evidence for existential risk (x-risk) from AI
  • How he initially thought about x-risk
  • Why he started worrying
  • How the machine learning community's thoughts on x-risk have changed over time
  • Why reading more on the topic made him more concerned
  • Why he thinks Google co-founder Larry Page’s AI aspirations should be criminalized
  • Why labs are trying to build artificial general intelligence (AGI)
  • The technical and social components of aligning AI systems
  • The why and how of universal, international regulations on AI
  • Why good regulations will help with all kinds of risks
  • Why loss of control doesn't need to be existential to be worth worrying about
  • How AI enables power concentration
  • Why he thinks the choice between AI ethics and safety is a false one
  • Capitalism and AI risk
  • The "dangerous race" between companies
  • Leading indicators of AGI
  • Why the way we train AI models creates risks

Background

Since we had limited time, we jumped straight into things and didn’t cover much of the basics of the idea of AI-driven existential risk, so I’m including some quotes and background in the intro. If you’re familiar with these ideas, you can skip straight to the interview at 7:24.

Unless stated otherwise, the quotes below are from my Jacobin story:

“Bengio posits that future, genuinely human-level AI systems could improve their own capabilities, functionally creating a new, more intelligent species. Humanity has driven hundreds of other species extinct, largely by accident. He fears that we could be next…”

Last May, “hundreds of AI researchers and notable figures signed an open letter stating, ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’ Hinton and Bengio were the lead signatories, followed by OpenAI CEO Sam Altman and the heads of other top AI labs.”

“Hinton and Bengio were also the first authors of an October position paper warning about the risk of ‘an irreversible loss of human control over autonomous AI systems,’ joined by famous academics like Nobel laureate Daniel Kahneman and Sapiens author Yuval Noah Harari.”

The “position paper warns that ‘no one currently knows how to reliably align AI behavior with complex values.’”

The largest survey of machine learning researchers on AI x-risk was conducted in 2023. The median respondent estimated that there was a 50% chance of AGI by 2047 — a 13-year drop from a similar survey conducted just one year earlier — and that there was at least a 5% chance AGI would result in an existential catastrophe.

The October “Managing AI Risks” paper states:

There is no fundamental reason why AI progress would slow or halt when it reaches human-level abilities. . . . Compared to humans, AI systems can act faster, absorb more knowledge, and communicate at a far higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions.

“Here’s a stylized version of the idea of ‘population’ growth spurring an intelligence explosion: if AI systems rival human scientists at research and development, the systems will quickly proliferate, leading to the equivalent of an enormous number of new, highly productive workers entering the economy. Put another way, if GPT-7 can perform most of the tasks of a human worker and it only costs a few bucks to put the trained model to work on a day’s worth of tasks, each instance of the model would be wildly profitable, kicking off a positive feedback loop. This could lead to a virtual ‘population’ of billions or more digital workers, each worth much more than the cost of the energy it takes to run them. [OpenAI chief scientist Ilya] Sutskever thinks it’s likely that ‘the entire surface of the earth will be covered with solar panels and data centers.’”

“The fear that keeps many x-risk people up at night is not that an advanced AI would ‘wake up,’ ‘turn evil,’ and decide to kill everyone out of malice, but rather that it comes to see us as an obstacle to whatever goals it does have. In his final book, Brief Answers to the Big Questions, Stephen Hawking articulated this, saying, ‘You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.’”

Links

Episode art by Ricardo Santos for Jacobin.
