Content provided by The 80,000 Hours Podcast and the 80,000 Hours team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The 80,000 Hours Podcast and the 80,000 Hours team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://vi.player.fm/legal.

#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

Duration: 3:31:22
Episode 412017217 · Series 1531348
Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is. As the author of the Substack Don’t Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it.

Links to learn more, summary, and full transcript.

In today’s episode, host Rob Wiblin asks Zvi for his takes on:

  • US-China negotiations
  • Whether AI progress has stalled
  • The biggest wins and losses for alignment in 2023
  • EU and White House AI regulations
  • Which major AI lab has the best safety strategy
  • The pros and cons of the Pause AI movement
  • Recent breakthroughs in capabilities
  • In what situations it’s morally acceptable to work at AI labs

Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.

Zvi and Rob also talk about:

  • The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.
  • The “sleeper agent” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.
  • Why Zvi disagrees with 80,000 Hours’ advice about gaining career capital to have a positive impact.
  • Zvi’s project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.
  • Why Zvi thinks that improving people’s prosperity and housing can make them care more about existential risks like AI.
  • An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.
  • And plenty more.

Chapters:

  • Zvi’s AI-related worldview (00:03:41)
  • Sleeper agents (00:05:55)
  • Safety plans of the three major labs (00:21:47)
  • Misalignment vs misuse vs structural issues (00:50:00)
  • Should concerned people work at AI labs? (00:55:45)
  • Pause AI campaign (01:30:16)
  • Has progress on useful AI products stalled? (01:38:03)
  • White House executive order and US politics (01:42:09)
  • Reasons for AI policy optimism (01:56:38)
  • Zvi’s day-to-day (02:09:47)
  • Big wins and losses on safety and alignment in 2023 (02:12:29)
  • Other unappreciated technical breakthroughs (02:17:54)
  • Concrete things we can do to mitigate risks (02:31:19)
  • Balsa Research and the Jones Act (02:34:40)
  • The National Environmental Policy Act (02:50:36)
  • Housing policy (02:59:59)
  • Underrated rationalist worldviews (03:16:22)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore

271 episodes
