ROBERT MILES - "There is a good chance this kills everyone"

2:01:54
 
Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://vi.player.fm/legal.

Please check out our sponsor, Numerai:

https://numerai.com/mlst

Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being a part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.
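For the curious, here is a minimal sketch of what entering the tournament looks like in code, assuming the standard Numerai tooling (the numerapi client, pandas, and scikit-learn); the dataset file name and column conventions shown are assumptions that may change between dataset versions:

```python
# Minimal sketch of the Numerai tournament workflow (illustrative,
# not the official starter code). Assumes: pip install numerapi
# pandas scikit-learn pyarrow. The dataset file name below is an
# assumption and may change between dataset versions.
import pandas as pd
from numerapi import NumerAPI
from sklearn.ensemble import GradientBoostingRegressor

napi = NumerAPI()  # public dataset downloads need no API keys

# Download the obfuscated training data.
napi.download_dataset("v4.1/train.parquet", "train.parquet")

# Subsample so the sketch runs quickly; real entries train on it all.
train = pd.read_parquet("train.parquet").sample(50_000, random_state=0)

# Numerai convention: feature columns share the "feature" prefix and
# the prediction target is the "target" column.
features = [c for c in train.columns if c.startswith("feature")]

# Fit any regressor; predictions on the live data are what you submit.
model = GradientBoostingRegressor(n_estimators=100, max_depth=3)
model.fit(train[features], train["target"])
```

Predictions on each round's live data would then be uploaded through the same client (numerapi exposes an upload_predictions method; per-round submission details are omitted here).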

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

Twitter: https://twitter.com/MLStreetTalk

Welcome to an exciting episode featuring an outstanding guest, Robert Miles! Renowned for his extraordinary contributions to understanding AI and its potential impacts on our lives, Robert is an artificial intelligence advocate, researcher, and YouTube sensation. He combines engaging discussions with entertaining content, captivating millions of viewers from around the world.

With a strong computer science background, Robert has been actively involved in AI safety projects, focusing on raising awareness about the potential risks and benefits of advanced AI systems. His YouTube channel is celebrated for making AI safety discussions accessible to a diverse audience by breaking down complex topics into easy-to-understand nuggets of knowledge, and you might also recognise him from his appearances on Computerphile.

In this episode, join us as we dive deep into Robert's journey in the world of AI, exploring his insights on AI alignment, superintelligence, and the role of AI in shaping our society and future. We'll discuss topics such as the limits of AI capabilities and physics, AI progress and timelines, human-machine hybrid intelligence, AI in conflict and cooperation with humans, and the convergence of AI communities.

Robert Miles:

@RobertMilesAI

https://twitter.com/robertskmiles

https://aisafety.info/

YT version: https://www.youtube.com/watch?v=kMLKbhY0ji0

Panel:

Dr. Tim Scarfe

Dr. Keith Duggar

Joint CTOs - https://xrai.glass/

Refs:

Are Emergent Abilities of Large Language Models a Mirage? (Rylan Schaeffer)

https://arxiv.org/abs/2304.15004

TOC:

Intro [00:00:00]

Numerai Sponsor Message [00:02:17]

AI Alignment [00:04:27]

Limits of AI Capabilities and Physics [00:18:00]

AI Progress and Timelines [00:23:52]

AI Arms Race and Innovation [00:31:11]

Human-Machine Hybrid Intelligence [00:38:30]

Understanding and Defining Intelligence [00:42:48]

AI in Conflict and Cooperation with Humans [00:50:13]

Interpretability and Mind Reading in AI [01:03:46]

Mechanistic Interpretability and Deconfusion Research [01:05:53]

Understanding the core concepts of AI [01:07:40]

Moon landing analogy and AI alignment [01:09:42]

Cognitive horizon and limits of human intelligence [01:11:42]

Funding and focus on AI alignment [01:16:18]

Regulating AI technology and potential risks [01:19:17]

Aligning AI with human values and its dynamic nature [01:27:04]

Cooperation and Allyship [01:29:33]

Orthogonality Thesis and Goal Preservation [01:33:15]

Anthropomorphic Language and Intelligent Agents [01:35:31]

Maintaining Variety and Open-ended Existence [01:36:27]

Emergent Abilities of Large Language Models [01:39:22]

Convergence vs Emergence [01:44:04]

Criticism of X-risk and Alignment Communities [01:49:40]

Fusion of AI communities and addressing biases [01:52:51]

AI systems integration into society and understanding them [01:53:29]

Changing opinions on AI topics and learning from past videos [01:54:23]

Utility functions and von Neumann-Morgenstern theorems [01:54:47]

AI Safety FAQ project [01:58:06]

Building a conversation agent using AI safety dataset [02:00:36]
