
AF - [Aspiration-based designs] 1. Informal introduction by B Jacobs

Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Aspiration-based designs] 1. Informal introduction, published by B Jacobs on April 28, 2024 on The AI Alignment Forum.

Sequence Summary. This sequence documents research by SatisfIA, an ongoing project on non-maximizing, aspiration-based designs for AI agents that fulfill goals specified by constraints ("aspirations") rather than maximizing an objective function (see the illustrative sketch after this excerpt). We aim to contribute to AI safety by exploring design approaches and their software implementations that we believe might be promising but neglected or novel. Our approach is roughly related to, but largely complementary to, concepts like quantilization and satisficing (sometimes called "soft-optimization"), Decision Transformers, and Active Inference. This post describes the purpose of the sequence, motivates the research, describes the project status, our working hypotheses, and our theoretical framework, and has a short glossary of terms. It does not contain results and can safely be skipped if you want to get directly into the actual research.

Epistemic status: We're still in the exploratory phase, and while the project has yielded some preliminary insights, we don't have any clear conclusions at this point. Our team holds a wide variety of opinions about the discoveries. Nothing we say is set in stone.

Purpose of the sequence

Inform: We aim to share our current ideas, thoughts, disagreements, open questions, and any results we have achieved thus far. By openly discussing the complexities and challenges we face, we seek to provide a transparent view of our project's progression and the types of questions we're exploring.

Receive Feedback: We invite feedback on our approaches, hypotheses, and findings. Constructive criticism, alternative perspectives, and further suggestions are all welcome.

Attract Collaborators: Through this sequence, we hope to resonate with other researchers and practitioners to whom our exploration appeals and who are motivated by similar questions. Our goal is to expand our team with individuals who can contribute their unique expertise and insights.

Motivation

We share a general concern regarding the trajectory of Artificial General Intelligence (AGI) development, particularly the risks associated with creating AGI agents designed to maximize objective functions. We have two main concerns:

(I) AGI development might be inevitable. (We assume this concern needs no further justification.)

(II) It might be impossible to implement an objective function the maximization of which would be safe.

The conventional view on A(G)I agents (see, e.g., Wikipedia) is that they should aim to maximize some function of the state or trajectory of the world, often called a "utility function", sometimes also called a "welfare function". It tacitly assumes that there is such an objective function that can adequately make the AGI behave in a moral way. However, this assumption faces several significant challenges:

Moral ambiguity: The notion that a universally acceptable, safe utility function exists is highly speculative. Given the philosophical debates surrounding moral cognitivism and moral realism, and similar debates in welfare economics, it is possible that there are no universally agreeable moral truths, casting doubt on the existence of a utility function that encapsulates all relevant ethical considerations.

Historical track-record: Humanity's long-standing struggle to define and agree upon universal values or ethical standards raises skepticism about our capacity to discover or construct a comprehensive utility function that safely governs AGI behavior (Outer Alignment) in time.

Formal specification and Tractability: Even if a theoretically safe and comprehensive utility function could be conceptualized, the challenges of formalizing such a function into a computable and tractable form are immense. This inc...
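To make the contrast between maximizing an objective function and fulfilling a constraint-style aspiration concrete, here is a minimal Python sketch. All action names and numbers are invented for illustration, and the interval-based acceptance rule is only one simple way an "aspiration" could be operationalized; this is a toy rendering of the general idea, not the SatisfIA design itself.

```python
import random

# Toy contrast between a maximizing agent and an aspiration-based agent.
# All action names and numbers below are invented for illustration;
# this is NOT the SatisfIA algorithm, just the general idea.

# Hypothetical expected values of some outcome metric per action,
# standing in for an environment model.
expected_value = {
    "do_nothing": 0.0,
    "modest_plan": 5.0,
    "ambitious_plan": 9.0,
    "extreme_plan": 100.0,  # the kind of outlier a maximizer gravitates to
}

def maximizer_action(values):
    """Conventional design: pick the action with the highest expected value."""
    return max(values, key=values.get)

def aspiration_action(values, lo, hi):
    """Aspiration-based design: treat the goal as a constraint.

    Any action whose expected outcome lies in the aspiration interval
    [lo, hi] counts as fulfilling the goal; choose among those at random
    instead of pushing the metric as high as possible.
    """
    acceptable = [a for a, v in values.items() if lo <= v <= hi]
    if acceptable:
        return random.choice(acceptable)
    # Fallback if nothing meets the aspiration: take the action closest
    # to the middle of the interval.
    mid = (lo + hi) / 2
    return min(values, key=lambda a: abs(values[a] - mid))

print(maximizer_action(expected_value))          # -> extreme_plan
print(aspiration_action(expected_value, 4, 10))  # -> modest_plan or ambitious_plan
```

The point of the toy example is that the maximizer is always pulled toward the action with the most extreme score, whereas the aspiration-based rule is indifferent among all actions that satisfy the constraint, removing the incentive to push the metric to extremes.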
