Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall

41:24
Content provided by Dr. Andrew Clark and Sid Mangalik. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dr. Andrew Clark and Sid Mangalik or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://vi.player.fm/legal.

Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and its potential risks. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in adoption, and the implications of using generative AI in decision-making.
Show notes
Governance, model explainability, and high-risk applications 00:00:03

The benefits of the NIST AI Risk Management Framework 00:04:01

  • Does not have a profit motive, which avoids the potential for conflicts of interest when providing guidance on responsible AI.
  • Solicits, adjudicates, and incorporates feedback from the public and other stakeholders.
  • NIST guidance is not law; however, its recommendations set companies up for outcome-based reviews by regulators.

Accountability challenges in "blame-free" cultures 00:10:24

  • Patrick cites these cultures as having the hardest time with the framework's recommendations
  • Practices like documentation and fair model reviews need accountability and objectivity
  • If everyone's responsible, no one's responsible.

The value of explainable models vs black-box models 00:15:00

  • Concerns about replacing explainable models with LLMs for the sake of using LLMs
  • Why generative AI is bad for decision-making

AI and its impact on students 00:21:49

  • Students are a better indicator of where the hype and the market are today
  • Teaching them how to work through selecting the best model for the job despite the hype

AI incidents and contextual failures 00:26:17

Generative AI and homogenization problems 00:34:30
Recommended resources from Patrick:

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! Your input continues to inspire future episodes.

Chapters

1. Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall (00:00:00)

2. Governance, model explainability, and high-risk applications (00:00:03)

3. The benefits of NIST AI RMF (00:04:01)

4. Accountability challenges in "blame-free" cultures (00:10:24)

5. The value of explainable models vs black-box models (00:15:00)

6. AI and its impact on students (00:21:49)

7. AI incidents and contextual failures (00:26:17)

8. Generative AI and homogenization concerns (00:34:30)

23 episodes

