Player FM - Internet Radio Done Right
101 subscribers
Added four years ago
Content provided by Daniel Bashir. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Bashir or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://vi.player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!
The Gradient: Perspectives on AI
Deeply researched, technical interviews with experts thinking about AI and technology.
thegradientpub.substack.com
150 episodes
All episodes
Episode 142

Happy holidays! This is one of my favorite episodes of the year — for the third time, Nathan Benaich and I did our yearly roundup of all the AI news and advancements you need to know. This includes selections from this year’s State of AI Report, some early takes on o3, a few minutes LARPing as China Guys… If you’ve stuck around and continue to listen, I’m really thankful you’re here. I love hearing from you.

You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com. Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Outline:
* (00:00) Intro
* (01:00) o3 and model capabilities + reasoning capabilities
* (05:30) Economics of frontier models
* (09:24) Air Street’s year and industry shifts: product-market fit in AI, major developments in science/biology, "vibe shifts" in defense and robotics
* (16:00) Investment strategies in generative AI, how to evaluate and invest in AI companies
* (19:00) Future of BioML and scientific progress: on AlphaFold 3, evaluation challenges, and the need for cross-disciplinary collaboration
* (32:00) The AGI question and technology diffusion: Nathan’s take on AGI and timelines, technology adoption, the gap between capabilities and real-world impact
* (39:00) Differential economic impacts from AI, tech diffusion
* (43:00) Market dynamics and competition
* (50:00) DeepSeek and global AI innovation
* (59:50) A robotics renaissance? Robotics coming back into focus + advances in vision-language models and real-world applications
* (1:05:00) Compute infrastructure: NVIDIA’s dominance, GPU availability, the competitive landscape in AI compute
* (1:12:00) Industry consolidation: partnerships, acquisitions, regulatory concerns in AI
* (1:27:00) Global AI politics and regulation: international AI governance and varying approaches
* (1:35:00) The regulatory landscape
* (1:43:00) 2025 predictions
* (1:48:00) Closing

Links and Resources

From Air Street Press:
* The State of AI Report
* The State of Chinese AI
* Open-endedness is all we’ll need
* There is no scaling wall: in discussion with Eiso Kant (Poolside)
* Alchemy doesn’t scale: the economics of general intelligence
* Chips all the way down
* The AI energy wars will get worse before they get better

Other highlights/resources:
* Deepseek: The Quiet Giant Leading China’s AI Race — an interview with DeepSeek CEO Liang Wenfeng via ChinaTalk, translated by Jordan Schneider, Angela Shen, Irene Zhang and others
* A great position paper on open-endedness by Minqi Jiang, Tim Rocktäschel, and Ed Grefenstette — Minqi also wrote a blog post on this for us!
* For China Guys only: China’s AI Regulations and How They Get Made by Matt Sheehan (+ an interview I did with Matt in 2022!)
* The Simple Macroeconomics of AI by Daron Acemoglu + a critique by Maxwell Tabarrok (more links in the Report)
* AI Nationalism by Ian Hogarth (from 2018)
* Some analysis on the EU AI Act + regulation from Lawfare

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Philip Goff: Panpsychism as a Theory of Consciousness (1:00:04)

Episode 141

I spoke with Professor Philip Goff about:
* What a “post-Galilean” science of consciousness looks like
* How panpsychism helps explain consciousness and the hybrid cosmopsychist view

Enjoy!

Philip Goff is a British author, idealist philosopher, and professor at Durham University whose research focuses on philosophy of mind and consciousness — specifically, on how consciousness can be part of the scientific worldview. He is the author of multiple books including Consciousness and Fundamental Reality; Galileo's Error: Foundations for a New Science of Consciousness; and Why? The Purpose of the Universe.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:05) Goff vs. Carroll on the Knowledge Arguments and explanation
* (08:00) Preferences for theories
* (12:55) Curiosity (Grounding, Essence) and the Knowledge Argument
* (14:40) Phenomenal transparency and physicalism vs. anti-physicalism
* (29:00) How exactly panpsychism helps explain consciousness
* (30:05) The argument for hybrid cosmopsychism
* (36:35) “Bare” subjects / subjects before inheriting phenomenal properties
* (40:35) Bundle theories of the self
* (43:35) Fundamental properties and new subjects as causal powers
* (50:00) Integrated Information Theory
* (55:00) Fundamental assumptions in hybrid cosmopsychism
* (1:00:00) Outro

Links:
* Philip’s homepage and Twitter
* Papers
  * Putting Consciousness First
  * Curiosity (Grounding, Essence) and the Knowledge Argument

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Hi everyone! If you’re a new subscriber or listener, welcome. If you’re not new, you’ve probably noticed that things have slowed down from us a bit recently. Hugh Zhang, Andrey Kurenkov and I sat down to recap some of The Gradient’s history, where we are now, and how things will look going forward.

To summarize and give some context: The Gradient has been around for around 6 years now — we began as an online magazine, and began producing our own newsletter and podcast about 4 years ago. With a team of volunteers — we take in a bit of money through Substack that we use for subscriptions to tools we need and try to pay ourselves a bit — we’ve been able to keep this going for quite some time. Our team has less bandwidth than we’d like right now (and I’ll admit that at least some of us are running on fumes…), so we’ll be making a few changes:
* Magazine: We’re going to be scaling down our editing work on the magazine. While we won’t be accepting pitches for unwritten drafts for now, if you have a full piece that you’d like to pitch to us, we’ll consider posting it. If you’ve reached out about writing and haven’t heard from us, we’re really sorry. We’ve tried a few different arrangements to manage the pipeline of articles we have, but it’s been difficult to make it work. We still want this to be a place to promote good work and writing from the ML community, so we intend to continue using this Substack for that purpose. If we have more editing bandwidth on our team in the future, we want to continue doing that work.
* Newsletter: We’ll aim to continue the newsletter as before, but with a “Best from the Community” section highlighting posts. We’ll have a way for you to send articles you want to be featured, but for now you can reach us at editor@thegradient.pub.
* Podcast: I’ll be continuing this (at a slower pace), but will eventually transition it away from The Gradient given the expanded range. If you’re interested in following, it might be worth subscribing on another player like Apple Podcasts, Spotify, or using the RSS feed.
* Sigmoid Social: We’ll keep this alive as long as there’s financial support for it.

If you like what we do and/or want to help us out in any way, do reach out to editor@thegradient.pub. We love hearing from you.

Timestamps:
* (0:00) Intro
* (01:55) How The Gradient began
* (03:23) Changes and announcements
* (10:10) More Gradient history! On our involvement, favorite articles, and some plugs

Some of our favorite articles! There are so many, so this is very much a non-exhaustive list:
* NLP’s ImageNet moment has arrived
* The State of Machine Learning Frameworks in 2019
* Why transformative artificial intelligence is really, really hard to achieve
* An Introduction to AI Story Generation
* The Artificiality of Alignment (I didn’t mention this one in the episode, but it should be here)

Places you can find us!

Hugh:
* Twitter
* Personal site
* Papers/things mentioned:
  * A Careful Examination of LLM Performance on Grade School Arithmetic (GSM1k)
  * Planning in Natural Language Improves LLM Search for Code Generation
  * Humanity’s Last Exam

Andrey:
* Twitter
* Personal site
* Last Week in AI Podcast

Daniel:
* Twitter
* Substack blog
* Personal site (under construction)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jacob Andreas: Language, Grounding, and World Models (1:52:43)

Episode 140

I spoke with Professor Jacob Andreas about:
* Language and the world
* World models
* How he’s developed as a scientist

Enjoy!

Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML and NAACL.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:40) Jacob’s relationship with grounding fundamentalism
* (05:21) Jacob’s reaction to LLMs
* (11:24) Grounding language — is there a philosophical problem?
* (15:54) Grounding and language modeling
* (24:00) Analogies between humans and LMs
* (30:46) Grounding language with points and paths in continuous spaces
* (32:00) Neo-Davidsonian formal semantics
* (36:27) Evolving assumptions about structure prediction
* (40:14) Segmentation and event structure
* (42:33) How much do word embeddings encode about syntax?
* (43:10) Jacob’s process for studying scientific questions
* (45:38) Experiments and hypotheses
* (53:01) Calibrating assumptions as a researcher
* (54:08) Flexibility in research
* (56:09) Measuring Compositionality in Representation Learning
* (56:50) Developing an independent research agenda and developing a lab culture
* (1:03:25) Language Models as Agent Models
* (1:04:30) Background
* (1:08:33) Toy experiments and interpretability research
* (1:13:30) Developing effective toy experiments
* (1:15:25) Language Models, World Models, and Human Model-Building
* (1:15:56) OthelloGPT’s bag of heuristics and multiple “world models”
* (1:21:32) What is a world model?
* (1:23:45) The Big Question — from meaning to world models
* (1:28:21) From “meaning” to precise questions about LMs
* (1:32:01) Mechanistic interpretability and reading tea leaves
* (1:35:38) Language and the world
* (1:38:07) Towards better language models
* (1:43:45) Model editing
* (1:45:50) On academia’s role in NLP research
* (1:49:13) On good science
* (1:52:36) Outro

Links:
* Jacob’s homepage and Twitter
* Language Models, World Models, and Human Model-Building
* Papers
  * Semantic Parsing as Machine Translation (2013)
  * Grounding language with points and paths in continuous spaces (2014)
  * How much do word embeddings encode about syntax? (2014)
  * Translating neuralese (2017)
  * Analogs of linguistic structure in deep representations (2017)
  * Learning with latent language (2018)
  * Learning from Language (2018)
  * Measuring Compositionality in Representation Learning (2019)
  * Experience grounds language (2020)
  * Language Models as Agent Models (2022)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Evan Ratliff: Our Future with Voice Agents (1:19:59)

Episode 139

I spoke with Evan Ratliff about:
* Shell Game, Evan’s new podcast, where he creates an AI voice clone of himself and sets it loose
* The end of the Longform Podcast and his thoughts on the state of journalism

Enjoy!

Evan is an award-winning investigative journalist, bestselling author, podcast host, and entrepreneur. He’s the author of The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord; the writer and host of the hit podcasts Shell Game and Persona: The French Deception; and the cofounder of The Atavist Magazine, Pop-Up Magazine, and the Longform Podcast. As a writer, he’s a two-time National Magazine Award finalist. As an editor and producer, he’s a two-time Emmy nominee and National Magazine Award winner.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:05) Evan’s ambitious and risky projects
* (04:45) Wearing different personas as a journalist
* (08:31) Boundaries and acceptability in using voice agents
* (11:42) Impacts on other people
* (13:12) “The kids these days” — how will new technologies impact younger people?
* (17:12) Evan’s approach to children’s technology use
* (20:05) Techno-solutionism and improvements in medicine, childcare
* (24:15) Evan’s perspective on simulations of people
* (27:05) On motivations for building tech startups
* (30:42) Evan’s outlook for Shell Game’s impact and motivations for his work
* (36:05) How Evan decided to write for a career
* (40:02) How voice agents might impact our conversations
* (43:52) Evan’s experience with Longform and podcasting
* (47:15) Perspectives on doing good interviews
* (52:11) Mimicking and inspiration, developing style
* (57:15) Writers and their motivations, the state of longform journalism
* (1:06:15) The internet and writing
* (1:09:41) On the ending of Longform
* (1:19:48) Outro

Links:
* Evan’s homepage and Twitter
* Shell Game, Evan’s new podcast
* Longform Podcast

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Meredith Ringel Morris: Generative AI's HCI Moment (1:37:45)

Episode 138

I spoke with Meredith Morris about:
* The intersection of AI and HCI and why we need more cross-pollination between AI and adjacent fields
* Disability studies and AI
* Generative ghosts and technological determinism
* Developing a useful definition of AGI

I didn’t get to record an intro for this episode since I’ve been sick. Enjoy!

Meredith is Director for Human-AI Interaction Research for Google DeepMind and an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and in The Information School at the University of Washington, where she participates in the dub research consortium. Her work spans the areas of human-computer interaction (HCI), human-centered AI, human-AI interaction, computer-supported cooperative work (CSCW), social computing, and accessibility. She has been recognized as an ACM Fellow and ACM SIGCHI Academy member for her contributions to HCI.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Meredith’s influences and earlier work
* (03:00) Distinctions between AI and HCI
* (05:56) Maturity of fields and cross-disciplinary work
* (09:03) Technology and ends
* (10:37) Unique aspects of Meredith’s research direction
* (12:55) Forms of knowledge production in interdisciplinary work
* (14:08) Disability, Bias, and AI
* (18:32) LaMPost and using LMs for writing
* (20:12) Accessibility approaches for dyslexia
* (22:15) Awareness of AI and perceptions of autonomy
* (24:43) The software model of personhood
* (28:07) Notions of intelligence, normative visions and disability studies
* (32:41) Disability categories and learning systems
* (37:24) Bringing more perspectives into CS research and re-defining what counts as CS research
* (39:36) Training interdisciplinary researchers, blurring boundaries in academia and industry
* (43:25) Generative Agents and public imagination
* (45:13) The state of ML conferences, the need for more cross-pollination
* (46:42) Prestige in conferences, the move towards more cross-disciplinary work
* (48:52) Joon Park Appreciation
* (49:51) Training interdisciplinary researchers
* (53:20) Generative Ghosts and technological determinism
* (57:06) Examples of generative ghosts and clones, relationships to agentic systems
* (1:00:39) Reasons for wanting generative ghosts
* (1:02:25) Questions of consent for generative clones and ghosts
* (1:05:01) Labor involved in maintaining generative ghosts, psychological tolls
* (1:06:25) Potential religious and spiritual significance of generative systems
* (1:10:19) Anthropomorphization
* (1:12:14) User experience and cognitive biases
* (1:15:24) Levels of AGI
* (1:16:13) Defining AGI
* (1:23:20) World models and AGI
* (1:26:16) Metacognitive abilities in AGI
* (1:30:06) Towards Bidirectional Human-AI Alignment
* (1:30:55) Pluralistic value alignment
* (1:32:43) Meredith’s perspective on deploying AI systems
* (1:36:09) Meredith’s advice for younger interdisciplinary researchers

Links:
* Meredith’s homepage, Twitter, and Google Scholar
* Papers
  * Mediating Group Dynamics through Tabletop Interface Design
  * SearchTogether: An Interface for Collaborative Web Search
  * AI and Accessibility: A Discussion of Ethical Considerations
  * Disability, Bias, and AI
  * LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia
  * Generative Ghosts
  * Levels of AGI

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Davidad Dalrymple: Towards Provably Safe AI (1:20:50)

Episode 137

I spoke with Davidad Dalrymple about:
* His perspectives on AI risk
* ARIA (the UK’s Advanced Research and Invention Agency) and its Safeguarded AI Programme

Enjoy—and let me know what you think!

Davidad is a Programme Director at ARIA. He was most recently a Research Fellow in technical AI safety at Oxford. He co-invented the top-40 cryptocurrency Filecoin, led an international neuroscience collaboration, and was a senior software engineer at Twitter and multiple startups.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:36) Calibration and optimism about breakthroughs
* (03:35) Calibration and AGI timelines, effects of AGI on humanity
* (07:10) Davidad’s thoughts on the Orthogonality Thesis
* (10:30) Understanding how our current direction relates to AGI and breakthroughs
* (13:33) What Davidad thinks is needed for AGI
* (17:00) Extracting knowledge
* (19:01) Cyber-physical systems and modeling frameworks
* (20:00) Continuities between Davidad’s earlier work and ARIA
* (22:56) Path dependence in technology, race dynamics
* (26:40) More on Davidad’s perspective on what might go wrong with AGI
* (28:57) Vulnerable world, interconnectedness of computers and control
* (34:52) Formal verification and world modeling, Open Agency Architecture
* (35:25) The Semantic Sufficiency Hypothesis
* (39:31) Challenges for modeling
* (43:44) The Deontic Sufficiency Hypothesis and mathematical formalization
* (49:25) Oversimplification and quantitative knowledge
* (53:42) Collective deliberation in expressing values for AI
* (55:56) ARIA’s Safeguarded AI Programme
* (59:40) Anthropic’s ASL levels
* (1:03:12) Guaranteed Safe AI
* (1:03:38) AI risk and (in)accurate world models
* (1:09:59) Levels of safety specifications for world models and verifiers — steps to achieve high safety
* (1:12:00) Davidad’s portfolio research approach and funding at ARIA
* (1:15:46) Earlier concerns about ARIA — Davidad’s perspective
* (1:19:26) Where to find more information on ARIA and the Safeguarded AI Programme
* (1:20:44) Outro

Links:
* Davidad’s Twitter
* ARIA homepage
* Safeguarded AI Programme
* Papers
  * Guaranteed Safe AI
  * Davidad’s Open Agency Architecture for Safe Transformative AI
  * Dioptics: a Common Generalization of Open Games and Gradient-Based Learners (2019)
  * Asynchronous Logic Automata (2008)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Clive Thompson: Tales of Technology (2:27:35)

Episode 136

I spoke with Clive Thompson about:
* How he writes
* Writing about the climate and biking across the US
* Technology culture and persistent debates in AI
* Poetry

Enjoy—and let me know what you think!

Clive is a journalist who writes about science and technology. He is a contributing writer for Wired magazine, and is currently writing his next book about micromobility and cycling across the US.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:07) Clive’s life as a Tarantino movie
* (03:07) Boring life and interesting art, life as material for art
* (10:25) Cycling across the US — Clive’s new book on mobility and decarbonization
* (15:07) Turning inward in writing
* (27:21) Including personal experience in writing
* (31:53) Personal and less personal writing
* (36:08) Conveying uncertainty and the “voice from nowhere” in traditional journalism
* (41:10) Finding the natural end of a piece
* (1:02:10) Writing routine
* (1:05:08) Theories of change in Clive’s writing
* (1:12:33) How Clive saw things before the rest of us
* (1:27:00) Automation in software engineering
* (1:31:40) The anthropology of coders, poetry as a framework
* (1:43:50) Proust discourse
* (1:45:00) Technology culture in NYC + interaction between the tech world and other worlds
* (1:50:30) Technological developments Clive wants to see happen (free ideas)
* (2:01:11) Clive’s argument for memorizing poetry
* (2:09:24) How Clive finds poetry
* (2:18:03) Clive’s pursuit of freelance writing and making compromises
* (2:27:25) Outro

Links:
* Clive’s Twitter and website
* Selected writing
  * The Attack of the Incredible Grading Machine (Lingua Franca, 1999)
  * The Know-It-All Machine (Lingua Franca, 2001)
  * How to teach AI some common sense (Wired, 2018)
  * Blogs to Riches (NY Mag, 2006)
  * Clive vs. Jonathan Franzen on whether the internet is good for writing (The Chronicle of Higher Education, 2013)
  * The Minecraft Generation (New York Times, 2016)
  * What AI College Exam Proctors are Really Teaching Our Kids (Wired, 2020)
  * Companies Don’t Need to Be Creepy to Make Money (Wired, 2021)
  * Is Sucking Carbon Out of the Air the Solution to Our Climate Crisis? (Mother Jones, 2021)
  * AI Shouldn’t Compete with Workers—It Should Supercharge Them (Wired, 2022)
  * Back to BASIC—the Most Consequential Programming Language in the History of Computing (Wired, 2024)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Judy Fan: Reverse Engineering the Human Cognitive Toolkit (1:32:39)

Episode 136

I spoke with Judy Fan about:
* Our use of physical artifacts for sensemaking
* Why cognitive tools can be a double-edged sword
* Her approach to scientific inquiry and how that approach has developed

Enjoy—and let me know what you think!

Judy is Assistant Professor of Psychology at Stanford and director of the Cognitive Tools Lab. Her lab employs converging approaches from cognitive science, computational neuroscience, and artificial intelligence to reverse engineer the human cognitive toolkit, especially how people use physical representations of thought — such as sketches and prototypes — to learn, communicate, and solve problems.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:49) Throughlines and discontinuities in Judy’s research
* (06:26) “Meaning” in Judy’s research
* (08:05) Production and consumption of artifacts
* (13:03) Explanatory questions, why we develop visual artifacts, science as a social enterprise
* (15:46) Unifying principles
* (17:45) “Hard limits” to knowledge and optimism
* (21:47) Tensions in different fields’ forms of sensemaking and establishing truth claims
* (30:55) Dichotomies and carving up the space of possible hypotheses, conceptual tools
* (33:22) Cognitive tools and projectivism, simplified models vs. nature
* (40:28) Scientific training and science as process and habit
* (45:51) Developing mental clarity about hypotheses
* (51:45) Clarifying and expressing ideas
* (1:03:21) Cognitive tools as double-edged
* (1:14:21) Historical and social embeddedness of tools
* (1:18:34) How cognitive tools impact our imagination
* (1:23:30) Normative commitments and the role of cognitive science outside the academy
* (1:32:31) Outro

Links:
* Judy’s Twitter and lab page
* Selected papers (there are lots!)
  * Overviews
    * Drawing as a versatile cognitive tool (2023)
    * Using games to understand the mind (2024)
    * Socially intelligent machines that learn from humans and help humans learn (2024)
  * Research papers
    * Communicating design intent using drawing and text (2024)
    * Creating ad hoc graphical representations of number (2024)
    * Visual resemblance and interaction history jointly constrain pictorial meaning (2023)
    * Explanatory drawings prioritize functional properties at the expense of visual fidelity (2023)
    * SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction (2023)
    * Parallel developmental changes in children’s production and recognition of line drawings of visual concepts (2023)
    * Learning to communicate about shared procedural abstractions (2021)
    * Visual communication of object concepts at different levels of abstraction (2021)
    * Relating visual production and recognition of objects in the human visual cortex (2020)
    * Collabdraw: an environment for collaborative sketching with an artificial agent (2019)
    * Pragmatic inference and visual abstraction enable contextual flexibility in visual communication (2019)
    * Common object representations for visual production and recognition (2018)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
The Gradient: Perspectives on AI

L.M. Sacasas: The Questions Concerning Technology (1:47:20)
Episode 135

I spoke with L. M. Sacasas about:

* His writing and intellectual influences
* The value of asking hard questions about technology and our relationship to it
* What happens when we decide to outsource skills and competency
* Evolving notions of what it means to be human and questions about how to live a good life

Enjoy—and let me know what you think!

Michael is Executive Director of the Christian Study Center of Gainesville, Florida and author of The Convivial Society, a newsletter about technology and society. He does some of the best writing on technology I've had the pleasure to read, and I highly recommend his newsletter.

Outline:

* (00:00) Intro
* (01:12) On podcasts as a medium
* (06:12) Michael's writing
* (12:38) Michael's intellectual influences, contingency
* (18:48) Moral seriousness
* (22:00) Michael's ambitions for his work
* (26:17) The value of asking the right questions (about technology)
* (34:18) Technology use and the "natural" pace of human life
* (46:40) Outsourcing of skills and competency, engagement with others
* (55:33) Inevitability narratives and technological determinism, the "Borg Complex"
* (1:05:10) Notions of what it is to be human, embodiment
* (1:12:37) Higher cognition vs. the body, dichotomies
* (1:22:10) The body as a starting point for philosophy, questions about the adoption of new technologies
* (1:30:01) Enthusiasm about technology and the cultural milieu
* (1:35:30) Projectivism, desire for knowledge about and control of the world
* (1:41:22) Positive visions for the future
* (1:47:11) Outro

Links:

* Michael's Substack: The Convivial Society and his book, The Frailest Thing: Ten Years of Thinking about the Meaning of Technology
* Michael's Twitter
* Essays
  * Humanist Technology Criticism
  * What Does the Critic Love?
  * The Ambling Mind
  * Waste Your Time, Your Life May Depend On It
  * The Work of Art
  * The Stuff of (a Well-Lived) Life
Pete Wolfendale: The Revenge of Reason (2:52:57)
Episode 134

I spoke with Pete Wolfendale about:

* The flaws in longtermist thinking
* Selections from his new book, The Revenge of Reason
* Metaphysics
* What philosophy has to say about reason and AI

Enjoy—and let me know what you think!

Pete is an independent philosopher based in Newcastle. Dr. Wolfendale earned both his undergraduate degree and his Ph.D. in Philosophy at the University of Warwick. His Ph.D. thesis offered a re-examination of the Heideggerian Seinsfrage, arguing that Heideggerian scholarship has failed to fully do justice to its philosophical significance, and supplementing the shortcomings in Heidegger's thought about Being with an alternative formulation of the question. He is the author of Object-Oriented Philosophy: The Noumenon's New Clothes and The Revenge of Reason. His blog is Deontologistics.

Outline:

* (00:00) Intro
* (01:30) Pete's experience with (para-)academia, incentive structures
* (10:00) Progress in philosophy and the analytic tradition
* (17:57) Thinking through metaphysical questions
* (26:46) Philosophy of science, uncovering categorical properties vs. dispositions
* (31:55) Structure of thought and the world, epistemological excess
* (49:31) What reason is, relation to language models, semantic fragmentation of AGI
* (1:00:55) Neural net interpretability and intervention
* (1:08:16) World models, architecture and behavior of AI systems
* (1:12:35) Language acquisition in humans and LMs
* (1:15:30) Pretraining vs. evolution
* (1:16:50) Technological determinism
* (1:18:19) Pete's thinking on e/acc
* (1:27:45) Prometheanism vs. e/acc
* (1:29:39) The Weight of Forever — Pete's critique of What We Owe the Future
* (1:30:15) Our rich deontological language and longtermism's limits
* (1:43:33) Longtermism and the opacity of desire
* (1:44:41) Longtermism's historical narrative and technological determinism, theories of power
* (1:48:10) The "posthuman" condition, language and techno-linguistic infrastructure
* (2:00:15) Type-checking and universal infrastructure
* (2:09:23) Multitudes and selfhood
* (2:21:12) Definitions of the self and (non-)circularity
* (2:32:55) Freedom and aesthetics, aesthetic exploration and selfhood
* (2:52:46) Outro

Links:

* Pete's blog and Twitter
* Book: The Revenge of Reason
* Writings / references
  * The Weight of Forever
  * On Neorationalism
  * So, Accelerationism, what's that all about?
Peter Lee: Computing Theory and Practice, and GPT-4's Impact (1:01:48)
Episode 133

I spoke with Peter Lee about:

* His early work on compiler generation, metacircularity, and type theory
* Paradoxical problems
* GPT-4's impact, Microsoft's "Sparks of AGI" paper, and responses and criticism

Enjoy—and let me know what you think!

Peter is President of Microsoft Research. He leads Microsoft Research and incubates new research-powered products and lines of business in areas such as artificial intelligence, computing foundations, health, and life sciences. Before joining Microsoft in 2010, he was at DARPA, where he established a new technology office that created operational capabilities in machine learning, data science, and computational social science. Prior to that, he was a professor and the head of the computer science department at Carnegie Mellon University. Peter is a member of the National Academy of Medicine and serves on the boards of the Allen Institute for Artificial Intelligence, the Brotman Baty Institute for Precision Medicine, and the Kaiser Permanente Bernard J. Tyson School of Medicine. He served on President Obama's Commission on Enhancing National Cybersecurity, and has testified before both the US House Science and Technology Committee and the US Senate Commerce Committee. With Carey Goldberg and Dr. Isaac Kohane, he is the coauthor of the best-selling book "The AI Revolution in Medicine: GPT-4 and Beyond." In 2024, Peter was named by Time magazine as one of the 100 most influential people in health and life sciences.

Outline:

* (00:00) Intro
* (00:50) Basic vs. applied research
* (05:20) Theory and practice in computing
* (10:28) Traditional denotational semantics and semantics engineering in modern-day systems
* (16:47) Beauty and practicality
* (20:40) Metacircularity in the polymorphic lambda calculus: research directions
* (24:31) Understanding the nature of difficulties with metacircularity
* (26:30) Difficulties with reflection, classic paradoxes
* (31:02) Sparks of AGI
* (31:41) Reproducibility
* (38:04) Confirming and disconfirming theories, foundational work
* (42:00) Back and forth between commitments and experimentation
* (51:01) Dealing with responsibility
* (56:30) Peter's picture of AGI
* (1:01:38) Outro

Links:

* Peter's Twitter, LinkedIn, and Microsoft Research pages
* Papers and references
  * The automatic generation of realistic compilers from high-level semantic descriptions
  * Metacircularity in the polymorphic lambda calculus
  * A Fresh Look at Combinator Graph Reduction
  * Sparks of AGI
  * Re-envisioning DARPA
  * Fundamental Research in Engineering
Manuel & Lenore Blum: The Conscious Turing Machine (2:23:04)
Episode 132

I spoke with Manuel and Lenore Blum about:

* Their early influences and mentors
* The Conscious Turing Machine and what theoretical computer science can tell us about consciousness

Enjoy—and let me know what you think!

Manuel is a pioneer in the field of theoretical computer science and the winner of the 1995 Turing Award in recognition of his contributions to the foundations of computational complexity theory and its applications to cryptography and program checking, a mathematical approach to writing programs that check their work. He worked as a professor of computer science at the University of California, Berkeley until 2001. From 2001 to 2018, he was the Bruce Nelson Professor of Computer Science at Carnegie Mellon University.

Lenore is a Distinguished Career Professor of Computer Science, Emeritus at Carnegie Mellon University and former Professor-in-Residence in EECS at UC Berkeley. She is president of the Association for Mathematical Consciousness Science and a newly elected member of the American Academy of Arts and Sciences. Lenore is internationally recognized for her work in increasing the participation of girls and women in Science, Technology, Engineering, and Math (STEM) fields. She was a founder of the Association for Women in Mathematics, and founding Co-Director (with Nancy Kreinberg) of the Math/Science Network and its Expanding Your Horizons conferences for middle- and high-school girls.

Outline:

* (00:00) Intro
* (03:09) Manuel's interest in consciousness
* (05:55) More of the story — from memorization to derivation
* (11:15) Warren McCulloch's mentorship
* (14:00) McCulloch's anti-Freudianism
* (15:57) More on McCulloch's influence
* (27:10) On McCulloch and telling stories
* (32:35) The Conscious Turing Machine (CTM)
* (33:55) A last word on McCulloch
* (35:20) Components of the CTM
* (39:55) Advantages of the CTM model
* (50:20) The problem of free will
* (52:20) On pain
* (1:01:10) Brainish / CTM's multimodal inner language, language and thinking
* (1:13:55) The CTM's lack of a "central executive"
* (1:18:10) Empiricism and a self, tournaments in the CTM
* (1:26:30) Mental causation
* (1:36:20) Expertise and the CTM model, role of TCS
* (1:46:30) Dreams and dream experience
* (1:50:15) Disentangling components of experience from multimodal language
* (1:56:10) CTM Robot, meaning and symbols, embodiment and consciousness
* (2:00:35) AGI, CTM and AI processors, capabilities
* (2:09:30) CTM implications, potential worries
* (2:17:15) Advice for younger (computer) scientists
* (2:22:57) Outro

Links:

* Manuel's homepage
* Lenore's homepage; find Lenore on Twitter (https://x.com/blumlenore) and LinkedIn (https://www.linkedin.com/in/lenore-blum-1a47224)
* Articles
  * "The 'Accidental Activist' Who Changed the Face of Mathematics" — Ben Brubaker's Q&A with Lenore
  * "How this Turing-Award-winning researcher became a legendary academic advisor" — Sheon Han's profile of Manuel
* Papers (Manuel and Lenore)
  * AI Consciousness is Inevitable: A Theoretical Computer Science Perspective
  * A Theory of Consciousness from a Theoretical Computer Science Perspective: Insights from the Conscious Turing Machine
  * A Theoretical Computer Science Perspective on Consciousness and Artificial General Intelligence
* References (McCulloch)
  * Embodiments of Mind
  * Rebel Genius
Kevin Dorst: Against Irrationalist Narratives (2:15:21)
Episode 131

I spoke with Professor Kevin Dorst about:

* Subjective Bayesianism and the foundations of epistemology
* What happens when you're uncertain about your evidence
* Why it's rational for people to polarize on political matters

Enjoy—and let me know what you think!

Kevin is an Associate Professor in the Department of Linguistics and Philosophy at MIT. He works at the border between philosophy and social science, focusing on rationality.

Outline:

* (00:00) Intro
* (01:15) When do Bayesians need theorems?
* (05:52) Foundations of epistemology, metaethics, formal models, error theory
* (09:35) Extreme views and error theory, arguing for/against opposing positions
* (13:35) Changing focuses in philosophy — pragmatic pressures
* (19:00) Kevin's goals through his research and work
* (25:10) Structural factors in coming to certain (political) beliefs
* (30:30) Acknowledging limited resources, heuristics, imperfect rationality
* (32:51) Hindsight Bias is Not a Bias
* (33:30) The argument
* (35:15) On eating cereal and symmetric properties of evidence
* (39:45) Colloquial notions of hindsight bias, time and evidential support
* (42:45) An example
* (48:02) Higher-order uncertainty
* (48:30) Explicitly modeling higher-order uncertainty
* (52:50) Another example (spoons)
* (54:55) Game theory, iterated knowledge, even higher-order uncertainty
* (58:00) Uncertainty and philosophy of mind
* (1:01:20) Higher-order evidence about reliability and rationality
* (1:06:45) Being Rational and Being Wrong
* (1:09:00) Setup on calibration and overconfidence
* (1:12:30) The need for average rational credence — normative judgments about confidence and realism/anti-realism
* (1:15:25) Quasi-realism about average rational credence?
* (1:19:00) Classic epistemological paradoxes/problems — lottery paradox, epistemic luck
* (1:25:05) Deference in rational belief formation, uniqueness and permissivism
* (1:39:50) Rational Polarization
* (1:40:00) Setup
* (1:37:05) Epistemic nihilism, expanded confidence akrasia
* (1:40:55) Ambiguous evidence and confidence akrasia
* (1:46:25) Ambiguity in understanding and notions of rational belief
* (1:50:00) Claims about rational sensitivity — what stories we can tell given evidence
* (1:54:00) Evidence vs. presentation of evidence
* (2:01:20) ChatGPT and the case for human irrationality
* (2:02:00) Is ChatGPT replicating human biases?
* (2:05:15) Simple instruction tuning and an alternate story
* (2:10:22) Kevin's aspirations with his work
* (2:15:13) Outro

Links:

* Professor Dorst's homepage and Twitter
* Papers
  * Modest Epistemology
  * Hedden: Hindsight bias is not a bias
  * Higher-order evidence + (Almost) all evidence is higher-order evidence
  * Being Rational and Being Wrong
  * Rational Polarization
  * ChatGPT and human irrationality
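The subjective Bayesianism discussed in this episode centers on updating credences by Bayes' rule. As generic background (my own sketch, not code or numbers from the episode), here is a minimal update in Python; the priors and likelihoods are made-up values chosen to illustrate how agents with different priors can see the same evidence and still land on different posteriors:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior P(H) and the likelihoods of evidence E."""
    numerator = prior * p_e_given_h
    evidence = numerator + (1 - prior) * p_e_given_not_h
    return numerator / evidence

# Two agents see the same evidence (likelihood ratio 0.8 / 0.4 favoring H).
# Both raise their credence in H, but their posteriors still differ.
optimist = bayes_update(0.7, 0.8, 0.4)  # starts confident in H
skeptic = bayes_update(0.3, 0.8, 0.4)   # starts doubtful of H
print(round(optimist, 3), round(skeptic, 3))
```

This is only the textbook mechanism; the episode's claims about ambiguous and higher-order evidence concern what happens when such clean likelihoods aren't available.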
David Pfau: Manifold Factorization and AI for Science (2:00:52)
Episode 130

I spoke with David Pfau about:

* Spectral learning and ML
* Learning to disentangle manifolds and (projective) representation theory
* Deep learning for computational quantum mechanics
* Picking and pursuing research problems and directions

David's work is really (times k for some very large value of k) interesting—I've been inspired to descend a number of rabbit holes because of it. (If you listen to this episode, you might become as cool as this guy.)

While I'm at it — I'm still hovering around 40 ratings on Apple Podcasts. It'd mean a lot if you'd consider helping me bump that up!

Enjoy—and let me know what you think!

David is a staff research scientist at Google DeepMind. He is also a visiting professor at Imperial College London in the Department of Physics, where he supervises work on applications of deep learning to computational quantum mechanics. His research interests span artificial intelligence, machine learning, and scientific computing.

Outline:

* (00:00) Intro
* (00:52) David Pfau the "critic"
* (02:05) Scientific applications of deep learning — David's interests
* (04:57) Brain / neural network analogies
* (09:40) Modern ML systems and theories of the brain
* (14:19) Desirable properties of theories
* (18:07) Spectral Inference Networks
* (19:15) Connections to FermiNet / computational physics, a series of papers
* (33:52) Deep slow feature analysis — interpretability and findings on eigenfunctions
* (39:07) Following up on eigenfunctions (there are indeed only so many hours in a day; I have been asking the Substack people if they can ship 40-hour days, but I don't think they've gotten to it yet)
* (42:17) Power iteration and intuitions
* (45:23) Projective representation theory
* (46:00) ???
* (46:54) Geomancer and learning to decompose a manifold from data
* (47:45) we consider the question of whether you will spend 90 more minutes of this podcast episode (there are not 90 more minutes left in this podcast episode, but there could have been)
* (1:08:47) Learning embeddings
* (1:11:12) The "unexpected emergent property" of Geomancer
* (1:14:43) Learned embeddings and disentangling and preservation of topology
  * n.b. I still haven't managed to do this in Colab because I keep crashing my instance when I use s3o4d :(
* (1:21:07) What's missing from the ~ current (deep learning) paradigm ~
* (1:29:04) LLMs as swiss-army knives
* (1:32:05) RL and human learning — TD learning in the brain
* (1:37:43) Models that cover the Pareto Front (image below)
* (1:46:54) AI accelerators and doubling down on transformers
* (1:48:27) On Slow Research — chasing big questions and what makes problems attractive
* (1:53:50) Future work on Geomancer
* (1:55:35) Finding balance in pursuing interesting and lucrative work
* (2:00:40) Outro

Links:

* Papers
  * Natural Quantum Monte Carlo Computation of Excited States (2023)
  * Making sense of raw input (2021)
  * Integrable Nonparametric Flows (2020)
  * Disentangling by Subspace Diffusion (2020)
  * Ab initio solution of the many-electron Schrödinger equation with deep neural networks (2020)
  * Spectral Inference Networks (2018)
  * Connecting GANs and Actor-Critic Methods (2016)
  * Learning Structure in Time Series for Neuroscience and Beyond (2015, dissertation)
  * Robust learning of low-dimensional dynamics from large neural ensembles (2013)
  * Probabilistic Deterministic Infinite Automata (2010)
* Other
  * On Slow Research
  * "I just want to put this out here so that no one ever says 'we can just get around the data limitations of LLMs with self-play' ever again."
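The outline item "Power iteration and intuitions" (42:17) refers to a classical numerical method that spectral learning approaches build on. As background only (my own pure-Python sketch of the standard algorithm, not code from the episode or from the Spectral Inference Networks paper, and the example matrix is made up), the idea is to repeatedly apply a matrix to a vector and renormalize, which drives the vector toward the dominant eigenvector:

```python
import math

def power_iteration(matrix, iters=500):
    """Estimate the dominant eigenvalue and eigenvector of a square matrix
    by repeated multiplication and renormalization."""
    n = len(matrix)
    v = [1.0] * n  # any start vector with a component along the top eigenvector
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient v^T A v estimates the dominant eigenvalue
    av = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * av[i] for i in range(n)), v

# Symmetric 2x2 example: eigenvalues are (5 ± sqrt(5)) / 2 ≈ 3.618 and 1.382
lam, vec = power_iteration([[2.0, 1.0], [1.0, 3.0]])
print(round(lam, 4))  # converges to the larger eigenvalue, ≈ 3.618
```

Convergence is geometric in the ratio of the top two eigenvalues, which is one source of the "intuitions" the outline alludes to: the closer the spectrum's leading eigenvalues are, the slower the method separates them.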