AXRP - the AI X-risk Research Podcast
A podcast by Daniel Filan
59 Episodes
- 46 - Tom Davidson on AI-enabled Coups (Published: 8/7/2025)
- 45 - Samuel Albanie on DeepMind's AGI Safety Approach (Published: 7/6/2025)
- 44 - Peter Salib on AI Rights for Human Safety (Published: 6/28/2025)
- 43 - David Lindner on Myopic Optimization with Non-myopic Approval (Published: 6/15/2025)
- 42 - Owain Evans on LLM Psychology (Published: 6/6/2025)
- 41 - Lee Sharkey on Attribution-based Parameter Decomposition (Published: 6/3/2025)
- 40 - Jason Gross on Compact Proofs and Interpretability (Published: 3/28/2025)
- 38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future (Published: 3/1/2025)
- 38.7 - Anthony Aguirre on the Future of Life Institute (Published: 2/9/2025)
- 38.6 - Joel Lehman on Positive Visions of AI (Published: 1/24/2025)
- 38.5 - Adrià Garriga-Alonso on Detecting AI Scheming (Published: 1/20/2025)
- 38.4 - Shakeel Hashim on AI Journalism (Published: 1/5/2025)
- 38.3 - Erik Jenner on Learned Look-Ahead (Published: 12/12/2024)
- 39 - Evan Hubinger on Model Organisms of Misalignment (Published: 12/1/2024)
- 38.2 - Jesse Hoogland on Singular Learning Theory (Published: 11/27/2024)
- 38.1 - Alan Chan on Agent Infrastructure (Published: 11/16/2024)
- 38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems (Published: 11/14/2024)
- 37 - Jaime Sevilla on AI Forecasting (Published: 10/4/2024)
- 36 - Adam Shai and Paul Riechers on Computational Mechanics (Published: 9/29/2024)
- New Patreon tiers + MATS applications (Published: 9/28/2024)
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast, where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper and hopefully get a sense of why it was written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
