AXRP - the AI X-risk Research Podcast
A podcast by Daniel Filan
58 Episodes
45 - Samuel Albanie on DeepMind's AGI Safety Approach
Published: 7/6/2025
44 - Peter Salib on AI Rights for Human Safety
Published: 6/28/2025
43 - David Lindner on Myopic Optimization with Non-myopic Approval
Published: 6/15/2025
42 - Owain Evans on LLM Psychology
Published: 6/6/2025
41 - Lee Sharkey on Attribution-based Parameter Decomposition
Published: 6/3/2025
40 - Jason Gross on Compact Proofs and Interpretability
Published: 3/28/2025
38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future
Published: 3/1/2025
38.7 - Anthony Aguirre on the Future of Life Institute
Published: 2/9/2025
38.6 - Joel Lehman on Positive Visions of AI
Published: 1/24/2025
38.5 - Adrià Garriga-Alonso on Detecting AI Scheming
Published: 1/20/2025
38.4 - Shakeel Hashim on AI Journalism
Published: 1/5/2025
38.3 - Erik Jenner on Learned Look-Ahead
Published: 12/12/2024
39 - Evan Hubinger on Model Organisms of Misalignment
Published: 12/1/2024
38.2 - Jesse Hoogland on Singular Learning Theory
Published: 11/27/2024
38.1 - Alan Chan on Agent Infrastructure
Published: 11/16/2024
38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems
Published: 11/14/2024
37 - Jaime Sevilla on AI Forecasting
Published: 10/4/2024
36 - Adam Shai and Paul Riechers on Computational Mechanics
Published: 9/29/2024
New Patreon tiers + MATS applications
Published: 9/28/2024
35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization
Published: 8/24/2024
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast, where I, Daniel Filan, have conversations with researchers about their papers. We discuss each paper and hopefully get a sense of why it was written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.