The Inside View
A podcast by Michaël Trazzi

54 Episodes
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI
Published: 7/16/2023

Eric Michaud on scaling, grokking and quantum interpretability
Published: 7/12/2023

Jesse Hoogland on Developmental Interpretability and Singular Learning Theory
Published: 7/6/2023

Clarifying and predicting AGI by Richard Ngo
Published: 5/9/2023

Alan Chan And Max Kauffman on Model Evaluations, Coordination and AI Safety
Published: 5/6/2023

Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines
Published: 5/4/2023

Christoph Schuhmann on Open Source AI, Misuse and Existential risk
Published: 5/1/2023

Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building
Published: 4/29/2023

Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision
Published: 1/17/2023

Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment
Published: 1/12/2023

David Krueger–Coordination, Alignment, Academia
Published: 1/7/2023

Ethan Caballero–Broken Neural Scaling Laws
Published: 11/3/2022

Irina Rish–AGI, Scaling and Alignment
Published: 10/18/2022

Shahar Avin–Intelligence Rising, AI Governance
Published: 9/23/2022

Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk
Published: 9/16/2022

Markus Anderljung–AI Policy
Published: 9/9/2022

Alex Lawsen—Forecasting AI Progress
Published: 9/6/2022

Robert Long–Artificial Sentience
Published: 8/28/2022

Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming
Published: 8/24/2022

Robert Miles–Youtube, AI Progress and Doom
Published: 8/19/2022
The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.