Generally Intelligent

A podcast by Kanjun Qiu

36 Episodes

  1. Episode 36: Ari Morcos, DatologyAI: On leveraging data to democratize model training

    Published: 7/11/2024
  2. Episode 35: Percy Liang, Stanford: On the paradigm shift and societal effects of foundation models

    Published: 5/9/2024
  3. Episode 34: Seth Lazar, Australian National University: On legitimate power, moral nuance, and the political philosophy of AI

    Published: 3/12/2024
  4. Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference

    Published: 8/9/2023
  5. Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

    Published: 6/22/2023
  6. Episode 31: Bill Thompson, UC Berkeley, on how cultural evolution shapes knowledge acquisition

    Published: 3/29/2023
  7. Episode 30: Ben Eysenbach, CMU, on designing simpler and more principled RL algorithms

    Published: 3/23/2023
  8. Episode 29: Jim Fan, NVIDIA, on foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant

    Published: 3/9/2023
  9. Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems

    Published: 3/1/2023
  10. Episode 27: Noam Brown, FAIR, on achieving human-level performance in poker and Diplomacy, and the power of spending compute at inference time

    Published: 2/9/2023
  11. Episode 26: Sugandha Sharma, MIT, on biologically inspired neural architectures, how memories can be implemented, and control theory

    Published: 1/17/2023
  12. Episode 25: Nicklas Hansen, UCSD, on long-horizon planning and why algorithms don't drive research progress

    Published: 12/16/2022
  13. Episode 24: Jack Parker-Holder, DeepMind, on open-endedness, evolving agents and environments, online adaptation, and offline learning

    Published: 12/6/2022
  14. Episode 23: Celeste Kidd, UC Berkeley, on attention and curiosity, how we form beliefs, and where certainty comes from

    Published: 11/22/2022
  15. Episode 22: Archit Sharma, Stanford, on unsupervised and autonomous reinforcement learning

    Published: 11/17/2022
  16. Episode 21: Chelsea Finn, Stanford, on the biggest bottlenecks in robotics and reinforcement learning

    Published: 11/3/2022
  17. Episode 20: Hattie Zhou, Mila, on supermasks, iterative learning, and fortuitous forgetting

    Published: 10/14/2022
  18. Episode 19: Minqi Jiang, UCL, on environment and curriculum design for general RL agents

    Published: 7/19/2022
  19. Episode 18: Oleh Rybkin, UPenn, on exploration and planning with world models

    Published: 7/11/2022
  20. Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology

    Published: 2/28/2022

Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.