549 Episodes

  1. Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF (Published: 10/9/2025)
  2. OpenAI Agent Builder and n8n: Orchestrating Reasoning Versus Automating Process (Published: 10/8/2025)
  3. Training Agents Inside of Scalable World Models (Published: 10/8/2025)
  4. Small Language Models are the Future of Agentic AI (Published: 10/7/2025)
  5. Activation Steering in Generative Settings via Contrastive Causal Mediation Analysis (Published: 10/6/2025)
  6. Eliciting Secret Knowledge from Language Models (Published: 10/6/2025)
  7. Temporal difference flow (Published: 10/6/2025)
  8. Personalized reasoning: just-in-time personalization and why LLMs fail at it (Published: 10/5/2025)
  9. Prompt Curriculum Learning for Efficient LLM Post-Training (Published: 10/5/2025)
  10. Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning (Published: 10/4/2025)
  11. Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward (Published: 10/4/2025)
  12. Learning to summarize user information for personalized reinforcement learning from human feedback (Published: 10/4/2025)
  13. Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF (Published: 10/3/2025)
  14. LIMI: Less is More for Agency (Published: 10/1/2025)
  15. LoRA Without Regret (Published: 10/1/2025)
  16. Actor-Critic without Actor: Critic-Guided Denoising for RL (Published: 9/29/2025)
  17. DELTA-Code: How Does RL Unlock and Transfer New Programming Algorithms in LLMs? (Published: 9/29/2025)
  18. Linear Transformers Implicitly Discover Unified Numerical Algorithms (Published: 9/29/2025)
  19. Regularizing Extrapolation in Causal Inference (Published: 9/27/2025)
  20. DoubleGen: Debiased Generative Modeling of Counterfactuals (Published: 9/27/2025)

Page 5 of 28

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.