Best AI papers explained
A podcast by Enoch H. Kang
550 Episodes
Partner Modelling Emerges in Recurrent Agents (But Only When It Matters)
Published: 5/29/2025
LLM Populations Form Social Conventions and Collective Bias
Published: 5/29/2025
LLM Generated Persona is a Promise with a Catch
Published: 5/29/2025
Large Language Models for Digital Twin Simulation
Published: 5/29/2025
From RL Distillation to Autonomous LLM Agents
Published: 5/29/2025
Prompting, Auto-Prompting, and Human-AI Communication
Published: 5/29/2025
Textual Gradients for LLM Optimization
Published: 5/29/2025
Large Language Models as Markov Chains
Published: 5/28/2025
Metastable Dynamics of Chain-of-Thought Reasoning: Provable Benefits of Search, RL and Distillation
Published: 5/28/2025
Selective induction heads: how transformers select causal structures in context
Published: 5/28/2025
The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains
Published: 5/28/2025
How Transformers Learn Causal Structure with Gradient Descent
Published: 5/28/2025
Planning Anything with Rigor: General-Purpose Zero-Shot Planning with LLM-Based Formalized Programming
Published: 5/28/2025
Automated Design of Agentic Systems
Published: 5/28/2025
What’s the Magic Word? A Control Theory of LLM Prompting
Published: 5/28/2025
BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling
Published: 5/27/2025
RL with KL penalties is better viewed as Bayesian inference
Published: 5/27/2025
Asymptotics of Language Model Alignment
Published: 5/27/2025
Qwen 2.5, RL, and Random Rewards
Published: 5/27/2025
Theoretical guarantees on the best-of-n alignment policy
Published: 5/27/2025
Cut through the noise: we curate and break down the most important AI papers so you don’t have to.
