Best AI papers explained
A podcast by Enoch H. Kang. New episodes on Tuesdays.

145 Episodes
Societal Frameworks and LLM Alignment
Published: 4/29/2025
Risks from Multi-Agent Advanced AI
Published: 4/29/2025
Causality-Aware Alignment for Large Language Model Debiasing
Published: 4/29/2025
Reward Models Evaluate Consistency, Not Causality
Published: 4/28/2025
Causal Rewards for Large Language Model Alignment
Published: 4/28/2025
Sycophancy to subterfuge: Investigating reward-tampering in large language models
Published: 4/28/2025
Bidirectional AI Alignment
Published: 4/28/2025
Why Do Multi-Agent LLM Systems Fail?
Published: 4/27/2025
LLMs as Greedy Agents: RL Fine-tuning for Decision-Making
Published: 4/27/2025
LLM Feedback Loops and the Lock-in Hypothesis
Published: 4/27/2025
Representational Alignment Drives Effective Teaching and Learning
Published: 4/27/2025
Adaptive Parallel Reasoning with Language Models
Published: 4/27/2025
AI: Rewiring the Flow of Ideas and Human Knowledge
Published: 4/27/2025
Learning and Equilibrium with Ranking Feedback
Published: 4/27/2025
Designing Human-AI Collaboration: A Sufficient-Statistic Approach
Published: 4/27/2025
GOAT: Generative Adversarial Training for Human-AI Coordination
Published: 4/27/2025
π0.5: Generalization in Robotic Manipulation via Diverse Data
Published: 4/27/2025
NoWag: Unified Compression for Large Language Models
Published: 4/26/2025
Optimal Tool Calls in Language Model Reasoning
Published: 4/26/2025
Data Selection for Empirical Risk Minimization
Published: 4/26/2025
Men know other men best. Women know other women best. And yes, perhaps AIs know other AIs best. AI explains what you should know about each week's progress in AI research.