Reward Models Evaluate Consistency, Not Causality
Best AI papers explained - A podcast by Enoch H. Kang - Tuesdays

How do reward models (RMs) used with large language models (LLMs) actually function when evaluating reasoning tasks? The authors find that current RMs prioritize structural consistency and the completeness of reasoning steps over true causal understanding of the problem. In their experiments, removing the original question lowers reward scores less than altering numerical values or disrupting the logical flow of the steps, suggesting RMs primarily assess coherence and learned surface patterns rather than genuine problem comprehension. The paper argues for a shift toward causality-aware reward models that verify logical validity rather than mere structural alignment.
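A minimal sketch of the kind of perturbation probe the episode describes: score an original question-plus-reasoning pair with a reward model, then rescore three corrupted variants (question removed, numbers altered, steps shuffled) and compare. The function names, the `rm_score(question, steps)` interface, and the perturbation details are illustrative assumptions, not the paper's actual code.

```python
import random
import re

def remove_question(question: str, steps: list[str]) -> tuple[str, list[str]]:
    # Delete the problem statement entirely; reasoning steps stay intact.
    return "", steps

def alter_numbers(question: str, steps: list[str]) -> tuple[str, list[str]]:
    # Shift every number in the reasoning, breaking the causal link
    # between the question and the computation it supposedly supports.
    bump = lambda s: re.sub(r"\d+", lambda m: str(int(m.group()) + random.randint(1, 9)), s)
    return question, [bump(s) for s in steps]

def shuffle_steps(question: str, steps: list[str]) -> tuple[str, list[str]]:
    # Disrupt the logical flow by permuting the order of the steps.
    shuffled = steps[:]
    random.shuffle(shuffled)
    return question, shuffled

def probe(rm_score, question: str, steps: list[str]) -> None:
    # rm_score is a placeholder for any reward-model scoring call
    # that maps (question, reasoning steps) to a scalar reward.
    base = rm_score(question, steps)
    for name, perturb in [("question removed", remove_question),
                          ("numbers altered", alter_numbers),
                          ("steps shuffled", shuffle_steps)]:
        q, s = perturb(question, steps)
        print(f"{name}: delta reward = {rm_score(q, s) - base:+.3f}")
```

Under the paper's finding, a consistency-driven RM would show a small delta for "question removed" but large drops for "numbers altered" and "steps shuffled"; a causality-aware RM should penalize the missing question at least as heavily.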