The More You Ask, the Less You Get: When Additional Questions Hurt External Validity
Best AI papers explained - A podcast by Enoch H. Kang - Tuesdays

This research paper explores how answering many similar preference elicitation questions can, ironically, reduce the accuracy of predictions about real-world behavior. The authors argue that as respondents answer more questions, they adapt and adopt task-specific decision processes that may not match how they make choices in other contexts. Using mouse tracking, eye tracking, and analyses of existing datasets, the studies show that this adaptation lowers the external validity of the measured preferences, suggesting that asking fewer, well-designed questions can sometimes forecast actual behavior more effectively in marketing, economics, and policy. The findings point to an important trade-off: the precision gained from asking many questions can come at the cost of a mismatch in decision processes that reduces the real-world applicability of the elicited preferences.