Evaluating LLM Agents in Multi-Turn Conversations: A Survey

Best AI papers explained - A podcast by Enoch H. Kang - Tuesdays

This survey systematically investigates how to evaluate large language model (LLM)-based agents designed for multi-turn conversations. The authors reviewed nearly 250 academic papers to understand current evaluation practices, establishing a structured framework with two key taxonomies. One taxonomy defines what to evaluate, encompassing aspects such as task completion, response quality, user experience, memory, and planning. The second taxonomy details how to evaluate, categorizing methodologies into annotation-based methods, automated metrics, hybrid approaches, and self-judging LLMs. Ultimately, the survey identifies limitations in existing evaluation techniques and proposes future directions for creating more effective and scalable assessments of conversational AI.
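
As a rough illustration of the "self-judging LLM" (LLM-as-a-judge) category of evaluation methodologies mentioned above, here is a minimal Python sketch. It is not the survey's own protocol: the rubric, the three scoring dimensions, and the `call_model` callable are assumptions standing in for whatever judge model and criteria an evaluator would actually use.

```python
from typing import Callable, Dict, List

# Hypothetical rubric covering three of the "what to evaluate" aspects the
# survey lists: task completion, response quality, and memory/consistency.
RUBRIC = (
    "Rate the assistant's final reply on a 1-5 scale for each criterion:\n"
    "task_completion, response_quality, memory (consistency with earlier turns).\n"
    "Answer with three integers separated by spaces."
)

def format_dialogue(turns: List[Dict[str, str]]) -> str:
    """Flatten a multi-turn conversation into a plain-text transcript."""
    return "\n".join(f"{t['role']}: {t['content']}" for t in turns)

def judge_conversation(turns: List[Dict[str, str]],
                       call_model: Callable[[str], str]) -> Dict[str, int]:
    """Ask a judge model to score one conversation against the rubric.

    `call_model` is an assumed stand-in for any prompt-in, text-out LLM call.
    """
    prompt = f"{RUBRIC}\n\nConversation:\n{format_dialogue(turns)}\n\nScores:"
    raw = call_model(prompt)
    task, quality, memory = (int(x) for x in raw.split()[:3])
    return {"task_completion": task, "response_quality": quality, "memory": memory}

if __name__ == "__main__":
    # Stub judge so the sketch runs without an API; swap in a real model call.
    demo_judge = lambda prompt: "4 5 3"
    convo = [
        {"role": "user", "content": "Book a table for two tomorrow at 7pm."},
        {"role": "assistant", "content": "Done: a table for two at 7pm tomorrow."},
    ]
    print(judge_conversation(convo, demo_judge))
```

In a hybrid setup of the kind the survey describes, scores like these would typically be spot-checked against human annotations rather than trusted on their own.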